The Seven Core Principles of Digital Transformation


Digital transformation has become a hot buzzword recently: Microsoft has adopted it as the overarching theme for its cloud-based business apps, and it is the subject of many studies from McKinsey & Company, Gartner, and other research firms.

I want to share some of our approach and lessons learned from working with companies in industries such as insurance and manufacturing on their digital transformation initiatives.

A transformation does not happen overnight. It is a long and sometimes painful process that, to be honest, never really ends. The rate of innovation and change keeps increasing, and new business and customer needs will constantly emerge.

Therefore, our approach is very much grounded in the concepts of agility: the right foundation, built with change in mind. In such an approach, it is not always beneficial to try to document every future requirement and work out how to accommodate it; it is better to have a very strong foundation and an agile, open framework that can be easily adapted.

A good way to judge your current agility level is to perform a Digital Agility Gap test. For the small, medium-sized, and large changes the business has requested in the last year, what is the gap between when the business wanted to see each change made and when your organization was able to deploy it? The larger the gap, the more acute the need for a comprehensive digital transformation.
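
As a rough illustration only (this is not part of the original test definition, and the change data and field names below are hypothetical), the gap can be computed per change request and averaged by size:

// Hypothetical sketch of the Digital Agility Gap test: average days between
// the date the business wanted a change and the date it was deployed.
var changeRequests = [
  { size: "small",  wanted: "2016-01-10", deployed: "2016-02-02" },
  { size: "medium", wanted: "2016-01-15", deployed: "2016-04-20" },
  { size: "large",  wanted: "2016-02-01", deployed: "2016-09-30" }
];

function gapInDays(request) {
  var msPerDay = 24 * 60 * 60 * 1000;
  return Math.round((new Date(request.deployed) - new Date(request.wanted)) / msPerDay);
}

// Group the gaps by change size and report the average for each bucket;
// larger averages signal a more acute need for transformation.
var gapsBySize = {};
changeRequests.forEach(function (r) {
  (gapsBySize[r.size] = gapsBySize[r.size] || []).push(gapInDays(r));
});
Object.keys(gapsBySize).forEach(function (size) {
  var gaps = gapsBySize[size];
  var average = gaps.reduce(function (a, b) { return a + b; }, 0) / gaps.length;
  console.log(size + ": " + average + " days");
});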

agility-gap

The following seven core principles should drive every digital transformation initiative, large or small:

  • Business Driven. This may sound obvious, but all digital initiatives need a business rationale and a business sponsor. Technology can be a game changer, but very often the digital channel needs to be part of an omni-channel approach. eCommerce can augment retail stores or distribution channels but will not replace them for a long while. Digital must be part of the overall business and market strategy. The new role of Chief Digital Officer is a great example of how organizations integrate digital as a business channel with broad responsibilities and a seat at the executive table. The digital aspect needs to be part of every major organizational strategy, not a separate one. For example, when you launch a new product, how will you design it, support the manufacturing and supply chain, and market, sell, and support the product using digital means?
  • Data is King. Having enterprise information available in digital format with a single source of the truth is the absolute foundation of a digital transformation. Without good data, the effect of garbage in, garbage out will produce inconsistent results and systems people can’t trust. This is usually the hardest part for many companies, as organizational data may reside in many legacy systems and be too intimately tied to old business applications. It is also hard work: hard to understand and hard to put a direct ROI on. It is not glamorous and will not be visible to most people. In lieu of a complete data re-architecture, most organizations start with master data management and data warehouses / operational data marts to get around the limitations of the various systems where data is actually stored. The imperative is to know what the single source of the truth is and to abstract the details through a data access layer and services. The emerging area of Big Data allows capturing and processing ever larger amounts of data, especially related to customer interactions. Data flows, validation, and storage need to be looked at again, with a fresh view of what data is captured and how it is stored, processed, and managed.
  • Actionable Analytics. Many organizations have invested heavily in business intelligence and use decision support systems to run analysis and produce reports. The expanding scope of data capture and processing now allows analytics to serve as actionable triggers for real-time decisions and other systems. For example, your website’s ability to make customer-specific product recommendations can be the result of a real-time process that analyzes the customer and what similar customers have bought, runs an RFM (recency, frequency, monetary value) analysis to assign the customer a tier, and derives relevant offers. Marketing campaigns can target prospects based on predictive analytics, and so on. Closed-loop analysis is critical for understanding the impact of decisions or campaigns. The ability to see the connection between an offer or search campaign and the revenue it generated is the foundation of future investment decisions.
  • Customer Centricity. One of the main drivers and benefits of digital transformation is the ability to meet the new world of customer expectations and needs. Customers want access to information and the ability to take action and interact anytime, anyplace, from any device. The new digital experience maps to the customer lifecycle, journey, or buying flow, and data is collected at every point of interaction to feed personalization, targeting, and marketing. When done correctly, an intelligent user experience will improve engagement, loyalty, and conversion. In designing a new digital user experience, we usually recommend mapping user interactions across all touch points and focusing on finding common needs rather than taking a “persona”-driven approach; in our experience, personas are too generic and lead to oversimplification of the model.
  • Agility in Technology and Process. Agility is at the heart of our approach; without it, you would go through a transformation every few years. It is broader than just IT and impacts many business and operational processes. A few key concepts of planning for agility:
    • De-coupling. A large part of what makes change hard is the intertwined nature of most IT environments: proprietary databases, older applications without outside interfaces, hard-coded database calls, heavily customized but dated applications, and so on. The solution is to de-couple the elements and create a modular, service-oriented architecture. Data should be separated from logic, services, and user interaction, allowing each tier to grow and evolve without requiring a complete system rewrite. For example, the biggest driver of transformation in the last few years has been the user experience and the need to support users on various mobile devices. A de-coupled architecture allows a UX overhaul using the same services and backend.
    • Agile / Rapid application development. Application development needs to be able to create prototypes and test ideas on a regular basis. For that to happen, the process of defining, designing, implementing, and testing software has to be more responsive to business needs. Whether following Agile methodology principles or just a more iterative version of traditional models, application development has to be able to quickly show business users what they would get and adopt a minimum viable product approach to releasing software. The emerging model of continuous delivery allows faster, automated deployment of software when it is ready.
    • Cloud and Infrastructure agility. The emergence of cloud services makes agile environments much easier to implement. From an infrastructure perspective, you no longer need to invest in hardware resources for your worst-case load scenario. The ability to get just as much computing resource as needed on demand, and to scale in a matter of minutes, makes platforms like AWS and Azure very appealing. Many applications now offer only cloud-based versions, and even the large players like Microsoft and Oracle are pressuring all customers to move to the cloud versions of their applications. The ability to easily plug a cloud application into the environment is the ideal of agility. With a common security and authentication layer, the modern corporate application landscape comprises many different cloud applications, available to each user based on their role and integrated to a degree that makes the user experience as seamless as possible.
    • In addition to the environment, software, and infrastructure, organizational processes have to be more flexible too. Change management needs to become a process that enables change, not one that stops it.
  • Process Automation. With the new landscape comprising so many different and independent applications, process automation that leverages the applications’ open interfaces is becoming critical. Traditional business process management applications are morphing into cloud orchestration: the ability to create processes that span multiple applications and to let business users manage and update them without IT involvement.
  • Security. Last but not least, the open, flexible nature of the landscape we have been describing requires new levels of security that must be an integral part of every facet of the environment: data security and encryption, services security, and security in application design. All layers and components have to account for the rising threats of hacking, data theft, and denial of service, which are more prevalent than ever. We see this as the primary concern for companies looking to adopt a more digital and agile environment, and a strong emphasis on risk management, security standards, and audits should be a primary component of any digital transformation initiative.
Blended Project Management

Have you tried to blend agile methodologies into your traditional waterfall world?

Did you get a tasty stew?

… Or a culinary disaster?

I like to cook, but I’m not exactly a purist.  By that I mean that I almost never follow a recipe exactly.  Instead I treat a recipe as more of a guide.  Sometimes I omit ingredients that my family doesn’t like; other times I add in ingredients to see what the effect will be.  I often combine a couple of recipes together, mixing and blending ideas from several sources.  Sometimes the result is a wonderful creation that suits the tastes of my family.  Other times, we take a bite and reach for the pile of take-out menus.

I find myself taking the same approach to development methodologies.  Like most of you, I find traditional, waterfall methodologies to be too rigid, too slow, and too removed from reality.  Their assumption that everything about a project can be known and documented up front has always struck me as laughable.

But when I look at pure agile methodologies, I find them too rigid and idealistic as well.  Successful projects need a framework around them; they can’t be driven simply by empowering a team to prioritize a backlog and deliver chunks of code.  Project components need to be fit into a larger vision and architecture; organizations need to have a sense of scope, plan, and budget.  Large, complex systems can’t always be nicely packaged in 2 or 3 week sprints.

So I find myself mixing and blending.  Take a few waterfall concepts like a defined project scope, written business requirements, defined technical architecture, and a project plan.  Blend with an agile development window where the project team can work through detailed requirements, development, and testing together; shifting priorities as business needs change.  Garnish with some user testing, training, and release planning.

Maybe this is agile book-ended by just enough waterfall to frame the work that the agile teams will take on and to integrate their work with the organization’s larger planning processes.  Maybe this is diluting agile precepts by subjugating them to overreaching controls.  Some call these approaches “waterscrumfall”; some call them an abomination.

My experience (and the experience of at least one of my colleagues) has been that a pragmatic blending better suits the needs of most projects and most organizations. It creates just enough structure to tame the chaos while recognizing that projects can’t be and shouldn’t be totally defined up-front.  It ensures that project deliverables fit into the larger enterprise architecture and meet strategic objectives.  Yet it takes advantage of the agile team’s strengths, allowing them to drive the project’s pace and details.

What has your experience been?  Have you tried more blended approaches?  Have they been successful?  Or have they resulted in the equivalent of a culinary disaster?

Part 2: Creating an Editable Grid in CRM 2013 Using Knockout JS

This is the second installment following Part 1, which introduced the editable grid in CRM 2011. Since then, I have upgraded the editable grid to work in CRM 2013.

In this blog, I will first demo what the grid looks like in CRM 2013. Afterwards, I will walk through the main block of code.

Demo

The following screenshots demonstrate the editable grid of opportunity products inside the opportunity form.

MPitts KnockoutJS 2 image 1

The above demonstrates:

  • Editing existing data, including lookup data
  • Adding a new record
  • Deleting an existing record
  • The introduction of the custom option set field, ‘projected schedule’ (new to this post)

MPitts KnockoutJS 2 image 2

Steps

  1. Click ‘Add Opportunity’
  2. The standard lookup appears; select a product and click ‘Add’

MPitts KnockoutJS 2 image 3

3. Choose a ‘projected schedule’

MPitts KnockoutJS 2 image 4

4. Click ‘Save’

MPitts KnockoutJS 2 image 5

5. As a proof of concept, the code takes the count of opportunity products and updates the parent ‘estimated budget’ field.  This demonstrates updating the parent opportunity.
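
For reference, here is a minimal sketch of what that parent update might look like using the SDK_REST.js library listed in the next section. Treating the opportunity’s Budget Amount (BudgetAmount) field as the ‘estimated budget’ is an assumption, and the product count is passed in rather than retrieved:

// Hedged sketch of step 5: write the opportunity product count into the
// parent opportunity. BudgetAmount is assumed to be the "estimated budget"
// field; the OData endpoint expects money values as { Value: "<string>" }.
function updateParentBudget(opportunityId, productCount) {
  var changes = { BudgetAmount: { Value: productCount.toString() } };
  SDK.REST.updateRecord(
    opportunityId,
    changes,
    "Opportunity",
    function () { /* parent updated; refresh the form to see the new value */ },
    function (error) { alert(error.message); });
}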

Code Walkthrough

The following code is a reference implementation that you can adapt to your needs.

The following web resources make up this solution:

  • Jquery.js
  • Knockout.js (version 2.2)
  • SDK_REST.js
  • SDK.MetaData.js
  • An HTML web resource that hosts the grid markup and view model (a simplified sketch appears below).

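
Below is a much-simplified, illustrative sketch of what such an HTML web resource might look like: a Knockout view model bound to the opportunity products of the parent opportunity, loading and saving through SDK_REST.js. The file names, the use of Xrm.Page from the parent form, and the omission of adds, deletes, the product lookup, and the ‘projected schedule’ option set (whose labels would come from SDK.MetaData.js) are all simplifications, not the full implementation.

<!DOCTYPE html>
<html>
<head>
  <!-- Assumed web resource names/paths; adjust to the names used in your solution. -->
  <script src="ClientGlobalContext.js.aspx"></script>
  <script src="Jquery.js"></script>
  <script src="Knockout.js"></script>
  <script src="SDK_REST.js"></script>
  <script>
    // One editable row of the grid.
    function ProductLine(data) {
      this.id = data.OpportunityProductId;
      this.productName = data.ProductId ? data.ProductId.Name : "";
      this.quantity = ko.observable(data.Quantity);
      this.pricePerUnit = data.PricePerUnit ? data.PricePerUnit.Value : "";
    }

    function GridViewModel(opportunityId) {
      var self = this;
      self.lines = ko.observableArray([]);

      // Load the opportunity products belonging to this opportunity.
      self.load = function () {
        SDK.REST.retrieveMultipleRecords(
          "OpportunityProduct",
          "$select=OpportunityProductId,ProductId,Quantity,PricePerUnit" +
            "&$filter=OpportunityId/Id eq guid'" + opportunityId + "'",
          function (results) {
            results.forEach(function (r) { self.lines.push(new ProductLine(r)); });
          },
          function (error) { alert(error.message); },
          function () { /* onComplete */ });
      };

      // Push edited quantities back to CRM. The OData endpoint expects
      // decimal values as strings.
      self.save = function () {
        self.lines().forEach(function (line) {
          SDK.REST.updateRecord(
            line.id,
            { Quantity: line.quantity().toString() },
            "OpportunityProduct",
            function () { /* row saved */ },
            function (error) { alert(error.message); });
        });
      };
    }

    $(function () {
      // The web resource is hosted on the opportunity form, so the parent
      // record id is available from Xrm.Page.
      var oppId = window.parent.Xrm.Page.data.entity.getId().replace(/[{}]/g, "");
      var vm = new GridViewModel(oppId);
      vm.load();
      ko.applyBindings(vm);
    });
  </script>
</head>
<body>
  <table>
    <thead><tr><th>Product</th><th>Quantity</th><th>Price Per Unit</th></tr></thead>
    <tbody data-bind="foreach: lines">
      <tr>
        <td data-bind="text: productName"></td>
        <td><input data-bind="value: quantity" /></td>
        <td data-bind="text: pricePerUnit"></td>
      </tr>
    </tbody>
  </table>
  <button data-bind="click: save">Save</button>
</body>
</html>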

What’s next?

In future posts:

  • Resolving the known issue with IE (yes, IE only – sigh)
    • Object Expected with JsProvider.ashx
  • How to integrate this code in CRM
  • Sorting
  • Paging through the grid

Do PMOs Matter?

Project Management Offices (PMOs) have become a fixture in many organizations.  According to the 2012 State of the PMO Study, 87% of organizations surveyed have a PMO, up from 47% in 2000.  Although mid-size and large companies are more likely to have a PMO than small companies, the biggest growth, by far, was in small companies – 73% of small firms now have PMOs and over 90% of mid-size and large companies have them.

Still, people question their value and ask, “Does having a PMO matter?”

The statistics say yes.  According to the 2012 State of the PMO study, PMOs directly contribute to the following performance improvements:  a 25% increase in projects delivered under budget, a 31% increase in customer satisfaction, a 39% improvement in projects aligned with objectives, a 15% cost savings per project, and a 30% decrease in failed projects.

Experience also says yes.  A 2012 PMI White Paper described multiple PMO success stories.  In one case study, a company shifted the focus of its PMO from a process orientation to an outcome orientation. The PMO now focuses on working with organizational units as they launch projects, requiring them to build a business case and complete a scorecard to demonstrate how the project aligns with corporate strategy.  The reorganized PMO was able to triple the number of projects that delivered on organizational strategy.  In another case study, an organization was able to reduce project planning time by 75% through standardizing and leveraging common project plans defined by the PMO.

Having a PMO CAN matter, but simply having a PMO is not a silver bullet.  Implementing an idealized methodology that isn’t tailored to an organization’s needs and culture won’t suddenly or magically solve all problems.  To meet its objective of improving the likelihood of project success, the PMO must gain maturity and acceptance within the organization.  It must become more than another structure or process.  It must focus on continuous process improvement and proactive management of the project portfolio.  Keys to achieving this level of maturity are strong executive sponsorship, tailoring the PMO to solve practical, real-world problems within the organization, and actively measuring PMO and project success.

So if your organization struggles to consistently deliver business-critical projects, do you have a PMO?  If you do, is it as effective as it could be?  Maybe it is time to consider the benefits of a PMO assessment and reap the rewards that a highly functioning PMO can bring to your organization.

Hunting Down an Apache PermGen Issue

We have been experiencing a PermGen out-of-memory problem in an application we have been working on when redeploying our .war to a Tomcat 7.0.33 application server.  This would cause Tomcat to become unresponsive and crash after about 4 or 5 redeploys.

Our application is using Spring (Core, MVC, Security), Hibernate, and JPA.  Our database pool is being maintained by Tomcat and we get connections to this pool via JNDI.

When redeploying our .war by copying it to $CATALINA_HOME/webapps/ we would see a few lines in our logs that looked something like this:

SEVERE: The web application [/myApp] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@d73b31]) and a value of type [java.lang.Class] (value [class oracle.sql.TypeDescriptorFactory]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.

So in our case it seemed to be an issue with the Oracle drivers.  Tomcat 7 does a better job than Tomcat 6 of detecting leaks and reporting the problem, so there may or may not be useful info in the logs of an older Tomcat install.

Background

The PermGen memory space is typically a small memory space that the JVM uses to store class information and metadata.  It is different from the heap in that your application code has no real control over its contents.  When you start an application, all the classes that are loaded store information in the PermGen space.  When your application is undeployed, this space should be reclaimed.

Cause

In general, the cause of running out of PermGen space when redeploying a web app is that some references to classes cannot be released by the JVM when the application is unregistered.  This leaves all of that classloader information sitting in PermGen, since there are still live references.  It can be caused by threads started by the application that aren’t cleaned up, database connections that aren’t closed, and so on; sometimes the culprit is a library in use.

A good article on this can be found here:

http://plumbr.eu/blog/what-is-a-permgen-leak

However, in our case this was caused by the way our application was deployed.  Maven was configured with ojdbc6.jar as a dependency as follows:

<dependency>
    <groupId>oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0.3</version>
</dependency>

This caused our .war to include ojdbc6.jar when deployed, so in WEB-INF/lib we would see “ojdbc6.jar” in our deployed application.  This is where the problem starts.  When our code requests a database connection, it creates a DataSource object using our *local* ojdbc6.jar (actually our local ClassLoader, which uses our local .jar) rather than the one installed in Tomcat’s shared library directory (via Tomcat’s own ClassLoader).  This creates an object that Tomcat itself holds on to but that references code in our application, and thus our ClassLoader.  When our application is destroyed, we get a leak, since our ClassLoader can’t be removed.

Solution

In this scenario the solution is to tell Maven that the Oracle library will be provided by our container.  It will be available at compile time for our code but will NOT be included in our .war.

<dependency>
    <groupId>oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0.3</version>
    <!-- provided: compile against the driver, but let Tomcat supply it at run time -->
    <scope>provided</scope>
</dependency>

Now when our application requests a DataSource, the server’s ojdbc6.jar is used, removing the dependency on our application.  This allows Tomcat to properly clean up after “myApp.war” when it is removed.

How to Optimize Microsoft Dynamics CRM 2011 Applications (Part 3)

In Part 1, I highlighted changes to improve performance for the Microsoft Dynamics CRM Web Application. Part 2 focused on performance improvements for CRM customizations. In this blog, I will highlight changes to improve performance for the following:

  • Microsoft Dynamics CRM SDK Applications, and
  • Microsoft Dynamics CRM Reporting Services.

Microsoft Dynamics CRM SDK Applications

It is imperative to ensure the optimal performance of any custom applications, plug-ins, or add-ins developed using the Microsoft Dynamics CRM 2011 SDK.

A key recommendation for any custom application is to limit the columns and rows retrieved to those required to achieve the application’s business goals. This is particularly significant when CRM users access the data over a Wide Area Network (WAN) with higher network latency. The data returned by custom applications can be limited:

  • by using condition attributes to restrict the data that FetchXML and ConditionExpression-based queries return, and
  • by using paging to restrict the number of rows returned by a custom application (a brief sketch follows this list).
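
As an illustration only, here is what limiting both columns and rows can look like against the CRM 2011 OData endpoint using the SDK.REST JavaScript helper from the SDK samples; the entity, the two columns, and the 50-record page size are arbitrary choices, and the same principle applies to FetchXML and QueryExpression queries built server-side:

// Hedged sketch: retrieve only the columns needed and only one page of rows.
// Column names and the 50-record page size are illustrative.
var options = "$select=Name,EstimatedValue&$orderby=Name&$top=50";

SDK.REST.retrieveMultipleRecords(
  "Opportunity",
  options,
  function (results) {
    // Only the requested columns come back, keeping the payload small over a WAN.
    results.forEach(function (opportunity) {
      console.log(opportunity.Name + " - " +
        (opportunity.EstimatedValue ? opportunity.EstimatedValue.Value : ""));
    });
  },
  function (error) { alert(error.message); },
  function () { /* onComplete */ });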

NOTE: For Microsoft Dynamics CRM deployments that are integrated with other systems, test custom applications in an environment that mirrors the complexity and integration of the production environment. Keep in mind that performance results may vary if the test database is not similar in size and structure to the production database.

Microsoft Dynamics CRM Reporting Services

There are a variety of factors that can affect report server performance:

  • hardware,
  • number of concurrent users accessing reports,
  • the amount of data in a report, and
  • output format.

Organizations that have smaller data sets and fewer users can deploy on a single server or on two servers, with one computer running Microsoft Dynamics CRM Server 2011 and the other running Microsoft SQL Server and SQL Server Reporting Services.  Performance will be affected by larger datasets, more users, or heavier loads.  Optimizing Reporting Services becomes necessary as report usage increases.

Consider these guidelines when optimizing Microsoft Dynamics CRM 2011 Reporting Services:

  • Ensure that the computer hosting the report server includes ample memory, as report processing and rendering are memory-intensive operations.
  • Host the report server and the report server database on separate computers instead of hosting both on a single high-end computer.
  • When all reports are processing slowly, consider a scale-out deployment with multiple report server instances. For best results, use load-balancing software and hardware to allocate requests evenly across the report servers in the deployment.
  • If only one report is processing slowly, tune the query if the report must run on demand.  Consider caching the report or running it as a snapshot as well.
  • Consider the following if all reports process slowly in a specific format (for example, while rendering to PDF):
    • File share delivery;
    • Adding more memory;
    • Using another format (Excel, CSV, etc)

Time to Remodel the Kitchen?

Although determining a full and realistic corporate valuation is a task I’ll leave to people of sterner stuff than I (since Facebook went public, not many could begin to speculate on the bigger picture of even small-enterprise valuation), I’ve recently been working with a few clients who have reminded me of why one sometimes needs to remodel.

Nowadays, information technology is often seen as a means to an end. It’s a necessary evil. It’s overhead to your real business. You joined the technological revolution, and your competitors who didn’t, well… sunk. Or… you entered the market with the proper technology in place, and, seatbelt fastened, have taken your place in the market. Good for you. You’ve got this… right?

I’m a software system architect. I envision and build out information technology. I often like to model ideas around analogies to communicate them, because it takes the tech jargon out of it. Now that I’ve painted the picture, let’s think about what’s cooking behind the office doors.

It’s been said that the kitchen is the heart of the home. When it comes to the enterprise (big and small) your company’s production might get done in the shop, but sooner or later, everyone gets fed business processes, which are often cooked in the kitchen of technology. In fact, technology is often so integral to what many companies do nowadays that it’s usually hard to tell where, in your technology stack, business and production processes begin. Indeed, processes all cycle back around, and they almost certainly end with information technology again.

Truly, we’ve come a long way since the ’70s, when implementing any form of “revolutionary” information technology was the basis of a competitive advantage. Nowadays, if you don’t have information technology in the process somewhere, you’re probably only toying with a hobby. It’s not news. Technology graduated from a revolutionary competitive advantage to the realm of commoditized overhead well over a decade ago.

Ok… ok… You have the obligatory kitchen in your home. So what?

If you think of the kitchen in your home as commoditized overhead, you are probably missing out on the even bigger value an update could bring you at appraisal time. Like a home assessment, due diligence as part of corporate valuation will turn up the rusty mouse traps behind the avocado refrigerator and under the porcelain sink:

  • Still rocking 2000 Server with ActiveX?
  • Cold Fusion skills are becoming a specialty, probably not a good talent pool in the area, might be expensive to find resources to maintain those components.
  • Did you say you can spell iSeries? Great, can you administer it?
  • No one’s even touched the SharePoint Team Services server since it was installed by folks from overseas.
  • The community that supported your Open Source components… dried up?
  • Cloud SLAs, Serviceability?
  • Compliance?
  • Disaster Management?
  • Scalability?
  • Security?
  • Documentation…?
    • Don’t even go there.

As you can see… “Everything but the kitchen sink” no longer applies. The kitchen sink is transparently accounted for as well. A well-designed information technology infrastructure needs to go beyond hardware and software. It considers redundancy and disaster management, security, operating conditions such as room to operate and grow, and, of course, whether there are any undue risks or burdens placed on particular technologies, vendors, or even employees. Full valuation goes further, looking outside the walls to cloud providers and social media outlets. Finally, no inspection would be complete without a look at compliance, of course.

If your information technology does not serve your investors’ needs, your CEO’s needs, your VP of Marketing and Sales’ needs, as well as production’s… but most importantly your customers’, your information technology is detracting from the valuation of your company.

If the work has been done, due diligence will show off the working utility, maintainability, security, scalability, and superior added value of the well-designed enterprise IT infrastructure refresh.

To elaborate on that, a good information technology infrastructure provides a superior customer experience no matter how a customer chooses to interact with your company. Whether it’s at the concierge’s counter, in the drive-through, at a kiosk, on the phone, at your reseller’s office, in a browser or mobile app, your customers should be satisfied with their experience.

Don’t stop with simply tossing dated appliances and replacing them. Really think about how the technologies work together, and how people work with them. This is key… if you take replacement appliances off the shelf and simply plug them in, you are (at best) merely keeping up with your competitors. If you want the full value add, you need to specialize. You need to bend the components to your processes. It’s not just what you’ve got.  It’s how you use it.  It’s the critical difference between overhead and advantage.

Maybe the Augmented Reality Kitchen won’t provide a good return on investment (yet), but… there’s probably a lot that will.

Technical Debt – Managing near term technical “borrowing” to prevent bankruptcy

In my recent client engagements, I’ve discovered that increased flexibility (in product development and deployment, mobile capabilities, back office integration, etc.) is still top of mind. But as organizations weigh options for meeting their specific business needs, tailored custom-build efforts may be required when system replacements or refaced/modernized front ends fail to meet long-term business objectives.

Oftentimes, proposed modifications are defined to bolster existing systems as a short-term, quick-win solution until a more permanent solution can be afforded. Carriers who elect to undertake custom build efforts in-house are faced with balancing the following resource challenges:

  1. Retaining full-time resources with sufficient expertise in both initial custom development and ongoing maintenance efforts.
  2. Enlisting external consultants who have both System Development Life Cycle (SDLC) expertise and significant industry expertise.

Either approach drives the carrier to consider the cost of ensuring quality design and development practices against tight budgets and competing business priorities.

Although a quick-fix enhancement may seem to be the cheapest route, a Software Productivity Research study from 2010 found that a patch is only less expensive through the early rounds of coding. After that, the longer-term solution is significantly cheaper to code and significantly more cost-effective to maintain.

Most concerning is the tendency for organizations to prioritize the short-term objective without fully considering the potential long-term ramifications.  I can see the value of targeted modifications to existing systems for a short-term goal with a short expected lifecycle.  However, it seems that regardless of the intended short lifecycle of these “Band-Aids,” the modifications remain in use years longer than planned.  The term “technical debt” has been top of mind in many discussions I’ve had recently, where we face the challenge of helping carriers fully scrutinize their options and understand the consequences of their decisions. Carriers performing more internal development need to understand that any shortcuts made for an immediate patch MUST be structurally reworked in order to repay the technical debt incurred to get the fix up and running.

For example, in one instance, an organization has been weighing options to achieve a business goal given several unknown future factors.  Options included expanding an internal system – which evolved through bolt-on requests and had become a critical system – or building these capabilities within a new system.  The technical debt factor was paramount in this case, as the expected lifecycle of the selected solution weighed in heavily.  Given the uncertain future state, a short-term solution may work for a year or two, but the probability of a three-plus-year lifespan drives a far more strategic approach.  Any short-term patches made to the existing system would become exponentially more costly to support as the system remains in use.

This doesn’t mean that there can’t be quick fixes applied to meet an immediate need, but carriers should look beyond the next quarter and evaluate their debt repayment plan before making the decision to implement a quick fix.  Nearly every carrier I’ve worked with has an internal system that grew to be a critical platform and now requires full time business and IT resources purely for maintenance alone.  As the cost to maintain such a system grows, so does the cost to replace it. Carriers must consider the true long term benefits and ramifications of their development efforts and make strategically sound decisions to meet both short term needs and long term business goals.