Top 10 Reasons for System Implementation Failure

Insurance carriers are not typically custom development or system integration experts – their business is insurance, not software.  A few national carriers have built their companies around providing insurance directly over the web and have developed proven methods of software development and integration.  That is not the case for most, especially small- or mid-size carriers.  These carriers see the need to improve products, speed to market, and flexibility.  They address the underlying system needs by buying, building, or a combination of the two, and introducing new systems into their technology ecosystem.  Unfortunately, too many of these projects fail.

There are many different reasons why system implementations fail, but these are the top 10 in my experience.

1) No experience running an RFP to begin the selection. The RFP was pulled from the internet, modified slightly, and sent out.  Of four responses, three eventually dropped out, so they went with the one remaining vendor, never taking a close look at what they were buying.

2) Inadequate project/program management process. The project was driven by hard completion dates without a valid work breakdown structure behind the project plan, so no one really understood what it would take.  One high-level executive ran the project, and no one was empowered to reveal that the emperor had no clothes.

3) The software development company was located in Europe, and its stateside representation was a consulting company that did not understand the carrier’s business.

4) The carrier failed to properly check the vendor’s references for successful installations, which would have revealed there were none.

5) The carrier wanted to perform too much of the work themselves, learning as they went, instead of letting the experts do what they do best.

6) Tried a big bang: all lines, all states, and all systems on day one.

7) Poor scope management. They tried to put everything in the first release, afraid that if it was not there, they would never get it.  As a result, they got nothing.

8) The carrier’s requirements were actually pretty good, but they did not know how to manage them. They let the vendor set the agenda instead of setting and holding on to their own.

9) Not enough involvement in the process by the business.  After the selection, all the work was done in a “black box,” and what was delivered was not what the business wanted.

10) Poor or no quality assurance process.

If Master Data Management is on your agenda, start with Data Quality

Many organizations are currently working on Master Data Management (MDM) strategies as a core IT initiative. One of the fastest paths to failure for these large, multiyear initiatives is to ignore the quality of the data. This is a good post on other MDM design pitfalls.

Master Data Management (MDM) is defined as the centralization, or single view, of X (Customer, Product, or other reference data) in an enterprise. Wikipedia says: “master data management (MDM) comprises a set of processes and tools that consistently defines and manages non-transactional data entities of an organization (also called reference data).” MDM is typically a large, multiyear initiative with a significant investment in tools, plus two to five times that investment in labor or services to integrate the subscribing and consuming systems. For many companies, you are talking millions of dollars over the course of the implementation. According to Forrester, cross-enterprise implementations range anywhere from $500K to $2 million in license costs on average, and professional services usually run two dollars for every dollar of software license. When you consider integrating all of your systems for bi-directional synchronization of customer or product information, the services investment over time can reach five times the license cost.
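To make those ratios concrete, here is a back-of-the-envelope calculation in Python using only the Forrester figures cited above; the ranges are the cited averages, not a quote for any particular program.

```python
# Back-of-the-envelope totals implied by the Forrester figures above.
license_low, license_high = 500_000, 2_000_000  # cross-enterprise license range
scenarios = {
    "typical (services ~2x license)": 2.0,
    "heavy bi-directional sync (up to 5x)": 5.0,
}

for label, ratio in scenarios.items():
    low = license_low * (1 + ratio)       # license plus services
    high = license_high * (1 + ratio)
    print(f"{label}: ${low:,.0f} to ${high:,.0f} total")
```

Even the typical scenario lands between $1.5 million and $6 million all-in, which is why skimping on the data quality groundwork is such an expensive mistake.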

At its simplest level, MDM is like a centralized data pump or the heart of your customer or product data (the most popular implementations). But once you hook this pump up, if you haven’t taken care of the quality of the data first, what have you done? You have just spent millions of dollars in tools and effort to pollute the quality of data across the entire organization.

Unless you profile the systems to be integrated, the quality of the data is impossible to quantify. The analysts who work with the data in a particular system have an idea of which areas are suspect (e.g., “we don’t put much weight in the forecast of X because we know the data is sourced from our legacy distribution system, which has data ‘problems’ or ‘inconsistencies’”). The trouble is that these issues are known only at the subconscious level and are never quantified, so a business case to fix them never materializes and improvements never get funded. In many cases, the business is not aware there is a problem until they try to mine a data source for business intelligence.
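As an illustration of what profiling means in practice, here is a minimal sketch in Python with pandas. The file, column names, and the policy-number pattern are hypothetical; a commercial profiling tool goes much further (cross-column dependencies, pattern discovery, drift over time), but even this level of analysis turns “the data has problems” into a number.

```python
import pandas as pd

# Hypothetical extract from a legacy distribution system.
df = pd.read_csv("legacy_distribution_extract.csv")

# Basic column-level profile: completeness, cardinality, and type.
profile = pd.DataFrame({
    "null_pct": (df.isna().mean() * 100).round(1),
    "distinct": df.nunique(),
    "dtype": df.dtypes.astype(str),
})
print(profile)

# Quantify a suspected issue instead of leaving it anecdotal, e.g.,
# policy numbers that do not conform to the expected (invented) pattern.
bad_policy = ~df["policy_number"].astype(str).str.match(r"^[A-Z]{2}\d{7}$")
print(f"Nonconforming policy numbers: {bad_policy.mean():.1%}")

# Duplicate customer keys -- a classic blocker for a single view of Customer.
dupes = df["customer_id"].duplicated().sum()
print(f"Duplicate customer_id rows: {dupes}")
```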
According to a study by the Standish Group, 83% of data integration/migration projects fail or overrun substantially due to a lack of understanding of the data and its quality. Anyone ever work on a data integration project or data mart or data warehouse that ran long? I have, and I’m sure most of the people reading this have too.

The good news is that data profiling and analysis is a small step you can take now to prepare and position yourself for the larger MDM effort. With the right tools, you can assess the quality of the data in your most important data sources in as little as three weeks, depending on the number of tables and attributes. Further, it is an inexpensive way to ensure you are laying the foundation for your MDM or business intelligence initiatives. It is much more expensive to uncover your data quality problems in user acceptance testing; many times, it is fatal.

The success of your MDM initiative depends on the quality of the data. You can profile and quantify your data quality issues now to proactively head off problems down the road and to build a business case for improving your existing data assets (marts, warehouses, and transactional systems). The byproduct of this analysis is better business intelligence derived from these systems, helping the business make better decisions with accurate information.

“The Trouble with the Future…

…Is that it arrives before we are ready for it.”  A bit of plainspoken wisdom from American humorist Arnold H. Glasow. Thanks to the miracle of Google, it becomes our intro quote for today’s topic of acquisition integration readiness.

In an earlier post, we talked about data integration readiness, but that’s only one task on the list of things you should be doing now if you plan to acquire a company in ’09. Readiness is the word of the day, and the best way to sum it up is this: you need a documented platform to integrate with across the board, or you will lose time during your integration period. Lost time means revenue drag – you won’t hit your projections.

So, let’s make a list.

1. Data integration readiness, already covered in detail here.

2. Process readiness – are your procedures for key business areas up to date? You will need to walk through them with business team leads on the acquisition side to rapidly understand the gaps between the way they do business and the way you do business. Can you rapidly train the influx of people you will be onboarding with the acquisition? An effective training plan is a solid way to minimize post-close chaos.

3. Collaboration readiness – don’t underestimate the amount of time those new employees will take up with endless “How do I?” questions. Hopefully, you have a corporate knowledge portal in place already and you can give them access and a navigation walkthrough on Day 1. Make sure it includes discussion groups, so that the answers to their common questions can be searchable and institutionalized. There was a great post on this recently describing how IBM is using collaboration tools to help with acquisitions, and Edgewater’s Ori Fishler and Peter Mularien have posted extensively on Web 2.0 tools for corporate collaboration.

While we are on the subject of collaboration tools, let me tip you off to an important secondary benefit. The people who use them and participate actively in discussions are your change agents, the people who can help lead the rest of the acquired workforce through the integration. The people who don’t participate, well, they are your change resisters. They need to be watched, because they may have emotionally detached from this whole acquisition thing. If they are key employees, you want to make sure they don’t have one foot out the door.

4. System integration readiness – it’s oh-so-much-more-challenging (meaning time-consuming and costly) to integrate into an undocumented or underdocumented architecture. Get your data flow diagrams and infrastructure diagrams, as well as your hardware and software inventories, up to date before you close; a sketch of what an inventory can look like in machine-readable form follows below.
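As a sketch of what “documented” can mean beyond diagrams, here is a hypothetical machine-readable inventory entry with a staleness check; the field names, sample values, and the 180-day threshold are invented for illustration, not a standard.

```python
from datetime import date

# Hypothetical inventory record for one system; all fields are illustrative.
inventory = [
    {
        "system": "policy_admin",
        "owner": "IT Operations",
        "hardware": ["app server x2", "db server"],
        "software": {"os": "RHEL 5", "db": "Oracle 10g"},
        "upstream_feeds": ["agent_portal"],
        "downstream_feeds": ["billing", "data_warehouse"],
        "last_reviewed": date(2008, 11, 1),
    },
]

# Flag entries that have not been reviewed recently -- stale documentation
# is where integration time quietly disappears.
STALE_DAYS = 180
for rec in inventory:
    age = (date.today() - rec["last_reviewed"]).days
    if age > STALE_DAYS:
        print(f"{rec['system']}: documentation {age} days old -- review before close")
```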

That first quarter after you close will still be a wild ride, but you can be sure you’ve cut the stress level down significantly if you make these readiness tasks a priority before closing day.

E-Billing Exposé – Part One

Designed correctly, E-Bills are a great use of the web, allowing insurance companies to communicate critical financial and census data to their customers in a controlled fashion while increasing efficiency and accuracy for the customer and themselves.

Unfortunately, it has been my experience that there’s a lot of misleading information about E-Billing “products” and “systems” on the Internet. In my upcoming posts, I’m going to wade through this morass of information and detail the approach that works – both from the company and customer standpoint.

The bill I’m speaking of is for group insurance. In the paper world, it’s that complicated, multi-page, already-incorrect-when-mailed document that details what the insurance company believes the customer owes. In the group insurance world (especially worksite marketing), where products may be voluntary and usually involve payroll deductions, I contend the bill is always wrong when the customer receives it, due to the nature of the beast. By the time the bill arrives, the customer has employees who have joined the company, left the company, added dependents, and so on. The poor customer then tries to reconcile this manual bill against a “point in time” that has nothing to do with the state of the insurance company’s system when it generated the bill. The customer remits the reconciled (from their standpoint) bill, and it starts all over at the company’s end. There, it’s a reverse reconciliation as the company tries to figure out the entries the customer made. It’s not unusual to see bills arrive at an insurance company with lines marked out, additions written in the margins, incorrect calculations, and so on. How can it be right? The customer doesn’t know the insurance company’s business rules.
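To see why the bill is stale on arrival, here is a minimal sketch of the point-in-time mismatch in Python; the employee IDs, premiums, and adjustment rule are invented for illustration.

```python
# Hypothetical snapshot: what the carrier billed vs. the customer's payroll today.
billed = {"E001": 42.50, "E002": 42.50, "E003": 61.00}   # employee -> premium
current_census = {"E001", "E003", "E004"}                # E002 termed, E004 hired

terminated = set(billed) - current_census   # billed, but no longer employed
new_hires = current_census - set(billed)    # employed, but not yet on the bill

# The customer pays only for people actually on payroll today.
adjusted_total = sum(p for e, p in billed.items() if e in current_census)
print(f"Carrier billed:  {sum(billed.values()):.2f}")
print(f"Customer remits: {adjusted_total:.2f}  (dropped {sorted(terminated)})")
print(f"Missing from bill: {sorted(new_hires)} -- carrier must reverse-reconcile")
```

Multiply this by hundreds of employees and multiple voluntary products with different rates, and the mutual reconciliation loop described above becomes inevitable.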

A statement about E-Billing products: given the complexity of insurance billing systems (especially legacy systems), the inherently dynamic nature of the customer’s census, and the integration needed to post a finalized bill from a true E-Billing system (Presentment, Reconciliation and Payment) back to the billing system, products haven’t met the mark.  I have yet to see a product that meets the needs of an insurance company requiring true branding, specific process flow, true self-service automation, and SOA-compliant integration.

Contributing to this state of confusion, I’ve seen three completely different levels of E-Billing requested by clients. All are referred to as E-Billing, and in many cases, in previous engagements, the client visualized more and received a lot less.

Stay tuned for a discussion of (1) E-Bill Presentment, (2) E-Bill Presentment and Payment and (3) E-Bill Presentment, Reconciliation and Payment (true E-Billing).
