Six Core Principles for Transforming Healthcare – People, Technology, Data & Analytics

Digital technologies are changing the landscape of healthcare service delivery and raising patient expectations on where, when and how they engage with providers – and payers. Leading organizations are responding to these challenges and opportunities by implementing patient-centric communications and analytical tools and changing how they deliver core services – transforming their business models, operations and the patient experience in the process. To understand the legitimate potential offered by these tools, we need to unpack the buzzwords and examine the benefits and risks of specific digital capabilities – and then consider what they enable in a healthcare service delivery setting.

The following six core principles should be at the heart of every digital transformation initiative, large or small. While we have found these primary drivers to be applicable across various industry settings, here we outline their specific relevance to Healthcare.

1. Business Driven. Many digital technology initiatives in healthcare are driven by one or more core elements of the Triple Aim:

· Improve the health of populations – this principle is driving virtually every organization to identify and track populations of high-risk, over-utilizing patients; establish agreed-upon outcomes goals for defined segments and strata with similar characteristics or needs; and measure the impact of care plans tailored for each individual patient;

· Reduce the per-capita costs of care – value-based reimbursement programs and other risk-based arrangements are focusing attention on both clinical outcomes and financial results – driving the need for self-service analytics for patients, providers and payers – to measure the actual costs of care delivery for each patient;

· Improve the patient’s experience of receiving health care services – increasing transparency and coordinating patient-focused care across an expanding set of partners and providers helps to deliver the right care at the right time in the right setting – increasing patient satisfaction and improving compliance with care plans.

All the above elements are driving the need for better integration of primary service delivery processes and the resulting data streams – and for broader availability of business intelligence (BI) and analytics capabilities and an omni-channel communication platform across the entire enterprise value chain. Digital technologies must be part of every aspect of the overall business-level strategy.

How are you anticipating the needs for and incorporating the capabilities of digital devices and data streams into your business execution and communications strategies?

2. Data is a Core Asset. Organizations that define, measure and adjust their operations using diverse and relevant data sets will realize many performance advantages – to the benefit of all stakeholders.

· Assembling Good Data – capturing enterprise information in digital format – and verifying the quality of those data sets against defined standards for completeness, accuracy and veracity – is an absolute foundation for preparing and enabling digital transformation. The core data systems for the execution of primary transactions and analysis of results must be credible and trustworthy – and this is only achieved – like any relationship – over a period of consistent behavior and positive results.

· Not a Simple Task – for many, this is a major challenge and a significant hurdle to overcome. Most operations are dependent upon data sets that originate in multiple legacy source systems – many of which are too narrowly focused or too closely aligned with aging or inflexible business applications. Understanding the actual contents of these older systems is challenging – envisioning their utility and engineering their transformation for novel purposes represents the “heavy lifting” of data integration. These efforts are difficult to quantify based on a direct ROI – and they are very often on the critical path to deploying and making effective use of newer digital technologies. However, opening these core assets to more transparent use by diverse participants will very often yield unanticipated benefits.

· Incremental Strategy – many organizations will not be able to re-architect their data systems from the ground up – in these cases, an incremental approach is much more viable. Most organizations will begin with a more focused implementation, building the data supply lines to capture and move data from core operational sources into a data warehouse or set of data stores optimized for BI and analytics.

· Managing Data as an Asset – proactive data governance that designates authoritative sources, establishes and enforces quality criteria, defines and assigns roles and responsibilities for managing defined data sets, and facilitates the use of data for various purposes is a critical aspect of any successful implementation.

· Anticipating Scale – the incorporation of so-called “big data” is also growing in importance in healthcare. The volumes, variety and value of these expanding and emerging data sets are driving further elaboration of the data flows, validation criteria, storage approaches and dissemination for novel use cases and analytical applications.

3. Actionable Analytics. Digital interactions – whether improved access to diverse data sources or primary transactions – are most valuable when self-service users can make timely and informed decisions and take appropriate actions based on what the data is indicating.

As the scope, diversity and ubiquity of digital devices continues to grow, the capture and dissemination of data will spread – and more users will be better informed about both the specific details and the broader context of their operating choices.

· Patients can access their care plans – staying completely up to date on their responsibilities for medications, lab results, diet, exercise and follow-up appointments – and monitor their overall progress toward agreed-upon clinical goals;

· Providers can access populations – and can stratify sub-segments of their panels according to clinical risk and compliance – tailoring their communications and interventions to keep patients on-track with their outcomes goals;

· Payers can review patient populations and provider networks – identifying attributed patient groups against value-based performance goals and profiling provider effectiveness in meeting clinical and financial goals on risk contracts and alternative payment models.

All these capabilities empower the various user groups to more clearly understand and localize the issues and factors underlying excellent or poor performance – and focus the reinforcing or remedial actions to the benefit of all stakeholders.

4. Patient-Centered Experience. A key driver and a widely recognized benefit of the increasing availability of digital technologies is their ability to both stimulate demand and meet the rising expectations of patients for convenient access to all forms of healthcare information and services through their hand-held or wearable devices.

· Ubiquity – the emergence of the “connected anywhere, information everywhere” operating experience has given patients greater power and influence in engaging and steering their relationship with providers. So-called “activated patients” are more equipped to make informed choices and take the initiative to research their conditions, identify and understand their care alternatives, communicate and coordinate with care providers, exchange stories and find support from other patients in shared-need communities, set agreed-upon goals for their care with their providers, and measure their results.

· Flexibility – providers can no longer hold fast to rigid or single-stream operating models – imposing their internal structures, processes and workflows onto patients from the inside out. For digitally-enabled patients, the care experience is becoming much more of a self-directed journey. Providers who recognize this reorientation to facilitate the “Patient Journey” and unbundle and organize their delivery of services according to this revised model will realize greater patient satisfaction with their own care experiences, better compliance with care plans, and improved outcomes – both clinical and financial.

· Adaptability – similarly, payers are coming under increasing pressure to unbundle and adapt to the disaggregating needs and demands of their patients (members). Patients are seeking customized configurations of benefits packages that are more cost-effective and focused on their specific anticipated needs for services. These trends will continue to play out as more patients enter the individual market for health insurance products and payers are forced to adapt and devise new benefits plans.

5. Agile Technologies, Agile Processes. Agility must be a core value throughout the transformation effort – it must pervade every aspect of envisioning, defining, designing and implementing solutions in this continually evolving setting. The unbundling of service components and their flexible deployment and execution on-demand to patients and other users will create new challenges for providers.

· Feedback and Response – having an agile structure will enable more responsive delivery models – and capturing data at each point of interaction and each touch point along the Patient Journey enables near-real-time analysis of service delivery, care compliance, and their impact on outcomes. It allows feeding of detailed care experience data back into processes and workflows to enable greater personalization, better communication, and more accurate and effective segmentation for population analytics.

The commitment to an agile operation carries additional demands and benefits that must be considered as part of the transformation strategy:

· De-Coupling – tightly coupled databases, applications, custom code, execution logic and various other technical components can complicate the process of revising or enhancing services and their operation – an agile architecture will mandate an explicit de-coupling and un-bundling of tightly-bound components.

· Rapid Application Development – the technical environment and the operational culture must encourage and enable experimentation – where minimally vetted ideas can be prototyped and evaluated – facilitating an ongoing and in some sense relentless exploration of new areas for improvement or innovation.

· Infrastructure – the cloud provisions a clearly defined, precisely tuned and proactively managed capacity for service delivery and data access – ready to activate (or deactivate) as demand ramps up and down. This responsive and adaptive provisioning of computing resources increases both the effectiveness and the efficiency of business operations and the satisfaction of stakeholders.

· Cloud Orchestration – these unbundling and decoupling features combine to enable and facilitate a more agile operation. The execution model for the primary data sources and system services becomes one of flexible activation and deactivation of cloud-deployed capacities at a more granular level – tuned to the needs and demands of the external users rather than the constraints of internal operations.

6. Security & Access Control – the increased openness of these services demands more rigorous and reliable levels of security – including data security, application security, data encryption, compliance with regulations, and more informative monitoring of the ongoing state of the systems. Threats to on-line computing resources continue to rise as hacking, data theft, and denial-of-service attacks increase in number and sophistication. Added attention to risk management, strict adherence to appropriate security standards, and regular audits must be part of any such initiative.

The increasing availability of digital technologies is reinforcing expectations of timeliness, flexibility and convenience with patients, care givers, providers and payers in an evolving ecosystem of service delivery and information exchange. The relentless focus on quality and outcomes, cost control, value creation, and satisfaction will continue to drive innovation in service delivery across an expanding and diversifying network of healthcare industry participants. Organizations and individuals that respond and adapt will realize distinct advantages in both clinical and financial performance.

Assumptions Are a Necessary Evil

In over 27 years, I have never experienced a major problem on a systems implementation that did not begin with an assumption.

“Of course they can do it; they have a ton of experience.”
“Of course the development servers are being backed up.”
“Of course the new system can do that; it’s a tier 1 ERP – how can it not do it?”
“Of course there’s a compatible upgrade path; the vendor’s web site said so.”

Yeah, well, not always.

Fear the statement that begins, “Of course…”. From a handy web dictionary, assumption is defined as “A thing that is accepted as true or as certain to happen, without proof.”

So, assumptions are bad and should be eliminated. If you get rid of all assumptions, then you are good to go, right?

Yeah, well, not always.

Why? Because eliminating all assumptions takes time. It takes a lot of time and costs a ton of money.

Consider a project to select a new ERP system. A well architected project that includes a good process and the right level of participation from the right people generally takes six months for an average mid-sized manufacturer. If you hit that schedule, you have made a lot of assumptions, whether you know it or not. Why? Because if you try to eliminate every possible assumption, that same selection project would take years, if it could even be finished at all.

The pace of change within your technology environment, much less your business, as well as the tools you are considering, turns a nicely bounded selection project into a fruitless attempt to match your knowledge and certainty to things that are constantly evolving. There would be no end point in that scenario. By the time you have eliminated all assumptions, the people and technology have evolved from underneath all your hard-won knowledge.

So, we have a conundrum: if you make assumptions, you will screw up; yet if you don’t make assumptions, you cannot proceed. Your options appear to be limited. Certainly, there are situations that require eliminating all assumptions – I’m thinking here of building a space shuttle. But if you aren’t shooting for the moon with your project, what do you do?

You must make assumptions to move forward, while balancing against overall risk. You may never get to the point where you make assumptions your ally, but you can at least reach a cautious neutrality with them.

EDGEWATER EXPERT’S CORNER: Diving into the Deeper End of SQL – Part 1

SQL is something of a funny language insofar as nearly every developer I have ever met seems to believe they are “fluent” in it, but the fact of the matter is that most developers just wade around in the shallows and never really dive into the deep end. Instead, from time to time we get pushed into the deep end, learning additional bits and pieces and expanding our vocabulary simply to keep from drowning.

The real challenge here is that there are several dialects of SQL and multiple SQL-based procedural languages (e.g. PL/SQL, T-SQL, Watcom-SQL, PL/pgSQL, NZPLSQL, etc.), and not everything you learn in one dialect is implemented the same way in the others. In 1986 the ANSI/ISO SQL standard was created with the objective of SQL interoperability across RDBMS products. Unfortunately, since the inception of this standard, and through every subsequent revision (8 in all), there are still no database vendors that adhere directly to it. Individual vendors instead choose to add their own extensions to the language to provide additional functionality. Some of these extensions go full circle and get folded into later versions of the standard, and others remain product specific.

Something of a long-winded introduction, but necessary for what I want to discuss. Over the coming months I will be posting some write-ups on the deeper end of SQL and discussing topics aimed at expanding our SQL vocabularies. Today, I want to talk about window functions. These were introduced as part of the 2003 revision to the ANSI/ISO SQL standard. Window functions are probably one of the most powerful extensions to the SQL language ever introduced, and most developers – yes, even the ones that consider themselves fluent in SQL – have never even heard of them. The short definition of a window function is a function that allows us to perform a calculation or aggregation across a set of rows within a partition of a dataset having something in common. Something of a lackluster definition, you say? I agree, but before you click away, take a peek at a couple of the examples below and I am sure you’ll find something useful.

For starters, I would like to explain what a “window” of data is. Simply put, a window of data is a group of rows in a table or query with common, partition-able attributes shared across rows. In the table below, I have highlighted 5 distinct windows of data. The windows in this example are based on a partition by department. In general, data windows can be created with virtually any foreign key that repeats in a dataset, or any other repeating value. [Image]
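To make the examples below concrete in the absence of the original screenshots, here is a minimal, hypothetical EMPLOYEES table (the names and columns are illustrative assumptions, not the actual schema behind the images) that the sample queries that follow are written against:

CREATE TABLE employees (
  employee_id    NUMBER        PRIMARY KEY,
  employee_name  VARCHAR2(50),
  department     VARCHAR2(30),  -- the repeating value that defines each data window
  salary         NUMBER(10,2)
);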

Example 1: Ranked List Function – In this example using the RANK function, I will create a ranked list of employees in each department by salary. Probably not the most exciting example, but think about alternate methods of doing the same without the RANK function, and the simple query below gets really ugly, really quick. [Image]
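Since the screenshot is not reproduced here, a minimal sketch of the query against the hypothetical EMPLOYEES table above might look like this:

SELECT department,
       employee_name,
       salary,
       RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS salary_rank  -- restarts at 1 for each department window
FROM   employees
ORDER  BY department, salary_rank;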

Example 2: Dense Ranked List Function – Similar to the RANK function, but the DENSE_RANK value is the same for members of the window having the same salary value. [Image]
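The same sketch with DENSE_RANK – note that rows with the same salary share a rank, and no rank values are skipped afterward:

SELECT department,
       employee_name,
       salary,
       DENSE_RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS salary_dense_rank
FROM   employees
ORDER  BY department, salary_dense_rank;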

Example 3: FIRST and LAST Functions – Using the first and last functions we can easily get the MIN and MAX salary values for the department window and include it with our ranked list. Yup, you are sitting on one row in the window and looking back to the first row and forward to the last row of the same window all at the same time!   No Cursors Needed!!! [Image]
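One way to sketch this – the original screenshot is not available, so this version uses the FIRST_VALUE and LAST_VALUE analytic functions (Oracle’s MIN/MAX … KEEP (DENSE_RANK FIRST/LAST) form is an equivalent alternative):

SELECT department,
       employee_name,
       salary,
       RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS salary_rank,
       FIRST_VALUE(salary) OVER (PARTITION BY department ORDER BY salary DESC) AS dept_max_salary,
       LAST_VALUE(salary)  OVER (PARTITION BY department ORDER BY salary DESC
                                 ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS dept_min_salary
FROM   employees;

The ROWS BETWEEN clause on LAST_VALUE matters: without it the default window stops at the current row and you would not get the true departmental minimum.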

Example 4: LEAD and LAG Functions – These two are without a doubt a couple of the most amazing functions you will ever use. The LAG function allows us to be sitting on one row in a data window and then look back at any previous row in the window of data. Conversely, the LEAD function allows us to be sitting on one row in a data window and then look forward at any upcoming row in the window of data.

Syntax:

LAG (value_expression [,offset] [,default]) OVER ([query_partition_clause] order_by_clause)

LEAD (value_expression [,offset] [,default]) OVER ([query_partition_clause] order_by_clause)

In the illustration below, from within the context of the data window, I am looking up at the previous record and down at the next record and presenting that data as part of the current record. To look further ahead or behind in the same data window, simply change the value of the offset parameter. Prior to the introduction of these functions, mimicking the same functionality without a cursor was essentially impossible – and now, with a single simple line of code, I can look up or down at other records from a record. Just too darn cool! [Image]
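As a sketch of that illustration (again against the hypothetical EMPLOYEES table), each row carries along its neighbors from the same department window:

SELECT department,
       employee_name,
       salary,
       LAG(employee_name, 1, 'N/A')  OVER (PARTITION BY department ORDER BY salary DESC) AS prev_employee,  -- one row back in the window
       LEAD(employee_name, 1, 'N/A') OVER (PARTITION BY department ORDER BY salary DESC) AS next_employee   -- one row ahead in the window
FROM   employees;

Change the offset (the second argument) to 2, 3, and so on to look further back or further ahead in the window.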

Example 5: LEAD and LAG Functions – Just another example of what you can do with the lead and lag functions to get you thinking. In this example, our billing system has a customer credit limit table where, for each customer, a single record is active and historical data is preserved in inactive records. We want to add this table to our data warehouse, but bring it in as a type-2 dimension, so we need to end-date and key all the records as part of the process. We could write a cursor and loop through the records multiple times to calculate the end date and then post them to the data warehouse… or, using the LEAD function, we can calculate the end date based on the create date of the next record in the window. The two illustrations depict the data in the source (billing system), then in the target data warehouse table, and a sketch of the LEAD-based transform follows them. All of this with just a dozen lines of SQL using window functions – how many lines of code would this take without window functions?

Data in source billing system. [Image]

Transformed data for load to data warehouse as T-2 dimension. [Image]
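The source table and its columns are not shown here, so the following is a hedged sketch that assumes a CUSTOMER_CREDIT_LIMIT table with CUSTOMER_ID, CREDIT_LIMIT and CREATE_DATE columns; the end date of each record is simply the create date of the next record in that customer’s window:

SELECT customer_id,
       credit_limit,
       create_date AS effective_date,
       LEAD(create_date, 1, DATE '9999-12-31')
         OVER (PARTITION BY customer_id ORDER BY create_date) AS end_date,     -- next record's create date, or the "open" high date
       CASE WHEN LEAD(create_date)
                 OVER (PARTITION BY customer_id ORDER BY create_date) IS NULL
            THEN 'Y' ELSE 'N' END AS current_flag                              -- last record in the window is the active one
FROM   customer_credit_limit;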

Example 6: LISTAGG Function – The LISTAGG function allows us to return the values of a column from multiple rows as a single concatenated column, aka a “multi-valued field” – remember PICK or Revelation? [Image]
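A sketch against the hypothetical EMPLOYEES table – one row per department, with the employee names collapsed into a single comma-separated column:

SELECT department,
       LISTAGG(employee_name, ', ') WITHIN GROUP (ORDER BY employee_name) AS employees_in_dept
FROM   employees
GROUP  BY department;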

One closing note: all of the examples shown were in Oracle, but equivalent functionality also exists in MS-SQL Server, IBM DB2 & Netezza and PostgreSQL.

So what do you think? Ready to dive into the deep end and try some of this? At Edgewater Consulting, we have over 25 years of successful database and data warehouse implementations behind us, so if you’re still wading in the kiddie pool – or worse yet, swimming with the sharks – give us a call and we can provide you with a complimentary consultation with one of our database experts. To learn more about our consulting services, download our new Digital Transformation Guide.

EDGEWATER EXPERT’S CORNER: The Pros and Cons of Exposing Data Warehouse Content via Object Models

So you’re the one that’s responsible for your company’s enterprise reporting environment. Over the years, you have succeeded in building out a very stable and yet constantly expanding and diversifying data warehouse, a solid end-user reporting platform, great analytics and flashy corporate dashboards. You’ve done all the “heavy lifting” associated with integrating data from literally dozens of source systems into a single cohesive environment that has become the go-to source for any reporting needs.

Within your EDW, there are mashup entities that exist nowhere else in the corporate domain and now you are informed that some of the warehouse content you have created will be needed as source data for a new customer service site your company is creating.

So what options do you have to accommodate this? The two most common approaches that come to mind are: a) generating extracts to feed to the subscribing application on a scheduled basis; or b) just give the application development team direct access to the EDW tables and views. Both methods have no shortage of pros and cons.

  • Extract Generation – Have the application development team identify the data they want up front and, as a post-process to your nightly ETL run cycles, dump the data to the OS and leave consuming it up to the subscribing apps.
Pros:
  • A dedicated extract is a single daily/nightly operation that will not impact other subscribers to the warehouse.
  • Application developers will not be generating ad hoc queries that could negatively impact performance for other subscribing users’ reporting operations and analytics activity.
Cons:
  • You’re uncomfortable publishing secure content to a downstream application environment that may not have the same stringent user-level security measures in place as the EDW has.
  • Generating extracts containing large amounts of content may not be the most efficient method for delivering needed information to subscribing applications.
  • Nightly dumps or extracts will only contain EDW data that was available at the time the extracts were generated and will not contain the near-real-time content that is constantly being fed to the EDW – and that users will likely expect.
  • Direct Access – Give the subscribing application developers access to exposed EDW content directly so they can query tables and views for the content they want as they need it.

 

Pros:
  • It’s up to the application development team to get what they need, how they need it and when they need it.
  • More efficient than nightly extracts as the downstream applications will only pull data as needed.
  • Near-real-time warehouse content will be available for timely consumption by the applications.
Cons:
  • You’re uncomfortable exposing secure content to application developers that may not have the same stringent user-level security measures in place as the EDW has.
  • Application developers will be generating ad hoc queries that could negatively impact performance for other subscribing users’ reporting operations and analytics activity.

 

While both of the above options have merits, they also have a number of inherent limitations – with data security being at the top of the list. Neither of these approaches enforces the database-level security that is already implemented explicitly in the EDW – side-stepping this existing capability will force application developers to either reinvent that wheel or implement some broader, but generally less stringent, application-level security model.

There is another option, though, one we seldom consider as warehouse developers. How about exposing an object model that represents specific EDW content consistently and explicitly to any subscribing applications? You may need to put on your OLTP hat for this one, but hear me out.

The subscribing application development team would be responsible for identifying the specific objects (collections) they wish to consume and would access these objects through a secured procedural interface. On the surface, this approach may sound like you and your team will get stuck writing a bunch of very specific custom procedures, but if you take a step back and think it through, the reality is that your team can create an exposed catalog of rather generic procedures, all requiring input parameters, including user tokens – so the EDW security model remains in charge of exactly which data is returned to which users on each retrieval.
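As a rough, hypothetical sketch of what one of these generic catalog procedures might look like in Oracle PL/SQL (the table, entitlement model and parameter names are illustrative assumptions, not a prescription), a “Get” method takes the caller’s user token plus filtering and paging parameters and hands back a ref cursor:

CREATE OR REPLACE PROCEDURE edw_get_customer_orders (
  p_user_token   IN  VARCHAR2,        -- caller identity; the EDW security model decides what comes back
  p_customer_id  IN  NUMBER,          -- required filter criteria supplied by the subscribing application
  p_page_number  IN  NUMBER,
  p_page_size    IN  NUMBER,
  p_results      OUT SYS_REFCURSOR
) AS
BEGIN
  OPEN p_results FOR
    SELECT o.order_id, o.order_date, o.order_total
    FROM   edw_customer_orders o
    WHERE  o.customer_id = p_customer_id
    AND    EXISTS (SELECT 1                        -- reuse the EDW's existing row-level entitlements
                   FROM   edw_user_entitlements e
                   WHERE  e.user_token  = p_user_token
                   AND    e.customer_id = o.customer_id)
    ORDER  BY o.order_date DESC
    OFFSET (p_page_number - 1) * p_page_size ROWS  -- data page control stays with the caller
    FETCH NEXT p_page_size ROWS ONLY;
END edw_get_customer_orders;
/

The same pattern generalizes across the catalog: every Get method carries a user token, validates its input parameters, and returns either scalar columns or user-defined object types.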

The benefits of this approach are numerous, including:

  • Data Security – All requests leverage the existing EDW security model via a user token parameter for every “Get” method.
  • Data Latency – Data being delivered by this interface is as current as it is in the EDW so there are no latency issues as would be expected with extracted data sets.
  • Predefined Get Methods – No ad hoc or application-based SQL being sent to the EDW. Only procedures generated and/or approved by the EDW team will be hitting the database.
  • Content Control – Only the content that is requested is delivered. All Get methods returning non-static data will require input parameter values for any required filtering criteria – all requests can be validated.
  • Data Page Control – Subscribing applications will not only be responsible for identifying what rows they want via input parameters, but also how many rows per page to keep network traffic in check.
  • EDW Transaction Logging – An EDW transaction log can be implemented with autonomous logging that records every incoming request, the accompanying input parameters, the number of rows returned and the duration it took for the transaction to run (see the sketch after this list). This can aid performance tuning for the actual request behaviors from subscribing applications.
  • Object Reuse – Creation of a generic exposed object catalog will allow other applications to leverage the same consistent set of objects providing continuity of data and interface across all subscribing applications.
  • Nested and N Object Retrieval – Creation of single Get methods that can return multiple and/or nested objects in a single database call.
  • Physical Database Objects – All consumable objects are physically instantiated in the database as user-defined types based on native database data types or other user-defined types.
  • Backend Compatibility – It makes no difference what type of shop you are (e.g. Oracle, Microsoft, IBM, PostgreSQL or some other mainstream RDBMS); conceptually, the approach is the same.
  • Application Compatibility – This approach is compatible with both Java and .NET IDEs, as well as other application development platforms.
  • Reduced Data Duplication – Because data is directly published to subscribing applications, there is no need for subscribers to store that detail content in their transactional database, just key value references.
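For the transaction logging point above, a minimal sketch (the table and sequence names are assumptions) is an autonomously committed logging procedure that each Get method calls on its way out, so the log write never interferes with the caller’s own transaction:

CREATE OR REPLACE PROCEDURE edw_log_request (
  p_procedure_name IN VARCHAR2,
  p_user_token     IN VARCHAR2,
  p_parameters     IN VARCHAR2,
  p_rows_returned  IN NUMBER,
  p_duration_secs  IN NUMBER
) AS
  PRAGMA AUTONOMOUS_TRANSACTION;      -- commits the log row independently of the calling transaction
BEGIN
  INSERT INTO edw_request_log
    (log_id, procedure_name, user_token, parameters, rows_returned, duration_secs, logged_at)
  VALUES
    (edw_request_log_seq.NEXTVAL, p_procedure_name, p_user_token, p_parameters,
     p_rows_returned, p_duration_secs, SYSTIMESTAMP);
  COMMIT;
END edw_log_request;
/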

There are also a few Cons that also need to be weighed when considering this path:

  • EDW Table Locks – the warehouse ETL needs to be constructed so that tables that are publishing to the object model are not exclusively locked during load operations. This eliminates brown-out situations for subscribing applications.
  • Persistent Surrogate Keys – EDW tables that are publishing data to subscribing applications via the object model will need to have persistent surrogate primary keys so that subscribing applications can locally store key values obtained from the publisher and leverage the same key values in future operations.
  • Application Connection/Session Pooling – Each application connection (session) to the EDW will need to be established based on an EDW user for security to persist to the object model, so no pooling of open connections.
  • Reduced Data Duplication – This is a double-edged sword in this context because subscribing applications will not be storing all EDW content locally. As a result, there may be limitations to the reporting operations of subscribing applications. However, the subscribing applications can also be downstream publishers of data to the same EDW and can report from there. Additionally, at the risk of convoluting this particular point, I would also point out that “set” methods can also be created which would allow the subscribing application(s) to publish relevant content directly back to the EDW, thus eliminating the need for batch loading back to the EDW from subscribing application(s). Probably a topic for another day, but I wanted to put it out there.

 

So, does that sound like something that you may just want to explore? For more information on this or any of our offerings, please do not hesitate to reach out to us at makewaves@edgewater.com. Thanks!

Empowering digital transformation together at IASA 2017

Our Edgewater Insurance team is packing their bags and is excited to participate in IASA 2017, June 4-7 in Orlando, Florida. We’re proud to, once again, participate in a forum that brings together so many varied professionals in the insurance industry who are passionate about being prepared for the unprecedented change that is sweeping the industry. We look forward to meeting you there to show how our deep expertise, delivering solutions built on trusted technologies, can help you transform your business to become more competitive in a digital world.

Come and see how our experienced team of consultants can help your organization drive change

In industry, technology has often served as a catalyst for modernization, but within insurance we need to do more to understand the consumer to drive change. More than any other opportunity today, CEOs are focused on how to leverage digital technologies within their companies. But there’s still a lot of noise about what digital transformation even means. At Edgewater, we have a unique perspective based on our 25 years of working with insurance carriers. Our consulting team has spent many years working in the industry, all the way from producers to adjusters, and in vendor management. We have a deep understanding of the business and technology trends impacting the industry as well as the all-important consumer trends. We know that transformation in any company can start big or, of course, it can start small – from creating entirely new business models to remaking just one small business process in a way that delights your customer, changes their engagement model, or improves your speed to market.

We work with executives every day to create and implement their digital transformation strategies. At this event, we will be discussing how digital transformation needs to be at the top of each insurance carrier’s business strategy as the enabler that will bring together consumer, producer, and carrier. Through our interactive solution showcase, attendees can experience first-hand how technology innovations are sweeping the industry and how insurance carriers are progressing in their efforts to digitize. You will be able to explore solutions across many functional areas, including creating a unified experience for the consumer, enabling the producer to engage and add value, and learning to act on new insights by analyzing transaction and behavior data to create more personalized products and services.

But wait, you don’t have to wait

Get a sneak peek at the strategies we’ll be sharing at the event by downloading our Digital Transformation Quick Start Guide for Insurance Carriers at http://info.edgewater-consulting.com/insuranceguide. The guide is a starting point for how leaders should help their companies create and execute a customer engagement strategy. The Quick Start Guide will help you understand

  • What Digital Transformation is and what it is not
  • How producers should be using technology to connect with customers
  • How updating your web presence can improve how you engage with customers

See you there!

If you are planning to be at the event, visit our booth #1110 to meet our team and learn more about Edgewater’s solutions and consulting services for the Insurance industry. We’re excited to help you get started on your digital transformation journey.

Digital Transformation Starts with… Exploring the Possibilities. Here’s how

You can learn a lot about what digital transformation is, by first understanding what it is not. Digital transformation is not about taking an existing business process and simply making it digital – going paperless, as an example. Remaking manual processes reduces cost and increases productivity – no question – but the impact of these changes is not exactly transformative. At some point, you’ve squeezed as much efficiency as you can out of your current methods to the point where additional change has limited incremental value.

Digital transformation starts with the idea that you are going to fundamentally change an existing business model. This concept can seem large and ill-defined. Many executives struggle with where to even start. Half of the top six major barriers to digital transformation, according to CIO Insight, are directly related to a hazy vision for success: 1) no sense of urgency, 2) no vision for future uses, and 3) fuzzy business case.

 

Consider Disney’s MagicBand wristbands. It isn’t a big leap to imagine how Disney might be using the geolocation and transaction data from these bracelets to learn more about our preferences and activities in the park so they could better personalize our experience.

The MagicBand also immediately generates new expectations from customers that laggards in the industry have a hard time matching quickly.

 

 

At Edgewater, we worked with Spartan Chemical to create an innovative mobile application to drive customer loyalty. Spartan manufactures chemicals for cleaning and custodial services. They set themselves apart by working with us to build a mobile app that allows their customers to inspect, report on, and take pictures of the offices and warehouses they clean, so that Spartan can easily identify and help the customer order the correct cleaning products.

Once you’ve defined your vision and decided where you will start, you should assess your landscape and determine the personas you will target with this new capability, product, or service.

At Edgewater, we help you create a digital transformation roadmap to define and implement strategy based on best practices in your industry.

To learn more:

You can rescue a failing IT project

If you work in the IT world, you’ve probably seen projects that have come off the rails and require a major course correction to get back on track. In this blog post, I will highlight the warning signs of a failing project from a recent client, along with the process we follow to get critical initiatives back on track.

Danger ahead!

This client was replacing an important legacy system as part of a long-term modernization program. The project had been in danger from the start:

  • High IT team turnover rate led to new hires that didn’t know the business
  • No strong project management on the team
  • This project was selected to pilot an Agile development approach
  • No Product Owner to represent the needs of the business

After two years, only one major module had been delivered and the updated project timeline was three times longer than the original schedule. The alarming and unexpected extension of the timeline was the motivation our client needed to contact Edgewater for help.

Project Assessment

Our first step was to conduct an assessment of the project to better understand:

  • Major risks
  • Staffing and capabilities
  • The estimation approach
  • User involvement
  • Agile adoption

In this case, the findings clearly indicated a project at a high risk of failure.

Recommendations

Given the determination of “high risk”, Edgewater recommended some bold changes:

  • Establishing a realistic project schedule with achievable milestones
  • Hiring a full-time Product Owner to lead the requirements effort and build the backlog
  • Doubling the size of the IT development team to increase productivity and reduce the timeline
  • Using a blended team of full-time resources and consultants
  • Adding a full-time Project Manager/Scrum Master to lead the Agile development team, keep the project on schedule, and provide reporting to senior management

Initial results

After the first six months, the results are very promising:

  • The project timeline has been cut in half
  • The development team has increased productivity by over 50% and has delivered modules on schedule
  • The requirements backlog has doubled
  • The client IT team is learning best practices so they will be able to support and enhance the system on their own
  • The Project Manager is mentoring the team on Agile roles and responsibilities, and managing the development team

Our client is extremely happy with the productivity improvements, and the users are excited to work on this project.  There’s still a long way to go, but the project rescue has been a success.

To learn more, watch our video then contact kparks@edgewater.com.

Top 5 Warning Signs you are on the ERP Desert Highway

There are many wrong turns on the road to the Desert of ERP Disillusionment. Some teams go wrong right out of the gate. Here are the top five warning signs that your real destination is not the pinnacle of ERP success, but the dry parched sands of the desert.

1. Your steering committee is texting while driving. If your key decision makers are multi-tasking through every steering committee session, it’s easy for them to miss critical information they need to actually steer.

2. The distraction of backseat squabbling causes the PM to miss a turn. Political infighting and lack of alignment among key stakeholders can be as difficult to manage as any carful of kids on a family road trip AFTER you have taken away their favorite electronic toys.

3. The driver is looking in the rearview mirror instead of at the road ahead. While there are some lessons to be learned from your last ERP implementation (how long ago was that?), modern state-of-the-art systems require significant behavior changes in the way users interact with information in the system. If they are used to greenbar reports laid on their desks every morning, the gap may be too big to jump.

4. You read a guidebook about the wilderness once….  You can’t learn all your survival skills from a book.  In life threatening terrain, there is no substitute for having an experienced guide on the team.  If you haven’t put experienced change leadership into place before you bid your consultants goodbye, you will have neither the insight to recognize the warning signs, nor the skill to lead your people out of the desert.

5. You ran out of gas! You didn’t fill up at the last station because the ATM was out of cash, your credit card is maxed out, and you used your last dollars on Slurpees and Twizzlers for the kids. If you fritter away your project budget on non-value-added customizations like moving fields on forms and cosmetic report changes, you won’t have money left to address any business-critical requirements that come up late in the game.

(Hat tip to Mark Farrell for #5!)

Project Triage During Rapid Business Change Cycles

A few years ago, we ran a series of blog posts on project triage, diagnosis and rescue:

How often do you perform project triage?

Preparing for Project Rescue: Diagnosis

Restoring Projects to Peak Performance

Since then, much of our work has been with organizations that struggle to perform meaningful project interventions to align their project portfolio with sudden shifts in business strategy, or to support their underlying corporate culture as it shifts toward more rapid innovation, originality, adaptability, engagement, collaboration and efficacy.

In such fluid business environments, our original medical metaphor doesn’t fully apply; triage and diagnosis were performed from a perspective of project internals.  In today’s world, the old project success indicators can be very much out of sync with the business.  If IT projects, the project portfolio, and a PMO are not accountable in terms of their value to the business, it’s time to change the ways we think and talk about projects, and begin to define new KPIs for success.

  • First of all, let’s stop using the term scope creep.  To deliver business value, the project organization must be agile enough to rapidly address scope fluidity. Would it make more sense to measure how quickly a project team can replan/re-estimate a shift in scope?
  • Quality metrics may also need to fall by the wayside. Is the current release good enough to push into production with defects noted and expectations managed – think of the release as a minimum viable product, like lean startups do?
  • In rapidly changing businesses, it’s very difficult to plan out a 12-month milestone plan for IT projects. It makes more sense to define a backlog of objectives at the beginning of the planning phase, and perform rolling prioritization, with the complete expectation that the prioritization will change at multiple points during the coming year. In such an environment, how meaningful is it to judge project success against the old notion of “on time”?

In the context of all of this change, it is no longer reasonable to judge projects based on their internal conditions. The measures of project success in today’s world lie in the greater business context.

  • Has the project or project portfolio enabled the business to respond to threats and opportunities more rapidly?
  • Has it increased our innovation speed?
  • Even if the application is buggy, has it improved overall efficiency, enhanced the quality of goods and services, reduced operating costs, or improved the business’ relationship to its customers?

While these questions have answers that are less quantifiable, they are certainly more meaningful in today’s business context. How is your business evaluating project success these days?

Happy Holidays

With the holidays quickly approaching, we reflect on this time of appreciation. Edgewater would like to take this opportunity to thank you for reading our blog and following our thoughts. Whether you are a client, a partner, a team member, or a reader, we hope that you find peace and enjoyment during this holiday season.

May the holidays and the new year be healthy and happy for you and your family. We look forward to sharing with you all in the coming year. See you in 2013!