Data Darwinism – Capabilities that provide a competitive advantage

In my previous post, I introduced the concept of Data Darwinism, which states that for a company to become, and remain, the ‘king of the jungle’, it needs the ability to continually innovate. Let’s be clear, though: innovation must be aligned with the strategic goals and objectives of the company. The landscape is littered with examples of innovative ideas that didn’t have a market.

So that raises the question, “What are the behaviors and characteristics of companies that are at the top of the food chain?” The answer can go in many different directions. With respect to Data Darwinism, the following hierarchy illustrates the categories of capabilities an organization needs to demonstrate to truly become a dominant force.

Foundational

An organization’s impulse will be to jump immediately to implementing the capabilities it thinks will put it at the top of the pyramid. While this is possible to a certain extent, certain foundational capabilities must be put in place to have a sustainable model. Examples of capabilities at this level include data integration, data standardization, data quality, and basic reporting.

Without clean, integrated, accurate data that is aligned with the intended business goals, the ability to implement the more advanced capabilities is severely limited. This does not mean that every foundational capability must be implemented before moving on to the next level; quite the opposite, actually. You must balance the need for the foundational components against the return that the more advanced capabilities will enable.
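To make the foundational tier a little more concrete, here is a minimal sketch of what basic integration, standardization, and quality checks might look like. The file names, column names, and checks are illustrative assumptions, not anything prescribed in the post.

```python
# Minimal sketch of foundational capabilities: standardize two customer
# sources, integrate them, and report basic quality issues.
# File and column names are assumptions for illustration.
import pandas as pd

crm = pd.read_csv("crm_customers.csv")          # assumed: customer_id, name, state
billing = pd.read_csv("billing_customers.csv")  # assumed: customer_id, name, state

# Standardization: normalize a few fields before integrating the sources.
for df in (crm, billing):
    df["state"] = df["state"].str.strip().str.upper()
    df["name"] = df["name"].str.strip().str.title()

# Integration: one record per customer id, flagging which system knows about it.
customers = crm.merge(billing, on="customer_id", how="outer",
                      suffixes=("_crm", "_billing"), indicator=True)

# Basic quality reporting: duplicates, missing keys, records known to only one system.
print("duplicate ids in CRM:", crm["customer_id"].duplicated().sum())
print("missing ids in CRM:  ", crm["customer_id"].isna().sum())
print("only in one system:  ", (customers["_merge"] != "both").sum())
```

Even checks this simple surface the duplicates, gaps, and mismatches that quietly undermine the more advanced capabilities further up the hierarchy.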

Transitional

Transitional capabilities are those that allow an organization to move from siloed, isolated, often duplicative efforts to a more ‘centralized’ platform on which to leverage its data. Capabilities at this level of the hierarchy start to migrate toward an enterprise view of data and include such things as a more complete, integrated data set, increased collaboration, basic analytics, and ‘coordinated governance’.

Again, you don’t need to fully instantiate the capabilities at this level before building capabilities at the next level.   It continues to be a balancing act.

Transformational

Transformational capabilities are those that allow a company to start truly differentiating itself from the competition. They don’t fully deliver the innovative capabilities that set a company head and shoulders above the rest, but rather set the stage for them. This stage can be challenging for organizations, as it can require a significant change in mindset compared with the way they currently conduct their operations. Capabilities at this level of the hierarchy include more advanced analytical capabilities (such as true data mining), targeted access to data for users, and ‘managed governance’.
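As a rough illustration of the ‘true data mining’ capability mentioned above, the sketch below segments customers by behavior rather than simply reporting on them. The features, values, and cluster count are assumptions made for the example.

```python
# Hedged sketch of behavioral segmentation, a common data-mining exercise.
# Features (annual spend, order count, days since last order) are illustrative.
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [1200, 14, 20],
    [300,   2, 180],
    [2500, 30, 5],
    [450,   3, 150],
    [1800, 22, 12],
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(segments.labels_)  # the behavioral segment each customer falls into
```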

Innovative

Innovative capabilities are those that truly set a company apart from its competitors. They allow for innovative product offerings, unique ways of handling the customer experience, and new ways of conducting business operations. Amazon is a great example: its ability to customize the user experience and offer ‘recommendations’ based on a wealth of buying-trend data has set it apart from most other online retailers. Capabilities at this level of the hierarchy include predictive analytics, enterprise governance, and user self-service access to data.
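As a loose illustration of the recommendation idea, the sketch below builds “customers also bought” suggestions from simple co-purchase counts. This is not Amazon’s actual method; the products and purchase data are invented for the example.

```python
# Hedged sketch: recommend products that are most often bought alongside a
# given product, based on illustrative purchase history.
from collections import defaultdict
from itertools import combinations

purchases = {
    "alice": {"camera", "tripod", "sd_card"},
    "bob":   {"camera", "sd_card", "camera_bag"},
    "carol": {"tripod", "camera_bag"},
}

# Count how often each pair of products appears in the same customer's history.
co_bought = defaultdict(int)
for basket in purchases.values():
    for a, b in combinations(sorted(basket), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def recommend(product, top_n=3):
    """Suggest the products most frequently bought alongside the given one."""
    scores = {b: n for (a, b), n in co_bought.items() if a == product}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("camera"))  # e.g. ['sd_card', 'tripod', 'camera_bag']
```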

The bottom line is that moving up the hierarchy requires vision, discipline and a pragmatic approach.   The journey is not always an easy one, but the rewards more than justify the effort.

Check back for the next installment of this series, “Data Darwinism – Evolving Your Data Environment.”

Cutting costs should not mean cutting revenue.


Image courtesy of BusinessWeek 9/25/08: "AmerisourceBergen's Scrimp-and-Save Dave"

The recent financial panic has focused a lot of attention on cutting costs. From frivolities like pens at customer service counters to headcount, organizations are slowing spending. Bad times force management to review every expense, and in times like these, to obsess over them. The financial picture, however, has two sides: expense and revenue.

A side effect of cost cutting can be stunted revenue, over both the short and long term. It is easier to evaluate costs than to uncover revenue opportunities, such as determining which offerings are truly profitable and adapting your strategies to maximize sales. Just as difficult to quantify are the true losses from unprofitable transactions, and the competitive strategies that could negatively impact your competition.

The answers to many of these questions can be unearthed from data scattered around an organization, from grokking customers, and from knowledge shared instantly between disciplines. For example, by combining:

  • customer survey data;
  • external observations;
  • clues left on web visits;
  • and other correspondence within the corporation;

…an organization can uncover unmet needs to satisfy before the competition, and at reduced investment cost.
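As a hedged sketch of how such a combination might work in practice, the example below joins survey feedback with web-visit clues and flags requests that nothing in the current catalog satisfies. The file names, columns, and the “unmet need” heuristic are assumptions for illustration only.

```python
# Hedged sketch: combine survey data and web-visit clues to surface unmet needs.
# File and column names are assumptions, not from the post.
import pandas as pd

surveys = pd.read_csv("customer_surveys.csv")  # assumed: customer_id, requested_feature
web = pd.read_csv("web_visits.csv")            # assumed: customer_id, searched_term
catalog = set(pd.read_csv("catalog.csv")["product"])  # what is offered today

# Tie survey feedback to observed browsing behavior for the same customer.
signals = surveys.merge(web, on="customer_id", how="inner")

# Crude "unmet need" signal: things customers ask for or search for that do
# not match anything currently offered.
asked_for = pd.concat([signals["requested_feature"], signals["searched_term"]])
unmet = asked_for[~asked_for.isin(catalog)].value_counts().head(10)
print(unmet)
```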

When external factors, like a gloomy job outlook, cause customers to change behavior, it is time to use all the information at your disposal. Those prospects’ changing preferences for your offerings can provide golden intelligence about the competition or about unmet needs.

Pumping out information like this is the heart of business intelligence. Marketing and Sales can uncover the opportunity; however, it is up to the enterprise to determine how to execute a timely offering. Finance, human capital planning, and operations work in concert to develop the strategy, which requires forecasting data, operational statistics, and capacity-planning data to line up.

A good strategist views all angles, not just cost reduction.

Why Analytics Projects Fail

During a recent informal forum (whose members shall remain nameless, to protect my sorry existence a few more years), analytics projects came up as a topic. The question was a simple one: all of the industry analysts and surveys said analytics products and projects would be hot and would soak up the bulk of the meager discretionary funds afforded a CIO by his grateful company. If true, why were things so quiet? Why no “thundering” successes?

My answer was to put forward the “typical” project plan of a hypothetical predictive analytics project as a straw man to explore the topic:

  • First, spend $50K to $100K on product selection.
  • Second, hire a contractor skilled in the selected product and tell him you want a forecasting model for revenue and cost.
  • The contractor says, “Fine, I’ll set up default questions. By the way, where is the data?”
  • The contractor is pointed to the users. He successively moves down the organization until he passes through the hands-on users actually driving the applications and reporting (ultimately fingering IT as the source of all data). On the way, the contractor finds a fair amount of the data he needs in Excel spreadsheets and Access databases on users’ PCs (at this point a CFO in the group hails me as Nostradamus, because that is where his data resides).
  • IT puts together some extracts containing the remaining required data, which seem to meet the needs the contractor described (as far as they can tell; then IT hits the Staples Easy Button: got to get back to keeping the lights on and the mainline applications running!).
  • The contractor puts the extracts into the analytics product, does some back-testing with whatever data he has, makes some neat graphics and charts, and declares victory.
  • Senior management is thrilled: the application is quite cool and predicts last month spot on. Next month even looks close to the current Excel spreadsheet forecast.
  • During the ensuing quarter, the cool charts and graphs look stranger and stranger until the model flames out with bizarre error messages.
  • The conclusion is drawn that the technology is obviously not ready for prime time and that lazy CIO should have warned us. It’s his problem and he should fix it; isn’t that why we keep him around?

At this point there are a number of shaking heads and muffled chuckles; we have seen this passion play before.  The problem is not any product’s fault or really any individual’s fault (it is that evil nobody again, the bane of my life).  The problem lies in the project approach.

So what would a better approach be?  The following straw man ensued from the discussion:

  • First, in this case, skip the product selection. There are only two leading commercial products for predictive analytic modeling (SAS, SPSS). Flip a coin (or, if you have a three-headed coin, look at an open-source solution, R or ESS); maybe it’s already on your shelf, so blow the dust off. Better yet, would a standard planning and budgeting package fit (Oracle/Hyperion)? The next step should give us that answer anyway, so there is no need to rush to buy; vendors are always ready to sell you something (especially at month/quarter end: my, that big a discount!).
  • Use the money saved for a strategic look at the questions that will be asked of the model: What are the key performance indicators for the industry?  Are there any internal benchmarks, industry benchmarks or measures?  Will any external data be needed to ensure optimal (correct?) answers to the projected questions?
  • Use the money saved to take this information and do some data analysis (much like dumpster diving). The key is to find the correct data in a format that is properly governed and updated (no Excel or Access need apply); what matters is the accurate sustainability of all data inputs. Remember our friend GIGO (I feel 20 years old all over again!). This should sound very much like a standard Data Quality and Governance project (a boring but necessary evil to prevent future embarrassment of the guilty).
  • Once all of the data has been dropped into a cozy data mart and the supporting extracts are targeted there, set up production jobs to keep everything fresh.
  • This is also a great time to give that contractor or consultant the questions and analysis done earlier, so they will be at hand alongside a companion, sustainable data mart. Now the iterations begin: computation, aggregation, correlation, derivation, deviation, visualization (Oh My!). The controlled environment holds everybody’s feet to the fire and provides an excellent history with which to tune the model (a minimal back-testing sketch follows this list).
  • A reasonable model should result. Enjoy!
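As a minimal sketch of the back-testing iteration referenced in the list above, the example below pulls monthly revenue from a governed data mart, fits a deliberately simple trend-plus-seasonality model, and scores it against a held-out quarter. The database, table, and column names are assumptions; any of the packages mentioned earlier could fill this role.

```python
# Hedged sketch: fit a simple forecasting model on data-mart history and
# back-test it on the most recent three months. Names are illustrative.
import sqlite3
import numpy as np
import pandas as pd

con = sqlite3.connect("datamart.db")
history = pd.read_sql("SELECT month, revenue FROM monthly_revenue ORDER BY month", con)

# Build a trend plus yearly-seasonality design matrix over the full history.
t = np.arange(len(history))
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])

# Hold out the last three months so the model is judged on data it has not seen.
train_y = history["revenue"].to_numpy()[:-3]
coef, *_ = np.linalg.lstsq(X[:-3], train_y, rcond=None)
forecast = X[-3:] @ coef

# Back-test: compare the forecast with what actually happened.
actual = history["revenue"].to_numpy()[-3:]
error = np.abs(forecast - actual) / actual
print("mean absolute % error over the hold-out:", round(100 * error.mean(), 1))
```

The point is not the particular model; it is that a governed, refreshed data mart gives the iteration loop a stable history to test against.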

No approach is perfect, and all have their risks, but this one has a better probability of success than most.