Avoiding Website Launch Nightmares

Much has been written (examples here and here) about the horrendous launch of Cuil.com this week. Despite extensive PR work that succeeded in getting people to the site, the user experience was so bad that it ensured none would return: error messages, a notice that “too many people are currently searching,” and, despite Cuil’s claim of an index larger than Google’s, searches that could not find this humble blog, among other missing sites.


We’ve encountered many types of failures to launch and have tried to help others avoid them.

The most common is a site’s inability to handle traffic spikes. Sometimes a PR success can lead to an availability nightmare.


When the much-anticipated Britannica.com launched in 1999, demand was so high that the hardware could not support it, and the site had to shut down for about 10 days before re-launching. When Burundis.com, a Mexican greeting card site, launched, it handled everyday traffic well but did not anticipate that on Valentine’s Day traffic would spike tenfold and bring down the system.


What can be done to ensure a successful launch?

  1. Sizing. Estimating the potential traffic and translating it into bandwidth, CPU and memory utilization. The key to sizing is to correctly identify the resource utilization of a user session and how it scales. The best metric is concurrent sessions: a server’s ability to handle X concurrent sessions is the key to determining both individual server specs and the total number of servers required (see the sizing sketch after this list).
  2. Finding the bottleneck. What chokes first as the load increases? Do calls to the database start to queue up? Finding and addressing bottlenecks is the first method for optimizing performance.
  3. Software vs. hardware. It is always surprising how little time and thought is put into performance planning upfront on the software side. As hardware provides amazing power at a reasonable cost, many software developers have forgotten the art of coding for performance. It is very common that in the late stages of performance testing and optimization, a lot of code has to be rewritten to optimize caching, reduce calls to the database, and shift load into compiled objects.
  4. Planning for the spikes. Building hardware infrastructure that can absorb maximum spikes can be expensive. Now, with Amazon’s Elastic Compute Cloud (EC2) and other providers, a virtual server can be up and running in 7 minutes and you pay only for the resources you use.
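To make the sizing step concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (peak visits, session length, the 500-concurrent-sessions-per-server figure, the headroom factor) is a hypothetical placeholder, not a measurement from any real launch; plug in the results of your own load tests and traffic estimates.

```python
# Back-of-the-envelope capacity sizing based on concurrent sessions.
# All inputs are hypothetical placeholders for illustration only.

import math

def required_servers(peak_visits_per_hour: float,
                     avg_session_minutes: float,
                     sessions_per_server: int,
                     headroom: float = 0.7) -> int:
    """Estimate server count from expected peak traffic.

    peak_visits_per_hour: expected arrivals at the busiest hour
    avg_session_minutes:  average time a visitor stays on the site
    sessions_per_server:  concurrent sessions one server sustained in load tests
    headroom:             fraction of tested capacity you are willing to use
    """
    # Little's Law: concurrency = arrival rate * time in system
    concurrent_sessions = peak_visits_per_hour * (avg_session_minutes / 60.0)
    usable_capacity = sessions_per_server * headroom
    return math.ceil(concurrent_sessions / usable_capacity)

# Example: 60,000 peak visits/hour, 6-minute sessions,
# each server handled 500 concurrent sessions in testing.
print(required_servers(60_000, 6, 500))   # -> 18 servers
```

The same arithmetic fits in a spreadsheet; the point is to derive the server count from measured concurrent-session capacity rather than from guesswork.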


Other methods that are often recommended to mitigate the risk of a failed launch include:

  • Soft Launch or a limited release. Build traffic gradually rather than in a big bang. Monitor behavior and optimize before the Press Release goes out.
  • Monitoring and contingency plans. The amount of traffic to expect is often unknown. Monitoring tools that track actual performance and resource utilization, and that are watched constantly, are a must. A contingency plan has to be in place: from Amazon’s flexible computing to shutting down a resource-intensive feature, these plans need to keep you alive until adjustments in software or hardware can be made.


Finally, a few lessons from the trenches:

  • Be absolutely, positively sure that your development and staging environments are identical to your production environment. When a large printer manufacturer launched its revamped dealer extranet site a few years back, it suffered from horrible performance problems in production that did not show up in the performance tests done in staging. The system administrators swore on their lives that the environments were identical. IBM consulting had to be brought in to investigate and, after 2 weeks of testing, found that a faulty DB2 patch had been applied to the production database. Once the correct patch was applied, the problems disappeared.
  • Have software developers on site during the first 24 hours of the launch. It is easy to delegate monitoring to the NOC and send the developers, who may have worked 72 hours straight to get the launch done on time, home to sleep. A fresh and ready senior member of the development team has to be there to react if anything goes wrong.

 If you have any other tips to share, we’d love to hear them.

Putting Enterprise 2.0 Solutions in Order

A visitor walking the demo floor at the recent Enterprise 2.0 conference would find it hard to define what all these companies and product offerings have in common and what qualifies them to be categorized as Enterprise 2.0 solution providers.

While vendors of organizational social networks are a clear fit, what is common to advanced search vendors, enterprise mashup providers, Content Management vendors and video broadcasting solutions?

It seems that the common thread is a shared vision of the future enterprise as a social, open and collaborative place where data, content, knowledge and expertise are more easily available and where productivity results from enhanced collaboration and information sharing.

We can categorize the solution areas based on what they allow the user to do:

Finding information and data across silos and systems is still the holy grail of today’s information systems. Our information workers depend on access to information, but the ever-growing volume and complexity of the data make it harder and harder to find.

Most basic Enterprise 2.0 products cover the first four levels. They include basic search for content within the network and provide tools for creating new content, sharing, and collaborating, using technologies like discussions, wikis, blogs, RSS, public profiles, and groups.

Products in this category include: Microsoft SharePoint, Socialtext, Telligent, ThoughtFarmer and GroupSwim, among many others.

The fifth level offers a unique opportunity to leverage the interactions, conversations and links to add context and intelligence. By using Tags or by auto detection of terms and traffic patterns, some of the solutions can help create a layer of relationships and meaning on top of the content and link together disparate pieces of content, data and people for a complete picture.

Products in this category include: OpenWater, Connectbeam and Inquira.

The sixth level in our stack consists of tools that bring together data from disparate systems and sources and allow the user to connect them into custom applications and views on demand. By using open standards and web services, these tools, called mashups, attempt to simplify our search for information across multiple systems by letting us pull from them directly, without creating a separate data mart as the baseline for data and correlation.

Mashups are a hot topic for enterprise portals and enterprise web 2.0 initiatives. IBM, Oracle and Microsoft are releasing mashup tools, as are a few smaller vendors like JackBe and Serena.

At the final level, we would all like a toolset that will allow us to discover ideas, bring important knowledge to our attention, alert us in real time to activities and trends we should be watching, and feed us, in real time, information relevant to the tasks we are performing. There are no tools in this category yet, but check again in a few months…

The ROI and game-changing benefits of internal Enterprise web 2.0 implementations can go well beyond important outcomes like employee involvement, morale and collaboration. They will come from harnessing the intelligence, context and knowledge within the organization (data, content and people) and from outside sources to increase productivity, shorten development lifecycles, enhance relationships, make better decisions and inspire innovation.

Effective M&A Technology Plans

The best M&A technology plan isn’t really a technology plan at all. It’s a comprehensive business integration plan, owned and driven by a seasoned integration program manager with experience in systems integration, data conversion, and organizational change management. Failure to pull the technology, data, business process and organization integration workstreams into a single plan is a surefire recipe for integration delay or failure.

The benefits of a unified business integration plan include:

  • Easier recognition of schedule and resource conflicts that cut across the workstreams. 
  • More agility in replanning the entire body of work when milestones in any single area are missed.
  • Better insight into interdependencies in the plan.
  • More effective use of team time, since there is a single status meeting to cover all workstreams.

A key to success with the unified plan is to write this master plan at the correct level of detail. The unified plan should not contain detailed tasks and action items, but it needs to provide enough detail for meaningful tracking. Supporting tools, such as detailed project plans and Excel tracking workbooks, can and should be used within a particular working group to manage all the moving parts.

How to profit from the Long Tail: catering to savvy consumers

A new article by Anita Elberse in the Harvard Business Review challenges some of Chris Anderson’s conclusions regarding the importance of investing in the long tail (for a short description of the long tail theory, please see here).

The article titled “Should you invest in the long tail?” is based on research into buying patterns of digital media like movie rentals and song downloads.

While Mr. Anderson emphasizes that the long tail accounts for approximately 25% of total sales and encourages companies to spend resources building the long tail and directing users there, Elberse cautions against it. She shows that the cut-off point happens early, that the tail becomes very narrow very quickly, and that the hits at the head are still the main drivers of the business. In her conclusions she recommends investing only minimally in the long tail, as it contributes little to the bottom line.

One of the more interesting insights in the article is the observation that light users of these services consume almost exclusively hits, while more savvy consumers venture into the long tail but are not always happy with what they find.

In short:

The bars of the chart show that the higher the decile, of course, the more customers rent titles within it. Note that the average number of titles shipped is much higher for customers of titles in the lowest decile than for customers of titles in the highest. Heavy users are more likely to venture into the long tail, but they choose a mix of hit and obscure products.

It seems that more attention should be given to the different behaviors of these user segments, both in marketing and in the tools that support them.

  • First-time or light users should be directed to the hits table, where more than 90% of the time they’ll find what they want.
  • Heavy users should be taken to the hits area but with recommendations and more options to explore.

Does the key to turning a user into a heavy user lie in the ability to introduce them to a broader set of options, expanding their horizons and making them a more profitable consumer?

Neither author addresses this question but in many cases, this is the right thing to do for long term audience cultivation and the right strategy for profiting from the long tail.

If movie buffs are the best customers of a movie rental business, invest in making movie buffs hip and in expanding the cinematic understanding of hit buyers. These passionate buyers have a disproportionate influence on the web, and investing in tools for finding gems in the long tail, along with the context and support of users with similar tastes, can help make the long tail profitable. By pushing consumers up the savvy curve, they will increase their purchases and venture down the long tail, making it a very worthwhile investment.

If, on the other hand, you are a producer of long tail products, focus your marketing efforts on the savvy consumers and on gaining exposure to them. It may be that the unexplained bump in the 6th decile is where the hits of the niche market reside.

In a different take on the long tail, John Hyde of LeftClick has a great example of the long tail as it shows up in SEO terms.

It shows that as search terms become more specific, they reach a smaller audience. No surprises there, but the opportunity, as in the discussion above, is for providers of specialty solutions to make sure they tag and use the specialty terms that savvy searchers will be using.

Remember, the savvy consumer is your best customer and they are the ones who appreciate and consume the long tail.

Will value creation be sustainable?

Erin Griffiths at PEHub raised an interesting question yesterday. After citing statistics on PE value creation, she asked:

“Is the value creation permanent and sustainable? Did the company wilt without the PE firm’s pressure, best practices, and drive for improvement? Was the company overburdened with leftover LBO debt? The answers to these questions are important in addressing the general public’s negative image of private equity. Without taking a long term look, it’s like a High School that says, “100% of our graduates go to Harvard,” but doesn’t say whether any of those students made it out of Cambridge with diplomas.”

The question is important whether the seller is a PE firm, or a public or private company. Clearly, the answer varies with the management style of the seller, and the buyer must look for clues during due diligence. We have two tips for buyers who want to assess value creation sustainability:

1. Know your seller. Management styles vary considerably, even within a particular PE firm. Look for clues in the post-sale performance of similar portfolio assets or previously-sold business units.

2. Watch for warning signs. On the IT side, aging infrastructure, historic CapEx underspending, inadequate network performance, and low employee retention rates are among the many clues that value creation via cost-cutting initiatives may have been too aggressive to sustain going forward.

IT Due Diligence for Distressed Acquisitions

For many deals, IT Due Diligence resembles a home inspection. The goals are to identify and mitigate risks prior to closing and to develop cost estimates for addressing them. The IT Due Diligence team looks to see whether disaster recovery plans are in order, whether the company is in compliance with software licenses, and whether internal controls exist. With a distressed acquisition, there are larger risks that must be addressed. Acquirers don’t want to be saddled with an IT organization that dooms the corporate turnaround before it starts. Before moving forward, the deal team needs answers to questions such as:

  • Where do opportunities exist to streamline, consolidate, and optimize IT operations?
  • Which IT expenditures are aligned and which are not aligned with business objectives?
  • Which pending projects and expenditures are of questionable value?
  • What’s the order of priority of pending projects and expenditures?
  • How do IT costs benchmark against industry standards, and how can they be driven below those benchmarks to spending levels more appropriate for the turnaround period?
  • Are investors paying too much for IT assets?
  • Are non-tangibles (like data repositories) properly valued — can they be monetized?
  • Are service level agreements and contracts with vendors adequate and enforced?

After the deal closes, there will be new opportunities to improve the effectiveness and alignment of IT operations — improvements any future buyer will likely view as table stakes in evaluating corporate value. One example is management dashboards for better performance visibility and fact-based decision-making. Just the sheer presence of such tools says much about the quality of the management. But what they enable management to do has an even greater bearing on corporate value — monitoring key performance indicators that show how well management’s turnaround initiatives are actually succeeding day-to-day. Moving forward without such tools puts the turnaround at risk, because there are no tools in place for fact-based decision making within short timeframes.

Web Analytics: Resolving Visit Duration Discrepancies

This week, on one of our Web Analytics projects, we encountered a discrepancy in the Avg. Visit Duration calculation between a set of dashboard reports and a set of ad hoc reports. We did some testing and research and discovered that the issue was actually a direct reflection of the fact that there are limited industry standards in web analytics. Visit duration is generally defined as the amount of time spent on the web site. It is measured by calculating the difference between the first time stamp in the visit and the last time stamp in the visit.

One of the noticeable issues with this calculation is that the last time stamp of the visit occurs when the user starts viewing the last page of their visit, not when they leave it. The user could continue to dwell on that page, but the dwell time will not be counted as part of the overall duration, because there is no way to determine how much time they spent: they send no additional requests back to the server.

This flaw is then exacerbated by the case of a single page view visit. When a visit includes a single page view (a bounce, in Google Analytics terms), the result is a visit with duration = 0 because it contains only a single page view with a single time stamp. Many web analytics end users may consider this a bug, but it is a limitation of log data.
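As a rough illustration of the mechanics described above, here is a small Python sketch of how a log-based tool would derive visit duration from page-view time stamps. The visits below are invented for the example; a single-page visit has only one time stamp, so the calculation can only come out to zero.

```python
from datetime import datetime

def visit_duration_seconds(page_view_timestamps):
    """Duration = last page-view time stamp minus the first.

    Time spent on the final page is never captured, and a
    single-page visit yields 0 because there is only one time stamp.
    """
    ts = sorted(page_view_timestamps)
    return (ts[-1] - ts[0]).total_seconds()

# Hypothetical visits (not real log data)
multi_page = [datetime(2008, 8, 1, 10, 0, 0),
              datetime(2008, 8, 1, 10, 3, 20),
              datetime(2008, 8, 1, 10, 7, 5)]
bounce = [datetime(2008, 8, 1, 11, 0, 0)]

print(visit_duration_seconds(multi_page))  # 425.0 seconds
print(visit_duration_seconds(bounce))      # 0.0 for the bounce
```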

But, is duration = 0 really true? Isn’t it more like duration = unknown?

And then, how do we calculate Avg. Visit Duration? After some research and testing, we determined that the discrepancy was due to the fact that the formula for Avg. Visit Duration in the dashboard was:

  •     Total Time Spent/(Visits – Single Page Visits)

In other words, all of the visits with an “unknown” duration had been removed. Not a bad idea, but it needed to be declared in the documentation. As it stands, this formula violates the definition of “average”.

But, in the ad hoc reporting sections of the product, the formula for Avg. Visit Duration was:

  •     Total Time Spent/Visits

The Web Analytics Association has released a standard definition of visit duration, and it includes a note that visits with a single page have a duration that cannot be calculated. But, the standard does not indicate how those visits should be handled in aggregate calculations. Therefore, it is still up to the software vendors, and in this case, we see both calculations in the same product!
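To see how far apart the two formulas can land, here is a small sketch using invented visit durations (a 0 marks a single-page visit whose duration is really unknown):

```python
# Hypothetical visit durations in seconds; 0 marks a single-page visit.
durations = [0, 0, 0, 0, 120, 300, 45, 600, 0, 90]

total_time = sum(durations)                               # 1155 seconds
visits = len(durations)                                   # 10 visits
single_page_visits = sum(1 for d in durations if d == 0)  # 5 bounces

# Dashboard formula: exclude single-page visits from the denominator
avg_excl = total_time / (visits - single_page_visits)

# Ad hoc formula: divide by all visits
avg_incl = total_time / visits

print(f"Avg. excluding single-page visits: {avg_excl:.1f} s")  # 231.0 s
print(f"Avg. over all visits:              {avg_incl:.1f} s")  # 115.5 s
```

With half of the visits being bounces, the reported “average” doubles or halves depending on which denominator the vendor chose.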

We think assigning a value to an unknown is a bit deceptive; it masks the unknown. It would be preferable to make the volume of single page visits visible and then report the Avg. Visit Duration of the remaining visits. If reports called attention to the single page visits, there would be more questions about their business value and how to improve it.