SharePoint 2010 Migration: Options & Planning

Many organizations running SharePoint 2003/2007 or another CMS are either actively considering or in the midst of upgrading to SharePoint 2010. In this blog post we will look at what is involved in upgrading to SharePoint 2010, the various options available for the upgrade, and the initial planning that needs to precede the migration.

There are two basic methods of upgrading/migrating from an older version of SharePoint to SharePoint 2010 that are provided by Microsoft: in-place upgrade and database attach upgrade. In addition, there are numerous third-party tools that can help you migrate content and upgrade to SharePoint 2010, not only from an older version of SharePoint but also from other CMSs. Each method has its own set of benefits depending on the objectives of the migration and the specifics of the environment. When selecting a migration path, some of the aspects you may need to consider include:

  • Ability to take the production system offline during the migration
  • Amount of change involved in content and its organization during migration
  • Number of customizations (web parts, themes, meta-data, workflows, etc.)
  • Amount of content being migrated
  • Need to upgrade hardware
  • Need to preserve server farm settings

It is much easier to migrate a clean and lean environment than one that is full of obsolete content, unused features, and broken customizations. Start by cleaning up your existing sites and checking for orphaned sites, lists, web parts, etc. Remove any content that is no longer in use, remove unused features, and ensure the features you do use are present and working. Once your existing SharePoint site is in tip-top shape, you are ready to plan your migration steps.

Before you put your migration/upgrade in motion you need to understand which migration aspects you can compromise on and which are hard constraints. For example:

  • Can you afford to put your environment in read-only mode for the duration of the upgrade?
  • Does the amount of content you have make it prohibitive to copy it over the network?
  • Do you have a lot of customization that you have to deal with?
  • Are you planning to reorganize or selectively migrate your content?

The answers to these kinds of questions will direct your choice of migration tools. Here is a checklist that will help you get organized.


Customizations can have a big impact on how quickly and smoothly your migration goes. Therefore it is important to identify and account for as many of them as possible. The PreUpgradeCheck tool (the stsadm -o preupgradecheck operation introduced in SharePoint 2007 SP2) can help, but here is a list to help you identify and uncover customizations that can add complexity to your migration efforts.

Hidden Impact of the iPad on Your Corporate Website?

Even though the iPad appears to be a device for Apple fans, gadget freaks, and tech-savvy consumers, it is already positioned to have a significant impact on the nature and shape of corporate websites. The adoption and growth of specialized content consumption devices such as smartphones, tablets, and netbooks can no longer simply be ignored. B2B and especially B2C companies need to ensure that their websites not only "function" on the iPad and similar devices but also provide a good user experience. Just as higher bandwidth and more powerful computers once changed static websites forever, the new content consumption devices are positioned to do the same again. The buzz around the iPad and its successful launch means that any future website planning and upgrades need to keep the new realities in mind.

Perhaps the most talked-about aspect of browsing on the iPad has been its lack of support for Adobe Flash. Apple CEO Steve Jobs has called Flash buggy, a CPU hog, and a security risk. Regardless of whether you agree with Jobs, recent trends in website design and development do point toward less Flash usage for various reasons. HTML5 is being touted as the replacement, and many key companies and websites have already adopted it. Therefore any decision to use Flash, or to continue supporting existing Flash applications, should be weighed carefully. Many major websites, such as WSJ, NPR, USA Today, and NYT, are doing away with Flash and taking a dual approach of providing iPad-optimized apps and rolling out new Flash-less websites.

Slide-out menus that require constant cursor presence are still common among corporate websites, and they make for an unfriendly experience with finger-based navigation. Touch interfaces are rapidly gaining in popularity and are no longer limited to smartphones and tablets; some newer laptops also support them. Another implication of touch interfaces is that links that are too small or too close together are difficult for users to tap, creating a frustrating experience. The ever-shrinking size of buttons and links needs to be reconsidered, and their placement must also be rethought. Touch is here to stay, and sites need to evolve to ensure they are "touch" friendly.

In the days immediately following Netscape's collapse, Internet Explorer became the king of browsers, and a corporate website needed only to be tested with two or three versions of Internet Explorer. Since then Firefox, Chrome, and Safari have gained significant market share, and any updates to a corporate website must once again be tested against a plethora of browsers. Safari has hitched a ride with the iPhone and the iPad and therefore requires special consideration in ensuring that the corporate website renders and functions well. However, browsers no longer hold a monopoly as the sole interface to internet content. The rise of proprietary apps providing customized experiences means the mighty browser now has a challenger. Major media sites like NYT, WSJ, and the BBC have all rolled out iPhone and iPad apps which provide a highly controlled experience outside of the browser.

The iPad's larger screen provides an excellent full-page browsing experience. However, a number of sites treat the iPad as a mobile client and serve up the mobile version of the website. In most cases the user can click on a full-site link and reach the standard website, but that is cumbersome. Websites now need to differentiate between smaller mobile clients, where a limited mobile version of the website makes sense, and newer, larger content consumption devices like the iPad, where nothing but the full site will do. Needless to say, the iPad and similar devices are creating a new reality, and corporate websites need to take notice or risk being considered old and obsolete.
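To make that routing decision concrete, here is a minimal sketch of the server-side logic that treats an iPad as a full-site client. The user-agent strings and the crude sniffing approach are illustrative only; a real site would lean on a maintained device-detection library.

    # Illustrative sketch: route iPads to the full site even though they
    # are "mobile" devices. User-agent sniffing is crude; shown only to
    # make the full-site vs. mobile-site decision concrete.
    def choose_site_version(user_agent: str) -> str:
        ua = user_agent.lower()
        if "ipad" in ua:
            return "full"    # large touch screen: serve the full site
        if "iphone" in ua or ("android" in ua and "mobile" in ua):
            return "mobile"  # small screen: the trimmed mobile site
        return "full"        # desktop browsers and everything else

    print(choose_site_version("Mozilla/5.0 (iPad; CPU OS like Mac OS X)"))  # full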

Ten Ways to Ensure Project Failure

Some time ago I read an article about the top ten ways to destroy the earth. Although it is a bit morbid to even think about such a topic, let alone compile a top ten list, it certainly is an interesting scientific problem. Blowing planet earth to bits is not as simple as it may seem. It takes a considerable amount of energy to blow up six sextillion tons of rock and metal. However, there are some exotic ways to get the job done. From creating a micro black hole on the surface of the earth, to building an anti-matter bomb with 2.5 trillion tons of anti-matter, to creating perfect (self-replicating) Von Neumann machines, they are all pretty futuristic and not part of our everyday experience. Some may ask, "Why even think about such an absurd subject?", but it does have a few practical applications. If nothing else, it helps us think about possible dangers to the only known planet capable of supporting life.

While blowing up the earth may for now be out of our grasp and may require giant leaps in technology, blowing up an IT project is quite easy. I can say that with authority, because I have seen many projects self-destruct right in front of my eyes, and at times I may have contributed to some of them. So here are the ten ways of blowing up an IT project:

  1. The Missing Matter – Requirements: Lack of business and functional requirements, or requirements lacking an appropriate level of detail.
  2. Progress Black Hole: Lack of mechanisms to measure progress, milestones, and deadlines.
  3. Caught in the Gravitational Pull of Technology: Focus on the technology itself rather than on achieving business objectives through technology.
  4. Supernova – Out of Resources: Unrealistic expectations and deadlines – trying to achieve too much in too little time and with too few resources.
  5. Consumed by Nebulous Clouds: Constantly changing requirements and feature creep. Inability to give the project and product a solid shape and direction. Lack of a proper change control process.
  6. Bombarded by Asteroids: Loss of focus and progress due to multi-tasking on unnecessary side projects and other distractions.
  7. Lost in Space: Lack of a well-defined project plan with an appropriate level of detail, milestones, and resource allocations.
  8. Too Many WIMPs (Weakly Interacting Massive Particles): Lack of interaction with business users, too few checkpoints, and insufficient business user involvement during the planning, build, and deployment phases.
  9. Journey to the Edge of the Universe: Attempting to run a project with bleeding-edge technology, an inexperienced project team, and poorly understood business objectives.
  10. Starless Solar System: Lack of a clear and convincing business case and of a mapping of how the project will help achieve the business objectives.

Calculating the ROI of an IT Project

ROI communication & calculation can make the difference between a project getting funded or not

In today's economic climate, justifying the costs and benefits of an IT initiative has become more important than ever. Often the fate of an IT project depends on the justification of its benefits and the recovery of its costs. Therefore calculating, presenting, and demonstrating the benefits of a project in an appropriate manner can make the difference between getting a green light and getting stuck in an endless review cycle.

Value statements can help bridge the communication gap

During my interactions with various IT departments I have noticed that IT staff value a project differently than the business sponsors or executive team. Calculating and communicating the value of an IT project puts IT staff in an uncomfortable and unfamiliar role, one which requires financial, sales, and technical skills. Quite often, even when IT staff try to focus on business benefits, they fail to align the benefits of a project with the business concerns in a manner that resonates with the executive team. The use of appropriate terms and the prioritization of business concerns are key to grabbing the attention of the business sponsors and the executive team. A value statement [i] can be a useful tool to summarize and contextualize the benefits of a project in almost all circumstances. Sometimes ROI is not clearly defined, is impossible to define, or is simply not that important to the stakeholders. Under such circumstances a value statement can be instrumental, even a must. Value statements help overcome resistance, bind stakeholders together, and focus the project on delivering real business value. To summarize, they help you see the forest for the trees, whereas ROI calculations help you count the trees.

A value statement can take many shapes and formats depending on the context of the project and the audience reviewing it. However, it is always a good idea to try to tie it back to the mission or value statement of the project. For example, consider the following sample value statement:

[Table: sample value statement]

The benefits need to be tied back to the capabilities by linking them to measures, preferably operational ones. Financial measures, although more accurate, typically lag operational measures.

Calculating ROI

When calculating the direct [ii] and indirect [iii] benefits of a project it is important to keep three aspects in mind:

  • The Rate of Return of the Investment
  • Capital Recovery Horizon
  • Variance Potential

The rate of return of the investment is not just returns exceeding the original investment plus the cost of capital; it should also include compensation for the risk of undertaking the project. For example, if a project returns 20% and the cost of capital is 18%, then the additional 2% may not be sufficient justification even for a "sure shot" of a project. As a very rough rule of thumb, an ROI of less than twice the cost of capital should be considered high-risk, while an ROI of four times the cost of capital or more is considered ideal.

The capital recovery horizon is the time a project will need to generate enough benefits to recover the original investment. The rapid pace at which technology, the business environment, trends, and preferences change (especially in the IT industry) poses a significant risk that future benefits will not be realized. Changing market conditions and new competition can create new, more lucrative opportunities as well as diminish the value of existing ones. Therefore it is highly desirable to recoup the original investment as quickly as possible to minimize exposure to sudden changes in market conditions. The ideal recovery time varies and depends significantly on the market conditions and maturity of a given vertical. For fast-moving verticals a recovery time of 1 to 2 years is ideal.

Variance potential portrays the risk associated with changes, or variances, in the calculations and estimates of a project's future benefits. Indirect benefits are hard to calculate accurately and are the most susceptible to errors and variances. If indirect benefits constitute a high percentage of the overall benefits of a project, then the variance potential for the project is high, and vice versa. Indirect benefits amounting to less than 50% of the overall benefits are ideal, whereas 90% or more indicates high risk to the project.
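These three screens are simple enough to express as a short script. The following is a minimal sketch in Python; the function name, inputs, and example figures are invented for illustration, and the thresholds merely restate the rules of thumb above.

    def screen_project(investment, annual_net_benefit, total_benefits,
                       indirect_benefits, cost_of_capital):
        """Apply the three rough screens described above (rules of thumb, not a model)."""
        findings = []

        # 1. Rate of return vs. cost of capital:
        #    less than 2x the cost of capital -> high risk; 4x or more -> ideal.
        roi = (total_benefits - investment) / investment
        if roi < 2 * cost_of_capital:
            findings.append("rate of return: high risk")
        elif roi >= 4 * cost_of_capital:
            findings.append("rate of return: ideal")
        else:
            findings.append("rate of return: acceptable")

        # 2. Capital recovery horizon: years to earn back the investment
        #    (1 to 2 years is ideal for fast-moving verticals).
        findings.append(f"payback horizon: {investment / annual_net_benefit:.1f} years")

        # 3. Variance potential: the share of benefits that are indirect
        #    (< 50% ideal, >= 90% high risk).
        share = indirect_benefits / total_benefits
        if share >= 0.9:
            findings.append("variance potential: high risk")
        elif share < 0.5:
            findings.append("variance potential: ideal")
        else:
            findings.append("variance potential: moderate")
        return findings

    # Example with invented figures: a $1M project expected to return $3M,
    # $1M of which is indirect, at an 18% cost of capital.
    print(screen_project(1_000_000, 1_500_000, 3_000_000, 1_000_000, 0.18))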

Calculating ROI with the appropriate level of rigor can be a daunting task. Coming up with a justification for a project is not a "one size fits all" exercise and does not always have to result in punching numbers and formulas into a spreadsheet. Depending on the type of project and the company, the right type of benefits calculation has to be tied to the appropriate level of rigor. Sometimes a clear and well-articulated value statement can be sufficient, while at other times you have to have hard numbers to back up your claims. A simplified ROI sheet that ties the benefits to the operational levers (operational capabilities) is presented below.

[Table: simplified ROI sheet]


[i] A value statement focuses more on the qualitative aspects than the quantitative ones; it simply narrates the benefits without tying them to the bottom line.

[ii] Direct benefits are those that are only one step removed from, or are a direct consequence of, the implementation of the project (e.g. hardware/software/staff consolidation, time savings, inventory reductions, increase in revenue, etc.).

[iii] Indirect benefits are secondary consequences of the change brought about by the project implementation, or are its unquantifiable side effects (e.g. improvement in morale and goodwill, more knowledgeable support staff, etc.).

Assessing an Acquisition’s IT Capabilities: “What’s in Your Portfolio”

Why do I need to think about assessing the IT capabilities of an acquisition?

So you just acquired a company as part of your growth, diversification, or some other strategy. The new company, along with its LOB (line of business) expertise, comes with an entire IT infrastructure that was thus far responsible for supporting only the acquired company's information needs. While a great deal of due diligence goes into understanding the viability of the business and its value, the same level of rigor is typically not applied to evaluating its IT infrastructure and support staff. For the two companies to work well together it is important to understand the capabilities of the two IT infrastructures and how best to integrate them, or not. If a detailed and careful plan is not put together to understand the capabilities and assets of the new acquisition, you risk inheriting a vulnerability that can spread throughout the larger organization, or stifling a capability that really should be promoted to the larger organization.

Why do you need an independent perspective?

Sometimes companies use their own internal staff to assess a target or recent acquisition. The problem with this approach is that these assessments can be tainted by hidden agendas, lack of impartiality, and departmental politics. Entrenched interests can also slant information one way or the other as it passes up and down the departmental hierarchies. I was once part of a software company that wanted to acquire another software company with similar technology. Our own research and development department was tasked with assessing the target's technology. Quite understandably, the leaders of the R&D group thought it was inferior to what had been developed in-house, even though that was far from the reality. The key decision makers involved in the deal, including the CEO and the board of directors, were getting conflicting accounts and did not know whose version of the truth to trust. This is where an independent perspective from external consultants can come in handy. They often face less resistance when digging around and are able to see beyond personal bias and the "ugly baby" syndrome. A fresh perspective can also see the forest for the trees, something people who work among the trees on a daily basis can miss. In another post-acquisition assessment I came across, management had been told that the acquired company's technology and custom-developed software were top-notch. Upon further investigation we discovered that most of the custom software was developed in a little-known RAD environment and that fewer than a handful of people in the company knew how to maintain it. While this creates tremendous job security for some, it creates a significant risk for the company. In yet another situation, credit card numbers and all other consumer-related information were stored in a database unencrypted! Stories like these are all too common and point to the need for an independent perspective.

Is it too late to do an assessment after the deal has been signed?

During my days as a consultant I primarily came across two types of assessments: pre-acquisition and post-acquisition. Pre-acquisition assessments are important where the IT infrastructure is a primary part of the value of the business being acquired (software companies, online businesses, etc.). The focus of a pre-acquisition IT due diligence assessment is primarily on ensuring that the IT assets are as good as they have been portrayed, that they are capable of supporting the business objectives associated with the acquisition, and that there are no hidden risks that will require significant expense to remedy after the buyer takes ownership. For example, can a wildly successful but local online service be introduced into a new geographic region with a different language, currency, and tax and privacy laws? Post-acquisition assessments are important when the prime value of the acquisition is derived from the LOB itself (e.g. selling insurance policies, financial management, etc.). The focus of this type of assessment is typically on ensuring that the IT infrastructure is solid enough to continue to support the business and that there are no vulnerabilities that can jeopardize the combined entity, on finding areas of excellence to propagate, on finding redundancies, and on figuring out an integration plan. It is always good to get an outside assessment done before the deal is inked; if that doesn't happen, it is still very important to at least get the post-acquisition assessment and planning done.

What kind of acquisitions can benefit from an assessment?

These days even small companies whose business does not directly intersect with information technology rely on some sort of back-office IT infrastructure to run their day-to-day operations. A back-office infrastructure may contain email servers, phone/fax servers, internet gateways, website servers, database servers, LOB applications, etc. A front-office infrastructure may contain client-facing applications, online portals, CRM applications, LOB applications, etc. As the number of servers and employees grows, proper management and sound practices become more important. If access to IT infrastructure such as LOB applications, databases, email, and the website is essential to the daily operations of your business, it is vital to ensure that a proper assessment of the potential risks is done and that the IT assets are managed properly.

What are some of the key aspects that should be examined during an assessment?

Start by creating an overall blueprint of the IT assets and how they interact with each other. You would be surprised how often such a fundamental document does not exist. Look at the hardware/software redundancy needed to provide the uptime the business requires. Determine what disaster recovery plans exist, when they were last tested, and what kinds of situations they can handle. Examine the security risks and ensure that the security practices match or exceed the level of protection warranted by the business. Does the infrastructure have the capacity to meet or exceed the demands of peak loads and growth in the business? Examine the hosting environment for security, redundant power, redundant internet, redundant cooling, proper fire suppression, etc. Ensure that hardware and software assets are not so old that they are no longer supported and can't be upgraded. Is the technology stack compatible with the umbrella company's technology stack? Are there any strange or esoteric practices or standards that could introduce risk? And never forget to identify the practices, technology, and people (centers of excellence) that can benefit the entire organization and should be propagated across the company.

Reviewing security policies and procedures is another key aspect of the assessment. The risks associated with a weak security posture are obvious and too numerous to describe here. You need to think not only about electronic and online security (firewalls, virus and spam filters, internet intrusion attacks, etc.) but also about physical security. Most companies tend to neglect one or the other, and sometimes both. In today's environment, physical security as well as data security should be a top priority for any IT assessment. The risks are high no matter what the business is, and include legal consequences and public embarrassment. A Fortune 500 company where I once had the privilege to work became a victim when half the office noticed that their computers were running slower than usual. Upon further examination it was discovered that each computer was missing half of its memory chips. Someone had simply walked in after hours, before the lock-down, removed the memory, and walked away. At another client site we discovered that the security policies and key passwords were neatly documented, but the passwords for all the accounts were exactly the same!

Depending on the industry you work in, you may also have to worry about compliance and regulatory issues. In the health care industry you have to worry about HIPAA compliance: all personal information and medical records have to be protected according to HIPAA's guidelines. All publicly traded companies have to comply with the Sarbanes-Oxley Act (SOX). Even though SOX never mentions the word software, the audit trails and record keeping it requires ensure a sizeable investment in IT infrastructure and the processes to manage it. Various other acts and standards, such as the Patriot Act, DOD 5015.2, SEC regulations, and ISO standards (9000, 15489), may apply based on the industry and business practices. The process can be even more confusing and difficult when acquiring companies in different countries or states where the local laws are not the same. All of this means that you must ensure that your new acquisition does not expose you to compliance issues you didn't have to worry about before.

How do we plan for the joint future?

Now that you have a good handle on what you just acquired, you need to plan how you are going to move forward. Think about cost-saving opportunities from consolidating sites, hardware, and other resources. Think about standardizing software, hardware, and operational practices. You will have to decide how you want to handle common branding and identification issues such as email domain names, the website, central call-in numbers, etc. You will need to examine what support contracts and license agreements exist and how they need to be modified as part of the larger organization. The integration with the umbrella organization needs to be phased in, and the timing needs to be planned carefully to minimize the impact on the business. A combined, successful, and seamless existence doesn't happen on its own; it needs to be planned and carefully executed. If your company is planning to grow through acquisitions, you may want to create a repeatable process for assessing and integrating new acquisitions based on your current experience.

If the business you are acquiring is being carved out of a larger parent company, you also need to plan a migration off of the services that the parent company provides during the transition period. There are further complications if you intend for your new acquisition to be a platform company to which you will add other newly acquired companies over time.

Fresh! Content is King

Up until a few years ago most companies were satisfied with creating websites that were largely static. A website designer would organize largely pre-existing content into a collection of content buckets, slick graphics, and Flash presentations, and a website developer would bring the website into existence. New content would be added only when the old content became obsolete or when new products or services were created. This model is essentially one step above the electronic-brochure websites of yesteryear, when companies simply copied their existing paper brochures to the web and called it a website.

In today's environment of social networking, blogs, and collaboration, static content is not only passé, it also prevents companies from deriving advantage from their internal and external user bases and communities of experts. Fresh and timely content helps drive new traffic to the website and is an effective marketing tool. Unfortunately, most companies do not realize the need for fresh and rapidly evolving content on their website and the role it can play in engaging their customers and prospects. Even companies whose products and services remain largely stable over time need to think about their websites differently. A website is not just a one-way medium for pushing static content outward; it is in fact one of the most cost-effective mechanisms for engaging customers and prospects and turning them into a long-term asset. If you believe that the nature of your business is such that you don't need to use your website to engage your customers and prospects, chances are you haven't fully explored the possibilities. It may take some effort to figure out creative and effective ways to capitalize on fresh and meaningful content and interactions with your customers and prospects, but the rewards are well worth it. From local doctors' offices to insurance companies to Fortune 500 companies, all can benefit from large, loyal, and engaged communities of customers and prospects.

However, your existing static website most likely can't support the type of content and interactions we just discussed. If your website infrastructure still relies on IT staff to update the content, chances are you won't be able to morph your website into a hub of fresh and dynamic content that attracts new and repeat visits. Business users and content creators must be able to update content easily and as frequently as needed.

Of course, you will want some sort of approval workflow and content publishing process to manage rapidly changing content (a minimal sketch of such a workflow follows the list below). Fortunately there is a category of software designed to do just that. Web content management systems (WCMs) such as Drupal, Joomla, Microsoft SharePoint, DotNetNuke, etc. are designed to give business users and content creators the ability to update content easily and frequently. In most cases, users manipulate content by logging into the administrative version of the website and updating it in a WYSIWYG environment. Content creation and updates can be brought under customized workflows and approval chains, which are quite important in a fast-moving environment. WCM systems also boast many other capabilities, such as:

  • Content Categorization
  • Document Management
  • Delegation
  • Audit Trails
  • Content Creator Grouping
  • Content Templates
  • Discussion Forums
  • Blogs
  • Reviews and Ratings
  • Etc.
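As promised above, here is a minimal sketch of the kind of approval workflow a WCM enforces. The states, actions, and roles are invented for illustration and do not correspond to any particular product's API.

    # Illustrative content approval workflow: draft -> in_review -> published.
    # Each transition is recorded, mirroring the audit trails WCMs keep.
    ALLOWED = {
        ("draft", "submit"): "in_review",
        ("in_review", "approve"): "published",
        ("in_review", "reject"): "draft",
        ("published", "retire"): "archived",
    }

    class ContentItem:
        def __init__(self, title):
            self.title = title
            self.state = "draft"
            self.history = []  # (state, action, user) tuples: the audit trail

        def transition(self, action, user):
            key = (self.state, action)
            if key not in ALLOWED:
                raise ValueError(f"cannot {action!r} from state {self.state!r}")
            self.history.append((self.state, action, user))
            self.state = ALLOWED[key]

    page = ContentItem("Spring product launch")
    page.transition("submit", user="author")
    page.transition("approve", user="editor")
    print(page.state)    # published
    print(page.history)  # who did what along the approval chain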

Discussion forums and blogs can be used to create vibrant user and expert communities that revolve around your products and services and continuously create new content that keeps customers and prospects coming back to your site. These tools not only provide a mechanism for external parties to contribute new content but also provide a mechanism for them to communicate directly with you about what is important to them. Insights gleaned from such content can be quite valuable in creating new products and services or improving the existing ones.

Now that we've talked about the virtues of fresh content and using your website as a two-way medium, you are probably wondering whether you can afford it. A little-known secret about good WCMs is how cost-effective they can be. Creating a custom website from scratch can be a very onerous and expensive proposition. However, most well-respected WCMs offer out-of-the-box templates and web components that make it much faster and cheaper to build a website if you take advantage of their off-the-shelf goodies. If you are considering investing in an upgrade of your website, and even if you are not (consider the cost of the lost opportunity), it would behoove you to look at the benefits of rebuilding it on a WCM system.

IT Cost Cutting and Revenue Enhancing Projects

In the current economic climate CIOs and IT managers are constantly pushed to "do more with less". However, blindly following this mantra can be a recipe for disaster. IT budgets are getting squeezed and there are fewer resources to go around, but literally trying to "do more with less" is the wrong approach. The "do more" framing implies that IT operations were not running efficiently and there was a lot of fat to trim; quite often that is simply not the case. It is not always possible to find a person or a piece of hardware sitting idle that can be cut from the budget without impacting something. However, in most IT departments there are still plenty of opportunities to save cost. Instead, the right slogan should be something along the lines of "work smarter" or "smart utilization of shrinking resources": not exactly catchy, but it conveys what is really needed.

When times are tough, IT departments tend to hunker down and act like hibernating bears: they reduce all activity (especially new projects) to a minimum and try to ride out the winter, not recognizing the opportunity that a recession brings. A more productive approach is to rethink your IT strategy, initiate new projects that enhance your competitive advantage, cut those that don't, and reinvigorate the IT department with better alignment to the business needs and a more efficient cost structure. The economic climate and the renewed focus on cost reduction provide the much-needed impetus to push through new initiatives that couldn't be done before. As corporate strategy guru Richard Rumelt says,

“There are only two paths to substantially higher performance, one is through continued new inventions and the other requires exploiting changes in your environment.”

Inventing something substantial and new is not always easy, or even possible, but as luck would have it the winds of change are blowing pretty hard these days, both in technology and in the business environment. Cloud computing has emerged as a disruptive technology and is changing the way applications are built and deployed. Virtualization is changing the way IT departments buy hardware and build data centers. There is a renewed focus on enterprise-wide information systems, and the emergence of new software and techniques has made business intelligence affordable and easy to deploy. These are all signs of major changes afoot in the IT industry. On the business side of the equation, the current economic climate is reshaping the landscape, and a new breed of winners and losers is sure to emerge. What is needed is the vision, strategy, and will to capitalize on these opportunities and turn them into competitive advantage. Recently a health care client of ours spent roughly $1 million on a BI and data strategy initiative and realized $5 million in savings in the first year due to increased operational efficiency.
 
Broadly speaking, IT initiatives can be evaluated along two dimensions: cost efficiency and competitive advantage. Cost efficiency measures a project's ability to lower the cost structure and help you run operations more efficiently. Projects along the competitive advantage dimension provide greater insight into your business and/or market trends and help you gain an edge on the competition. Quite often projects along this dimension rely on an early mover's advantage, which over time may turn into a "me too" as competitors jump aboard the same bandwagon. The life of such a competitive advantage can be extended by superior execution, but over time it will fade; think of the supply-chain automation that gave Dell its competitive advantage in its early years. Such projects should therefore be approached with a sense of urgency, as each passing day erodes the potential for higher profits. In this framework each project has a component of each dimension and can be plotted along both to help you prioritize the projects that can turn a recession into an opportunity for gaining a competitive edge. Here are six initiatives that can help you break the IT hibernation, lower your cost structure, and gain an edge on the competition:


Figure 1: Categorization of IT Projects 

Figure 2: Key Benefits
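One way to apply the two-dimension framework is to give each candidate project a rough score on each axis and rank the portfolio. The sketch below is purely illustrative: the project names echo initiatives mentioned in this post, and the scores are made up.

    # Illustrative prioritization: score each project 0-10 on the two
    # dimensions discussed above and rank by the combined score.
    projects = {
        "Server virtualization": {"cost_efficiency": 8, "competitive_edge": 2},
        "BI / data strategy":    {"cost_efficiency": 5, "competitive_edge": 8},
        "Cloud migration pilot": {"cost_efficiency": 6, "competitive_edge": 6},
    }

    ranked = sorted(projects.items(),
                    key=lambda item: sum(item[1].values()), reverse=True)
    for name, scores in ranked:
        print(f"{name:24s} cost efficiency={scores['cost_efficiency']} "
              f"competitive edge={scores['competitive_edge']}")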

In the current economic climate no project can go very far without an ROI justification, and calculating ROI for an IT project, especially one that does not directly produce revenue, can be notoriously hard. While calculating ROI for these projects is beyond the scope of this article, I hope to return to the issue soon with templates to help you get through the scrutiny of the CFO's office. For now I will leave you with the thought that ROI can be broken into three components:

  • A value statement
  • Hard ROI (direct ROI)
  • Soft ROI (indirect ROI)

Each one is progressively harder to calculate and requires an additional level of rigor and detail, but improves the accuracy of the calculation. I hope to discuss this subject in more detail in future blog entries.

Cloud Computing Trends: Thinking Ahead (Part 3)

In the first part of this series we discussed the definition of cloud computing and its various flavors. The second part focused on the offerings from three major players: Microsoft, Amazon, and Google. This third and final part discusses the issues and concerns related to the cloud, as well as possible future directions.

A company may someday decide to bring an application in-house due to data security or cost-related concerns. An ideal solution would allow the creation of a "private in-house cloud", just as some product/ASP companies offer the option of running a licensed version in-house or as a hosted service. A major rewrite of existing applications in order to run in a cloud is probably also a non-starter for most organizations. Monitoring and diagnosing applications in the cloud is another concern. Developers must be able to diagnose and debug in the cloud itself, not just in a simulation on a local desktop. Anyone who has spent enough time in the trenches coding and supporting complex applications knows that trying to diagnose complex intermittent problems in production by debugging on a simulated desktop environment is going to be an uphill battle, to say the least. A credible and sophisticated mechanism is needed to support complex applications running in the cloud. Data and meta-data ownership and security may also give companies dealing with sensitive information pause. The laws and the technology are still playing catch-up on some thorny issues around data collection, distribution rights, liability, etc.

If cloud computing is to truly fulfill its promise, the technology has to evolve and the major players have to ensure that a cloud can be treated like a commodity, allowing applications to move seamlessly between clouds without a major overhaul of the code. At least some of the major players in cloud computing today don't have a good history of allowing cross-vendor compatibility and are unlikely to jump on this bandwagon anytime soon. They will likely fight any effort or trend to commoditize cloud computing. However, based on the history of other platform paradigm shifts, they would be fighting against market forces and the desires of their clients. Similar situations in the past have created opportunities for other vendors and startups to offer solutions that bypass the entrenched interests and deliver what the market is looking for. It is not too hard to imagine an offering or a service that abstracts away the actual cloud running the application.

New design patterns and techniques may also emerge to make the transition from one cloud vendor to another easier. Not too long ago this role was played by design patterns like the DAO (data access object) and various OR (object-relational) layers, which reduced database vendor lock-in. A similar trend could evolve in cloud-based applications.
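As a minimal sketch of the same idea applied to cloud storage (the interface and class names here are invented for illustration), application code can be written against a small abstraction so that a provider-specific implementation can be swapped in later without touching the callers:

    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        """Minimal storage abstraction, analogous to a DAO for databases."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class InMemoryBlobStore(BlobStore):
        """Local stand-in; an S3- or Azure-backed class would implement
        the same interface and slot in without changing application code."""
        def __init__(self):
            self._blobs = {}
        def put(self, key, data):
            self._blobs[key] = data
        def get(self, key):
            return self._blobs[key]

    def save_report(store: BlobStore, report: bytes):
        store.put("reports/latest", report)  # callers never see the vendor

    store = InMemoryBlobStore()  # swap for a cloud-backed store later
    save_report(store, b"Q3 numbers")
    print(store.get("reports/latest"))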

None of the above is meant to condemn cloud computing as an immature technology not ready for prime time. Rather, the discussion is meant to arm the organization with the potential pitfalls of a leading-edge technology that can still be a great asset under the right circumstances. Even today's offerings fit the classic definition of a disruptive technology. Any organization that is creating a new application or overhauling an existing one must seriously consider architecting the application for the cloud. The benefits of instant scalability and "pay for only what you use" are too significant to ignore, especially for small to mid-size companies. Not having to tie up your cash in servers and infrastructure alone warrants serious consideration, and not having to worry about setting up a data center that can handle the load in case your application goes viral is liberating, to say the least. Any application with seasonal demand can also benefit greatly. If you are an online retailer, the load on your website probably surges to several times its average volume during the holiday shopping season. Buying servers to handle the holiday load, which then sit idle the rest of the year, ties up capital that could have been used to grow the business. Cloud computing at its current maturity may not make sense for every enterprise. However, you should get a solid understanding of what it has to offer and adjust the way you approach IT today. This will position you to capitalize more cost-effectively on what the cloud has to offer today and tomorrow.

Cloud Computing Trends: Thinking Ahead (Part 2)

In the first part of this series we discussed the definition of cloud computing and its various flavors. This second part of the three-part series focuses on the offerings from three major players: Microsoft, Amazon, and Google.

Amazon (AWS)

AWS is a collection of services offered by Amazon to implement Amazon's vision of IaaS and, to some extent, PaaS. Some of the main services of AWS include EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), and SQS (Simple Queue Service). Amazon differentiates itself from the other two players by providing PaaS and IaaS in an integrated package. You can simply start an instance of a Windows or Linux machine, selecting either a big machine with lots of memory and CPU or a small one.

Once you have an instance you can interact with it via remote services, install any software you want, and configure it. If you had a Java application that used Tomcat and MySQL running on an old server under someone's desk, you could simply start a machine instance of the desired size, install Tomcat and MySQL, and run your application from EC2 without changing any code. However, this flexibility comes at a price: if the needs of your application outgrow the largest single instance EC2 can provide, you will need to re-architect your application to distribute processing across multiple large instances.

Of course, the benefit of this approach is that it allows you to move your application into the cloud without rewriting code, as long as your needs don't outstrip a single instance. There are also no limitations in terms of the programming language or software that you use. Your application can be based on J2EE, .NET, Python, Ruby, or anything else that runs on Windows or Linux.
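To give a flavor of how little ceremony is involved, here is a minimal sketch using boto3, the current Python SDK for AWS (a successor to the client libraries available when this was written). The AMI ID is a placeholder, and credentials and permissions are assumed to come from your environment.

    import boto3

    # Illustrative sketch: launch a single small EC2 instance.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",   # placeholder: a stock Linux or Windows image
        InstanceType="t2.micro",  # pick a larger type for more CPU/memory
        MinCount=1,
        MaxCount=1,
    )
    print("Started", response["Instances"][0]["InstanceId"])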

Microsoft (Azure)

Microsoft calls Azure a cloud operating system (PaaS), and as such it abstracts away the regular OS we are familiar with. You write your application specifically to work with Azure, using its storage, queuing, and other services, and then simply publish it to run on Azure's compute fabric. The good thing is that you use the tools you are already familiar with to write the application (e.g. Visual Studio 2008).

After the application is written it can be tested locally and published to the Azure cloud only when you are satisfied. Once the application is running in Azure you cannot attach a debugger to it; you must rely on good old log messages for debugging and gathering state information. Azure supports .NET as the de facto framework and accepts assemblies compiled from any CLR-compliant language, as long as they comply with Azure's framework. The obvious drawback of this approach is that an existing application cannot simply be moved to the cloud without rewriting key components, and perhaps not without a complete redesign. However, the advantages are also obvious and significant: once you have written your application to run on Azure, you gain the instant ability to scale by simply reconfiguring the number of instances your application runs on. This can be very powerful if your application's usage is unpredictable or varies seasonally. Another issue with this approach is that it ties your application to the platform; you cannot simply take your application and run it in another cloud.

Google (App Engine)

Google App Engine is a PaaS platform that combines an instance service, a data service, and a queue service. Like Azure, it hides the OS and hardware from the user/application. But it requires the application to be written in Python in order to be hosted on Google's cloud. You have to write the application using the SDK provided by Google, and once again you can configure it to run multiple instances and automatically benefit from seamless scalability and redundancy. However, you not only have to rewrite your existing applications, you also have to commit to a specific language and platform, which may not match the requirements or the skill set of your organization.
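For a flavor of the programming model, here is the canonical minimal request handler in the webapp framework bundled with the App Engine Python SDK of this era; it is deployed and served through the SDK's tools rather than run directly.

    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            # App Engine routes the HTTP request here; no server setup needed.
            self.response.headers["Content-Type"] = "text/plain"
            self.response.out.write("Hello from App Engine!")

    application = webapp.WSGIApplication([("/", MainPage)], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == "__main__":
        main()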

Cloud Computing Trends: Thinking Ahead (Part 1)

The aim of this three-part series is to provide insight into the capabilities of cloud computing, some of the major vendors involved, and an assessment of their offerings. The series will help you decide whether cloud computing makes sense for your organization today and how it can help or hurt you. This first part focuses on defining cloud computing and its various flavors, the second part focuses on the offerings of some of the major players, and the third part discusses how cloud computing can be used today and its possible future directions.

Today cloud computing and its sub-categories do not have precise definitions. Different groups and companies define them in different and overlapping ways. While it is hard to find a single useful definition of the term "cloud computing", it is somewhat easier to dissect some of its better-known sub-categories, such as:

  • SaaS (software as a service)
  • PaaS (platform as a service)
  • IaaS (infrastructure as a service)
  • HaaS (hardware as a service)

Among these categories SaaS is the most well known, since it has been around for a while and enjoys a well-established reputation as a solid way of providing enterprise-quality business software and services. Well-known examples include SalesForce.com, Google Apps, SAP, etc. HaaS is an older term for IaaS and is typically considered synonymous with it. Compared to SaaS, PaaS and IaaS are relatively new, less understood, and less used today. In this series we will mostly focus on PaaS and IaaS as the up-and-coming forms of cloud computing for the enterprise.

The aim of IaaS is to abstract away the hardware (network, servers, etc.) and allow applications to run on virtual instances of servers without ever touching a piece of hardware. PaaS takes the abstraction further and eliminates the need to worry about the operating system and other foundational software. If the aim of virtualization is to make a single large computer appear as multiple small dedicated computers, one of the aims of PaaS is to make multiple computers appear as one and to make it simple to scale from a single server to many. PaaS aims to abstract away the complexity of the platform and allow your application to scale automatically as the load grows, without you worrying about adding more servers, disks, or bandwidth. PaaS presents significant benefits for companies that are poised for aggressive organic growth or growth by acquisition.

Cloud Computing: Abstraction Layers

So which category/abstraction level (IaaS, PaaS, SaaS) of the cloud is right for you? The answer depends on many factors, such as the kind of applications your organization runs (proprietary vs. commodity), the development stage of those applications (legacy vs. newly developed), the time and cost of deployment (immediate/low vs. long/high), scalability requirements (low vs. high), and tolerance for vendor lock-in (low vs. high). PaaS is highly suited to applications that inherently have seasonal or highly variable demand and thus require a high degree of scalability. However, PaaS may require a major rewrite or redesign of the application to fit the vendor's framework; as a result it may cost more and cause vendor lock-in. IaaS is great if your application's scalability needs are predictable and can be fully satisfied by a single instance. SaaS has been a tried and true way of getting access to software and services that follow industry standards. If you are looking for a good CRM, HR management, or leads management system, your best bet is to go with a SaaS vendor. The relative strengths and weaknesses of these categories are summarized in the following table.

 

        App Type                Scalability   Vendor    Development   Time & Cost
        (Prop. vs. Commodity)                 Lock-in   Stage         (of deployment)

IaaS    Proprietary             Low           Low       Late/Legacy   Low
PaaS    Proprietary             High          High      Early         High
SaaS    Commodity               High          High      NA            Low

Cloud Computing: Category Comparison
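To make the trade-offs easy to reason about (or script against), the table can also be encoded as data. The sketch below is purely illustrative; the ratings are the table's generalizations, not hard rules.

    # The category comparison above, encoded as data.
    CATEGORIES = {
        "IaaS": {"app_type": "proprietary", "scalability": "low",
                 "lock_in": "low",  "stage": "late/legacy", "cost": "low"},
        "PaaS": {"app_type": "proprietary", "scalability": "high",
                 "lock_in": "high", "stage": "early",       "cost": "high"},
        "SaaS": {"app_type": "commodity",   "scalability": "high",
                 "lock_in": "high", "stage": "n/a",         "cost": "low"},
    }

    def shortlist(app_type, needs_high_scalability):
        """Return the categories matching two of the table's dimensions."""
        return [name for name, t in CATEGORIES.items()
                if t["app_type"] == app_type
                and (t["scalability"] == "high") == needs_high_scalability]

    # A proprietary app with highly variable demand points at PaaS:
    print(shortlist("proprietary", needs_high_scalability=True))  # ['PaaS']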