Best Practice: Cloud Computing

Red sky at morning, sailor take warning.

Here’s a forecast: clouds are rolling in. Architecting for cloud computing will, very soon, become a conscious best practice.

There are lots of handy objections to Cloud Computing: regulatory compliance, geographic containment requirements, taxes, liability, vendor lock-in, and lack of standards. Many are brushing off cloud technologies as a result, and maybe rightly so… for about another minute, anyway.

Last year, I was involved in a client’s effort to re-provision an application from an in-house infrastructure to a SaaS vendor. All told, the effort was risky and enormous. The administration of it took a year. It took a team of talented engineers from several different companies over six months to implement the transfer. When it was done, everyone breathed a sigh of relief.

The amazing part was that it wasn't about changing applications. It was just changing who hosted the application. Simply put, no one had the foresight to architect for a transition of this nature, and so the ROI was heavily diluted.

Market fluctuations, refocused specialization, business units changing hands, economic right-sizing, disaster recovery: there are many reasons agile infrastructures can be useful. Cloud computing technology is evolving quickly and has the very real potential to offer agility at a dramatically lower cost, if you're prepared to leverage it. You don't have to go into the cloud to see what you might gain from it. The important part is preparing for it so you can use it when it makes sense for you. And, you could even go green at the same time.

I won’t try to predict what your organization has to gain by architecting around cloud technology. It’s more about what your organization is at risk of losing if you don’t.

Cloud Computing Trends: Thinking Ahead (Part 2)

In the first part of this series we discussed the definition of cloud computing and its various flavors. This second part of the three-part series focuses on the offerings from three major players: Amazon, Microsoft, and Google.

Amazon (AWS)

AWS is a collection of services offered by Amazon to implement Amazon's vision of IaaS and, to some extent, PaaS. Some of the main services of AWS include EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), and SQS (Simple Queue Service). Amazon differentiates itself from the other two players by providing IaaS and PaaS in an integrated package. You can simply start an instance of a Windows machine or a Linux machine, and select either a big machine with lots of memory and CPU or a small one.

Once you have an instance you can interact with it via remote services, install any software you want, and configure it. If you had a Java application that used Tomcat and MySQL running on an old server under someone's desk, you could simply start a machine instance of the desired size, install Tomcat and MySQL, and start running your application from EC2 without changing any code. However, this flexibility comes at a price: if the needs of your application grew past the single largest instance EC2 can provide, you would need to re-architect your application to distribute processing across multiple large instances.
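To make the sizing decision concrete, here is a small, purely illustrative Python sketch. The instance names and specs match the classic EC2 types of that era, but the helper function itself is hypothetical, not part of AWS:

```python
# Hypothetical helper: pick the smallest classic EC2 instance type that
# satisfies an application's memory and CPU requirements.
# Specs are the original EC2 types (memory in GB, virtual cores).
INSTANCE_TYPES = [
    ("m1.small",  1.7,  1),
    ("m1.large",  7.5,  2),
    ("m1.xlarge", 15.0, 4),
]

def choose_instance(mem_gb, cores):
    """Return the smallest type that fits, or None if nothing does."""
    for name, mem, cpus in INSTANCE_TYPES:
        if mem >= mem_gb and cpus >= cores:
            return name
    return None  # no single instance fits; re-architect across several
```

When `choose_instance` returns None, you have hit exactly the wall described above: the single largest instance is not enough, and the application has to be redistributed.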

Of course the benefit of this approach is that it allows you to move your application into the cloud without rewriting code, as long as your needs don't outstrip a single instance. There are also no limitations in terms of the programming language or software that you use: your application can be based on J2EE, .NET, Python, Ruby, or anything else that runs on Windows or Linux.

Microsoft (Azure)

Microsoft calls Azure a cloud operating system (PaaS), and as such it abstracts away the regular OS we are familiar with. You write your application specifically to work with Azure, using its storage, queuing, and other services, and then simply publish it to run on Azure's compute fabric. The good thing is that you use the tools you are already familiar with to write the application (e.g. Visual Studio 2008).

After the application is written it can be tested locally and published to the Azure cloud only when you are satisfied. Once the application is running in Azure you cannot attach a debugger to it; you must rely on good old log messages for debugging and gathering state information. Azure supports .NET as its de facto framework and accepts assemblies compiled from any CLR-compliant language, as long as they comply with Azure's framework. The obvious drawback of this approach is that an existing application cannot simply be moved to the cloud without rewriting its key components, and perhaps not without a complete redesign. However, the advantages are also obvious and significant: once you have written your application to run on Azure, you gain the instant ability to scale by simply reconfiguring the number of instances your application runs on. This can be very powerful if your application's usage is unpredictable or varies seasonally. Another issue with this approach is that it ties your application to the platform; you cannot simply take your application and run it in another cloud.
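As a sketch of what "reconfiguring the number of instances" looks like, early Azure services carried an XML service configuration alongside the code; the service and role names below are invented for illustration:

```xml
<!-- ServiceConfiguration.cscfg (illustrative sketch): raising the
     Instances count from 1 to 4 scales the role out with no code change -->
<ServiceConfiguration serviceName="MyService">
  <Role name="WebRole">
    <Instances count="4" />
  </Role>
</ServiceConfiguration>
```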

Google (App Engine)

Google App Engine is a PaaS platform combining an instance service, a data service, and a queue service. Like Azure, it hides the OS and hardware from the user/application. But it requires the application to be written in Python in order to be hosted on Google's cloud. You have to write the application using the SDK provided by Google; once again, you can configure it to run multiple instances and automatically benefit from seamless scalability and redundancy. However, you not only have to rewrite your existing applications, you also have to commit to a specific language and platform, which may not match the requirements or the skill set of your organization.
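To give a feel for the programming model, an App Engine application boils down to Python request handlers that Google's infrastructure runs and replicates for you. Here is a minimal WSGI-style handler in that spirit (the names and response text are illustrative, and this is deliberately not the actual App Engine SDK API):

```python
# Minimal WSGI-style handler of the sort a PaaS hosts: the platform,
# not your code, decides how many instances of this run and where.
def application(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    body = ("Hello from %s" % path).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

The handler holds no server state of its own, which is what lets the platform add or remove instances of it freely.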

Cloud Computing Trends: Thinking Ahead (Part 1)

The aim of this three-part series is to gain insight into the capabilities of cloud computing, some of the major vendors involved, and an assessment of their offerings. This series will help you assess whether cloud computing makes sense for your organization today and how it can help or hurt you. The first part focuses on defining cloud computing and its various flavors, the second part focuses on offerings from some of the major players, and the third part talks about how it can be used today and possible future directions.

Today cloud computing and its sub-categories do not have precise definitions. Different groups and companies define them in different and overlapping ways. While it is hard to find a single useful definition of the term “cloud computing” it is somewhat easier to dissect some of its better known sub-categories such as:

  • SaaS (software as a service)
  • PaaS (platform as a service)
  • IaaS (infrastructure as a service)
  • HaaS (hardware as a service)

Among these categories SaaS is the most well known, since it has been around for a while, and it enjoys a well-established reputation as a solid way of providing enterprise-quality business software and services; well-known examples include Google Apps, SAP, etc. HaaS is the older term used to describe IaaS and is typically considered synonymous with it. Compared to SaaS, PaaS and IaaS are relatively new, less understood, and less used today. In this series we will mostly focus on PaaS and IaaS as the up-and-coming forms of cloud computing for the enterprise.

The aim of IaaS is to abstract away the hardware (network, servers, etc.) and allow applications to run on virtual instances of servers without ever touching a piece of hardware. PaaS takes the abstraction further and eliminates the need to worry about the operating system and other foundational software. If the aim of virtualization is to make a single large computer appear as multiple small dedicated computers, one of the aims of PaaS is to make multiple computers appear as one and to make it simple to scale from a single server to many. PaaS aims to abstract away the complexity of the platform and allow your application to scale automatically as the load grows, without worrying about adding more servers, disks, or bandwidth. PaaS presents significant benefits for companies that are poised for aggressive organic growth or growth by acquisition.
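The "scale as the load grows" promise boils down to a policy the platform applies on your behalf. A toy Python sketch, with made-up capacity numbers:

```python
import math

# Hypothetical autoscaling policy of the kind a PaaS runs for you:
# grow the instance count as traffic rises, shrink it as traffic falls.
def desired_instances(total_rps, rps_per_instance=100, minimum=1, maximum=20):
    """Instances needed so no instance serves more than rps_per_instance."""
    needed = math.ceil(total_rps / rps_per_instance)
    return max(minimum, min(maximum, needed))
```

The point is that the application never sees this logic; the platform watches the load and adjusts the fleet, which is exactly the worry IaaS still leaves on your plate.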

Cloud Computing: Abstraction Layers

So which category/abstraction level (IaaS, PaaS, SaaS) of the cloud is right for you? The answer depends on many factors, such as the kind of applications your organization runs (proprietary vs. commodity), the development stage of those applications (legacy vs. newly developed), the time and cost of deployment (immediate/low vs. long/high), scalability requirements (low vs. high), and vendor lock-in (low vs. high). PaaS is highly suited for applications that inherently have seasonal or highly variable demand and thus require a high degree of scalability. However, PaaS may require a major rewrite or redesign of the application to fit the vendor's framework; as a result it may cost more and cause vendor lock-in. IaaS is great if your application's scalability needs are predictable and can be fully satisfied by a single instance. SaaS has been a tried and true way of getting access to software and services that follow industry standards. If you are looking for a good CRM, HR management, or leads management system, your best bet is to go with a SaaS vendor. The relative strengths and weaknesses of these categories are summarized in the following table.


[Comparison table: IaaS, PaaS, and SaaS compared along App Type (Prop. vs. Commodity), Development Stage, and Time & Cost (of deployment); the table cells were lost in extraction.]

Cloud Computing: Category Comparison

Cloud Computing: Where is the Killer App?

As an avid reader, I have read too many articles lately about how the bleak economy is going to drive more IT teams to use cloud computing. The real question is: what are the proper applications for cloud computing? For the more conservative IT leader, there must be a starting point that isn't throwing one of your mission-critical applications into the cloud.

One of the best applications of cloud computing that I have seen implemented recently is content management software. One of the challenges with content management is that it is difficult to predict the ultimate storage needs. If the implementation is very successful, the storage needs start small and quickly zoom into hundreds of gigabytes as users learn to store spreadsheets, drawings, video, and other key corporate documents. Open source content management software can be deployed quickly on cloud computing servers, and the cost of storage will ramp up in line with actual usage. Instead of guessing what the processor and storage needs will be, the IT leader can simply start the implementation and the cloud computing environment will scale as needed. My suggestion is to combine wiki, content management, and Web 2.0 project management tools running in the cloud for your next major software implementation project or large corporate project.
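The pay-as-you-grow argument is easy to put in numbers. In this back-of-the-envelope Python sketch, every rate and usage figure is invented for illustration, not any vendor's actual pricing:

```python
# Back-of-the-envelope sketch: total storage spend when you pay only for
# actual usage each month vs. provisioning the worst case on day one.
# The $/GB/month rate and the usage curve are made-up numbers.
PER_GB_MONTH = 0.15  # hypothetical cloud storage rate

def cloud_cost(usage_gb_by_month):
    """Total spend when billed for actual usage each month."""
    return sum(gb * PER_GB_MONTH for gb in usage_gb_by_month)

def provisioned_cost(peak_gb, months, per_gb_month=PER_GB_MONTH):
    """Total spend when the peak capacity must be bought up front."""
    return peak_gb * per_gb_month * months
```

With a usage curve that starts at 10 GB and ends at 400 GB, the pay-as-you-go total is a fraction of what provisioning 400 GB from day one would cost, which is the whole point of the content-management example above.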

A second "killer" application for cloud computing is software development and testing. One of the headaches and major costs of software development is the infamous need for multiple environments for developing and testing. This need is compounded when your development team is using Agile methodologies and the testing department is dealing with daily builds. The cloud computing environment provides a low-cost means of quickly "spinning up" a development environment and multiple test environments. This use of the cloud is especially beneficial for web-based development and for testing load balancing for high-traffic web sites. The ability to "move up" in processor speed, number of processors, memory, and storage helps establish real baselines for when the software project moves to actual production, versus the traditional SWAG approach. The best part is that once development is complete, the cloud computing environment can be scaled back to maintenance mode, and there isn't unused hardware waiting for re-deployment.

The third "killer" application is data migration. Typically, an IT leader will need large processing and storage capacity for a short term, to rapidly migrate data from an older application to a new one. Before the cloud, companies would rent hardware, use it briefly, and ship it back to the vendor. The issue was guessing the CPU power and storage needed to meet the time constraints of the dreaded cut-over date. The scalability of the cloud computing environment reduces the hardware cost of data migrations and allows the flexibility to quickly add processors on that all-important weekend. There is simply no hardware to dispose of when the migration is complete. Now that is a "killer" application, in my humble opinion. By the way, cloud computing would be an excellent choice for re-platforming an application too, especially if the goal is to make the application scalable.

In summary, if your IT team has a short term hardware need, then carefully consider cloud computing as a cost effective alternative. In the process, you might discover your “killer app” for cloud computing.

IBM Announces Certification in Cloud Computing … Huh?

IBM first announced a competency center in Cloud Computing, then a Certification, over the past couple of weeks. Well, I guess the old Druid Priests of Mainframes should recognize the resurrection of their old God TimeSharing. Here we are, back and rested from the Breast of Gaia, Greener than Green (Drum Roll Please…): Cloud Computing! (Cloud Computing quickly adjusts his costume makeover to hide Ye Olde TimeSharing's wrinkled roots.) Yes! Here I am: fresh, new, exciting, Web 2.0, Chrome-ready! With me are the only guys (Big Smile from IBM!) who can Certify and Consult in My Mysteries… IBM!

The more things change, the more they stay the same, but this pushes the Great Hype Engine to a new high (or low… ha ha ha). I can understand IBM wanting to jump on the Cloud Computing bandwagon, but are we really ready for a certification? No one is really sure what is in the Cloud or how it operates, but IBM is ready to lock it and load it. Yep, they are Certifiable! (Ha ha ha!) While one can admire the desire to define and take a stand on Cloud Computing, this is one topic that requires a bit more "cook time" before full-scale avarice takes hold.

Cloud Computing is too "cloudy" and "amorphous" to define today. While expertise and advice are required, there needs to be more independent vetting and best-of-breed, component-level competition. Full solution demo platforms need to be put together to elicit ideas and act as pilots. Case studies need to spring from these efforts, and from early adopters, before an organization bets the farm on a Cloud solution. The existing ERP platforms did not come into being overnight, and Cloud solutions share an element of their interdependency and complexity (Rome was not built in a day!). All of the elements of backup, disaster recoverability, auditability, service-level assurance, and security need to be in place before there can be total buy-in to the platform. The reputation of Cloud Computing hangs in the balance; all it would take is one high-visibility failure to set things back for potentially years (especially given the current macro environment).

Above all, at this stage a certain level of independence is required for the evaluation and construction of possible solutions. Evolution mandates independent competition (Nature, Red in Tooth and Claw, Cage Fighting, Yes!). Maturity brings vendor ecosystems and the all-consuming Application Stack, but not yet. More innovation is required; we may not even have heard of the start-up who could win this game.

Reducing IT Costs for New Acquisitions

Over at CIO magazine, Bernard Golden recently published an update on Cloud Computing. In his list of the types of companies that can benefit substantially from computing in the cloud, he left off one situation that can reap tremendous benefits from this approach: newly acquired private equity portfolio companies that are being carved out from larger businesses.

For these companies, cloud computing offers the following benefits:

  • An accelerated implementation timeline that dramatically reduces implementation costs
  • Significant savings on support costs, which typically represent 60% of the IT budget
  • Elimination of the dependency on staffing and retaining IT support staff
  • Costs that scale with the number of users
  • A repeatable implementation playbook
  • Easy extensibility for tuck-in acquisitions

One of our senior architects, Martin Sizemore, has laid out the broad strokes of this approach in a short slide show.

It's an especially attractive M&A technology approach in the middle market, where it can help drive annual IT budgets down under 4% of revenue. While it is most advantageous for creating a new operating platform to accelerate transition services agreement (TSA) migrations, the transition to cloud computing makes sense as a value driver at any point in the asset lifecycle.
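To see how cloud sourcing could push an IT budget below that 4% line, here is a deliberately simplified model; every figure in it is invented for illustration, not taken from the slide show or any real engagement:

```python
# Illustrative model: IT budget as a percentage of revenue when support
# costs (roughly 60% of the budget, per the list above) are replaced by
# cheaper per-user cloud pricing. All inputs are made-up numbers.
def it_budget_pct(revenue_m, non_support_m, support_m):
    """IT spend as a percentage of revenue (all amounts in $M)."""
    return 100.0 * (non_support_m + support_m) / revenue_m

# A hypothetical carve-out: $200M revenue, $4M non-support IT spend.
before = it_budget_pct(200.0, 4.0, 6.0)  # traditional support: $6M
after = it_budget_pct(200.0, 4.0, 2.5)   # support via per-user cloud fees
```

Under these invented numbers the budget falls from 5.0% of revenue to about 3.25%, which is the kind of move the 4% claim describes.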

From the Cloud to the Bunker, the Cold Splash of Reality

Cue the movie Aliens: "…we're screwed, man, it's over, it's over! They're going to come in here and they are going to…! Get a grip, Hudson!" That is what things feel like here at the moment. We are just welding up the armor around the bunker, waiting for the Credit Crisis Aliens to get in and decimate IT with their acid blood and their ability to plant parasites in our chests. I guess we do need to get a grip and figure out how to shift gears for a new reality.

Anyone want to travel to the C-Suite (Alien Central) to request budget for Web 2.0, Cloud Computing, Chrome, or Green initiatives? (Just leave your dog tags and gear here, soldier; it will make it easier for us to split it up among ourselves.) The whole thing makes me chuckle as I weld another piece of steel over my door. The first book I go for in situations like these, given my experience and training, is George Orwell's "1984". Doublethink spin is the order of the day. Green Computing becomes the High Energy, Aggressive Server and License Rationalization Savings Initiative. Cloud Computing becomes the Radical Infrastructure Outsourcing and Savings Program. Web 2.0 becomes the Intensive Customer Acquisition and Support Cost Reduction Program by Having Them Do All of the Back-office Work. Everyone admit it: you've seen names like these before; look at the name of any Congressional bill, they use the same playbook.

Cynicism aside, the world has changed. IT needs to focus on providing solid data and tools to aid in planning and budgeting so the company can move forward given the new reality. Tactical cost-savings initiatives need to be put on the table to keep staff occupied in a productive manner. This is the time to consolidate that server farm, outsource network configuration and maintenance, eliminate under-utilized software, and rationalize or outsource maintenance of the PC hardware base. Each of these is a steel plate welded onto the doors to keep the Aliens at bay.

Continue low-cost planning initiatives in new technology: all things pass, and this too shall pass in time. IT needs to be ready to move forward without skipping a beat, and keeping this focus will help morale as well. New technology has been the source of most of the major productivity gains and cost savings of the last 20 years, so the organization as a whole needs to stay tuned in to any opportunities coming over the horizon.

Plus, think of the fun of watching the trade press and the vendors being chased and harvested by the aliens; it could not happen to a better group. I cannot wait for the shift in editorial priority and ad focus. Get your copies of "Aliens" and "1984" ready for reference!

Cloud Killer App: Looking for Love in All the Wrong Places

The new darling of the technical media and every product company, Cloud Computing, is searching for its Killer Application. That seems to be the topic of every article and PR announcement. Every show and seminar claims to have previews of, or insights into, this great new Holy Grail: the software that will launch the Cloud Computing platform to prominence and make everybody billions. Really! Whatever they are smoking, can I get some too? What totally scares me is being "one" with Larry Ellison. How did I ever get into this philosophical state?

During my prehistoric times as a college student, a professor returned a paper I submitted with a simple comment: "If this is the solution, what was the problem again?" The professor gave me the Stalinistic "opportunity" to resubmit the paper with either the same or (hint, hint) a modified solution (wrong choice: Gulag for you). Believing he was the south side of a north-bound mule, I knew there was a trick to this situation. Disassembling the paper logical thread by logical thread revealed he was right; the solution the paper proposed did not map to the original case study problem, and an all-night, typewriter-based re-write was in order (I hate when that happens!).

Pardon the rambling dementia, but we have the same situation here: Cloud Computing does not necessarily lead to a new Killer Application. Logically, Cloud Computing will lead to a new range of hardware innovation, not software innovation. Cloud Computing presents the opportunity not to be enslaved to a classic server-based data center or even a PC. It will supercharge mobile computing via advanced cellphones and drive further mobile gadget innovation. Cloud Computing drives pervasive computing; that is its Killer Application.

Image courtesy of King Megatrip

Chrome: a new browser from Google. Or a new Web OS?

I'm very excited about the news breaking today of Chrome, the new browser from Google. It launches tomorrow, and you can read all about it on Google's blog and see their tech-friendly comic book (which is brilliant by itself).

I have to admit that both the last release from Firefox and especially the half-baked, lackluster IE8 beta from Microsoft were disappointing. While providing relatively minor improvements for most users, they failed to address the biggest challenge confronting the continuing growth of the web: inherent support for rich applications. All we want is to use our email, IM, search, and Facebook without the browser crashing every few hours and taking all our windows and tabs with it.

The browser has become the master application where most of our work and play on the computer is done these days. As Google nicely put it in their blog post: "All of us at Google spend much of our time working inside a browser. We search, chat, email and collaborate in a browser. And in our spare time, we shop, bank, read news and keep in touch with friends — all using a browser." … "What we really needed was not just a browser, but also a modern platform for web pages and applications, and that's what we set out to build."

So it seems the smart guys at Google finally understood that if they base their entire business on ads presented while web browsing, they had better make sure that the browsing experience is fast, secure, and continues to flourish. Counting on Microsoft to do that for them is not a smart business strategy.

The new Chrome browser was built from scratch not just as a browser but as a platform. Most of the features and improvements are taken from the OS playbook for stability and security: process containment, sandboxing, efficient garbage collection, and a tight security model.

Here is a short list of some of the innovations Chrome is introducing:

  • Process isolation for tabs and for plugins within tabs. Awesome. No more will a single misbehaving page force me to kill the browser and lose all 30 tabs I have open, gone with the wind.
  • A new JavaScript virtual machine that produces compiled machine code. If JavaScript is to be the future of rich web interfaces (as opposed to the proprietary Flash or Silverlight), it needs to run fast and be more robust, and that's exactly what the new virtual machine provides.
  • Gears integration: with Gears support for persistence and OS-level access, developers can build client-level applications for the web with reasonable portability.
  • Security: the new security model offers a strong foundation for an ongoing security schema that can be used by application coders and plugin providers.
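The stability win from the first bullet is easy to demonstrate in miniature: when each "tab" is its own OS process, one of them dying abruptly leaves the rest untouched. A toy Python sketch (obviously not Chrome's actual code):

```python
# Toy demonstration of process-per-tab isolation: one "tab" process
# crashing hard leaves the parent and the other tab unaffected.
import multiprocessing
import os

def tab(name, crash):
    if crash:
        os._exit(1)  # simulate a page or plugin taking the process down

def run_tabs():
    tabs = [multiprocessing.Process(target=tab, args=("news", True)),
            multiprocessing.Process(target=tab, args=("mail", False))]
    for t in tabs:
        t.start()
    for t in tabs:
        t.join()
    # exit codes: 1 for the crashed tab, 0 for the healthy one
    return [t.exitcode for t in tabs]
```

In a single-process browser, that `os._exit(1)` would be the whole window going down; here it is one exit code in a list.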

Google will also make the whole thing open source, allow plugins, and invite everyone to add and extend it.

That's the kind of innovation we need in order to keep the web growing and to make it a robust platform for work and play.

I can’t wait to give it a full try tomorrow.

Data, The Ugly Stepsister of Web 2.0

The basket of technology comprising Web 2.0 is a wonderful thing and worthy of all the press and commentary it receives, but what really scares me is the state of data in this new world. Data sits in the basement of this wonderful technology edifice: ugly, dirty, surrounded by squalor, and chained in place. It is much more fun to just buy the next storage array (disk is cheap, infinite, what power bill?) than it is to grind through the data, clean it up, validate it, and ensure proper governance and ontology.

What is Web 2.0 for, if not to expose more content? And data is the ultimate content.  Knowing what is hiding in the basement, there are going to be a lot of embarrassed organizations (Lucy, you got some ‘splaining to do!).  Imagine how difficult it is going to be to link and synchronize content and data in the Web 2.0 environment.  Imagine explaining the project delays and failures of Web 2.0 initiatives when the beast in the basement gets a grip on them.

Normally, the technology will be blamed. Nobody wants to admit they store the corporate crown jewels in the local landfill. Nobody will buy the new products fast enough. The server farms being built to support Cloud Computing will sit spinning and melting Arctic ice in vain (Microsoft's container-based approach is cool). This could seriously impact the market capitalization of our top tech giants: Microsoft, Oracle, Google, Amazon. Oh no! It could crash the stock market and bring on tech and financial Armageddon, given our weakened state! Even worse, my own career is at stake! The devil with them; they are all rolling in money, but I could starve!

Now that I have my inner chimp back in the box: we need to put together a mitigation strategy that allows for a steady, phased improvement of the data situation in tandem with Web 2.0 initiatives. It is too much to expect anybody to clean up the toxic data dump in one sitting, and we cannot tag Web 2.0 with the entire bill for years of neglect (just toss it in the basement, no one goes there). If we do not ask IT to own up to the issue and instead allow projects to fail, senior management (fade to The Office) will assume the technology is at fault and will not allocate the resources needed to make this key technological transition.