Agility on Tap: Demystifying the Virtues of Virtualization

Bryan Doerr, CTO of IT services powerhouse SAVVIS, pulled off quite a feat at the Technology Executives Club Outsourcing Update in Chicago last week: in 30 minutes, he explained how visionary CIOs were increasing the value of “IT” by making it vanish. “IT” is not merely being commoditized; it faces an even more ignoble fate—being virtualized—and this is an exceedingly good thing.

What Is “IT” and Why Do CIOs Want to Be Rid of It?

To anyone with a corporate background, “IT” evokes images of legions of recalcitrant boxes, wires and logins that talk to each other less easily than in-laws. Yet “IT” is unquestionably, as Gartner deemed it, the “enterprise nervous system” because it controls the flow of digital information. The root cause of the problem is that all the boxes (computers), routers, networks and software have been designed and brought to market individually, each to make money for its seller. Therefore, each of them strikes a balance between its individual value-add and its requirement to blend in with the IT environment (including competitors’ products). Each component of “IT” can become a bottleneck when the assumptions behind its design are stretched too far. The ensemble is inflexible and very expensive to maintain, yet it is an overwhelming reality at global organizations.

How overwhelming? For years, studies have consistently pegged “maintenance” as consuming 70 percent of most IT budgets. New applications, and the networks, infrastructure and hardware to support them, get a mere 30 percent.

The Virtue of Virtualization

Any CIO would be a hero if s/he were able to banish the “IT” problems and inflexibility from the land while maintaining—or increasing—the level of services available. This is exactly what virtualization proposes. Don’t be put off by the hype-laced term: it simply means that most of the boxes and wires are digitized, *represented* by software, so the physical servers, routers and firewalls no longer exist; the services they provided are delivered by other means.

As Doerr put it, “Virtualization is the definition and instantiation of an abstraction that represents a real entity” (the server is defined, and its definition is represented within a software environment, the same way an icon represents the trash can on your PC’s desktop).

Further, “An abstraction is a proxy for an entity that presents, and possibly enhances, properties of the entity relevant to its purpose in a specific context” (the same idea as above with the tweak that the “virtual” server might have its properties enhanced in certain situations).

In other words, virtualization enables CIOs to minimize the physical constraints that IT’s devices imposed on the system because the physical devices no longer exist. However, how can this be? If you think about it, an enterprise’s thousands of servers can be defined by their processors, amounts of RAM and hard disks. Each server is set up to run certain software applications that have been designed to demand certain hardware characteristics. In the current model, the software and hardware are tightly coupled to each other; for instance, when the software is upgraded, physical RAM has to be added to the server it runs on.

In the virtual model, Server A’s processor, RAM and hard disk are specified in an object on an enterprise console (think “desktop”). Since Server A doesn’t exist, how does it execute tasks? In the data center, racked “blade” servers contain only RAM and processors, and their services are provisioned by Server A’s software representation in the console. The “disk” component is provided by a huge, hyperfast SAN (Storage Area Network). Another way to put it is that the virtualized data center aggregates processor, RAM and disk into an “environment.” The individual components (servers) are represented in the console, and they specify the amount of processor, RAM and disk they need from the aggregated resources, based on the tasks they are assigned.
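
To make that concrete, here is a minimal sketch (in Python, with invented class and field names rather than any vendor's actual API) of the idea that a server exists only as a definition, with its processor, RAM and disk drawn from aggregated pools:

```python
from dataclasses import dataclass

@dataclass
class ServerDefinition:
    """The console object that stands in for a physical server."""
    name: str
    cpus: int       # processors requested from the blade pool
    ram_gb: int     # RAM requested from the blade pool
    disk_gb: int    # disk carved out of the SAN

class ResourcePool:
    """Aggregated blades (CPU, RAM) and SAN (disk) behind the console."""
    def __init__(self, cpus: int, ram_gb: int, disk_gb: int):
        self.free = {"cpus": cpus, "ram_gb": ram_gb, "disk_gb": disk_gb}

    def provision(self, spec: ServerDefinition) -> str:
        needs = {"cpus": spec.cpus, "ram_gb": spec.ram_gb, "disk_gb": spec.disk_gb}
        if any(self.free[k] < v for k, v in needs.items()):
            raise RuntimeError(f"insufficient pooled resources for {spec.name}")
        for k, v in needs.items():
            self.free[k] -= v
        return f"{spec.name} is running on pooled hardware"

pool = ResourcePool(cpus=512, ram_gb=4096, disk_gb=200_000)
server_a = ServerDefinition("Server A", cpus=8, ram_gb=32, disk_gb=500)
print(pool.provision(server_a))  # "Server A" never exists as a physical box
```

The point of the sketch is that “Server A” is nothing but its definition; the pool decides which blades and which slice of the SAN actually do the work.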

Other components of enterprise infrastructure—for example, routers, load balancers and firewalls—are virtualized in much the same way. To look at the larger picture, imagine that the data center has “clouds” of resources. First, the Virtual WAN Infrastructure contains “instances” (representations) of the WAN routers and firewalls, and it specifies their detailed configurations and executes their tasks via aggregated resources. Second, the Virtual Hosted Infrastructure contains instances of routers, load balancers and firewalls that work the same way. Third, the Virtual Compute Infrastructure contains the racks of blades, which function as processor and RAM resources. Last, the Virtual Storage Infrastructure is the SAN. And it’s all connected by an Integrated Management System.
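
A rough way to picture those four clouds is as a single declarative description held by the Integrated Management System. The structure and field names below are invented purely for illustration:

```python
# Illustrative only: one possible shape of the "virtual data center" record
# an integrated management system might hold. All names are invented.
virtual_data_center = {
    "virtual_wan_infrastructure": {      # instances of WAN routers and firewalls
        "wan_routers": [{"id": "wan-r1", "sites": ["hq", "branch-12"]}],
        "firewalls": [{"id": "wan-fw1", "policy": "deny-by-default"}],
    },
    "virtual_hosted_infrastructure": {   # routers, load balancers, firewalls
        "routers": ["host-r1"],
        "load_balancers": [{"id": "lb-1", "members": ["web-1", "web-2"]}],
        "firewalls": ["host-fw1"],
    },
    "virtual_compute_infrastructure": {  # racks of blades: processor and RAM
        "blade_racks": 12, "cpus": 4096, "ram_gb": 32768,
    },
    "virtual_storage_infrastructure": {  # the SAN
        "san_capacity_tb": 500,
    },
}
```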

Anyone who spends any time traipsing through the digital world knows that when things are digitized—represented by software on a desktop—they become far more flexible. Want to move those three paragraphs back two pages? Just “cut” and “paste.” Old-timers know that cut and paste used to be a literal exercise, painstakingly undertaken with the help of a light table. Whether photos, playlists in iTunes or financial reports, anything is tweakable in the digital world. Now the enterprise’s infrastructure can enjoy the same benefits.

How Virtualization Changes the Economics of Enterprise Infrastructure

Obviously, building and managing enterprise IT infrastructure in this way represents a discontinuous, game-changing proposition, which is what interests me. Here are a few highlights that Doerr used to illustrate the point.

  • Performance—In the current, non-virtual model, an application’s performance is constrained by the boxes it runs on and the infrastructure resources it can command at run time. In the virtual model, however, services can be specified based on certain conditions (“specific context”), in real time and seamlessly. In other words, an application can command more RAM for certain jobs and give it back when the job is done. In addition, to upgrade server performance in the old model, it’s necessary to change out the server’s processor, add physical RAM or upgrade its disk. In the virtual model, the server can be “upgraded” by changing its definition in the console, which reallocates the resources it demands (a minimal sketch of this follows the list below).
  • Efficiency—All “boxes” in the infrastructure today have to be powerful enough to handle the highest level of anticipated demand. Most of the time, however, their operation falls within an average level of utilization, which means that, returning to the example of a server, a large percentage of its processor, RAM and disk is unutilized most of the time. Multiplying this percentage by the thousands of servers that exist in many organizations, one can see that the cost of unutilized resources is substantial. Another way of thinking about this is to consider a commercial airline that is the only company serving Tallahassee and Syracuse. Let’s say that the Tallahassee to Syracuse flight carries 700 passengers on the third Monday of the month, so the airline provisions a 700-seat aircraft for that flight. On all the other days of the month, however, 350 to 500 seats are empty.
  • Disaster Recovery—When considering disaster recovery, CIOs traditionally have to make a serious trade-off between performance and cost. In the non-virtual model, they have to duplicate their physical infrastructure in order to maintain IT operations, which is exceedingly costly. Virtualization, however, means that the enterprise’s data center no longer consists of physical machines. Rather, it is a virtual data center: a set of specific configurations and instructions that can be executed in any data center that has those instructions and sufficient resources. If Houston goes down due to a hurricane, the data center can be brought up in Denver. Since many virtualization providers charge only for resources actually consumed, the cost of disaster recovery is far lower.
  • Suggestive Numbers—Doerr didn’t have time to drill down into the numbers during the meeting, but a study jointly conducted by IDG Research and SAVVIS reports that average utilization for mainframes and Unix/Linux servers was 50 percent or less, while utilization of enterprise storage was about 60 percent.
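
As promised in the Performance bullet above, here is a minimal, self-contained sketch of the “upgrade by editing the definition” idea. The names and numbers are invented for illustration; real platforms expose this in their own ways, but the shape of the operation is the same:

```python
# Hypothetical sketch: "upgrading" a virtual server is an edit to its
# definition, and the aggregated pool supplies or reabsorbs the difference.
pool = {"cpus": 512, "ram_gb": 4096}               # aggregated blade resources
server_a = {"name": "Server A", "cpus": 8, "ram_gb": 32}

def resize(server: dict, resource: str, new_amount: int) -> None:
    """Change the definition; nobody touches physical hardware."""
    delta = new_amount - server[resource]
    if delta > pool[resource]:
        raise RuntimeError(f"pool cannot supply {delta} more {resource}")
    pool[resource] -= delta      # a negative delta returns capacity to the pool
    server[resource] = new_amount

resize(server_a, "ram_gb", 128)  # borrow RAM for a heavy job
resize(server_a, "ram_gb", 32)   # give it back when the job is done
print(server_a, pool)
```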

Going Virtual

Doerr explained that, since it is the company’s physical infrastructure that is virtualized rather than the applications themselves, applications in most cases don’t have to be rewritten to be deployed in the virtual environment. Most CIOs begin by running applications on a virtual infrastructure and progress to virtualizing entire data centers. In other words, it’s relatively easy to get started; there’s not a high cost of entry. He also noted that, since most applications were written prior to virtualization, there is much room for innovation with respect to dynamic sizing and resource allocation. In the legacy model, applications specify the resources they need and are not “aware” of the infrastructure’s ability to execute them. In the future, there will be feedback loops built into applications, which will communicate with the infrastructure to adjust their performance dynamically.
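
To illustrate where that could lead, here is a speculative sketch of such a feedback loop. The `InfrastructureAPI` class and its `scale_ram` method are invented stand-ins, not an interface Doerr described:

```python
import random
import time

class InfrastructureAPI:
    """Stand-in for whatever interface a virtual data center might expose."""
    def scale_ram(self, app_name: str, ram_gb: int) -> None:
        print(f"[infra] {app_name} now allocated {ram_gb} GB RAM")

def run_with_feedback(app_name: str, infra: InfrastructureAPI, cycles: int = 5) -> None:
    """An 'infrastructure-aware' application adjusting its own resources."""
    ram_gb = 4
    for _ in range(cycles):
        load = random.random()          # stand-in for measured demand
        if load > 0.8 and ram_gb < 64:
            ram_gb *= 2                 # ask for more before falling behind
            infra.scale_ram(app_name, ram_gb)
        elif load < 0.3 and ram_gb > 4:
            ram_gb //= 2                # hand capacity back to the pool
            infra.scale_ram(app_name, ram_gb)
        time.sleep(0.1)                 # stand-in for a monitoring interval

run_with_feedback("billing-batch", InfrastructureAPI())
```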

Analysis and Conclusions

  • Agility on tap—Every one of the CIO events that I have sponsored and attended during the past five years has been heavily laced with the theme of “agility.” This reflects broad recognition that the large organizations that wield the most power and influence in the world are hamstrung because they cannot respond to change, and change is accelerating everywhere. Therefore, they are getting more out of sync with the world. As suggested above, “IT” broadly reflects the problem because, on one level, it is another organizational silo. However, in being the “enterprise nervous system,” it has the opportunity to “enable” widespread change throughout the organization. Virtualization is a game-changing way to achieve a quantum leap in agility—while changing the economics of the IT value proposition.
  • New era in IT—Combined with the enterprise software transformation at the hands of OOA/D and distributed applications, about which I’ve written extensively, I hazard that we are entering a new era of IT—perhaps akin to the transition from artisan production to mass industrialization—because computing tasks will be executed by slices of massive resources in virtual data centers. Today, typewriters are nostalgia-ridden devices for collectors because the process of recording thoughts “on paper” has been completely virtualized with “word processing.” Servers, routers, firewalls and all the other devices will meet the same fate. Talk about a mixed blessing for vendors!
  • The electricity example—The development of electricity offers another example of the transition that IT is undergoing. In the early days, many businesses and even homes had their own generators because power generation was so unreliable. Since there was no reliable centralized source of power, there were myriad devices, transformers, generators and the like, as well as electrician-handymen to deal with them. Devices and appliances often ran at different voltages and had to be “stepped up” or “down” to match the power, which often fluctuated. It was quite maintenance-intensive and chaotic. Today, there is an always-on reservoir of power that devices access via standardized protocols; there is no more need for individual generators, transformers and all those other custom-made devices. That is the state in which IT has been; that is the state we are exiting. When I consider the concept and practice of virtualization as Doerr described it, I can see that today’s server, network, software compatibility and login problems will seem quaint within a few years.
  • The future—One of the biggest barriers to virtualization is a mental one. Digitization of physical things challenges established ways of seeing and doing. Making physical machines and all their support systems disappear, including people’s roles, stretches the mind. But the icon on the desktop is only a transitional phase; the server’s icon need only be there because applications specify it and are looking for it. Doerr’s remark about “infrastructure aware” applications suggests that applications built for the virtual environment won’t “care” about the processor, the amount of RAM or where their instructions are executed because they will communicate automatically with the infrastructure; in a sense, they will become “aware of themselves” and proactively manage themselves without human intervention.
  • Next stop: the organization itself—As stated above, IT reflects the organization’s inability to adjust quickly to volatile markets and shortening product life cycles. IT focuses explicitly on the digital world, while other organizational areas use IT in performing finance, human resources or production work. As such, IT can lead and encourage new thinking. For example, by applying the methods of virtualization and abstraction to the organization itself, enterprises can begin to dismantle their unwieldy legacy organizations to make them more flexible. Specifically, they can use abstraction to model their “organizations” as more focused “business components” that use standardized messaging to communicate with each other. In other words, rather than looking at the human body at the macro level (arm, leg, head), understand it at the cellular level.
