Chapter 1
Understanding Virtualization
IN THIS CHAPTER
- Understanding the different categories of virtualization
- Understanding early session virtualization
- Changing focus in corporate datacenters
- Examining the cloud and cloud services
- Meeting the needs of mobile workers
- Using virtualization to address today's challenges
Virtualization is a somewhat broad term that has gotten even broader over time as users, organizations, and the technologies they use have evolved. This chapter starts at the beginning of the virtualization journey (my journey, anyway). I share my personal experiences not only to highlight the dramatic changes that have taken place, but also to demonstrate the many features of today's technology that still echo ideas from over three decades ago. I discuss the ways the technology has grown, shifted, and made its way into nearly every organization and almost every aspect of our digital world. This chapter describes the main trends in virtualization over the past 30 years and provides a good background for the rest of the book, which dives deeper into Microsoft's products in each area of virtualization.
What Is Virtualization?
It's important to start by defining the term virtualization, which means different things to different people in the context of computing. For the purposes of this book, you can think of virtualization as breaking the bonds between different aspects of the computing environment, abstracting a certain feature or functionality from other parts. This abstraction and breaking of tight bonds provides great flexibility in system design and enables many of the current capabilities that are the focus of IT and this book.
Over time the virtualization tag was applied to many other technologies that had been around for some time, because they also broke those tight couplings and abstracted concepts. Over the next few pages I introduce the major types of virtualization, which are explored in detail throughout the book.
When the word virtualization is used without qualification, many people think of machine virtualization, which is the easiest type to understand. With machine virtualization, the abstraction occurs between the operating system and the hardware via a hypervisor. The hypervisor divides the physical resources of the server, such as processor and memory, into virtual machines. These virtual (synthetic) machines have virtual hard disks, network adapters, and other system resources that are independent of the physical hardware. This means you can typically move a virtual machine fairly easily between different physical machines, as long as they use the same hypervisor. By contrast, if you take the system drive out of one physical computer and put it in another, it's unlikely to work well, if at all, because of differences in hardware configuration. In addition, by creating several virtual machines on one physical computer, you can run multiple instances of the operating system on a single server, gaining higher utilization as hardware is consolidated, an idea I expand on later in this chapter.
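To make this concrete, here is a minimal sketch using the Hyper-V PowerShell cmdlets covered later in this book. The VM name, paths, and switch name are illustrative placeholders, and the commands assume the Hyper-V role and its PowerShell module are installed:

# Create a new virtual machine with its own virtual hard disk, connected
# to an existing virtual switch (all names and paths are examples only)
New-VM -Name "TestVM01" -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\TestVM01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "External Switch"
# Because the VM is just files plus configuration, it can later be moved
# to another host running the same hypervisor
Start-VM -Name "TestVM01"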
In presentation virtualization, also called session virtualization, the user session is abstracted from the local device and runs on a remote server that accepts connections from multiple users. Only the screen updates are sent to each user's local device, while all the computing actually takes place on the remote server. In other words, the presentation of the session is abstracted from where the actual computation takes place. Terminal Services and Citrix XenApp are examples of session virtualization solutions.
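As a quick illustration (the server name is a placeholder), connecting to a session host and then listing the sessions that share it looks like this on Windows:

# Connect to a remote session host; only screen, keyboard, and mouse
# data cross the wire, while the programs run on the server
mstsc.exe /v:rdsh01.contoso.com
# Run on the server itself, this lists the user sessions sharing it
quser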
Technologies that enable users to work on many different devices while keeping the same data and environment configuration have also gained the virtualization stamp. Users of previous Windows operating systems will know of Folder Redirection and Roaming Profiles. Later in the book you learn about other, more advanced technologies, particularly for the virtualization of user settings. Here again, the user's data and settings are abstracted from the underlying computer, into which they are typically hard-linked as part of the operating system.
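For example, Folder Redirection works by pointing a user's shell folders at a network location instead of the local disk. A read-only peek at the registry shows the per-user paths (the network path in the comment is hypothetical):

# Show where this user's known folders actually live; with Folder
# Redirection, entries such as Personal (Documents) can point to a
# network path like \\server\users\john instead of the local profile
Get-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"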
One relatively new technology is application virtualization, which enables the decoupling of an application from the operating system. Traditionally, an application had to be installed on the user's computer before it could be used, adding components to the operating system, updating settings containers, and writing data to the local disk. With application virtualization, application code is downloaded from a remote site and runs locally on a computer without requiring any changes to the operating system, so it has zero footprint. Note that this differs from session virtualization in that the computation happens on the user's device, not on a remote server. The Microsoft application virtualization technology is App-V.
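For a taste of what this looks like with App-V (covered in depth later in the book), the App-V client's PowerShell module can add and publish a sequenced package from a share; the package path below is an example:

# Load the App-V client module, then add a sequenced package and
# publish it so it appears to the user without a traditional install
Import-Module AppvClient
Add-AppvClientPackage -Path "\\server\share\MyApp.appv" |
    Publish-AppvClientPackage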
Throughout this book I describe the technologies that implement these and other categories of virtualization, including how they are used, when they should be used, and how to build the right IT infrastructure using the appropriate virtualization technology. To provide some context, the following section looks at changes in the industry over the last 30 years as I've experienced them. This is my personal view, not a traditional textbook history of computers, which you can find elsewhere. It reflects what I've seen in my years of consulting and acting as a trusted advisor for enterprises of various sizes and stripes, and provides some insight into what lies ahead.
The Dawn of Virtualization
For American readers, the ZX Spectrum was similar to a Commodore 64. There were many schoolyard arguments over which one was better.
When I was about eight years old I got my first computer, a ZX Spectrum with 48KB of memory that connected to the television and loaded its software (games, mostly) from cassette tapes. I still have it on the wall in my office as a reminder of where my love of computers started. I played around with the BASIC language that came with it, creating epic code such as the following, and I felt very proud when I entered it on machines in stores:
10 PRINT "JOHN IS GREAT"
20 GOTO 10
Over time I moved on to a Dragon 32 that used cartridges, then a new Spectrum with 128KB of memory and a built-in tape drive (the ZX Spectrum 128 +2), and a Commodore Amiga. Then one day my dad brought home a PC (I think it was a 286) with MS-DOS and 5.25-inch disks. When 386 computers with larger internal disks came along, we upgraded our machine, installed Windows, and I started to play around with the C programming language and later with Java.
When I was 18 years old I got a job at Logica, which at the time was a large systems house. I worked in the Financial Services division and was hired as the VAX/VMS systems administrator. I had no clue what that meant, but they said they would train me and pay some of my tuition while I worked toward my degree. The position sounded amazingly advanced, and as I walked into my first day at work I had visions of futuristic computing devices that would make my home machine look like junk. Unfortunately, instead of some mind-controlled holographic projection computer, I saw an old-looking console screen with green text on it.
As I would later find out, this device (a VT220) was just a dumb terminal that sent keyboard input to a VAX/VMS box in the basement (where I spent a large part of my early systems management duties changing backup tapes and collecting printouts to deliver to developers on the team). The VAX/VMS server had all the actual computing power, memory, storage, and network connectivity. Everyone on the team shared the servers, each with their own session on this shared computer, which time-shared the computer's resources, specifically the CPU. In essence, this was an early form of the session virtualization described earlier. It was very different from my experience up to that point, in which all the computation was performed on my own device. For these large enterprise applications, however, it made sense to share the computing power, so there were multiple instances of our application running on the same physical server, each in its own space. There were also mainframe systems from IBM that created virtual environments able to act as separate machines, an early form of machine virtualization. Eventually, Windows for Workgroups-based PCs were introduced into our team, and certain types of workloads, such as document creation, moved to the GUI-based Windows devices.