The History and State of Virtualization
This love letter to virtualization explores its history from the 1960s until today, examines some popular current trends in IT, and considers virtualization's impact.
The importance and application of virtualization extends far beyond virtual machines.
Few advances in information technology over the past 60 years have been as valuable as virtualization. Many IT professionals think of virtualization in terms of virtual machines (VMs) and their associated hypervisors and operating systems, but that is just the tip of the iceberg. An increasingly wide range of virtualization technologies, strategies, and capabilities is redefining the key elements of IT in organizations around the world.
Definition of Virtualization
Defined broadly, virtualization is the science of turning a physically realized object or resource into a software-simulated or emulated one that is functionally identical to the original.
In other words, we use abstraction to make software look and act like hardware, gaining significant advantages in flexibility, cost, scalability, overall capability, and performance across a wide range of applications. Virtualization thus makes real what really is not, substituting the flexibility and convenience of software for an equivalent physical implementation.
Virtual Machines (VMs)
The VM era dates back to a small number of mainframes in the 1960s, most notably the IBM System/360 Model 67, and VMs became commonplace in the mainframe world during the 1970s. With the arrival of the Intel 80386 in 1985, VMs took up residence in the microprocessors at the heart of personal computers. The VM capability built into modern microprocessors, with the necessary hardware support and implemented either via hypervisors or at the OS level, is vital to computing productivity: it captures machine cycles that would otherwise be lost on today's high-performance 3+ GHz processors.
Virtual machines also provide additional security, integrity, and convenience at very modest computational cost. Moreover, VM capabilities can be extended by adding emulator functions for interpreters, such as the Java virtual machine, and even full system simulators. Running Windows under macOS? Easy. Running Commodore 64 code on a modern Windows PC? No problem.
Crucially, the software running inside a virtual machine is unaware of that fact. Even a guest OS originally designed to run on bare metal believes the VM is its "hardware" platform. This is the essence of virtualization itself: implementing information systems on top of the isolation provided by APIs and protocols.
In fact, we can trace the roots of virtualization to the time-sharing era, which also began in the late 1960s. Mainframes at the time were anything but portable, so the rapidly improving quality and availability of dial-up and leased telephone lines, along with better modem technology, made it possible to be virtually present at the mainframe through a (usually alphanumeric) terminal. A virtual machine, indeed. Thanks to advances in microprocessor technology and economics, this model of computing led directly to the personal computers of the 1980s, with local computing supplementing data transfer over the phone line, which in turn evolved into local networks and, ultimately, today's continuous access to the Internet.
The concept of virtual memory, which also developed rapidly in the 1960s, is no less important than the idea of virtual machines. The mainframe era was marked by the extraordinarily high cost of magnetic-core memory, and mainframes with more than one megabyte of it were generally rare until the 1970s. As with virtual machines, virtual memory is enabled by relatively small additions to hardware and instruction sets that allow portions of storage, usually called segments and/or pages, to be written out to secondary storage, with memory addresses within those blocks translated dynamically as they are swapped in from disk.
One real megabyte of RAM on an IBM System/360 Model 67, for example, could support the full 24-bit address space (16 MB) defined by the computer's architecture, and with a proper implementation each virtual machine could have its own complete virtual memory space. As a result of these innovations, hardware designed to run a single program or operating system could be shared by multiple users, even if they ran different operating systems or required more memory than was physically available. The benefits of virtual memory, like those of virtual machines, are numerous: separation of users and applications, improved security and data integrity, and significantly improved ROI. Sound familiar?
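The page-based translation described above can be sketched in a few lines. This is a hypothetical illustration, not any real machine's MMU: addresses split into a page number and an offset, a page table maps virtual pages to physical frames, and a missing mapping models a page that has been swapped out to secondary storage.

```python
PAGE_SIZE = 4096  # bytes per page (a common modern choice; the 360/67 used different sizes)

# Hypothetical page table: virtual page number -> physical frame number.
# A value of None models a page currently swapped out to disk.
page_table = {0: 7, 1: 3, 2: None}

def translate(virtual_addr):
    """Translate a virtual address to a physical one, or raise a page fault."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:
        # In real hardware this traps to the OS, which loads the page
        # from disk, updates the page table, and retries the instruction.
        raise LookupError(f"page fault: virtual page {page} is not resident")
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```

Because only the resident pages need physical frames, a machine with one real megabyte can present a much larger address space, exactly the trick the 360/67 exploited.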
After machines and memory were virtualized, and both found their way into inexpensive microprocessors and PCs, the next step was desktop virtualization and, with it, the availability of applications, both single-user and shared. Again, we return to the time-sharing model described above, but in this case the PC desktop is simulated on a server, with the graphics and other user-interface elements delivered over a network connection via client software, often running on an inexpensive, easily managed, and secure thin-client device. Every leading operating system today supports this capability in one form or another, alongside a wide range of additional hardware and software products, including VDI, the X Window System, and the very popular (and free) VNC.
The next major achievement, now widespread, is the virtualization of processors, storage, and applications in the cloud: the ability to pull in whatever resource is needed at any moment, and to add and scale capacity with little or no effort from IT staff. Savings in physical space, capital expenditure, maintenance, downtime from failures, time-consuming troubleshooting, serious performance and outage problems, and many other costs mean that cloud-hosted services can genuinely pay for themselves. Storage virtualization, for example, opens up many opportunities here.
The widespread adoption of cloud storage (not only for backup but also as primary storage) will become ever more common as both wired and wireless networks deliver data rates of 1 Gbit/s and higher. This is already a reality with Ethernet and 802.11ac Wi-Fi, and with one of the most eagerly awaited high-speed networks, 5G, now being trialed in many countries.
Even in networking, virtualization is gaining ground: network as a service (NaaS) is now a promising and very popular option in many cases. This trend will only grow with the further adoption of network functions virtualization (NFV), which will certainly be of greatest interest to carriers and service providers, especially in mobile communications. Notably, network virtualization gives mobile operators a real opportunity to broaden their range of services, increase bandwidth, and thereby make their offerings more valuable and attractive to corporate clients. Over the next few years, an increasing number of organizations will likely deploy NFV in their own and even in hybrid networks (again, a factor in attracting customers). Meanwhile, VLANs (802.1Q) and virtual private networks (VPNs) continue to make a huge contribution to modern virtualization practice.
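The VLANs mentioned above virtualize a physical network by inserting a small tag into each Ethernet frame. As a minimal sketch (an illustrative parser, not a complete frame decoder), the snippet below reads the 802.1Q tag that follows the two MAC addresses: a TPID of 0x8100 signals a tagged frame, and the low 12 bits of the following TCI field carry the VLAN ID.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q tag

def vlan_id(frame: bytes):
    """Return the VLAN ID of an 802.1Q-tagged Ethernet frame, or None if untagged."""
    # Bytes 0-11: destination + source MAC; bytes 12-13: TPID (or plain EtherType).
    (tpid,) = struct.unpack_from("!H", frame, 12)
    if tpid != TPID_8021Q:
        return None  # untagged frame
    (tci,) = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF  # VLAN ID is the low 12 bits of the TCI

# Hypothetical tagged frame: 12 bytes of MAC addresses, TPID 0x8100,
# TCI with priority 0 and VLAN 100, then the inner EtherType (IPv4).
frame = bytes(12) + struct.pack("!HHH", 0x8100, 100, 0x0800)
print(vlan_id(frame))  # -> 100
```

Those 12 bits allow up to 4094 usable virtual networks to share one physical wire, which is precisely the isolation-through-protocol idea running through this article.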
Virtualization Reduces Costs
Even considering the wide range of significant functional capabilities virtualization offers, what attracts the most attention is still the economics of virtualization at scale. The competitive pressure of a rapidly evolving business model built on cloud services means that the traditional, labor-intensive operating costs organizations bear every day will decline, as service providers draw on their own experience to develop new offerings that save money and, thanks to market competition, pass lower prices on to end users.
It also becomes easy to increase reliability and fault tolerance by using multiple cloud providers in fully redundant or hot-standby mode, virtually eliminating single points of failure. Many IT costs traditionally booked as capital expenditure thereby shift to operating expenditure; for the most part, money is spent not on more equipment, capacity, and staff within the organization, but on service providers. Again, thanks to the capabilities of modern microprocessors, improvements in systems and architectures, and dramatic gains in the performance of both local and wide-area (including wireless) networks, virtually every element of IT today really can be virtualized and, where needed, delivered as a scalable cloud service.
Virtualization itself is not a paradigm shift, although it is often described as such.
The meaning of virtualization in any form is to allow IT processes to be more flexible, efficient, convenient, and productive with the help of a huge range of features, as described above.
Given that most cloud services in IT are built on a virtualization strategy, virtualization is arguably the best alternative available today to the traditional operating model, with economic advantages that will make conventional ways of working uncompetitive.
The rise of virtualization here stems from a significant economic inversion of the IT operating model, one whose roots go back to the beginning of commercial information technology.
At the dawn of computing, attention centered on costly and often overloaded hardware such as mainframes. Their enormous cost motivated the first attempts at virtualization, described above.
As hardware became cheaper, more powerful, and more accessible, the focus shifted to applications running in broadly standardized and virtualized environments, from PCs to browsers.
The result of this evolution is what we see now. Where computers and computing were once the backbone of IT, attention has shifted to information itself: processing it and making it available anytime, anywhere. This information-centric model is the natural outcome of the mobile and wireless era, and as a result the end user can obtain information and have it at hand at any time, regardless of location.
What began as a way to work more efficiently with slow, very expensive mainframes has led to virtualization becoming the central strategy for the entire future of IT. No IT innovation has had a greater impact than virtualization, and with the shift to cloud virtualization infrastructure, we are really only at the start of something global.
Opinions expressed by DZone contributors are their own.