We start our tale in the middle of the journey. Today, we have seen computing evolve from its earliest incarnations up to platforms like IBM Watson and Tesla Autopilot: self-learning, self-driving, self-organizing systems. They feed on real-time information. They feed back decisions. They feed back a vision of elevating the way we consume IT.
Rewards aren’t without risk. Sadly, we have also seen one of the fears around AI come true in the tragic loss of life involving Tesla Autopilot. This should be viewed as a difficult lesson, but not as the end of the platform. If anything, it marks the start of an important journey toward the next steps in our autonomic computing evolution.
In the Beginning…
Big Data may seem like a trend today, as we enter a generation where we’ve gathered trillions of data points or chosen to retain unstructured data pools that can be mined for valuable information in a host of different ways. The truth, though, is that Big Data isn’t new: relatively speaking, it doesn’t get much bigger than the early mainframe environments.
We could dedicate an entire series to the original data designs that were prevalent in the mainframe environment. Data aside, let’s just look at the mainframe conceptual design.
The centralized computing model was born. This delivered on the idea of a central system providing compute, memory, and storage. Network access to the compute environment was provided through external controllers, which connected what are often called “green screen” terminals, so named for their monochrome, text-only displays.
Data access was provided in real time via the terminals and terminal emulators. As data was entered and uploaded through direct access, FTP (File Transfer Protocol), and other data interfaces, programs would be run to process that data.
Processing was done in two ways, known as batch and online. Online meant an immediate run of the application processing, whereas batch meant jobs were triggered in a bundle (thus the name) on a time interval. Batch processing usually ran overnight.
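The difference between the two modes can be illustrated with a minimal sketch (the function names and the uppercase "processing" step here are hypothetical, purely to show the pattern): online work is processed the moment it arrives, while batch work queues up until a scheduled run drains it in one pass.

```python
from collections import deque

def process(record):
    # Hypothetical per-record work: here, just uppercase a string.
    return record.upper()

def online_submit(record):
    # Online: each request is processed immediately on arrival.
    return process(record)

# Batch: requests accumulate in a queue and run together on a schedule.
batch_queue = deque()

def batch_submit(record):
    batch_queue.append(record)

def run_batch():
    # In practice this would be triggered by a scheduler, e.g. overnight.
    results = []
    while batch_queue:
        results.append(process(batch_queue.popleft()))
    return results

online_result = online_submit("alice")   # handled right away
batch_submit("bob")
batch_submit("carol")
batch_results = run_batch()               # both records processed in one pass
```

The trade-off is the same one mainframe shops managed deliberately: online work gives immediate answers but consumes peak-hour capacity, while batch work defers cheap bulk processing to idle time.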
Processing the data consumed CPU capacity, measured in MIPS (Millions of Instructions Per Second). A chargeback model was often employed to ensure that the cost of maintaining the large-scale centralized environment was distributed fairly, based on usage by each program. Those programs were assigned to budget codes and to departments.
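The arithmetic behind such a chargeback model is simple proportional allocation. A minimal sketch, with entirely made-up budget codes, usage figures, and monthly cost:

```python
# Hypothetical monthly CPU usage per budget code, in CPU-seconds.
usage = {"FIN-001": 1200, "HR-002": 300, "OPS-003": 500}

monthly_cost = 10000.0  # hypothetical total cost of running the environment

# Each department pays a share proportional to its CPU consumption.
total_usage = sum(usage.values())
chargeback = {
    code: monthly_cost * seconds / total_usage
    for code, seconds in usage.items()
}
# FIN-001 used 60% of the CPU, so it carries 60% of the cost.
```

Modern cloud billing works the same way at its core: meter consumption per account, then apportion cost by share of the metered total.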
The OG Virtualization and Cloud?
Could this have been the dawn of the concept of cloud computing? In effect, it was: a centralized compute model in which you are charged for data storage, CPU usage, and processing time. It contains a lot of what would one day become part of the definition of cloud computing.
LPAR (Logical PARtition) separation within the mainframe environment allowed us to divide the CPU, memory, and storage into subsets of the physical environment. LPAR isolation allowed each partition to operate independently, without impacting the other LPARs in the environment.
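Conceptually, carving a physical machine into LPARs means handing out fixed slices of its resources and never allowing the slices to exceed the whole. A minimal sketch of that bookkeeping (the class names, partition names, and capacities here are invented for illustration, not any real LPAR configuration interface):

```python
from dataclasses import dataclass

@dataclass
class Machine:
    cpus: int
    memory_gb: int

@dataclass
class LPAR:
    name: str
    cpus: int
    memory_gb: int

def carve(machine, lpars):
    # Refuse any layout that over-allocates the physical resources.
    if sum(p.cpus for p in lpars) > machine.cpus:
        raise ValueError("CPU over-allocated")
    if sum(p.memory_gb for p in lpars) > machine.memory_gb:
        raise ValueError("memory over-allocated")
    return {p.name: p for p in lpars}

host = Machine(cpus=16, memory_gb=256)
partitions = carve(host, [LPAR("PROD", 10, 192), LPAR("TEST", 4, 48)])
```

Each resulting partition then runs its own workload against its own slice, which is exactly the isolation property that later reappears in hypervisors and multi-tenant clouds.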
We had both virtualization and multi-tenant environments in a large centralized model. It sure sounds familiar in that sense, doesn’t it?
Everything Old is New Again
You will see a common theme through the series as we discuss the evolution of computing models and platforms. It boils down to this simple statement I like to use:
History doesn’t repeat itself; it iterates on itself.
There was much more that the mainframe platform provided, and still does. It’s an important start in the journey towards what we call autonomic computing.
Join us for our next post which will talk about the rise of the distributed computing environment alongside the centralized, mainframe-centric model.