
The Autonomic Computing Journey: eNTer the Distributed Dragon


It started with the mainframes. As we approach the concept of modern cloud and autonomic computing, see how Windows NT and other tools reshaped the terrain.


In our first post, we looked back at the era when mainframe computing ruled the data center. It did so largely because the marketplace offered few alternatives capable of meeting business needs with other forms of computational design. Slowly but surely, the market gave way to a wave of new options in what would become known as distributed computing.

We chose to skip over the mid-tier discussion because it largely mirrored the mainframe model, just on smaller systems that followed a similar approach of running mostly batch workloads alongside some online systems. Beyond the mid-tier, the once Unix-dominated mid-sized server market was suddenly being taken over by a new player in the game.

eNTer the Dragon

Windows NT gained popularity because it could be deployed onto affordable x86 servers of every size, from small to large. Where Unix had been the flavor of choice for many sysadmins of the early 80s, the 90s presented a number of choices. Growth was happening, and the PC wars weren’t being fought just at the desktop, but in the server rooms as well.

x86 servers were becoming economical, and the development environments used to create and operate applications were suddenly a wide-open field. Sun SPARC systems were no longer the only web servers in town. File servers and other environments also became a battleground for market dominance. Novell had paired directory services with its file server, and the early versions of Windows NT lacked the ability to do much of what Novell had done in that space.

Windows NT also came with Domains, and at the desktop, Windows for Workgroups was a popular alternative. We may look back and laugh at the lack of features now, but at the time it was truly a bold move into a new area of computing, one that would change a generation.

More Servers, Please!

Data centers were now filling with shelves of tower servers, which was clearly a problem for scaling up, and the next move would truly begin a new generation in physical design. Rack servers presented a standardized mounting template and meant higher density in the same space. They also allowed more efficient cooling and power configurations, since the machines were colocated inside the now-standard 42U racks.
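As a rough back-of-the-envelope sketch (the figures here are illustrative assumptions, not numbers from this article), the density gain is easy to see:

```python
# Illustrative rack-density arithmetic (all figures are assumptions).
RACK_HEIGHT_U = 42        # the now-standard 42U rack
SERVER_HEIGHT_U = 1       # a typical 1U rack-mount server
TOWERS_PER_FOOTPRINT = 6  # hypothetical: tower servers shelved in the same floor space

rack_mounted = RACK_HEIGHT_U // SERVER_HEIGHT_U
print(f"1U servers per 42U rack: {rack_mounted}")                                # 42
print(f"Density gain over towers: {rack_mounted / TOWERS_PER_FOOTPRINT:.0f}x")  # 7x
```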

[Image: Windows servers everywhere]

Designs couldn’t keep up with consumer demand at this point. CPUs were getting faster. Memory chips were getting larger. Motherboard bus speeds were increasing, and peripheral card innovations would one day settle on PCI as the de facto choice. Application developers could suddenly work on their own servers without having to pay for mainframe MIPS, and they gained isolation by ensuring that only their application ran on a given machine.

Server Sprawl and the Distributed Challenge

Imagine bumping into the boundaries of network capacity: not bandwidth, necessarily, but the logical boundaries of VLAN and IP segmentation. Layer 2 and Layer 3 were becoming a new play space for the modern sysadmin, and application designers were growing more ambitious. That meant we were continuously testing the limits of server performance, network performance, and design practices.
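To make that logical boundary concrete, here is a minimal Python sketch (the addresses are hypothetical) of the wall a one-VLAN-per-subnet design kept hitting: a /24 tops out at 254 usable hosts, so growth forced re-segmentation.

```python
# A quick sketch of the Layer 3 limits a 1990s sysadmin kept running into:
# each VLAN typically mapped to one IP subnet, and a /24 caps out fast.
import ipaddress

vlan_subnet = ipaddress.ip_network("10.10.20.0/24")  # hypothetical VLAN subnet
usable_hosts = vlan_subnet.num_addresses - 2         # minus network + broadcast
print(f"{vlan_subnet} supports {usable_hosts} hosts")  # 254

# Growing past that meant re-segmenting, e.g., widening to a /23.
wider = vlan_subnet.supernet(new_prefix=23)
print(f"{wider} supports {wider.num_addresses - 2} hosts")  # 510
```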

Windows Active Directory launched along with the Windows 2000 server platform. Desktops became more powerful, and so did the abilities and knowledge (sometimes dangerously so) of the developers and sysadmins. We were suddenly moving at a rapid pace, one beyond what we were prepared for.

Server counts approached the hundreds, and then the thousands. This was an untenable situation. Advancements were happening across all areas of the technology realm, and data center innovation was where the new engineers threw themselves in. Universities around the world were producing a new crop of innovators, who found themselves right in the throes of a growth pattern on a collision course with every physical and logical limit.

We knew that something had to change, and so did a few engineers led by Diane Greene. But that’s something we will get to soon, because at the same time that innovation was happening in applications, some other folks were creating a substantial change of their own. It would produce a rather strange pairing that has proven more successful than anyone could have imagined.


Topics:
windows nt, server, data center, directory

Published at DZone with permission of Eric Wright, DZone MVB. See the original article here.

