
Why I Returned to Windows


Should you?


YouTube: https://youtu.be/VojyqiBiIYU

https://youtu.be/7aLsjhPzCAI

I’m a Unix guy. Note that I did not say Linux. When I started my career, many small to mid-sized companies were running on minicomputers from companies such as IBM, Digital Equipment Corporation (DEC), PR1ME Computer, and others. Doctor Who fans might get a chuckle out of this blast from the past.


In the early to mid-eighties, with the advent of significantly less expensive componentry, another revolution began beyond the desktop PC: much more affordable business systems based on Unix started to be produced. The movement away from proprietary operating systems, combined with the shift toward less expensive componentry, led to an explosion of Unix offerings.

Saying that there was a movement away from proprietary operating systems to Unix is a bit of a misnomer, though, as just about every company offering a Unix system customized it, which led to challenges in supporting the various vendors. In the early nineties, I was working for a small software company whose product was ported to 22 different flavors of Unix. It was quite challenging.

Like their proprietary minicomputer predecessors, interacting with these new Unix-based systems was primarily accomplished through a dumb terminal, or CRT.

Photo: Digital Equipment Corporation

However, joining the windowing-systems party, Unix vendors began offering windowed interfaces. The X Window System (X-Windows) quickly became the de facto standard windowing system for Unix.

By Liberal Classic - Liberal Classic, MIT, https://commons.wikimedia.org/w/index.php?curid=1680280

While the advancement of Unix terminals was welcome, the terminals on which X-Windows would run, known as X Terminals, were very expensive.

It was also during this time that desktop computers became prevalent in the workplace. Just as Unix systems were becoming a major player in the business world, Microsoft Windows arrived. Windows 2.0 was considered by many to be the first version of Windows that could truly improve business productivity, and it quickly became the primary operating system for office work.

With Windows gaining popularity as an operating system focused on improving employee productivity and Unix doing the same for back-office functionality, a conflict arose. People who needed products developed for Microsoft Windows as well as access to Unix servers ended up with both a Windows desktop system and an X Terminal or CRT on their desks. Aside from the fact that this was an expensive setup, desk space was a challenge, as these were not small systems.

It didn’t take long for the industry to respond. Without getting into the gory details, X-Windows is a client/server protocol: the terminal, which in X-Windows parlance is the server, coordinates with client processes executing on the (typically Unix) host system.

Given that it’s a client/server protocol, companies began offering X-Windows software for Microsoft Windows. Now a Windows desktop could serve the dual purpose of office productivity as well as windowed access to a Unix system. If you were a Unix developer or administrator working for a business that had standardized on Windows office functionality, this was a nice option to have. For businesses, it meant that developers or administrators didn’t need an expensive X Terminal.
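Because the display side is just a network server speaking the X protocol, pointing a client at it comes down to one environment setting. A minimal sketch of the idea, where the host names are hypothetical and the SSH variant reflects modern practice rather than anything from this era:

```shell
# On the Unix host: tell X clients where the X server (the user's display) lives.
# "pc-on-desk" is a hypothetical machine running X server software, e.g. an
# X Terminal or a Windows PC with an X-Windows package installed.
export DISPLAY=pc-on-desk:0    # format is host:display-number
xclock &                       # the window appears on pc-on-desk's screen

# The modern equivalent tunnels the same protocol over SSH (X11 forwarding):
ssh -X user@unix-host xterm
```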


Fast forward a little over a decade. For Unix developers, the landscape had not changed much. X-Windows had gotten nicer, and although there were still several flavors of Unix in existence, the landscape was consolidating. Toward the late nineties, a project that began as a hobby was being adopted by many large corporations forgoing their own version of Unix for a version named Linux. Even though Linux was not yet mainstream for businesses, the notion of a common version of Unix that would run on various hardware was appealing.

Similarly, because Linux was an open-source project, it had serious momentum behind it, including several very nice graphical user interface choices. More importantly, Linux could run natively on a desktop system. It’s not that other Unixes couldn’t, but their desktop offerings felt like afterthoughts in an attempt to thwart Linux’s momentum. Developers and administrators could now run Linux on their desktop systems and natively access other Linux and Unix systems.

By GNOME and redhat.com - http://www.fifi.org/doc/gnome-intro/html/introduction-to-gnome/C/firstglance.html, GPL, https://commons.wikimedia.org/w/index.php?curid=75565617

For Unix geeks, this was a major advancement that impacted their everyday lives. Suddenly, it was practical to run Unix on a desktop, plus it was free!


Even with all these advances, a challenge remained. Most businesses used Microsoft Windows software for productivity applications and even for backbone applications such as authentication and authorization. There was a turf battle underway; actually, multiple turf battles. Some of the Unix vendors wanted nothing to do with Linux. Microsoft certainly didn’t either.

Linux was the first real potential challenger to Windows’ dominance since IBM’s OS/2 many years before. Of course, OS X was making itself known, but it suffered from the same challenges as any operating system other than Windows: putting the different CPU architectures aside, Microsoft applications did not run on OS X, and that was unlikely to change.

For a large segment of the technology world, Microsoft became the bad guy. Much of the infrastructure in the business world was Microsoft-based, and there was little interoperability between Microsoft and any other operating system. Unix users finally had a legitimate version of Unix on their desktops, but they still needed access to Windows. Enter the era of dual-booting. For many, including myself, dual-booting Linux and Windows was a very workable option. If an organization allowed access to Exchange through email clients such as Thunderbird or Evolution, booting into Windows didn’t need to happen too often.

Technical people are never really satisfied, are we? Even though booting into Windows didn’t need to happen too often, it happened often enough to get frustrating. Desktops and laptops continued to get more powerful, and soon it was possible to run virtualization software on one operating system and a virtual machine with another. This was starting to feel good, and one could go either way; my choice was to run Linux as my primary operating system with Windows in a VM.

By Joey Sneddon, https://www.omgubuntu.co.uk/2016/07/virtualbox-5-1-new-features

As with all the other iterations, frustration eventually arose with the limitations of this configuration. You couldn’t leave the VM running all the time; even in a minimal configuration, it took a good deal of system resources. I’m not a gamer, but I do require a certain amount of graphics performance and responsiveness. For the most part, the system performed well, but when I needed it to do “real” Windows things, it just felt sluggish. I ventured on, as this was not too bad a setup, and, for the most part, I could deal with the performance issues.

Enter the age of compliance. Anyone involved in the Linux community knows how secure the operating system is. Yes, there are vulnerabilities, but far fewer than in Windows or OS X. However, many organizations are mandated to have management software installed that allows monitoring and control of computing assets (desktops, laptops, phones, tablets, etc.) from a central location. After running Linux in various configurations as my primary desktop for nearly ten years, I was summoned by our compliance officer.

We had a great relationship. I understood his challenges, and I certainly didn’t envy the situations in which he would sometimes find himself. I already had a feeling about what was coming. He told me that it wasn’t that running Linux as a desktop was insecure; it was just that in a company of 1,500+ employees where two or three individuals were running Linux desktops, he couldn’t justify the cost of purchasing and maintaining the software required to manage them.

There wasn’t much I could do, and I certainly didn’t want to make his job any harder than it already was. After I explained that I understood, he pulled a brand-new MacBook Pro from under his desk. I’d never seriously used a Mac; they were relegated to our UX and marketing groups. I thought, “Why not?” I knew there was something like Unix somewhere on the box.

For just shy of ten years, I was a Mac user. After the learning curve, I learned to like the machine. I loved the command line; it was Unix. Apple’s own flavor of Unix, but I’d been there and done that. It even had Microsoft Office, albeit as a second-class citizen compared to the Windows version. The system was rock solid and performed well. Even though it was somewhat of a compromise, I ended up being quite happy with my new machine.

This, of course, is not where the story ends. Several years ago, shortly after Satya Nadella took the reins at Microsoft, I saw the winds of change. Like many who avoided the Microsoft vortex, I was very skeptical of what I saw. At the same time, it was apparent that Mr. Nadella was going to take Microsoft in a different direction.

The circumstantial evidence suggested legitimate change within Microsoft: Office products running nearly as well on other platforms, including my Samsung phone and tablet, as they do on Windows, and Microsoft getting deeply involved in the Docker and Linux communities and earning praise from non-Microsoft members of those communities. There’s a long way to go, but for several years now, Microsoft has been making the case that it wants to play well with others, just as any other major corporation would.

I recently found myself in a position to purchase a new personal system. I started where I was most comfortable: the MacBook Pro. However, something was nagging at me. I knew these systems were more expensive than their Windows/Linux counterparts, but I had always attributed that to how rock solid they were. As I thought it through, I realized that OS X was no longer the rock-solid operating system it had been; it was still very good, but not as good as it used to be.

There’s still great support for developers and engineers, but as I contemplated that, I realized that almost everything I was using for my day-to-day work came from a third party and was operating-system agnostic. With Docker having become an integral part of a developer’s toolset, the need for a specific operating system is minimized.
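To make the OS-agnostic point concrete: a containerized toolchain is defined once and behaves the same on a Windows, macOS, or Linux host. A hypothetical, minimal sketch, where the base image and packages are illustrative assumptions rather than anything from my actual setup:

```dockerfile
# Minimal dev-environment image; builds and runs identically on any host
# with Docker installed, regardless of the host operating system.
FROM ubuntu:22.04

# Illustrative toolchain; swap in whatever your project actually needs.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /work
CMD ["/bin/bash"]
```

The host OS only needs to run the Docker engine; the tools live in the image.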


I decided to (gasp) look at systems capable of running Windows and/or Linux. I figured that, if I wanted to, I could buy a Windows machine and rebuild it with Linux. I wanted a behemoth of a machine. I use Docker and Kubernetes a lot; I could run them on AWS or some other cloud provider, but the costs add up quickly. I went for broke and priced out an 8-core i9 system with 32GB RAM and a 1TB NVMe SSD. The Dell version of this system, including an OLED display, was on sale for just over $2,200. The equivalent MacBook Pro was $3,600. That’s a pretty big difference.

I did it. After nearly 20 years, I switched back to a Windows machine. Upon arrival, my first thought was to reformat the thing and put Linux on it. However, considering the newness of the hardware, the GPU and OLED display, and a handful of other concerns, I decided to go ahead and give Windows a try. I’d read quite a bit about the Windows Subsystem for Linux (WSL), and WSL 2 is supposed to be even better. I knew there were multiple options for running Docker and Kubernetes.

All my development tools supported Windows, and Visual Studio Code has an awesome feature integrating with the WSL file system and shell. So, why run Linux natively? Even that is a bit of a misstatement: WSL is not full Linux, but it is much closer to Linux than the OS X command line. The integration with Windows is pretty smooth. It has a few warts, but I don’t find them distracting.
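One practical consequence of WSL not being full Linux is that shell scripts occasionally need to know which environment they are running in, for instance to work around path-translation quirks. A small sketch of one common detection technique; checking the kernel banner in /proc/version, where the match string is an assumption that holds for current WSL builds:

```shell
# Print "wsl" if the kernel banner identifies a WSL kernel, "native" otherwise.
# Both WSL 1 and WSL 2 kernels report "Microsoft"/"microsoft" in /proc/version.
if grep -qi microsoft /proc/version 2>/dev/null; then
  echo "wsl"
else
  echo "native"
fi
```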

Here I am. A Unix guy doing all my Unix work on Windows using WSL, Minikube, and all my favorite tools. So far, I’m a pretty happy camper.


I’d say, “Mark my words,” but that may be a bit too bold. I’ve been saying it for about four years now. I believe, like OS X, Windows will become a Unix kernel-based system. Unlike OS X, I believe Windows will be Linux kernel-based. It’s just good business. Today, the money is in the applications running on the operating system. With Docker and Kubernetes spreading like wildfire, from an application standpoint, the operating system becomes even less important.

Microsoft ports its applications to the Linux kernel, and its reach expands tremendously. Applications running on Linux would then run on a Linux-based Windows. Everyone wins. This may all sound a little Pollyannaish, but technology tends to make strange bedfellows.



Opinions expressed by DZone contributors are their own.
