What and Who is DevOps?
DevOps provides a system that pulls together and reeducates developers, sysadmins, and leaders on the full lifecycle of code.
In every decently successful project's life, there comes a point when the number of servers starts growing rapidly. The server with the application ceases to cope with the load, so you have to phase in another couple of servers and put a load balancer between them. The database, which used to live comfortably alongside the application, has grown larger and needs not just a separate machine, but one more for reliability and speed. The local theorists on the team discover microservices, and now instead of one problem with a monolithic application, you have many microproblems.
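The load-balancing step above can be sketched with a minimal reverse-proxy configuration. This is a hypothetical nginx example; the addresses, ports, and file path are made up for illustration, and a real setup would also need health checks and TLS.

```nginx
# Hypothetical /etc/nginx/conf.d/app.conf
# Round-robin load balancing between two application servers.
upstream app_backend {
    server 10.0.0.11:8080;   # first app server (illustrative address)
    server 10.0.0.12:8080;   # second app server added under load
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;   # forward requests to the pool
        proxy_set_header Host $host;     # preserve the original Host header
    }
}
```

By default nginx distributes requests across the `upstream` servers in round-robin order, so adding a server to the pool is a one-line change.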
The number of servers grows far beyond a few dozen in the blink of an eye, and each of them needs to be monitored, logged, and protected from inner ("Whoops, I accidentally dropped the database") and outer threats.
The number of technologies in use grows after every meeting of programmers who want to play with ElasticSearch, Elixir, Go, Lotus, and God knows what else.
Progress doesn't stand still: hardly a month goes by without important updates to your software and operating system. You have just gotten used to SysVinit, but now they say you need to use systemd instead. And they are actually right.
Only a couple of years ago, dealing with this infrastructure growth required just a few system administrators skilled in bash scripting and manual hardware configuration. Now you would need to hire a couple more of them every week to keep hundreds of machines under control. Or search for alternative solutions.
A system administrator, or sysadmin, is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially multi-user computers such as servers.
None of these problems are new – skilled system administrators learned programming long ago and automated everything to the limit. Thanks to them, we now have tools like Chef and Puppet. But now we face another problem: not all sysadmins were skilled enough to retrain and become real software engineers of complex infrastructures.
Moreover, programmers who still do not know much about what happens to their applications after deployment stubbornly continue to blame sysadmins when new versions of the software eat all the CPU and open doors to all the hackers in the world. "My code is perfect, you simply can't tune the server properly," they say.
In this complicated situation, engineers and those who sympathize with them had to start doing outreach. And how can one succeed at that without a buzzword? That is how DevOps was born – a marketing term that evokes different associations in people's minds, from "internal company culture" to "jack of all trades."
Originally, DevOps had little to do with any single role in an organization. Many people still state that DevOps is a culture, not a profession, one that requires close ties between developers and system administrators.
A developer should understand the infrastructure he works with and be able to find out why a new feature that runs fine on a laptop suddenly takes down half of a data center. Such knowledge prevents many conflicts: a programmer who knows how servers work will never shift the responsibility onto a system administrator.
DevOps also includes topics like Continuous Integration, Continuous Delivery, and Continuous Testing.
DevOps has naturally evolved from "culture" and "ideology" into a "profession." The number of vacancies with this word in their titles grows rapidly. But what do recruiters and companies expect from a DevOps engineer? They usually expect a mixture of skills that includes system administration, programming, knowledge of cloud technologies, and large-scale infrastructure automation.
It means that it is not enough to be a good programmer. One should also be well-informed about networks, operating systems, virtualization, security, and system resiliency, as well as a range of technologies from common, time-proven tools like iptables and SELinux to fresh and trendy ones like Chef, Puppet, and Ansible.
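To give a flavor of what these automation tools look like, here is a minimal Ansible playbook sketch. The group name `webservers` and the choice of nginx are illustrative assumptions, not something prescribed by the article; a real playbook would live in a larger inventory and role structure.

```yaml
# Hypothetical site.yml: make sure nginx is installed and running
# on every host in the (assumed) "webservers" inventory group.
- hosts: webservers
  become: true                  # escalate to root for package/service work
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present          # idempotent: does nothing if already installed

    - name: Enable and start nginx
      service:
        name: nginx
        state: started
        enabled: true           # also start on boot
```

The point of such tools is idempotence: running the playbook twice against a hundred machines is safe, which is exactly what manual bash-over-SSH administration cannot guarantee.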
At this point a careful reader-programmer would say:
It is silly to think that a programmer who already has tons of tasks in a project will learn so many new things about infrastructure and system architecture in general.
Another careful reader-sysadmin would say:
I am good at recompiling the Linux kernel and configuring networks; why do I need to learn programming, and why do I need your Chef, Git, and other weird stuff?
We would answer this way: a real engineer is not the one who knows Ruby, Go, Bash, or "network configuration," but the one who can build complex, beautiful, automated, and safe systems and understand the whole life cycle, from the lowest level all the way up to generating HTML pages and sending them to a browser.
Surely we can agree that one cannot be an absolute professional in every sphere of IT at every moment in time. But DevOps is not only about people who do everything well. It is also about eradicating as much ignorance as possible on both sides of the fence, whether you are a sysadmin tired of manual work or a developer praying to AWS.
In this series of articles, we will learn about the basic tools and technologies of a modern DevOps engineer, step by step.
A developer who wants to know more about the life of his code after deployment will get the necessary details and a basic understanding of the whole ecosystem, thus becoming more than just a Ruby/Scala/Go developer with Ansible skills.
Young (and not so young) minds willing to do DevOps will get a picture of how everything works and the guidance they need for further learning. Afterward, they will be able to comfortably maintain up to two dozen organizations at once and help developers and sysadmins become friends.
System administrators who are bored at their jobs will learn a few new tools that will help them remain in-demand professionals in the age of cloud technologies and total automation of infrastructures at every scale.
You will need Linux for this series, and we strongly recommend a Red Hat distribution. The author of the article uses Fedora 27 Workstation as his main system, and the mkdev servers run on CentOS 7.
In the next article, we will get an overview of virtualization: what it is used for and how to use it.
Published at DZone with permission of Kirill Shirinkin. See the original article here.