PuppetConf Speaker Martin Alfke on DevOps, ChatOps, and Learning Puppet
In this interview, longtime Puppet and automation enthusiast Martin Alfke gives some of his simple rules for DevOps success.
Martin Alfke is a longtime Puppet and automation enthusiast and has been a Puppet training partner since 2011. He is a Puppet Certified Professional and Puppet Certified Consultant. In his PuppetConf talk, Martin will discuss the difficulties of using commands within exec resources and the process of moving to Puppet types and providers.
As the co-founder and CEO of example42, Germany-based Martin is well known for his expertise in DevOps, infrastructure planning, and Puppet. We asked Martin a few questions about his experiences so we could all get to know him better ahead of PuppetConf. Read on!
Aliza Earnshaw: When and how did you first start with Puppet?
Martin Alfke: Back in 2007 I was working at a small company with a growing number of servers. One day I decided to automate the setup and configuration. Every Friday I started a couple of virtual machines and tried several solutions. One day my co-worker — Andreas König, author of CPAN — mentioned a new tool: Puppet. I was immediately able to have most configurations done by Puppet by the end of the day, whereas with other tools I could barely establish working communication with the central configuration server.
The next Tuesday, our student intern showed up, and I gave him the task of creating accounts for new users with Puppet. I sent him the link to the Puppet documentation and made him aware that he should be careful, as the Puppet master was already in production.
Half an hour later I showed up at his desk, asking him whether he was able to do the work. He told me that all accounts had already been created, users had received their welcome email, and that it had been easy to do. I was very happy to see that this new configuration management tool was so easy to learn, even for a junior system administrator.
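A task like that maps onto just a few lines of Puppet DSL, which is why a junior administrator could pick it up in half an hour. A minimal sketch (the account details here are invented for illustration, not from the original setup):

```puppet
# Illustrative only: declare a user account; Puppet creates it if missing.
user { 'jdoe':
  ensure     => present,
  comment    => 'Jane Doe',
  shell      => '/bin/bash',
  home       => '/home/jdoe',
  managehome => true,
}
```

The declarative style is the point: the intern described the desired state, and Puppet worked out the commands.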
Aliza: What do you enjoy most about training people on Puppet?
Martin: Most people who show up at Puppet trainings already have an understanding of system and configuration management. This allows us to dig into technical discussions far more easily.
As a trainer, I really like seeing my students when they get that wonderful, astonished look as they recognize why something is not working and figure out how they can proceed. This sometimes leads to funny situations, when a student slaps his face and shouts out, "Oh, my God."
Aliza: What's your best advice for anyone who wants to get started with Puppet?
Martin: Don’t say that you are going to start. Just start. And prepare yourself for tearing down the whole setup twice.
On the first implementation, start with the easiest thing to solve. Write Puppet code by yourself instead of using modules from Forge or GitHub. On the second implementation, start with the most complex configuration within your platform and start using upstream modules. This sequence will allow you to learn how to write and read Puppet code and will help you understand the concepts of Puppet in detail.
The third implementation will be the one you maintain for a longer period. Here, head for a control repo and CI workflows, roles and profiles, and separation of code and data using Hiera.
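The roles-and-profiles pattern mentioned here is a thin wrapper layer: profiles wrap component modules and pull their data from Hiera, and a role composes profiles so that each node gets exactly one role. A hedged sketch (class, module, and parameter names are made up for illustration):

```puppet
# Profile: wraps a component module and reads its data from Hiera.
class profile::webserver (
  String $server_name = lookup('profile::webserver::server_name'),
) {
  class { 'nginx':
    server_name => $server_name,
  }
}

# Role: composes profiles; a node is assigned exactly one role.
class role::frontend {
  include profile::base
  include profile::webserver
}
```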
Stop adding data to Hiera when you see that you are heading toward more than 10 hierarchy levels, and start rethinking your hierarchy. Having many levels leads to complexity in the data, and you will soon start wondering how and where you have to configure something. Continue once you have lowered the number of levels.
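In Hiera 5 terms, keeping the hierarchy short means a deliberately small hiera.yaml; a sketch along these lines (the layer names and paths are illustrative, not prescribed):

```yaml
# hiera.yaml (Hiera 5): a short, easy-to-reason-about hierarchy.
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Per-environment data"
    path: "environments/%{server_facts.environment}.yaml"
  - name: "Common defaults"
    path: "common.yaml"
```

With only a few layers, it stays obvious which file wins a lookup and where a value should live.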
Aliza: People are looking to DevOps to solve some difficult IT challenges. What do you see as the common characteristics of organizations that implement DevOps successfully?
Martin: Basically, DevOps is not tools. DevOps is people. You can have the smartest tools at hand, but you will still fail when doing DevOps if you don't have smart people. Companies that successfully implement DevOps have torn down walls and barriers between departments that collaborate with the same tools, and share common goals.
The first thing you will mostly find is metrics. Metrics give insight into systems and business; everybody wonders how they could have done their work prior to having metrics.
The next thing is dojos. Like the training halls where one learns martial arts, DevOps companies do something similar when it comes to learning new things, holding sessions on topics like UNIX for Devs, Coding for Ops, and Metrics and Data for Sales or BI.
What I have seldom seen is combined work on technical debt, which is necessary, from my point of view.
Aliza: What are the most common blockers to successful DevOps implementation?
Martin: Especially with large enterprises, I see difficulty in doing DevOps properly within top-level and mid-level management.
Top-level management has goals to achieve, and so directs application teams to focus on getting products to market faster, while demanding that IT operations teams keep systems stable and reliable. But these conflicting goals can never lead to successful DevOps.
Mid-level management ends up being mostly a blocker as they try to reach the next hierarchy level within the company. They ignore technical requirements and stop listening to their teams as they only try to convince their managers of their management skills.
All of these are common issues and explain why so few organizations successfully implement DevOps. This also explains why it's so important for upper management to create the space and provide direction for a DevOps culture to develop.
Aliza: You're known as an advocate for ChatOps. What is ChatOps, and how does it help?
Martin: ChatOps is the integration of chat into operational tasks. Prior to ChatOps, most work had to be done two or three times. You received an alert, acknowledged it, and then wrote into chat that you were working on it. You had to find the issue, fix it, and then describe what you had done in a wiki, in a ticket system, and in chat to let your colleagues know.
With ChatOps, everybody sees everything: the alert, the notification, the search for fixes, and the implementation of the fix. When using ChatOps you don’t have to switch tools. You can see what others are doing. Implementing ChatOps leads to less extra work for the Ops department.
But ChatOps is not only for Ops people. It can also be used by any other department, e.g., Sales asking for data from a production environment.
When planning for ChatOps, everybody should be aware that ChatOps is based on access grants. You can either have an ultra-secure environment, or you can have an environment where you can do stuff easily. Every organization needs to find its own level between security and flexibility.
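The access-grant idea can be made concrete with a small sketch: every chat command carries a list of allowed roles, and the dispatcher checks the caller before running anything. This is a hypothetical illustration in Ruby (the bot class, commands, and roles are invented, not from any real ChatOps tool):

```ruby
# Minimal sketch of ChatOps access grants: commands are registered with
# an ACL, and the dispatcher refuses callers whose role is not listed.
class ChatBot
  Command = Struct.new(:handler, :allowed_roles)

  def initialize
    @commands = {}
  end

  # Register a named command with the roles permitted to run it.
  def register(name, allowed_roles, &handler)
    @commands[name] = Command.new(handler, allowed_roles)
  end

  # Look up the command, check the caller's role, then run the handler.
  def dispatch(user, role, name, *args)
    cmd = @commands[name]
    return "unknown command: #{name}" unless cmd
    unless cmd.allowed_roles.include?(role)
      return "#{user}: not allowed to run #{name}"
    end
    cmd.handler.call(*args)
  end
end

bot = ChatBot.new
bot.register('deploy', %w[ops]) { |app| "deploying #{app}" }
bot.register('sales-report', %w[ops sales]) { 'report sent' }

puts bot.dispatch('alice', 'ops', 'deploy', 'shop')   # deploying shop
puts bot.dispatch('bob', 'sales', 'deploy', 'shop')   # bob: not allowed to run deploy
puts bot.dispatch('bob', 'sales', 'sales-report')     # report sent
```

Where an organization sits between security and flexibility then becomes a question of how the ACLs are populated, not of the bot's mechanics.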
Aliza: What are the most common problems you see when teams are trying to improve their configuration management?
Martin: I have seen many different approaches that mostly led to more complexity and did not reach the goal. Often people decide to switch to another solution because they did things wrong in the first place. In this case, they ignore the fact that everybody on the team must understand the new tool at least as well as the existing one. In other cases, people decide to refactor. Most refactoring is done by copying and pasting code from an old repository to a new one, keeping all the old ideas and complexity in place.
Most people will tell you that they lack the time for a fresh complete rewrite. I usually see them afterward when they tell me that they should have started from scratch.
The most common example nowadays is the migration to Puppet 4. Some companies just fixed their code, while others built their whole Puppet infrastructure from scratch. The latter are more successful.
Aliza: Is there a mindset or approach to configuration management you think is most helpful?
Martin: Four major ideas which should always be kept in mind:
- Automate the automation.
- Only let things into production that have been automated.
- K.I.S.S.—Keep it simple stupid.
- Plan for the unforeseeable.
Automate the automation. Configuration management and automation go hand in hand. It makes absolutely no sense to head for automation while keeping configuration management manual. Automating your configuration management will easily allow you to stage systems the same way you do it with applications.
Only let things into production that have been automated. Hey, it's 2016, not 1966. We have large numbers of systems; we deal with performance, bandwidth, and new solutions. We don't have time to do stuff manually. Automation will immediately let you concentrate on the important stuff.
K.I.S.S. You may either read KISS as “Keep it simple, stupid” or “Keep it simple stupid.” A sophisticated configuration management with automation makes no sense if most team members don’t understand how and why things are working. Build everything so that at least two-thirds of your team members understand why and how this is working. Train the remaining third to understand over time.
Plan for the unforeseeable. This last one is the most difficult. How to plan for something when you have no idea about what it might be? Basically, this means you never should build a configuration system that deals only with the existing platform and systems. Keep legacy systems in a separate place, allowing you to add new functionality more easily. Don’t over-engineer any solution, as this will lead to isolation of the solution. Keep everything extendable, use the Puppet modular approach, and use APIs wherever possible.
When it comes to successful implementations, I like drawing a picture. There is a sun, beach, waves. A cocktail in the right hand, a smartphone on the table. Suddenly the smartphone beeps and shows a message: 60 percent of servers are down. As you take a sip of the cocktail, the smartphone beeps again: All servers are up and running. This is the way a highly skilled infrastructure engineer can work when doing complete automation. Yes, there are still some steps missing, but these will be filled with solutions over time.
Aliza: What's a fun fact about you that we might not know, but would like to?
Martin: What I like telling people is that I have no idea what I am doing. I did an apprenticeship as an aircraft mechanic at Lufthansa and studied mechanical engineering afterward. My first contact with Linux was back in 1995, when a friend handed me 12 floppy disks from which I extracted Slackware onto a UMSDOS file system.
Aliza: What's the most interesting thing you've read recently? (Could be a book, article, a tweet, or even a video you watched.)
Martin: At the moment I am reading the ELOPe novels by William Hertling. Luckily, I started them in the right order by accident. The novels build a very interesting story around the creation of an AI, how this leads to further technology, and how humanity reacts.
Aliza: Go ahead, tease us: what can we look forward to in your talk about using Puppet types and providers?
Martin: First tease: I never give talks—I like telling stories. This time, I will tell attendees how we started with a module built on exec resources, why we decided to head for a defined resource type wrapping the exec resources, and how we failed.
When it comes to custom types and providers, I always hear that they are hard to learn and difficult to do. I want to show people that this is not true.
I will finish with how we implemented custom types and providers, which general ideas one should keep in mind when writing the types, and how to start with the providers.
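The move from exec resources to a native type/provider pair that he describes follows Puppet's standard Ruby plugin layout: the type file declares the interface users see in manifests, and the provider implements it against a real command. A hedged sketch, assuming a made-up `myapp_setting` type and an invented `myapp-ctl` binary:

```ruby
# lib/puppet/type/myapp_setting.rb -- the manifest-facing interface.
Puppet::Type.newtype(:myapp_setting) do
  ensurable                        # provides ensure => present/absent

  newparam(:name, namevar: true) do
    desc 'Name of the setting to manage.'
  end

  newproperty(:value) do
    desc 'Desired value of the setting.'
  end
end

# lib/puppet/provider/myapp_setting/cli.rb -- how state is actually managed.
Puppet::Type.type(:myapp_setting).provide(:cli) do
  commands myapp: 'myapp-ctl'      # resolves the binary, fails early if absent

  def exists?
    myapp('get', resource[:name]) != ''
  rescue Puppet::ExecutionFailure
    false
  end

  def create
    myapp('set', resource[:name], resource[:value])
  end

  def destroy
    myapp('unset', resource[:name])
  end
end
```

Unlike a bare exec, this gives Puppet real idempotency: `exists?` decides whether anything needs to run at all.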
Published at DZone with permission of Aliza Earnshaw, DZone MVB. See the original article here.