
Managing Complexity With Abstraction: Part II


Things are becoming more complex every day, and we're creating more layers to hide that complexity. Why do we do this? Here is Part II of Steve Curtis's article.


Did you miss Part I? Be sure to check it out here!

Bits and Bytes

[Diagram: Puppet and the orchestra]

In this diagram, I want to show how a simple reorganization of the same thing (bits) can yield powerful results because of well-formed structural artifacts (abstractions).

So how do we get organic capabilities from something built on such simple rules? What are good or bad abstractions? Let’s begin at the very beginning: the humble one and zero.

In the mid-70s, I bought a hobby microprocessor kit. It was a National Semiconductor Mini SCMP. The kit let me enter opcodes and data at addresses. The opcodes, addresses, and data were entered with a row of switches on the front panel and an enter button; lights displayed the data you had entered. Simple and entertaining, but slow to program. After a while I got bored, but it did give me an insight into the inner workings of the magic. It showed me there was a practical need for better management of digital bits. Granted, we had automatic address generation and an instruction set. We had a CPU that could respond to bit patterns as opcodes and make crude decisions. But how useful was it for the real world? Not very.

From Bits to Code

In other parts of the world, other more insightful people had grappled with this problem. They had encoded information in punchcards, borrowed as an idea from Jacquard’s loom. Then they moved to magnetic tape. They took bits that by themselves are pretty limited, and aligned them in bytes. They took those bytes and represented them as binary numbers with arithmetic properties and rules. They used Boolean logic to manipulate these bits in a CPU that has a set of fixed responses to each combination of numbers (instruction set). They added an abstraction layer and made the binary numbers into hexadecimal numbers that mapped over the bits perfectly. They had encoded these numbers into magnetic signals and made them available for reuse by the rules engine — a CPU.

Then they did some of the most powerful things in the shortest time. They made a text-editable assembler that notated the hexadecimal as pseudo-code, or mnemonics: words that could be typed, printed, and read by people. The first elements of symbolic computer language were born. What people had already realized with this development was that groups of instructions or variables often reappeared when solving similar problems. Concepts like loops, decisions, variable storage templates, and developed algorithms appeared as patterns. The next leap was to encode these as subroutines, linked as reusable sections of code whose behavior could be predictable and trusted.

Subroutines could be named with English labels that gave a mnemonic hint as to their function. On top of that, engineers also added a link function that enabled a relocatable address calculation to be done without human intervention. The improved readability and comprehension immeasurably streamlined the development of useful code. Labels formed abstractions as named placeholders for the function or variable at hand, and they could be put anywhere you wanted without concern for the call or return address. The result was code on machines that could truly help solve powerful scientific problems, and could be programmed faster for more diverse uses. An organization of symbols gave logical abstractions that translated into power. Yet this was only the start.

Engineers also saw the correlation between human linguistic representations and program structure. They developed special interpreters that generated low-level machine code from English-like statements and went far beyond assembler notation. This new programming environment aligned much more closely with how people think. Programming changed from being arcane technical wizardry to something that looked like human language. The first compilers were born. Under the hood were rules, templates, and specialized logic that allowed for the planning and development of far more complex algorithms and program structures than ever before. This work represented another layer of amazingly useful abstraction, and the leap it provided amounted to much more than a bunch of labeled subroutines. The results would have been incredibly error-prone and slow to code at the level of the assembler, and completely impossible if all you had were punched cards or switches and lights on a front panel.

The emerging value of a multiple-pass compilation process was groundbreaking. English-like statements as input gave rise to machine-executable binary code as output. By then, computers had become formidable computational companions, yet they still used a perfunctory 1-0 bit logic in their cores. Abstraction had empowered elementary digital principles beyond imagining. Solid, trusted and predictable, typewritten English-like words could be reliably translated into bits and bytes and then executed. The natural order of human cognition (the narrative or story) had found an evolving analog in digital space, and it was a powerful combination.

Puppet


So it is with Puppet-based systems, automation and the development of cloud service patterns. The cloud is an abstraction of IT services at a higher layer, and is involved with the provision and management of IT systems that support entire business functions. Puppet agents can act as a conduit for the full lifecycle management of servers, applications, individual components, and even systems of those components that make up an entire business.

Because we’re considering the top business-service level, the code to manage applications and infrastructure needs to be carefully thought out. With infrastructure as code, what can be done powerfully can also be done powerfully wrong.

Puppet automation is a lever to enable a new way to manage systems from the inside. Puppet agents allow carte-blanche access to system infrastructure and application deployment alike. Puppet agents allow infrastructure as code, and that is the new dimension of IT service provision. Making cloud-like services on internal brownfield systems using well-developed, stable automation and orchestration is the new challenge.
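To make "infrastructure as code" concrete, here is a minimal, illustrative Puppet sketch. The class name profile::web and the nginx example are my own assumptions, not taken from the article; it simply declares the desired state of a web tier, and the agent on each node converges the machine toward that state on every run.

class profile::web {
  # Ensure the web server package is present.
  package { 'nginx':
    ensure => installed,
  }

  # Deliver the configuration file from the module and reload on change.
  file { '/etc/nginx/nginx.conf':
    ensure  => file,
    source  => 'puppet:///modules/profile/nginx.conf',
    require => Package['nginx'],
    notify  => Service['nginx'],
  }

  # Keep the service running and enabled at boot.
  service { 'nginx':
    ensure => running,
    enable => true,
  }
}

Because this is versioned text rather than a sequence of manual steps, it can be reviewed, tested, and promoted through environments like any other code — which is exactly the discipline the rest of this section argues for.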

To do it properly, I suggest we need to align application development and infrastructure management. Puppet can make it happen. It’s time for new conversations, and for a common outcome to be imagined.

Cloud hides the distinctions of application and infrastructure from the customer and the business owner. However, for the abstraction to work properly, the translations under the hood must be managed well. That is our new domain of abstraction and discipline. Like the bits-to-English abstraction pioneered by the first compiler writers, we are moving to the edge of a new abstraction layer, pulled further away from bits, files, operating systems, and even servers, and centering on the end result for the consumer and business owner.

Virtualization and IT

Virtualization makes us think about IT components in new ways. ITIL may not be the only framework by which infrastructure is managed. Some ITIL capabilities may be redundant when code manages change, for example. Some ITIL-demanded checks that are done manually today will be done in code, or dispensed with altogether. If a unit of change is requested of Puppet agents, they will translate the function to the particular endpoint, and no manual intervention is required.

As an example, a Linux VM may let Puppet manage the configuration on the file system, while a Windows server may require Puppet to call a function in SCCM. Either way, the question asked of Puppet will be the same. A system manager will not know the detail, except that Puppet has made the change. It’s similar to the C programmer who does not need to know the actual address of a function or variable. This is very useful when we consider systems as business services.
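As a hedged sketch of that idea, the same declared intent can be routed to platform-specific mechanisms using facts. The class, file paths, and commands below are assumptions for illustration, not the article's actual code; in particular, the Windows branch stands in for a hand-off to a Windows-native tool such as the SCCM call mentioned above.

class profile::app_setting (String $log_level = 'info') {
  case $facts['os']['family'] {
    'windows': {
      # Stand-in for a Windows-native mechanism (e.g. an SCCM call);
      # modelled here as a simple exec for illustration only.
      exec { 'apply_app_log_level':
        command => "cmd.exe /c echo ${log_level} > C:\\app\\log_level.txt",
        unless  => "cmd.exe /c findstr ${log_level} C:\\app\\log_level.txt",
        path    => 'C:\\Windows\\System32',
      }
    }
    default: {
      # On Linux the same intent is expressed as plain file management.
      file { '/etc/app/log_level':
        ensure  => file,
        content => "${log_level}\n",
      }
    }
  }
}

The caller — a system manager, or another class — only states the desired setting; how it is realised on each platform stays hidden, just as the C programmer never sees the address of a variable.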

Control at Our Fingertips

We have transcended the barrier of platform silos, and now have control of business systems at our fingertips. We no longer have to coordinate changes across people and silo owners. Administrators will no longer have direct access to servers. Change will be managed by continuous integration over automation, and will occur more and more by code. Servers will be just another managed endpoint. Infrastructure can now be considered an amorphous layer that means nothing, and does nothing, until configured — whether server, storage, or network.

Production transition will have a different emphasis tomorrow than it does today. Updates and regular changes expected of applications will also be frequent for infrastructure, with changes on both possibly occurring together. Change, the bane of the old world order, must be prepared for, embraced, and accepted as a benefit to service availability, rather than regarded as a threat. Applications are moving to this paradigm, and so must infrastructure. DevOps disciplines will need to be welcomed into both houses to accommodate agile delivery and full lifecycle management of services.

Even as there will be differentiated streams under the hood, both will have the same service obligation to the business user or system manager. Gone are the days when infrastructure lights were simply on, and the job of a platform service was deemed to be done. Gone are the days when an application could be written without some awareness of infrastructure. Now they’re one and the same, and the common theme is Puppet automation, cloud service principles, and an agile change mentality. But it happens only if the two silos are merged correctly. Agile change across the silos can’t occur in isolation — it must occur in an integrated service model, and that model today is the cloud. The challenge is to bring it to existing IT shops on brownfield implementations.

The old adage was, “prevent outages by keeping change minimized.” Now the adage is, “provide service availability by preparing for and accepting well-managed change.”

"Well managed" means agile in this context, and happens well before production. It’s about publishing what is current, and making the transition to expose it to customers. It’s about moving the old frameworks into the new world while retaining the desired service values. ITIL is not dead; it just lives in a different house with new guests. Application development now has a closer bedfellow (infrastructure) because code is code.

But “code is code” makes true sense only within proper confines: written to be robust, written to add value, written to provide layers of transformation, written to hide complexity, and written to have uniform interfaces.

We are still dealing with the same disciplines as in the initial examples cited above. The same ideas that exist in our highest layers of service definition were present at the lowest bit levels of computing. Qualities of robustness, trustworthiness, and added value must link up across every layer, especially where we have a capability as powerful as Puppet.


Topics:
orchestration, complexity, puppet

Published at DZone with permission of Steve Curtis, DZone MVB. See the original article here.

