When you can see a pair of jeans you like on Amazon at 10:00 AM and, through the magic of Amazon Prime Now, be wearing them at 11:01 AM without having left your house, telling someone that they have to wait 6 months to get access to more than one account in your mobile banking app just won’t fly. They’ll tweet nasty things about you or tank you in the app store with a 1-Star review, telling everyone (in ALL CAPS) that they wish they could make it a 0-Star review.
As such, the way we design, build, and deliver applications has undergone a seismic shift over the last 15 years. To be more responsive to a customer base that expects satisfaction at an ever-increasing rate, companies have to get feedback from those customers faster and deliver value to them more frequently. To keep pace with these market demands, software companies have turned to methodologies and movements like Agile, Continuous Delivery, and DevOps.
This has worked really well for most components of the application stack. In the most advanced shops, a developer can now commit a change that triggers a fully automated delivery process that includes several quality checks on the way to production, only stopping when a problem is detected.
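The fail-fast behavior described above can be sketched in a few lines. This is a minimal illustration, not a real delivery system: the stage names and checks below are placeholders I've invented, standing in for whatever test suites and scans a real pipeline would run.

```python
# Minimal sketch of a fail-fast delivery pipeline: each quality gate runs
# in order, and the pipeline halts at the first failure (stage names and
# checks are hypothetical placeholders).
def run_pipeline(stages):
    """Run each (name, check) quality gate; stop when a problem is detected."""
    for name, check in stages:
        if not check():
            return f"halted at: {name}"
    return "deployed"

stages = [
    ("unit tests", lambda: True),          # placeholder for a real test run
    ("static analysis", lambda: True),     # placeholder for a real scan
    ("integration tests", lambda: False),  # a failing gate stops delivery
]
```

Here `run_pipeline(stages)` would report `halted at: integration tests`, while a run in which every gate passes reaches the deploy step, mirroring the "only stopping when a problem is detected" flow.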
The database is the notable outlier in terms of process modernization. The database is not as flexible as other application components because it is a persistent store of a very valuable resource. Risk tolerance is extremely low because the consequences of data loss and security breaches are so severe. That’s why production databases were walled off from development decades ago, and highly skilled DBAs were installed as guards and gatekeepers for the company’s most precious resource.
The first thing that needs to happen to bring database change management up to the current standard is a change in mindset for all parties in the process. The deliberate, change-averse posture of many data teams must be brought into alignment with the need for speed that prevails in other groups in the organization. But those other groups must also understand and recognize the importance of avoiding reckless behavior in the data stack in the pursuit of faster time to market. It’s not enough to tell someone to hurry up or to be careful. They must have context. They must have a clear understanding of the process from end to end and how crucial their role in that process is. Only then can attitudes truly change and true collaboration begin.
Once that’s accomplished, the age-old manual processes must be rethought to match the new paradigm of going fast safely. Good news! It’s been done before, and not just with software. Many of the lessons learned and principles developed in the implementation of the Toyota Production System (yes, the car company) from the 1940s to the 1970s will sound strikingly familiar to adherents of Agile Development and DevOps. What resulted was a people-focused production process that aligned well with the needs of all producers and consumers in that process, leading to better market fit and less waste of resources, materials, and intangibles.
These are the results we’re striving for today in Enterprise IT. Does it make sense to reinvent the wheel? Do you have the time, resources, or budget to do that? If not, it might make sense to look at how those before you have prospered through times of transition, and to chart your own path to efficient and adaptive operations that produce reliable value when it’s expected.
If you’d like to read more about how these past movements in manufacturing have helped to shape and influence the DevOps philosophy, I recently wrote a white paper called The Future of DevOps: Aligning Database Change Management with the Software Development Process—it can be downloaded here.