Moving to the Cloud: Transforming Technology and the Team
This article is featured in the DZone Guide to Building and Deploying Applications on the Cloud.
Developing new software for, or migrating existing applications to, the cloud is inherently a transformative process. As we wrote about previously, the approach to architecture and design must change, and topics such as distributed computing and fault tolerance become critical. As with any transformative process within a business, there are some fundamental processes that must be established.
The first is setting clear goals. If you are working as a developer or architect, it is essential that the goal of moving to the cloud is clear to you. Is it cost reduction, a decrease in time-to-market for products, or an increased ability to innovate? The second process required is the definition of measures of success, i.e. how do we know we are making progress? Are we saving money week-on-week? Are we deploying more applications per month, or have we run more experiments this quarter? The final process to be established is a clear expectation of communications. For example, do we report current work within a daily standup, what progress reports should be delivered, and how do we communicate issues?
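As a sketch of what these measures of success might look like in practice, the snippet below tracks two of them (week-on-week savings and deployments per month). All figures and metric names are hypothetical, purely for illustration:

```python
# Sketch: tracking two of the success measures mentioned above,
# using made-up figures.

def week_on_week_saving(costs):
    """Percentage change in infrastructure cost between the last two weeks."""
    previous, current = costs[-2], costs[-1]
    return (previous - current) / previous * 100

weekly_costs = [12000, 11400, 10700]  # hypothetical weekly spend in USD
print(f"Week-on-week saving: {week_on_week_saving(weekly_costs):.1f}%")

deployments_per_month = {"March": 14, "April": 22}  # hypothetical counts
trend = deployments_per_month["April"] - deployments_per_month["March"]
print(f"Deployment trend: {'+' if trend > 0 else ''}{trend} per month")
```

The point is less the arithmetic than that each measure is agreed, recorded, and reviewed on a fixed cadence.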
Scaling compute resource to real-time demand (for both user-generated traffic and batch tasks), which removes the need to over-provision
Reduced TCO of infrastructure (no hidden costs of maintenance staff, data center insurance, electricity, or cooling)
Compute resource for experiments can be acquired on-demand
Experiments can be implemented at-scale (the cost of which would be prohibitive to run on-premise)
Access to cutting edge ‘as-a-service’ technology, e.g. machine learning platforms
Compute resource can be acquired on-demand without the need for capacity planning
Environments can be replicated on-demand for testing or staging, or for entering new geographic markets
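To make the first of these benefits concrete, the following sketch compares the daily cost of provisioning for peak on-premise against scaling to an hourly demand curve in the cloud. The hourly rate and demand figures are illustrative assumptions, not vendor pricing:

```python
# Sketch: why scaling to real-time demand removes the need to over-provision.
# The rate and demand curve below are hypothetical.

HOURLY_RATE = 0.10  # assumed cost per instance-hour

def on_premise_cost(demand, hours_per_day=24):
    # Fixed capacity must cover the peak at all times.
    peak = max(demand)
    return peak * hours_per_day * HOURLY_RATE

def cloud_cost(demand):
    # Cloud capacity tracks demand hour by hour (one sample per hour).
    return sum(demand) * HOURLY_RATE

# Hypothetical instances needed per hour of a day: quiet nights, busy midday.
demand = [2] * 8 + [10] * 8 + [4] * 8

print(f"Over-provisioned for peak: ${on_premise_cost(demand):.2f}/day")
print(f"Scaled to demand:          ${cloud_cost(demand):.2f}/day")
```

With this (invented) demand curve, capacity fixed at the peak costs nearly twice as much per day as capacity that follows demand.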
Once these fundamental guidelines are established, the creation of a further detailed plan is required. In the following section of this article, we share our experiences from several small and large scale migrations to the cloud.
A (Cloud-based) Journey of a Thousand Miles Begins With...
When an organization is attempting to introduce cloud computing, small experiments should be conducted first. In our consulting work at OpenCredo, we often recommend creating a simple proof-of-concept (POC) based on a subset of business functionality, which should be conducted separately from ‘business as usual’ (BAU) activities. We have also recommended the use of ‘hack days’ (or a ‘hack week’, depending on scope) where an IT or software development team is given free rein to experiment with the new technology available.
Criteria for POC
Business functionality limited to a single department, which limits communication and governance overhead, e.g. a new payment service
Required functionality can be provided as a ‘vertical’ (self-contained) application with minimal external integration points, e.g. a new customer sign-up site/page
Buy-in achieved with the team responsible for the work
The key goal of this stage within the plan is for the IT team to become familiar with the cloud paradigm (for example, we still frequently see the look of amazement when developers realize they can spin up and access powerful hardware within seconds). This familiarity paves the way for the rest of the business to adapt to the cloud paradigm (e.g. different billing mechanisms and approval processes), and enables the evaluation of the various cloud platforms in relation to the typical use cases within the business.
All Clouds Are Not Created Equal - Choose Wisely
Each cloud platform has inherent properties, strengths, and weaknesses. For example, Amazon Web Services (AWS) offers a breadth of services, but the orchestration of components required to deliver the specified functionality can at times be complex. Google Cloud Platform (GCP) offers a narrower, clearly-defined range of functionality, but the offerings are typically more opinionated. GCP’s strong (‘big’) data processing focus was a major factor in Spotify’s decision to migrate its infrastructure to this platform. Microsoft Azure has a strong set of offerings and is often a compelling platform choice if the organization is already heavily dependent on the Microsoft stack.
The primary goal for this stage of a cloud migration is to determine and catalogue the primary use cases that will be implemented within the initial six to twelve months’ work, and then map these requirements to the offerings from the various cloud vendors.
Be Aware of Geography
The geographical range of the cloud is often a key enabler. No longer does expansion into a new geographical market take years of planning and infrastructure contract negotiations. Now, we simply select a different ‘region’ from our cloud vendor’s user interface, deploy our required stack, and we are good to go. The flipside of this ease of deployment is that care must be taken with regard to where data flows and is stored at rest. Data sovereignty is often a critical component of many business operations, and accidentally sending data across international borders for information processing may violate regulations and compliance mandates.
Any plan to migrate applications to the cloud must include an approach to data requirements ‘due diligence’, and the legal and regulatory constraints must clearly be communicated to the IT team and across the business. In addition, deployment of applications across multiple geographic regions and availability zones is highly recommended in order to mitigate the risk of unavailability and data loss.
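A data-requirements ‘due diligence’ check can be as simple as validating each dataset’s target region against an allowed list before deployment. The sketch below uses hypothetical region names and data classifications, not any particular vendor’s catalogue:

```python
# Sketch of a data-sovereignty check: before deploying, verify that each
# dataset's storage region is on its allowed list. Classifications and
# region names are illustrative.

ALLOWED_REGIONS = {
    "eu-customer-data": {"eu-west-1", "eu-central-1"},  # must stay in the EU
    "public-content": {"eu-west-1", "us-east-1", "ap-southeast-1"},
}

def violations(deployment_plan):
    """Return (dataset, region) pairs that would cross a sovereignty boundary."""
    return [(dataset, region)
            for dataset, region in deployment_plan.items()
            if region not in ALLOWED_REGIONS.get(dataset, set())]

plan = {"eu-customer-data": "us-east-1", "public-content": "eu-west-1"}
for dataset, region in violations(plan):
    print(f"BLOCK: {dataset} must not be stored in {region}")
```

Wiring a check like this into the build pipeline turns the legal constraint into an automated gate rather than a document nobody reads.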
Tuning Apps to the Cloud
Much is currently being written about the benefits of creating software systems as clusters of ‘microservices’. Regardless of how small your services are, any software that is deployed to the cloud will benefit from adhering to many of the same technical goals, best codified as ‘Twelve-Factor’ applications. Obviously, not all existing applications can be migrated to this format—the requirement for ‘stateless’ applications, in particular, is troublesome for many applications written before the advent of cloud technologies. However, the fact that almost all components within a cloud will be communicating over a network means that developers should become familiar with fundamental and advanced distributed computing principles.
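As one concrete illustration of the Twelve-Factor approach, factor III (‘store config in the environment’) keeps an application portable by reading backing-service locations from environment variables rather than hard-coding them. The variable names and defaults below are illustrative assumptions:

```python
# Sketch of Twelve-Factor factor III: configuration comes from the
# environment, so the same build runs unchanged everywhere.

import os

def load_config(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "cache_url": env.get("CACHE_URL", "redis://localhost:6379"),
        # A 'stateless' app keeps no session data locally; it lives in the cache.
        "session_store": env.get("SESSION_STORE", "cache"),
    }

# Only the environment differs between deployments, never the artifact.
config = load_config({"DATABASE_URL": "postgres://prod-db.internal/app"})
print(config["database_url"])
```

This is also the pattern that makes the stateless requirement tractable: any instance can serve any request because nothing instance-specific is baked into the code.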
The completion of this stage within a cloud migration plan should result in a catalogue of applications to be migrated (or created), and the associated technical, ‘non-functional’ requirements or constraints clearly enumerated. Validation of all functional and nonfunctional requirements must be included within an automated build pipeline. In relation to the previous section of this article, it is worth paying special attention to applications communicating across ‘availability zones’ (essentially datacenters) and across geographical regions. What may appear as a small gap in an architecture diagram can add significant latency or potential for failure.
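The latency point can be made concrete with rough, assumed round-trip figures per network hop; the numbers below are hypothetical, but the order-of-magnitude gap between a same-zone hop and a cross-region hop is the point:

```python
# Sketch: the 'small gap in an architecture diagram' made concrete.
# Per-hop round-trip times below are rough, assumed figures.

HOP_LATENCY_MS = {
    "same-zone": 0.5,
    "cross-zone": 2.0,
    "cross-region": 80.0,
}

def request_latency(hops):
    """Total added latency for a synchronous call chain."""
    return sum(HOP_LATENCY_MS[hop] for hop in hops)

# Web tier -> service (same zone) -> database replica (another region).
chain = ["same-zone", "cross-region"]
print(f"Added latency: {request_latency(chain):.1f} ms")
```

A single cross-region hop in a synchronous path dwarfs everything else in the chain, which is why such hops deserve explicit attention in the application catalogue.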
Don’t Forget the Organization (and the People!)
The second part of the article discusses the organizational issues that we often see when a cloud migration is underway.
Organizational Design and Transformation
Often the first signs of organizational design struggles appear as a team moves from the proof-of-concept to the implementation phase. As soon as the cloud migration implementation expands beyond one team, the complexity of interactions increases. This can often be challenging on a technical level, but it is almost always challenging on an organizational level. We look for red flags like queues of work, long delays, or ‘sign offs’ on work as it moves around an organization, and teams pulling in different directions (or using competing technologies).
We often work with senior (C-level) management within organizations, as, unless alignment and buy-in are achieved at this level, any changes made within the rest of the organization can easily unravel.
The Importance of DevOps
Although the term ‘DevOps’ has become somewhat overused (and still isn’t truly defined), we believe that the concepts behind it, such as (1) a shared understanding and responsibility across development and operations, (2) automation, with principles and practices driving tooling, not the other way around, and (3) creating signals for rapid feedback, are vital for success within a cloud architecture. As the number of application components is typically higher within a cloud-based application (in comparison with more traditional platforms), we often see problems emerge rapidly where teams are misaligned on goals, solving the same problem in multiple different ways; where cargo-culting of automation or the incorrect use of off-the-shelf ‘DevOps tooling’ occurs; and where an absence of situational awareness is rampant.
We have seen DevOps implementations create fear within organizations, and we have also seen suboptimal processes being automated (automated failure is still failure!). We believe that the concepts and goals behind the ‘DevOps’ movement are vital in the wider business context and the current economic climate, where time-to-market and speed of innovation are clear competitive advantages.