API Governance Models in the Public and Private Sectors: Part 2
Join us as the API Evangelist, Kin Lane, shares a detailed report concerning the U.S. Department of Veterans Affairs and its effort to understand API governance.
This is part two (you can find part one here) of an eight-part series on the Department of Veterans Affairs microconsulting project, “Governance Models in Public and Private Sector.” The series provides an overview of API governance to help the VA “understand, with the intention to adopt, best practices from the private and public sector, specifically for prioritizing APIs to build, standards to which to build APIs, and making the APIs usable by external consumers.” It pulls together several years of research conducted by industry analyst API Evangelist, along with phone interviews, conducted by Skylight Digital, with API practitioners from large enterprise organizations who are implementing API governance on the ground across the public and private sectors.
We’ve assembled this report to reflect the interview conversations we had with leaders from the space, providing a walkthrough of the types of roles and software architecture being employed to implement governance at large organizations. We then walk through governance as it pertains to identifying possible APIs, developing standards around the delivery of APIs, moving APIs into production, and presenting them to their consumers. We wrap up with an overview of formal API governance details, along with an acknowledgment that API governance is rarely a fully formed initiative at this point in time. The result is a narrative for API governance, with a wealth of bulleted elements that can be considered and assembled in the service of governing the API efforts across any large enterprise.
Software Architecture Design
Governance is all about shaping and crafting the way we design and architect software, leveraging the web, and specifically web APIs, to help drive the web, mobile, device, and network applications we depend on. There are a number of healthy and not-so-healthy patterns across the landscape to consider as we look to shape and transform our software architecture, being honest about the forces that influence what software is, does, and what it will ultimately become.
Software architecture is always a product of its environment, being influenced by a number of factors that already exist within any given domain. We are seeing a number of factors influence how large enterprises are investing in and defining their software architecture. Here are a handful of the top areas of consideration when it comes to how the domain an enterprise exists within impacts architecture:
- Resources — The types of digital resources an enterprise already possesses will drive software architecture, defining how it works, grows, expands, and shifts.
- Schema — Existing schema defines how data is stored and often gathered and syndicated — even if this is abstracted away through other systems, it is still influencing architectural decisions at all levels.
- Process — Existing business processes are already in motion, driving current architecture, and are something that cannot be changed immediately without echoing into future architectural decisions.
- Industry — External industry factors are always emerging to shift how software architecture is crafted, providing design, development, and operational factors that need to be considered as architecture is refactored.
- Regulatory — Beyond more organic industry influences, there are regulatory, legal, and other government considerations that will shift how software architecture will ultimately operate.
- Definitions — The access and availability of machine-readable definitions, schemas, processes, and other guiding structural elements can help software architecture operate more efficiently, or less efficiently in the absence of standardization and portability.
Domain expertise, awareness, and structure will always shape software architecture and the decision-making process that surrounds it. This makes it imperative to invest in internal capacity, as well as to leverage external expertise and vendors, when it comes to shaping the enterprise architectural landscape. Without the proper internal capacity, domain knowledge can be minimized, weakening the overall architecture of the digital infrastructure an enterprise will need to move forward.
We can never escape the past when it comes to the software architecture decisions we make, and it is important that we don’t see legacy only as a negative, but also as a historical artifact that carries forward. The legacy code itself may not always move forward, but the wisdom, knowledge, and lessons learned around enterprise legacy should remain on display. Here are a handful of the legacy considerations we’ve identified through our discussions:
- Systems — Existing systems in operation have a significant influence over all current and future architectural decisions, making legacy system consideration a top player when it comes to decision making around software architecture conversations.
- People — Senior staff who operate and sustain legacy systems, or who were around when they were developed, possess a significant amount of power when it comes to influencing any new system architecture, and what gets invested in or not.
- Partners — External partners who have significant history with the enterprise possess a great deal of voting power when it comes to what software architecture gets adopted or not.
- Trauma — Legacy trauma from historical outages, breaches, and bad architectural decisions will continue to influence the future, especially when legacy teams still have influence over future projects.
Systems, people, partners, and bad decisions made in the past will continue to drive, and oftentimes haunt, each wave of software architectural shifts. This influence cannot be ignored or abandoned, and needs to be transformed into positive effects on next-generation investment in software architecture. Change is inevitable, and legacy technical and cultural debt needs to be addressed, but not at the cost of repeating the mistakes of the past.
Beyond legacy concerns, we all live in the reality we have been given, and that reality will continue to shape how we define our architecture. Throughout our discussions with companies, institutions, and government agencies regarding the challenges they face and the current forces that shape their software architecture decisions, we found several recurring themes regarding the contemporary considerations making the largest impact:
- Talent Available — The talent available for designing, developing, and deploying API infrastructure dictates what is possible at all stages.
- Offshore Workers — The offshoring of work changes the governance dynamics and requires strong processes and a different focus when it comes to execution.
- Mainstream Awareness — Keeping software architectural practices in alignment with mainstream practices helps shape software architecture decisions, allowing them to move forward at a healthier pace.
- Internal Capacity — It was stated several times that doing APIs at scale across the enterprise would not be possible without investing in internal capacity rather than outsourcing or depending on vendor solutions.
Modern practices continue to shape how we deliver our software architecture, define how we govern the evolution of our infrastructure, and find the resources and talent to make it happen. Keeping software architecture practices in alignment with contemporary approaches helps streamline the roadmap, and shapes how teams work with partners and outsource work to external entities to help get the job done as efficiently as possible.
The technology we adopt helps define and govern how software architecture is delivered and evolved. There are many evolutionary trends in software architecture that have moved the conversation forward, allowing teams to be more agile, consistent, and efficient in doing what they do. As we studied the architectural approaches of leading API providers across the landscape and engaged in conversations with a handful of them, we found several technologically defined views of how software architecture is influencing future generations and iterations.
- Vendors — Specific vendors have their own guiding principles for how software architecture gets defined, delivered, and governed, and are oftentimes given an outsized role in dictating what happens next.
- Frameworks — Software and programming language frameworks dictate specific patterns and govern how software is delivered and lead the conversation on how it evolves. Software frameworks can possess a significant amount of dogma that will have a gravity all its own when it comes to evolving into the future.
- Cloud Platforms — Amazon, Google, and Microsoft (Azure) have a strong voice in how software architecture is defined in the current climate, providing us with the services and tooling to govern the lifecycle. This control over how we define our infrastructure is only going to increase with their market dominance in the digital realm.
- Continuous Integration/Deployment — CI/CD services and tooling have established a new way of defining software architecture and a pipeline approach to moving it forward, building in the hooks needed to govern every step of its evolution, and reducing cycles of change from annual down to monthly, weekly, and even daily.
- Source Control — GitHub, GitLab, and Bitbucket are defining how software is delivered, providing the vehicle for moving code forward, along with the hooks for governing each commit and every step forward infrastructure makes as it is versioned and evolved.
These areas increasingly govern how we design, develop, deploy, and manage our infrastructure, providing the scaffolding on which to hang our technological infrastructure, along with the knobs and levers we can pull to consistently orchestrate increasingly complex and large enterprise software infrastructure across many teams and geographic regions. The decisions we make around the technology we use will stick with us for years, and will continue to influence decisions even after it is gone.
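The pipeline approach described above can be sketched as a minimal CI configuration. The following is a hypothetical GitHub Actions-style workflow (the job names, lint rules, and `make` targets are all assumptions for illustration, not a prescription), showing how governance hooks, such as validating an API contract before anything ships, can be built into each step of delivery:

```yaml
# Hypothetical CI workflow: every push is linted and tested, and
# only the main branch deploys, building governance into each step.
name: api-service-pipeline
on: [push]

jobs:
  lint-contract:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Governance hook: validate the machine-readable API contract first
      - run: npx @stoplight/spectral-cli lint openapi.yaml

  test:
    needs: lint-contract
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test    # assumed test entry point

  deploy:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make deploy  # assumed deploy entry point
```

Because each job declares its dependency with `needs`, a failed contract lint or test run stops the pipeline, which is exactly the kind of hook that turns governance from a document into an enforced step.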
When it comes to delivering software architecture, not everything is governed by the technical components, and much of what gets delivered and moved forward will be defined by the business side of the equation. The amount of investment by a business into its overall IT, as well as more progressive groups, will determine what gets done and what doesn’t, with the following elements of the business governing software architecture in several cases:
- Budgets — How much money is allocated for a team’s work when it comes to defining, deploying, managing, and iterating upon software architecture.
- Investors — Many groups are influenced, driven, and even restricted by outside investors, determining what software architecture is prioritized and even dictating the decisions around what is put to work.
- Partners — External partners with strong influence over the business discussions that drive software infrastructure decisions play a big role in the governance, or lack of governance, involved.
- Public Image — Oftentimes, the decisions that go into software architecture, and the governance of how it moves forward, will be driven by the public image concerns of a company and its stakeholders.
- Culture — The culture of a business will drive decisions being made when it comes to developing, managing, and governing software architecture, which can be more challenging to move forward than the technology in many cases.
The governance of software architecture has to be in alignment with the business objectives of an enterprise. Many groups choose to begin their API journeys based upon trends or the desire of a small group, and have encountered significant friction when trying to bring those journeys into alignment with the wider enterprise business objectives. Groups that addressed business considerations earlier in their strategy have done much better when it came to reducing friction and eliminating obstacles from their roadmap.
Almost every discussion we’ve had around the governance of software infrastructure has included mentions of the importance of observability across next-generation iterations. Software designed, delivered, and supported in darkness or in isolation either fails or is destined to become the next generation of technical debt. There were several areas of emphasis when it came to ensuring that API-driven infrastructure sees the light of day from day one and continues to operate in a way that everyone involved can see what is happening:
- Out in Open — Groups who operate out in the open, sharing their progress actively with other teams and encouraging a transparent process find higher levels of success, adoption, and consistency across their architectural efforts.
- Co-Discovery — Ensuring that before work begins, teams work together to discover new ideas, learn about alternative software solutions, create buy-in, and ultimately make decisions about what gets adopted.
- Collaborative — While identified as sometimes being slower than traditional, more isolated efforts, teams who encouraged cross-team collaboration saw that their architectural decisions were sounder, more stable, and had more longevity.
- Open Source — Following open source software development practices and working with existing open source solutions helps to ensure that enterprise software architecture lasts longer, has more support, and follows common standards over other more proprietary approaches.
- Publicly — When it makes sense from a privacy and security standpoint, groups often articulate that being public by default helps ensure project teams behave differently, enjoy more accountability, and often attract external talent, domain expertise, and public opinion along the way.
Enterprise organizations that push for observability by default find that teams tend to work better together and have a more open attitude, attracting the right personalities, encouraging regular communication, and thinking externally by default rather than as something that happens down the road. This brings much-needed sunlight and observability into processes that can often be very complex and abstract, pushing teams to speak to a wider audience beyond the developer and IT groups.
Having a shared process that can be communicated across teams, going beyond just technical teams to something that business groups, partners, 3rd parties, and all other stakeholders can follow and participate in, is a regular part of newer, API-centric software delivery lifecycles. Such a process possesses several core elements that help ensure the work of defining, designing, delivering, and evolving software architecture is shared by all:
- Contract — Crafting, maintaining, and consistently applying a common machine-readable contract that is available in YAML format is a common approach to ensuring that there is a contract that can be used across all architectural projects, defining each solution as an independent business service.
- Pipeline — Extending the machine-readable service contracts with YAML defined pipeline definitions that ensure all software is delivered in a consistent, reproducible manner across many disparate teams.
- Versioning — Establishing a common approach to versioning code, definitions, and other artifacts, providing a common semantic approach to governing how architecture is evolved in a shared manner that everyone can follow.
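The common semantic approach to versioning mentioned above can be sketched in a few lines of code. This is a hypothetical helper (the function names are our own, not from any particular library) showing how a shared MAJOR.MINOR.PATCH convention lets any team, on any stack, detect when a change is breaking:

```python
# Minimal sketch of semantic version comparison for governing how
# code, definitions, and other artifacts evolve across teams.

def parse_semver(version: str) -> tuple[int, int, int]:
    """Split a MAJOR.MINOR.PATCH string into comparable integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_breaking_change(old: str, new: str) -> bool:
    """Under semver, a MAJOR bump signals a change consumers must review."""
    return parse_semver(new)[0] > parse_semver(old)[0]

print(is_breaking_change("1.2.0", "1.3.0"))  # minor bump, additive change
print(is_breaking_change("1.3.0", "2.0.0"))  # major bump, breaking change
```

A check like this can run inside a delivery pipeline, flagging any service that ships a breaking change without a corresponding major version bump.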
Historically, the software development and operations lifecycle was owned by IT and development groups. Modern approaches to delivering software at scale make it a shared process, including internal business and technical stakeholders while also sharing the process with external partners, 3rd party developers, and the public. This brings software architecture out of the shadows and onto the open web, making it more inclusive of all stakeholders, in a way that respects privacy and security along the way.
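As a concrete sketch of the machine-readable, YAML-formatted contract described above, here is a minimal OpenAPI definition for a single business service. The service name, path, and schema are hypothetical, chosen only to illustrate the shape such a contract takes; note the semantic version in `info.version`, which ties the contract to the shared versioning approach:

```yaml
# Hypothetical OpenAPI 3.0 contract defining one independent business service.
openapi: 3.0.3
info:
  title: Facility Directory Service   # assumed service name
  version: 1.2.0                      # semantic version, governed per release
paths:
  /facilities:
    get:
      summary: List facilities
      responses:
        "200":
          description: A list of facilities
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Facility"
components:
  schemas:
    Facility:
      type: object
      properties:
        id:
          type: string
        name:
          type: string
```

Because the contract is plain YAML, it can live in source control alongside the code, be linted in the delivery pipeline, and be read by business stakeholders as well as machines.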
Stay tuned for part three: Identifying Potential APIs.
Opinions expressed by DZone contributors are their own.