
API Is Not Just REST, Part 3


API is not REST. It is one tool in my toolbox. API deployment and integration are about having the right tool for the job.


gRPC Using HTTP/2

While I am forced to use WebSockets for some existing integrations, such as with Twitter and other legacy implementations, it isn’t my choice for next-generation projects. I’m opting to keep things within the HTTP realm, embracing the next evolution of the protocol, and following Google’s lead with gRPC. As with other RPC approaches, gRPC is based around the idea of defining a service and specifying the methods that can be called remotely, along with their parameters and return types. gRPC embraces HTTP/2 as its next-generation transport protocol, while also employing Protocol Buffers, Google’s open source mechanism for serializing structured data.

At Google, Protocol Buffers are being used in parallel with OpenAPI for defining JSON APIs, providing two-speed APIs over HTTP/1.1 and HTTP/2. I am also seeing Protocol Buffers used with HTTP/1.1 as a transport, making it something I have had to integrate with alongside SOAP and other web APIs. While I am integrating with APIs that use Protocol Buffers, I am most interested in the usage of HTTP/2 as a transport for APIs, and I am investing more time learning about the next-generation headers in use, as well as the variety of ways HTTP/2 is being used as a transport for traditional APIs and for multi-directional, streaming APIs.
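To make the RPC style concrete, here is a minimal sketch of calling a gRPC service from Python. It assumes a hypothetical Greeter service defined in a .proto file and compiled with grpcio-tools into helloworld_pb2 and helloworld_pb2_grpc modules; the service name, module names, and address are illustrative, not taken from any particular API.

```python
# Sketch of a gRPC client call, assuming a service definition like:
#
#   service Greeter {
#     rpc SayHello (HelloRequest) returns (HelloReply) {}
#   }
#
# compiled with grpcio-tools into helloworld_pb2 and helloworld_pb2_grpc.
import grpc

import helloworld_pb2
import helloworld_pb2_grpc


def main():
    # gRPC multiplexes calls over a single HTTP/2 connection.
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = helloworld_pb2_grpc.GreeterStub(channel)
        # The request and response are Protocol Buffers messages, not JSON.
        reply = stub.SayHello(helloworld_pb2.HelloRequest(name="Kin"))
        print(reply.message)


if __name__ == "__main__":
    main()
```

The same .proto file drives the server, the client stubs, and the documentation, which is a big part of the appeal over hand-maintained JSON contracts.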

Apache Kafka

Another shift I could not ignore across the API landscape in 2017 was the growth in adoption of Kafka as a distributed streaming platform. Kafka lets providers read and write streams of data like a messaging system, develop applications that react to events in real time, and store data safely in a distributed, replicated, and fault-tolerant cluster. Kafka was originally developed at LinkedIn but is now an Apache open source project in use across a number of very interesting companies, many of which have been sharing stories about how efficient it is for developing internal data pipelines. I’ve been studying Kafka throughout 2017, and I have added it to my toolbox, despite it pushing the boundaries of my definition of an API beyond the HTTP realm.

Kafka has moved out of the realm of HTTP, using a binary protocol over TCP that defines all APIs as request-response message pairs, each with its own message format. A client initiates a socket connection and then writes a sequence of request messages and reads back the corresponding response messages; no handshake is required on connection or disconnection. TCP is much more efficient than HTTP for this because it allows persistent connections to be reused for many requests. Kafka takes streaming APIs to new levels, providing a fast set of open source tools you can use internally to deliver the big data pipelines you need to get the job done. My mission is to understand how these pipelines are changing the landscape, and which tools in my toolbox can help augment Kafka and deliver the last mile of connectivity to partners and public applications.
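As a small illustration of the read/write model, here is a sketch using the kafka-python client; the broker address, topic name, and consumer group are placeholders rather than details from any particular deployment.

```python
# Sketch of writing to and reading from a Kafka topic with kafka-python.
# The broker address and topic name ("events") are placeholders.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# Messages are plain bytes; the serialization format is up to you.
producer.send("events", key=b"user-123", value=b'{"action": "signup"}')
producer.flush()

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="pipeline-workers",   # consumers in a group share the partitions
    auto_offset_reset="earliest",  # start from the beginning if no offset is stored
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)
```

The producer and consumer never talk to each other directly; the cluster durably stores the stream, which is what makes the replayable, fault-tolerant pipeline possible.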

Message Queuing Telemetry Transport (MQTT)

Continuing to round out my API toolbox in a way that pushes the definition of APIs beyond HTTP, and to help me understand how APIs are being used to drive Internet-connected devices, I’ve added Message Queuing Telemetry Transport (MQTT), an ISO-standard publish-subscribe messaging protocol. The protocol works on top of TCP/IP and is designed for connections with remote locations where a light footprint is required because compute, storage, or network capacity is limited. This makes MQTT worth considering when you are connecting devices to the Internet and are unsure of the reliability of your connection.
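Here is a sketch of the publish-subscribe flow using the paho-mqtt client library (1.x-style callbacks); the broker hostname and topic are placeholders.

```python
# Sketch of MQTT publish/subscribe with the paho-mqtt client library (1.x API).
# The broker hostname and topic are placeholders.
import paho.mqtt.client as mqtt

TOPIC = "sensors/greenhouse/temperature"


def on_connect(client, userdata, flags, rc):
    # Subscribe, then publish a small reading once the connection is confirmed.
    client.subscribe(TOPIC, qos=1)
    client.publish(TOPIC, payload="21.7", qos=1)


def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)
client.loop_forever()
```

The tiny fixed header and the quality-of-service levels are what make the protocol a fit for constrained devices on unreliable networks.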

Both Kafka and MQTT have shown me, over the last couple of years, the limitations of HTTP when it comes to the high- and low-volume ends of moving data around over networks. I don’t see this as a threat to APIs that leverage HTTP as a transport; I just see these as additional tools in my toolbox for projects that have these requirements. This isn’t a failure of HTTP; it is simply a limitation, and when I’m working on API projects involving internet-connected devices, I’m going to weigh the pros and cons of using simple HTTP APIs alongside MQTT, and be a little more considerate about the messages I’m sending back and forth between devices and the cloud over the network I have in place. MQTT reflects the robust and diverse nature of my API toolbox, one that gives me a wide variety of tools I’m familiar with and can use in different environments.

Mastering My Usage of Headers

One thing I’ve learned over the years while building my API toolbox is the importance of headers, and not just HTTP headers, but headers as they are used across network protocols more generally. I have to admit that I understood the role headers play in the API conversation, but had not fully grasped the scope of their importance when it comes to taking control over how your APIs operate within a distributed environment. Knowing which headers are required to consume APIs is essential to delivering stable integrations, and providing clear guidance on headers from a provider standpoint is essential to APIs operating as expected on the open web.

Content negotiation was the doorway I walked through that demonstrated the importance of HTTP headers when it comes to delivering meaningful API experiences: being able to negotiate CSV, XML, and JSON message formats, as well as being able to engage with my digital resources in a deeper way using hypermedia media types. My mastery of headers is allowing me to better orchestrate event-driven experiences via webhooks and long-running HTTP connections via Server-Sent Events. Headers are also taking me into the next generation of connectivity using HTTP/2, making them a critical aspect of my API toolbox. Historically, headers have often been hidden in the background of my API work, but increasingly they are front and center, and essential to me getting the results I’m looking for.
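Here is a small sketch of what that negotiation looks like from the consumer side, using the requests library; the endpoint and media types are illustrative, not from any specific provider.

```python
# Sketch of content negotiation with the requests library.
# The API endpoint is a placeholder; the server decides what it can honor.
import requests

url = "https://api.example.com/contacts"

# Ask for CSV first, falling back to JSON if the server can't provide it.
resp = requests.get(url, headers={"Accept": "text/csv, application/json;q=0.8"})
print(resp.headers.get("Content-Type"))

# Hypermedia clients negotiate richer media types the same way.
resp = requests.get(url, headers={"Accept": "application/hal+json"})
if resp.headers.get("Content-Type", "").startswith("application/hal+json"):
    print(resp.json().get("_links", {}))
```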

Standardizing My Messaging

I have to admit I had taken the strength of the message formats from my web services days for granted. While I still think they were bloated and too complex, I feel like we threw out a lot of benefits when we made the switch to more RESTful APIs. Overall, I think the benefits of the evolution were positive, and media types provide us with some strong ways to standardize the messages we pass back and forth. I’m fine operating in a chaotic world of message formats and schema that are developed in the moment, but I’m a big fan of all roads leading to standardization and the reuse of meaningful formats, so that we can try to speak with each other via APIs using more common formats.

I do not feel that there is one message format to rule them all, or even one for each industry. I think innovation at the message layer is important, but I also feel we should be leveraging JSON Schema to help tame things whenever possible, and standardize them as media types. Reusing existing standards from day one is preferred, but I get that this isn’t always the reality, and, in many cases, we are handed the equivalent of a filing cabinet filled with handwritten notes. In my world, there will always be a mixture of known and unknown message formats, something I will always work to tame, increasingly applying machine learning models to help me identify, evolve, and standardize things in any way I possibly can.
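As a sketch of what taming an ad hoc format can look like, here is a JSON Schema check using the jsonschema library; the contact schema itself is just an illustration, not a proposed standard.

```python
# Sketch of validating an ad hoc message against a shared JSON Schema,
# using the jsonschema library. The schema is an illustration only.
from jsonschema import validate, ValidationError

contact_schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "required": ["name", "email"],
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string", "format": "email"},
        "phone": {"type": "string"},
    },
    "additionalProperties": False,
}

message = {"name": "Ada Lovelace", "email": "ada@example.com"}

try:
    validate(instance=message, schema=contact_schema)
    print("message conforms to the shared schema")
except ValidationError as err:
    print("message needs taming:", err.message)
```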

Knowing (Potential) Clients

I am developing APIs for a wide variety of clients. Some are designed for web applications, others for mobile applications, and some for devices. They could be spreadsheets, widgets, documents, or machine learning models. The tables could even be flipped, with the API living on the device and the cloud becoming the client. Sometimes the clients are known, other times they are unknown, and I am often looking to attract new types of clients I never envisioned. I am always working to understand what types of clients I am looking to serve with my APIs, but the most important aspect of this process is understanding when there will be unknown clients.

When I have a tightly controlled group of target clients, my world is much easier. When I do not know who will be developing against an API, and I am looking to encourage wider participation, that is when my toolbox comes into action. This is when I keep the bar as low as possible regarding the design of my APIs, the protocols I use, and the types of data formats and messages I employ. When I do not know my API consumers and the clients they will develop, I invest more in API design and keep my default requests and responses as simple as possible. Then I allow consumers who are in the know to negotiate the more complex, higher-speed, and more finely controlled aspects of my APIs, targeting more specific client scenarios.
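One way to picture this in practice is sketched below, using Flask purely for illustration: the default response stays plain JSON for unknown clients, while consumers who ask for a richer media type (assumed here to be application/hal+json) get hypermedia controls in return.

```python
# Sketch of a simple default response with an opt-in richer representation.
# Flask and the hal+json media type are used here only for illustration.
from flask import Flask, Response, json, request

app = Flask(__name__)


@app.route("/contacts/<contact_id>")
def get_contact(contact_id):
    contact = {"id": contact_id, "name": "Ada Lovelace"}

    # Unknown clients get plain JSON by default.
    best = request.accept_mimetypes.best_match(
        ["application/hal+json", "application/json"], default="application/json"
    )
    if best == "application/hal+json":
        # Consumers in the know negotiate hypermedia controls.
        contact["_links"] = {"self": {"href": f"/contacts/{contact_id}"}}

    return Response(json.dumps(contact), mimetype=best)


if __name__ == "__main__":
    app.run()
```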

Using the Right Tools for the Job

API is not REST. It is one tool in my toolbox. API deployment and integration are about having the right tool for the job. It is a waste of my time to demand that everyone understand one way of doing APIs, or my way of doing APIs. Sure, I wish people would study and learn about common API patterns, but, in reality, on the ground in companies, organizations, institutions, and government agencies, this is not the state of things. Of course, I’ll spend time educating and training folks wherever I can, but my role is always more about delivering APIs and integrating with existing APIs, and my API toolbox reflects this reality. I do not shame API providers for their lack of knowledge or available resources; I roll up my sleeves, put my API toolbox on the table, and get to work improving any situation that I can.

My API toolbox is crafted for the world we have, as well as the world I’d like to see. I rarely get what I want on the ground deploying and integrating with APIs. I don’t let this stop me. I just keep refining my awareness and knowledge by watching, studying, and learning from what others are doing. I often find that when someone is in the business of shutting down a particular approach or being dogmatic about a single approach, it is usually because they aren’t on the ground working with average businesses, organizations, and government agencies – they enjoy a pretty isolated, privileged existence. My toolbox is almost always open, constantly evolving, and perpetually being refined based upon the reality I experience on the ground, learning from people doing the hard work to keep critical services up and running, not simply dreaming about what should be.
