
The Promise of REST, the Problem of Collective Engineering, and the (Un)reliability of the Cloud

Parse broke more than our packet-switching brains should permit. Here's how we should think in order to avoid forgetting the same things twice.



Robustness is more relative than it needs to be. Consider breaks in three kinds of networks:

  • What happens when a single (non-backbone) thread on a spider's web breaks? Nothing global to the web; no single link carries all that much weight. Cool.
  • What happens when a single (non-backbone) node on any packet-switched network dies? Nothing global to the network and nothing catastrophic to the particular message. ARPANET survives partial nuclear holocaust.
  • But what happens when a web service changes, or even shuts down altogether (as Parse, Facebook's mobile-backend-as-a-service, is doing right now)? A million apps break completely. Thousands of developers scramble to find a replacement service — this time, probably from someone for whom mBaaS isn't an afterthought (as it surely was to Facebook).

As web (or rather: TCP/IP)-conditioned developers, we might be forgiven for subconsciously forgetting that http(s) endpoints are far more fragile than the infrastructure they depend on. But judging from community response since the Parse blog announced the shutdown, more Parse users depended implicitly on Facebook's offhand largesse than were prepared to run their BaaS-backed mobile apps without it.

Now this carelessness doesn't come from the headspace of an engineer. If a web API call is just another RPC, then of course a specific server needs to be available in order to execute a well-defined procedure. Most developers would be perfectly unsurprised, and certainly better-prepared, for (say) a SOAP provider to stop handling requests. Something about the kind of web service provided by Parse turned off the systems engineering brain and turned on something else.

In this article we'll consider how some of the mental habits we form as software developers affect the way we think about interacting systems, and we'll suggest ways to apply our software engineering skills to problems that can be solved algorithmically but needn't be solved physically within a system of binary logic gates. In particular, we'll focus on how, when you integrate two heterogeneous systems, the abstraction layer at which you build the integration determines the friction your system will encounter, in ways that depend less on the design patterns your code follows than on the logic of the domain itself.

Many of the points raised in this piece were suggested by a few conversations I enjoyed with Mark Piller, founder and CEO of Backendless (a Parse alternative, among other things). He helped me see a deeper link (something more than ‘separation of concerns’) between object-oriented best practices, good web service design, and responsible (and realistic) decisions about how much to depend on web services built and maintained by other people.

The Promise of REST Isn't the Promise of Every RESTful API (...But it Sure Feels That Way Sometimes)

One reason for the scramble at Parse’s demise is, I think, that modern applications are so web-enmeshed that we don't really think about web API usage as a bunch of remote procedure calls. This is, on the one hand, exactly what makes the web — and the REST paradigm in particular — so exciting. If Berners-Lee's web is just the Internet brought into the application layer, and Fielding's REST is just the web for computers, then why should the application layer be any less fault-tolerant than the transport layer?

Well, on the surface there appear to be at least three reasons. First, complexity at sub-session layers is Shannon-high, not Kolmogorov-high — i.e., adequately handled by well-understood, highly mature, domain-nonspecific error correction mechanisms. Second, the physical and link layers have been both actually and conceptually around for much longer. Vint and Bob knew what they were doing partly because Western Union and Bell had already known what they were doing for decades. Third, application domains change as fast as business — so, blindingly fast — while physical domains change only as fast as (materials science)*(economies of scale), link-through-transport only as fast as internet and intranet infrastructure, etc. Any randomly selected RESTful endpoint is far more likely to require full deprecation than any randomly selected gateway.

Now for (Vannevar-Bush-style) human-read hypertext, these http endpoint changes aren't a big deal. When a website is redesigned, some users may feel a little annoyed, but for the most part nobody will suddenly lose all ability to get where they want to go (assuming the new design isn't awful, of course). An application that specifically calls GET /some/narrow/203237/a8f7x89e, on the other hand, will choke if one of those digits so much as increments.
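
Concretely (a minimal Java sketch; api.example.com is a made-up host, and the path is just the example above):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class BrittleClient {
        public static void main(String[] args) throws Exception {
            // The exact resource path is baked into the client at compile time.
            // If the provider renames, renumbers, or retires this path, every
            // deployed copy of this client breaks at once.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/some/narrow/203237/a8f7x89e"))
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.statusCode()); // 404 the day a digit increments
        }
    }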

...or maybe not, if the RESTafarians get their HATEOAS way. Maybe machines really can navigate a dynamic bag of http resources, if resource providers make the current interface explicit at any given state in a palatably serialized form. As Roy Fielding himself famously insists, any application so brittle as to require massive post-Parse refactoring is too tightly coupled to one particular resource set to qualify as RESTful (read: web-native) at all.
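
What the hypertext-driven alternative could look like, sketched with Jackson and Java's built-in HTTP client (the HAL-style "_links" shape and the entry-point URL are illustrative assumptions, not any particular provider's format):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class HypermediaClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            ObjectMapper mapper = new ObjectMapper();

            // The only URI the client hard-codes is the entry point.
            HttpResponse<String> entry = client.send(
                    HttpRequest.newBuilder(URI.create("https://api.example.com/")).build(),
                    HttpResponse.BodyHandlers.ofString());

            // Every other URI is discovered at runtime from the representation
            // itself, so the provider can reshuffle paths without breaking us.
            JsonNode links = mapper.readTree(entry.body()).path("_links");
            String ordersUri = links.path("orders").path("href").asText();

            HttpResponse<String> orders = client.send(
                    HttpRequest.newBuilder(URI.create(ordersUri)).build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(orders.body());
        }
    }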

Our "golly" moment at the Parse news suggests that some of the ethos of hypertext-driven APIs has entered our architectural subconscious. We rely too much on "who cares! it's convenient at the moment!" backing services because we're already thinking more RESTfully than the web can currently support. This isn't a scientific mistake: tight coupling is just plain bad, global complexity requires massively distributed information processing, and HATEOAS keeps marching on. Maybe it's a little too much optimism for an ideal engineer. Web APIs as a set are a bunch of concrete individuals that are mostly not statefully and hypertextually self-describing, even if RESTful APIs as a conceptual category are something much more robust.

The Healthy Society of Objects (<-APIs<-People)...

So we’re oddly uncomfortable with REST endpoint unreliability, maybe because what’s happening at the application layer is a lot closer to design than execution — and we reasonably expect more robustness as work descends toward the algorithm itself.

Now most of us already have the mental habits required to avoid overeager dependence on other entities at the application level: we all know how to write SOLID object-oriented code. However, these habits affect our decisions at the level of design, not execution, and designing anything is much harder than grasping the abstract concepts behind the design. (As Alan Perlis put it: “Most people find the concept of programming obvious, but the doing impossible.”)

The interaction between strong entity boundaries and effective but nondeterministic communication produces a robust kind of interface — something more abstract than just a set of methods grouped into a class. Simply defining and implementing interfaces in Java, for example, forms mental habits that encourage a socially responsible coding strategy: “go ahead and implement this interface/protocol in as many ways as you like, just as long as you respond to x in manner y. I don’t really care about the rest.” Constrain the way agents relate to one another, not the way they work inside.
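
In Java terms, the whole social contract fits in a few lines (a toy sketch; GreetingService and its implementations are invented for illustration):

    // The contract: respond to x in manner y. Nothing else is promised.
    public interface GreetingService {
        String greet(String name);
    }

    // Implement it in as many ways as you like...
    class FormalGreeter implements GreetingService {
        public String greet(String name) { return "Good day, " + name + "."; }
    }

    class CasualGreeter implements GreetingService {
        public String greet(String name) { return "Hey, " + name + "!"; }
    }

    // ...the caller constrains how agents relate, not how they work inside.
    class Caller {
        static void welcome(GreetingService service) {
            System.out.println(service.greet("Ada"));
        }

        public static void main(String[] args) {
            welcome(new FormalGreeter());
            welcome(new CasualGreeter());
        }
    }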

...Versus Oddly Low-level Serialization Over HTTP

Most REST implementations fight against this habit a little. Of course REST doesn’t in principle require XML or JSON, but in practice many of us associate REST with RPC+XML/JSON because HTTP/1.1 still feels like a way to stream documents, albeit in machine-readable format. So when we think of calling a RESTful API, we think of getting some serialized data and deserializing it into objects (and vice versa when serving RESTful calls). But objects, of course, need not be serialized (although they often are). Data does flow over the wire in sequence, but how packets are forwarded is not an application-layer problem. So why do we insist on diving down to this lower data-structure level, doing this serialize-deserialize work in our application code, when both OO and REST are supposed to be enforcing good boundaries?
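
The ritual in question, sketched with Jackson (the User type and its fields are made up for illustration):

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class SerialDeserial {
        // A plain data holder the wire format gets poured into.
        public static class User {
            public String name;
            public String email;
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            // Deserialize: the RESTful call handed us a document, not an object...
            User user = mapper.readValue(
                    "{\"name\":\"Ada\",\"email\":\"ada@example.com\"}", User.class);

            // ...and to respond, we flatten the object back into a document.
            String json = mapper.writeValueAsString(user);
            System.out.println(json);
        }
    }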

Well, you might say, we tried CORBA and DCOM and those didn’t work very well. So we admitted: fine, topological distance matters. You can’t middleware away the difference between remote and in-process calls. Let’s just acknowledge that data comes over the web in dribs and drabs and make sure nothing breaks too badly if we send and grab data serially.

Now I hadn’t thought about serialization over HTTP in quite these terms before. (Why care about serial-deserial? Plenty of fine libraries do this work for me.) But speaking with Mark was a bit of an eye-opener for me. What do you get when a rather old-school OO guy designs and runs something as hip as a BaaS for REST APIs? An unusually critical and yet not RESTafarian approach to RESTful design. How does he propose solving the distributed objects problem? Same as the rest of us cool kids: with microservices and containers, properly conceived.

Designing Honestly for RPC: Microservices, Containers, and DDD

Classes (and interfaces) are cognitively useful because they build a semipermeable bubble around things whose activities are conceptually related. Touch it with gets and sets and that’s it. Microservices are architecturally useful for basically the same reason: when a bunch of activities group into a job or a task or a responsibility (or whatever), then build a wall around them too, permeable through a controlled port or two only. Containers do analogous work at the system resource level. What makes all of these scale effectively is that they can act in concert — unified action without (entirely) unified control. And what makes these paradigms work better than more naive distributed object systems from a decade ago is that the newer systems let developers decide very precisely where to build the walls that distinguish remote from in-process invocation.
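
As a toy illustration of the “wall with one controlled port” idea, here’s a minimal service on the JDK’s built-in HTTP server (InventoryService and its endpoint are invented for this sketch):

    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import com.sun.net.httpserver.HttpServer;

    public class InventoryService {
        // Everything behind the wall is private to the service...
        private static int unitsInStock = 42;

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            // ...and the outside world touches it through one controlled port only.
            server.createContext("/inventory", exchange -> {
                byte[] body = ("{\"unitsInStock\":" + unitsInStock + "}").getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });

            server.start();
        }
    }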

Or in the broader language of robust systems: we’re building better distributed systems now because we’re not pretending that higher-level constructs (like objects) automatically achieve distributed reliability just because lower-level constructs (like TCP) handle dropped data well.

For this reason, as Mark and others have pointed out, good OO practices are really the same as good RESTful practices, properly split microservices, intelligently run (micro)containers — and good management practices (as DDD emphasizes), and good social practices as well. At the highest level, none of these coordinated execution enablers are worth anything unless developers design in communication with one another. There’s no REST for the OO wicked, and friends don’t let friends write non-SOLID code.

Yes, Mark and I agreed, these ‘new’ methods are really just old solutions to old problems — but the exciting thing is that problems and solutions are now being matched, in the collective mind of the developer community and in the backing infrastructure (fine, ‘the cloud’) at the same time, in ways they never have been before. The mind of the developer is what makes these well-bounded systems actually act in concert — and many of the requisite isolation tools and service abstractions now actually exist.

In Backendless’ case, this translates concretely to an API engine that automatically spins up microcontainers to run microservices written in various server-side languages, on machines run on premises or themselves provided as a service. And generates REST APIs and corresponding multi-language wrappers from individual Java classes. And maps server-side classes and methods directly to client-side types and methods, so writing Objective-C (or Java or whatever) really feels like working with distributed (Java or JavaScript or PHP but who cares?) objects. Nothing irrelevantly low-level matters; developers really can focus on creating well-designed functions and interfaces. At a technical level, we’re truly getting better at keeping developers’ headspace in the essence, not the accidents, of software engineering.
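
To make the “REST APIs from individual Java classes” idea concrete, here is a purely hypothetical sketch (not Backendless’ actual syntax; the class, methods, and generated routes are all invented for illustration):

    // A plain service class. The idea is that an API engine could expose
    // each public method as a REST endpoint and generate matching
    // client-side wrappers, along these (hypothetical) lines:
    //
    //   orderService.place("widget", 3)  ->  POST /services/OrderService/place
    //   orderService.status("order-001") ->  GET  /services/OrderService/status
    //
    public class OrderService {
        public String place(String item, int quantity) {
            // ...persist the order, return its id...
            return "order-001";
        }

        public String status(String orderId) {
            return "shipped";
        }
    }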

Aggregation and Interaction: From Service-Orientation to a RESTful Society of Mind

So these modern paradigms and services do begin to bridge the abstraction-layer gap from flashing fibers to high-level program design — in the mind of the developer. However, while we’re getting better at making networked computers work together, we still haven’t solved the problem of connecting minds.

Well, maybe we software engineers don’t have to. Collective human activity is already the subject of the entire discipline of economics, and one important economic philosophy locates mass information aggregation in the concept of the market. But on the other hand, who understands mind better than Marvin Minsky? His picture of mind is collective too.

So to climb the abstraction ladder to its highest rung: where distributed execution is addressed by Paxos, the TCP/IP stack, FLP impossibility, and the like, and distributed design is facilitated by this decade’s post-CORBA/DCOM microservices, containers, DDD, and REST, distributed thought occurs in what economists call the market and what web developers call the marketplace of APIs.

Where error-correction and retransmission mechanisms keep distributed data-sharing reliable, aggregated API performance analytics and manual user feedback ensure the quality of REST-distributed service. Roughly speaking, as Mark and I discussed, the concept ‘marketplace of services and their APIs’ encapsulates the union of the economic concept of market as information aggregator and the technical concept of interface. (Of course, as in economics, the interaction plays out differently depending on whether the market is public or private; so the distinction between ‘enterprise’ and ‘public’ API marketplaces has already been made.)

To put these ideas into practice and learn more about the Backendless platform — which, based on my conversations with Mark, seems to be designed from a place of unusually deep RESTful-and-OO understanding — head over to their product page (and click through for specific features, described in more detail than normal for these kinds of pages) or straight to the developers page.

The platform comprises (a) an mBaaS with the usual mBaaS features (user management, data persistence, pub/sub messaging, geolocation, push notifications, etc., plus a less-expected media streaming service); (b) a hosting service for static and server-side dynamic (Node.js) content; and (c) most apropos of this article, an API engine, some of whose features were discussed above. There’s a free tier in the Backendless cloud with some pretty forgiving functional limitations, a free you-hosted tier with no functional limitations (but a limit of one server deployment), and a fully managed paid tier.

If REST is the axon and the network of APIs is the neural net, then the API marketplace is the latest perceptron-tuning mechanism for augmenting human intellect — this time, at web scale.

