I asked the rest-discuss group:
We simply don't have examples of HATEOAS done fully. Why is HATEOAS even worthwhile? What's the need for HATEOAS in machine-to-machine APIs? The best that I can come up with is that HATEOAS will help long-term evolution of APIs, especially the type of intricate fine-grained APIs that enterprises favor.
Lots of people chimed in:
Separate deployment strategy from the client's use
[S]ubsystem needs to be deployed within a larger application, and can hence be mounted anywhere in a URL hierarchy. Because of this, if my client is given a base URL to the server, but not the entry point of this particular REST API, it has to query that URL and from there find out where the REST API that it is interested in happens to reside *in that deployment of the API*.
[I]t may be that the link to my subsystem points to a *new server entirely*, if the deployment is so big that it had to be split. There is no way the client could know this in advance. By following HATEOAS my client is now independent of the deployment strategy of the application and its various subsystems.
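The discovery step described above can be sketched in a few lines. The representation shape (a `links` map keyed by relation name) and the URLs are hypothetical conventions for illustration; the point is that the client hard-codes only the base URL and the relation it wants, not where the subsystem lives today:

```python
def find_service(fetch, base_url, rel):
    """Follow the entry-point document to wherever `rel` resides in this deployment."""
    entry = fetch(base_url)        # GET the one well-known URI
    return entry["links"][rel]     # the server decides where that subsystem is

# Stand-in for an HTTP GET that returns a parsed representation.
def fake_fetch(url):
    # In this deployment the subsystem happens to live on another server entirely.
    return {"links": {"orders": "https://orders.example.net/api/"}}

print(find_service(fake_fetch, "https://example.com/", "orders"))
```

If the subsystem is later remounted elsewhere, or split onto a new server, only the entry-point document changes; the client code above does not.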
I've designed several REST APIs over the last couple of years, but up until the most recent one, I designed and documented them in the "typical" way... My most recent effort is contributing to the design of the REST architecture for the Sun Cloud API to control virtual machines and so on. In addition, I'm very focused on writing client language bindings for this API in multiple languages (Ruby, Python, Java) ... so I get a first hand feel for programming to this API at a very low level... [T]he service would publish only *one* well-known URI... Every other URI in the entire system (including all those that do state changes) are discovered by examining these representations.
Even in the early days, I can see some significant, practical, short term benefits we have gained from taking this approach:
* REDUCED CLIENT CODING ERRORS:
... Looking back at all the REST client side interfaces... about 90% of the bugs have been in the construction of the right URIs for the server... All this goes away when the server hands you exactly the right URI to use for every circumstance. ...
* REDUCED INVALID STATE TRANSITION CALLS:
... [Clients that construct URIs] run the risk of attempting to request state transitions that are not valid for the current state of the server side resource. ...
* FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS:
... you can evolve APIs fairly quickly without breaking all clients, or having to support multiple versions of the API simultaneously on your server. You don't have to wait years for serendipity benefits. Especially compared to something like SOAP where the syntax of your representations is versioned (in the WSDL)...
Having drunk the HATEOAS Kool-Aid now, I would have a really hard time going back.
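The first benefit above, fewer URI-construction bugs, comes down to a small difference in client code. A minimal sketch, with a hypothetical VM representation (field names and URLs are illustrative, not the actual Sun Cloud API):

```python
# Hypothetical representation of a virtual-machine resource.
vm = {
    "id": "vm-42",
    "state": "running",
    "links": {"stop": "https://cloud.example.com/vms/vm-42/stop"},
}

# Fragile style: the client re-derives the URI from pieces of the
# representation plus its own baked-in knowledge of the URL layout.
hand_built = "https://cloud.example.com/vms/" + vm["id"] + "/stop"

# HATEOAS style: use exactly the URI the server handed out.
from_server = vm["links"]["stop"]

print(from_server)
```

The two URIs agree here, but only by coincidence: if the server changes its layout, the hand-built version silently breaks while the link-following version keeps working. The second benefit follows the same way: the server omits the `stop` link entirely when the VM is not running, so an invalid transition is never even attempted.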
Kevin Duffy followed up by saying:
[A]s a client consuming an API that adheres to what Craig is saying, I can, for example, rely on the fact that a given URI might be changed by the server, say due to a bug fix or a new version deployed, but meanwhile my client still works without breaking. ... less worry about my client breaking due to a server URI change. ...
2nd point. ... pagination. In a search engine app for example, a consumer could get back results 1-100, 101-200, etc. By returning the proper URI to get the next series, and/or previous series of results... I can simply pluck the URI the server returns for the next/previous, and use it with assurance.
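Plucking the next-page URI as described reduces the paging client to a short loop. A sketch under assumed conventions (an `items` list and a `links` map with an optional `next` relation; the search URLs are made up):

```python
def all_results(fetch, first_page_url):
    """Collect every result by following the server-supplied `next` links."""
    results, url = [], first_page_url
    while url:
        page = fetch(url)
        results.extend(page["items"])
        url = page["links"].get("next")   # absent on the last page, ending the loop
    return results

# Stand-in for the server: two pages of results linked by `next`.
pages = {
    "/search?q=rest":     {"items": [1, 2], "links": {"next": "/search?q=rest&p=2"}},
    "/search?q=rest&p=2": {"items": [3],    "links": {}},
}
print(all_results(pages.get, "/search?q=rest"))   # → [1, 2, 3]
```

The client never parses or builds a page-number parameter; however the server encodes paging in its URIs, the loop above is unaffected.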
Assaf Arkin said:
You have WS-Addressing (and friends) that allow you to address a given state behind the service, and all sorts of WS-Addressing exchanges that are (to make a point) scripted HATEOAS.
Some operations are only material at a given state, some states create more states (e.g. at some point an order is joined by shipment tracking) so you need to reason about these. Hence WS-BPEL and friends.
It's a different approach but broadly speaking to the same problem HATEOAS solves: knowing what actions are relevant at any given state and how to perform them.
The key point is leakage of business rules. In the absence of hyperlinks, the server has to explain to clients the rules under which a given transition is valid, so that clients can initiate them. By providing hyperlinks, the server can hide those business rules from clients.
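The rule-hiding idea can be made concrete: the client never evaluates the business rule itself, it only looks at which transition links the server chose to include. The order representations and relation names below are hypothetical:

```python
def available_actions(order):
    """Return the transitions the server advertised; the rules that
    produced this set stay entirely on the server side."""
    return set(order["links"])

# A shipped order can only be tracked; a pending one can be paid or cancelled.
# The client sees the outcome of those rules, never the rules themselves.
shipped = {"state": "shipped", "links": {"track": "/orders/7/track"}}
pending = {"state": "pending", "links": {"cancel": "/orders/8/cancel",
                                         "pay": "/orders/8/pay"}}

print(available_actions(shipped))
print(available_actions(pending))
```

If the cancellation policy changes (say, cancellation becomes allowed for a grace period after shipping), the server simply starts including the `cancel` link in more states, and unmodified clients pick up the new behavior.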
One thing that I see missing is "full disclosure" of the operations (verbs) to be used as well as differentiation between actions vs. information. [For example a] series of elements that were influenced by or imported directly the XHTML forms (and/or possibly XForms) elements to identify what actions were possible for a given resource. That way, you'd have the full HATEOAS in the message and the clients wouldn't have to know anything except how to interpret the markup.
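A form-style affordance of the kind proposed above, loosely modeled on XHTML forms, might look like the following. The field names and structure are invented for illustration; the point is that the representation carries the verb, the target URI, and the expected fields, so the client needs no out-of-band knowledge of the operation:

```python
# Hypothetical form-like affordance embedded in a resource representation.
action = {
    "name": "add-rating",
    "method": "POST",                 # the verb is disclosed, not assumed
    "href": "/products/9/ratings",
    "fields": ["stars", "comment"],   # what the server expects to receive
}

def build_request(action, **values):
    """Interpret the markup: validate the supplied values against the
    declared fields and assemble the request tuple."""
    missing = [f for f in action["fields"] if f not in values]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return (action["method"], action["href"], values)

print(build_request(action, stars=5, comment="great"))
```

A client written this way knows only how to interpret the markup, which is exactly the "full HATEOAS in the message" the post asks for: actions and their verbs are disclosed in-band, distinct from plain informational links.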