New Technologies Accelerate and Simplify REST
Learn about new technologies that speed backend development by simplifying and accelerating SQL access, logic and security.
REST has succeeded SOAP as Basis of Web Services
REST is often thought of as a core element of mobile computing, and rightly so. But it is far more than that.
REST has succeeded SOAP as the consensus approach for Web Services. It is at the core of not just mobile, but Web apps (across architectures: C#, Ruby, PHP, Java, ...), the Enterprise Service Bus, B2B, and new devices (e.g. the Internet of Things). All the major Internet companies including Twitter, Google, Yahoo and Amazon use REST today for web services.
REST is complex and time-consuming
Critical as these servers are, the process of installing, developing, running and managing them is supported by little more than frameworks to run your code. And it’s a target-rich environment, rife with tedious tasks that we should simplify and accelerate.
Corporate Data is in SQL
Most corporate data resides in SQL RDBMS. So, building web services means interacting with SQL data.
Building SQL services into REST looks straightforward. In reality, it is complex. Programmers must write SQL to read and write data, marshal it to JSON, enforce corporate policies for logic and security, and do so in a way that provides good performance by minimizing system overhead and avoiding concurrency issues.
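The hand-coded path described above looks something like the following. This is a minimal sketch using Python and an in-memory sqlite3 database; the `customer` table and its columns are illustrative, not part of any particular product:

```python
import json
import sqlite3

def get_customer(conn, customer_id):
    """Hand-coded read: write the SQL, fetch the row, marshal it to JSON."""
    cur = conn.execute(
        "SELECT name, balance, credit_limit FROM customer WHERE id = ?",
        (customer_id,),
    )
    row = cur.fetchone()
    if row is None:
        return json.dumps({"error": "not found"})
    # Marshalling: every column must be mapped by hand, and kept in sync
    # with the table definition as it evolves.
    return json.dumps({"name": row[0], "balance": row[1], "credit_limit": row[2]})

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT,"
             " balance REAL, credit_limit REAL)")
conn.execute("INSERT INTO customer VALUES (1, 'Acme', 100.0, 1000.0)")
print(get_customer(conn, 1))
```

Multiply this by every table, every verb (GET, PUT, POST, DELETE), and every policy check, and the tedium the article describes becomes apparent.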
Simplify and Accelerate
To enable businesses to leverage the Web for competitive advantage and reduced costs, we must provide developers with technology that simplifies and speeds the way web services are defined, deployed and maintained. We’ve all been in this situation; it’s no fun explaining why something so “simple” takes so long.
Just as spreadsheets dramatically reduced the cost and time for financial analysis, we should be able to build and deploy REST servers as fast as we can imagine them, with support to simplify:
| To Do List | Current Best Practice (the definition of hassle) |
| --- | --- |
| Definition | Frameworks exist for the low-level REST service and SQL handling, but require a fair amount of detail coding to design and decode requests, and to build SQL commands |
| API behavior (logic, security) | This represents most of the work, where the presumption is “domain-specific logic requires domain-specific code” |
| Performance and Metrics | More work to build it in |
Installation: REST Server as a Service
Servers are a hassle to install, configure and administer. Database as a Service (DBaaS) might seem like an attractive approach: just “requisition” a database by filling out a form. Some even provide REST interfaces.
That is certainly a promising start, but falls a bit short. What if we already have a database inside our firewall? We certainly need REST interfaces to those databases. And, we need to define business logic and enforce security - no small task - exposing raw data (whether by SQL or REST) is madness.
What we really need is the ability to “requisition” a REST server and “point it” at our cloud-based or on-premise database. Nothing to install, nothing to configure. If your security requirements dictate an on-premise REST server, the simplest approach is an appliance, with point-and-click installation and remote maintenance.
API Infrastructure: Instant REST API
So, creating a REST server just by requesting one is a great start, but we must consider its API. What REST Resources will it expose?
There is considerable opportunity to build the API by leveraging the database catalogs. The two broad approaches include:
- Default REST wherein you just supply the database location - good for simple apps
- Custom REST which is better suited for large applications accessing many tables.
Default REST: supply database location
REST is quite aligned with the spirit of a database: just regard each database table as a REST Resource. More is certainly required, but that’s a pretty good start. Step 1 should be to simply provide a database URL. The system should read the database catalog to determine the tables, and (by convention) provide a REST Resource for each. In particular, this includes GET (with filters and sorts), PUT and POST for updates and inserts, and DELETE. The cookie-cutter code for SQL handling and JSON marshalling is no longer required.
Performance is critical, so pagination is required to break large result sets into multiple (stateless) requests. This recurrent pattern should be provided in the Default REST Server.
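The stateless pagination pattern can be sketched as follows. This is an illustrative keyset-pagination approach in Python over sqlite3: each request carries the last id it saw, so no server-side cursor state is needed. The `product` table and the page-size default are assumptions for the example:

```python
import sqlite3

def paged_get(conn, table, page_size=2, after_id=0):
    """Stateless pagination: the client passes the last id it received,
    and each request is independent of the last."""
    # Note: `table` is interpolated, so it must come from a trusted catalog,
    # never from user input.
    rows = conn.execute(
        f"SELECT id, name FROM {table} WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    ).fetchall()
    # A short page signals the end of the result set.
    next_after = rows[-1][0] if len(rows) == page_size else None
    return {"rows": rows, "next_after": next_after}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 [(i, f"p{i}") for i in range(1, 6)])
page1 = paged_get(conn, "product")
page2 = paged_get(conn, "product", after_id=page1["next_after"])
```

A Default REST server would bake this pattern in, so every generated GET endpoint pages large result sets for free.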
Custom REST: choose tables and columns
Default APIs are good for simple admin apps or single-table screens, but even moderately complex apps require multi-table results. Web latency considerations also require that multi-table results be delivered in a single response message.
These needs should be largely met by enabling developers to define multi-table Resources simply by choosing the tables/columns to be returned, with optional aliasing. Just as for default REST APIs, the system should automate the REST request handling, the SQL, and the JSON response.
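A multi-table Resource delivers the parent and its children in one response message, along the lines of the following sketch. The `purchaseorder` / `lineitem` schema here is illustrative, echoing the ordering example used later in the article:

```python
import json
import sqlite3

def order_with_items(conn, order_id):
    """One response message: the order plus its line items, nested, so the
    client avoids a network round trip per child row."""
    order = conn.execute(
        "SELECT id, amount_total FROM purchaseorder WHERE id = ?", (order_id,)
    ).fetchone()
    items = conn.execute(
        "SELECT product, qty_ordered, amount FROM lineitem WHERE order_id = ?",
        (order_id,),
    ).fetchall()
    return json.dumps({
        "id": order[0],
        "amount_total": order[1],
        "lineitems": [
            {"product": p, "qty_ordered": q, "amount": a} for p, q, a in items
        ],
    })

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchaseorder (id INTEGER PRIMARY KEY, amount_total REAL)")
conn.execute("CREATE TABLE lineitem (order_id INTEGER, product TEXT,"
             " qty_ordered INTEGER, amount REAL)")
conn.execute("INSERT INTO purchaseorder VALUES (1, 30.0)")
conn.executemany("INSERT INTO lineitem VALUES (?, ?, ?, ?)",
                 [(1, "widget", 2, 20.0), (1, "gadget", 1, 10.0)])
response = order_with_items(conn, 1)
```

In the custom-REST vision above, a developer would only pick the tables and columns; the system would generate the equivalent of this handler automatically.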
The support above means that the API interface can be realized in minutes, but what about the behavior of the API? In particular, we need to enforce transactional business logic: validations, complex multi-table computations, and events (e.g. auditing, cloning, sending mail) that must be performed while processing a PUT, POST or DELETE.
Such logic has traditionally been hand-coded. Not unreasonable: domain-specific logic must surely require domain-specific code, right?
Accelerating Key Logic Patterns: Reactive Expressions
But the need to rapidly deploy REST servers forces us to look more deeply at how we might employ new technologies to address recurrent patterns within the business logic. In particular, we note that the following patterns account for the vast majority of the logic code:
- Validations - far more than simple range value checks on a single field, we need to verify multi-table conditions, such as “balance < credit_limit” or “Purchase Order has Line Items”
- Computations - it seems simple to state that the "customer balance is the sum of the unpaid amount_totals". But there is an enormous amount of tedious code in the change detection and propagation: did the amount_total change? Was an order inserted or deleted? Was the order assigned to a new customer? Did the paid flag change? Did multiple of these change at the same time? Imagine how much faster and simpler things would be if such Dependency Management were automatic.
- SQL Handling - the interesting multi-table cases involve SQL, which is cumbersome and tedious to build.
A simple start is Validations. A good technology to apply is Constraint Programming, where we state conditions that must be true at some end state. For Web Service Transactions, we simply specify a set of table expressions that must be true at the familiar end-state called commit (else an exception is returned, and the transaction is rolled back).
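A commit-time constraint check can be sketched as below. This is a hand-rolled illustration in Python over sqlite3, not the product’s mechanism: each validation is a table expression that must hold for every row at commit, and any violation rolls the transaction back:

```python
import sqlite3

def commit_with_validations(conn, validations):
    """Check declared table conditions at commit; roll back on any failure."""
    for sql, message in validations:
        # Each query returns a row only where the condition is violated.
        bad = conn.execute(sql).fetchone()
        if bad is not None:
            conn.rollback()
            raise ValueError(message)
    conn.commit()

# Illustrative validation: balance <= credit_limit must hold for every customer.
VALIDATIONS = [
    ("SELECT id FROM customer WHERE balance > credit_limit",
     "credit limit exceeded"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY,"
             " balance REAL, credit_limit REAL)")
conn.execute("INSERT INTO customer VALUES (1, 50.0, 100.0)")
commit_with_validations(conn, VALIDATIONS)  # passes, transaction commits

conn.execute("UPDATE customer SET balance = 150.0 WHERE id = 1")
try:
    commit_with_validations(conn, VALIDATIONS)
    rejected = False
except ValueError:
    rejected = True  # update rolled back, exception returned to the caller
```

The point of the declarative form is that the developer writes only the condition; the commit-time check, the exception, and the rollback are engine responsibilities.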
Note that the system must de-alias Custom REST Resources, so that Table Validations are re-used. Again, this is cookie-cutter code, representing no more than programming friction.
Derivations are significantly more complex than validations, since they can be interdependent and must therefore be executed in the proper order. Change detection and dependency management are a large portion of business logic: high-volume tedium and complexity. Here is a real opportunity to simplify and accelerate by applying innovative technology.
And one exists: Reactive Programming is a perfect solution for Dependency Management. In such an approach, you define the meaning of a variable (e.g., A = B + C). Unlike procedural code, this stipulates that subsequent changes to B or C are watched - if changed, A is recomputed, which may of course chain to variables dependent on A.
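The mechanism can be illustrated with a toy reactive cell in Python, a bare-bones sketch of the Observer Pattern that the next paragraph describes (a real engine would also handle ordering, cycles, and persistence):

```python
class Cell:
    """A spreadsheet-style reactive cell: define a formula once, and the
    value is recomputed whenever a cell it depends on changes."""
    def __init__(self, value=None):
        self.value = value
        self.formula = None
        self.observers = []   # cells whose formulas reference this one

    def set(self, value):
        self.value = value
        for obs in self.observers:
            obs.recompute()   # propagation may chain to further dependents

    def define(self, formula, *depends_on):
        self.formula = formula
        for dep in depends_on:
            dep.observers.append(self)   # register as a watcher
        self.recompute()

    def recompute(self):
        self.set(self.formula())

b = Cell(2)
c = Cell(3)
a = Cell()
a.define(lambda: b.value + c.value, b, c)   # A = B + C
b.set(10)   # A is recomputed automatically: 10 + 3 = 13
```

Note the declaration reads as a definition of meaning (`A = B + C`), while all the watching and recomputation live in the engine, exactly the division of labor the article advocates.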
Reactive Programming is an automation of the Observer Pattern used to manage dependencies. Its most successful application is the spreadsheet, where cell formulas are Reactive Expressions that provide enormous power yet are very simple.
Reactive Expressions in Action
We can map this technology onto database transaction processing by assigning Reactive Expressions to database columns. For example, we might define the customer balance as the sum of the unpaid order totals. The system would then watch for changes to orders, and (when appropriate) adjust the balance (increase it as orders are added, decrease it as orders are deleted).

To see the power of this technology, let’s examine a familiar example where placing an order cannot exceed the customer’s credit limit. Without Reactive Expressions, the combination of multiple Use Cases (place order, delete order, pay order, change line item, reassign order to a new customer), each with its own Dependency Management, results in 500 lines of code.
By contrast, Reactive Expressions eliminate the dependency management, distilling the logic down to its crystalline form:
```
Derive Lineitem.product_price as copy(product.price)
Derive Lineitem.amount as product_price * qty_ordered
Derive Purchaseorder.amount_total as sum(lineitemList.amount)
Derive Customer.balance as sum(purchaseorderList.amount_total where paid = false)
Validate Customer.creditCheck as balance <= credit_limit
```
Reactive Expressions are remarkably powerful:
- Agility: the expressive power of reactive expressions is 2 orders of magnitude higher than traditional code, as discussed above
- Executable Documentation: these business oriented expressions are both transparent documentation to business users, and maintainable implementation
- Quality: this approach promotes quality via automated watch services that address every Use Case.
- Maintainability: it dramatically reduces maintenance, since automated dependency-based ordering frees you from the tedium of deciphering existing code. The logic above can be specified in any order.
- SQL Automation: beyond dependency management, the expressions above can be used to automate the SQL to access the related data. And, as we’ll see below, provide critical performance optimization.
Hybrid Logic: Events and Reactive Expressions
The real trick here is to integrate the procedural event logic with the reactive logic, so that programmers can choose the technology that fits the job. A bit like a hybrid engine that provides both economy and range, but under your control.
It thus becomes the engine’s responsibility to ensure that Reactive Updates are subjected to Events, and that Event updates are subjected to Reactive Logic, with preservation of dependence-based ordering.
The real challenge is: how do we preserve the critical watch/ordering characteristics for event code? Answer: the system must scan the event code for dependencies. The system can then invoke the code when changes are made to referenced data, in the proper order.
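One plausible way to scan event code for dependencies is to walk its abstract syntax tree and collect the columns it references. The sketch below uses Python’s `ast` module on an illustrative handler; it is an approximation of the idea, not the actual analysis a production engine would perform:

```python
import ast

# Illustrative event handler, held as source so it can be analyzed.
EVENT_SOURCE = '''
def audit_order(row):
    # Fires when the columns it references change.
    log_audit(row.amount_total, row.customer_id)
'''

def referenced_attributes(source):
    """Collect attribute references (e.g. row.amount_total) from event
    source code, approximating the handler's data dependencies."""
    tree = ast.parse(source)
    return {node.attr for node in ast.walk(tree)
            if isinstance(node, ast.Attribute)}

deps = referenced_attributes(EVENT_SOURCE)
```

Given `deps`, the engine can schedule `audit_order` only when `amount_total` or `customer_id` actually change, and slot it into the same dependency-based ordering as the reactive rules.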
Security By Table Authorization Filters
Authentication services are required to validate user access, either by a default scheme or by using an existing (e.g., LDAP) scheme. Authorization services are required for fine-grained (row, column) security. Roles should have a set of Table Permissions used to filter rows and columns from JSON responses. A user is authorized to see the “union” of permissions for all their authorized Roles.
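The row-and-column filtering described above can be sketched as follows. The role names, the permission structure, and the predicates are assumptions made for illustration; the key idea is that grants are unioned across a user’s roles before the JSON response is built:

```python
def authorize(user_roles, permissions):
    """Union the column grants and row predicates across the user's roles."""
    columns = set()
    predicates = []
    for role in user_roles:
        perm = permissions[role]
        columns |= set(perm["columns"])
        predicates.append(perm["row_filter"])
    return columns, predicates

def filter_rows(rows, columns, predicates):
    # A row is visible if any role's predicate admits it (the "union" of
    # permissions); only granted columns survive into the response.
    return [
        {k: v for k, v in row.items() if k in columns}
        for row in rows
        if any(pred(row) for pred in predicates)
    ]

# Hypothetical role definitions.
PERMISSIONS = {
    "sales": {"columns": ["name", "balance"],
              "row_filter": lambda r: r["region"] == "west"},
    "audit": {"columns": ["name", "credit_limit"],
              "row_filter": lambda r: True},
}

rows = [
    {"name": "Acme", "balance": 10, "credit_limit": 100, "region": "west"},
    {"name": "Zeta", "balance": 5,  "credit_limit": 50,  "region": "east"},
]
cols, preds = authorize(["sales"], PERMISSIONS)
visible = filter_rows(rows, cols, preds)
```

A user holding both roles would see both regions and the union of the column sets, which is exactly the semantics the paragraph specifies.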
Critical: performance, metrics
Such pattern automation can yield remarkable results, but it is critical that it does not compromise performance. This stretches from managing network latency to optimizing SQL.
We observed support above for managing latency, with provisions for multi-table REST resources and pagination. Similar patterns such as Optimistic Locking should also be supplied by the engine.
Reactive Expressions are a declarative specification for a desired end result. This high level of abstraction affords the same opportunities for optimization that SQL engines provide for retrieval.
For example, the engine should prune logic where dependent data is not changed, and avoid expensive aggregate queries relying instead on single-row updates (see the customer balance example, above). Caching should be employed to eliminate redundant SQLs and ensure a consistent view of data.
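The single-row adjustment strategy for the customer balance example can be sketched like this. The schema is illustrative; the point is that inserting an order touches one customer row instead of re-running an aggregate query over all of the customer’s orders:

```python
import sqlite3

def insert_order(conn, customer_id, amount_total):
    """Maintain the derived balance with a one-row adjustment, rather than
    recomputing SUM(amount_total) over every unpaid order."""
    conn.execute(
        "INSERT INTO purchaseorder (customer_id, amount_total, paid)"
        " VALUES (?, ?, 0)",
        (customer_id, amount_total),
    )
    # Adjustment: cost is constant per transaction, independent of how many
    # orders the customer already has.
    conn.execute(
        "UPDATE customer SET balance = balance + ? WHERE id = ?",
        (amount_total, customer_id),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("CREATE TABLE purchaseorder (id INTEGER PRIMARY KEY,"
             " customer_id INTEGER, amount_total REAL, paid INTEGER)")
conn.execute("INSERT INTO customer VALUES (1, 0.0)")
insert_order(conn, 1, 25.0)
insert_order(conn, 1, 10.0)
```

An engine holding the declarative `sum(...)` rule can derive this adjustment (and the matching decrements for deletes and payments) automatically, which is the optimization opportunity the paragraph describes.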
The system should also provide critical metrics to verify that the APIs are not failing and are responding in a timely manner. And it should provide logging that reveals the logic and SQL handling, to diagnose logic and performance issues.
We have described leveraging technology to simplify and expedite the construction and operation of REST servers, empowering developers to deliver high-quality results with n-fold improvements in agility. It is available today. If you want to check it out, head on over to www.espressologic.com. Or send us your comments - we’d love to hear from you!