
Architecting App Logic for Scalability, Reuse, and Agility

App Logic for transactional systems enforces critical integrity for database-backed web/mobile apps. It's a significant part of the system, often nearly half the code.

Today's systems include significant internal/external integration. Logic needs to be re-used for these as well.

It's remarkably unclear how to architect this logic for re-usable integrity and scalability, while also meeting requirements for business agility. Here’s how to deliver very high-quality technical results, with remarkable agility.


App Logic Architecture — Objectives

We focus here on Transaction Logic, as distinct from Process Logic (aka Workflow) or Decision Logic. Transaction logic supports the retrieval and update for web/mobile apps, and internal/external systems (integrations). It must not only store and retrieve the data, but it must also enforce business policies for security, derivation, validation, and integration.

Let’s identify a few key objectives for an App Logic Architecture.



Abstraction Layer over Multiple Data Sources

Multi-Data Source transactions have become the norm. This naturally recommends an abstraction layer that presents a logical view of data to multiple transaction sources (web, mobile, and systems). This means hiding the underlying database designs, locations, and architectures.

Scalability

Transactional systems focus on high performance for short interactions. Stateless servers support horizontal scalability, but more is required: careful optimization of database and network traffic is critical. For example, failure to provide pagination has resulted in many systems that fail at production data volumes. Large-scale systems often limit access to paths supported by indices.

Reused Logic for Security and Integrity

Business integrity depends on the reliable enforcement of logic for security and integrity. This requires that such logic be partitioned to servers, so it can be re-used regardless of transaction source.

Business Agility — Low Cost and Risk

Architectural approaches influence business cost and risk. Is the approach replicable across a team? Can results be delivered to meet the agility requirements of the business? Perhaps the cartoon below looks familiar...


From Triggers, to App Servers, to Services

It’s worth briefly reviewing some common past logic architectures, and assessing them against our goals.

| Approach | Advantages | Issues |
| --- | --- | --- |
| Database Triggers and Procedures | Good reuse: logic is centralized | Limited to a single schema, proprietary, and difficult to debug |
| App Servers | Good Multi-Data Source abstraction | Poor logic re-use in practice: logic was typically in App Controllers |
| Services | Enables our objectives, if properly architected | Quite unclear how to architect services, and particularly how they share underlying logic |


So, our approach to App Logic is Service Logic. To make this architecture clear, we need:

  • A means to define Services.
  • A Service Logic Architecture to process them.

Sample Problem: Place Order

Let's illustrate the Service Logic approach with an order processing example. Per our re-use objective, orders may originate from web/mobile apps, or from a B2B Partner POSTing an order as described below. For this example, we'll use the Northwind database.

  1. Partner Order: A partner places an order by POSTing to an API.
  2. Check Credit: As the App Logic Server persists the order, it runs some typical multi-table logic (sum the order amount totals into the balance, validate credit limit is not exceeded, etc).
  3. Alert Shipper: Reformat and post the order, to alert the shipper.

PartnerOrder Service Example

A Service Endpoint is a bit like a view — it selects and aliases the required columns. The results are tables, which support the life cycle operations create, read, update and delete.

Services go far beyond database views, providing complex, nested joins of multiple databases. Let's illustrate using the PartnerOrder example.

We want to process POSTs like this. Note this is a compound (nested) object, containing multiple types of data necessary for a transaction. In this case, an Order and a list of Items:

{
"Customer": "VINET",
"Shipper":  "Pavlovia",
"Items":
  [
    {
      "Product": "Chai",
      "Quantity": 1
    },
    {
      "Product": "Chang",
      "Quantity": 2
    }
  ]
}


PartnerOrder Service Definition

To process such a payload, we can imagine defining a Service (RESTful endpoint) with a SQL-like syntax:

Create Service PartnerOrder as
 Select CustomerID as Customer, Shipper.CompanyName as Shipper(Lookup), OrderID
 from Northwind:Orders
  [Join OrderDetails as Items Select Quantity, Products.ProductName as Product(Lookup)]


This is a significant extension over a mere description of the API, such as RAML. Not only does it describe the API, but the mapping onto the underlying Data Sources makes it fully executable, as described below.

Multi Data Source Joins

The SQL-like syntax for Create Service above defines three joins, using Role Names. Role Names are a slight extension to database Foreign Key Names:

  • They are bi-directional — where Foreign Keys identify the “parent” for a “child” (the Shipper row for a Northwind:Orders row, or the Products row for each Items row), Role Names also identify the “children for a parent”. In the example above, OrderDetails is such a child Role Name.
  • They are multi-database — where Foreign Keys operate within a schema, Roles can operate across schemas. For example, an Orders’ Shipper.CompanyName may or may not be in the same schema as the Order. It might even be a service, not a database.

We use the term Data Source (vs. database) to denote that the entities need not be restricted to database tables. They could, for example, be remote services accessed via RESTful APIs.

Metadata to Enable Client Automation

The service definitions make the service discoverable. For example, they can be used to automate the creation of client applications. Such discovery can be via the syntax above, or (more likely) standards such as Swagger.

Life Cycle Operations

The definition above is the content. As with SQL views, it is implicit that the definition is executable, providing the Life Cycle (CRUD) operations: Create, Read, Update, and Delete.

This is a big strength of RESTful APIs, relative to function-oriented systems such as Stored Procedures or SOAP. Client frameworks can leverage metadata and life cycle operations to create forms that make the operations available to end-users.

App Logic Architecture — Services

As Scotty might have said, “There be Patterns!” — pieces of well-known functionality required across all systems. That’s good news — we can identify where to implement (and possibly automate) them.

The diagram below proposes an App Logic Architecture — Service Logic (red) with shared Domain Logic (yellow), each implementing well-known patterns (small font).

The API Server executes Service Logic and Domain Logic:

  1. Service Logic is service-specific, e.g., processes a PartnerOrder service as described above.
  2. Domain Logic is enforced by Domain Objects (e.g., Orders and Order Details) — persistence-aware objects that are shared between all services. They may, for example, be Hibernate/JPA/Entity-Framework objects, including key security and integrity logic that governs read/write operations such as the Check Credit logic for the PartnerOrder service.
  3. Map Logic maps Service Rows (see the Partner Order JSON, above) to Domain (database) Rows, to enable Domain Logic to be reused (shared) for all Services.

The sections below describe these in further detail.

Service Logic

Service Logic is the listener for requests:

  • Authentication: this is typically addressed by delegating to a security manager (LDAP, Google, etc), and returning an auth token which is verified to ensure a valid request.
  • Serialization: it serializes/deserializes Service Objects (POJOs, POCOs) to JSON. Libraries exist to automate this (e.g., Jersey to listen, Jackson for parsing).
  • Transactions: each request is a transaction, so the listener code is a good place to implement these boundaries.

While most service data is obtained from the underlying data source, most requests need a few bits of computed data, such as an age computed from a birthdate, or a full name computed from a first/last name. Such derivations can be performed in Service Logic.
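Such a derivation might be sketched as follows. This is a minimal illustration, not the article's implementation; the field names (`birthDate`, `firstName`, `lastName`) are hypothetical:

```javascript
// Hypothetical sketch: Service Logic adds computed fields as Domain Rows are
// serialized, so clients receive derived values that are never stored.
function withComputedFields(domainRow) {
  const msPerYear = 365.25 * 24 * 60 * 60 * 1000;
  const age = Math.floor(
    (Date.now() - new Date(domainRow.birthDate).getTime()) / msPerYear
  );
  return {
    ...domainRow,
    fullName: `${domainRow.firstName} ${domainRow.lastName}`, // derived, not stored
    age,                                                      // derived from birthDate
  };
}
```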

For GET requests, Service Logic delegates each “level” of the nested document to the Domain Read Logic. These return Domain Objects, which are transformed into Service Objects using Mapping Logic. In our example, that would be one request for Orders, one for Shipper, one for Order Details, and one for Products. These results are then “joined” into a JSON response and returned to the client.

Join optimizations are possible and desirable, such as chunking. Imagine retrieving a (large) number of orders for a customer. Instead of retrieving each Order's Order Details in a separate query, the system can and should leverage pagination, issuing one Order Details query for a page of Orders in a single GET. This yields one query instead of 20, and enables multi-data-source joins.
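The chunked join can be sketched as below. This is an assumed shape, not the article's code; `fetchDetailsForOrders` stands in for a data-access call that issues a single SQL `IN` query:

```javascript
// Hypothetical sketch of the chunked join: one child query for the whole
// page of parents, with children grouped back under their parents in memory.
function joinPage(orders, fetchDetailsForOrders) {
  const details = fetchDetailsForOrders(orders.map((o) => o.OrderID)); // 1 query, not N
  const byOrder = new Map();
  for (const d of details) {
    if (!byOrder.has(d.OrderID)) byOrder.set(d.OrderID, []);
    byOrder.get(d.OrderID).push(d);
  }
  return orders.map((o) => ({ ...o, Items: byOrder.get(o.OrderID) || [] }));
}
```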

Parent Lookups are particularly common and useful for update requests. They were introduced in the Service Definition with the (Lookup) syntax, shown below:

Create Service PartnerOrder as
 Select CustomerID as Customer, Shipper.CompanyName as Shipper(Lookup), OrderID
 from Northwind:Orders
  [Join OrderDetails as Items Select Quantity, Products.ProductName as Product(Lookup)]


This common case arises because the underlying database relationship uses autonum fields, here Products.ProductID. Clients cannot (and should not) know these numbers to supply them in the JSON request. The Lookup syntax instructs the service logic to use the supplied Product value (e.g., “Chai”) to obtain the ProductID, and supply this to the Domain Write Logic.

Similarly, the service logic can obtain OrderIDs generated by the DBMS, and “stamp” these into each Items row.
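The lookup step might look like this sketch. It is illustrative only; `productsByName` stands in for a database lookup against the Products table:

```javascript
// Hypothetical sketch of (Lookup) processing: resolve the client-supplied
// natural key (Product name) to the autonum key (ProductID) before handing
// the row to Domain Write Logic.
function resolveLookups(items, productsByName) {
  return items.map((item) => {
    const product = productsByName.get(item.Product);
    if (!product) throw new Error(`Unknown product: ${item.Product}`);
    return { ProductID: product.ProductID, Quantity: item.Quantity };
  });
}
```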

Map Logic

Map Logic maps Service Rows (see the Partner Order JSON, above) to Domain (database) Rows. It accounts for the selection and aliases of datasource attributes to those used in the service definition.

In our service definition, we have two Service Rows: PartnerOrder and Items. These map to four underlying Domain entities: Orders, Shippers, Order Details, and Products.

This “gather read/scatter write” logic is straightforward, but important: it enables Domain Logic to be reused (shared) for all Services. This addresses a common but serious problem, where the same transaction can produce different results depending on the service through which it arrives (web app vs. B2B POST).
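The "scatter write" direction can be sketched as follows, assuming the PartnerOrder definition above; the function name and row shapes are hypothetical:

```javascript
// Hypothetical sketch of Map Logic: one nested Service Row is split into
// Domain Rows (Orders, Order Details), with service aliases mapped back to
// the datasource column names from the service definition.
function scatterPartnerOrder(serviceRow) {
  const order = { CustomerID: serviceRow.Customer };  // alias Customer -> CustomerID
  const details = serviceRow.Items.map((i) => ({
    Quantity: i.Quantity,
    ProductName: i.Product,                           // alias Product -> ProductName
  }));
  return { order, details };
}
```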

Domain Read Logic

At its core, Domain Read Logic accepts an Entity Name and a filter request, and returns a list of Domain Objects. The services described below reduce client logic and ensure data integrity and security.

Filter requests can be designed to meet standards such as OData, or common practices. In either case, SQL injection is a critical design issue.

For interactive clients, pagination should be provided to conserve database and network resources, while maintaining a stateless server that can be scaled. A common approach is to return a configurable number of rows (e.g., 20), and a URL for retrieving the next 20. The Service Logic can return this to the client.
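A stateless pagination response might be sketched like this; the page size and URL shape are assumptions for illustration:

```javascript
// Hypothetical pagination sketch: return one page of rows plus a "next" URL,
// so the server holds no cursor state between requests.
function paginate(rows, offset, pageSize, baseUrl) {
  const page = rows.slice(offset, offset + pageSize);
  const next =
    offset + pageSize < rows.length
      ? `${baseUrl}?offset=${offset + pageSize}&limit=${pageSize}`
      : null; // no more rows
  return { rows: page, next };
}
```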

Similarly, the server should support optimistic locking, so that data can be updated without holding locks, while still detecting changes made by others since retrieval. A common approach is to attach a checksum to each row, returned to the client and submitted on updates.

Data Security is a key requirement that should be factored out of client applications by partitioning it to the server. This limits the columns and rows a client sees in the response, per their authorization. It requires that we augment the filter by injecting additional security expressions into the query sent to the database, where they can be optimized.
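The injection itself can be sketched very simply; the predicate strings here are illustrative, not a real query builder:

```javascript
// Hypothetical sketch of security injection: the client-supplied filter is
// ANDed with the role's security predicate before the query reaches the
// database, so the DBMS can optimize the combined expression.
function injectSecurity(clientFilter, roleFilter) {
  if (!roleFilter) return clientFilter;
  if (!clientFilter) return roleFilter;
  return `(${clientFilter}) AND (${roleFilter})`;
}
```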

Domain Write Logic

Domain Write Logic is a significant aspect of any data-oriented system. It protects data integrity by enforcing business policies for validation, derivation, and integration. The basic interface is to accept a Domain Row, with an indicator of whether the data is to be inserted, updated or deleted.

The domain logic performs the required derivations and validations. Failed validations cause transactions to roll back, and exceptions to be thrown to clients.

Domain-specific logic often spans Domain Entities. Consider an Order with multiple items. Each Item insertion increases the AmountTotal, which in turn increases the Balance. These can result in multiple SQL commands, so important performance considerations apply:

  • Pruning: Skipping logic that is not required can eliminate SQL statements. For example, changing (only) an Order's date does not require the Balance to be recomputed.
  • SQL Optimization: If the Balance does need to be recomputed, it can be orders of magnitude faster to physically store it (where practical), and adjust the balance with a 1-row update, rather than using SQL Sum commands to add the existing orders and their items.
  • Caching: To minimize database trips, it is important to consider caching approaches to avoid multiple reads/write of the same row (e.g., the Customer and the Order for each Item row).
  • Batching: Multiple rows can be sent in a single update statement.
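The pruning and adjustment optimizations above can be sketched together; this is an in-memory illustration (a real system would issue the corresponding 1-row updates):

```javascript
// Hypothetical sketch: when an Item's amount changes, adjust the stored
// aggregates with a delta (two 1-row updates) instead of re-summing every
// Order and Item with SQL aggregate queries.
function adjustOnItemChange(customer, order, oldAmount, newAmount) {
  const delta = newAmount - oldAmount;
  if (delta === 0) return { customer, order }; // pruning: referenced data unchanged
  const adjustedOrder = { ...order, AmountTotal: order.AmountTotal + delta };
  const adjustedCustomer = { ...customer, Balance: customer.Balance + delta };
  return { customer: adjustedCustomer, order: adjustedOrder };
}
```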

App Logic Automation — Declarative Logic

To recap, we have proposed an architecture that provides a data abstraction layer for mobile/web app/B2B transactions, ensures reuse through shared Domain Logic, and provides scalable performance:

You can certainly implement this in ad hoc code, building the capabilities above into each system you create. But this is a massive amount of code. Perhaps you share my bemused horror — it should not be so cumbersome. This falls far short of our enterprise agility objective.

Instead of conventional procedural code, let’s introduce some automation by using declarative specifications.

Structure Driven Automation

The items above not marked in italics can be automated with a runtime system backed by a model that describes the structure of the data, specifically:

  • Relational-like Entities, Attributes, and Relationships — this might be drawn from relational schemas or Swagger.
  • Extended with Role Names — Parent Role Names to return a parent (e.g., Shipper) from a Child (e.g., Order), and Child Role Names to return children (e.g., OrderDetails for an Order).
  • Multiple Data Sources — the entities are drawn from multiple underlying Data Sources, both relational (such as Northwind: above) and non-relational.
  • Service Definitions — declarative service specifications might be captured in syntax (introduced above), JSON files, or model metadata.

The runtime system automates the functionality above (e.g., dynamic SQL generation/execution for relational data, service invocation for services) — an API Server.

Logic Driven Automation

The items in the diagram above marked in italics are domain-specific, and cannot be inferred from a model of the data structure. We need a complementary description of the logic, both for security and integrity, as follows.

Data Security

We can associate filter expressions with user Roles, with Authorization Logic like this:

Secure Northwind:Employees for US-Admins as 
  Select Name, Title, Phone Where State = ‘NY’


This declares row/column limitations for users authorized for the US-Admins role. Here, we ensure that employees do not see each other's salaries (only Name, Title, and Phone are selected), and see only employees in NY.

In practice, we would need to extend this to include LDAP role properties (e.g., a user's departmentID), and the contents of a data row associated with the user. The details are beyond the scope of this article.

The runtime system can use this data to augment the generated SQL to “inject” the security provisions. For example, the generated SQL would be an AND of the client-supplied filter, and the security provisions. This enables the DBMS (or service) to optimize access.

This injection occurs for any Service Definition defined over the secured entity (here, Employees), enabling compliance officers to verify security without needing to check the logic paths of all client and service operations.

This also partitions to the server logic that should not, and cannot, be entrusted to clients. So not only is security enforced, but client logic is simplified and reduced.

Data Integrity

The validation/derivation logic protects the integrity of the database. This would be coded as domain logic, perhaps in Java or C#.

The most common approach to executing such logic is for a persistence framework (e.g., JPA, Entity Framework) to provide events: your code goes here. This would seem to make sense, after all, it’s domain-specific logic.

Such logic is not only critical, but it is also a substantial part of any database-oriented system, as much as half. It would thus be highly desirable to employ a higher level of abstraction.

Instead of procedural (imperative) programming, let’s imagine a declarative specification. Just as a spreadsheet associates expressions with cells, we apply functions to entities and attributes to validate and compute data, with Integrity Logic like this:

Customers.Balance < CreditLimit;                                        // validation
Customers.Balance := sum(Orders.AmountTotal where ShippedDate == null); // derivation

Orders.AmountTotal := sum(OrderDetails.Amount);

OrderDetails.Amount := Quantity * UnitPrice;
OrderDetails.UnitPrice := parentcopy(Products.UnitPrice);               // price changes not propagated


Unlike imperative statements that run (only) when called, these are end-conditions. The system runs whatever logic and data access are required, whenever it is required, to ensure the end conditions are met by the end of a transaction:

  • Validation failures cause the transaction to be rolled back, with an exception thrown to the client.
  • Derivations (denoted by :=) are recomputed (only) when the referenced data is changed (inserted, updated, or deleted). These chain to other derivations/validations, as in this example.
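To make the chaining concrete, here is a hand-written sketch of what the system effectively does when an Item is inserted. A real declarative engine generates this behavior from the rules; the function and field names here are illustrative:

```javascript
// Hypothetical sketch of the derivation chain for one Item insert: compute
// Amount, chain to the Order's AmountTotal and the Customer's Balance, then
// check the validation before committing any change.
function insertItem(customer, order, item) {
  const amount = item.Quantity * item.UnitPrice;      // OrderDetails.Amount
  const newBalance = customer.Balance + amount;       // chains to Customers.Balance
  if (!(newBalance < customer.CreditLimit)) {         // validation: Balance < CreditLimit
    throw new Error("Credit limit exceeded; transaction rolled back");
  }
  item.Amount = amount;
  order.AmountTotal += amount;                        // chains to Orders.AmountTotal
  customer.Balance = newBalance;
  return { customer, order, item };
}
```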

Reuse

Consider the reuse implications. Our logic architecture ensures domain logic is reused over services. Declarative logic ensures it is reused over verbs. In our example, the balance is increased as orders are placed, and decreased when they are deleted, or paid — all inferred from the declarative derivation (:=).

As for declarative security, this dramatically improves our ability to ensure compliance. Instead of following all the code paths (potentially in clients, services and domain logic), we simply need to verify the rules are correct. Since declarative logic becomes “built-in” to the Domain Objects, the system has the responsibility to ensure it’s always executed.

That’s not just reuse; that’s automatic reuse.

Conciseness

Beyond reuse, this declarative approach has profound effects on the amount of code we must write. The 5 lines of logic above would require over 400 lines of Java code. This is a substantial step in addressing our Business Agility objective.

Performance

Since declarative statements specify what result is required, the system can (should!) determine how to achieve it while minimizing SQL. So, the pruning and SQL optimizations (described above) can be automated.

Extensible

Declarative logic can address well over 90% of requirements, but not all. We therefore require a conventional event mechanism.

Here is sample code that reformats a new order (using ShipperAPIDef, another Service Definition) and POSTs it to the shipper. Similar code could send emails, publish messages, etc.

var shipper = row.Shippers;     // object model (row) provides accessors to related objects
if (shipper !== null && shipper.webHookURL !== null) {
    var msg = logicContext.transformCurrentRow("ShipperAPIDef"); // ShipperAPIDef xformation
    SysUtility.restPost(shipper.webHookURL, {}, ConstantsLib.supplierAuth, msg);
}


Alternative Technologies

RETE “production rules” look quite similar; however, the fundamental architecture of most existing implementations is poorly suited to transaction processing:

Performance: RETE engines are not connected to transactions against a data source, so they are inherently unaware of old (pre-transaction) values. They are therefore unable to make critical pruning optimizations by detecting rules whose referenced data has not changed. This can result in an order-of-magnitude performance impact.

Semantics: Many transaction validations depend on old values, which also requires the data source connection noted above.

Integrity: RETE engines must be manually called, which is a source of error. Transaction logic, on the other hand, is invoked (and optimized) automatically.

Visual programming is an excellent means of expressing procedural flow, such as a workflow process. It is not declarative, so would require the same 400 “steps” as a procedural program, instead of the 5 logical statements above.

Summary

We’ve covered a service architecture providing a data abstraction layer for web/mobile access, and for internal/external systems integration, which provides scalable performance and increased integrity through reuse. We’ve next indicated elements that could be built into a reusable framework.

A framework providing Structure Driven Automation is a large task that might conceivably fit within the scope of what a large organization can tackle. The cost of such a framework could be spread over multiple projects.

Logic Driven Automation is far larger and would be best provided by product companies.

The results, however, are remarkable. Our cartoon at the top becomes true — projects in hours instead of weeks. The entire implementation consists of the Service Definition, Integrity Logic, and Event code shown above.

So, you might want to suggest such an effort to your vendors, or, if you are a vendor...dilly dilly!

One existing implementation is CA Live API Creator (LAC, Version 5, Copyright © 2019 Broadcom), which provides many of the services described above. It’s described here. Try it on your data, with this eval (install procedure is to unpack and run the zip).

Discussion

I've made some claims ("hours vs. weeks") that I recognize might be hard to believe. I encourage you to comment, and I will try to respond.


Further Reading

Azure Logic Apps Lifecycle — The Big Picture

Is Logic Apps or Durable Functions Best for Your Workflow?

Introducing Azure Logic Apps Integration Service Environment (ISE)
