
Backend-for-Frontend: The Demo

This article is a demo for the backend for the frontend.

By Nicolas Fränkel · Aug. 21, 22 · Analysis


In one of my earlier posts, I described the Backend-for-Frontend pattern. In short, it offers a single facade over multiple backend parts. Moreover, it provides each client type, e.g., desktop or mobile, with exactly the data it needs, and no more, in the format that client type requires.

The Use-case

Imagine the following use case. In an e-commerce shop, the home page should display several unrelated pieces of data at once.

  • Products: The business could configure which items are shown on the home page. They could be generic "hero" products, or personalized ones, e.g., products that the customer ordered previously.
  • News: Again, the newsfeed could be generic or personalized.
  • Profile-related information
  • Cart content
  • Non-business-related information, such as build number, build timestamp, version, etc.

Depending on the client, we want more or less data. For example, on a client with a limited display size, we probably want to limit a product to its name and its image. On the desktop, on the other hand, we are happy to display both of the above, plus a catchphrase (or a catchier, and longer, name) and the full description.

Every client requires specific data, and for performance reasons, we want to fetch it in a single call. It sounds like a use case for a BFF.

Setting up the Demo

In order to simplify things, I'll keep only three sources of data: products, news, and technical data. Three unrelated data sources are enough to highlight the issue.

In the demo, I'm using Python and Flask, but the underlying technology is irrelevant since BFF is an architectural pattern.

The initial situation is a monolith. The monolith offers an endpoint for each data source and a single aggregating endpoint for all of them:

Python
 
@app.route("/")
def home():
    return {
        'products': products,     #1
        'news': news,             #1
        'info': debug             #1
    }
  1. Somehow get the data internally, e.g., from the database

At this point, everything is fine. We can provide different data depending on the client:

  • If we want to put the responsibility on the client, we provide a dedicated endpoint
  • If we want to handle it server-side, we can read the User-Agent header from the request (or agree on a specific custom X- HTTP header)

As it doesn't add anything to the demo, I won't provide different data depending on the client in the following.
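For completeness, the server-side option could look roughly like the sketch below. The `is_mobile` heuristic and the mobile field subset are assumptions for illustration, not part of the demo code.

```python
# Sketch: tailor the product payload to the client type inferred from the
# User-Agent header. The heuristic and field subsets are illustrative only.
MOBILE_PRODUCT_FIELDS = {'name', 'image'}

def is_mobile(user_agent):
    # Naive heuristic, good enough for a demo
    return 'Mobile' in user_agent

def shape_products(products, user_agent):
    # Mobile clients get a trimmed-down product; desktop gets everything
    if is_mobile(user_agent):
        return [{k: v for k, v in p.items() if k in MOBILE_PRODUCT_FIELDS}
                for p in products]
    return products
```

In a Flask handler, the second argument would come from `request.headers.get('User-Agent', '')`.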

Migrating to Microservices

At some point, the organization decides to migrate to a microservices architecture. The reason might be that the CTO read about microservices in a blog post, that the team lead wants to add microservices to their resume, or even that development grew too big and the organization genuinely needs to evolve. In any case, the monolith has to be split into two microservices: a catalog providing products and a newsfeed providing... news.

Here's the code for each microservice:

Python
 
@app.route("/info")
def info():
    return debug                 #1


@app.route("/products")
def get_products():
    return jsonify(products)     #2
  1. Each microservice has its own debug endpoint
  2. The payload is not an object anymore but an array
Python
 
@app.route("/info")
def info():
    return debug                 #1


@app.route("/news")
def get_news():
    return jsonify(news)         #1
  1. As above

Now, each client needs two calls and has to filter out the data that isn't relevant to it.
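The extra burden on the client can be sketched as follows. The URI layout and the `get_json` helper are assumptions, and the fetcher is injectable so the sketch stays testable without a running backend.

```python
import json
from urllib.request import urlopen

def _get_json(uri):
    # Default fetcher; swap in your HTTP client of choice
    with urlopen(uri) as resp:
        return json.load(resp)

def fetch_home(catalog_uri, news_uri, get_json=_get_json):
    # Two round trips instead of the monolith's single one...
    products = get_json(catalog_uri + '/products')
    news = get_json(news_uri + '/news')
    # ...and the client reassembles the payload itself
    return {'products': products, 'news': news}
```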

Dedicated Backend-for-Frontend

Because of the issues highlighted above, a solution is to develop one application that does the aggregation and filtering. There should be one per client type, maintained by the same team that owns the client. Again, for this demo, it's enough to have a single one that only aggregates.

Python
 
@app.route("/")
def home():
    products = requests.get(products_uri).json()            #1
    catalog_info = requests.get(catalog_info_uri).json()    #2
    news = requests.get(news_uri).json()                    #1
    news_info = requests.get(news_info_uri).json()          #2
    return {
      'products': products,
      'news': news,
      'info': {                                             #3
          'catalog': catalog_info,
          'news': news_info
      }
    }
  1. Get data
  2. Get debug info
  3. The returned JSON should be designed for easy consumption on the client side. To illustrate it, I chose for the debug data to be nested instead of top-level.
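One caveat: the four `requests.get` calls above run sequentially, so their latencies add up. A possible mitigation, not part of the demo code, is to fire them concurrently; the sketch below uses stdlib threads and an injectable fetcher as assumptions.

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def _get_json(uri):
    with urlopen(uri) as resp:
        return json.load(resp)

def fetch_concurrently(uris, get_json=_get_json):
    # Issue every backend call in parallel; results stay keyed by name
    with ThreadPoolExecutor(max_workers=len(uris)) as pool:
        futures = {name: pool.submit(get_json, uri) for name, uri in uris.items()}
        return {name: f.result() for name, f in futures.items()}
```

The BFF handler would then build its response from something like `fetch_concurrently({'products': products_uri, 'news': news_uri})`.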

Backend-for-Frontend at the API Gateway Level

If you're offering APIs, whether internally or to the outside world, chances are high that you're already using an API gateway. If not, you should seriously consider adopting one. In the following, I assume that you do use one.

In the previous section, we developed a dedicated backend-for-frontend application. However, requests already go through the gateway. In this case, the gateway can be seen as a container in which to deploy BFF plugins. I'll be using Apache APISIX to demo how to do it, but the idea can be replicated on other gateways as well.

First things first, there's no generic way to achieve the result we want. For this reason, we cannot rely on an existing plugin; we have to design our own. APISIX documents how to do so. Our goal is to fetch data from all the endpoints as above, but via the plugin.

First, we need to expose a dedicated endpoint, e.g., /bff/desktop or /bff/phone. APISIX allows such virtual endpoints via the public-api plugin. Next, we need to develop our own plugin, bff. Here's the configuration snippet:

YAML
 
routes:
  - uri: /                     #1
    plugins:
      bff: ~                   #2
      public-api: ~            #2
  1. For demo purposes, I preferred to set it at the root instead of /bff/*
  2. Declare the two plugins. Note that I'm using the stand-alone mode.

Inside the plugin code, we first need to describe the plugin, not forgetting to return the table at the end of the file:

Lua
 
local plugin_name = 'bff-plugin'

local _M = {                           --1
    version = 1.0,
    priority = 100,                    --2
    name = plugin_name,
    schema = {},                       --3
}

return _M                              --4
  1. The table needs to be named _M
  2. In this scenario, priority is irrelevant as no other plugins are involved (apart from public-api)
  3. No schema is necessary as there's no configuration
  4. Don't forget to return it!

A plugin that has a public API needs to define an api() function returning an object describing matching HTTP methods, the matching URI, and the handler function.

Lua
 
function _M.api()
    return {
        {
            methods = { 'GET' },
            uri = "/",
            handler = fetch_all_data,
        }
    }
end

Now, we have to define the fetch_all_data function. It's only a matter of making HTTP calls to the catalog and newsfeed microservices. Have a look at the code if you're interested in the exact details.

At this point, the (single) client can query http://localhost:9080/ and get the complete payload.

In a "real-life" microservices-based organization, every team should be independent of the others. With this approach, each team can develop its own BFF as a plugin and deploy it independently on the gateway.

Bonus: A Poor Man’s BFF

The microservices architecture creates two problems for clients:

  1. The need to fetch all data and filter out the unnecessary ones
  2. Multiple calls to each service

The BFF pattern fixes both of them, at the cost of custom development, whether as a dedicated app or as a gateway plugin. If you're not willing to spend time on custom development, you can still avoid #2 by using a nifty Apache APISIX plugin, batch-requests:

The batch-requests plugin accepts multiple requests, sends them from APISIX via HTTP pipelining, and returns an aggregated response to the client.

This improves the performance significantly in cases where the client needs to access multiple APIs.

-- batch-requests

In essence, the client would need to send the following payload to a previously configured endpoint:

JSON
 
{
    "timeout": 502,
    "pipeline": [
        {
            "method": "GET",
            "path": "/products"
        },
        {
            "method": "GET",
            "path": "/news"
        },
        {
            "method": "GET",
            "path": "/catalog/info"
        },
        {
            "method": "GET",
            "path": "/news/info"
        }
    ]
}

The response will in turn look like this:

JSON
 
[
  {
    "status": 200,
    "reason": "OK",
    "body": "{\"ret\":200,\"products\":\"[ ... ]\"}",
    "headers": {
      "Connection": "keep-alive",
      "Date": "Sat, 11 Apr 2020 17:53:20 GMT",
      "Content-Type": "application/json",
      "Content-Length": "123",
      "Server": "APISIX web server"
    }
  },
  {
    "status": 200,
    "reason": "OK",
    "body": "{\"ret\":200,\"news\":\"[ ... ]\"}",
    "headers": {
      "Connection": "keep-alive",
      "Date": "Sat, 11 Apr 2020 17:53:20 GMT",
      "Content-Type": "application/json",
      "Content-Length": "456",
      "Server": "APISIX web server"
    }
  },
  {
    "status": 200,
    "reason": "OK",
    "body": "{\"ret\":200,\"version\":\"...\"}",
    "headers": {
      "Connection": "keep-alive",
      "Date": "Sat, 11 Apr 2020 17:53:20 GMT",
      "Content-Type": "application/json",
      "Content-Length": "78",
      "Server": "APISIX web server"
    }
  },
  {
    "status": 200,
    "reason": "OK",
    "body": "{\"ret\":200,\"version\":\"...\"}",
    "headers": {
      "Connection": "keep-alive",
      "Date": "Sat, 11 Apr 2020 17:53:20 GMT",
      "Content-Type": "application/json",
      "Content-Length": "90",
      "Server": "APISIX web server"
    }
  }
]

It's up to the client to filter out unnecessary data. It's not as good as a true BFF, but we still managed to turn four calls into a single one.
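Sticking to the sample payloads above, the client-side plumbing reduces to two small helpers, sketched here. The parsing assumes the response shape shown, where each `body` is a JSON string.

```python
import json

def build_batch_payload(paths, timeout_ms=502):
    # Pipeline payload for APISIX's batch-requests endpoint, as shown above
    return {
        'timeout': timeout_ms,
        'pipeline': [{'method': 'GET', 'path': path} for path in paths],
    }

def unpack_batch_response(responses):
    # Keep successful entries and decode each embedded JSON body
    return [json.loads(r['body']) for r in responses if r['status'] == 200]
```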

Conclusion

A microservices architecture brings a ton of technical issues to cope with. Among them is the need to send each kind of client only the data it requires. The BFF pattern addresses this issue.

In the previous post, I described the pattern from a theoretical point of view. In this post, I used a very simple e-commerce use case to demo how to implement BFF with and without the help of Apache APISIX.

The complete source code for this post can be found on GitHub.

To go further:

  • Pattern: Backends For Frontends
  • The API gateway pattern versus the Direct client-to-microservice communication
  • API Gateway vs Backend For Frontend

Published at DZone with permission of Nicolas Fränkel, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
