
A Poor Man’s API

Learn more about a poor man's API, an alternative to building an entire REST API.

By Nicolas Fränkel · Nov. 23, 22 · Tutorial

Creating a full-fledged API requires resources, both time and money. You need to think about the model, the design, the REST principles, etc., before writing a single line of code. Most of the time, you don't know whether it's worth it: you'd like to offer a Minimum Viable Product and iterate from there. I want to show how you can achieve exactly that without writing a single line of code.

The Solution

The solution's main requirement is PostgreSQL, a well-established Open Source SQL database.

Instead of writing our own REST API, we use PostgREST:

PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations.

-- PostgREST

Let's apply it to a simple use case. Here's a product table that I want to expose via a CRUD API:

[Image: the product table]
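The repository's init scripts (mounted later in the Docker Compose file) create the schema. Judging from the JSON responses below, a minimal sketch could look like the following; column types and the web_anon role name are inferred here, not authoritative:

Shell

cat > postgres/01-schema.sql <<'EOF'
-- Executed automatically at startup: this folder is mounted at /docker-entrypoint-initdb.d
CREATE TABLE product (
    id          INT PRIMARY KEY,
    name        TEXT NOT NULL,
    description TEXT,
    price       NUMERIC NOT NULL,
    hero        BOOLEAN NOT NULL DEFAULT FALSE
);

-- Role used by PostgREST for anonymous requests (illustrative name)
CREATE ROLE web_anon NOLOGIN;
GRANT USAGE ON SCHEMA public TO web_anon;
GRANT SELECT ON product TO web_anon;
-- ...plus INSERTs for the sample data shown below
EOF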

Note that you can find the whole source code on GitHub to follow along.

PostgREST's Getting Started guide is pretty complete and works out of the box. Yet, I didn't find any ready-made Docker image, so I created my own:

Dockerfile
 
FROM debian:bookworm-slim                                                   #1

ARG POSTGREST_VERSION=v10.1.1                                               #2
ARG POSTGREST_FILE=postgrest-$POSTGREST_VERSION-linux-static-x64.tar.xz     #2

RUN mkdir postgrest

WORKDIR postgrest

ADD https://github.com/PostgREST/postgrest/releases/download/$POSTGREST_VERSION/$POSTGREST_FILE \
    .                                                                       #3

RUN apt-get update && \
    apt-get install -y libpq-dev xz-utils && \
    tar xvf $POSTGREST_FILE && \
    rm $POSTGREST_FILE                                                      #4
  1. Start from a slim Debian base image
  2. Parameterize the build
  3. Get the archive
  4. Install dependencies and unarchive

The Docker image contains a postgrest executable in the /postgrest folder. We can "deploy" the architecture via Docker Compose:

YAML
 
version: "3"
services:
  postgrest:
    build: ./postgrest                                   #1
    volumes:
      - ./postgrest/product.conf:/etc/product.conf:ro    #2
    ports:
      - "3000:3000"
    entrypoint: ["/postgrest/postgrest"]                 #3
    command: ["/etc/product.conf"]                       #4
    depends_on:
      - postgres
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: "root"
    volumes:
      - ./postgres:/docker-entrypoint-initdb.d:ro       #5
  1. Build the above Dockerfile
  2. Share the configuration file
  3. Run the postgrest executable
  4. With the configuration file
  5. Initialize the schema, the permissions, and the data
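The article doesn't reproduce product.conf, but as a rough sketch, it needs at least the connection string, the exposed schema, and the anonymous role. The values below are illustrative (the superuser connection keeps the demo simple; check the GitHub repository for the real file):

Shell

cat > postgrest/product.conf <<'EOF'
# Connection string to the postgres service declared in Docker Compose (demo only: superuser!)
db-uri = "postgres://postgres:root@postgres:5432/postgres"
# Schema(s) to expose through the API
db-schemas = "public"
# Role impersonated for unauthenticated requests
db-anon-role = "web_anon"
EOF

With both files in place, docker compose up (or docker-compose up on older installs) starts the whole stack.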

At this point, we can query the product table:

Shell
 
curl localhost:3000/product


We immediately get the results:

JSON
 
[{"id":1,"name":"Stickers pack","description":"A pack of rad stickers to display on your laptop or wherever you feel like. Show your love for Apache APISIX","price":0.49,"hero":false}, 
 {"id":2,"name":"Lapel pin","description":"With this \"Powered by Apache APISIX\" lapel pin, support your favorite API Gateway and let everybody know about it.","price":1.49,"hero":false}, 
 {"id":3,"name":"Tee-Shirt","description":"The classic geek product! At a conference, at home, at work, this tee-shirt will be your best friend.","price":9.99,"hero":true}]


That was a quick win!
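And PostgREST gives us more than plain CRUD out of the box: per its documentation, query parameters handle column selection and row filtering. For example:

Shell

# Only the name and price columns, and only hero products
curl "localhost:3000/product?select=name,price&hero=is.true"

# Products cheaper than one unit
curl "localhost:3000/product?price=lt.1"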

Improving the Solution

Though the solution works, it leaves a lot of room for improvement. For example, while the database user cannot change the data, anybody can read it. That might not be a big issue for product-related data, but what about medical data?

The PostgREST documentation acknowledges this and explicitly advises putting a reverse proxy in front:

PostgREST is a fast way to construct a RESTful API. Its default behavior is great for scaffolding in development. When it’s time to go to production it works great too, as long as you take precautions. PostgREST is a small sharp tool that focuses on performing the API-to-database mapping. We rely on a reverse proxy like Nginx for additional safeguards.

-- Hardening PostgREST

Instead of Nginx, we will benefit from a full-fledged API gateway: enter Apache APISIX. Let's add it to our Docker Compose file:

YAML
 
version: "3"
services:
  apisix:
    image: apache/apisix:2.15.0-alpine                              #1
    volumes:
      - ./apisix/config.yml:/usr/local/apisix/conf/config.yaml:ro
    ports:
      - "9080:9080"
    restart: always
    depends_on:
      - etcd
      - postgrest
  etcd:
    image: bitnami/etcd:3.5.2                                       #2
    environment:
      ETCD_ENABLE_V2: "true"
      ALLOW_NONE_AUTHENTICATION: "yes"
      ETCD_ADVERTISE_CLIENT_URLS: "http://0.0.0.0:2397"
      ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2397"
  1. Use Apache APISIX
  2. APISIX stores its configuration in etcd

We shall first configure Apache APISIX to proxy calls to postgrest:

Shell
 
curl http://apisix:9080/apisix/admin/upstreams/1 -H 'X-API-KEY: 123xyz' -X PUT -d ' #1-2
{
  "type": "roundrobin",
  "nodes": {
    "postgrest:3000": 1                                                             #1-3
  }
}'

curl http://apisix:9080/apisix/admin/routes/1 -H 'X-API-KEY: 123xyz' -X PUT -d '    #4
{
  "uri": "/*",
  "upstream_id": 1
}'
  1. These commands should be run from inside one of the containers, hence the apisix hostname; alternatively, use localhost, but be sure to expose the admin port
  2. Create a reusable upstream
  3. Point to the PostgREST node
  4. Create a route to the created upstream

We can now query the endpoint via APISIX:

Shell
 
curl localhost:9080/product

It returns the same result as above.

DDoS Protection

We haven't added any protection yet, but we're ready to start the work. Let's first shield our API from DDoS attacks. Apache APISIX is designed around a plugin architecture, so we shall use a plugin. Plugins can be set on a specific route when it's created, or on every route at once; in the latter case, it's called a global rule. Since we want to protect every route by default, we shall use a global rule.

Shell
 
curl http://apisix:9080/apisix/admin/global_rules/1 -H 'X-API-KEY: 123xyz' -X PUT -d '
{
  "plugins": {
    "limit-count": {                 #1
      "count": 1,                    #2
      "time_window": 5,              #2
      "rejected_code": 429           #3
    }
  }
}'
  1. limit-count limits the number of calls in a time window
  2. Limit to 1 call per 5 seconds; it's for demo purposes
  3. Return 429 Too Many Requests; the default is 503

Now, if we execute too many requests, Apache APISIX protects the upstream:

Shell
 
curl localhost:9080/product
HTML
 
<html>
<head><title>429 Too Many Requests</title></head>
<body>
<center><h1>429 Too Many Requests</h1></center>
<hr><center>openresty</center>
</body>
</html>
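You can watch the limit kick in with a quick loop: the first call in each 5-second window succeeds, the following ones are rejected:

Shell

for i in 1 2 3; do
  # -s silences progress, -o discards the body, -w prints only the status code
  curl -s -o /dev/null -w "%{http_code}\n" localhost:9080/product
done
# 200
# 429
# 429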

Per-Route Authorization

PostgREST also offers an OpenAPI endpoint at the root. We thus have two routes: / for the OpenAPI spec and /product for the products. Suppose we want to prevent unauthorized people from accessing our data: regular users can access products, while admin users can access both the OpenAPI spec and the products.

APISIX offers several authentication methods. We will use the simplest one possible, key-auth. It relies on the Consumer abstraction: key-auth requires a specific request header, and the plugin does a reverse lookup on the header's value to find the consumer whose key matches.

Here's how to create a consumer:

Shell
 
curl http://apisix:9080/apisix/admin/consumers -H 'X-API-KEY: 123xyz' -X PUT -d '    #1
{
  "username": "admin",                                                               #2
  "plugins": {
    "key-auth": {
      "key": "admin"                                                                 #3
    }
  }
}'
  1. Create a new consumer
  2. Consumer's name
  3. Consumer's key value

We do the same with consumer user and key user; the call mirrors the previous one:
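Shell

curl http://apisix:9080/apisix/admin/consumers -H 'X-API-KEY: 123xyz' -X PUT -d '
{
  "username": "user",
  "plugins": {
    "key-auth": {
      "key": "user"
    }
  }
}'

Now, we can create a dedicated route and configure it so that only requests from admin pass through: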

Shell
 
curl http://apisix:9080/apisix/admin/routes -H 'X-API-KEY: 123xyz' -X POST -d ' #1
{
  "uri": "/",
  "upstream_id": 1,
  "plugins": {
    "key-auth": {},                                                             #2
    "consumer-restriction": {                                                   #2
      "whitelist": [ "admin" ]                                                  #3
    }
  }
}'
  1. Create a new route
  2. Use the key-auth and consumer-restriction plugins
  3. Only admin-authenticated requests can call the route

Let's try the following:

Shell
 
curl localhost:9080

It doesn't work, as we didn't send an API key header.

JSON
 
{"message":"Missing API key found in request"}
Shell
 
curl -H "apikey: user" localhost:9080


It doesn't work either: we are authenticated as user, but the route only authorizes admin.

JSON
 
{"message":"The consumer_name is forbidden."}
Shell
 
curl -H "apikey: admin" localhost:9080

This time, it returns the OpenAPI spec as expected.
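At this point, / is locked down to admin, but /product is still open to everybody. To fulfill the requirement that regular users can read products, we could update the catch-all route created earlier with the same plugin pair, this time whitelisting both consumers. This step is a sketch beyond the original setup:

Shell

curl http://apisix:9080/apisix/admin/routes/1 -H 'X-API-KEY: 123xyz' -X PUT -d '
{
  "uri": "/*",
  "upstream_id": 1,
  "plugins": {
    "key-auth": {},
    "consumer-restriction": {
      "whitelist": [ "admin", "user" ]
    }
  }
}'

Afterward, curl -H "apikey: user" localhost:9080/product succeeds, while anonymous calls are rejected.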

Monitoring

A much-undervalued aspect of any software system is monitoring. As soon as you deploy a component to production, you must monitor its health. Nowadays, many monitoring services are available; we will use Prometheus, as it's Open Source, battle-proven, and widespread. To display the data, we will rely on Grafana, for the same reasons. Let's add both components to the Docker Compose file:

YAML
 
version: "3"
services:
  prometheus:
    image: prom/prometheus:v2.40.1                                    #1
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml    #2
    depends_on:
      - apisix
  grafana:
    image: grafana/grafana:8.5.15                                     #3
    volumes:
      - ./grafana/provisioning:/etc/grafana/provisioning              #4
      - ./grafana/dashboards:/var/lib/grafana/dashboards              #4
      - ./grafana/config/grafana.ini:/etc/grafana/grafana.ini         #4-5
    ports:
      - "3001:3001"
    depends_on:
      - prometheus
  1. Prometheus image
  2. Prometheus configuration to scrape Apache APISIX. See the full file in the GitHub repository
  3. Grafana image
  4. Grafana configuration. Most of it comes from the configuration provided by APISIX.
  5. Change the default port from 3000 to 3001 to avoid conflict with the PostgREST service
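For reference, the scrape configuration boils down to pointing Prometheus at the metrics endpoint we expose in the next step. A sketch (job name and file layout are illustrative; see the repository for the full file):

Shell

cat > prometheus/prometheus.yml <<'EOF'
scrape_configs:
  - job_name: apisix
    metrics_path: /apisix/prometheus/metrics
    static_configs:
      - targets: ["apisix:9091"]
EOF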

Once the monitoring infrastructure is in place, we only need to instruct APISIX to provide the data in a format that Prometheus expects. We can achieve it through configuration and a new global rule:

YAML
 
plugin_attr:
  prometheus:
    export_addr:
      ip: "0.0.0.0"             #1
      port: 9091                #2
  1. Bind to any address
  2. Bind to port 9091. Prometheus metrics are available on http://apisix:9091/apisix/prometheus/metrics on the Docker network

We can create the global rule:

Shell
 
curl http://apisix:9080/apisix/admin/global_rules/2 -H 'X-API-KEY: 123xyz' -X PUT -d '
{
  "plugins": {
    "prometheus": {}
  }
}'
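Before opening Grafana, you can check that metrics are flowing by querying the export endpoint. It's only reachable on the Docker network, so run the command from a container attached to it (or expose port 9091 first):

Shell

# From within the Compose network
curl http://apisix:9091/apisix/prometheus/metrics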

Send a couple of queries and open the Grafana dashboard. It should look similar to this:

[Image: Grafana dashboard displaying Apache APISIX metrics]


Conclusion

Creating a full-fledged REST(ful) API is a huge investment. With PostgREST, you can quickly test a simple API by exposing your database as a CRUD API. However, such an architecture is not fit for production usage.

To fix it, you need to put a façade in front of PostgREST: a reverse proxy or, even better, an API gateway. Apache APISIX offers a wide range of features, from authorization to monitoring. With it, you can quickly validate your API requirements at a low cost.

The icing on the cake: when you've validated the requirements, you can keep the existing façade and replace PostgREST with your custom-developed API.

The source code is available on GitHub.

To go further:

  • PostgREST
  • Getting started with Apache APISIX
  • Apache APISIX plugins

