
Using Files on GitHub When There's No Database Behind an API


It is common for an API to just be a facade for a database. I'm working to eliminate the database behind APIs and store the data being served up via GitHub repositories.


It is common for an API to just be a facade for a database, meaning the data and content served up via the API is read from and written to a database backend. This is probably the most common way to deploy an API, but increasingly, I'm working to eliminate the database behind APIs and store the content or data being served up via GitHub repositories. 

I find it easier to store individual YAML, JSON, and other machine-readable files on GitHub and check out the repository as part of each API deployment. Each API has a different refresh rate that determines how often I commit changes or pull a fresh copy of the content or data, but the API does all of its work with a locally checked-out copy of the repository, eliminating the database backend from the components required to make the API operate.
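As a rough sketch of that pattern (the repository URL, checkout path, and refresh interval are all hypothetical, not from the original post): clone the repository at deployment, pull a fresh copy when the refresh interval has elapsed, and serve records straight from the files.

```python
# Sketch of an API serving data from a locally checked-out GitHub
# repository instead of a database. Repo URL, checkout path, and
# refresh interval are hypothetical placeholders.
import json
import subprocess
from pathlib import Path

REPO_URL = "https://github.com/example-org/example-data.git"  # hypothetical
CHECKOUT_DIR = Path("/tmp/example-data")
REFRESH_SECONDS = 300  # how often to pull a fresh copy


def needs_refresh(last_pull: float, now: float, interval: float = REFRESH_SECONDS) -> bool:
    """Return True when the local checkout is older than the refresh interval."""
    return (now - last_pull) >= interval


def sync_checkout() -> None:
    """Clone the repository on first run; otherwise pull the latest commits."""
    if (CHECKOUT_DIR / ".git").exists():
        subprocess.run(["git", "-C", str(CHECKOUT_DIR), "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", REPO_URL, str(CHECKOUT_DIR)], check=True)


def read_resource(name: str, base: Path = CHECKOUT_DIR) -> dict:
    """Serve a record straight from a JSON file in the local checkout."""
    with open(base / f"{name}.json") as fh:
        return json.load(fh)
```

The API process only ever touches the local files; Git is the only thing that talks to GitHub.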

Why am I doing this? It helps me solve the database challenges that come with deploying in containers and other more modular approaches to deploying APIs as microservices. The API provides a (hopefully) well-designed facade for the data and content stores, and lets me use HTTP verbs when reading, writing, and managing the resources behind it. It also brings the benefits of version control, along with the user and organizational engagement that GitHub offers.

I'm also using this approach for on-demand work with data. I have a lot of government and other open data stored in GitHub repositories (free if the repository is public), and when I want to work with it, I can spin up a new instance or container that checks out the latest copy of a repository and provides read and write access using a GitHub OAuth token. When I'm done, the API can be terminated, committing any changes back to the repository and reducing the need for dormant compute resources.
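A minimal sketch of that on-demand lifecycle (the repository name, token handling, and commit message are my own assumptions, not details from the post): GitHub accepts an OAuth token as the username in an HTTPS clone URL, so the instance can check out with the token, work against the local copy, then commit and push before it terminates.

```python
# Sketch of the on-demand lifecycle: check out a repository with a
# GitHub OAuth token, work locally, then commit changes back before
# the instance shuts down. Repo name and commit message are hypothetical.
import subprocess
from pathlib import Path


def authed_clone_url(repo: str, token: str) -> str:
    """Embed an OAuth token as the username in an HTTPS clone URL."""
    return f"https://{token}@github.com/{repo}.git"


def check_out(repo: str, token: str, dest: Path) -> None:
    """Clone the repository into the instance's working directory."""
    subprocess.run(["git", "clone", authed_clone_url(repo, token), str(dest)], check=True)


def commit_back(dest: Path, message: str = "Changes from on-demand API instance") -> None:
    """Commit any local changes and push them before terminating."""
    subprocess.run(["git", "-C", str(dest), "add", "-A"], check=True)
    subprocess.run(["git", "-C", str(dest), "commit", "-m", message], check=True)
    subprocess.run(["git", "-C", str(dest), "push"], check=True)
```

In practice you would keep the token out of shell history and logs (for example, via a Git credential helper), but the shape of the lifecycle is the same.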

This approach also centralizes the data publicly on GitHub, allowing anyone else to check out and integrate with the JSON or YAML data and content sources, leveraging Git as the broker. Going down this road, I have lost some of the indexing, search, and other common database features I enjoy, but I'm slowly evolving the backend API code to work with the YAML or JSON file stores more efficiently. I actually find going back to working with simple, static, machine-readable files refreshing, and using GitHub makes it even more usable. I will keep writing as I evolve this approach and provide more open examples of it in action.
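One way to win back some of those lost database features (a sketch under my own assumptions: records are flat JSON objects in a single directory, each with an identifying field) is to build a simple in-memory index over the files when the API starts:

```python
# Sketch: rebuild basic lookup and search over a directory of JSON
# records, compensating for the indexes a database would normally
# provide. Directory layout and field names are assumptions.
import json
from pathlib import Path


def build_index(data_dir: Path, key_field: str = "id") -> dict:
    """Load every .json file and index each record by a chosen field."""
    index = {}
    for path in sorted(data_dir.glob("*.json")):
        record = json.loads(path.read_text())
        index[record[key_field]] = record
    return index


def search(index: dict, field: str, term: str) -> list:
    """Naive full-scan search: case-insensitive substring match on one field."""
    term = term.lower()
    return [r for r in index.values() if term in str(r.get(field, "")).lower()]
```

A full scan is obviously no match for a real database index, but for repository-sized datasets rebuilt on each pull, it tends to be good enough.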


Topics: api, integration, api design, github

Published at DZone with permission of Kin Lane, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
