Generating a Hyperlambda Database CRUD backend
Way too much work is manual in today's world. Not understanding which parts of your job can be automated means you end up spending time on the wrong things.
This is going to be a "different" tutorial: instead of writing code ourselves, we will use Magic to generate our code, and analyse what Magic did afterwards. If you prefer to watch me demonstrate this process, you can watch the following video.
As you can see in the above video, Magic provides us with a starting point, from which we can modify the code to have it do whatever we want. However, in order to modify the code, we need to understand it. So let us walk through the process step by step, starting with creating our database, and ending up with an understanding of the code that allows us to modify it as we see fit.
Creating your database
Open the "SQL" menu item in your dashboard, and click the "Load" button. If you're using MySQL as your main database, choose "babelfish"; if you're using SQL Server, choose "northwind-simplified". Load the database script, and click the "Execute" button.
This creates a database for you, which will be the foundation for generating our HTTP CRUD backend.
Generating your backend
After you've done the above, open the "Generator" menu item, choose your newly created database, and click "Crudify all tables". Below you can see how this process should look.
As you generate your backend you will notice that Magic says something like "xxx LOC generated". This number is the lines of code that Magic automatically generated for you, and depends upon your database and its number of tables. A small database such as babelfish will typically generate only some 300-400 lines of code, while a larger database might generate tens of thousands of lines of code for you.
Playing with our CRUD endpoints
After having done the above, Magic will have created a bunch of Hyperlambda files for you in your "/modules/xxx" folder, where "xxx" is your database name. Open the "Files" menu item in your dashboard, and take a look at this folder. These files wrap all four main CRUD operations towards your tables, in addition to a count endpoint. The structure should resemble the following.
- Create - "xxx.post.hl" - Allows you to create new records
- Read - "xxx.get.hl" - Allows you to read records
- Update - "xxx.put.hl" - Allows you to update records
- Delete - "xxx.delete.hl" - Allows you to delete records
- Count - "xxx-count.get.hl" - Allows you to count records
You will have 5 files resembling the above structure for each table in your database. We will study the files generated around one of these tables, but it doesn't matter which database you generated or which table you select; the structure will be similar regardless. However, before we start looking at the code, let's play around with it by going to the "Endpoints" menu item and filtering on one of your tables. In the screenshot below, we've chosen "babelfish/language". Click the "Read" endpoint, at which point you should see something resembling the following.
You can already execute your endpoint by clicking the "Invoke" button for your endpoint. If you do this, you should see a bunch of JSON objects returned from your server resembling the following, depending upon which table you chose.
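As an illustration, invoking the read endpoint for the babelfish "languages" table might return JSON resembling the following. The values, and the "language" column name, are assumptions for illustration; only the locale column is discussed in detail later in this tutorial.

```json
[
  {
    "locale": "en",
    "language": "English"
  },
  {
    "locale": "no",
    "language": "Norwegian"
  }
]
```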
The read endpoint supports the following features.
- Paging through [limit] and [offset]
- Ordering items through [order] and [direction]
- Specifying a boolean operator for filtering through [operator]
The above is typically what you need most of the time as you read items from your database. If you'd like to find only items matching specific criteria, you can add a filter to have your backend return only matching items. You will see a whole range of possible filters for every column in your table, such as illustrated for the locale column below.
- locale.eq - Locale being exact match of the specified string
- locale.neq - Locale not equal to the specified string
- locale.like - Locale contains the specified string supporting wildcards as "%"
- locale.mt - Locale more than the specified string
- locale.lt - Locale less than the specified string
- locale.mteq - Locale more than or equal to the specified string
- locale.lteq - Locale less than or equal to the specified string
Each column will have a range of filter options matching the above comparison operators. If some of your filter conditions don't make sense for a particular column, you can delete them later as we start editing the endpoint's code. However, try clicking the locale.eq filter button for instance, add the value "en" to it, click "Add", and invoke your endpoint again. This time only items matching your filter condition are returned, such as the following JSON illustrates.
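A sketch of such a filtered invocation follows. The URL pattern and values are illustrative assumptions; Magic typically resolves generated endpoints under a path containing your database and table name.

```
GET /magic/modules/babelfish/languages?locale.eq=en

[
  {
    "locale": "en",
    "language": "English"
  }
]
```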
You can combine as many conditions as you wish, the same way we added the above eq filter. Conditions are by default and'ed together, implying all conditions must match - but this can be changed to or by changing the value of the [operator] argument.
You can also create and update items if you select your post or put endpoints. However, these endpoints require you to provide a JSON payload instead of parametrising your endpoint using query parameters. Try to create and update some items using these two endpoints. Just remember that regardless of which table you choose, the primary key parts passed to the update endpoint are the criteria determining which item to update. By default, Magic only creates endpoints that support updating one item at a time, and the generator does not produce code allowing you to change the primary key.
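As a sketch, a create invocation takes the new record's columns as a JSON payload, while an update invocation additionally needs the primary key to identify which item to change. Assuming the babelfish "languages" table with a locale primary key and a language column (the column names and URL pattern are assumptions for illustration), the payloads might resemble the following.

```
POST /magic/modules/babelfish/languages
{ "locale": "de", "language": "German" }

PUT /magic/modules/babelfish/languages
{ "locale": "de", "language": "Deutsch" }
```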
If you go through your endpoints, you will see a lot of meta information. This was generated automatically based upon your database schema, and is also publicly exposed to the client, almost the same way OpenAPI or Swagger is able to enumerate and document your HTTP endpoints. Hence, we've already documented our HTTP endpoints, even though we haven't manually created a single line of code. Magic also creates meta information like this for your manually created endpoints.
This metadata becomes crucial as we later start looking at how Magic creates your frontend code, using similar techniques to those it used to create your backend. You can see this meta information as properties of your endpoint if you go to the "Endpoints" menu item and click any of your endpoints.
Analysing the code
Once you're done playing around with your endpoints, open up the "Files" menu item. Click the "modules" folder, then click the folder with the same name as the name of the database you generated above. Click for instance the "languages.get.hl" file, at which point you should see something resembling the following.
Notice - If you didn't generate CRUD endpoints for your babelfish database, then at least make sure that whatever file you're looking at ends with ".get.hl" such that we're looking at roughly the same thing in the rest of this tutorial.
What you are looking at now is the Hyperlambda Magic automatically generated for you. The most important part of this code is the following section.
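A simplified sketch of that section follows. The exact columns, and the parametrising invocations surrounding it, will differ depending on your table; the column names here are assumptions based on the babelfish example.

```
data.read
   table:languages
   columns
      locale
      language
```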
The above invocation to the [data.read] slot is "transpiled" by Magic into a SQL statement, retrieving your records from your database according to your filter conditions. The result of this SQL is then returned to the client as JSON in the [return-nodes] line below. The [data.read] slot expects an existing open database connection, which is achieved with the [data.connect] slot invocation. The below code shows you how to connect to a database.
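A minimal sketch of such a connection, assuming a database named "babelfish", might resemble the following. The [generic|...] notation is how Magic's generated code typically refers to the default database type combined with a database name, but treat the details here as an illustration rather than the exact generated output.

```
data.connect:[generic|babelfish]
   data.read
      table:languages
   return-nodes:x:-/*
```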
Notice how the generated Hyperlambda for your [data.read] invocation can be found inside your [data.connect] invocation. This implies that your read invocation will use this database connection implicitly, since the read invocation is "a lambda object inside of your database connection". Hence all database operations inside of a [data.connect] invocation will by default use that database connection to connect to your database and execute their SQL. Think of these slots as an SqlConnection instance and an SqlDataReader instance, where the reader uses the connection you previously opened. Then realise that the 3 spaces in front of the [data.read] invocation become kind of like "the scope" of the [data.connect] invocation, implying the code inside of [data.connect] is actually a lambda object, or an "argument" to your connect invocation.
In Hyperlambda, code is always an argument, and all arguments are code.
This is why it's called Hyperlambda: everything is a lambda object, and Hyperlambda is said to be "a functional programming language". We will go through the exact syntax of Hyperlambda in a later tutorial, but for now realise that in Hyperlambda, spaces count - kind of like in YAML or Python - 3 spaces declare a "scope", while a colon (:) declares the beginning of a node's value. A node is a structure in the form of name, value, and children, and is the foundation of Hyperlambda. Hyperlambda is simply the textual representation of a tree structure, the same way YAML, JSON, or XML is. Nodes are Hyperlambda's object implementation. See the documentation for magic.node for more details.
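To make the tree structure concrete, consider the following tiny fragment - each node has a name, an optional value after the colon, and children indented by exactly 3 spaces. The names and values here are purely illustrative.

```
person
   name:John Doe
   address
      city:Oslo
```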
If you look at the top of your file, you will see something resembling the following.
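For a read endpoint wrapping a table with a locale column, the declaration might resemble the following sketch. The paging, ordering, and [operator] arguments match the features discussed above; the exact list of filter arguments depends on your table's columns.

```
.arguments
   limit:long
   offset:long
   order:string
   direction:string
   operator:string
   locale.eq:string
   locale.like:string
   locale.mt:string
```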
The endpoint resolver will read the above [.arguments] node and use it to retrieve meta information about which arguments your endpoint can accept. If you edit it, save it, and go back to your "Endpoints" menu item, you can see how the endpoint is automatically updated, and the arguments provided by the meta information parts of the endpoint resolver automatically change. The [.arguments] node is said to "declare which arguments your endpoint can accept".
Editing the arguments your endpoint accepts is typically among one of the first things you want to do if you want to modify your endpoint. In the above arguments node for instance, the [locale.mt] argument probably doesn't make much sense, and can be deleted to simplify your endpoint.
Authorisation and authentication
Your endpoint will by default require authentication and authorisation, preventing anonymous users from accessing it. This is done with the [auth.ticket.verify] slot with something resembling the following.
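A sketch of such an invocation follows. Which roles the generator declares depends on your configuration; root and admin are assumptions used here for illustration.

```
auth.ticket.verify:root, admin
```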
The above line of code basically verifies that your JWT token is valid, and that the user invoking the endpoint belongs to one of the declared roles.
If the user has an invalid token, and/or the user doesn't belong to any of the declared roles, this slot will throw an exception, preventing the rest of the Hyperlambda code from executing. This is the core of Magic's authentication and authorisation, and it allows you to secure your web APIs easily. If you want users belonging to different roles to be able to invoke your endpoint, you can simply edit the above code by, for instance, adding another role to it, then save your file - and voila; your authorisation requirements have automagically changed. Below is an example of how to add "translator" as a role allowed to invoke the endpoint.
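Assuming the endpoint originally allowed root and admin (an assumption for illustration), adding the "translator" role might look as follows.

```
auth.ticket.verify:root, admin, translator
```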
The above slot requires a comma separated list of roles as its input. You can also completely remove the node's value, resulting in any authenticated user being able to invoke your endpoint, as long as he or she has a valid JWT token - the roles the user belongs to are then ignored entirely. Below is an example.
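With no value, the slot only verifies that the caller is authenticated at all.

```
auth.ticket.verify
```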
The rest of the file basically just provides meta information to the endpoint resolver, and correctly parametrises your invocation to [data.read] - however, this will be the subject of a later tutorial. If you're curious about how this works, you can check out for instance the [add] slot in the documentation for magic.lambda.
An invocation to for instance [data.read] is referred to by Magic as a "slot". If you view your other CRUD files, you will see that they are using slightly different slots, to wrap other CRUD functions. The basic CRUD operations in Magic are implemented with the following slots.
- [data.read] - Reads records from your database
- [data.delete] - Deletes records in your database
- [data.create] - Creates new records in your database
- [data.update] - Updates existing records in your database
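To illustrate how similar the files are in structure, the core of a create endpoint substitutes [data.read] with [data.create], passing the new record's values instead of filter conditions. The following is a hand-written sketch with assumed column names, not the exact generated output.

```
data.connect:[generic|babelfish]
   data.create
      table:languages
      values
         locale:de
         language:German
```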
Apart from using different slots, all of your generated Hyperlambda files are actually quite similar in structure. You still typically want separate files for these operations, since this allows you to easily modify, for instance, authorisation requirements and argument passing, or add additional business logic to individual files. So even though the code is not very DRY in its original state, separate endpoint files for separate operations are still useful - a feature you will learn to appreciate further down the road as you start modifying your endpoint Hyperlambda files.
If you want to see the power of these CRUD slots you can check out the documentation for the magic.data.common module, which you can find in the reference documentation for Magic.
In this tutorial we generated an HTTP REST backend wrapping our database with all CRUD operations. Afterwards we played around with our endpoints, invoking them with arguments, before finishing up by analysing the code Magic generated for us. In later tutorials we will dive deeper into the syntax of Hyperlambda, and also into the implementation details of the crudification process - but for now, realising that Magic creates a foundation for you to edit is sufficient to start playing with Magic, generating backends according to your needs.
Published at DZone with permission of Thomas Hansen. See the original article here.