MariaDB Bulk Load API
In this article, learn more about the MariaDB bulk load API.
There are several ways to load data into MariaDB Platform, and some are better than others. The two basic ways are to use LOAD DATA INFILE/LOAD DATA LOCAL INFILE, which is very fast (in particular the non-LOCAL one), and the plain INSERT statement. When using an INSERT statement, you may pass an array of values to MariaDB Server, like this:
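For example, a multi-row INSERT along these lines (the customers table and the data used throughout this post are illustrative):

```sql
-- Multiple rows passed as an array of values in one INSERT statement.
INSERT INTO customers VALUES
  (1, 'Joe Bloggs', '2020-02-02 02:02:02', 0.0),
  (2, 'Homer Simpson', '2020-03-03 03:03:03', 0.0);
```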
Nothing special about that, but what we will cover in this blog is another way of doing INSERTs using arrays: one that uses the MariaDB API to pass a program array to MariaDB, and which is actually a very fast way of loading data into MariaDB.
To begin with, let's look at the two APIs that we use to access a MariaDB Server from a C program. The reason the C API is relevant is that it is a thin wrapper around the MariaDB protocol, so explaining the C API also covers what is possible with the protocol itself. The other connectors, such as JDBC, ODBC, and Node.js, have various levels of functionality, and in some cases other ways of interacting with MariaDB, but that then just happens inside the connector itself.
There are two APIs. One is text-based, and this is the original MariaDB API; in this API, all data is sent and received as text. Let's look at a sample table before we go into the code.
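A table along these lines will do; the exact column names and types are an assumption, chosen to match the four parameters bound later in this post:

```sql
-- Sample table used by the examples in this post (illustrative schema).
CREATE TABLE customers(
  cust_id INTEGER NOT NULL PRIMARY KEY,
  cust_name VARCHAR(64) NOT NULL,
  cust_regdate DATETIME NOT NULL,
  cust_balance DOUBLE NOT NULL
);
```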
Now, let's look at a simple program that inserts some rows into that table, using the original text-based API:
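A minimal sketch of such a program follows; the connection parameters and database name are placeholders, so adjust them for your setup:

```c
/* Insert two rows using the text-based API.
   Build with: gcc insert.c $(mariadb_config --include --libs)
   Connection parameters and table are assumptions. */
#include <stdio.h>
#include <mysql.h>

int main(void)
{
  MYSQL *conn = mysql_init(NULL);

  if (mysql_real_connect(conn, "localhost", "user", "password", "blog",
                         3306, NULL, 0) == NULL) {
    fprintf(stderr, "Connect error: %s\n", mysql_error(conn));
    return 1;
  }

  /* All values, be they integers, strings, or dates,
     are passed as text inside the SQL statement. */
  if (mysql_query(conn, "INSERT INTO customers VALUES"
        "(1, 'Joe Bloggs', '2020-02-02 02:02:02', 0.0)") != 0 ||
      mysql_query(conn, "INSERT INTO customers VALUES"
        "(2, 'Homer Simpson', '2020-03-03 03:03:03', 0.0)") != 0) {
    fprintf(stderr, "Insert error: %s\n", mysql_error(conn));
    return 1;
  }

  mysql_close(conn);
  return 0;
}
```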
This is simple enough: we initialize a connection handle, connect, and then insert two rows using two INSERT statements. All columns we pass, be they strings, integers, or dates, are represented as strings. We can make this INSERT more efficient by passing all rows in one single SQL statement, like this:
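Assuming the same connection handle as in the program just described (the data is again illustrative), the two statements collapse into one round trip to the server:

```c
/* One multi-row INSERT instead of two single-row ones. */
if (mysql_query(conn, "INSERT INTO customers VALUES"
      "(1, 'Joe Bloggs', '2020-02-02 02:02:02', 0.0),"
      "(2, 'Homer Simpson', '2020-03-03 03:03:03', 0.0)") != 0) {
  fprintf(stderr, "Insert error: %s\n", mysql_error(conn));
  return 1;
}
```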
The Prepared Statement MariaDB API
The prepared statement API is different from the text-based API, but it is contained within the same library, uses the same connection functions, and many other functions are used in the same way. It differs in a couple of ways, though. First, we don't pass data as part of the SQL statement; rather, the SQL statement contains placeholders where we want the data to go, and then we associate these placeholders with program variables, where we place the actual data, in a process called binding.
The same SQL statement only needs to be prepared once, after which we can execute it several times, changing only the data in our program variables in between. For this to work, the bind process has to know not only a reference to the variable it is binding to, but also a few other things: the data type being referenced, its length, and what is called an indicator variable. An indicator variable says something more about the referenced variable, such as whether it is NULL, and whether the referenced string is NULL-terminated or its length is to be taken as the actual length of the string.
As an example, let's see what the first program above would look like when using prepared statements:
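A sketch of that program follows. The connection parameters and the customers columns are assumptions; the structure (prepare, bind, execute) is what matters:

```c
/* Insert one row using the prepared statement API.
   Connection parameters and table are assumptions. */
#include <stdio.h>
#include <string.h>
#include <mysql.h>

int main(void)
{
  MYSQL *conn;
  MYSQL_STMT *stmt;
  MYSQL_BIND bind[4];
  char indicator[4];
  const char *sql = "INSERT INTO customers VALUES(?, ?, ?, ?)";

  /* The program variables the placeholders will be bound to. */
  int cust_id = 1;
  char cust_name[] = "Joe Bloggs";
  MYSQL_TIME cust_regdate =
    { 2020, 2, 2, 2, 2, 2, 0, 0, MYSQL_TIMESTAMP_DATETIME };
  double cust_balance = 0.0;

  conn = mysql_init(NULL);
  if (mysql_real_connect(conn, "localhost", "user", "password", "blog",
                         3306, NULL, 0) == NULL) {
    fprintf(stderr, "Connect error: %s\n", mysql_error(conn));
    return 1;
  }

  stmt = mysql_stmt_init(conn);
  if (mysql_stmt_prepare(stmt, sql, (unsigned long) strlen(sql)) != 0) {
    /* Note: mysql_stmt_error, not mysql_error. */
    fprintf(stderr, "Prepare error: %s\n", mysql_stmt_error(stmt));
    return 1;
  }

  /* Bind the 4 placeholders to the program variables. */
  memset(bind, 0, sizeof(bind));
  memset(indicator, STMT_INDICATOR_NONE, sizeof(indicator));
  bind[0].buffer_type = MYSQL_TYPE_LONG;
  bind[0].buffer = &cust_id;
  bind[1].buffer_type = MYSQL_TYPE_STRING;
  bind[1].buffer = cust_name;
  indicator[1] = STMT_INDICATOR_NTS;    /* null-terminated string */
  bind[2].buffer_type = MYSQL_TYPE_DATETIME;
  bind[2].buffer = &cust_regdate;
  bind[3].buffer_type = MYSQL_TYPE_DOUBLE;
  bind[3].buffer = &cust_balance;
  for (int i = 0; i < 4; i++)
    bind[i].u.indicator = &indicator[i];

  if (mysql_stmt_bind_param(stmt, bind) != 0 ||
      mysql_stmt_execute(stmt) != 0) {
    fprintf(stderr, "Execute error: %s\n", mysql_stmt_error(stmt));
    return 1;
  }

  mysql_stmt_close(stmt);
  mysql_close(conn);
  return 0;
}
```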
So, what do you think, better or worse? Well, one advantage is that we only need to parse the statement once, so in the end it could be a bit faster. Maybe. On the other hand, if you are writing some piece of generic code that handles SQL statements that aren't specifically known in advance, or where maybe only parts of them are known, then this is kind of neat.
To support this, you can find out how many parameters you are dealing with by calling the API after a statement has been prepared. The prepared statement API also handles statements that return data, such as a SELECT, in a similar way. All in all, prepared statements require a bit more code in the interface but are a fair bit more functional.
The Example Program Explained
I will hold the full description of how prepared statements and the corresponding API work until another blog post, but the program above still needs some explanation.
After connecting to MariaDB using the usual mysql_real_connect function, we create a handle to work with prepared statements, and then we prepare the SQL statement we are to use later with the mysql_stmt_prepare function. Notice the error handling at this point, which is repeated everywhere a prepared statement API function is called: instead of calling mysql_error, you call mysql_stmt_error, which takes the statement handle, not the connection handle, as an argument. The SQL statement that we prepare has a ? wherever we are to bind a parameter.
Following this, it is time to do the bind, which takes up most of the code. The bind array, of type MYSQL_BIND, has 4 elements, as there are 4 parameters to bind. This could of course be dynamic and allocated on the heap, using malloc or similar, but in this case we are working with a predefined SQL statement and we know that there are 4 parameters.
We start by zeroing all members of all the bind parameters. Then we fill in only the MYSQL_BIND members that are strictly necessary; note that we use different types for the different columns, to match the table columns. In particular, the DATETIME column is mapped to a MYSQL_TIME struct, but this is not strictly necessary, as MariaDB will supply any necessary conversion; for example, we could pass a valid datetime string for the cust_regdate column. Then we do the actual bind by calling the mysql_stmt_bind_param function.
Last, we fill in the values that the parameters are bound to, and we also set the indicator variables; all of these are normal except the one for the string, which is set to STMT_INDICATOR_NTS to indicate that this is a null-terminated string. Following this, we call mysql_stmt_execute to execute the prepared statement.
Bulk Loading - Prepared Statements With Input Arrays
If you look at the prepared statement code above, you will realize that if you were to insert two or more rows in one go, you would prepare and execute something like this:
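With 4 columns per row and two rows, that would be a statement with 8 placeholders:

```sql
INSERT INTO customers VALUES(?, ?, ?, ?), (?, ?, ?, ?);
```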
To make this work, you would then bind 8 program variables, and that doesn't really seem terribly flexible, right? You would need to prepare a different statement depending on how many rows you are inserting, which is just as clumsy as having to do the same thing with the text-based interface. With MariaDB and the MariaDB Connector, there is actually a better way: array binding.
The way this works is that every bound program variable is an array of values. You set these up appropriately, tell MariaDB how big the arrays are, and then an arbitrary number of rows can be inserted with one statement. This is probably best explained with an example, again performing the same thing as the previous examples, but in yet another way:
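The following sketch shows the idea; as before, the connection parameters and customers columns are assumptions:

```c
/* Insert two rows with a single execute, using array binding.
   Connection parameters and table are assumptions. */
#include <stdio.h>
#include <string.h>
#include <mysql.h>

int main(void)
{
  MYSQL *conn;
  MYSQL_STMT *stmt;
  MYSQL_BIND bind[4];
  unsigned int array_size = 2;
  const char *sql = "INSERT INTO customers VALUES(?, ?, ?, ?)";

  /* One array per column, holding two rows' worth of data.
     char * strings and MYSQL_TIME values are bound as arrays of pointers. */
  int cust_id[2] = { 1, 2 };
  char name0[] = "Joe Bloggs", name1[] = "Homer Simpson";
  char *cust_name[2] = { name0, name1 };
  MYSQL_TIME date0 = { 2020, 2, 2, 2, 2, 2, 0, 0, MYSQL_TIMESTAMP_DATETIME };
  MYSQL_TIME date1 = { 2020, 3, 3, 3, 3, 3, 0, 0, MYSQL_TIMESTAMP_DATETIME };
  MYSQL_TIME *cust_regdate[2] = { &date0, &date1 };
  double cust_balance[2] = { 0.0, 0.0 };
  char name_ind[2] = { STMT_INDICATOR_NTS, STMT_INDICATOR_NTS };

  conn = mysql_init(NULL);
  if (mysql_real_connect(conn, "localhost", "user", "password", "blog",
                         3306, NULL, 0) == NULL) {
    fprintf(stderr, "Connect error: %s\n", mysql_error(conn));
    return 1;
  }

  stmt = mysql_stmt_init(conn);
  if (mysql_stmt_prepare(stmt, sql, (unsigned long) strlen(sql)) != 0) {
    fprintf(stderr, "Prepare error: %s\n", mysql_stmt_error(stmt));
    return 1;
  }

  memset(bind, 0, sizeof(bind));
  bind[0].buffer_type = MYSQL_TYPE_LONG;
  bind[0].buffer = cust_id;
  bind[1].buffer_type = MYSQL_TYPE_STRING;
  bind[1].buffer = cust_name;
  bind[1].u.indicator = name_ind;       /* null-terminated strings */
  bind[2].buffer_type = MYSQL_TYPE_DATETIME;
  bind[2].buffer = cust_regdate;
  bind[3].buffer_type = MYSQL_TYPE_DOUBLE;
  bind[3].buffer = cust_balance;

  /* Tell MariaDB how many rows each bound array holds. */
  mysql_stmt_attr_set(stmt, STMT_ATTR_ARRAY_SIZE, &array_size);

  if (mysql_stmt_bind_param(stmt, bind) != 0 ||
      mysql_stmt_execute(stmt) != 0) {
    fprintf(stderr, "Execute error: %s\n", mysql_stmt_error(stmt));
    return 1;
  }

  mysql_stmt_close(stmt);
  mysql_close(conn);
  return 0;
}
```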
There are a couple of key points to note here. First, when we bind to an array, any data type that is a char * string or a MYSQL_TIME has to be an array of pointers, and you see this in the code above. This makes the code look somewhat overcomplicated, but in the end it is an advantage, as the bound data can be anywhere (for example, each row could be a member of a class or struct somewhere).
Secondly, to tell MariaDB that we are passing an array, we need to call mysql_stmt_attr_set and set the STMT_ATTR_ARRAY_SIZE attribute to the number of rows in the array.
The Example Program Explained
The example above is not much different from the first prepared statement example, with a few exceptions. First, the bind process now points to our arrays of values; we only have 2 values in each array, but this should still illustrate the point. For the string and cust_regdate columns, we are also binding to arrays of pointers to the actual values. Before calling the single mysql_stmt_execute, we also need to tell MariaDB how many rows to insert.
The ability to load data into MariaDB as program data arrays has several advantages: it is programmatically easier to deal with than one long multi-row INSERT string, in particular when the latter contains data for many rows. Aligning program data contained in classes or similar is also easier, allowing for better code integration.
Finally, performance is a bit better, in particular when there are many rows of data to INSERT.
Published at DZone with permission of Anders Karlsson, DZone MVB. See the original article here.