How to Build a Data Warehouse in 4 Weeks, Part 2
I've talked about the first two steps you need to take to build your own data warehouse (read: How to Build a Data Warehouse in 4 Weeks, Part 1). Choosing the architecture and the DBMS are the first things that need to be done. So far we have an idea of the data we need to replicate and the database we want to store it in. The missing part is the process: how do we store replicated data? How do we transform it? These are the questions I'll answer in this post.
There are many ways to replicate data from your transactional databases to the DW. For the sake of simplicity, let's assume that we run our replication job once a day, at a time when the business is not working, so the transactional databases are not being updated. Let's also assume that we have two transactional databases (TDB1 and TDB2) and that our DW must contain data from both of them.
[Image: two databases with different schemas, our data warehouse, and a question mark indicating that we don't yet know how to replicate the data.]
We will populate our DW using an ETL (extract, transform, and load) job. We have two choices here:
A one-step process. In this case we have only one ETL job that does all the work: it extracts data, transforms it in memory, and loads it into our DW.
A two-step process introduces a staging area. Instead of one ETL job we have two. The first copies data from our transactional databases into the staging area, doing only minimal transformations (like converting data types). The second applies the heavy transformations as it copies data from the staging area to the data warehouse.
Let's take a closer look at these two approaches.
The one-step process comprises one job doing everything. It sorts and merges data from the different input sources (TDB1 and TDB2) in memory and loads it into our DW. Though this approach is the simplest one, it has some obvious flaws:
The process is monolithic. If you introduce an error in your transformation and the process fails, you have to rerun the whole thing. Don't forget that you won't be able to do that during business hours, as your transactional databases will be under load.
Generally, it's a good idea to minimize the time you spend accessing remote servers (your database instances). An ETL job implemented this way won't allow you to do that.
You won't be able to use the capabilities of your DBMS to merge data from different input sources. Everything has to be done by your job, which can be cumbersome and error-prone.
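The one-step shape can be sketched as follows. This is a minimal illustration using sqlite3; the "orders" table, its schema, and the conversion rates are all assumptions made up for the example, not part of the original article.

```python
import sqlite3

# Hypothetical transactional databases and DW (in-memory for the sketch).
tdb1 = sqlite3.connect(":memory:")
tdb2 = sqlite3.connect(":memory:")
dw = sqlite3.connect(":memory:")

for db, rows in ((tdb1, [(1, "eur", 10.0)]), (tdb2, [(2, "usd", 20.0)])):
    db.execute("CREATE TABLE orders (id INTEGER, currency TEXT, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

dw.execute("CREATE TABLE fact_orders (id INTEGER, amount_eur REAL)")

# One step: extract from both sources, transform in application memory, load.
# If the transform fails halfway through, the entire run must be repeated.
rates = {"eur": 1.0, "usd": 0.9}  # assumed conversion rates for the example
for src in (tdb1, tdb2):
    for oid, cur, amount in src.execute("SELECT id, currency, amount FROM orders"):
        dw.execute("INSERT INTO fact_orders VALUES (?, ?)",
                   (oid, amount * rates[cur]))
```

Note that all merging and transformation logic lives in the job itself, which is exactly the flaw described above.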
The two-step process comprises two jobs:
"Replicate to staging" copies data from our transactional databases (TDB1 and TDB2) into another database: the staging area. We don't do any complex transformations at this point. The purpose of this step is to copy all the data we haven't processed yet.
"Populate data marts" takes the data we have in the staging area, transforms it, and uploads it into our DW. It also cleans the staging area after it has processed all the data, so it never processes the same data twice.
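The two jobs can be sketched like this, again with sqlite3. The "customers" table, its contents, and the name-normalization transform are hypothetical, chosen only to make the split between the two jobs concrete.

```python
import sqlite3

# Hypothetical sources, staging area, and DW (in-memory for the sketch).
tdb1, tdb2 = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
staging, dw = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")

for db, rows in ((tdb1, [(1, "alice ")]), (tdb2, [(2, "BOB")])):
    db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)", rows)

staging.execute("CREATE TABLE stg_customers (id INTEGER, name TEXT)")
dw.execute("CREATE TABLE dim_customers (id INTEGER, name TEXT)")

def replicate_to_staging():
    # Job 1: raw copy from TDB1/TDB2, no complex transformations.
    for src in (tdb1, tdb2):
        for row in src.execute("SELECT id, name FROM customers"):
            staging.execute("INSERT INTO stg_customers VALUES (?, ?)", row)

def populate_data_marts():
    # Job 2: heavy transformations; never touches the transactional databases,
    # so it is safe to rerun even during business hours.
    for cid, name in staging.execute("SELECT id, name FROM stg_customers"):
        dw.execute("INSERT INTO dim_customers VALUES (?, ?)",
                   (cid, name.strip().title()))
    # Clean the staging area so the same data is never processed twice.
    staging.execute("DELETE FROM stg_customers")

replicate_to_staging()
populate_data_marts()
```

Only `replicate_to_staging` ever opens a connection to the transactional side; everything downstream of the staging area is replayable.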
There are several benefits you will get if you choose this approach:
Only the first step touches your transactional databases. You can rerun "populate data marts" as many times as you want without affecting them, which means it can even be done during business hours. This aspect is crucial because "replicate to staging" is usually pretty straightforward and doesn't cause any problems; it's the transformation step that fails and needs rerunning.
Having all the data from your input sources in one place allows you to use the capabilities of your DBMS to join, merge, and filter data.
Though introducing an additional step (copying data to the staging area) may complicate your implementation at the beginning, the price is not so high when you consider the benefits: the process is more reliable and easier to extend, and the ability to use the DBMS to join data from several input sources will save you plenty of time.
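To make the second benefit concrete: once data from both sources sits in one staging database, a cross-source merge becomes a single SQL statement. The staged table names, columns, and rows below are assumptions for illustration only.

```python
import sqlite3

# A staging area holding tables copied from two different sources.
staging = sqlite3.connect(":memory:")
staging.executescript("""
    CREATE TABLE stg_orders (id INTEGER, customer_id INTEGER, amount REAL);  -- from TDB1
    CREATE TABLE stg_customers (customer_id INTEGER, country TEXT);          -- from TDB2
    INSERT INTO stg_orders VALUES (1, 100, 10.0), (2, 101, 20.0);
    INSERT INTO stg_customers VALUES (100, 'DE'), (101, 'US');
""")

# The DBMS does the join; no hand-written merge code in the ETL job.
joined = staging.execute("""
    SELECT o.id, c.country, o.amount
    FROM stg_orders o
    JOIN stg_customers c ON c.customer_id = o.customer_id
    ORDER BY o.id
""").fetchall()
```

In the one-step design, this same join would have to be implemented as merge logic inside the job itself.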
Finally, I'd like to share some thoughts on implementation details.
Additional Implementation Notes: Using BI Platforms
BI platforms such as Pentaho give you all the capabilities you need to write and execute ETL jobs. If you don't have much time and are not afraid of drag-and-drop programming, you can write all the needed ETL jobs in a few days.
Though I'm a big proponent of ready-to-use solutions (such as BI platforms), writing everything from scratch is a better approach in many ways. First, you won't have to deploy and support one more instance of Tomcat. Secondly, BI platforms are very far from being agile; as a result, the only way to test your ETL job is to do it manually, which makes any kind of refactoring extremely painful. Also, it's very hard to keep your ETL jobs DRY, which increases the price of making changes in the future.
Additional Implementation Notes: Copying Data to Staging
Most of the tables you need to copy will belong to one of the following groups:
- Small reference tables containing up to a few thousand rows. You don't have to bother with incremental logic: just copy the whole table every night.
- Tables containing immutable data. You can use the primary id to copy only new rows.
- Tables containing mutable data with an "updated_at" kind of column. Use this column to find the data that was updated.
In some situations it's not that easy:
- For instance, you may need to join a few tables to find updated rows, or use several columns (such as primary_id, inserted_at, and updated_at) for one table.
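For the common "updated_at" case, the incremental copy boils down to a watermark comparison. The sketch below assumes a hypothetical "products" table and made-up timestamps; in practice the watermark (the time of the last successful run) would be persisted, for example in the staging area itself.

```python
import sqlite3

# Hypothetical source table with an updated_at column.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE products (id INTEGER, name TEXT, updated_at TEXT);
    INSERT INTO products VALUES
        (1, 'unchanged', '2023-01-01 00:00:00'),
        (2, 'updated',   '2023-01-02 10:00:00');
""")

# Watermark: when the previous replication run finished (assumed value).
last_run = "2023-01-01 12:00:00"

# Only rows touched since the last run need to be copied to staging.
changed = src.execute(
    "SELECT id, name FROM products WHERE updated_at > ?", (last_run,)
).fetchall()
```

The same pattern works with a numeric primary id as the watermark for immutable tables: `WHERE id > last_copied_id`.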
In the end, I'd like to say one more time that it's not as complicated as people say. Building a simple DW is a task that one person can accomplish in a month. Of course, there is a lot of theory behind it (like how to handle different types of dimensions, etc.), but to bring value to your business you don't need to know all of it; understanding the basics will be enough.