Optimizing dbt and Google’s BigQuery

Dbt makes it possible for data engineers to develop a workflow, write data transformation rules, and deploy the entire data modeling process.

By Stefan Thorpe · Dec. 21, 2020 · Analysis

Setting up a data warehouse is the first step toward fully utilizing big data analysis, but it is only one of many steps you need to take before you can generate value from the data you gather. An important link in that chain is data modeling and transformation, where data is extracted, transformed, and loaded (ETL) or extracted, loaded, and transformed (ELT). For this article, we’ll use the term ETL to cover both.

ETL as a process is not always as straightforward as it seems. On top of that, there are real challenges to overcome in setting up a streamlined data modeling process, including dealing with relations between data points and simplifying queries so that the whole set of processes scales. Even with the many features Google offers through BigQuery, some degree of optimization is still required.

This is where the data build tool, or dbt, becomes crucial. Dbt is a data transformation framework that empowers data engineers through simple SQL commands. It makes it possible for data engineers to develop a workflow, write data transformation rules, and deploy the entire data modeling process.

Raw to Standardized (Transformed)

In simple terms, dbt does most of the hard work of transforming raw data into transformed data. When used in conjunction with BigQuery and Cherre’s standardized framework, dbt allows us to clean data quickly and succinctly in a (mostly) automated process. By combining the two powerful tools with our framework, we can create clean, workable datasets for our engineers to customize on behalf of our clients.

Cherre transforms its data from multiple sources through our defined best-practice normalization process, a sequence of ETL transformation steps. These required steps play a key role in our company’s data integration strategy: they allow us to gather data from a comprehensive variety of sources and consolidate it into our single, centralized data warehouse, and they make it possible for the many different types of data we process to work together.

At Cherre, we load our source data (as close as possible to its original state) into BigQuery and let dbt automate much of the above process for us. Minimal changes occur at the next stage, when data moves from source to raw and columns are renamed, converting every label into lower snake case per our organization-wide naming convention. Relabeling is one of the few manual steps in the process, where we edit column x to equal y if necessary. We make only small changes at this point, and we don’t start adding types until near the end of the process. This is the lightest clean possible to get the data to ‘standardized’ before we go beyond that to connect-on addressing and more intense, custom steps for clients. This light transformation produces a clean, well-structured dataset that is easier to work with every time.
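
To make that concrete, here is a minimal sketch of what such a standardization model might look like in dbt. The table and column names are purely illustrative, not Cherre’s actual schema:

    -- models/standardized/orders.sql (illustrative sketch of the raw-to-standardized rename step)
    {{ config(materialized='view') }}

    select
        OrderID      as order_id,        -- relabel to lower snake case
        CustomerName as customer_name,
        OrderTotal   as order_total
    from {{ source('raw', 'Orders') }}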

Dbt handles data consolidation and cleaning, two of the most resource-intensive tasks in complex queries, with ease. On top of that, it can also run regression tests at any point in the process.

That last feature is important for a simple reason: you can maintain data quality and integrity by performing tests at the right points. The resulting dataset will already be optimized for BigQuery and even the most complex SQL commands.

Dbt creates a table of modeled data with every pass it runs, so the subsequent queries can be executed faster and without consuming a lot of resources. Even better, the whole process can be defined with a simple .sql file and common SQL commands.

For example, you can create a new table of modeled data by stating {{ config(materialized='table') }} and then defining the queries to run in order to fill that table in a file named modeledData.sql. Subsequent SQL queries can then use modeledData as a source.
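
A minimal sketch of such a model file, assuming a raw source named raw.orders (the source and column names are illustrative):

    -- models/modeledData.sql
    {{ config(materialized='table') }}

    select
        order_id,
        customer_id,
        order_total
    from {{ source('raw', 'orders') }}
    where order_total is not null

Downstream models then reference it with {{ ref('modeledData') }} instead of scanning the raw source again.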

Other commands such as ORDER BY and LIMIT also work, and since BigQuery doesn’t need to go through the entire data warehouse to complete the query, each query can be made significantly faster. This is where the power of transformed data lies.

Plenty of tests confirm the performance boost that comes from integrating dbt. A conventional SQL query on a 20 GB dataset could take up to 30 seconds to complete, but running the same query against a processed, transformed table cuts that time to under one second.

Configuration Options for BigQuery

Setting up dbt and BigQuery is actually very easy. Once dbt is installed on your cloud cluster, you can run dbt init to create a project. You also get a customizable dbt_project.yml file if you want to fine-tune the project parameters further.
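
The generated project file is short; a minimal sketch (the project and profile names here are placeholders) looks something like this:

    # dbt_project.yml (minimal sketch)
    name: analytics
    version: '1.0.0'
    profile: analytics        # must match an entry in profiles.yml

    models:
      analytics:
        materialized: view    # project-wide default; individual models can override it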

The next step is integrating dbt tables as datasets in BigQuery, and this too is something you can do from the BigQuery console. Define a profile that uses service-account as its authentication method, configure your project name, and you are all set.
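
A hedged sketch of such a profile in profiles.yml (the project ID, dataset, and keyfile path are placeholders):

    # profiles.yml
    analytics:
      target: dev
      outputs:
        dev:
          type: bigquery
          method: service-account
          project: your-gcp-project-id
          dataset: dbt_models
          keyfile: /path/to/service-account.json
          threads: 4
          location: US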

You can start choosing datasets as your sources. Further queries are still SQL queries, but you can now use transformed data as your source. At this point, you can return to dbt and configure how BigQuery commands are handled.

For starters, you can set the priority of BigQuery jobs. Setting the priority correctly allows for better server resource allocation, especially under heavy load. You can go with batch or interactive, the latter being the on-demand option.

You can also configure the timeout period to match your server resources or specific requirements. By default, the timeout is 300 seconds; you can extend it if you want to process larger datasets, or more of them, and shorten it otherwise.

Of course, you can tell dbt to retry its commands several times when it receives server errors in return. Ideally, allow dbt to retry a command three to five times before returning an error; you can even configure the delay between retries.
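
These options all live on the same BigQuery output in profiles.yml. A sketch, using the key names from the 2020-era dbt-bigquery adapter (newer releases rename some of them, for example to job_execution_timeout_seconds and job_retries):

    # additional keys under the BigQuery output shown earlier
    priority: batch          # or interactive for on-demand execution
    timeout_seconds: 300     # the 300-second default; raise it for larger datasets
    retries: 3               # retry transient server errors three times before failing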

Let’s not forget that BigQuery also has permission management and dataset locations, both of which are parameters you can configure to further boost performance. This opens up the possibility of setting up distributed dbt instances.

Optimizing Your Queries

Dbt and BigQuery work really well together. The two tools are designed to simplify complex queries and give data engineers the opportunity to develop and deploy advanced queries without assistance. All commands are SQL commands that we already know and love.

The real challenge is improving the queries and optimizing the way dbt works with BigQuery, and that process starts with understanding how to best structure an unstructured dataset. The initial data modeling and transformation process is still crucial.

You can begin with a complete review of input data and data sources based on their sizes. Larger data sources need to be pre-processed to maintain performance. BigQuery is fast and capable, but that doesn’t mean you shouldn’t structure your data correctly.

Computation optimization is the next thing to tackle. Each query requires some amount of server resources, particularly computing power. You can configure your .sql file to be leaner when it comes to CPU use. Another way to get around this issue is through scheduling.

Dbt can run independently from BigQuery, too, so you can have the transformed dataset updated at your own pace. Depending on your application or organization's specific needs, you can easily find the right spot between performance and resource usage.

Lastly, there are some SQL best practices that still need to be maintained when using dbt in conjunction with BigQuery. As mentioned before, the available tools should not make you less diligent about how you design your queries.

Self-joins are best avoided, or handled by dbt, so they don’t hog BigQuery resources. Avoiding them altogether is better, since you don’t risk multiplying the number of output rows. Unbalanced (skewed) joins should also be avoided, but they are an easier problem to deal with.

Prefiltering rows is something you can now do before your datasets reach BigQuery. You can still be very specific about the sources you access, meaning you can use yourTable.yourModel to define a specific source for the query.

GROUP BY is another handy command for optimizing your queries. You’ll be surprised how much more flexible your dbt implementation can be when you actively use data pre-aggregation to avoid cross joins and Cartesian products.
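
As an illustration, a staging model along these lines (the table and column names are assumptions) both prefilters rows and pre-aggregates them before any downstream joins happen:

    -- models/staging/stg_daily_orders.sql
    {{ config(materialized='table') }}

    select
        customer_id,
        date(created_at) as order_date,
        count(*)         as order_count,
        sum(order_total) as daily_total
    from {{ source('raw', 'orders') }}
    where created_at >= timestamp('2020-01-01')   -- prefilter before downstream joins
    group by customer_id, date(created_at)        -- pre-aggregate to keep later joins small

Joining downstream models against this small, pre-aggregated table is far cheaper than joining against the raw event-level data.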

Beyond Subqueries

One thing to note about using dbt to optimize BigQuery jobs is that you should not limit yourself to moving subqueries into dbt. The tool is far more capable than that, particularly in its latest version. You can, for instance, pass variables in from the command line.
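
For instance, a model can read a variable with var() and fall back to a default (the variable name and default used here are assumptions):

    -- models/recent_orders.sql
    select *
    from {{ ref('modeledData') }}
    where order_date >= '{{ var("start_date", "2020-01-01") }}'

Running dbt run --vars '{"start_date": "2021-01-01"}' then overrides the default without touching the model file.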

Hooks are just as handy; they are perfect for triggering specific actions at certain points in the process. An on-run-start hook lets you check whether a required table exists and tell dbt to create it automatically if it doesn’t, while an on-run-end hook is perfect for granting permissions on schemas.
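
In dbt_project.yml, hooks are simply lists of SQL statements. A hedged sketch (the audit table and the grant are illustrative, and BigQuery’s GRANT syntax may vary by release):

    # dbt_project.yml
    on-run-start:
      - 'create table if not exists {{ target.dataset }}.audit_log (run_started_at timestamp)'
    on-run-end:
      - 'grant `roles/bigquery.dataViewer` on schema {{ target.dataset }} to "user:analyst@example.com"'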

Seeds are another powerful dbt component. They are referenced the same way you reference models, yet they are no more than a list contained in a CSV file, which makes them handy for small, static reference data.

If you need to exclude certain IP addresses or email addresses from your queries, for instance, you can create a list of excluded addresses and add the CSV file to your dbt project. You can even reverse the process and use seeds to load exported raw data.
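
A sketch of that exclusion pattern, assuming a seed file named excluded_emails.csv (under data/, or seeds/ in newer dbt versions) with a single email column, plus a raw contacts table; all names are assumptions:

    -- models/clean_contacts.sql
    select c.*
    from {{ source('raw', 'contacts') }} as c
    left join {{ ref('excluded_emails') }} as x
        on c.email = x.email
    where x.email is null    -- keep only contacts that are not on the exclusion list

Running dbt seed loads the CSV into the warehouse as a table, after which ref() treats it like any other model.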

Lastly, there is native support for Jinja, the templating language dbt is built on. That means you can use expressions, statements, and comments, generate macros, and use references on the fly, all without manually rewriting your SQL queries every time something changes.
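
For example, a small macro (the name and conversion are assumptions) can be defined once and reused across models:

    -- macros/cents_to_dollars.sql
    {% macro cents_to_dollars(column_name) %}
        round({{ column_name }} / 100.0, 2)
    {% endmacro %}

Inside any model, select {{ cents_to_dollars('order_total_cents') }} as order_total then expands to the full expression at compile time.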

There are many ways to optimize data queries and improve your data warehouse's value, but using dbt and BigQuery is certainly among the simpler methods to try. The two work really well with each other and can be optimized for maximum performance.

Database · Big Data · SQL

Published at DZone with permission of Stefan Thorpe, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
