Improving Performance With SQL Aggregate Functions

SQL aggregate functions helped me take the performance of my backend application to the next level and avoid the bottlenecks that were slowing it down.

By Antonello Zanini · Jan. 05, 22 · Tutorial
In this article, you will learn how SQL aggregate functions can represent an easy way to significantly improve your application's performance. Mainly, you will see how they were a game-changer in a real-world scenario based on a data-driven application developed for a startup operating in the sports industry.

Let's now delve deeper into this scenario and learn why you can't ignore SQL aggregate functions in data science.

Introducing the Scenario

The application I recently worked on aims to offer advanced data exploration features in the sports world through the web. In particular, it needs to allow exploration of both raw and aggregated data. Since the database involves terabytes of heterogeneous and unstructured data, the challenges were mostly on the backend and database side. Now, let's dive into this scenario.

Technologies, Server Specs, and Architecture

We developed the backend in Kotlin with the Spring Boot 2.5.3 framework and Hibernate 5.4.32.Final as the ORM (Object-Relational Mapping) layer. We deployed it on an 8 GB, 4 CPU VPS through a Docker container managed by Dokku. The initial heap size was set to 2 GB and limited to 7 GB, while we allocated the remaining GB of RAM to a Redis-based caching system. We built the web application with performance in mind. Specifically, it follows a multi-layered Spring Boot architecture and involves multi-threaded processing.

Database Structure

We implemented the database as a MySQL server running on an 8GB 2 CPU VPS. We hosted the backend application and the database in the same server farm, but they do not share the same VPS. Since the sports data is simple but highly heterogeneous, the database was structured to avoid duplication and encourage standardization. This structure is why we chose a relational database. As it stands, the database involves hundreds of tables, and I cannot present it entirely here due to an NDA.

Luckily, the most problematic tables share more or less the same structure. So, analyzing just one table should be enough. In particular, this is what the PositionalData table looks like:

CREATE TABLE `PositionalData` (
    `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `area1` double DEFAULT NULL,
    `area2` double DEFAULT NULL,
    `area3` double DEFAULT NULL,
    `area4` double DEFAULT NULL,
    `area5` double DEFAULT NULL,
...
    `area140` double DEFAULT NULL,
    `area141` double DEFAULT NULL,
    `area142` double DEFAULT NULL,
    `area143` double DEFAULT NULL,
    `area144` double DEFAULT NULL,
    `value` double DEFAULT NULL,
    `parameterId` int(11) NOT NULL,
    `gameId` int(11) NOT NULL,
    `createdAt` datetime DEFAULT CURRENT_TIMESTAMP,
    `createdBy` int(11) DEFAULT NULL,
    `updatedAt` datetime DEFAULT CURRENT_TIMESTAMP,
    `updatedBy` int(11) DEFAULT NULL,
    `deletedAt` datetime DEFAULT NULL,
    `deletedBy` int(11) DEFAULT NULL,
    `active` tinyint(1) DEFAULT '1'
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

As you can see, it involves more than 100 columns and more than four foreign key columns. On average, each of these tables contains at least 15 million rows.

Performance Issues

One of the critical features of the frontend application is to let users analyze the aggregated values of hundreds of different sport parameters (e.g., passes, throws, blocks) coming from all the selected games of one or more seasons. We developed a backend API that queries the table above to retrieve this data. The query was nothing more than a trivial SELECT returning from 10k to 20k rows. This data is then aggregated by a multi-threaded process, stored in the Redis cache, and finally serialized to JSON and returned to the frontend application. The first time the API receives a hit (that is, before the result is available in the Redis cache), users must wait between two and four seconds.

This delay was unacceptable.

Delving Into the Performance Problem

Let's now see the downsides of the approach just presented.

ORM Data Transformation Bottleneck

Most advanced ORMs abstract away how data is represented at the database level. In other terms, the ORM performs the query, retrieves the desired data from the database, and takes care of transforming it into its application-level representation. This data transformation process happens behind the scenes, but it undoubtedly represents an overhead. Although that process is usually negligible in terms of performance, it can quickly become a bottleneck when repeated for thousands of rows.

This slowdown is especially likely when using OO (Object-Oriented) languages, since creating a new class instance takes time and resources. One way to limit the object size and heap usage might be to select only the strictly necessary set of columns. This approach would make each object lighter, but since the object creation process itself represents the main overhead, the time spent on the transformation would not change significantly.

Looping Takes Time

Performing simple operations like sum or average on arrays of objects containing thousands of elements is not performance-free. Although this does not compare to the time spent by the ORM to transform the data, it represents an additional overhead. Fortunately, the JVM offers many thread-safe collections to perform operations concurrently. On the other hand, opening and managing threads is a complex and time-consuming task in itself.
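To make the overhead concrete, here is a minimal sketch of the application-level aggregation described above (Python is used for illustration instead of the project's Kotlin; the rows and column names are toy stand-ins for the real ORM entities): fetch raw rows, then sum and average in a loop. With 10k-20k rows per request, both the ORM mapping and this loop add up.

```python
# Toy stand-ins for the entities the ORM would materialize:
# (parameterId, area1, value)
rows = [
    (7, 1.0, 5.0),
    (7, 3.0, 7.0),
    (8, 5.0, 9.0),
]

# Accumulate per parameter: sum of area1, sum of value, row count.
aggregates = {}  # parameterId -> [sum_area1, sum_value, count]
for parameter_id, area1, value in rows:
    acc = aggregates.setdefault(parameter_id, [0.0, 0.0, 0])
    acc[0] += area1
    acc[1] += value
    acc[2] += 1

# Final shape: summed areas and averaged value per parameter.
result = {
    pid: {"area1": s_area, "value": s_val / n}
    for pid, (s_area, s_val, n) in aggregates.items()
}
print(result)  # {7: {'area1': 4.0, 'value': 6.0}, 8: {'area1': 5.0, 'value': 9.0}}
```

Every step here runs on the application server, after the ORM has already paid the cost of materializing each row as an object.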

Let's see how several SQL aggregate functions helped me solve the performance issue.

What Are SQL Aggregate Functions?

SQL aggregate functions perform a calculation over several rows and return a single value as a result. Even though each SQL dialect has its own set of aggregate functions, the most common ones are:

  • COUNT(): returns a count of the number of rows selected
  • MIN(): extracts the minimum value
  • MAX(): extracts the maximum value
  • SUM(): performs the sum operation
  • AVG(): performs the average operation

They become a particularly powerful tool when combined with the GROUP BY clause: you first group the desired data, then aggregate each group with one of these functions. The MySQL documentation lists all the aggregate functions it supports.

Replacing Application-Level Operations With Queries

While SQL aggregation functions seemed promising, I did not know if they could make a difference before seeing them in action. Specifically, the application-level operation generated a data structure containing the average value on the value column and the sum of each areaX (with X from 1 to 144) column on each parameter chosen over the selected games. You can easily represent this in the following query:

SELECT SUM(`area1`) as `area1`,  
    SUM(`area2`) as `area2`, 
    SUM(`area3`) as `area3`,
...
    SUM(`area142`) as `area142`, 
    SUM(`area143`) as `area143`, 
    SUM(`area144`) as `area144`,
    AVG(`value`) as `value`, `parameterId`
FROM `PositionalData`
WHERE `parameterId` IN (:parameterIds) AND `gameId` IN (:gameIds)
GROUP BY `parameterId`

As you can see, this query takes advantage of the SQL aggregate functions to return aggregate data at the database level. It does all this while filtering over the desired data using the IN statement on gameId and parameterId and grouping it based on the same parameterId. In other words, data is first filtered based on the selected game of the season and the desired parameters to analyze. Then, the resulting information is grouped by parameter and aggregated by the SQL aggregate functions.
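The effect of this query can be reproduced on a drastically simplified, hypothetical version of the `PositionalData` table: only two area columns instead of 144, SQLite instead of MySQL, and invented values. The point is the shape of the result: one already-aggregated row per parameter, with rows from unselected games filtered out.

```python
import sqlite3

# Simplified, hypothetical version of the article's PositionalData table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE PositionalData (
        area1 REAL, area2 REAL, value REAL,
        parameterId INTEGER, gameId INTEGER
    )
""")
conn.executemany(
    "INSERT INTO PositionalData VALUES (?, ?, ?, ?, ?)",
    [
        (1.0, 2.0, 5.0, 7, 100),   # parameter 7, game 100
        (3.0, 4.0, 7.0, 7, 101),   # parameter 7, game 101
        (5.0, 6.0, 9.0, 8, 100),   # parameter 8, game 100
        (9.9, 9.9, 9.9, 7, 999),   # excluded: game 999 not selected
    ],
)

# The database returns one row per parameter, already aggregated.
rows = conn.execute("""
    SELECT SUM(area1) AS area1, SUM(area2) AS area2,
           AVG(value) AS value, parameterId
    FROM PositionalData
    WHERE parameterId IN (7, 8) AND gameId IN (100, 101)
    GROUP BY parameterId
""").fetchall()
print(rows)  # [(4.0, 6.0, 6.0, 7), (5.0, 6.0, 9.0, 8)]
```

Instead of 10k-20k raw rows crossing the wire and the ORM, the backend now receives as many rows as there are selected parameters.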

Defining the Right Indexes

Since that query involves GROUP BY, IN, and SQL aggregate statements, it might be slow. This potential slowness is why defining the proper indexes is so essential. In detail, the most critical and performance-effective index applied was the following one:

ALTER TABLE `PositionalData` ADD INDEX `PositionalData_parameterId_gameId` (`parameterId`, `gameId`) USING BTREE;
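You can verify that such a composite index is actually picked up by inspecting the query plan. The sketch below does this with SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN, on the same simplified toy table as above; the exact plan wording is engine-specific:

```python
import sqlite3

# SQLite stand-in for the MySQL index in the article: a composite index
# on (parameterId, gameId) lets the filter avoid a full-table scan.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE PositionalData (
        area1 REAL, value REAL, parameterId INTEGER, gameId INTEGER
    )
""")
conn.execute("""
    CREATE INDEX PositionalData_parameterId_gameId
    ON PositionalData (parameterId, gameId)
""")

# Ask the query planner how it would execute the aggregate query.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT SUM(area1), AVG(value), parameterId
    FROM PositionalData
    WHERE parameterId IN (7, 8) AND gameId IN (100, 101)
    GROUP BY parameterId
""").fetchall()
for step in plan:
    print(step[-1])  # e.g. "SEARCH PositionalData USING INDEX ..."
```

On MySQL, the equivalent check is running EXPLAIN on the real query and confirming the new index appears in the `key` column.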

Should you always use aggregate functions? This approach has both positives and negatives.

Pros

  • Database-level aggregation is much faster than executing the same aggregation logic at the application level while looping on large arrays.
  • Using queries involving SQL aggregate functions with the GROUP BY statement allows you to drastically reduce the number of rows returned. In detail, this let me go from about 10k rows to one row per parameter analyzed. Consequently, the data transformation performed by the ORM becomes irrelevant in terms of time, preventing it from being a bottleneck.
  • Aggregating at the database level allows you to take advantage of the performance benefits from the database cache when identical requests run. This setup can make it less crucial to have an application-level cache, leading to a lighter architecture.

Cons

  • SQL aggregate functions run at SELECT time. When dealing with strongly typed programming languages, the ORM needs to know the type of the result in advance. Not all ORMs let you define this type easily, and some even restrict SQL aggregate functions to native queries only. This means losing the abstraction advantage introduced by the ORM, which discourages their use.
  • The query to extract the desired data involving SQL aggregate functions is always slower than a SELECT involving a simple WHERE clause. Nevertheless, the execution time should remain in the order of tenths of a second, and in any case, much less than the time required to perform the same operation at the application level.
  • The SQL aggregate operations available are usually limited to a dozen, of which only 5 or 6 are mathematical operations.

Performance Comparison

Let's compare the result in response time when calling the same API involving data aggregation with no cache and the same parameters.

  • Response time when performing aggregation at the application level: ~2-4s
  • Response time when performing aggregation at the database level: ~800ms

Final Thoughts on SQL Aggregate Functions

SQL aggregate functions are undoubtedly a great tool to take performance to the next level when dealing with data science. Using them is easy and effective, although not all ORMs fully or natively support them. Either way, knowing how to take advantage of them may become essential to improving performance, and explaining that through a real-world case study is why I wrote this article!


Published at DZone with permission of Antonello Zanini. See the original article here.

Opinions expressed by DZone contributors are their own.
