
Best Performance Practices for Hibernate 5 and Spring Boot 2 (Part 3)


Learn more about improving the performance of your Hibernate 5 and Spring Boot 2 applications.

· Java Zone ·


Part 1 is available here (recipes from 1 to 25).

Part 2 is available here (recipes from 26 to 51).

Note: Reading this part does not require reading the first two parts!

If you liked the first two parts of this article, then I am sure that you will enjoy this part as well. Mainly, we continue in the same manner and tackle some persistence layer performance issues via Spring Boot applications.

Item 52: How to Store java.time.YearMonth as an Integer Or Date via Hibernate Types

Description: Hibernate Types is a set of extra types not supported by Hibernate by default. One of these types is java.time.YearMonth. This is a Spring Boot application that uses Hibernate Types to store a YearMonth in a MySQL database as an integer or date.

Key points:

  • For Maven, add Hibernate Types as a dependency in pom.xml

  • In the entity, use @TypeDef to map typeClass to defaultForType
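As a sketch of these two steps (the Book entity and its fields are illustrative; YearMonthIntegerType is the integer-backed variant shipped by the Hibernate Types library):

```java
import java.time.YearMonth;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import org.hibernate.annotations.TypeDef;
import com.vladmihalcea.hibernate.type.basic.YearMonthIntegerType;

// Sketch: store YearMonth in an INTEGER column via Hibernate Types
@Entity
@TypeDef(typeClass = YearMonthIntegerType.class, defaultForType = YearMonth.class)
public class Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // persisted as an integer thanks to the @TypeDef above
    private YearMonth releaseDate;

    public YearMonth getReleaseDate() { return releaseDate; }
    public void setReleaseDate(YearMonth releaseDate) { this.releaseDate = releaseDate; }
}
```

For the date-based variant, the library also ships a YearMonthDateType that can be swapped into the same @TypeDef.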


Source code can be found here.

Item 53: How to Execute SQL Functions With Multiple Parameters in a JPQL Query

Description: Trying to use SQL functions (in MySQL, PostgreSQL, etc.) in JPQL queries may result in exceptions if Hibernate does not recognize them and cannot parse the JPQL query. For example, the MySQL concat_ws function is not recognized by Hibernate. This application is a Spring Boot application based on Hibernate 5.3 that registers the concat_ws function via MetadataBuilderContributor and informs Hibernate about it via the metadata_builder_contributor property. This example uses @Query and EntityManager as well, so you can see two use cases.

Key points:

  • Use Hibernate 5.3 (or, to be precise, 5.2.18) (e.g., use Spring Boot 2.1.0.RELEASE)

  • Implement MetadataBuilderContributor and register the concat_ws MySQL function

  • In application.properties, set spring.jpa.properties.hibernate.metadata_builder_contributor to point to the MetadataBuilderContributor implementation
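A sketch of such a contributor (the class name is illustrative; the API is Hibernate 5.3's bootstrap SPI):

```java
import org.hibernate.boot.MetadataBuilder;
import org.hibernate.boot.spi.MetadataBuilderContributor;
import org.hibernate.dialect.function.StandardSQLFunction;
import org.hibernate.type.StandardBasicTypes;

// Sketch: register MySQL's concat_ws so Hibernate can parse it in JPQL
public class SqlFunctionsMetadataBuilderContributor
        implements MetadataBuilderContributor {

    @Override
    public void contribute(MetadataBuilder metadataBuilder) {
        metadataBuilder.applySqlFunction("concat_ws",
                new StandardSQLFunction("concat_ws", StandardBasicTypes.STRING));
    }
}
```

Then, in application.properties, a line such as spring.jpa.properties.hibernate.metadata_builder_contributor=com.example.SqlFunctionsMetadataBuilderContributor (the package is hypothetical) tells Hibernate to apply it.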


Source code can be found here.

Item 54: How to Use Hibernate @NaturalId in Spring Boot

Description: This is a Spring Boot application that maps a natural business key using Hibernate @NaturalId.

Key points:

  • In the entity (e.g., Product), mark the properties (business keys) that should act as natural IDs with @NaturalId; commonly, there is a single such property in an entity, but multiple are supported as well, as here.

  • For non-mutable IDs, mark the columns as @NaturalId(mutable = false) and @Column(nullable = false, updatable = false, unique = true, ...)

  • For mutable IDs, mark the columns as @NaturalId(mutable = true) and @Column(nullable = false, updatable = true, unique = true, ...)

  • Override equals() and hashCode() using the natural ID(s)

  • Define a @NoRepositoryBean interface (e.g., NaturalRepository) to define two methods, named findBySimpleNaturalId() and findByNaturalId()

  • Provide an implementation of this interface (e.g., NaturalRepositoryImpl) relying on Hibernate's Session, bySimpleNaturalId(), and byNaturalId() methods

  • For the entity, write a repository class (e.g., for the Product entity, write ProductNaturalRepository) that extends NaturalRepositoryImpl and uses it to set the entity class type and the natural ID type (when an entity uses more than one natural ID, the type is not relevant anymore; simply set it to Serializable)

  • Inject this class in your services and call findBySimpleNaturalId() or findByNaturalId()
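A minimal sketch of the mapping side, assuming a Product with a single non-mutable natural ID:

```java
import java.util.Objects;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import org.hibernate.annotations.NaturalId;

// Sketch: a Product identified by its code as a natural business key
@Entity
public class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @NaturalId(mutable = false)
    @Column(nullable = false, updatable = false, unique = true)
    private String code;

    // equals() and hashCode() rely on the natural id, not the surrogate id
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Product)) return false;
        return Objects.equals(code, ((Product) o).code);
    }

    @Override
    public int hashCode() {
        return Objects.hash(code);
    }
}
```

The repository implementation then unwraps the Hibernate Session from the EntityManager and loads via session.bySimpleNaturalId(Product.class).load(code).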

The source code is available here.

Item 55: How to JDBC Batch a Big JSON File to MySQL via ForkJoinPool and HikariCP

Description: This is a Spring Boot application that reads a relatively big JSON file (200,000+ lines) and inserts its content into MySQL via batching, using the ForkJoinPool API and HikariCP.

Key points:

  • Use the MySQL json type

  • Read the JSON file into a List of a certain capacity, for example, equal to or bigger than your batch; by default, the batch is 300 lines, and the temporary list is 300 * 64 (don't treat these values as recommendations; feel free to experiment!)

  • The list is halved, and subtasks are created until the list size is smaller than the batch size (e.g., by default, smaller than 300)

  • When the list is full, save it in batches into MySQL, clear the list, and fill it again

  • For MySQL, in application.properties, you may want to attach the following to the JDBC URL:

    • rewriteBatchedStatements=true: this setting forces the batched statements to be sent in a single request;

    • cachePrepStmts=true: this enables caching and is useful if you decide to set prepStmtCacheSize, prepStmtCacheSqlLimit, etc. as well; without this setting, the cache is disabled;

    • useServerPrepStmts=true: this way, you switch to server-side prepared statements (which may lead to a significant performance boost); moreover, you avoid having the PreparedStatement emulated at the JDBC driver level;

  • We use the following JDBC URL settings:
     ...?cachePrepStmts=true&useServerPrepStmts=true&rewriteBatchedStatements=true&createDatabaseIfNotExist=true

Note: Older MySQL versions do not tolerate having rewritten batched statements and server-side prepared statements activated together. To be sure that these settings are still valid, please check the notes of the Connector/J version that you are using.

  • Set HikariCP to provide a number of database connections that ensures the database achieves minimal context switching (e.g., 2 * number of CPU cores)

  • This application uses StopWatch to measure the time needed to transfer the file into the database

  • In order to run the application, you have to unzip citylots.zip in the current location; this is the relatively big JSON file collected from the Internet

  • If you want to see details about the batch process, simply activate the DatasourceProxyBeanPostProcessor.java component by uncommenting @Component; this is needed because the application relies on DataSource-Proxy
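The halving step can be sketched with a self-contained RecursiveAction; here, "saving a batch" is replaced by a counter so the splitting logic stands on its own (the names and the 300-line threshold mirror the description above; a real application would run a JDBC batch insert in the leaf task):

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: halve the list until a chunk is smaller than the batch size,
// then "save" that chunk (stand-in for the JDBC batch insert)
class BatchJoiner {

    static final int BATCH_SIZE = 300; // the article's default

    static final AtomicInteger savedRecords = new AtomicInteger();

    static class BatchTask extends RecursiveAction {
        private final List<String> lines;

        BatchTask(List<String> lines) { this.lines = lines; }

        @Override
        protected void compute() {
            if (lines.size() < BATCH_SIZE) {
                savedRecords.addAndGet(lines.size()); // leaf: "insert" the chunk
            } else {
                int middle = lines.size() / 2;        // split and fork
                invokeAll(new BatchTask(lines.subList(0, middle)),
                          new BatchTask(lines.subList(middle, lines.size())));
            }
        }
    }

    static int insertAll(List<String> lines) {
        savedRecords.set(0);
        new ForkJoinPool().invoke(new BatchTask(lines));
        return savedRecords.get();
    }

    public static void main(String[] args) {
        List<String> lines = java.util.Collections.nCopies(1000, "{\"line\":1}");
        System.out.println(insertAll(lines)); // every line ends up in some chunk
    }
}
```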

The source code can be found here.

Item 56: "How To JDBC Batch a Big JSON File To MySQL Via ExecutorService And HikariCP"

Item 57: "How To JDBC Batch a Big JSON File To MySQL Via a Single Thread And a Single Database Connection"

Item 58: How to Configure HikariCP via application.properties

If you use the spring-boot-starter-jdbc or spring-boot-starter-data-jpa "starters," you automatically get a dependency on HikariCP.

Note: The best way to tune the connection pool parameters consists of using Flexy Pool by Vlad Mihalcea. Via Flexy Pool, you can find the optimum settings that sustain the high performance of your connection pool.

Description: This is a kickoff application that sets up HikariCP via application.properties. The jdbcUrl is set up for a MySQL database. For testing purposes, the application uses an ExecutorService to simulate concurrent users. Check out the HikariCP report revealing the connection pool status.

Key points:

  • In application.properties, rely on spring.datasource.hikari.* to configure HikariCP
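For instance, a sketch of such a configuration (the values are purely illustrative, not recommendations):

```properties
spring.datasource.hikari.connection-timeout=50000
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=900000
spring.datasource.hikari.maximum-pool-size=8
spring.datasource.hikari.minimum-idle=8
spring.datasource.hikari.pool-name=HikariPool-Kickoff
```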


The code can be found here.

Item 59: "How to Configure HikariCP Via DataSourceBuilder"

Item 60: How to Configure BoneCP via DataSourceBuilder

Note: The best way to tune the connection pool parameters consists of using Flexy Pool by Vlad Mihalcea. Via Flexy Pool, you can find the optimum settings that sustain the high performance of your connection pool.

Description: This is a kickoff application that sets up the BoneCP connection pool via DataSourceBuilder. The jdbcUrl is set up for a MySQL database. For testing purposes, the application uses an ExecutorService to simulate concurrent users.

Key points:

  • In pom.xml, add the spring-boot-configuration-processor dependency

  • In pom.xml, add the BoneCP dependency

  • In application.properties, configure BoneCP via a custom prefix, e.g., app.datasource.*

  • Write a @Bean that returns the DataSource
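A sketch of such a bean, assuming the app.datasource prefix from above (the configuration class name is illustrative):

```java
import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch: bind the app.datasource.* keys to a BoneCP-backed DataSource
@Configuration
public class BoneCPDataSourceConfiguration {

    @Bean
    @ConfigurationProperties("app.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .type(com.jolbox.bonecp.BoneCPDataSource.class)
                .build();
    }
}
```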


Source code can be found here.

Item 61: "How to Configure ViburDBCP Via DataSourceBuilder"

Item 62: "How to Configure C3P0 Via DataSourceBuilder"

Item 63: "How to Configure DBCP2 Via DataSourceBuilder"

Item 64: "How to Configure Tomcat Connection Pool Via DataSourceBuilder"

Item 65: How to Configure Two Data Sources With Two Connection Pools

Description: This is a kickoff application that uses two data sources (two MySQL databases, one named players_db and one named coaches_db) with two connection pools (each database uses its own HikariCP connection pool with different settings). Based on the above recipes, it is pretty easy to configure two connection pools from two different providers as well.

Key points:

  • In pom.xml, add the spring-boot-configuration-processor dependency

  • In application.properties, configure two HikariCP connection pools via two custom prefixes, e.g., app.datasource.ds1 and app.datasource.ds2

  • Write a @Bean that returns the first DataSource and mark it as @Primary

  • Write another @Bean that returns the second DataSource

  • Configure two EntityManagerFactory beans and point out the packages to scan for each of them

  • Put the domains and repositories for each EntityManager in the right packages
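The two DataSource beans could be sketched as follows (class and method names are illustrative; each EntityManagerFactory is then built on top of its own DataSource):

```java
import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

// Sketch: two HikariCP pools bound to two custom property prefixes
@Configuration
public class DataSourcesConfiguration {

    @Primary
    @Bean
    @ConfigurationProperties("app.datasource.ds1")
    public DataSource playersDataSource() {   // players_db pool
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties("app.datasource.ds2")
    public DataSource coachesDataSource() {   // coaches_db pool
        return DataSourceBuilder.create().build();
    }
}
```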


The source code can be found here.

Item 66: How to Provide a Fluent API via Setters For Building Entities

Description: This is a sample application that alters the entities' setter methods in order to enable a fluent API.

Key points:

  • In entities, return this instead of void in setters
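A minimal sketch (a plain entity-like class with hypothetical fields; with field-based access, Hibernate does not require setters to return void):

```java
// Sketch: setters return "this" instead of void, enabling call chaining
class Player {

    private String name;
    private int age;

    public Player setName(String name) {
        this.name = name;
        return this; // instead of void
    }

    public Player setAge(int age) {
        this.age = age;
        return this;
    }

    public String getName() { return name; }
    public int getAge() { return age; }

    public static void main(String[] args) {
        // the fluent chain in action
        Player player = new Player().setName("Rafael").setAge(33);
        System.out.println(player.getName() + ", " + player.getAge());
    }
}
```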


The source code can be found here.

Item 67: "How to Provide a Fluent API Via Additional Methods For Building Entities"

Item 68: How to Remove the Extra SELECT COUNT Query in Spring Boot Slice Paging With findAll()

Story: Spring Boot provides an offset-based, built-in paging mechanism that returns a Page or Slice. Each of these APIs represents a page of records and some metadata. The main difference is that Page contains the total number of records, while Slice can only tell if there is another page available. For Page, Spring Boot provides a findAll() method capable of taking as arguments a Pageable and/or a Specification. In order to populate a Page containing the total number of records, this method triggers a SELECT COUNT extra query next to the query used to fetch the current page. This can be a performance penalty, since the SELECT COUNT query is triggered every time we request a page. In order to avoid this extra query, Spring Boot provides a more relaxed API, the Slice API. Using Slice instead of Page removes the need for the extra SELECT COUNT query and returns the page (records) and some metadata without the total number of records. So, while Slice doesn't know the total number of records, it can still tell if there is another page available after the current one or if this is the last page. The problem is that Slice works fine for queries containing the SQL WHERE clause (including those that use the query builder mechanism built into Spring Data), but it doesn't work for findAll(). This method still returns a Page instead of a Slice; therefore, the SELECT COUNT query is triggered for Slice<T> findAll(...);.

Description: This recipe contains a suite of sample applications that provide different versions of a Slice<T> findAll(...) method. They range from a minimalist implementation that relies on a hardcoded query, such as "SELECT e FROM " + entityClass.getSimpleName() + " e", to a custom implementation that supports sorting, specifications, lock modes, and query hints, to an implementation that relies on extending SimpleJpaRepository.

Key points:

  • Write an abstract class that exposes the Slice<T> findAll(...) methods (SlicePagingRepositoryImplementation)

  • Implement the findAll() methods to return Slice<T> (or Page<T>, but without the total number of elements)

  • Return a SliceImpl (Slice<T>) or a PageImpl (Page<T>) without the total number of elements

  • Implement a new readSlice() method or override the SimpleJpaRepository#readPage() method to avoid the SELECT COUNT query

  • Pass the entity class (e.g., Player.class) to this abstract class via a class repository (PlayerRepository)
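The core of such a method might look like this sketch (the framework types come from Spring Data; entityManager and entityClass are assumed fields of the abstract class). Fetching size + 1 rows is one way to learn whether a next slice exists without any SELECT COUNT:

```java
// Sketch: build a Slice manually - no SELECT COUNT is ever triggered
public Slice<T> findAll(int page, int size) {
    List<T> content = entityManager
            .createQuery("SELECT e FROM " + entityClass.getSimpleName() + " e", entityClass)
            .setFirstResult(page * size)
            .setMaxResults(size + 1)   // probe one extra row
            .getResultList();

    boolean hasNext = content.size() == size + 1;
    if (hasNext) {
        content.remove(size);          // drop the probe row
    }

    return new SliceImpl<>(content, PageRequest.of(page, size), hasNext);
}
```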

Available Implementations:

  • This is a thin implementation based on a hard-coded SQL query: "SELECT e FROM " + entityClass.getSimpleName() + " e"

  • This is just another minimalist implementation based on CriteriaBuilder instead of hard-coded SQL

  • This is an implementation that allows us to provide a Sort, so sorting results is possible

  • This is an implementation that allows us to provide a Sort and a Spring Data Specification

  • This is an implementation that allows us to provide a Sort, a LockModeType, a QueryHints, and a Spring Data Specification

  • This is an implementation that allows us to provide a Spring Data Pageable and/or Specification by extending SimpleJpaRepository from Spring Data. Basically, this implementation is the only one that returns Page<T> instead of Slice<T>, but it doesn't trigger the extra SELECT COUNT, since it was eliminated by overriding the Page<T> readPage(...) method from SimpleJpaRepository. The main drawback is that by returning a Page<T>, you don't know if there is a next page or if the current one is the last. Nevertheless, there are workarounds to have this as well. In this implementation, you cannot set LockModeType or use query hints.

Item 69: How to Implement Offset Pagination in Spring Boot

Description: When we rely on offset paging, we have the performance penalty induced by throwing away n records before reaching the desired offset. A larger n leads to a more significant performance penalty. Another penalty is the extra SELECT needed to count the total number of records. However, for relatively small datasets, offset and keyset provide almost the same performance results. Spring Boot provides built-in support for offset pagination via the PagingAndSortingRepository, Page, and Slice APIs; therefore, it is very easy to use in an application. This application relies on PagingAndSortingRepository and Page and extracts records as entities, but if all you want to do is read this data as pages, then consider DTOs to avoid consuming memory and CPU for nothing. As a rule, extract entities only if you plan to modify them. In that case, using DTOs implies a native SQL query or another approach instead of Page<T>.

Key points:

  • Write a repository that extends PagingAndSortingRepository

  • Call or write methods that return Page<T>

Examples:

  • Call the built-in findAll(Pageable) without sorting:
     repository.findAll(PageRequest.of(page, size));

  • Call the built-in findAll(Pageable) with sorting:
     repository.findAll(PageRequest.of(page, size, new Sort(Sort.Direction.ASC, "name")));

  • Use Spring Data query creation to define new methods in your repository:

    • Page<Player> findByName(String name, Pageable pageable);

    • Page<Player> queryFirst10ByName(String name, Pageable pageable); 

The source code can be found here.

Item 70: How to Remove the Extra SELECT COUNT Query in Spring Boot Paging via Window Functions

Description: Most of the time, we want to count the total number of rows in order to calculate the total number of possible pages, and this means an extra SELECT COUNT. So, if we don't want to go with keyset pagination but still want to avoid counting the total number of records, which can be very costly, we have to tackle this performance penalty somehow. For database vendors that support window functions, there is a solution relying on COUNT(*) OVER(), as in this application, which uses this window function in a native query against MySQL 8.

Key points:

  • Create a DTO projection to cover the extra column added by the COUNT(*) OVER() window function

  • Write a native query relying on this window function
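As a sketch (table, column, and method names are illustrative; the interface is a Spring Data interface-based projection mapped onto the native result set):

```java
// Sketch: DTO projection exposing the window function's extra column
public interface PlayerDto {

    String getName();

    long getTotal(); // populated by COUNT(*) OVER()
}

// in the Spring Data repository:
@Query(value = "SELECT name, COUNT(*) OVER() AS total "
             + "FROM player ORDER BY id LIMIT ?2 OFFSET ?1",
       nativeQuery = true)
List<PlayerDto> fetchPage(int offset, int limit);
```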


Source code can be found here.

Item 71: How to Implement Keyset Pagination in Spring Boot

Note: For a list of pros and cons of offset vs. keyset, please check my book: Java Persistence Performance Illustrated Guide.

Description: When we rely on offset paging, we have the performance penalty induced by throwing away n records before reaching the desired offset. A larger n leads to a more significant performance penalty. When we have a large n, it is better to rely on keyset pagination, which maintains a "constant" response time for large datasets. In order to understand how badly offset can perform, please check this article.

Need to Know if There Are More Records?
By its nature, keyset pagination doesn't use a SELECT COUNT to fetch the total number of records. But, with a little tweak, we can easily tell if there are more records and, therefore, whether to show a Next Page button. Mainly, if you need such a thing, then consider this application. The solution boils down to this snippet of code:

public Map<List<Player>, Boolean> fetchNextSlice(long id, int limit) {
    List<Player> players = playerRepository.fetchAllPlayers(id, limit + 1);

    if (players.size() == (limit + 1)) {
        players.remove(players.size() - 1);
        return Collections.singletonMap(players, true);
    }

    return Collections.singletonMap(players, false);
}


A Previous Page button can be implemented easily based on the first record.

Key points:

  • Choose a column to act as the latest visited record (e.g., id)

  • Use this column in the WHERE clause of your SQL
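The fetchAllPlayers() query used in the snippet above could be sketched as a native query of this shape (assuming id is the chosen column):

```java
// Sketch: keyset pagination keeps the last visited id in the WHERE clause
// instead of discarding rows via OFFSET
@Query(value = "SELECT * FROM player WHERE id < ?1 ORDER BY id DESC LIMIT ?2",
       nativeQuery = true)
List<Player> fetchAllPlayers(long id, int limit);
```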

The source code can be found here.

Item 72: How to Optimize Batch Inserts of Parent-Child Relationships and Cascade Persist

Description: Let's suppose that we have a one-to-many relationship between Tournament and TennisPlayer entities. Thanks to cascading, when we save a tournament, we save its players as well. We want to create a bunch of tournaments with players and save them in the database (e.g., a MySQL database) using the batching technique. By default, this results in batching each tournament and the players per tournament. In order to batch tournaments and players, we need to order the inserts, as in this application.

Key points:

Besides all the settings specific to batching inserts in MySQL, we need to set up the following property in application.properties: spring.jpa.properties.hibernate.order_inserts=true
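For example, a sketch of the relevant application.properties fragment (the batch size and URL are illustrative):

```properties
spring.jpa.properties.hibernate.jdbc.batch_size=30
spring.jpa.properties.hibernate.order_inserts=true
spring.datasource.url=jdbc:mysql://localhost:3306/tournaments_db?rewriteBatchedStatements=true
```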


The source code can be found here.

Item 73: How to Batch Updates In MySQL

Implementations:

  • Update a single entity

  • Parent-child relationship update

Description: Batch updates in MySQL.

Key Points:

  • In application.properties, set spring.jpa.properties.hibernate.jdbc.batch_size (recommended between 5 and 30)

  • In application.properties, set the JDBC URL with rewriteBatchedStatements=true (an optimization for MySQL; statements get rewritten into a single String buffer and sent in a single request)

  • In application.properties, set the JDBC URL with cachePrepStmts=true (this enables caching and is useful if you decide to set prepStmtCacheSize, prepStmtCacheSqlLimit, etc. as well; without this setting, the cache is disabled)

  • In application.properties, set the JDBC URL with useServerPrepStmts=true (this way, you switch to server-side prepared statements, which may lead to a significant performance boost)

  • In case of using a parent-child relationship with cascade all/merge (e.g., one-to-many, many-to-many), consider setting spring.jpa.properties.hibernate.order_updates=true to optimize the batching by ordering updates

  • Before Hibernate 5, we needed a setting in application.properties to enable batching for versioned entities during update and delete operations (entities that contain @Version for implicit optimistic locking): spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true. Starting with Hibernate 5, this setting should be true by default.
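Gathered in one place, the settings above might look like this sketch (the values and database name are illustrative):

```properties
spring.jpa.properties.hibernate.jdbc.batch_size=30
spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true
spring.datasource.url=jdbc:mysql://localhost:3306/players_db?cachePrepStmts=true&useServerPrepStmts=true&rewriteBatchedStatements=true
```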


Item 74: How to Batch Deletes in MySQL via SQL "on Delete Cascade"

Description: Batch deletes in MySQL via ON DELETE CASCADE. The auto-generated database schema will contain the ON DELETE CASCADE directive.

Note: Spring's deleteAllInBatch() and deleteInBatch() don't use batching. The first one simply triggers a delete from entity_name statement, while the second one triggers a delete from entity_name where id=? or id=? or id=? ... statement. Rely on the delete() method.

Key Points:

  • In this example, we have a Tournament entity, and each tournament can have several TennisPlayer entities (one-to-many)

  • First, we remove orphanRemoval or set it to false

  • Second, we use only CascadeType.PERSIST and CascadeType.MERGE

  • Third, we set @OnDelete(action = OnDeleteAction.CASCADE) next to @OneToMany

  • Fourth, we set spring.jpa.properties.hibernate.dialect to org.hibernate.dialect.MySQL5InnoDBDialect

  • Fifth, we use the Spring delete() method to delete all Tournament entities
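The parent side of the mapping could be sketched like this (the field names are illustrative):

```java
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.OnDelete;
import org.hibernate.annotations.OnDeleteAction;

// Sketch: no orphanRemoval, no CascadeType.REMOVE - the database cascades deletes
@Entity
public class Tournament {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @OneToMany(mappedBy = "tournament",
               cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    @OnDelete(action = OnDeleteAction.CASCADE)
    private List<TennisPlayer> players;
}
```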


The source code can be found here.

Item 75: "How To Batch Deletes In MySQL (no relationships)"

Item 76: "How To Batch Deletes In MySQL Via orphanRemoval=true"

Item 77: How To Retry Transactions After OptimisticLockException

Description: This is a Spring Boot application that simulates a scenario that leads to an OptimisticLockException. When such an exception occurs, the application retries the corresponding transaction via the db-util library developed by Vlad Mihalcea.

Key points:

  • In pom.xml, add the db-util dependency

  • Configure the OptimisticConcurrencyControlAspect bean

  • Mark the method that is prone to throw OptimisticLockException with @Retry(times = 10, on = OptimisticLockException.class)
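A sketch of such a method (the service and method names are illustrative; @Retry comes from the db-util library):

```java
import javax.persistence.OptimisticLockException;
import org.springframework.stereotype.Service;
import com.vladmihalcea.concurrent.Retry;

// Sketch: the whole method call is retried when OptimisticLockException occurs
@Service
public class PlayerService {

    @Retry(times = 10, on = OptimisticLockException.class)
    public void updatePlayerRanking(long id, int ranking) {
        // a read-modify-write operation on a @Version-ed entity,
        // prone to OptimisticLockException under contention
    }
}
```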

The source code is here.

Done!

