
Best Performance Practices for Hibernate 5 and Spring Boot 2 (Part 3)


Learn more about improving your performance in Hibernate 5 and Spring Boot 2 applications.


Part 1 is available here (recipes from 1 to 25).

Part 2 is available here (recipes from 26 to 51).

Note: Reading this part does not require reading the first two as well!

If you liked the first two parts of this article, then I am sure that you will enjoy this part as well. We continue in the same manner and tackle some persistence layer performance issues via Spring Boot applications. Moreover, for a detailed explanation of 140+ performance items, check out my book, Spring Boot Persistence Best Practices. This book helps every Spring Boot developer squeeze the performance of the persistence layer.

Item 52: How to Store java.time.YearMonth as an Integer or Date via Hibernate Types

Description: Hibernate Types is a set of extra types not supported by Hibernate by default. One of these types is java.time.YearMonth. This is a Spring Boot application that uses Hibernate Types to store a YearMonth in a MySQL database as an integer or as a date.

Key points:

  • For Maven, add Hibernate Types as a dependency in pom.xml

  • In the entity, use @TypeDef to map typeClass to defaultForType
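The two key points above can be sketched as follows. This is a minimal sketch assuming the hibernate-types dependency is on the classpath; the Book entity and its fields are illustrative:

```java
import java.time.YearMonth;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import org.hibernate.annotations.TypeDef;
import com.vladmihalcea.hibernate.type.basic.YearMonthIntegerType;

// Maps every YearMonth field of this entity to an INTEGER column;
// swap in YearMonthDateType to store it as a DATE instead
@Entity
@TypeDef(typeClass = YearMonthIntegerType.class, defaultForType = YearMonth.class)
public class Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;

    // persisted via the @TypeDef declared above
    private YearMonth publishedOn;

    // getters and setters omitted
}
```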

Output sample:

(Output screenshot omitted.)

Source code can be found here.

Item 53: How to Execute SQL Functions With Multiple Parameters in a JPQL Query

For a detailed explanation of this item and 140+ items, check out my book Spring Boot Persistence Best Practices. This book helps every Spring Boot developer squeeze the performance of the persistence layer.

Note: In JPA 2.1, SQL functions can be used in the WHERE part (not in the SELECT part) of the query via function(), as here.

Description: Trying to use SQL functions (in MySQL, PostgreSQL, etc.) in JPQL queries may result in exceptions if Hibernate does not recognize them and cannot parse the JPQL query. For example, the MySQL concat_ws function is not recognized by Hibernate. This application is a Spring Boot application based on Hibernate 5.3 that registers the concat_ws function via MetadataBuilderContributor and informs Hibernate about it via the metadata_builder_contributor property. This example uses @Query and EntityManager as well, so you can see two use cases.

Key points:

  • Use Hibernate 5.3 (or, to be precise, 5.2.18) (e.g., use Spring Boot 2.1.0.RELEASE)

  • Implement MetadataBuilderContributor and register the concat_ws MySQL function

  • In application.properties, set spring.jpa.properties.hibernate.metadata_builder_contributor to point to the MetadataBuilderContributor implementation
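A sketch of such a MetadataBuilderContributor implementation (the class name is illustrative):

```java
import org.hibernate.boot.MetadataBuilder;
import org.hibernate.boot.spi.MetadataBuilderContributor;
import org.hibernate.dialect.function.StandardSQLFunction;
import org.hibernate.type.StandardBasicTypes;

// Teaches Hibernate about the MySQL concat_ws function so it can
// parse JPQL queries that reference it
public class SqlFunctionsMetadataBuilderContributor
        implements MetadataBuilderContributor {

    @Override
    public void contribute(MetadataBuilder metadataBuilder) {
        metadataBuilder.applySqlFunction(
            "concat_ws",
            new StandardSQLFunction("concat_ws", StandardBasicTypes.STRING)
        );
    }
}
```

Then, in application.properties, point spring.jpa.properties.hibernate.metadata_builder_contributor at the fully qualified name of this class (the package is whatever you chose for it).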

Output example:

(Output screenshot omitted.)

Source code can be found here.

Item 54: How to Use Hibernate @NaturalId in Spring Boot

Description: This is a Spring Boot application that maps a natural business key using Hibernate @NaturalId. 

Key points:

  • In the entity (e.g., Book), mark the fields (business keys) that should act as natural IDs with @NaturalId; commonly, there is a single such property, but multiple are supported as well, as here

  • For non-mutable IDs, mark the columns as @NaturalId(mutable = false) and @Column(nullable = false, updatable = false, unique = true, ...)

  • For mutable IDs, mark the columns as @NaturalId(mutable = true) and @Column(nullable = false, updatable = true, unique = true, ...)

  • Override equals() and hashCode() using the natural ID(s)

  • Define a @NoRepositoryBean interface (e.g., NaturalRepository) that defines two methods, named findBySimpleNaturalId() and findByNaturalId()

  • Provide an implementation of this interface (e.g., NaturalRepositoryImpl) relying on the Hibernate Session bySimpleNaturalId() and byNaturalId() methods

  • For the entity, write a repository class (e.g., for the Book entity, write BookNaturalRepository) that extends NaturalRepositoryImpl and uses it to set the entity class type and the natural ID type (when an entity uses more than one natural ID, the type is not relevant anymore; simply set it to Serializable)

  • Inject this class in your services and call findBySimpleNaturalId() or findByNaturalId()
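For illustration, here is how the natural-ID mapping might look on a Book entity. This is a sketch; the isbn field is an assumed business key:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import org.hibernate.annotations.NaturalId;

@Entity
public class Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;

    // non-mutable natural ID: a book's ISBN never changes
    @NaturalId(mutable = false)
    @Column(nullable = false, updatable = false, unique = true)
    private String isbn;

    // getters/setters and equals()/hashCode() based on isbn omitted
}
```

Behind the scenes, the repository implementation unwraps the Hibernate Session from the EntityManager and loads via, e.g., session.bySimpleNaturalId(Book.class).load(isbn).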

The source code is available here.

Item 55: How to JDBC Batch a Big JSON File to MySQL via ForkJoinPool and HikariCP

Description: This is a Spring Boot application that reads a relatively big JSON file (200,000+ lines) and inserts its content into MySQL via batching using the ForkJoinPool API and HikariCP.

Key points:

  • Use the MySQL json type

  • Read the file content into a List  

  • The list is halved, and subtasks are created until the list size is smaller than the batch size (e.g., by default, smaller than 30)

  • For MySQL, in application.properties, you may want to attach the following to the JDBC URL:

    • rewriteBatchedStatements=true: this setting forces the batched statements to be sent in a single request;

    • cachePrepStmts=true: enables caching and is useful if you decide to set prepStmtCacheSize, prepStmtCacheSqlLimit, etc. as well; without this setting, the cache is disabled;

    • useServerPrepStmts=true: this way, you switch to server-side prepared statements (which may lead to a significant performance boost); moreover, you avoid the PreparedStatement being emulated at the JDBC driver level;

  • We use the following JDBC URL settings:
     ...?cachePrepStmts=true&useServerPrepStmts=true&rewriteBatchedStatements=true&createDatabaseIfNotExist=true

Note: Older MySQL versions do not tolerate having rewritten and server-side prepared statements activated together. To be sure these settings are still valid, please check the notes of the Connector/J version that you are using.

  • Set HikariCP to provide a number of database connections that ensures the database achieves minimal context switching (e.g., 2 * number of CPU cores)

  • This application uses StopWatch to measure the time needed to transfer the file into the database

  • In order to run the application, you have to unzip citylots.zip in the current location; this is a relatively big JSON file collected from the Internet

  • If you want to see details about the batch process, simply activate the DatasourceProxyBeanPostProcessor.java component by uncommenting @Component; this is needed because this application relies on datasource-proxy
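The halving strategy from the key points above can be sketched with a plain RecursiveAction. This is a minimal, self-contained illustration: instead of running a real JDBC batch insert, each leaf task just counts one batch; class and field names are illustrative:

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Splits the list in half until a sublist is no larger than the batch
// size, then "inserts" it (a real task would run a JDBC batch insert here)
public class BatchTask extends RecursiveAction {

    private static final int BATCH_SIZE = 30;
    public static final AtomicInteger BATCHES = new AtomicInteger();

    private final List<String> lines;

    public BatchTask(List<String> lines) {
        this.lines = lines;
    }

    @Override
    protected void compute() {
        if (lines.size() > BATCH_SIZE) {
            int middle = lines.size() / 2;
            // fork both halves and wait for them to finish
            invokeAll(new BatchTask(lines.subList(0, middle)),
                      new BatchTask(lines.subList(middle, lines.size())));
        } else {
            // leaf: small enough to be one JDBC batch
            BATCHES.incrementAndGet();
        }
    }

    public static void main(String[] args) {
        List<String> lines = IntStream.range(0, 1000)
                .mapToObj(i -> "json-line-" + i)
                .collect(Collectors.toList());
        new ForkJoinPool().invoke(new BatchTask(lines));
        System.out.println("batches: " + BATCHES.get());
    }
}
```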

The source code can be found here.

Item 56: "Batch Inserts In Spring Boot Style Via CompletableFuture"

Item 57: "Batch Inserts In Spring Boot Style And Batch Per Transaction"

Item 58: How to Configure HikariCP via application.properties

If you use the spring-boot-starter-jdbc or spring-boot-starter-data-jpa "starters," you automatically get a dependency on HikariCP.

Note: The best way to tune the connection pool parameters consists of using Flexy Pool by Vlad Mihalcea. Via Flexy Pool, you can find the optimum settings that sustain high performance of your connection pool.

Description: This is a kickoff application that sets up HikariCP via application.properties. The jdbcUrl is set up for a MySQL database. For testing purposes, the application uses an ExecutorService for simulating concurrent users. Check the HikariCP report revealing the connection pool status.

Key points:

  • In application.properties, rely on spring.datasource.hikari.* to configure HikariCP
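For example, a typical HikariCP configuration in application.properties might look like this (the values are illustrative and should be tuned for your workload):

```properties
spring.datasource.hikari.connection-timeout=50000
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=900000
spring.datasource.hikari.maximum-pool-size=8
spring.datasource.hikari.minimum-idle=8
spring.datasource.hikari.pool-name=HikariPool
```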

(Output screenshot omitted.)

The code can be found here.

Item 59: "How to Configure HikariCP Via DataSourceBuilder"

Item 60: How to Configure BoneCP via DataSourceBuilder

Note: The best way to tune the connection pool parameters consists of using Flexy Pool by Vlad Mihalcea. Via Flexy Pool, you can find the optimum settings that sustain high performance of your connection pool.

Description: This is a kickoff application that sets up the BoneCP connection pool via DataSourceBuilder. The jdbcUrl is set up for a MySQL database. For testing purposes, the application uses an ExecutorService for simulating concurrent users.

Key points:

  • In pom.xml, add the spring-boot-configuration-processor dependency

  • In pom.xml, add the BoneCP dependency

  • In application.properties, configure BoneCP via a custom prefix, e.g., app.datasource.*

  • Write a @Bean that returns the DataSource
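A sketch of such a @Bean, assuming the app.datasource.* prefix from above (the configuration class name is illustrative):

```java
import javax.sql.DataSource;
import com.jolbox.bonecp.BoneCPDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    // binds the app.datasource.* properties (url, username, ...)
    // onto a BoneCP-backed DataSource
    @Bean
    @ConfigurationProperties(prefix = "app.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .type(BoneCPDataSource.class)
                .build();
    }
}
```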

Output sample of BoneCP configuration:

(Output screenshot omitted.)

Source code can be found here.

Item 61: "How to Configure ViburDBCP Via DataSourceBuilder"

Item 62: "How to Configure C3P0 Via DataSourceBuilder"

Item 63: "How to Configure DBCP2 Via DataSourceBuilder"

Item 64: "How to Configure Tomcat Connection Pool Via DataSourceBuilder"

Item 65: How to Configure Two Data Sources With Two Connection Pools

Description: This is a kickoff application that uses two data sources (two MySQL databases, one named authorsdb and one named booksdb) with two connection pools (each database uses its own HikariCP connection pool with different settings). Based on the above recipes, it is pretty easy to configure two connection pools from two different providers as well.

Key points:

  • In pom.xml, add the spring-boot-configuration-processor dependency

  • In application.properties, configure two HikariCP connection pools via two custom prefixes, e.g., app.datasource.ds1 and app.datasource.ds2

  • Write a @Bean that returns the first DataSource and mark it as @Primary

  • Write another @Bean that returns the second DataSource

  • Configure two EntityManagerFactory beans and point out the packages to scan for each of them

  • Put the domains and repositories for each EntityManager in the right packages
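The two DataSource beans from the key points above might be sketched like this (prefix and method names are illustrative):

```java
import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourcesConfig {

    // first connection pool (e.g., for authorsdb), marked as @Primary
    @Primary
    @Bean
    @ConfigurationProperties(prefix = "app.datasource.ds1")
    public DataSource ds1DataSource() {
        return DataSourceBuilder.create().build();
    }

    // second connection pool (e.g., for booksdb)
    @Bean
    @ConfigurationProperties(prefix = "app.datasource.ds2")
    public DataSource ds2DataSource() {
        return DataSourceBuilder.create().build();
    }
}
```

Each EntityManagerFactory then receives one of these data sources plus its own packages to scan, so each repository talks to the right database.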

Output sample:

(Output screenshot omitted.)

The source code can be found here.

Item 66: How to Provide a Fluent API via Setters For Building Entities

Description: This is a sample application that alters the entity setters in order to provide a fluent API.

Key points:

  • In entities, return this instead of void in setters
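A minimal sketch of the pattern on a plain class (JPA annotations omitted; Hibernate accepts setters with a non-void return type, so the same pattern works on real entities):

```java
public class Author {

    private String name;
    private String genre;

    public Author setName(String name) {
        this.name = name;
        return this; // return this instead of void
    }

    public Author setGenre(String genre) {
        this.genre = genre;
        return this;
    }

    public String getName() { return name; }
    public String getGenre() { return genre; }

    public static void main(String[] args) {
        // setter calls can now be chained
        Author author = new Author()
                .setName("Joana Nimar")
                .setGenre("History");
        System.out.println(author.getName() + " writes " + author.getGenre());
    }
}
```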

Example:

(Output screenshot omitted.)

The source code can be found here.

Item 67: "How to Provide a Fluent API Via Additional Methods For Building Entities"

Item 68: How to Remove the Extra SELECT COUNT Query in Spring Boot Slice Paging With findAll()

Most probably this is all you want: How To Fetch Slice<entity>/Slice<dto> Via fetchAll/fetchAllDto

Story: Spring Boot provides an offset-based built-in paging mechanism that returns a Page or Slice. Each of these APIs represents a page of data and some metadata. The main difference is that Page contains the total number of records, while Slice can only tell if there is another page available. For Page, Spring Boot provides a findAll() method capable of taking as arguments a Pageable and/or a Specification or Example. In order to create a Page that contains the total number of records, this method triggers an extra SELECT COUNT query next to the query used to fetch the data of the current page. This can be a performance penalty, since the SELECT COUNT query is triggered every time we request a page. In order to avoid this extra query, Spring Boot provides a more relaxed API, the Slice API. Using Slice instead of Page removes the need for this extra SELECT COUNT query and returns the page (records) and some metadata without the total number of records. So, while Slice doesn't know the total number of records, it can still tell if there is another page available after the current one or if this is the last page. The problem is that Slice works fine for queries containing the SQL WHERE clause (including those that use the Query Builder mechanism built into Spring Data), but it doesn't work for findAll(). This method will still return a Page instead of a Slice; therefore, the SELECT COUNT query is triggered for Slice findAll(...);.

Key points:

  • Consider reading further here.

Item 69: How to Implement Offset Pagination in Spring Boot

Description: When we rely on offset paging, we have the performance penalty induced by throwing away n records before reaching the desired offset. A larger n leads to a more significant performance penalty. Another penalty is the extra SELECT needed to count the total number of records. But, for relatively small datasets, offset and keyset provide almost the same performance results. Spring Boot provides built-in support for offset pagination via the PagingAndSortingRepository, Page, and Slice APIs; therefore, it is very easy to use it in an application. This application relies on PagingAndSortingRepository and Page and extracts records as entities, but if all you want to do is read this data as pages, then consider DTOs to avoid consuming memory and CPU for nothing. As a rule, extract entities only if you plan to modify them. In this case, using DTOs implies a native SQL query or another approach instead of Page<T>.

Key points:

  • Write a repository that extends PagingAndSortingRepository

  • Call or write methods that return  Page<T> 

Examples:

  • Call the built-in findAll(Pageable) without sorting:
     repository.findAll(PageRequest.of(page, size));

  • Call the built-in findAll(Pageable) with sorting:
     repository.findAll(PageRequest.of(page, size, new Sort(Sort.Direction.ASC, "name")));

  • Use Spring Data query creation to define new methods in your repository:

    • Page<Author> findByName(String name, Pageable pageable);

    • Page<Author> queryFirst10ByName(String name, Pageable pageable); 

The source code can be found here.

Item 70: How to Remove the Extra SELECT COUNT Query in Spring Boot Paging via Window Functions

Description: Typically, in offset pagination, there is one query needed for fetching the data and one for counting the total number of records. But we can fetch this information in a single database roundtrip via a SELECT COUNT subquery nested in the main SELECT. Even better, for database vendors that support window functions, there is a solution relying on COUNT(*) OVER(), as in this application, which uses this window function in a native query against MySQL 8. So, prefer this one instead of a SELECT COUNT subquery.

Key points:

  • Fetch a page as a List

  • Fetch a page as a Page
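A sketch of how such a native query might look in a Spring Data repository (the Author entity, column names, and method signature are illustrative):

```java
import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;

@Repository
public interface AuthorRepository
        extends PagingAndSortingRepository<Author, Long> {

    // COUNT(*) OVER() puts the total number of rows on every fetched row,
    // so the page data and the total count arrive in one roundtrip (MySQL 8+)
    @Query(value = "SELECT id, name, age, COUNT(*) OVER() AS total "
                 + "FROM author ORDER BY id LIMIT :size OFFSET :offset",
           nativeQuery = true)
    List<Object[]> fetchPage(@Param("offset") int offset,
                             @Param("size") int size);
}
```

From the fetched rows and the total column, you can assemble a Page yourself (e.g., via PageImpl) without the extra SELECT COUNT query.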

Example:

(Output screenshot omitted.)

Item 71: How to Implement Keyset Pagination in Spring Boot

For a detailed explanation of this item and 140+ items, check out my book Spring Boot Persistence Best Practices. This book helps every Spring Boot developer squeeze the performance of the persistence layer.

Description: When we rely on offset paging, we have the performance penalty induced by throwing away n records before reaching the desired offset. A larger n leads to a more significant performance penalty. When n is large, it is better to rely on keyset pagination, which maintains a "constant" response time for large datasets. In order to understand how badly offset can perform, please check this article.

Need to Know if There Are More Records?

Then "How To Implement Keyset Pagination With "Next Page" Button In Spring Boot" is for you!

Key points:

  • Choose a column to act as the latest visited record (e.g., id)

  • Use the column(s) in the WHERE and ORDER BY clauses of your SQL
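The two key points above can be sketched as a native query in a repository (a sketch, assuming an Author entity whose id column drives the keyset):

```java
import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.Repository;
import org.springframework.data.repository.query.Param;

public interface AuthorRepository extends Repository<Author, Long> {

    // the WHERE clause continues from the last visited id instead of
    // discarding OFFSET rows; ORDER BY keeps the traversal stable
    @Query(value = "SELECT * FROM author WHERE id < :id "
                 + "ORDER BY id DESC LIMIT :limit",
           nativeQuery = true)
    List<Author> fetchNext(@Param("id") long id, @Param("limit") int limit);
}
```

The caller passes the id of the last record from the previous page, so each page is fetched in roughly constant time regardless of how deep into the dataset it is.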

The source code can be found here.

Item 72: How to Optimize Batch Inserts of Parent-Child Relationships and Cascade Persist

Description: Let's suppose that we have a one-to-many relationship between Author  and Book  entities. When we save an author, we save his books as well thanks to cascading all/persist. We want to create a bunch of authors with books and save them in the database (e.g., a MySQL database) using the batch technique. By default, this will result in batching each author and the books per author (one batch for the author and one batch for the books, another batch for the author and another batch for the books, and so on). In order to batch authors and books, we need to order inserts as in this application.

Key points:

Besides all settings specific to batching inserts in MySQL, we need to set up the following property in application.properties: spring.jpa.properties.hibernate.order_inserts=true

Example Without Ordered Inserts:

(Output screenshot omitted.)

Example With Ordered Inserts:

(Output screenshot omitted.)

The source code can be found here.

Item 73: How to Batch Updates In MySQL

For a detailed explanation of this item and 140+ items, check out my book Spring Boot Persistence Best Practices. This book helps every Spring Boot developer squeeze the performance of the persistence layer.

Implementations:

Update single entity
Parent-child relationship update

Description: Batch updates in MySQL.

Key Points:

  • In application.properties, set spring.jpa.properties.hibernate.jdbc.batch_size (recommended between 5-30)

  • In application.properties, set the JDBC URL with rewriteBatchedStatements=true (optimization for MySQL; statements get rewritten into a single String buffer and sent in a single request)

  • In application.properties, set the JDBC URL with cachePrepStmts=true (enables caching and is useful if you decide to set prepStmtCacheSize, prepStmtCacheSqlLimit, etc. as well; without this setting, the cache is disabled)

  • In application.properties, set the JDBC URL with useServerPrepStmts=true (this way, you switch to server-side prepared statements, which may lead to a significant performance boost)

  • In case of using a parent-child relationship with cascade all/merge (e.g., one-to-many, many-to-many), consider setting spring.jpa.properties.hibernate.order_updates=true to optimize the batching by ordering updates

  • Before Hibernate 5, we needed a setting in application.properties for enabling batching for versioned entities during update and delete operations (entities that contain @Version for implicit optimistic locking): spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true. Starting with Hibernate 5, this setting should be true by default.
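Putting the settings above together, application.properties might look like this (the database name and batch size are illustrative):

```properties
spring.jpa.properties.hibernate.jdbc.batch_size=30
spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true
spring.datasource.url=jdbc:mysql://localhost:3306/bookstoredb?cachePrepStmts=true&useServerPrepStmts=true&rewriteBatchedStatements=true
```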

Output Example for Parent-Child Relationship Update:

(Output screenshot omitted.)

Item 74: How to Batch Deletes in MySQL via SQL "on Delete Cascade"

Description: Batch deletes in MySQL via ON DELETE CASCADE. The auto-generated database schema will contain the ON DELETE CASCADE directive.

Note: Spring deleteAllInBatch() and deleteInBatch() don't use delete batching and don't take advantage of the optimistic locking mechanism to prevent lost updates. They trigger bulk operations via Query#executeUpdate(); therefore, the Persistence Context is not synchronized accordingly (it is advisable to flush (before delete) and close/clear (after delete) the Persistence Context to avoid issues created by unflushed (if any) or outdated (if any) entities). The first one simply triggers a delete from entity_name statement, while the second one triggers a delete from entity_name where id=? or id=? or id=? ... statement. Both of them take advantage of ON DELETE CASCADE and are very efficient. To delete in batches, rely on the deleteAll(), deleteAll(Iterable entities), or delete() methods. Behind the scenes, the two flavors of deleteAll() rely on delete(). Mixing batching with database automatic actions (ON DELETE CASCADE) will result in a partially synchronized Persistence Context.

Key Points:

  • In this application, we have an Author entity, and each author can have several Book entities (one-to-many)

  • First, we remove orphanRemoval or set it to false

  • Second, we use only CascadeType.PERSIST and CascadeType.MERGE

  • Third, we set @OnDelete(action = OnDeleteAction.CASCADE) next to @OneToMany

  • Fourth, we set spring.jpa.properties.hibernate.dialect to org.hibernate.dialect.MySQL5InnoDBDialect (for MySQL 8, set it to org.hibernate.dialect.MySQL8Dialect)

  • Fifth, we run through each deleteFoo() method
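The mapping described in the key points above might look like this on the parent side (a sketch; the Book child entity and field names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.OnDelete;
import org.hibernate.annotations.OnDeleteAction;

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // no orphanRemoval and no CascadeType.REMOVE: the database cascades
    // the delete via the generated ON DELETE CASCADE directive
    @OneToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE},
               mappedBy = "author")
    @OnDelete(action = OnDeleteAction.CASCADE)
    private List<Book> books = new ArrayList<>();

    // getters and setters omitted
}
```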

Output Example:

(Output screenshot omitted.)

The source code can be found here.

Item 75: "How To Batch Deletes In MySQL (no relationships)"

Item 76: "How To Batch Deletes In MySQL Via orphanRemoval=true"

Item 77: How To Retry Transactions After OptimisticLockException

Description: This is a Spring Boot application that simulates a scenario that leads to an OptimisticLockException. When such an exception occurs, the application retries the corresponding transaction via the db-util library developed by Vlad Mihalcea.

Key points:

  • In pom.xml, add the db-util dependency

  • Configure the OptimisticConcurrencyControlAspect bean

  • Mark the method that is prone to throw OptimisticLockException with @Retry(times = 10, on = OptimisticLockException.class)
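A sketch of the aspect bean and an annotated service method, assuming the db-util dependency; the BookService class and updatePrice() method are illustrative:

```java
import javax.persistence.OptimisticLockException;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import com.vladmihalcea.concurrent.Retry;
import com.vladmihalcea.concurrent.aop.OptimisticConcurrencyControlAspect;

@Configuration
@EnableAspectJAutoProxy
class RetryConfiguration {

    // the aspect that intercepts @Retry methods and re-runs them
    @Bean
    public OptimisticConcurrencyControlAspect optimisticConcurrencyControlAspect() {
        return new OptimisticConcurrencyControlAspect();
    }
}

@Service
class BookService {

    // retried up to 10 times if the transaction fails with OptimisticLockException
    @Retry(times = 10, on = OptimisticLockException.class)
    @Transactional
    public void updatePrice(Long bookId) {
        // read-modify-write logic prone to OptimisticLockException goes here
    }
}
```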

The source code is here.

Done! For a detailed explanation of 140+ performance items, check out my book Spring Boot Persistence Best Practices. This book helps every Spring Boot developer squeeze the performance of the persistence layer.

Topics:
hibernate, spring boot, persistence, java, hibernate 5, spring boot 2, performance

Opinions expressed by DZone contributors are their own.
