Optimizing Access Patterns for Extendible Hashing

In this article, let's take a look at optimizing access patterns for extendible hashing.

By Oren Eini · Dec. 17, 19 · Analysis · 19.5K Views

I'm continuing to explore the use of extendible hashing, and I ran into an interesting scenario. The whole point of using a hash table is to reduce the cost of lookups to O(1). When using persistent data structures, though, the usual cost that we care about is not the number of CPU instructions, but the number of disk accesses.

For B-Trees, the usual cost is O(log_fanout(N)). A typical fanout for a B-Tree in Voron would be around 256. In other words, if we have a hundred million records in a B-Tree, we can expect 3-4 disk reads to find a particular record. That doesn't sound too bad until you realize just how slow disks are. The good news is that you can typically cache things.
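As a quick back-of-the-envelope check (my arithmetic, not the article's), the expected tree height falls directly out of the fanout:

```java
public class BTreeDepth {
    public static void main(String[] args) {
        double records = 100_000_000d;
        double fanout = 256d;
        // The height of a B-Tree is roughly log base fanout of the record count
        double height = Math.log(records) / Math.log(fanout);
        System.out.printf("log_256(10^8) = %.2f%n", height); // prints 3.32, hence 3-4 reads
    }
}
```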

The first two levels of the B-Tree can be cached very efficiently; we need less than 100 KB to keep them all in memory. For another 25 MB, we can keep the third level of the B-Tree in main memory. The fourth level, on the other hand, has about 300-400 thousand pages and takes 6-9 GB of space (depending on fragmentation).


Let's assume a system that has 1 GB of RAM available. That means we'll likely be able to keep the first three levels of the B-Tree in main memory (costing us a measly 25 MB) but be forced to do mostly disk reads on each access to a page in the last level.


I'm oversimplifying here, because a lot depends on the insert patterns that built the B+Tree. Assuming minimal fragmentation (data inserted in sorted order), it is likely that there are quite a few leaf pages in the third level, but that only holds if we don't store any values in the B-Tree. These are back-of-the-envelope computations, not exact numbers.

The good thing is that a B-Tree has a number of interesting properties, chief among them the fact that the data is sorted. If you are accessing the data in sorted order, you are going to get the optimal access pattern. For example, Voron will analyze your workload and prefetch related data before you ask for it. That means that you are likely not going to be hitting many disk reads directly. How does that work when we use hashing?

Pretty much by definition, the data is much more widely spread. That means that accessing any two keys is very likely going to result in separate disk reads. Even though we have a theoretical cost of O(1), compared to the B-Tree's 3-4, the B-Tree can also be trivially reduced to a single disk read in most cases. In the extendible hashing case, for a hundred million records, assuming that we can fit a maximum of 256 entries per page, we'll need:

  • About 3 MB for the directory
  • About 6 - 8 GB for the data itself, depending on the load factor.
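Those two bullet points can be reproduced with a short sizing calculation. The page size (8 KB, which is what Voron uses), the 8-byte directory pointer, and the ~50% load factor are my assumptions, not numbers from the article:

```java
public class HashSizing {
    public static void main(String[] args) {
        long records = 100_000_000L;
        int entriesPerPage = 256;
        int pageSize = 8 * 1024;   // assuming Voron's 8 KB pages
        int pointerSize = 8;       // assuming one 8-byte pointer per directory slot

        long dataPages = records / entriesPerPage;            // 390,625 pages
        long directoryBytes = dataPages * pointerSize;        // ~3 MB of directory
        long dataBytesFull = dataPages * (long) pageSize;     // ~3.2 GB at 100% load
        double loadFactor = 0.5;                              // assumed typical load factor
        long dataBytes = (long) (dataBytesFull / loadFactor); // ~6.4 GB of data

        System.out.printf("directory: %.1f MB, data: %.1f GB%n",
                directoryBytes / 1e6, dataBytes / 1e9);       // prints 3.1 MB, 6.4 GB
    }
}
```

A lower load factor pushes the data size toward the 8 GB end of the article's range.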

Here we can see something really interesting. In the B-Tree case, we needed about 25 MB of memory just for branch pages; in the hash case, we need only 3 MB for the directory. Yes, there is an order of magnitude gap between them, but for practical purposes, there isn't much of one.

However, accessing the hash table in an ordered fashion is going to result in pretty much guaranteed disk reads on every call. We would essentially be doing full random reads, and that isn't a nice thing to do to our disk or our performance.

We can take advantage of a feature of extendible hashing. The hash table has the notion of a global depth, which guarantees that every entry in a given data page shares the same first depth bits. The actual promise is a bit more involved than that (there is a local depth and a global depth), but for our purposes, it means that if we access keys that have the same value in their first depth bits, we'll end up in the same page.

That means that if we sort the keys we want to search for by their first depth bits, we can be sure that we'll get good locality. We're very likely to hit the same page (already in memory) instead of having to fetch a whole other page from the disk. There aren't many patterns to find in hashed data access, but there are still a few things that you might be able to do.
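A minimal sketch of that idea (the global depth and key values here are my own illustration, not from the article): mask off the depth bits and sort the pending lookups by them, so keys that land in the same hash page are processed back to back.

```java
import java.util.*;

public class DepthSort {
    public static void main(String[] args) {
        int globalDepth = 8;                       // hypothetical global depth
        long depthMask = (1L << globalDepth) - 1;  // keep only the first 8 bits

        // Hypothetical keys to look up; keys with equal (key & depthMask)
        // land in the same data page, so we want them adjacent.
        List<Long> keys = new ArrayList<>(
                List.of(0x1_0042L, 0x2_0017L, 0x3_0042L, 0x4_0017L));
        keys.sort(Comparator.comparingLong(k -> k & depthMask));

        for (long k : keys)
            System.out.printf("key=%#x bucketBits=%#x%n", k, k & depthMask);
        // Both 0x17 keys come first, then both 0x42 keys: two page visits, not four
    }
}
```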

My use case calls for an extendible hash of an int64 key to int64 value, so something like:

Map<int64, int64> FriendsStorage;

In other words, the key for this map would be the user id, and the value would be the file offset where I can read the list of friends that this user has. If I want to do something like find friends of friends (three levels deep), I can do:

C#

List<long> FindFriendsOfFriends(long start)
{
    var depthMask = (1 << FriendsStorage.Depth) - 1;
    // Order pending lookups by their first depth bits, so keys that share
    // a hash page are fetched together (full key as a tie-breaker).
    var sortedByDepth = new SortedSet<long>(Comparer<long>.Create((x, y) =>
    {
        var cmp = (x & depthMask).CompareTo(y & depthMask);
        return cmp != 0 ? cmp : x.CompareTo(y);
    }));

    var startingFriendsOffset = FriendsStorage.Get(start);
    sortedByDepth.UnionWith(GetFriendsList(startingFriendsOffset));

    var visited = new HashSet<long> { start };
    var matches = new List<long>();
    while (sortedByDepth.Count > 0)
    {
        var current = sortedByDepth.Min;
        sortedByDepth.Remove(current);
        if (visited.Add(current) == false)
            continue; // already processed this user
        matches.Add(current);
        var friendsOffset = FriendsStorage.Get(current);
        sortedByDepth.UnionWith(GetFriendsList(friendsOffset));
    }
    return matches;
}
First, I create a collection that sorts the values by their depth bits, then I read the starting offset and start scanning through the values. I always scan the data in order of the first depth bits, which should mean that I'm hitting the same page for related data. I'm also taking advantage of the fact that I'm likely to discover data in locations I have already visited, so I favor going first to places that I have already seen, assuming that they are still retained in memory.

This isn't going to always work, of course, but I suspect that this is going to be a pretty good approach from a heuristic point of view.

Further Reading

HashMap Internal Implementation Analysis in Java


Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.

