RAM is the New SSD
Don't believe your data fits in RAM? Think again. With recent advancements, RAM is the new SSD. Read why RAM is poised to really boost performance.
Your data fits in RAM. Yes, it does. Don’t believe it? Visit the hilarious yourdatafitsinram.com website.
But there is an entirely new dimension to this since last week's announcement by Intel and Micron, which hasn't gotten enough attention in the blogosphere yet:
"New 3D XPoint™ technology brings non-volatile memory speeds up to 1,000 times faster than NAND, the most popular non-volatile memory in the marketplace today. The companies invented unique material compounds and a cross point architecture for a memory technology that is 10 times denser than conventional memory."
This is colossal news, which you can read from the official source.
What Does it Mean for Software?
SSD has already had a big impact on how we think about software, especially in the database business. Many RDBMSs' internal optimisations assume that the database runs on a system with few CPUs, a modest amount of RAM, and a large HDD. HDDs are slow and suffer high latency from seeking and platter rotation, so data needs to be cached on several layers: in the operating system, which accesses blocks on the disk, as well as in the database, which accesses rows from tables and indexes.
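The database-side caching mentioned above is typically a buffer pool with an eviction policy. As a minimal sketch (plain Python, not actual DBMS code), here is an LRU cache that keeps hot pages in RAM and only falls back to a simulated, expensive disk read on a miss:

```python
from collections import OrderedDict

class LRUCache:
    """A minimal LRU cache, sketching the kind of buffer pool a
    database keeps in RAM to avoid re-reading pages from a slow disk."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key, load_from_disk):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)    # mark as most recently used
            return self.store[key]
        self.misses += 1
        value = load_from_disk(key)        # expensive: simulates disk I/O
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False) # evict least recently used page
        return value

# Usage: repeated reads of a hot page hit RAM instead of "disk".
cache = LRUCache(capacity=2)
fake_disk = {1: "page-1", 2: "page-2", 3: "page-3"}
for page in [1, 2, 1, 1, 3, 1]:
    cache.get(page, fake_disk.__getitem__)
print(cache.hits, cache.misses)  # 3 hits, 3 misses for this access pattern
```

The point of the sketch: the more of the working set fits into the cache, the fewer slow disk reads remain, which is exactly the lever that cheap, fast non-volatile memory moves.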
SSD changed a lot of this, as the spinning (and its associated latency) has gone, which is most useful for index lookups, as Markus Winand from use-the-index-luke.com explains:
"Index lookups have a tendency to cause many random IO operations and can thus benefit from the fast response time of SSDs. The fun part is that properly indexed databases get better benefits from SSD than poorly indexed ones."
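You can see the access-pattern shift Winand describes directly in a query plan. Here is a small sketch using Python's built-in sqlite3 module with a hypothetical `users` table: before the index, the engine scans every row; after it, the same query becomes a handful of random index probes, the workload that fast random reads accelerate most:

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(10_000)])

query = "SELECT id FROM users WHERE email = 'user42@example.com'"

# Without an index, SQLite must scan all 10,000 rows.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]
print(plan_before)   # e.g. "SCAN users"

conn.execute("CREATE INDEX idx_users_email ON users (email)")

# With the index, the same query becomes a direct lookup.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]
print(plan_after)    # e.g. "SEARCH users USING ... INDEX idx_users_email ..."
```

On spinning disks those random index probes were the expensive part; on SSD (and even more so on 3D XPoint-class memory) they become nearly free, which is why well-indexed databases profit most.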
SSD is still relatively new and not yet fully adopted in enterprise data centers and associated software, yet already, we’re seeing this new trend:
RAM is the new SSD
One of the most impressive examples listed on yourdatafitsinram.com is Stack Exchange, the company behind the popular Stack Overflow. According to their website, the platform transfers 48 TB of data per month to its users, at an average of 225 requests per second.
From our perspective, the database metrics are even more interesting: Stack Overflow essentially runs on a single SQL Server instance (with a hot standby), serving 440M queries per day with 384 GB of RAM and a database size of 2.4 TB.
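A bit of back-of-envelope arithmetic (assuming 30-day months and decimal units, since the source doesn't specify) puts those figures in perspective:

```python
# Rough arithmetic on the Stack Overflow numbers quoted above:
# 440M queries/day, 48 TB/month, 384 GB RAM, 2.4 TB database.

queries_per_day = 440_000_000
qps = queries_per_day / 86_400                 # seconds per day
print(f"{qps:,.0f} SQL queries/second on average")      # ~5,093

bytes_per_month = 48 * 10**12                  # 48 TB, decimal units
seconds_per_month = 30 * 86_400                # assumed 30-day month
throughput = bytes_per_month / seconds_per_month
print(f"{throughput / 10**6:.1f} MB/s sustained outbound")  # ~18.5 MB/s

ram_gb, db_tb = 384, 2.4
ratio = ram_gb / (db_tb * 1000)
print(f"RAM covers {ratio:.0%} of the database")        # 16%
```

So a single instance averages roughly five thousand SQL queries per second while holding only about a sixth of the database in RAM; the hot working set evidently fits, which is the whole point.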
The full metrics can be found on the Stack Exchange website.
Now, let’s apply Intel’s new 3D XPoint™ technology to this model – perhaps we don’t need any disk anymore, after all (except for logging and backups)?
Don’t Scale Out. Yet.
A lot of recent hype has been revolving around the need to scale out, as single-core performance scaling has stalled and we now need to parallelise across many cores. But that doesn't mean we must parallelise across many machines. Keeping all data processing in one place that can be scaled up with more processors and RAM spares us hard-to-manage network latency and lets us continue using established, slightly adapted RDBMS technology. Hardware prices will crumble soon enough.
We’re looking forward to an exciting new era of scaling up massively. With SQL, of course!
Published at DZone with permission of Lukas Eder, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.