In-Memory Computing and its Impact on Software Performance
Developers can speed up data access by using in-memory databases, enabling much faster systems.
Hardware has evolved at a steady pace, shrinking components and shortening the distances signals must travel, and in doing so has constantly delivered higher performance. The basic principles of software, by contrast, have remained the same for decades. However, as with everything else, the software industry is making improvements.
In-memory platforms with in-memory databases are emerging and becoming the new standard. Unlike traditional computing, in-memory computing keeps both the application and the database in memory. Accessing data stored in memory is much faster, by some estimates as much as 10,000 times faster than in a traditional disk-based system. This minimizes the need for performance tuning and maintenance by developers and system integrators and provides a much faster experience for the end user. In-memory computing allows data to be analyzed in real time, enabling real-time reporting and decision-making for businesses. According to Gartner, deploying business intelligence tools on a traditional system can take as long as 17 months, so many vendors choose in-memory technology to shorten implementation time.
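The speed difference is easy to observe even with commodity tools. Below is a minimal sketch using Python's built-in sqlite3 module, which supports both file-backed databases and the special ":memory:" database. The exact ratio depends entirely on hardware and configuration; this is an illustration, not a benchmark.

```python
import os
import sqlite3
import tempfile
import time

def time_inserts(conn, rows=500):
    """Insert rows, committing each one, and return elapsed seconds."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    start = time.perf_counter()
    for i in range(rows):
        cur.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 32))
        conn.commit()  # each commit must be made durable
    return time.perf_counter() - start

# File-backed database: every commit has to reach stable storage.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
disk = sqlite3.connect(path)
disk_time = time_inserts(disk)
disk.close()

# In-memory database: the same workload never touches the disk.
mem = sqlite3.connect(":memory:")
mem_time = time_inserts(mem)
mem.close()

print(f"disk: {disk_time:.3f}s  memory: {mem_time:.3f}s")
```

The per-row commit is deliberate: it forces the file-backed database to pay the storage-synchronization cost on every write, which is exactly the cost an in-memory database avoids.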
Since in-memory databases use the server's main memory as the primary storage location, the system's size and cost are significantly reduced in addition to the gain in speed. Traditional systems keep a lot of redundant data because a copy of the data must be created for each component added to the system, such as an additional database, server, integrator, or middleware layer introduced to increase volume or performance. Every component you add makes the system more complex. By continuously adding hardware, you get:
- A never-ending hardware cost.
- An increasing need for storage space to store the hardware.
- A continuous work on integration and maintenance.
The more hardware you add, the more copies of the data are created and the farther the data has to travel, which over time degrades performance. The result is a slippery slope of falling performance, added hardware, and rising cost. An in-memory system, because data is stored in memory, involves a single data transfer and does not share the traditional system's signaling overhead and the performance loss that comes with it. Because of this, such a system can handle with one server a workload that would have required a traditional system to use 100 servers and databases. In-memory databases are designed from the start to be more streamlined, with the optimization goals of reducing memory consumption and CPU cycles.
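The single-transfer idea can be sketched in a few lines: once data is resident in memory, repeated requests are served from RAM instead of re-fetching (and re-copying) it from a backing tier each time. The sketch below simulates the backing tier with an artificial delay; the function names and the 10 ms latency are illustrative assumptions, not part of any real system.

```python
import time
from functools import lru_cache

def fetch_from_backing_store(key):
    """Simulate a traditional tier: every request pays a round trip
    and produces a fresh copy of the data."""
    time.sleep(0.01)  # stand-in for network/disk latency
    return {"key": key, "payload": "x" * 1024}

@lru_cache(maxsize=None)
def fetch_in_memory(key):
    """Fetch once, then serve every later request from memory."""
    return fetch_from_backing_store(key)

# 100 requests that re-fetch (and re-copy) the data every time.
start = time.perf_counter()
for _ in range(100):
    fetch_from_backing_store("user:42")
copied_time = time.perf_counter() - start

# The same 100 requests served from the in-memory copy.
start = time.perf_counter()
for _ in range(100):
    fetch_in_memory("user:42")
cached_time = time.perf_counter() - start

print(f"re-fetching: {copied_time:.3f}s  in-memory: {cached_time:.3f}s")
```

The first loop pays the round-trip cost 100 times; the second pays it once and then reads from RAM, which is the article's point about eliminating redundant copies and transfers.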
Published at DZone with permission of Thamwika Bergstrom, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.