
The Definitive Guide to the Modern Database: Part IV


Learn all about databases in part 4 of the definitive guide. This time we talk hardware vs. software.


In the previous posts in the Definitive Guide to the Modern Database series, we examined several trade-offs: In-Memory vs. Disk-Based, Scale-In vs. Scale-Out, Consistent vs. Inconsistent, SQL vs. NoSQL, and "Classical" vs. "Modern." Today, let's take a look at the final trade-off to consider when selecting a modern database.

Trade-Off #7: Hardware vs. Software, Scale-In

Across a wide variety of OLTP applications, input arrives as a high-velocity stream, so speed remains the top concern for transactional workloads. Improving the hardware can be a viable way to increase speed; however, after upgrading a server with faster CPUs, faster disks, and/or faster memory, you may find that speed does not increase sustainably, or at all. Physics explains why: even inside a powerhouse server, the distance data must travel through wires and chips stays roughly the same. A hypothetical perfectly straight cable on the ocean floor would carry data faster than a longer, winding route, simply because the path is shorter.
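To see why distance, not component speed, becomes the bottleneck, a back-of-the-envelope calculation helps. The numbers below (speed of light, and roughly two-thirds of it for signals in fiber or copper) are standard physics figures, not from the article:

```python
# Back-of-the-envelope: how far a signal can travel in one nanosecond.
C_VACUUM = 299_792_458        # speed of light in vacuum, m/s
C_MEDIUM = C_VACUUM * 0.67    # ~2/3 c, typical for optical fiber / copper traces

def distance_per_ns(speed_m_per_s: float) -> float:
    """Distance in centimeters covered in one nanosecond."""
    return speed_m_per_s * 1e-9 * 100

print(f"vacuum: {distance_per_ns(C_VACUUM):.1f} cm/ns")  # about 30 cm
print(f"medium: {distance_per_ns(C_MEDIUM):.1f} cm/ns")  # about 20 cm
```

In other words, a signal inside a server crosses only about 20 cm per nanosecond, so every extra centimeter between data and the code that processes it costs time no faster CPU can recover.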

Picture it like this: to gain the best performance, someone has to place all the pieces of data, along with the software that processes them, as physically close together as possible. This practice is called scaling-in, and it removes proxy layers such as the interfaces of memory-based DBMSs. Taken to its limit, you can imagine a situation in which every object inside the application can optionally become a database object, meaning it can be retrieved through SQL, persisted to disk, and written without concurrency conflicts. This happens when the database engine and the application are scaled-in together at the operating-system level, with both working on the same data without passing it through proxies. An added benefit of this approach is ease of use: the monotonous wrapper code is no longer necessary, and for some users that is a greater deciding factor than performance.
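A minimal sketch of this idea can be shown with an embedded engine that runs inside the application's process, so data never crosses a network or inter-process boundary. The article names no specific product; SQLite (via Python's built-in sqlite3 module) is my illustrative stand-in here:

```python
# Sketch of "scale-in": the database engine lives inside the application
# process (no server, no network hop, no serialization proxy layer).
import sqlite3
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    balance: int

# The engine and its data share the application's process and memory.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account (name TEXT, balance INTEGER)")

# An application object becomes a database row without leaving the process...
alice = Account("alice", 100)
db.execute("INSERT INTO account VALUES (?, ?)", (alice.name, alice.balance))

# ...and comes back via plain SQL.
row = db.execute(
    "SELECT balance FROM account WHERE name = ?", ("alice",)
).fetchone()
print(row[0])  # 100
```

This only approximates the article's limit case (where application objects and database objects are literally the same), but it shows the direction: fewer layers between the data and the code that uses it.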

In order to improve hardware for performance purposes, simply adding faster standard components will not suffice. In a recent battle against the laws of physics in U.S. high-frequency trading, firms deployed microwave transmitters only to shave a few nanoseconds off each transaction. Overcoming similar transmission limits in current microchip manufacturing requires its own solution. Replacing silicon with an entirely new technology seems too large in scope, so the more realistic answer lies in the emerging Systems-on-Chip field, which already manufactures chips with modern media codecs implemented directly in silicon. This trend may captivate the transactional world because it is, at the level of both hardware and the subject domain, precisely about scaling-in.

While trade-offs and advances in technology will continue to evolve over the years, understanding the modern database and your options will always be key to selecting the best database for your needs. Need a refresher on the other trade-offs we've covered? Check them out in parts one, two, and three of our series.



Opinions expressed by DZone contributors are their own.
