
Why You Need to Move Your Data Center to a Software-Defined Paradigm


An interesting technological trend has been taking off in the data center market: the development of new paradigms for the storage and networking subsystem models. Major virtualization vendors already offer upgraded product lines with software-defined storage and networking solutions.

“Software-defined” describes a technological paradigm in which the operation of data is fully extracted and separated from the underlying hardware layer. It builds on the advantages gained by consolidating physical workloads onto virtual machine infrastructure, and it can finally breathe new life into the major resource pools of any data center.

Software-Defined Infrastructure

Software-Defined Storage (SDS) is a model that brings data storage to a new level by completely separating the logical storage service from its hardware. This separation lets consumers move data freely across commodity hardware, eliminates planned and unplanned downtime, removes single points of failure and central-element bottlenecks, and provides data deduplication, replication, snapshotting, and backup. Software-Defined Networking (SDN), in its turn, offers similar advantages for managing data transmission efficiently: it decouples the control function and the switching function from the hardware and places them in separate uniform layers, achieving significantly easier administration, agility, programmability, vendor agnosticism, and increased speed within corporate and data center networks.

Essentially, SDN replaces the complicated hardware “brains” laid down in the 1960s and 1970s with a single framework: a stack of uniform layers spanning the whole organization across all of its data centers, no matter how large they are.
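To make the control/data-plane split concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the class names, the rule format, the port numbers); it only illustrates the idea that switches keep dumb match/action tables while one controller holds the topology logic:

```python
# Toy illustration of SDN's separation of concerns; not a real SDN API.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination address) -> action (output port)

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Data plane: a pure table lookup, no routing logic on the switch itself.
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: one place that knows the topology and programs
    every switch, instead of each device running its own protocol stack."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def set_path(self, dst, hops):
        # hops: list of (switch_name, out_port) pairs along the chosen path.
        for name, port in hops:
            self.switches[name].install_rule(dst, port)

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.set_path("10.0.0.5", [("s1", 2), ("s2", 1)])
print(s1.forward("10.0.0.5"))  # forwards out port 2
print(s2.forward("10.0.0.9"))  # no rule installed, so the packet is dropped
```

The point is that changing network behavior becomes a call into the controller, not a box-by-box reconfiguration.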

As for SDS, the technology offers three key benefits:

  • Hardware independence: no binding to any specific hardware.
  • A genuinely deeper level of automation: policy-driven, self-optimizing, intelligent storage provisioning.
  • Maximum manageability of scaling, availability, redundancy, and data migration.
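“Policy-driven provisioning” is easiest to see in code. The sketch below assumes a made-up policy schema (the names “gold”/“silver” and every field are illustrative, not any vendor's API); the idea is that an administrator picks a policy, and the policy, not manual configuration, decides the layout:

```python
# Hypothetical policy catalog; real SDS products express the same idea
# through their own storage-policy APIs.
POLICIES = {
    "gold":   {"replicas": 3, "media": "ssd", "snapshots_per_day": 24},
    "silver": {"replicas": 2, "media": "hdd", "snapshots_per_day": 4},
}

def provision_volume(name, size_gb, policy):
    """Return a volume spec derived entirely from the named policy."""
    p = POLICIES[policy]
    return {
        "name": name,
        "size_gb": size_gb,
        "replicas": p["replicas"],
        "media": p["media"],
        "snapshots_per_day": p["snapshots_per_day"],
    }

vol = provision_volume("vm-db-01", 200, "gold")
print(vol)
```

Self-optimization then amounts to the system continuously re-checking volumes against their policies and correcting drift.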

Applied within a virtual infrastructure, SDS provides flexible multilevel replication that places virtual machines' data in a pseudo-random manner across different physical disks, nodes, racks, rooms, and so on. As a result, each block of data is written to several instances, with the physical disk layout taken into account. Such storage subsystems have no central elements and keep serving virtual machines through the failure of a disk, a node, or an entire rack, with per-block customizable replication and easy addition of further disk space. These systems can scale to thousands of nodes without any downtime, and any commodity server holding a number of plain hard drives and solid-state drives is suitable for the job. If we calculated the innovation value against today's traditional storage paradigms, we would see that hardware agnosticism and a rapid rise in operational efficiency lead to significantly reduced capital expenses (CapEx) and operating expenses (OpEx).
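The pseudo-random, failure-domain-aware placement described above can be sketched in a few lines. This is an illustrative toy, assuming a hypothetical cluster map with racks as the failure domain; production systems use far more elaborate schemes, but the principle, deriving replica locations deterministically from the block identifier while forcing replicas into distinct domains, is the same:

```python
import hashlib

# Hypothetical cluster map: rack -> disks, written as "node:disk" labels.
CLUSTER = {
    "rack-a": ["n1:d0", "n1:d1", "n2:d0"],
    "rack-b": ["n3:d0", "n3:d1"],
    "rack-c": ["n4:d0", "n4:d1", "n5:d0"],
}

def place_block(block_id: str, replicas: int = 3):
    """Deterministically pick one disk in each of `replicas` distinct racks."""
    def score(item, salt):
        # Per-block hash: placement looks random but is reproducible
        # from the block id alone, with no central lookup table.
        h = hashlib.sha256(f"{block_id}:{salt}:{item}".encode())
        return int.from_bytes(h.digest()[:8], "big")

    chosen = []
    racks = sorted(CLUSTER, key=lambda r: score(r, "rack"))[:replicas]
    for rack in racks:
        disks = sorted(CLUSTER[rack], key=lambda d: score(d, "disk"))
        chosen.append((rack, disks[0]))
    return chosen

placement = place_block("vm42-block-0007")
print(placement)
```

Because each replica lands in a different rack, a whole-rack failure still leaves live copies, which is exactly why such subsystems need no central element.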

Software Defined Storage

Talking about SDN, I would like to refer to the definition given on the official Open Networking Foundation page, which lists the trends driving the new network paradigm:

  • Changing traffic patterns: Applications that commonly access geographically distributed databases and servers through public and private clouds require extremely flexible traffic management and access to bandwidth on demand;
  • The “consumerization of IT”: The Bring Your Own Device (BYOD) trend requires networks that are both flexible and secure;
  • The rise of cloud services: Users expect on-demand access to applications, infrastructure, and other IT resources;
  • “Big data” means more bandwidth: Handling today’s mega datasets requires massive parallel processing that is fueling a constant demand for additional capacity and any-to-any connectivity.

Therefore, any modern private data center architecture must comply with the aforementioned requirements. Many SDS and SDN developments are currently under way around the world, including a significant open-source standardization footprint.

For example, let's compare the task of scaling storage capacity under the traditional and the SDS approaches.

Traditional data center vs Software Defined

In the old, traditional paradigm, you would need to plan the whole lifecycle of the future storage system. At the planning stage, you must understand how much disk space it will provide, how many disk I/O operations it can serve, how it will be operated, and how it will scale. You also need to consider whether it requires planned downtime to enable new functions or disable extra ones, and you must plan for current or future disaster replication. Once the storage architecture for this lifecycle is understood, you have to invest in building a SAN or modifying your existing one. Thus, if you choose to present extended capacity to a new compute server, you need not only to add new disks to the storage array and create a new LUN, but at a minimum also to modify the zoning of the separate fabrics and set up multipathing.

In the new software-defined paradigm you find a completely different way of operating: no RAID-related calculations, no SAN setup, no zone creation, no special cabling or special switch hardware, and architecturally unlimited scaling of space and performance. A standard 10 Gbps Ethernet network is enough. Disks, nodes, and rooms are all suitable replication locations, which allows multiple replication levels and the deepest granularity of the replicated object. In this storage model you can add new disks to nodes, or a rack of nodes to the system, without any downtime for maintenance. Rebalancing, migration, new replication policies, and so on can simply be programmed, because this storage already is a program.
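To show what “scaling is just a program” can mean, here is a sketch using consistent hashing, one common placement technique in distributed storage (the node names and parameters are made up for illustration). Adding capacity is a single in-memory operation, and only a fraction of the blocks move, all of them onto the new node:

```python
import bisect, hashlib

def h(key: str) -> int:
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class Ring:
    """A consistent-hash ring: blocks map to the next node point clockwise."""
    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self._points = []  # sorted (hash, node) pairs
        for n in nodes:
            self.add_node(n)

    def add_node(self, node):
        # "Scaling out" is just inserting points into a sorted list.
        for i in range(self.vnodes):
            bisect.insort(self._points, (h(f"{node}#{i}"), node))

    def locate(self, block_id):
        idx = bisect.bisect(self._points, (h(block_id), "")) % len(self._points)
        return self._points[idx][1]

blocks = [f"block-{i}" for i in range(1000)]
ring = Ring(["node1", "node2", "node3"])
before = {b: ring.locate(b) for b in blocks}

ring.add_node("node4")  # add capacity: no downtime, no re-zoning, no cabling
after = {b: ring.locate(b) for b in blocks}

moved = sum(1 for b in blocks if before[b] != after[b])
print(f"{moved} of {len(blocks)} blocks rebalanced onto the new node")
```

Contrast this with the traditional flow above: new LUNs, fabric zoning, and multipathing setup are replaced by one function call, which the storage software can run while continuing to serve I/O.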

How long will the software-defined approach take to become an IT world trend? In terms of Geoffrey Moore's technology adoption life cycle, I would say it is currently crossing the chasm and will reach the peak of adoption in two to three years. So companies that want innovation value in operating their data center resources should already be trying out the key elements of the new paradigm.

SDN, SDS, and x86 virtualization adoption life cycle

All in all, software-defined infrastructure is the union of SDS and SDN developments with existing cloud and virtualization solutions into one proven complex with a strong interoperability matrix and, as a result, healthy holistic documentation and predictable evolution.

An interoperability matrix, simply speaking, captures the research done across product releases by teams working to achieve the right balance between the capabilities, coherence, and security of the combined complex. In practice this means preparing numerous lab deployments and running dedicated operational tests. Technically, such complexes allow the storage, network, and virtualization subsystems to exist on top of almost any commodity hardware, with smooth horizontal scaling afterwards. Each server is just a server and nothing more; all the magic happens inside the logical subsystems.



Published at DZone with permission of
