
How "Software-Defined" Opens Up IT


Part history lesson, part crystal ball, check out the journey of software-defined everything, see how it relates to the cloud, and what the future looks like.


The "software-defined X" wave, with its overlay of virtualization, is profoundly changing how network equipment is managed. It is even reshaping commercial offerings, pushing them toward on-demand models. The trend, first observed in networking, is spreading steadily: "software-defined" tends to penetrate every layer of information and communication systems.

According to Gartner, the concept was first applied to networks (as software-defined networking, or SDN) in 2011. It has since expanded to data centers (SDDC), storage (SDS), and infrastructure (SDI).

On the back of a genuine technological shift, marketing took over and multiplied the variations. We now speak of software-defined "everything" (SD-X).

1. SDN, an Architectural Model

Software-defined is not a technology in itself, but rather an architectural model that builds on the virtualization of network elements and systems.

Historically, we can make the connection to virtual networks, be it the first Virtual Local Area Networks (VLANs) over Ethernet or Virtual Private Networks (VPNs).

These made it possible to separate, protect, and secure flows by restricting traffic (including multicast) on trunk interconnections (between departments, floors, or buildings), with routing decisions arbitrated by the network administrator.

With the advent of VMware virtualization platforms, architectures have been designed to support virtual machine (VM) mobility independent of the physical network.

Then came the notions of software-defined networking (SDN) and the related introduction of virtualized network functions (Network Functions Virtualization, or NFV).

Until now, network features were implemented in a switch, router, or appliance and baked into dedicated circuits (ASICs). This required different settings or configurations depending on the manufacturer of the network equipment.

Any change in the design of a network, or any modification to resource allocation (for example, opening or closing a connection), required manual intervention, a tedious process and a source of errors and bugs.

With server virtualization, VMs move from one system to another in seconds or minutes. But higher up the network (at Layer 3 routing, across trunks), this VM mobility can be blocked or can misbehave, forcing a manual reconfiguration.

The classic network architecture then shows its limits in terms of agility. Network devices (routers, switches, firewalls, etc.) have their own constraints and manufacturer-specific behaviors, even as standard protocols gradually take hold.

Hence the development of specialized network appliances such as:

  • WAN optimization controllers (WOCs)
  • Application delivery controllers (ADCs)
  • Data encryption modules

This is where SDN comes in, providing software-defined functionality. Instead of being hard-embedded in circuits, these features are installed on controller servers (standard Intel x86 machines), for example to aggregate links, encrypt data, create VPNs, or act as a firewall.

Many telecom operators including AT&T, Orange, and Colt rely on SDN because of the flexibility and potential of on-demand services.

Thus, Colt chose to rely on its own SDN. "It is an open source solution based on OpenFlow. We wanted to be independent of the manufacturers so we could connect with any operator via APIs [software connectors]," Carl Grivner, CEO of Colt, recently explained in an interview with Silicon.fr.

2. A Pillar: The OpenFlow Protocol

SDN owes a lot to the OpenFlow protocol. It was developed within the Open Networking Foundation (ONF) and supported by a broad ecosystem, including telecom operators, data center designers and managers, network equipment manufacturers, as well as Facebook and Google.

It enables network equipment to be programmed from a remote central controller. The controller identifies specific flows using a variety of parameters (MAC address, destination IP address, etc.), and then performs actions on those flows (forwarding through specified ports, dropping traffic, etc.) based on "flow tables".

Such a controller, knowing the entire topology of the network, makes it possible to program rules for all switches or network routers, whatever their brand.

Routing decisions are made by the controller for each data stream and pushed to the switches as simple switching instructions. Quality of Service (QoS) mechanisms can also prioritize certain flows over others (telephony over video, for example).

The instruction set of an OpenFlow switch is extensible, but a minimum base is common to all nodes in the network. The administrator can partition traffic according to efficiency and governance rules (e.g. business, IT production, IT development).
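To make the flow-table idea concrete, here is a minimal sketch in Python of the match/action lookup a switch performs. The field names and action dictionaries are illustrative stand-ins, not the actual OpenFlow wire format:

```python
# Minimal sketch of an OpenFlow-style flow table: an ordered list of
# match/action rules, evaluated first-match-wins. A packet that matches
# no rule is a "table miss" and is sent to the central controller.

def matches(rule, packet):
    """A rule matches if every field it specifies equals the packet's value."""
    return all(packet.get(field) == value for field, value in rule["match"].items())

def apply_flow_table(flow_table, packet):
    """Return the action of the first matching rule, or the table-miss action."""
    for rule in flow_table:
        if matches(rule, packet):
            return rule["action"]
    return {"type": "send_to_controller"}  # table miss: ask the controller

# Example rules pushed down by the controller (illustrative values).
flow_table = [
    {"match": {"dst_ip": "10.0.0.5"}, "action": {"type": "output", "port": 2}},
    {"match": {"eth_src": "aa:bb:cc:dd:ee:ff"}, "action": {"type": "drop"}},
]

print(apply_flow_table(flow_table, {"dst_ip": "10.0.0.5"}))     # forwarded to port 2
print(apply_flow_table(flow_table, {"dst_ip": "192.168.1.1"}))  # miss: sent to controller
```

The key architectural point is visible even in this toy version: the switch only evaluates simple rules, while all the intelligence that produces those rules lives in the controller.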

3. SDDC: Targeting the Data Center

The extension of the SDN concept into the heart of data centers came under the name SDDC (Software-Defined Data Center). SDDC has spread rapidly in recent years, along with its commercial exploitation.

In the SDDC model, the entire data center infrastructure is virtualized to both automate certain functions and provide on-demand resources or services (on the principle of "as-a-Service" popularized by the cloud).

Overall control is provided by a central orchestration system, or meta-controller, that sets and enforces rules, as in the OpenFlow model.

Programmable and Automatable

Here again, the control plane is decoupled from application and data processing, following a logical division into virtual "pools" rather than units of physical equipment. The control layer is centralized, customizable, programmable, and largely automated.

In a software-defined data center, the elements of the infrastructure are virtualized and software-defined, becoming services available on demand:

  • The network
  • Compute units (CPUs/servers with their VMs)
  • Data storage units (SDS)
  • Security devices (automatic enforcement of security rules, quarantine of suspicious endpoints, etc.)

Each element is provisioned, operated, and managed via the central console through programmatic interfaces (APIs). It is at this level that overall quality of service can be monitored against service level agreements (SLAs).

The deployment, provisioning, monitoring, and management of resources are driven by automated software that supports both legacy IT and non-virtualized applications, applications on virtual platforms, or those on the cloud.
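The automated, API-driven management described above is commonly implemented as a reconciliation loop: the orchestrator compares the desired state (declared through APIs) with the actual state of the infrastructure and derives the operations needed to close the gap. A minimal sketch, with invented resource names and no real infrastructure API:

```python
# Sketch of the declarative provisioning loop behind an SDDC orchestrator.
# Resource names and specs are hypothetical; real platforms expose their
# own APIs for the create/resize/delete operations planned here.

desired_state = {
    "web-vm": {"cpus": 4, "ram_gb": 16},
    "db-vm":  {"cpus": 8, "ram_gb": 64},
}

actual_state = {
    "web-vm": {"cpus": 2, "ram_gb": 16},  # under-provisioned
    "old-vm": {"cpus": 1, "ram_gb": 2},   # no longer declared, to be removed
}

def reconcile(desired, actual):
    """Compute the operations needed to drive actual state toward desired state."""
    plan = []
    for name, spec in desired.items():
        if name not in actual:
            plan.append(("create", name, spec))
        elif actual[name] != spec:
            plan.append(("resize", name, spec))
    for name, spec in actual.items():
        if name not in desired:
            plan.append(("delete", name, spec))
    return plan

for operation in reconcile(desired_state, actual_state):
    print(operation)
```

Running the loop continuously is what lets the platform absorb both planned changes (a new declaration) and drift (a failed or modified resource) without manual intervention.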

The Advantages of SDDC

SDDC aims to reduce capital expenditures and, above all, operational expenses, while improving the efficiency, agility, control, and flexibility of the whole.

It should also reduce energy consumption by regulating power levels in real time according to need, going as far as shutting down some servers entirely whenever possible. According to Deloitte, the cost reduction could reach 20%.

Finally, security is supposed to be reinforced thanks to more centralized control and ad hoc solutions, especially when data is hosted outside.

SDDC will not convince every company, however: some maintain strictly separate development and production environments, some are bound by their SLAs, and others must keep legacy applications coexisting with virtualized ones.

What Are the Differences Between SDDC and Cloud?

A software-defined data center shares many of the concepts implemented in a cloud. Since it is supposed to encompass the entire IT infrastructure, some consider that SDDC can constitute the global, virtual platform of all clouds (private, public, and hybrid). This assumes that it can be extended to multiple sites with many interdependent data centers.

This echoes the architectural concepts implemented by high-tech giants such as Amazon (AWS), Google, IBM, and Microsoft.

It is worth noting in passing that OpenStack's open source implementations bring a level of abstraction between network resources and application processing on VMs in the cloud. They already include an interface for configuring virtual switches.

4. The Promise of SD-WAN

Designed according to the SDN model, a Software-Defined Wide Area Network (SD-WAN) is a service where the control and intelligence of a long-distance network are separated from the infrastructure and equipment.

SD-WAN solutions, independent of the underlying hardware and transport technologies, replace traditional telecom routers. In most medium and large businesses, the infrastructure is relatively complex and dispersed across remote sites, whether branch offices or subsidiaries: routers, telecom access controllers, WAN optimizers (load balancing, link aggregation, etc.), and firewalls. The cost of maintaining and configuring them is never negligible.

SD-WAN provides a response by providing dynamic, rule-based control tools that make it easy to manage a set of WAN connections from a single point.

Trend: SD-WAN as an Operator Service

SD-WAN is becoming more and more of a service provided by a telecom operator. The latter sets up a chain of services providing various additional features.

One of the key features of SD-WAN is its ability to handle multiple types of connections, from MPLS to broadband Internet to LTE (4G) radio links.

Through its main operator or local purchases, a company obtains Internet connections at the best market price for the desired quality in order to serve its remote sites (branches or regional offices).

SD-WAN is, therefore, a variation of the SDN architecture model resulting in a "packaged" technological offer.

Managing an SD-WAN brings tangible benefits: the administrator, alerted to problems as they occur, can manage all WAN connection points through a single interface.

Until SD-WAN, making changes to the configuration of network equipment at remote sites meant installing and managing each configuration manually. And, often, a technician had to travel on-site to make changes or to set up services such as teleconferencing at remote sites, after first ensuring that connectivity would be available.

With an SD-WAN offering, thanks to a largely graphical user interface, the administrator supervises the entire network and can intervene to adapt and share bandwidth on demand.
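The bandwidth-sharing and failover behavior described here boils down to policy-based path selection: each traffic class is steered onto the best available link. A simplified sketch, where the links, metrics, and policies are invented for illustration:

```python
# Sketch of SD-WAN policy-based path selection: pick the best available
# link per traffic class. Links, metrics, and policies are illustrative.

links = {
    "mpls":     {"up": True, "latency_ms": 20, "cost": "high"},
    "internet": {"up": True, "latency_ms": 45, "cost": "low"},
    "lte":      {"up": True, "latency_ms": 80, "cost": "metered"},
}

# Policy: which links each traffic class may use, in order of preference.
policies = {
    "voice": ["mpls", "internet"],  # latency-sensitive, prefer MPLS
    "bulk":  ["internet", "lte"],   # keep bulk traffic off expensive MPLS
}

def select_path(traffic_class, max_latency_ms=100):
    """Return the first preferred link that is up and within the latency budget."""
    for name in policies[traffic_class]:
        link = links[name]
        if link["up"] and link["latency_ms"] <= max_latency_ms:
            return name
    return None  # no usable path: drop or queue the traffic

print(select_path("voice"))  # "mpls"
links["mpls"]["up"] = False  # simulate an MPLS outage
print(select_path("voice"))  # fails over to "internet"
```

In a real deployment the link metrics would be refreshed continuously by active probes, so the same selection logic yields automatic failover and load distribution without any on-site intervention.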

The Combination: The True Novelty of SD-WAN

Many of the technologies that make up SD-WAN are not new. What is innovative is the combination and arrangement of all the functionality around a single piece of equipment (usually called CPE, for Customer Premises Equipment). At an operator, the whole becomes part of an à la carte commercial offer with monthly billing.

The remote-site routers are replaced by units that are simpler to manage and that include, in addition to telecom access routing, key functions such as failover, domain security via an integrated software firewall, and distribution of flows across several available Internet links.

According to Gartner (quoted by Network World), an SD-WAN installation can be up to two and a half times cheaper than a traditional WAN architecture: for 250 sites to be connected, the bill would be lowered to $452,500, compared to $1.28 million previously.

How do you explain the difference? It is a mix of the significantly lower cost of CPE equipment (compared to conventional routers/switches) and a very sharp reduction in site intervention and maintenance costs.
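A quick check of the Gartner figures quoted above; note that the quoted totals actually imply a ratio closer to 2.8x than the "two and a half times" headline:

```python
# Back-of-the-envelope check of the Gartner cost figures for 250 sites.
traditional_wan = 1_280_000  # $ total, traditional WAN architecture
sd_wan = 452_500             # $ total, SD-WAN installation

ratio = traditional_wan / sd_wan
print(f"SD-WAN is ~{ratio:.1f}x cheaper")  # ~2.8x
print(f"per site: ${traditional_wan / 250:,.0f} vs ${sd_wan / 250:,.0f}")
# per site: $5,120 vs $1,810
```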

The Duo of On-Demand and SD-WAN Face a Certain Inertia

Enthusiasm for SD-WAN solutions is real: two years ago, at VMworld 2015 in the United States, a Riverbed survey of 269 visitors showed that 5% of respondents had adopted SD-WAN while 29% were exploring it. As for SDN, 13% had already deployed it and 77% were considering it.

Two years later, the SD-WAN offering remains at an early stage. It is still a very innovative and "disruptive" solution for some companies, and network equipment manufacturers often remain wedded to developing their own ASICs, which, whether they admit it or not, amounts to a proprietary approach.

The current big change certainly comes from the on-demand approach proposed by some operators determined to position themselves as pioneers.

As cloud offerings become commonplace, network and telecom services increasingly tend to become "as-a-Service" offers.

The Four Pillars of SD-WAN

According to Gartner, SD-WAN is defined by four main criteria:

  • It supports multiple types of connection: MPLS, Internet, LTE, etc.
  • It can dynamically select paths and load-balance traffic across WAN connections
  • It provides a single, simple interface for managing, configuring, and remotely provisioning network resources at remote secondary sites, with an installation as easy as home Wi-Fi
  • It supports VPNs and third-party services such as web gateways, WAN optimization controllers, and firewalls

In a September 2017 publication, Gartner commented: "Software-defined networking (SDN) and network functions virtualization (NFV) will enable communication service providers (CSPs) to create a more agile and flexible cloud-based communication infrastructure (...)."

It continues: "Today, competitive pressure creates an urgency to operationalize and monetize SDN and NFV. Services such as virtualized equipment installed at customer sites (CPE) represent high potential."

And still according to Gartner, while SD-WAN occupies less than 5% of the market today, it should attract up to 25% of companies within two years. SD-WAN vendor sales, growing by 59% annually, are expected to reach $1.3 billion by 2020.

In Conclusion: Between Technical and Commercial Flexibility

SD-X is made possible by advanced standard protocols, including OpenFlow. Some market players, such as the operators AT&T, Colt, and Orange, have begun testing the robustness of APIs and validating interoperability between their software-defined infrastructures.

The interest of SD-WAN is borne out in flexibility and quality of service. Its primary advantages are the on-demand provisioning of resources and services, and usage-based billing.

Finally, as IDC points out, this is not so much a technological project as a matter of transforming organizations and skills: "software-defined" requires bringing infrastructure teams closer to development teams, finding new ways of working together, and relying on "as-a-service" providers.

