
How Do You Move a Data Center?

Today, you must be able to share data around the world with geographically dispersed teams. Need proof? Check out this interview with Mark Weiner, CEO of Reduxio.

It was great speaking with Mark Weiner, CEO of Reduxio, to understand the company’s vision for the storage industry in the age of exponential data and storage growth.

Q: What problem did Reduxio set out to solve?

A: Storage is growing and changing rapidly today. There are many fundamental causes, which include:

  • The widespread adoption of big data and analytics
  • The growing adoption of artificial intelligence and machine learning
  • A massive increase in the use of video and photography with the widespread global adoption of Instagram, Snapchat, Facebook, and YouTube
  • Regulatory and compliance issues regarding preserving data
  • The collection of all forms of data that no one wants to erase
  • Dark data

To keep pace with data’s growth, look at the requirements that follow from this shift in technology. Data is being consolidated into fewer and fewer data centers, but at the same time, the need to access and transport data across sites keeps increasing to support the multitude of use cases in global enterprises. Rapid innovation driven by new companies has made it difficult for incumbents to keep up, saddled as they are with legacy architectures and the burden of large installed bases.

Our experience taught us that bolt-on applications on legacy architectures did not address customer needs for global access and mobility of data. We envisioned a platform that would virtualize data, separating the physical storage where data resides from its logical representation. This allows datasets to be “moved” between sites instantaneously, because access to the data is enabled immediately. We focus on providing an excellent user experience (UX) by deeply understanding, and then validating with users, the fundamental challenges of moving data between sites, data centers, and public cloud infrastructure.

As a new provider, we know we must be flawless. We provide a UX/UI/CX that makes it easier for customers to deploy our platform and simplifies their workflows around its innovative capabilities. We built UX into the product design process from the very outset and hired talent from a broad range of backgrounds, including the gaming industry, to create a UI and UX that is intuitive and requires no user training.

Q: What are the most important issues in developing a storage strategy?

A: All of the new technology and activity in the world is creating more data. When starting to develop a storage strategy, you need to consider both data and users. Work with application owners and design teams to determine whether data pools can be consolidated, decide what needs to be kept on-premises, and determine whether public cloud resources are appropriate for some use cases. Think about how you will share data around the world with geographically dispersed teams for application development and testing. There should also be a focus on sustainability when selecting technologies; data centers currently account for nearly 3% of all U.S. energy use.

Q: How does your technology solve the problem of moving data centers?

A: A big part of moving applications and workloads between data centers is the movement of data itself. Data has to be copied from one data center to the other before applications can be brought up at the new location. Our data virtualization engine completely abstracts physical storage from the logical view of data that is presented to applications. This allows datasets and applications to be moved between sites instantaneously: access to the data is moved first, and the actual data is moved in the background.
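To make the access-first idea concrete, here is a minimal sketch in Python. It illustrates only the general technique (on-demand reads plus a background copy), not Reduxio’s actual engine; names like VirtualVolume and remote_reader are hypothetical.

```python
# Minimal sketch of data virtualization: a mapping layer separates the
# logical view of a volume from where its blocks physically live.
# Illustrative only; VirtualVolume and remote_reader are made-up names,
# not Reduxio's actual API.

class VirtualVolume:
    """Logical volume whose blocks may still reside at the old site."""

    def __init__(self, remote_reader):
        self.local_blocks = {}              # logical block id -> local bytes
        self.remote_reader = remote_reader  # fetches a block from the source site

    def read(self, block_id):
        data = self.local_blocks.get(block_id)
        if data is None:
            # Not migrated yet: fetch on demand so the application never
            # waits for the full copy, then keep the block locally.
            data = self.remote_reader(block_id)
            self.local_blocks[block_id] = data
        return data

    def migrate_in_background(self, all_block_ids):
        """Copy the remaining blocks over time, independent of application I/O."""
        for block_id in all_block_ids:
            if block_id not in self.local_blocks:
                self.local_blocks[block_id] = self.remote_reader(block_id)

# Usage: the volume is usable at the new site immediately.
volume = VirtualVolume(remote_reader=lambda block_id: f"data-{block_id}".encode())
print(volume.read(7))                  # on-demand fetch: b'data-7'
volume.migrate_in_background(range(16))  # rest arrives in the background
```

The key design point is that applications see a complete logical volume from the first second; only latency, not availability, depends on how much data has physically arrived.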

Q: What’s your value proposition with regard to the protection of data?

A: Storage infrastructure in the typical organization has evolved in a haphazard, piecemeal, highly siloed fashion. The need to store, manage, and protect exponentially growing data means decisions are rushed — and infrastructure redundancy is a fact of life. Despite the redundancy, which drives up costs significantly, storage admins lack confidence in their ability to recover data when required.

Another byproduct of this fragmented infrastructure is unmanageable complexity. At any given time, numerous backup copies are unaccounted for by the IT team, and these unaccounted-for backups are highly vulnerable targets for cyberattacks. In addition, as ransomware attacks grow ever more frequent, aggressive, and sophisticated, even backups are highly vulnerable.

We provide a unique approach to protecting customers from malware and ransomware attacks. The need is evident in the year of WannaCry and NotPetya, when the NSA was hacked, as were FedEx, Honda, Maersk, the NHS, Toyota, and countless others. It is not a matter of if your network will be breached, but when.

While most everyone focused on data protection targets the perimeter, we take a unique approach: we enable our customers to recover elegantly and nearly instantaneously from attacks, with near-zero RPO (recovery point objective) and RTO (recovery time objective). As a point of comparison, both Maersk and FedEx wrote off $300M in losses after the NotPetya attack. Even a mid-market organization like the San Francisco branch of the U.S. Public Broadcasting System, hit by a ransomware attack in June, remains offline today, 90 days later. It is almost inconceivable that a modern media business could be disconnected for 90 minutes, much less 90 days. Yet this is the reality of today’s hyper-connected world. While you may hope for the best, you must prepare for the worst.
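For intuition on what RPO means in practice, here is a hypothetical back-of-the-envelope comparison (the figures are illustrative, not vendor benchmarks): the amount of recent data at risk shrinks with the spacing between recovery points.

```python
# Hypothetical comparison of data-loss exposure (RPO) for different
# recovery-point spacings. An attack at a random moment loses, on average,
# about half the interval since the last recovery point.

SECONDS_PER_DAY = 24 * 60 * 60

strategies = {
    "nightly backup": SECONDS_PER_DAY,  # recovery points 24 hours apart
    "hourly snapshot": 60 * 60,         # recovery points 1 hour apart
    "per-second journal": 1,            # a recovery point every second
}

for name, interval in strategies.items():
    expected_loss = interval / 2  # average time since the last recovery point
    print(f"{name:>18}: worst-case RPO {interval:>5} s, "
          f"expected loss {expected_loss:>7.1f} s")
```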

Q: What are the most common issues you see affecting data storage?

A: The primary issue most customers face with data storage is the sheer amount of data being generated; it all needs to be stored, protected, and made accessible when and where it is required. Second, the infrastructure required to store and manage this data is complex and difficult to operate, because there is no single data platform for storing and managing data at scale. Infrastructure today is built from a collection of tools and platforms that create capacity, capability, and management silos, leading to higher infrastructure costs and higher ownership costs due to operational inefficiencies. Third, there is the issue of personnel: with the ever-increasing complexity of infrastructure and the growing requirements driven by application owners, finding the right people to build and operate that infrastructure is a challenge.

Q: What makes Reduxio “different and better”?

A: We are redefining data management and protection with the world’s first unified primary and secondary storage platform. Built on the patented TimeOS storage operating system, it provides storage efficiency and performance, as well as the unique ability to recover data to any second, far exceeding anything available on the market today. Our unified storage platform is designed to deliver near-zero RPO and RTO as a feature of the storage system itself, while significantly simplifying the data protection process and providing built-in data replication for disaster recovery.
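Recovery to any second is easiest to picture as continuous data protection: every write is journaled with a timestamp, so the volume can be rebuilt as of any chosen moment. The sketch below illustrates only that general technique; it is not the patented TimeOS internals, and TimeJournal is a made-up name.

```python
# Toy continuous-data-protection journal: log every write with a timestamp,
# then replay writes up to a chosen second to reconstruct the volume.
# Illustrative only; not Reduxio's TimeOS implementation.

class TimeJournal:
    def __init__(self):
        self.entries = []  # (timestamp, block_id, data), appended in time order

    def write(self, timestamp, block_id, data):
        self.entries.append((timestamp, block_id, data))

    def state_at(self, timestamp):
        """Rebuild the volume image as of `timestamp` (any second)."""
        volume = {}
        for ts, block_id, data in self.entries:
            if ts > timestamp:
                break  # entries are time-ordered; later writes are ignored
            volume[block_id] = data  # newer writes overwrite older ones
        return volume

# Usage: roll back to one second before a ransomware write landed.
journal = TimeJournal()
journal.write(100, "blk-0", b"clean data")
journal.write(205, "blk-0", b"encrypted by ransomware")
assert journal.state_at(204)["blk-0"] == b"clean data"  # near-zero RPO
```

The trade-off in any such design is journal growth versus recovery granularity; real systems compact or age out old entries rather than keeping every write forever.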

Perhaps the best answer to this question comes not from us but from leading industry analysts and our customers. A wide range of customer videos and written case studies can be found here, spanning a broad set of industries, from education to state and local government, and from manufacturing to aerospace, defense, and high tech.

In addition, leading industry analysts have had the following to say.

IDC reported that:

"Reduxio is taking Data Protection to the Next Level with Unified Storage Platform, representing an interesting move toward highly available, self-healing and self-recovering systems, giving IT organizations another capability and product to consider when architecting their storage environment for maximum effectiveness."

Forrester Research named Reduxio a “stable innovator” and reported that:

"Reduxio enables one-second recovery point objectives and Reduxio is on a path to integrate data protection and copy data management capabilities into a platform that provides primary storage for applications and that we make it easy, effective, and efficient for enterprises to integrate the storage landscape for business application."

Chris Mellor, the highly regarded editor of The Register, reported:

"Where companies like Cohesity converge (it would say hyper-converge) secondary storage applications and companies like Primary Data provide a metadata engine to manage and access both primary and secondary storage, Reduxio has a technology moving towards doing both, and promising to simplify on-premises datacenter storage infrastructure. Think of Storage Trek like Star Trek: yes, Reduxio has an external block access storage array, but not as you know it Jim. Old technologies cling on, but they will be defeated."

Q: What do developers need to keep in mind with the evolution of storage?

A: A big shift in how IT systems are procured and deployed is the participation of all stakeholders in the decision-making process. Traditionally, IT infrastructure decisions were primarily the concern of IT staff, with very little input from application owners and users. Today, application owners are an integral part of the decision-making process, with the power to veto infrastructure decisions that could affect their ability to deliver on committed SLAs. Given this shift, developers need to realize that they, as users, have the ability to influence the decision-making process by focusing on the storage capabilities that matter to them, especially data protection and data recoverability. The granularity of data recoverability has a direct impact on developer productivity and therefore on the overall cost and schedule of development projects. Developers should push for infrastructure that can materially streamline existing workflows, or allow workflows to be redefined, to improve the overall productivity of the organization.
