Big Data Management and Integration Evolves
As the number of data sources and the volume of data continue to grow exponentially, big data management and integration must evolve with them.
I recently had the opportunity to interview Itamar Ankorion, CMO at Attunity, to discuss the evolution of big data management and integration. Following is a summary of our conversation:
How is Attunity involved in the integration of data? As a data integration and management company, Attunity helps more than 2,000 organizations move, prepare, and analyze data, and has done so globally for over 20 years. More recently, we've aggressively entered the Hadoop and Apache Kafka arenas, which helped us gain recognition in the Gartner Magic Quadrant for Data Integration Tools for the first time.
We help Fortune 1000 companies worldwide accelerate data delivery and availability for supporting BI and analytics. We bring data from disparate systems together and make it available when and where it’s needed – including in real time.
We accelerate data streaming and replication, simplifying the process and making data available for analytics without requiring developer effort. We also offer more agile, modern approaches to handling change, enabling faster iteration and delivery of data warehouses.
Lastly, we help companies optimize the data management landscape by providing intelligence on how data is used within platforms and across different tiers, with regard to usage patterns, workloads, and hot (frequently used) versus cold data. We provide this solution for all the large data warehouse systems as well as Hadoop.
What are the keys to successfully moving, preparing, and analyzing data? The main innovation is simplification: automating the processes that need to be set up to efficiently handle large volumes of data from many sources. Our solutions scale by orders of magnitude, moving data efficiently with change data capture technology that identifies only the data that is changing, with low latency and low impact on the database environment. We optimize data transfers at the platform level and package them for a good user experience (UX). We automate a significant amount of the data management process using data warehouse best practices, with roughly 80% of the process automated.
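To illustrate the idea behind change data capture, here is a minimal sketch assuming a simple change log of (operation, key, row) tuples. This is a toy model, not Attunity's implementation; real CDC tools read the database's transaction log rather than an application-level list.

```python
# Toy change data capture: apply only the rows that changed,
# instead of reloading the entire source table.

def apply_changes(target, change_log):
    """Apply a list of (operation, key, row) changes to the target
    table, modeled here as a dict keyed by primary key."""
    for op, key, row in change_log:
        if op in ("insert", "update"):
            target[key] = row       # upsert the changed row
        elif op == "delete":
            target.pop(key, None)   # drop the removed row
    return target

# Target starts as a copy of the source; only deltas flow afterward.
target = {1: {"name": "Alice"}, 2: {"name": "Bob"}}
changes = [
    ("update", 1, {"name": "Alicia"}),  # changed row
    ("insert", 3, {"name": "Carol"}),   # new row
    ("delete", 2, None),                # removed row
]
apply_changes(target, changes)
```

The key property is that the work done is proportional to the number of changes, not the size of the table, which is why CDC keeps latency and source-system impact low.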
What have been the most significant changes with the integration of data? The adoption of big data platforms like Hadoop data lakes has changed the mindsets of our clients. Now they want to get all of the data in from hundreds or thousands of systems, capture it, and then figure out how to use it. Companies used to be able to afford weeks for the development effort; now, you must ingest data immediately. We provide a zero-footprint architecture that reduces the impact on the IT landscape and resources, which becomes more valuable as the number of data sources scales.
What “real world” problems are your clients solving with your solutions? One of our key solutions is helping customers ingest data directly into data lakes. We have a financial services company collecting all of its transactions in a data lake to identify fraud while ensuring it meets all regulations and does not conduct business with any restricted countries. In another example, we have a manufacturing client collecting data from a variety of geographically dispersed production systems, correlating that information from automobiles to improve warranties and provide preventive maintenance using predictive analytics. Business intelligence clients are using AWS, Azure, and Google to migrate or load data to the cloud and make it available for analytics, which is ideal given the elastic nature of the cloud.
What is the future of data integration from your perspective — where do the greatest opportunities lie? Leveraging analytics from more platforms, more quickly and efficiently. Also, providing performance management to better balance data placement between the data warehouse and Hadoop, rebalancing data and workloads to free up resources for analytics. We can show clients how long it has been since data was last touched, and move the data they touch less frequently back to Hadoop.
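The rebalancing described above can be sketched as a simple tiering rule: tables untouched for longer than a threshold become candidates to move from the warehouse to Hadoop. The table names, timestamps, and 90-day threshold below are illustrative assumptions, not figures from the interview.

```python
from datetime import datetime, timedelta

def tier_tables(last_access, now, cold_after_days=90):
    """Split tables into 'warehouse' (hot) and 'hadoop' (cold) tiers
    based on how recently each table was accessed."""
    threshold = now - timedelta(days=cold_after_days)
    tiers = {"warehouse": [], "hadoop": []}
    for table, accessed in last_access.items():
        tier = "warehouse" if accessed >= threshold else "hadoop"
        tiers[tier].append(table)
    return tiers

# Hypothetical usage metadata: table name -> last access time.
now = datetime(2017, 6, 1)
last_access = {
    "daily_sales": datetime(2017, 5, 30),   # touched recently: stays hot
    "archive_2014": datetime(2016, 1, 15),  # long untouched: goes cold
}
tiers = tier_tables(last_access, now)
```

In practice a tiering engine would weigh workload patterns and query cost as well, but the last-access heuristic captures the core idea of keeping hot data on the expensive tier and cold data on cheap storage.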
Do you see Spark replacing Hadoop as a big data platform since it analyzes and hands off data more quickly? We don’t see Spark replacing Hadoop as much as we see it complementing it. There’s certainly an appetite to do more with data. We’re seeing a trend of enterprises leveraging in-memory processing technology that enables acceleration of data insights with less cost. And rebalancing cold data to Hadoop is making storage cheaper. Spark can leverage all of this data; however, we’re still learning what workloads work best with what technology. Different architectures embrace different needs.
What skills do developers need to integrate data well? We design our tools to make developers’ lives easier – get more done with less work, moving data from place to place and preparing the data for analytics using data warehouse automation technology. Developers should become familiar with the tools that will help them get more done faster achieving large scale efficiency and optimization.
What have I failed to ask you that you think developers and engineers need to know about how Attunity helps solve business problems today and in the future? We've grown rapidly over the years as the number of sources and the amount of data have grown. Since data will continue to grow exponentially, we're in a great position to help enterprises focus on innovating and optimizing performance and processes while simplifying the user experience (UX).
Opinions expressed by DZone contributors are their own.