Building LinkedIn's Real-Time Data Pipeline

At the core of many of LinkedIn's analytics applications is a real-time data pipeline built on top of Apache Kafka. This system handles over 10 billion message writes per day for thousands of production processes. This talk will cover some of the challenges of building and scaling this data pipeline for log data, system metrics, and other high-volume data streams. It will also cover details of Kafka's design, as well as the particular requirements of Hadoop data loads and real-time processing applications.
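The abstract itself contains no code, but as a rough illustration of what publishing an event into a Kafka-based pipeline like this looks like, here is a minimal sketch using Kafka's Java producer API. The broker address, topic name, key, and JSON payload are all hypothetical placeholders, not details from LinkedIn's actual system.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PageViewEventProducer {
    public static void main(String[] args) {
        // Broker address is a placeholder; a real deployment would list several brokers.
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A log-style event: the key is a (hypothetical) member ID, the value a JSON payload.
            ProducerRecord<String, String> record = new ProducerRecord<>(
                    "page-view-events", "member-42",
                    "{\"page\": \"/profile\", \"timestampMs\": 1356998400000}");

            // send() is asynchronous; the callback reports the partition and offset
            // once the broker has acknowledged the write.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Wrote to %s-%d at offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```

In a pipeline of this shape, downstream consumers such as Hadoop loaders or real-time processing jobs would read from the same topic independently, each tracking its own offset.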

About Jay Kreps
Jay is the technical lead for LinkedIn's data team, which is responsible for the site's core data technologies, including storage systems, data pipelines, Hadoop, search, the social graph, and recommendation systems. He is an original author of several open source projects, including Apache Kafka, a real-time distributed messaging system, and Project Voldemort, a distributed key-value store. He holds a master's degree in computer science from UC Santa Cruz, where he studied machine learning.
