Play-by-Play: Data Hacks and Demos (Demo 1)
Welcome to a series of practical Hortonworks demos with NiFi, Spark, Storm, Zeppelin, Azure, Raspberry Pi, EC2, and AWS IoT.
So, it’s been a month since Hadoop Summit San Jose, where more than 5,000 of the leading tech innovators in big data came together to share their inventions, wisdom, and know-how. One of the sessions, a PowerPoint-free zone, was Data Hacks & Demos, a keynote hosted by Joe Witt and starring an international team from the US, Germany, and the UK: Jeremy Dyer, Kay Lerch, and Simon Ball. The 20-minute session walked through a series of live, interactive IoT and streaming analytics demos built with Apache NiFi, Spark, Storm, Zeppelin, Azure, Raspberry Pi, Amazon EC2, and AWS IoT.
The demo simulated how a brick-and-mortar retail store could identify which customers are walking in the door, speak to them, and then find out their preferences to be able to provide personalized offers in real time. The demo showcased how you can today:
- Correlate an image to an identifier, correlate it with other data points, and initiate a personalized, real-time electronic conversation with a customer in store.
- Algorithmically prioritize specific images of interest, send them to a larger data pool, and perform computer vision machine learning on them.
- Use IoT technology to allow the customer to vote on their preferences.
- Use streaming analytics to create an accurate single buyer identity in real time.
In the next few blog posts, we will go through a play-by-play of how it really all went down, one demo at a time. First up, Demo #1.
During the first demo, Jeremy Dyer modelled the scenario of a customer walking into a store, where a retailer can find out who they are, what kind of shopper they are, and what they are interested in so they can engage and create the most appealing offer.
So What Did Jeremy Tell the Audience?
Physical stores are losing a lot of perishable data, but it doesn’t have to be that way.
Most digital businesses can capture your clickstream – what you are surfing for on the web, who you are, what your interests are. But there is an entire physical world of interactions that cannot be tracked, or necessarily correlated with your “digital identity.” In fact, for physical retailers, it can be very difficult to capture and interpret any of this information. People are interacting with products, with their phones, with each other. But why is this difficult? How does a brick-and-mortar store associate you with a digital identity before you check out? How can a physical store know whether it is stocking the right products for its customer base? Are potential customers price shopping while in the store?
And being able to figure this out isn’t a “science fiction” type moment from Star Trek or The Jetsons. All the technology needed to do this exists today — Wi-Fi, iBeacons, closed-circuit security, cell phones, and Apache NiFi! Together, all this enables a real-time capability to collect data, make it more valuable, and analyze it.
During the three days of Hadoop Summit, the audience participated in a live demo: a stationary Thunderbolt display captured images of Hadoop Summit attendees, while a mobile cell phone and backpack roamed the venue capturing attendees' presence. Each image was matched to the bar code on the bottom of an attendee's badge and to a snapshot of their face, then sent into the cloud.
This is the essence of an edge system: capturing raw data at the point of origin with Apache NiFi, focused on sensing information. That data then flows into another, more back-office system, again built with Apache NiFi and more, where you have the most resources to acquire and combine information. Armed with all this information, you can decide whether to interact with the customer in the store — perhaps send a coupon or some kind of audio feedback.
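The original demo built this edge-to-backend handoff as Apache NiFi flows rather than hand-written code, but the pattern is easy to sketch. Below is a minimal, hypothetical Python illustration of the edge side: packaging a captured image and badge barcode into a JSON event ready to be shipped to the back-office system (the function name, device id, and payload fields are assumptions, not part of the demo).

```python
import base64
import json
import time

def build_edge_event(badge_barcode, image_bytes):
    """Package a captured image and badge barcode into a JSON event
    that an edge flow could ship to the back-office system."""
    return json.dumps({
        "barcode": badge_barcode,
        # Images are binary, so base64-encode them for the JSON payload.
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "captured_at": int(time.time()),
        "source": "edge-camera-01",  # hypothetical device identifier
    })

# In a NiFi flow, a processor such as InvokeHTTP would POST this payload
# to the back-office endpoint; here we just build it.
event = build_edge_event("HS2016-0042", b"\xff\xd8fake-jpeg-bytes")
```

In the real demo NiFi handled the capture, routing, and delivery; this sketch only shows the shape of the data moving from edge to back office.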
How Did Edge Processing of Images Work?
To demonstrate how it all worked, Jeremy uploaded an image of himself and ran it through the edge processing system, Apache NiFi. Apache NiFi takes the data and sends it to a backend server, which is listening for data. When the server receives the image file, it runs a detection algorithm tied to the bar code system, looks up the first and last name, combines that with information it has correlated to the bar code, and an electronic voice speaks: “Je-re-mee from Hor-ton-works, how was your tra-vel from At-lan-ta?"
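The back-office lookup step can be sketched in a few lines. This is a hypothetical illustration, assuming a small in-memory attendee registry keyed by badge barcode; the demo's actual detection algorithm and text-to-speech engine are elided, so the sketch only builds the greeting string that would be spoken.

```python
# Hypothetical attendee registry keyed by badge barcode; in the demo this
# lookup was driven by the badge's bar code and correlated profile data.
ATTENDEES = {
    "HS2016-0042": {
        "first_name": "Jeremy",
        "company": "Hortonworks",
        "city": "Atlanta",
    },
}

def greet(barcode, registry=ATTENDEES):
    """Look up the attendee behind a badge barcode and build the greeting
    an electronic voice would speak; returns None for unknown badges."""
    person = registry.get(barcode)
    if person is None:
        return None
    return "{first_name} from {company}, how was your travel from {city}?".format(**person)
```

A text-to-speech step (not shown) would then voice the returned string, as happened live on stage.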
That was the first demo of Data Hacks & Demos at Hadoop Summit San Jose. The second demo about Apache NiFi and Spark for facial recognition is up next in this blog series. In the meantime, to get started with building something like this yourself, check out these links:
Published at DZone with permission of Anna Yong, DZone MVB. See the original article here.