Twitter, Inc. announced its acquisition of London-based Magic Pony Technology, a company that designed an innovative algorithm that clears up blurry pictures and video, on June 20 for a sum of up to $150 million. That sum includes retention bonuses for 11 of its staff members—not bad for a company that had operated in relative secrecy since its inception, running a website that provided little information while filing patents left and right.
This makes Magic Pony the third company Twitter has purchased that works with algorithms designed to replicate human processing and the neural connections that determine how humans think, act, and learn.
"Magic Pony’s technology—based on research by the team to create algorithms that can understand the features of imagery—will be used to enhance our strength in live and video and opens up a whole lot of exciting creative possibilities for Twitter," Jack Dorsey, CEO of Twitter, wrote.
In the quest for the latest algorithm, machine learning seems almost certain to transform industries that go far beyond social media. That leaves the question of how to regulate that information.
"Machine Learning Is...the Core of Everything We Build."
Machine learning is simple in concept and intricate in execution. Much of the development over the past two decades focused on supervised learning, in which machines train on labeled examples; more recently, tech companies have turned to unsupervised learning, where a machine is given an algorithm and a steady stream of unlabeled data, from which it learns to accomplish some function. Systems like these underpin much of modern artificial intelligence.
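To make the distinction concrete, here is a minimal sketch of unsupervised learning: a tiny one-dimensional k-means clustering routine. The machine receives raw numbers with no labels and discovers the structure (two groups) on its own. The function name, data, and parameters are all illustrative, not drawn from any real system.

```python
def kmeans_1d(points, k=2, iterations=10):
    # Start with the first k points as provisional cluster centers.
    centers = points[:k]
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# An unlabeled stream of measurements drawn from two loose groups.
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 1.1, 5.1]
print(kmeans_1d(data))  # prints [1.025, 5.075], the two discovered centers
```

No one told the algorithm there were groups near 1 and 5; it inferred that from the data alone, which is the essence of the unsupervised approach.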
"Machine learning is increasingly at the core of everything we build at Twitter," Dorsey wrote.
Records at the United Kingdom's Intellectual Property Office show that Magic Pony has filed for 15 patents, most having to do with encoding and decoding video and pictures.
It works like this.
Human brains change in response to stimuli. A new experience, new information, making a choice, even an illness changes a human brain. The brain's signal carriers are neurons; impulses travel along their axons, interacting with various regions of the brain—lobes and cortices—and setting off a series of complex chemical interactions. This is the foundation for the neural network methodology in machine learning.
While the human brain evolves over time, a machine can undergo a similar process with an algorithm. Magic Pony designed its system with an algorithm that absorbs, breaks down, and categorizes data from images and video, then uses that information to "correct" other pictures based on what it has absorbed.
As Devin Coldewey of TechCrunch puts it, "Just as you could supply the probable details of a pixelated face because you are familiar with how faces look, the AI can extrapolate as well, having examined on a pixel by pixel basis what certain features look like at various levels of detail."
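Magic Pony has not published its method, but the extrapolation Coldewey describes can be sketched with a toy version of example-based super-resolution: remember pairs of (downsampled patch, original patch) from training images, then restore a new blurry pixel by recalling the high-resolution patch whose downsampled version it most resembles. The patches and pixel values below are invented for illustration.

```python
def downsample(patch):
    # Crude 2x downsampling: average a 2x2 patch into one pixel.
    return sum(patch) / len(patch)

# "Training": remember which high-res patches produce which low-res values.
training_patches = [
    (10, 10, 10, 10),       # flat dark region
    (200, 200, 200, 200),   # flat bright region
    (10, 200, 10, 200),     # vertical edge (averages to mid-gray 105)
]
dictionary = [(downsample(p), p) for p in training_patches]

def super_resolve(low_res_pixel):
    # Recall the high-res patch whose downsampled version is closest.
    _, best = min(dictionary, key=lambda pair: abs(pair[0] - low_res_pixel))
    return best

print(super_resolve(105))  # prints (10, 200, 10, 200): edge detail recovered
```

Note that a mid-gray pixel is ambiguous on its own; the system fills in edge detail because that is what it has seen produce mid-gray before. Real systems learn this mapping with deep neural networks over millions of patches rather than a lookup table, but the extrapolation principle is the same.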
The result is that the blurry pictures you send on Twitter and the videos you watch on its video apps (Vine and Periscope) get cleaned up: not merely restored, but sharper and clearer than before.
Correcting pictures is not an easy task. Human evolution of this ability took an immeasurable amount of time, whereas Magic Pony succeeded in developing artificial intelligence capable of it in less than three years. One of the early hurdles for machine learning was developing an algorithm that could teach a machine to recognize handwritten digits. But more breakthroughs come each year.
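Even that early digit-recognition hurdle can be illustrated in miniature. The sketch below trains a single perceptron, one of the oldest learning algorithms, to tell a 3x3 bitmap of a "1" from a "0". The bitmaps, learning rate, and epoch count are illustrative, not taken from any real dataset.

```python
# Toy 3x3 bitmaps: 1 = dark pixel, 0 = blank.
ZERO = (1, 1, 1,
        1, 0, 1,
        1, 1, 1)
ONE  = (0, 1, 0,
        0, 1, 0,
        0, 1, 0)

def predict(weights, bias, pixels):
    activation = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if activation > 0 else 0   # 1 means "this looks like a one"

def train(samples, epochs=20, rate=0.1):
    weights, bias = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            error = label - predict(weights, bias, pixels)
            # Perceptron rule: nudge weights toward the correct answer.
            weights = [w + rate * error * p for w, p in zip(weights, pixels)]
            bias += rate * error
    return weights, bias

weights, bias = train([(ZERO, 0), (ONE, 1)])
print(predict(weights, bias, ONE), predict(weights, bias, ZERO))  # prints 1 0
```

Scaling this from two clean bitmaps to thousands of messy handwritten samples is what made the problem a genuine research milestone.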
Amazon knows what books you want to read. Google and Facebook provide you with personalized ads. Police use analytics to recognize criminals. And Fitbit records your every heartbeat, step, and location.
We all want it for the good stuff. The efficiency, the accuracy, the ability to make lives better. So why does Jarno Koponen, co-founder of the app Random, think that algorithms need to learn boundaries?
It's simple: To drive a wedge between ourselves and a steady stream of curated, predictable information that leads to the same place over and over again, we need a personal algorithm that evolves with us as we grow.
A Discovery Engine to Combat Digital Inertia
The concept of inertia reaches back to Aristotle in Western philosophy. Aristotle held that moving objects come to rest unless a force keeps pushing them; Newton's First Law of Motion overturned this view, stating that an object's velocity remains constant unless a force acts upon it.
Koponen has a new term for the digital age: Digital inertia.
Increasingly, algorithms determine the media we encounter on the internet, especially through social media. According to the Pew Research Center, a nonpartisan think tank located in Washington D.C., 30 percent of Americans receive their news from Facebook. Eight percent receive their news from Twitter.
Stacy Hale, in her interview with Koponen, writes that "'digital inertia' keeps us walking down the path that the sites we depend on have laid out for us."
Random is, in essence, a personal algorithm that takes into account a user's subjective view of reality and provides them with what they want, before they know they want it.
“For many people, personal data is abstract,” Koponen told Design4Emergence. “Generally we don’t have a lot of awareness about how our data is being used and how it affects what we see. How could this data be powering experiences that are more in tune with who we are as individuals?”
He describes the app as looking for "stepping stones," subtle links between searched-for topics that may produce unexpected content for the user. It's a means to branch out from the trap of content that never exposes a user to new material.
The Ethics of Algorithms
Algorithms and the artificial intelligence they produce are becoming increasingly sophisticated with each passing year. The market economy continues to evolve, with jobs created and phased out depending on technology.
Pictures and videos are, emotional content aside, representations of reality at a point in time. Cleaning them up—making them look prettier, brighter, and more vivid—is hardly an ethical dilemma. Magic Pony developed an amazing technological advance and was rewarded handsomely for it.
The price tag is telling: Machine learning is a profitable business, and its innovations can prove beneficial and disturbing within the same month. They say data doesn't lie and that algorithms are objective. Is there a need to recycle that old Mark Twain quote—something about lies and statistics?