Can You See the Algorithm?
Can you see an algorithm? Algorithms are behind many of the analog and digital actions we execute daily, but can you see what is going on behind each task? Can you observe it? To use an antiquated analogy, can you take the back off your watch? A current example is the immigration debate, whether you are following it on Twitter, Facebook, or any other source of news and discussion. Can you see the algorithm that powers Twitter's or Facebook's #immigration feed?
Algorithms that drive the web are often purposefully opaque and unobservable, yet they sit right behind the curtain of your browser, UI, and social media content card. They are supposed to be magic; you aren't supposed to be able to see what's behind it. The closest we can get to seeing an algorithm is through its API, which (might) give us access to the algorithm's inputs and outputs, hopefully making it more observable. An API does not guarantee that you can fully understand what it, or the algorithm behind it, does, but it does give us an awareness and working examples of the inputs and outputs, falling just short of letting us actually see anything.
You can develop visualizations, workflow diagrams, images, and other visuals to help us see reflections of what an algorithm does using its API (if one is available), but if we don't have a complete picture of the surface area of an algorithm, or of all its parameters and other inputs, we will only paint a partial picture of it. I'm super fascinated not just with finding different ways of seeing an algorithm, but also with finding some dead simple ways to offer up a shared meaning of what your eyes are seeing, so it makes an immediate impact.
How do I distil the algorithm behind the #immigration hashtag on Twitter and Facebook down into a single image? I don't think you can. There are many different ways to interpret the meaning of the data I can pull from the Twitter and Facebook APIs. Which users are part of the conversation? Which users are bots? What is being said, and what is the sentiment? There are many ways I can extract meaning from this data, but ultimately, it is still up to me, the human, to process it and distil it down into a single meaningful image that will speak to other humans. Even if the image were worth 1,000 words, which thousand words would those be?
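As a sketch of what that interpretation work looks like, here is a deliberately tiny Python example that scores a couple of made-up #immigration posts with a crude keyword tally. The sample posts, word lists, and scoring rule are all illustrative assumptions, not anyone's real data or model; a real analysis would pull live data from the Twitter or Facebook APIs and use a proper sentiment model, and a human would still have to decide what the numbers mean.

```python
# Illustrative sketch only: a toy keyword-based sentiment tally over
# invented sample posts. The word lists are arbitrary assumptions.

POSITIVE = {"welcome", "hope", "opportunity", "together"}
NEGATIVE = {"ban", "fear", "illegal", "invasion"}

def sentiment_score(text):
    """Return (positive keyword hits - negative keyword hits) for one post."""
    words = {w.strip("#.,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

sample_posts = [
    "We should welcome families seeking opportunity #immigration",
    "Fear and bans will not fix #immigration",
]

scores = [sentiment_score(p) for p in sample_posts]
print(scores)  # one crude number per post; interpreting them is still on us
```

Even this trivial "algorithm" embeds choices (which words count, how to tokenize) that are invisible to anyone who only sees its output.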
I blog as the API Evangelist to polish my API stories. I write code to polish how I can use APIs to tell better stories. I take photos in the real world so that I can tell better stories online and in print. I'm trying to leverage all of this to help me tell better stories about how algorithms are pulling the strings in our world, and to help everyone see algorithms. Sadly, I do not think we will ever precisely see an algorithm, but we can develop ways of refracting light through them, helping us see the moving parts, or sometimes, more importantly, see which parts are missing.
One of the things I'm working on with my algorithmic storytelling is developing Machine Learning filters that help me shine a light on the different layers and gears of an algorithm. I do not think we can use the master's tools to dismantle the house, but I don't want to dismantle the house — I just want to install a gorgeous floor-to-ceiling window spanning one side of the house, and maybe a couple of extra windows. I want reliable and complete access to the inputs and outputs of an algorithm so that I can experiment with a variety of ways of seeing what is going on, painting a picture that might help us have a conversation about what an algorithm does, or does not do.
I recently took a World War II Nazi propaganda poster, used it to train a Machine Learning model, and then applied the resulting filter to a picture of the waiting room at Ellis Island. When looking at the picture, you are seeing the room where millions of immigrants waited for access to the United States, but the textures and colors you see are filtered through a Machine Learning interpretation of the World War II Nazi poster. When you look at the image, you may never know the filter is being applied; it looks like just another image from the immigration debate. However, what you are being fed algorithmically is being painted by a very loud, bot-driven, hateful, and false-content-fueled color and texture palette.
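To make the idea of a "filter" concrete, here is a heavily simplified stand-in for that kind of style filter: classic mean/variance color transfer, which pushes the per-channel color statistics of a "style" image onto a "content" image. This is not the neural style-transfer model described above (which also transfers texture), and the two synthetic arrays standing in for the photo and the poster are fabricated for illustration.

```python
# Simplified illustration, not the actual ML filter: shift the content
# image's per-channel color statistics to match a "style" image's palette.
import numpy as np

def color_transfer(content, style):
    """Match content's per-channel mean/std to style's. Inputs: float HxWx3 in [0,1]."""
    c_mean, c_std = content.mean(axis=(0, 1)), content.std(axis=(0, 1))
    s_mean, s_std = style.mean(axis=(0, 1)), style.std(axis=(0, 1))
    out = (content - c_mean) / (c_std + 1e-8) * s_std + s_mean
    return np.clip(out, 0.0, 1.0)

# Synthetic stand-ins for the Ellis Island photo and the poster:
rng = np.random.default_rng(0)
photo = rng.random((4, 4, 3)) * 0.5              # dim, neutral tones
poster = np.zeros((4, 4, 3))
poster[..., 0] = 0.6 + 0.4 * rng.random((4, 4))  # red-heavy "poster" palette

filtered = color_transfer(photo, poster)
print(filtered[..., 0].mean())  # the red palette now dominates the photo
```

The point survives the simplification: the "photo" still shows the same scene, but its palette now comes from somewhere the viewer was never told about.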
Granted, I chose the subject matter that went into the Machine Learning algorithm, but this was intentional. Much like the handful of techies who developed and operate the bots, memes, and alternative news and facts engines, I was biased in how I influenced the algorithm being applied. However, if you don't know the story behind it and don't understand the inputs and outputs of what is happening, you think you are looking at just a photo of Ellis Island. By giving you awareness and more of an understanding of the inputs (a regular photo of Ellis Island, a filter trained on a World War II Nazi poster, plus some Machine Learning voodoo and wizardry... poof!), we helped shine a light on one layer of the algorithm, exposing just a handful of the potentially thousands or millions of gears driving the algorithms coloring the immigration debate.
Published at DZone with permission of Kin Lane, DZone MVB. See the original article here.