I'm spending time on my algorithmic rotoscope work and thinking about how the machine learning style textures I've been making can be put to use. I'm trying to see things from different vantage points and develop a better understanding of how texture styles can be put to use in the regular world.
I am enjoying using image style filters in my writing. It gives me kind of a gamified layer to my photography and drone hobby that allows me to create actual images I can use in my work as the API Evangelist. Having unique filtered images available for use in my writing is valuable to me — enough to justify the couple hundred dollars I spend each month on AWS servers.
I know why I like applying image styles to my photos, but why do others? Most of the image filters we've seen from apps like Prisma are focused on fine art, training image style transfer machine learning models on popular art that people are already familiar with. I guess this allows people to apply the characteristics of art they like to the photographic layer of our increasingly digital lives.
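For readers curious what "training on popular art" actually means under the hood: style transfer approaches like the one popularized by Gatys et al. (and used by apps in the Prisma mold) capture a painting's "style" as correlations between a neural network's feature channels, summarized in a Gram matrix, then nudge a photo until its own statistics match. This is a minimal NumPy sketch of that idea — random arrays stand in for real CNN activations, and the function names are mine, not any particular library's:

```python
import numpy as np

def gram_matrix(features):
    """Summarize the 'style' of a feature map as channel-to-channel
    correlations. `features` has shape (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)      # (channels, channels) correlations

def style_loss(gram_art, gram_photo):
    """Mean squared difference between two style summaries; driving
    this loss down is what 'transfers' the painting's texture."""
    return float(np.mean((gram_art - gram_photo) ** 2))

# Toy example: random "activations" standing in for a painting and a photo.
rng = np.random.default_rng(0)
art = rng.standard_normal((8, 16, 16))
photo = rng.standard_normal((8, 16, 16))
print(style_loss(gram_matrix(art), gram_matrix(photo)))
```

The point of the Gram matrix is that it throws away *where* things are in the painting and keeps only *which textures co-occur* — which is exactly the "distilling down the essence" this post talks about.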
To me, it feels like some sort of art placebo — a way of superficially and algorithmically injecting what our brains tell us is artsy into our fairly empty, digital photo reality. Taking photos in real time isn't satisfying enough anymore. We need to distract ourselves from the world by applying art to our digitally documented physical world — almost the opposite of augmented reality, if there is such a thing. Getting lost in the ability to look at the real world through the algorithmic lens of our online life.
We are stealing the essence of the meaningful, tangible art from our real world, and digitizing it. We take this essence and algorithmically apply it to our everyday lives, trying to add some color, some texture, but not too much. We need the photos to still be meaningful and have some context in our lives, but we need to be able to spray an algorithmic lacquer of meaning on our intangible lives.
The more filters we have, the more lenses we have to look at the exact same moments we live each day. We go to work. We go to school. We see the same scenery, the same people, and the same pictures each day. Now we are able to algorithmically shift, distort, and paint the picture of our lives we want to see.
Now we can add color to our lives. We are being trained to think we can change the palette and are in control of our lives. We can colorize the old World War II-era photos of our day and choose whether we want to color within or outside the lines. Our lives don't have to be just binary 1s and 0s, black or white.
Slowly, picture by picture, algorithmic transfer by algorithmic transfer, the way we see the world changes. We no longer settle for the way things are, the way our mobile phone camera catches them. The digital version is the image we share with our friends, family, and the world. It should always be the most brilliant, the most colorful — the painting that catches their eye and makes them stand captivated in front of it on the wall of our Facebook feed.
We will no longer remember what reality looks like or what art looks like. Our collective social media memory will dictate what the world looks like. The number of likes will determine what is artistic, and what is beautiful or ugly. The algorithm will only show us the images that match the world it wants us to see. Algorithmically, artistically painting the inside walls of our digital bubble.
Eventually, the sensors that stimulate us when we see photos will be well-worn. They will be well-programmed, with known inputs and predictable outputs. The algorithm will be able to deliver exactly what we need and correctly predict what we will need next, queuing up the next fifty possible scenarios with exactly the right colors, textures, and meaning.
How we see art will be forever changed by the algorithm. Our machines will never see art. Our machines will never know art. Our machines will only be able to transfer the characteristics we see and deliver them in newer, more relevant, timely, and meaningful images. Distilling down the essence of art into binary, and programming us to think this synthetic art is meaningful and still applies to our physical world.
Like I said, I think people like applying artistic image filters to their mobile photos because it is the opposite of augmented reality. They are trying to augment their digital (hopes of reality) presence with the essence of what we (the algorithm) think matters to us in the world. This process isn't about training a model to see art, like some folks may tell you. It is about distilling down some of the simplest aspects of what our eyes see as art, and giving this algorithm to our mobile phones and social networks to apply to the digital photographic logging of our physical reality.
It feels like this is about reprogramming people. It is about reprogramming what stimulates you. Automating an algorithmic view of what matters when it comes to art, and applying it to a digital view of what matters in our daily worlds via our social networks. Just one more area of our lives where we are allowing algorithms to reprogram us and bend our reality to be more digital.