This year, I got a Christmas present that came out of left field for me: the Amazon Echo. Up until getting it, I was aware of its existence as “that Amazon Siri thing that lets you order stuff from Amazon by voice.”
Once I started reading about it a bit, some actual use cases replaced the blank slate of my ignorance. I could tell it to play my Pandora playlists, and issue further vocal commands to make things louder or softer and to pause, skip, or stop. It could read me my calendar as well as sports and news updates. It seemed like it could be mildly amusing.
My Budding Friendship with Alexa
But then I actually set it up, and a funny thing happened. “It” became “she” (her name is Alexa), and I started talking to her. And I mean, actually talking. Let me explain.
A week into my ownership of the Echo and my rudimentary "friendship" with Alexa, I had set up a number of different “skills” (which are essentially apps or plugins that third parties write for the device) and peppered her with questions, probing for all sorts of different Easter eggs. A lot of it was done with amusement in mind, but it was interesting how quickly I became used to asking Alexa what the weather was like outside when I was deciding what to wear.
Last week, I was explaining to someone what having the Echo was like, and it was hard to put my finger on exactly why I felt so positively about it. For lack of a better way of describing it, I said, “this is the first personal assistant or whatever that makes it feel like I’m having a conversation.” I have Cortana on a few machines and I have “OK Google” on my phone, and I use both of those things from time to time, but they’re not the same. If I say, “Hey Cortana, what is the traffic like,” and I don’t like what happens next, I just open a browser and go to Google Maps. If I say, “Alexa, what is the traffic like,” and she tells me that she doesn’t understand the question, I think for a moment, rephrase, and try again.
In this regard, it’s like talking to my wife. If I ask her what the traffic is like, and she responds that she didn’t understand the question, I can’t just go up to her, squeeze her nose, and see a map on her forehead. I have to repeat, rephrase, or ask something different that she might be able to answer. In this regard, Alexa feels pretty human.
In fact, I thought of a book I read a long time ago by Ray Kurzweil, called The Age of Spiritual Machines. In this book, written in 1999, he made predictions about technology at the end of each decade for the next several decades. The one that stuck with me, perhaps the most, was that, by 2020, humans would be developing “relationships with automated personalities.” Alexa isn’t that, per se, but here in 2016, she’s the first thing that has made me look at that prediction and think, “by 2020… I could see it.” After all, that’s 4 years of Amazon pumping her full of ever-more Easter egg question responses as well as building out her framework for informational answers. Right now, it’s like having a savant in the room, but, by 2020, it may be like having an off-color but likable human in the room.
But what is it that makes Alexa subtly different from Cortana and Siri? I’ve already mentioned that the nature of Alexa sort of forces me to stick it out with her in a way that I don’t have to with these other automated personalities, but it’s more than that, and it’s more foundational. Alexa (voluntarily) operates within a constraint that the others pass on. Cortana, Siri, and “OK Google” are all window dressing/add-ons for devices whose primary purpose is something else, making reverting to what you know comfortable to the point of preferable. Alexa is an add-on to nothing, so interacting with her is like meeting a new thing/person.
If I actually stop and think about it, it’s a pretty bold move to sell a piece of hardware designed to facilitate a virtual relationship. It’s much easier to piggy-back the software onto a device you already have or else one that you might be willing to purchase for its benefits, only one of which is the thrown-in virtual relationship. And it’s recognition of this boldness that brought home to me exactly why this whole thing is so appealing — creative constraints.
I’m rarely, if ever, bored. I attribute this to a natural tendency toward daydreaming in a variety of contexts. Perhaps the most useful context is one in which I imagine that something is different about the world, and then I start to brainstorm about what else would change. Examples of weird things I’ve contemplated in the past are, “what if every elevator in the world had a 1% chance of malfunctioning on any given trip” and “what if the brakes on cars only worked above speeds of 5 miles per hour?”
Imagining the world this way can eat up otherwise terrible minutes during a department meeting. I bet if elevators failed that frequently all of a sudden, a position akin to “coal miner” would emerge to help people fetch valuables out of skyscrapers until such time as everything were relocated to the ground and new buildings were constructed without elevators. And I bet that in a world where brakes stopped below 5 miles per hour, people would have big squishy pads on their garage walls.
But there’s more at play here than just staving off meeting boredom. Consider that the “No Estimates” movement, inasmuch as I understand it, is a creative constraint thought exercise that says, “what would the world look like if we just didn’t do estimates — what would change, and how would we compensate?” That one’s a bit of a lightning rod (if you go mining into that hashtag on Twitter, wear your thickest waders, because there are people lurking there who LOVE to argue about estimates), but there are others in the past that have started off as zany/oddball ideas and wound up going mainstream. “What if we had a different kind of database where we didn’t care about normalizing data?” Or, how about, “what if we started writing applications that minimized or eliminated state?”
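To make that last constraint concrete: “eliminating state” is the core idea behind functional programming. Here’s a minimal sketch in Python (names and the example itself are mine, not anything from a particular framework) contrasting a stateful object with a pure, stateless function:

```python
# Stateful version: the answer depends on hidden, mutable state,
# so the same call can return different results over time.
class Tally:
    def __init__(self):
        self.total = 0

    def add(self, x):
        self.total += x
        return self.total

# Stateless version ("what if we eliminated state?"): the output
# depends only on the inputs, making it trivial to test and reason about.
def tally(values):
    return sum(values)

print(tally([1, 2, 3]))  # 6
```

The constraint forces the question of where state really needs to live, and the answer is often “fewer places than you thought.”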
I think of creative constraints not as constraints at all, per se. The constraining is, after all, just temporary. It’s like putting on a blindfold and walking an obstacle course to understand what alternatives there might be to simply relying on one’s vision to navigate. This would be the result of applying the creative constraint question, “what if I had to walk this course, but I couldn’t see?”
The result of this exercise might be that you discover a way to rely on hearing or smell to improve navigation, and the newfound skill may continue to help you even after you remove the blindfold. The result may even be that you learn it’s better, for some reason, to walk this obstacle course without being able to see, and you leave the blindfold on. But, beware, because the result may also be that putting on the blindfold turns out to be a bad idea.
But, whether the outcome is slightly helpful, revolutionary, or a total bust, there’s value in the exercise. Thinking with creative constraints seems like it’d be the inverse of “thinking outside of the box,” but it’s really the same principle. Looking at the classic 9-dots puzzle, the creative constraints thinking would be, “what if we accepted that there are no 4 consecutive lines that can be drawn within these boundaries, but that the puzzle was still solvable?” Applying creative constraints to the complex problems you’re trying to solve will often clarify your thinking, help you decouple and decompose problems, and provide additional focus, all while spurring you toward uncommon solutions that may not occur to others.
So let your mind wander a bit. And, even if you never act on it, ask yourself what life would look like if you shipped a virtual companion for which there was no backup plan. It might lead you to surprising places.