The Autonomic Computing Journey: Wanna Play AlphaGo?
Watson won at Jeopardy!, and Google's AlphaGo beat a world champion at Go. AI is slowly building up a history of beating humans at games. Where to next?
Remember when we talked about Watson playing Jeopardy? That discussion also raised the question of where this type of intelligent system could go next. Google's DeepMind team built another such system, which was used to, of all things, play the game of Go. It was dubbed AlphaGo.
What made this a particular challenge is the style of gameplay and strategy used in Go:
“Go is considered much more difficult for computers to win than other games such as chess, because its much larger branching factor makes it prohibitively difficult to use traditional AI methods such as Alpha–beta pruning, Tree traversal and heuristic search” (source: https://en.wikipedia.org/wiki/AlphaGo)
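The scale of that branching-factor gap is easy to sketch. Assuming the commonly cited rough averages (a branching factor of about 35 and a game length of about 80 plies for chess, versus about 250 and 150 for Go; these figures are approximations, not from the article), a quick Python calculation shows why exhaustively searching the Go game tree is hopeless:

```python
# Rough game-tree size estimate: branching_factor ** game_length (in plies).
# The figures below are commonly cited approximations, not exact values.
chess_tree = 35 ** 80    # chess: ~35 legal moves per position, ~80 plies
go_tree = 250 ** 150     # Go: ~250 legal moves per position, ~150 plies

# Python ints are arbitrary-precision, so we can count digits directly.
print(f"chess game tree: ~10^{len(str(chess_tree)) - 1}")  # ~10^123
print(f"go game tree:    ~10^{len(str(go_tree)) - 1}")     # ~10^359
```

Alpha-beta pruning can cut the effective branching factor roughly in half in the best case, which helps at chess scale but barely dents a tree more than 200 orders of magnitude larger, which is why DeepMind turned to learned evaluation instead.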
Using machine learning in the way that Google DeepMind did to play Go was a new venture into tackling a unique problem. Well, not necessarily a problem, but a new challenge.
In a way, AlphaGo did for dynamic strategy games what Watson did for Jeopardy. Why are we using all of these incredibly powerful systems to play games? Well, playing games is not the end result; it is the confirmation of the hypothesis: using software and algorithms, we can recreate a combination of intelligent and autonomic processes.
More Than Just a Game
These innovations have happened along a stretch of the history of technology that I’ve been lucky enough to witness firsthand. As we watch these changes occur, one of the most important things to do is to look at what the real result has been.
Each of these systems that we have discussed has provided a solution to a business challenge. Input, processing, output. The difference along the course of this chunk of history is that the processing has become more spectacular. We’ve seen the dawn of centralized computing and what was actually early virtualization on the mainframe platform.
Distributed systems began to take hold as they answered the questions that the mainframe couldn’t. They were purpose-built. They solved problems and became better with each iteration. They could take the processing to external systems, using input and output to and from the centralized computing systems.
Desktop computing gave rise to what would become a long-time relationship of solving challenges through software with Apple and Microsoft, among many others who came up through that generation. The hybrid model of centralized computing and distributed computing was thrust upon us without us even realizing it had become a movement. We didn’t have anyone to call it bimodal back then; it was just IT.
The move into self-driving automobiles, along with the introduction of Watson and AlphaGo, marked another iteration. The tooling was not the point. The point was that the input, processing, and results were evolving. This all comes back to the premise that made me think about crafting this series:
History doesn’t repeat itself, it iterates on itself.
Published at DZone with permission of Eric Wright, DZone MVB. See the original article here.