This recent article by James Somers in The Atlantic tells the story of Douglas Hofstadter, the Pulitzer Prize-winning author of Gödel, Escher, Bach: An Eternal Golden Braid. Hofstadter is a pioneer in artificial intelligence, and Somers' article uses his life as a frame through which to explore common perceptions of artificial intelligence and machine learning, their origins and purposes, and their role in the Google-centric, data-mining world of software today.
A central idea posed by Hofstadter is that artificial intelligence as it is commonly understood today - Deep Blue, Siri, and Netflix's recommendation system, for example - has almost nothing to do with intelligence itself. Today's artificial intelligence is functional and practical, aimed at accomplishing tasks rather than at exploring the inner workings of human intelligence.
The article characterizes today's artificial intelligence using IBM's Candide - an early statistical machine-translation system - as an example. The suggestion is that programs like Candide, while enormously helpful and innovative, sidestep the core questions of artificial intelligence and machine learning by focusing on tasks; they are tools rather than explorations.
How important is it to bridge the gap between artificial intelligence and human intelligence, between machine learning and human learning? Is there really a difference between Deep Blue's brute-force chess win and a human player's planned moves, aside from the scale of processing? And is the focus on practical applications of machine learning misguided, or disappointing, or simply realistic? Read the full article and leave us a comment with your thoughts.