AI: Artificial? Certainly. Intelligent? Kinda.

AI is getting a lot of attention today. Some of it is appropriate, but the hype is huge. What AI does is amazing. But is what it does actually intelligent?

By Emmett Coin · Oct. 02, 18 · Opinion

Don't get me wrong: I've been developing Artificial Intelligence systems for longer than I am willing to confess. Heck, I get paid for it. Anyone who's been following AI knows that the definition of AI is a continuously morphing construct. AI has grown from primordial origins which, by today's standards, would not be classed as Artificial Intelligence at all. In the earliest AI epoch, research laboratories used curve fitting and regression, or straightforward trees of human-written rules, to produce results that amazed the general public.


One of those amazing rule-based AI systems was so simple that it could be implemented with a handful of relays, switches, and light bulbs. It bested humans at tic-tac-toe. Interesting note: this AI tic-tac-toe system was usually incorporated into a carnival-like kiosk containing a chicken that was purported to play tic-tac-toe against the human. In reality, the simple relay-based tic-tac-toe computer turned on a light that the chicken could see (but which was hidden from the human's view). The chicken was trained to peck at the lit space (a switch), for which it would get a pellet of food. (I won't in good conscience provide a link; you will have to Google this of your own free will and waste 20 minutes.)

What AI does today can be classified primarily as pattern recognition and secondarily as pattern modification/generation. Given a large enough collection of example patterns from well-defined categories, the most common systems today learn how to recognize/categorize novel examples which are similar to the training examples. These sorts of systems work well for discerning between images of cats and dogs, for characterizing the orientation of a hammer moving along a conveyor belt, or for grading apples being sorted for eating versus applesauce. What all these systems have in common is that they examine huge numbers of examples in order to learn. They're built on observational knowledge. But they don't reason in any logical sense, and they extrapolate poorly when data is minimal. And when they make predictions that venture outside the value ranges of the training data, the results can be strange. Certainly, a large part of intelligence is reasoning and inferring from small numbers of examples and minimal or degraded input. Humans do not learn everything they know by culling through vast collections of examples. Humans learn from what they have learned. Intelligence is in part learning about things, but the most powerful component of intelligence is learning to learn. This meta-learning is the foundation of what we call reasoning. Almost all of today's AI is focused only on learning about things.
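
As a toy illustration of the extrapolation problem just described (my own sketch, not anything from the article or a production system): fit a flexible model on a narrow range of data, then ask it about a point well outside that range.

# Toy sketch of extrapolation failure; the data and model choice are
# assumptions purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 5, 50)                      # training range: 0..5
y_train = np.sin(x_train) + rng.normal(0, 0.05, 50)  # noisy observations

# A flexible polynomial fits the training range well...
coeffs = np.polyfit(x_train, y_train, deg=7)

print(np.polyval(coeffs, 2.5))   # inside the range: close to sin(2.5), about 0.60
print(np.polyval(coeffs, 10.0))  # outside the range: wildly wrong, nowhere near sin(10)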

AI research hasn't always been oblivious to this idea of reasoning. The idea of applying formal logic to concepts was very popular during the 1980s, and it usually fell under the vague moniker of "expert systems." These systems were built around a general-purpose logical-expression evaluation engine that, for the most part, processed statements of the form "if A and B then C" and "X is a Y."
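
Here is a minimal sketch of the kind of inference loop such an engine ran. The rule syntax and the facts are my own illustration, not taken from any particular product.

# Forward chaining over "if A and B then C" style rules (illustrative only).
facts = {"has_feathers", "lays_eggs"}

rules = [
    ({"has_feathers", "lays_eggs"}, "is_a_bird"),  # if A and B then C
    ({"is_a_bird", "can_talk"}, "is_a_parrot"),    # "X is a Y" style conclusion
]

# Keep firing rules until no new facts are produced.
changed = True
while changed:
    changed = False
    for antecedents, conclusion in rules:
        if antecedents <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_feathers', 'lays_eggs', 'is_a_bird'}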

These systems were considered an improvement on, and a useful generalization of, the very first hard/narrow AI. Those early research endeavors directly hardcoded the specific application intelligence into a conventional computer language (e.g., Fortran or C). The programs solved very specific problems with well-defined and anticipated inputs; all of the possibilities had to be anticipated by the programmer. In 1950, Shannon proposed writing a chess-playing program using this approach in "Programming a Computer for Playing Chess."

In these more abstract/generalizable expert systems, a generic reasoning engine was written in conventional computer code, and the details of what it was supposed to reason about could be supplied in the form of relatively human-understandable, text-based rules. These sorts of systems could reason (rigidly), but they did not learn. If they gave wrong answers, the developer had to modify the rules and retest.

But just because you could write the rules in a reasonably natural language like English didn't guarantee that all the rules, when applied, would play well together. Systems like these would often become "brittle" once they held too many rules, because inevitably some of the rules would be in conflict. Even non-technical people are aware that the precision of computer-hosted mathematical logic is decidedly different from human-hosted common-sense reasoning. This weakness of rule-driven computer reasoning systems was the plot of many a good science fiction story, in which the clever human protagonist could outwit the evil mechanical genius by posing a simple yet conflicted logical task. Almost always, the end result was that the mechanical brain short-circuited and destroyed itself (usually with a lot of smoke and fire). In the real world of expert systems (corroborated by my own personal experience), a brittle system just produced strange results at unpredictable times. The worst-case scenario was that sometimes these incorrect, strange results seemed plausible. They weren't strange enough! The end result was that the systems were inherently untrustworthy.
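
Continuing the toy rule-engine sketch above (still purely my own illustration), the brittleness shows up as soon as two individually reasonable rules are both allowed to fire:

# Two sensible-looking rules, one contradictory knowledge base.
facts = {"is_a_bird", "is_a_penguin"}

rules = [
    ({"is_a_bird"}, "can_fly"),        # reasonable in general
    ({"is_a_penguin"}, "cannot_fly"),  # also reasonable
]

for antecedents, conclusion in rules:
    if antecedents <= facts:
        facts.add(conclusion)

# The knowledge base now holds both "can_fly" and "cannot_fly",
# and nothing in the engine itself notices the contradiction.
print(facts)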

One of the oldest AI systems based on symbolic logic (rules) is Cyc. It started in 1984, and although it is a human-written, rule-based system, one of its goals was to be less brittle when provided with less-than-perfectly-logical input.

But there is something appealing about being able to impart world knowledge to a system (much like guiding a child or protégé) in order to substantially speed up the rate of knowledge acquisition. There's nothing wrong with giving the system a head start and letting it learn from there; a head start speeds up the process. When you think about it, speed is woven into the language we use to describe intelligence: we characterize intelligent people as quick wits or fast learners. So, one way to make today's AI a faster learner would be to start from a similar knowledge base and extend it. In fact, this is commonly done with vision-based neural nets today. A neural net that is trained to distinguish between apples, oranges, and pears can usually be trained relatively quickly to distinguish tomatoes, potatoes, and onions. Deep neural nets learn generalized features in their hidden layers, and those features become richer and more sophisticated as they approach the final output layer. Continuing to train a deep neural net that has already conceptualized features such as color, roundness, stripes, dots, bumpy, smooth, etc. to additionally recognize a potato will lead to usable results more quickly than if you trained all of the data into a fresh, tabula rasa neural net. It makes sense: the generalized features do not need to be relearned (although perhaps slightly modified). Academic work is being done in this area, and it is a step in the right direction. But it also has its limitations. I leave it as an exercise to the reader to imagine extending our fruit-aware deep neural net to recognize garden tools (rake, shovel, hoe, etc.). How would these radically non-fruit-like features affect the system?
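
A hedged sketch of that "head start" idea, assuming PyTorch/torchvision as the library and vegetables as the new classes (my choices, not the author's code): reuse a pretrained network's generalized features and retrain only the final layer for the new categories.

# Transfer learning sketch: keep the learned features, swap the output layer.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the generic feature layers (edges, colors, roundness, texture, ...).
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head with 3 new classes: tomato, potato, onion.
model.fc = nn.Linear(model.fc.in_features, 3)

# Only the new head is trained, so usable results arrive much sooner than
# training a fresh "tabula rasa" network on the same data.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# (Training loop over an image dataloader omitted.)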

The next step in Artificial Intelligence must be the combination of reasoning with learning by example. It's been well established that we humans cannot explain how we reason. Except when we compose mathematical proofs, our human reasoning is compliant and probabilistic. For the myriad of our day-to-day decisions, we do not use rigorous if-then-else thinking. We have enough of the facts in mind and we think, "I should probably buy 3 pounds of apples" (82% confidence?). And we do this while looking at the pile of fruit, thinking that they are apples (91% confidence?). Enough of the time we are correct, and this way of thinking is good enough (even though when we get home our partner asks, "Why did you buy quince?").


  • Artificial Intelligence will not be intelligent until it learns to reason.
  • Real-world reasoning is impossible to program directly.
  • Reasoning is not learning about things.
  • Reasoning is learning about relationships between things.
  • Reasoning is meta-knowledge.
  • Reasoning is knowledge about how to manipulate knowledge.

The probabilistic nature of real-world human reasoning is not a flaw: it makes our reasoning lightweight, quick, robust, and resilient (at the risk of occasionally being wrong). Once we think about reasoning in terms of probability, we can think of it in terms of training a neural net. We are 91% sure this fruit is an apple, and we are 82% sure that 3 pounds will be enough.
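
A toy sketch of that kind of "good enough" decision (the numbers mirror the example above; the decision threshold and the independence assumption are mine):

# Lightweight probabilistic reasoning: act once the odds are acceptable.
p_is_apple = 0.91          # belief that the fruit in front of us is an apple
p_three_lbs_enough = 0.82  # belief that 3 pounds will cover what we need

# If the two beliefs were independent, the chance the whole plan works out:
p_plan_ok = p_is_apple * p_three_lbs_enough  # ~0.75

if p_plan_ok > 0.7:
    print("Buy 3 pounds.")  # and occasionally come home with quince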

A few of us in the field are thinking about reasoning in just this way. Using a large collection of logically correct but abstractly represented reasoning, can we train a neural net to organically/probabilistically report the most likely abstract conclusion? Could the placeholder slots in this abstract logic be paired with the things that are most similar/appropriate? Could a system that learned the logic of apples apply that logic to quince, since apples and quince are somewhat similar?

Children do this from a very early age. I'm looking forward to much interesting work in this domain in the coming years.

In conclusion, AI overlords will only frighten me when they begin to exhibit the imprecision of human-level reasoning.


Opinions expressed by DZone contributors are their own.
