
Using Neuroscience to Create Learning Machines


Human learning is incremental and involves revisiting old information to synthesize it. Can machine learning designers learn from that?

· Big Data Zone ·

Most AI systems these days have a learning component, and I've touched on the ways in which systems learn several times before. One of the more interesting approaches aims to mimic the way humans learn.

Such approaches have their roots in a theory first published in 1995, which proposed that learning involves two complementary systems. The first acquires knowledge gradually through exposure to new experiences. The second stores each of those experiences so that they can be replayed and effectively integrated into the first. The theory has been a bedrock of subsequent research into neural networks.

One of the authors of that original paper has now teamed up with researchers from Stanford and DeepMind to update the theory based upon the latest thinking on the topic.

“The evidence seems compelling that the brain has these two kinds of learning systems, and the complementary learning systems theory explains how they complement each other to provide a powerful solution to a key learning problem that faces the brain,” they say.

Learning Systems

The first system has much in common with the deep neural networks used in AI today, in that it contains multiple layers of neurons between the input and output of the network. The network's knowledge is therefore encoded in the connections between the nodes within it.

These connections form and strengthen over time, based upon the experience of the system, allowing it to do things such as recognize speech, understand language, and recognize objects.
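To make the "knowledge lives in the connections" point concrete, here is a minimal sketch of a two-layer network. The weight matrices, layer sizes, and input are all illustrative, not taken from any system described in the article.

```python
import numpy as np

# In a layered network, "knowledge" lives entirely in the weight
# matrices connecting the layers, not in any individual node.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 8))   # input -> hidden connections
W2 = rng.normal(size=(8, 2))   # hidden -> output connections

def forward(x):
    hidden = np.tanh(x @ W1)   # intermediate layer of neurons
    return hidden @ W2         # output depends only on the learned weights

x = rng.normal(size=(1, 4))    # a single input example
print(forward(x).shape)        # -> (1, 2)
```

Training such a network means gradually adjusting `W1` and `W2`, which is exactly why a sudden flood of new data can disturb what the weights already encode.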

The challenge for such systems arises when new things have to be learned. A large influx of new information can distort the network enough to alter the knowledge it already stores, an effect known as catastrophic interference.

“That’s where the complementary learning system comes in,” the authors say. “By initially storing information about the new experience in the hippocampus, we make it available for immediate use and we also keep it around so that it can be replayed back to the cortex, interleaving it with ongoing experience and stored information from other relevant experiences.”

The researchers believe the two-system approach overcomes this disruption by supporting both immediate learning and the gradual integration of that learning into the structural knowledge of the system.

“Components of the neural network architecture that succeeded in achieving human-level performance in a variety of computer games like Space Invaders and Breakout were inspired by complementary learning systems theory,” the authors say. “As in the theory, these neural networks exploit a memory buffer akin to the hippocampus that stores recent episodes of gameplay and replays them in interleaved fashion. This greatly amplifies the use of actual gameplay experience and avoids the tendency for a particular local run of experience to dominate learning in the system.”
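The memory buffer the authors describe can be sketched as a simple experience-replay structure: recent episodes are stored quickly, then replayed in random interleaved order so that no single run of experience dominates learning. The class and method names below are illustrative assumptions, not the DeepMind implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    """Hippocampus-like store: fast to write, replayed in interleaved order."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest episodes fall out

    def store(self, experience):
        self.buffer.append(experience)        # immediate, one-shot storage

    def sample(self, batch_size):
        # Uniform sampling interleaves old and new experience, which is
        # what protects the slow-learning network from being overwritten
        # by the most recent episodes alone.
        return random.sample(list(self.buffer), batch_size)

buf = ReplayBuffer()
for step in range(100):
    buf.store((step, "state", "action", "reward"))

batch = buf.sample(8)
print(len(batch))  # -> 8
```

Each stored episode can be replayed many times, which is the amplification of gameplay experience the quote refers to.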

They believe that this extended version of the learning systems theory will be hugely important in future research in both neuroscience and artificial intelligence.



Opinions expressed by DZone contributors are their own.
