Symbolic and Connectionist Approaches: A Journey Between Logic, Cognition, and the Future Challenges of AI

Explore the strengths and limitations of symbolic and connectionist AI as well as the challenges AI faces in replicating human experience and reasoning.

By Frederic Jacquet, DZone Core · Oct. 22, 2024 · Analysis


This article explores two major approaches to artificial intelligence: symbolic AI, based on logical rules, and connectionist AI, inspired by neural networks. Beyond the technical aspects, the aim is to question concepts such as perception and cognition and to reflect on the challenges AI must meet to manage contradictions better and come closer to imitating human thought.

Preamble

French researcher Sébastien Konieczny was recently named EurAI Fellow 2024 for his groundbreaking work on belief fusion and inconsistency management in artificial intelligence. His research, focused on reasoning modeling and knowledge revision, opens up new perspectives for AI systems to reason more reliably in the face of contradictory information and thus better manage the complexity of the real world.

Konieczny's work is part of a wider context of reflection and fundamental questioning about the very nature of artificial intelligence. These questions are at the root of the long-standing debate between symbolic and connectionist approaches to AI, where technical advances and philosophical reflections are intertwined.

Introduction

In the field of artificial intelligence, we can observe two extreme perspectives: on the one hand, boundless enthusiasm for AI's supposedly unlimited capabilities, and on the other, deep concerns about its potential negative impact on society. For a clearer picture, it makes sense to go back to the basics of the debate between symbolic and connectionist approaches to AI.

This debate, which goes back to the origins of AI, pits two fundamental visions against each other. On one hand, the symbolic approach sees intelligence as the manipulation of symbols according to logical rules; on the other, the connectionist approach is inspired by the neuronal functioning of the human brain.

By refocusing the discussion on the relationship between perception, cognition, learning, generalization, and common sense, we can elevate the debate beyond speculation about the alleged consciousness of today's AI systems. 

The Symbolic Approach

The symbolic approach sees the manipulation of symbols as fundamental to the formation of ideas and the resolution of complex problems. According to this view, conscious thought relies on the use of symbols and logical rules to represent knowledge and, from there, to reason.

“Although recently connectionist AI has started addressing problems beyond narrowly defined recognition and classification tasks, this mostly remains a promise: it remains to be seen if connectionist AI can accomplish complex tasks that require commonsense reasoning and causal reasoning, all without including knowledge and symbols.”
- Ashok K. Goel, Georgia Institute of Technology
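To make the symbolic view concrete, here is a minimal sketch of what "manipulating symbols according to logical rules" can mean in practice: a toy forward-chaining engine that derives new facts from explicit if-then rules. The facts and rules are invented for illustration; real symbolic systems (expert systems, logic programming) are far richer.

```python
# Toy symbolic AI: knowledge is explicit symbols, reasoning is rule application.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

Every inference step here is explicit and inspectable, which is precisely the property symbolists value: the system can, in principle, explain why it concluded what it did.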

The Connectionist Approach

This vision, projected by the symbolic approach, is contested by proponents of the connectionist approach, who maintain that intelligence emerges from complex interactions between numerous simple units, much like the neurons in the brain. They argue that current AI models, based on deep learning, demonstrate impressive capabilities without explicit symbol manipulation.
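The contrast with the symbolic sketch above can be shown with an equally minimal connectionist example: a single artificial neuron that learns the AND function by adjusting weights from examples. Nothing here is a symbolic rule; the behavior emerges from numeric updates. This is a deliberately tiny perceptron, not a claim about how modern deep networks are built.

```python
import random

# Toy connectionist AI: behavior emerges from weight adjustments on examples,
# with no explicit symbolic rules anywhere.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the AND function

def train_perceptron(data, epochs=20, lr=0.1):
    random.seed(0)  # deterministic initialization for the example
    w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # perceptron learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print(predict(1, 1), predict(1, 0))  # → 1 0
```

The trained weights encode AND implicitly: unlike the rule engine, the neuron cannot "explain" its conclusion, which is exactly the opacity critics of connectionism point to.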

Konieczny's work on reasoning modeling and knowledge revision provides food for thought in this debate. By focusing on the ability of AI systems to handle uncertain and contradictory information, this research highlights the complexity of autonomous reasoning. It points to what is perhaps the real challenge: how to enable an AI to revise its knowledge in the face of new information while maintaining internal consistency.

Experiencing the World

Now, we know very well that seeing, touching, and hearing the world (in other words, experiencing the world through the body) are essential for humans to build cognitive structures. Yet as AI systems evolve, particularly in the field of generative AI (GenAI) democratized by the release of ChatGPT in 2022, they are approaching a form of "thinking" in their own way.

This theory rests on the idea that, from massive data sets collected through real-world sensors, advanced systems (autonomous systems, for example) would already be able to build models that mimic forms of understanding.

Mimicking Cognitive Abilities

Although AI lacks consciousness, these systems process and react to their environment in a way that suggests a data-driven way of mimicking cognitive abilities. 

This imitation of cognition raises fascinating questions about the nature of intelligence and consciousness. Are we creating truly intelligent entities, or simply extremely sophisticated systems of imitation?

“In many computer science fields, one needs to synthesize a coherent belief from several sources. The problem is that, in general, these sources contradict each other. So the merging of belief sources is a non-trivial issue.”
- Sébastien Konieczny, CNRS

Konieczny's observation reinforces the idea that AI needs to reconcile contradictory information. This problem, far from being purely and solely technical, opens the way to deeper reflections on the nature of reasoning and understanding.
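The problem Konieczny describes can be felt even in a naive sketch: several sources report beliefs about the same propositions and contradict each other, and the system must still synthesize one coherent picture. The majority-vote merge below is an invented toy, not one of Konieczny's formal belief-merging operators; it simply shows why the issue is non-trivial (ties, unreliable sources, and interdependent beliefs are all ignored here).

```python
# Three sources report beliefs about the same propositions; they contradict.
# A naive merge keeps, for each proposition, the value most sources agree on.
sources = [
    {"door_open": True,  "light_on": True},
    {"door_open": False, "light_on": True},
    {"door_open": True,  "light_on": False},
]

def majority_merge(sources):
    """Build one coherent belief set by majority vote per proposition."""
    merged = {}
    propositions = {p for source in sources for p in source}
    for p in propositions:
        votes = [source[p] for source in sources if p in source]
        merged[p] = votes.count(True) > votes.count(False)
    return merged

print(majority_merge(sources))  # both propositions merge to True
```

Even this crude operator must make contestable commitments (how to weigh sources, how to break ties), which is exactly where the formal study of belief fusion begins.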

The theme of managing inconsistencies echoes philosophical debates on experience and common sense in AI. Indeed, the ability to detect and resolve contradictions is a fundamental quality of human reasoning and a key element in our understanding of the world. 

The Concept of Experience

If we were to transpose Kant's concept of experience onto AI technologies, we might suggest that the risk for some — and the opportunity for others — lies in how our understanding of "experience" itself is evolving.

If, for Kant, experience is the result of a synthesis between sensible data — i.e., the raw information perceived by our senses — and the concepts of understanding, on what criteria can we base our assertion that machines can acquire experience? This transposition prompts us to reflect on the very nature of experience, and on the possibility of machines gaining access to a form of understanding comparable to that of humans. It would be quite a leap, though, from this reflection to asserting that machines can truly acquire experience...

The Concept of Common Sense

If we now turn to the concept of “common sense," we can conceive of it as a form of practical wisdom derived from everyday experience. In the context of our thinking on AI, common sense could be seen as an intuitive ability to navigate the real world, to make rapid inferences without resorting to formal reasoning. 

We can attribute to common sense the ability to form a bridge between perception and cognition. This suggests that lived experience is crucial to understanding the world. So how can a machine, devoid of direct sensory experience, develop this kind of intuitive intelligence? This question raises another challenge for AI: to reproduce not only formal intelligence but also that form of wisdom that we humans often take for granted.

We need to understand that, when machines integrate data from our human experience, even though they haven't experienced it for themselves, they are making the closest thing we have to a “decision," not a choice.

Decision vs. Choice

It's necessary here to make a clear distinction between “decision” and “choice." A machine can make decisions by executing algorithms and analyzing data, but can it really make choices? 

Where decision involves a logical process of selection among options based on predefined criteria, choice, on the other hand, involves an extra dimension of free will, common sense, self-awareness, and moral responsibility.

When an AI “decides," it follows a logical path determined by its programming and data. But a real choice, like those made by humans, implies a deep understanding of the consequences, an ability to reason abstractly about values, and potentially even an intuition that goes beyond mere logic. This distinction highlights a fundamental limitation of today's AI: although it can make extremely complex and sophisticated decisions, it remains devoid of the ability to make choices in the fully human sense of the term.
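The "decision" side of this distinction is easy to mechanize, which is precisely the point: a weighted score over predefined criteria selects an option with no deliberation about values at all. The options, criteria, and weights below are invented for illustration.

```python
# A machine "decision": pick the option that maximizes a weighted score
# over predefined criteria. No free will, no values — just arithmetic.
options = {
    "route_a": {"speed": 0.9, "safety": 0.6},
    "route_b": {"speed": 0.5, "safety": 0.9},
}
weights = {"speed": 0.4, "safety": 0.6}

def decide(options, weights):
    """Return the option with the highest weighted criterion score."""
    def score(criteria):
        return sum(weights[name] * value for name, value in criteria.items())
    return max(options, key=lambda name: score(options[name]))

print(decide(options, weights))  # → route_b (0.74 beats route_a's 0.72)
```

Everything a human would call a "choice" — deciding that safety should outweigh speed in the first place — happens outside this function, in the weights someone predefined.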

While this distinction turns out to be far more philosophical than technical, it is nonetheless often discussed in debates on artificial intelligence and consciousness and the capacity to think.

Konieczny's research into knowledge fusion and revision sheds interesting light on this distinction. By working on methods enabling AI to handle conflicting information and estimate the reliability of sources, this work could help develop systems capable of making more nuanced decisions, perhaps coming closer to the notion of “choice” as we conceive it for humans.

See and Act in the World

AI, in processing data, is not endowed with consciousness or experience. As Dr. Li Fei-Fei, Co-Director of Stanford’s Human-Centered AI Institute, puts it:

“To truly understand the world, we must not only see but also act in it." 

She uses this to highlight the fundamental limitation of machines, which, deprived of autonomous action, subjectivity, and the ability to “choose," cannot truly experience the world as humans do.

In her lecture “What We See and What We Value: AI with a Human Perspective,” Dr. Li addresses the issue of visual intelligence as an essential component of animal and human intelligence. 

She argues that it is necessary to enable machines to perceive the world in a similar way while raising fundamental ethical questions about the implications of developing AI systems capable of seeing and interacting with the world around us. 

This reflection is fully in line with the wider debate on perception and cognition in AI, suggesting that while AI can indeed process visual data with remarkable efficiency, it still lacks the human values and subjective experience that characterize our understanding of the world. This perspective brings us back to the central question of the experience and “lived experience” of machines, highlighting once again the gap between data processing, however sophisticated, and a true understanding of the world as we humans conceive it.

Conclusion

While the progress of AI is undeniable and impressive, the debate between symbolic and connectionist approaches reminds us that we are still far from fully understanding the nature of intelligence and consciousness. This debate will continue to influence the development of AI while pushing us to reflect on what really makes us thinking and conscious beings.

One More Thing

It's important to stress that this article is not intended to suggest that machines might one day acquire true consciousness, comparable to that of humans. Rather, by exploring philosophical concepts such as experience and choice, the intention is to open up avenues of reflection on how to improve artificial intelligence. These theoretical reflections offer a framework for understanding how AI could, through advanced data processing methods, better mimic certain aspects of human cognition without claiming to achieve consciousness. 

It’s in this search for better techniques, and not in speculation about artificial consciousness, that the purpose of this exploration lies.


Opinions expressed by DZone contributors are their own.
