
Is Artificial Intelligence Possible?


It is inevitable that AI will progress with technological and scientific discoveries, but will it ever reach its full potential?


"Artificial intelligence has been brain-dead since the 1970s."

This rather ostentatious remark by Marvin Minsky, co-founder of the world-famous MIT Artificial Intelligence Laboratory, referred to the fact that researchers have been primarily concerned with small facets of machine intelligence as opposed to looking at the problem as a whole. This article examines the contemporary issues of artificial intelligence (AI), surveying the current status of the field together with arguments from leading experts on whether AI is an attainable goal at all.

Because of its scope and ambition, artificial intelligence defies simple definition. Initially, AI was defined as “the science of making machines do things that would require intelligence if done by men.” This somewhat circular definition shows how young a discipline AI still is, and early definitions like it have been reshaped by technological and theoretical progress. For the time being, a good general definition that illustrates the future challenges in the AI field was given by the American Association for Artificial Intelligence (AAAI), which clarifies that AI is the:

“...scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines.”

The term “artificial intelligence” was first coined by John McCarthy at a conference at Dartmouth College, New Hampshire, in 1956, but the concept of machine intelligence is in fact much older. In ancient Greek mythology, the smith-god Hephaestus is credited with making Talos, a "bull-headed" bronze man who guarded Crete for King Minos by patrolling the island and scaring off invaders. Similarly, in the 13th century, mechanical talking heads were said to have been created to scare intruders, with Albert the Great and Roger Bacon reputedly among the owners. However, it is only in the last 50 years that AI has really begun to pervade popular culture. Our fascination with “thinking machines” is obvious, but it has been distorted by the science-fiction connotations seen in literature, film, and television.

In reality, the AI field is far from creating the sentient beings seen in the media — yet this does not imply that successful progress has not been made. AI has been a rich branch of research for 50 years, and many famed theorists have contributed to the field. But one computer pioneer who weighed in at the very beginning, and whose assessment and arguments remain timely, is British mathematician Alan Turing.

In 1950, Turing published a paper called “Computing Machinery and Intelligence” in which he proposed an empirical test that identifies intelligent behavior “when there is no discernible difference between the conversation generated by the machine and that of an intelligent person.” The Turing Test measures the performance of an allegedly intelligent machine against that of a human being and remains arguably one of the best evaluation experiments available. The Turing Test (also referred to as the “imitation game”) is carried out by having a knowledgeable human interrogator engage in a natural language conversation with two other participants: one a human and the other an “intelligent” machine, communicating entirely by textual messages. If the judge cannot reliably identify which is which, the machine is said to have passed the Turing Test and is therefore intelligent.
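To make the protocol concrete, here is a minimal sketch of the imitation game as a program. It is purely illustrative: the ask, identify, human, and machine callables are hypothetical stand-ins for the judge and the two hidden participants, not part of any real system.

```python
import random

def imitation_game(ask, identify, human, machine, rounds=10):
    """Toy sketch of Turing's imitation game.

    ask(label, history)   -> judge's next question for participant `label`
    identify(transcripts) -> judge's verdict: "A" or "B" is the machine
    human(q), machine(q)  -> each participant's textual reply to `q`
    """
    # Hide the participants behind anonymous labels in random order.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcripts = {"A": [], "B": []}
    for _ in range(rounds):
        for label, respond in labels.items():
            question = ask(label, transcripts[label])
            transcripts[label].append((question, respond(question)))

    # The machine "passes" this run if the judge picks the wrong participant.
    machine_label = "A" if labels["A"] is machine else "B"
    return identify(transcripts) != machine_label
```

A single run yields only one verdict; in practice, the judge's reliability would have to be estimated over many independent trials.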

Although the test has attracted a number of justifiable criticisms, such as its inability to assess perceptual skills or manual dexterity, it is a great accomplishment if a machine can converse like a human and, by conversation alone, cause a human to judge it humanly intelligent.

Many theorists have disputed the Turing Test as an acceptable means of proving artificial intelligence. An argument posed by the neurosurgeon Sir Geoffrey Jefferson in his 1949 Lister Oration states, “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain.”

Turing replied by saying that “we have no way of knowing that any individual other than ourselves experiences emotions and therefore we should accept the test.”

However, Jefferson did have a valid point to make about developing an artificial consciousness. Intelligent machines already exist that are autonomous; they can learn, communicate, and teach each other; but creating an artificial intuition, a consciousness, “is the holy grail of artificial intelligence.” When modeling AI on the human mind, many illogical paradoxes surface, and you begin to see how the complexity of the brain has been underestimated and why simulating it is not as straightforward as experts believed in the 1950s. The problem with human beings is that they are not algorithmic creatures; they prefer to use heuristic shortcuts and analogies to well-known situations. As one psychological observation puts it:

“It is not that people are smarter than explicit algorithms, but that they are sloppy and yet do well in most cases.”

The phenomenon of consciousness has caught the attention of many philosophers and scientists throughout history, and innumerable papers and books have been published on the subject. However, no other biological faculty has remained so resistant to scientific explanation and “persistently ensnarled in fundamental philosophical and semantic tangles.”

Under ordinary circumstances, we have little difficulty determining when other people lose or regain consciousness, and as long as we avoid describing it, the phenomenon remains intuitively clear. Most computer scientists believe that consciousness is an evolutionary “add-on” that can be algorithmically modeled, yet many recent claims oppose this theory. Sir Roger Penrose, an English mathematical physicist, argues that the rational processes of the human mind are not completely algorithmic and thus transcend computation. This theory echoes Professor Stuart Hameroff's proposal that consciousness emerges as a macroscopic quantum state from a critical level of coherence of quantum-level events in and around the cytoskeletal microtubules within neurons.

Although these theories are backed by little or no empirical evidence, each is still worth considering, because it is vital that we understand the human mind before we can duplicate it.

Another key problem with duplicating the human mind is how to incorporate the various transitional states of consciousness, such as REM sleep, hypnosis, drug influence, and some psychopathological states, within a new paradigm. If these states are omitted from a design because of their complexity or their apparent irrelevance to a computer, then perhaps consciousness cannot be artificially imitated at all, because these altered states have a biophysical significance for the functioning of the mind.

If consciousness is not algorithmic, then how is it created? Obviously, we do not know. Scientists who are interested in subjective awareness study the objective facts of neurology and behavior, and they have shed new light on how our nervous system processes and discriminates among stimuli. But although such sensory mechanisms are necessary for consciousness, they do not by themselves unlock the secrets of the cognitive mind, as we can perceive things and respond to them without being aware of them.

A prime example of this is sleepwalking, which occurs in approximately 25 percent of children and seven percent of adults. Many sleepwalkers carry out dangerous or pointless tasks, yet some carry out complicated, distinctively human tasks, such as driving a car. One may dispute whether sleepwalkers are really unconscious, but if it is true that they have no awareness or recollection of what happened during their episode, then perhaps here lies a key to the cognitive mind. Sleepwalking suggests at least two general behavioral deficiencies associated with the absence of consciousness in humans. The first is a deficiency in social skills: sleepwalkers typically ignore the people they encounter, and the “rare interactions that occur are perfunctory and clumsy, or even violent.” The other major deficit is linguistic: most sleepwalkers respond to verbal stimuli with only grunts or monosyllables, or make no response at all. These two apparent deficiencies may be significant.

Sleepwalkers' use of protolanguage (short, syntax-free utterances with referential meaning) may illustrate that consciousness is a social adaptation, and that other animals do not lack understanding or sensation but lack the language skills needed to reflect on their sensations and become self-aware. Francis Crick, who co-discovered the double-helix structure of DNA, in principle believed this hypothesis. After he and James Watson solved the mechanism of inheritance, Crick moved to neuroscience and spent the rest of his life trying to answer the biggest biological question: what is consciousness? Working closely with Christof Koch, he published his final paper in the Philosophical Transactions of the Royal Society of London, proposing that an obscure part of the brain, the claustrum, acts like the conductor of an orchestra, “binding” vision, olfaction, and somatic sensation together with the amygdala and other neuronal processing into unified thought and emotion. And since all mammals have a claustrum, it is possible that other animals possess high intelligence too.

So how different are the minds of animals from our own? Can their minds be algorithmically simulated? Many scientists are reluctant to discuss animal intelligence, as it is not a directly observable property, so relatively little research has been published on the matter. But by refusing to compare human mental states with those of other animals, we forgo a comparative method that might unravel the secrets of the cognitive mind. Primates and cetaceans have been considered by some to be extremely intelligent creatures, second only to humans. Their exalted status in the animal kingdom has led to their involvement in almost all published experiments on animal intelligence. These experiments, coupled with analysis of primate and cetacean brain structure, have produced many theories about the development of higher intelligence as a trait. Although these theories seem plausible, there is some controversy over the degree to which non-human studies can be used to draw inferences about the structure of human intelligence.

The evidence for animal consciousness is indirect, but so is the evidence for the big bang, neutrinos, and human evolution. In any event, such unusual assertions must be subjected to rigorous scientific procedures before they can be accepted as even vague possibilities; they are intriguing, but more proof is required. Still, merely because we do not understand something does not mean that it is false. Studying other animal minds is a useful comparative method and could even aid the creation of an artificial intelligence (one that omits transitional states irrelevant to an artificial entity) based on a model less complex than our own. The central point is how limited our understanding of the human brain, or any other brain, really is, and how a seemingly concrete theory can change in the light of new findings.

An analogous incident that exemplifies this argument happened in 1848, when an American railroad foreman, Phineas Gage, shed new light on the field of neuroscience after a rock-blasting accident sent an iron rod through the frontal region of his brain. Miraculously, he survived the incident, but even more astonishing to the scientific community at the time were the marked changes in Gage's personality after the rod pierced his brain. Where Gage had been characterized by his mild-mannered nature, he had now become aggressive, rude, and “indulging in the grossest profanity, which was not previously his custom, manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires,” according to the physician John Martyn Harlow, writing in 1868. Yet Gage sustained no impairment of his intelligence or memory.

The serendipity of the Phineas Gage incident demonstrates how architecturally robust the structure of the brain is and, by comparison, how rigid a computer is. Conventional mechanical systems and algorithms would stop functioning, partially or completely, if an iron rod punctured them — with the exception of artificial neural systems and their distributed parallel structure. In the last decade, AI has begun to resurge thanks to the promising approach of artificial neural systems.

Artificial neural systems, or simply neural networks, are modeled on the logical associations made by the human brain. They are mathematical models that accumulate data, or “knowledge,” under parameters set by their designers. Once the network is “trained” to recognize these parameters, it can make an evaluation, reach a conclusion, and take action. Neural networks became widely used in the 1980s with the backpropagation algorithm, first described by Paul John Werbos in 1974. The 1990s marked major achievements in many areas of AI and demonstrations of various applications. Most notably, in 1997, IBM's Deep Blue supercomputer defeated world chess champion Garry Kasparov. After the match, Kasparov was quoted as saying the computer played “like a god.”
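As a rough illustration of what “training” means here, the following sketch fits a tiny two-layer network by backpropagation on a made-up classification task. The architecture, learning rate, and data are arbitrary choices for the example, not a description of any particular system mentioned above.

```python
import numpy as np

# Toy task: label a point positive when its two coordinates share a sign.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = (X[:, :1] * X[:, 1:] > 0).astype(float)

# One hidden layer of 8 sigmoid units; sizes are arbitrary for the demo.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back toward the inputs.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == y).mean())
```

The network is never told the rule; it infers a usable decision boundary from examples alone, which is exactly the property that revived interest in the approach.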

That chess match and all its implications raised profound questions about machine intelligence. Many saw it as evidence that true artificial intelligence had finally been achieved. After all, “a man was beaten by a computer in a game of wits.” But it is one thing to program a computer to solve the kind of complex search problems found in chess; it is quite another for a computer to make logical deductions and decisions on its own.

Using neural networks to emulate brain function offers many positive properties, including parallel functioning, relatively quick execution of complicated tasks, distributed information, graceful degradation under network damage (recall Phineas Gage), and learning abilities, i.e. adaptation to changes in the environment and improvement based on experience. These beneficial properties have inspired many scientists to propose neural networks as a solution for most problems: with a sufficiently large network and adequate training, the network could accomplish arbitrary tasks, without anyone having to specify a detailed mathematical algorithm for the problem.

Currently, the remarkable ability of neural networks is best demonstrated by Honda's Asimo humanoid robot, which doesn't just walk and dance but can even ride a bicycle. Asimo, an acronym for Advanced Step in Innovative Mobility, has 16 flexible joints, requiring a four-processor computer to control its movement and balance. Its exceptional human-like mobility is possible only because the neural networks connected to the robot's motion and positional sensors, which control its “muscle actuators,” can be “taught” to perform a particular activity.

The significance of this sort of motion control lies in the virtual impossibility of a programmer writing out a detailed set of instructions for walking or riding a bicycle that could then be built into a control program. The learning ability of the neural network removes the need to define these instructions precisely. Yet despite this impressive performance, Asimo still cannot think for itself, and its behavior remains firmly anchored at the lower end of the intelligence spectrum, such as reaction and regulation.

Neural networks are slowly finding their way into the commercial world. Recently, Siemens launched a fire detector that uses a number of different sensors and a neural network to determine whether the combination of sensor readings comes from a fire or is just part of the normal room environment, such as particles of dust. Over 50 percent of fire call-outs are false, and of these, well over half are due to fire detectors being triggered by everyday activities rather than actual fires. This is clearly a beneficial use of the paradigm.
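Siemens has not published the internals of its detector, so the following is only a generic sketch of the idea: rather than a fixed threshold on a single sensor, a learned model weighs several channels at once. The sensor channels, readings, and simple logistic unit below are invented for illustration.

```python
import numpy as np

# Hypothetical features per reading: [smoke_density, temperature_C, CO_ppm]
readings = np.array([
    [0.80, 55.0, 40.0],   # fire
    [0.70, 48.0, 35.0],   # fire
    [0.60, 22.0,  1.0],   # dust cloud: smoke-like, but cool and CO-free
    [0.50, 21.0,  0.5],   # dust cloud
])
labels = np.array([1.0, 1.0, 0.0, 0.0])

# Normalize each channel, then fit a logistic unit by gradient descent.
mu, sigma = readings.mean(axis=0), readings.std(axis=0)
Xn = (readings - mu) / sigma
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xn @ w + b)))
    w -= 0.1 * Xn.T @ (p - labels) / len(Xn)
    b -= 0.1 * (p - labels).mean()

# A dusty but cool, CO-free reading should not trigger the alarm.
new = (np.array([0.65, 23.0, 0.8]) - mu) / sigma
p_fire = 1.0 / (1.0 + np.exp(-(new @ w + b)))
print("alarm" if p_fire > 0.5 else "ignore")
```

The point of the combination is that no single channel decides: heavy smoke alone looks like dust, but heavy smoke together with heat and carbon monoxide looks like fire.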

But are there limits to the capabilities of neural networks, or will they be the solution to creating strong AI? Artificial neural networks are biologically inspired, but that does not mean they are necessarily biologically plausible. Many scientists have published their thoughts on the intrinsic limitations of neural networks. One book that received high exposure within the computer science community in 1969 was “Perceptrons” by Minsky and Papert.

“Perceptrons” brought clarity to the limitations of neural networks. Although many scientists were already aware that a simple perceptron has only a limited ability to classify patterns, Minsky and Papert's systematic delineation of what neural networks could and could not do dampened research in the field for years. Within its time, however, “Perceptrons” was exceptionally constructive, and its analysis gave the impetus for later research that overcame some of the computational limitations it described.

An example is the exclusive-or (XOR) problem. The exclusive-or problem contains four patterns of two inputs each; a pattern is a positive member of the set if exactly one of the input bits is on, but not both. Thus, changing the input pattern by one bit changes the classification of the pattern. This is the simplest example of a linearly inseparable problem. A perceptron using linear threshold functions requires a layer of internal units to solve it, and since the connections between the input and internal units could not be trained, a perceptron could not learn this classification. Eventually, this restriction was overcome by incorporating trainable “hidden” layers.
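The failure is easy to reproduce. In the sketch below, a single linear threshold unit trained with the classic perceptron learning rule never settles on weights that classify all four XOR patterns, because no single line separates {01, 10} from {00, 11}.

```python
import numpy as np

# XOR truth table: positive iff exactly one of the two input bits is on.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b = np.zeros(2), 0.0
for _ in range(100):                       # perceptron learning rule
    for x, target in zip(X, y):
        pred = int(x @ w + b > 0)
        w += (target - pred) * x
        b += (target - pred)

# At least one of the four patterns is always misclassified.
print([int(x @ w + b > 0) for x in X], "target:", list(y))
```

A trainable hidden layer, as in the backpropagation sketch earlier, gives the network an internal re-representation of the inputs, letting it carve the plane with two lines instead of one.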

Although advances in neural network research have resolved many of the limitations identified by Minsky and Papert, numerous ones remain: networks of linear threshold units still violate the limited-order constraint when faced with linearly inseparable problems, and the scaling of weights as the size of the problem space increases is still an issue.

It is clear that the dismissive views about neural networks disseminated by Minsky, Papert, and other computer scientists have some evidential support, but many researchers have nonetheless set those claims aside and refused to abandon this biologically inspired approach.

There have been several recent advances in artificial neural networks made by integrating other specialized theories into the multi-layered structure, in an attempt to improve the methodology and move one step closer to strong AI. One promising area is the integration of fuzzy logic, invented by Professor Lotfi Zadeh. Other notable algorithmic ideas include quantum-inspired neural networks (QUINNs) and the “network cavitations” proposed by S.L. Thaler.
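Fuzzy logic replaces the crisp true/false membership of classical sets with graded degrees of membership. As a minimal, self-contained illustration (not tied to any particular neuro-fuzzy architecture above), a triangular membership function assigns a temperature a degree of “warmth” rather than a yes/no label:

```python
def triangular(x, a, b, c):
    """Degree of membership: rises linearly from a to a peak at b,
    then falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# 22 degrees is "warm" to degree 0.7, not simply warm or not-warm.
for t in (15, 22, 30):
    print(t, triangular(t, 15, 25, 35))
```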

The history of artificial intelligence is replete with theories and failed attempts. It is inevitable that the discipline will progress with technological and scientific discoveries, but will it ever reach its full potential?


Topics: ai, neural networks, machine learning, artificial intelligence tools, artificial intelligence

Published at DZone with permission of Venkatesan Murugan. See the original article here.

