Introduction to Cognitive Computing
Cognition comes from the human brain. So what’s the brain of cognitive systems? Learn what cognitive computing is and how it relates to Big Data and Deep Learning.
Cognitive computing, a combination of cognitive science and computer science, refers to self-learning systems that use Machine Learning models to mimic the way the human brain works. Eventually, this technology should make it possible to build automated IT systems that solve problems without human assistance. Cognitive computing models provide a realistic roadmap toward Artificial Intelligence.
Cognitive computing represents the third era of computing. In the first era (the 19th century), Charles Babbage, known as the father of the computer, introduced the concept of a programmable machine. Designed to tabulate polynomial functions, his computer was used for navigational calculations. The second era (from the 1950s) brought digital programmable computers such as ENIAC and ushered in modern computing and programmable systems. That has now evolved into cognitive computing, which builds on Deep Learning algorithms and Big Data analytics to provide insights.
Thus, the brain of a cognitive system is the neural network, the fundamental concept behind Deep Learning. A neural network is a system of hardware and software modeled on the central nervous system of humans, used to estimate functions that depend on a huge number of unknown inputs.
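The idea of "estimating a function from examples" can be made concrete with a toy network. The following is a minimal sketch, not a production implementation: a tiny feedforward network with one hidden layer, trained by gradient descent to approximate the XOR function from its four examples. Real cognitive systems use vastly larger networks built with frameworks such as TensorFlow or PyTorch; the architecture and hyperparameters here are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR examples: the function the network should estimate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights for a 2-8-1 architecture, started at small random values.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    """Forward pass: hidden activations, then the network's estimate."""
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
mse0 = float(np.mean((out0 - y) ** 2))  # error before training

lr = 1.0
for _ in range(10_000):
    h, out = forward(X)
    # Backpropagation: gradients of the squared error w.r.t. each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

_, out = forward(X)
mse = float(np.mean((out - y) ** 2))
predictions = (out > 0.5).astype(int).ravel()
print("error before/after training:", round(mse0, 4), round(mse, 4))
```

The point is the learning loop, not the specific function: the network starts with random weights and improves its estimate purely by comparing its outputs against examples.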
The Features of a Cognitive Computing Solution
At the present state of cognitive computing, basic solutions can serve well as assistants or virtual advisors. Siri, Google Assistant, Cortana, and Alexa are good examples of personal assistants. To bring cognitive computing into commercial and widespread applications, the Cognitive Computing Consortium has recommended the following features for computing systems.
1. Adaptive

They must learn as information changes and as goals and requirements evolve. They must resolve ambiguity and tolerate unpredictability. They must be engineered to feed on dynamic data in real time or near real time.
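Adaptivity, in its simplest form, means updating the model one observation at a time rather than training once and freezing. Here is a minimal sketch with invented data: an online linear estimator y ≈ w*x + b updated by stochastic gradient descent, so it keeps learning when the stream's underlying relationship drifts midway.

```python
def sgd_step(w, b, x, y, lr=0.05):
    """One stochastic-gradient step on the squared error of y_hat = w*x + b."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

def stream(slope, n):
    """Invented data stream following y = slope * x, with x cycling in [0, 0.9]."""
    for i in range(n):
        x = (i % 10) / 10.0
        yield x, slope * x

w, b = 0.0, 0.0

# Phase 1: the world follows y = 2x; the estimator converges toward w = 2.
for x, y in stream(2.0, 1000):
    w, b = sgd_step(w, b, x, y)
w_before_drift = w

# Phase 2: the relationship drifts to y = -x; the same loop re-adapts.
for x, y in stream(-1.0, 1000):
    w, b = sgd_step(w, b, x, y)

print("slope before drift:", round(w_before_drift, 2), "after drift:", round(w, 2))
```

Because every new observation nudges the weights, no retraining step is needed when the data changes; the model simply tracks the moving target.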
2. Interactive

Like the brain, a cognitive solution must interact with all elements in the system: processors, devices, cloud services, and users. Cognitive systems should interact bidirectionally: they should understand human input and provide relevant results using Natural Language Processing and Deep Learning. Some intelligent chatbots, such as Mitsuku, have already achieved this feature.
3. Iterative and Stateful
They must aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They must “remember” previous interactions in a process and return information that is suitable for the specific application at that point in time.
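The two requirements above, asking clarifying questions and remembering earlier turns, can be sketched in a few lines. The class, slot names, and rule set below are all invented for illustration; real dialogue systems use learned language understanding rather than "key=value" parsing.

```python
class StatefulAssistant:
    """Toy iterative, stateful dialogue agent for a hypothetical booking task."""

    REQUIRED = ("city", "date")  # slots the task needs before it can complete

    def __init__(self):
        self.state = {}    # facts gathered across turns
        self.history = []  # previous interactions, "remembered"

    def handle(self, message):
        self.history.append(message)
        # Naive slot filling: pull "key=value" fragments from the message.
        for part in message.split():
            if "=" in part:
                key, value = part.split("=", 1)
                self.state[key] = value
        # The problem statement is incomplete: ask about the first missing slot.
        for slot in self.REQUIRED:
            if slot not in self.state:
                return f"Could you tell me the {slot}?"
        # Everything known: answer using facts remembered from earlier turns.
        return f"Booking for {self.state['city']} on {self.state['date']}."

bot = StatefulAssistant()
print(bot.handle("I need a hotel"))    # ambiguous request: bot asks for the city
print(bot.handle("city=Paris"))        # remembers the city, asks for the date
print(bot.handle("date=2024-05-01"))   # completes using the remembered city
```

The third turn never repeats the city, yet the answer uses it: the state carried between calls is what makes the interaction iterative and stateful rather than a series of independent queries.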
4. Contextual

They must understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, the user's profile, process, task, and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).
Cognitive computing is the natural next step in a progression that began with automation, and it sets a benchmark for computing systems: reaching the level of the human brain. But it has limitations. AI is difficult to apply in situations with a high level of uncertainty, rapid change, or creative demands. The complexity of a problem grows with the number of data sources, and aggregating, integrating, and analyzing such unstructured data is challenging. A complex cognitive solution must therefore combine many coexisting technologies to deliver deep domain insights.
Published at DZone with permission of Rohit Akiwatkar. See the original article here.