Future historians will pinpoint the precise date when the traditional notion of privacy expired, probably some moment in 1999. It will take at least a couple of decades for humanity to comprehend the abilities and reach of modern surveillance, and the unfathomable amount of data being generated and collected.
For example, satellites can now identify objects as small as 50 centimeters across, according to X Prize Foundation founder Peter Diamandis, quoted by IDG News Service’s James Niccolai in a September 24, 2015 article. Diamandis says data-analysis systems such as IBM’s Watson are the only way to extract useful information from the enormous stores of data we now collect.
Diamandis predicts that we are approaching what he calls “perfect data,” the point at which everything that happens is recorded and made available for mining. He offers two examples: self-driving cars that scan and record their environment constantly, and a fleet of low-flying drones able to capture video of someone perpetrating or attempting crimes as they occur.
Does this mean we should all don our tinfoil hats and head for the hills? Far from it. Technology is neutral, and that applies just as well to scary, Big Brother technology. The data-rich world of tomorrow holds much more promise than peril. From climate change to cancer cures, Big Data is the key to solutions for problems that affect everyone. But this level of data analytics will require machine architectures smart enough and fast enough to process all that data, as well as access tools that humans can use to make sense of it.
IBM Watson Morphs Into a Data-Driven AI Platform
Big Data has value only if decision-makers have the tools they need to access and analyze the data. First, this requires a platform on which the tools can run. IBM envisions Watson as the platform for data-driven artificial intelligence applications, as The New York Times’ Steve Lohr reports in a separate September 24, 2015 article. Watson is being enhanced with language understanding, image recognition, and sentiment analysis. These human-like capabilities are well-suited to AI apps, which IBM refers to as “cognitive computing.”
IBM’s Watson includes natural language processing of questions that balances evidence against hypotheses to generate a confidence level for each proposed answer. Source: IBM.
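The evidence-balancing step can be pictured with a toy example. The sketch below is illustrative only: the candidate answers, evidence sources, and weights are invented for this article, not drawn from IBM's actual pipeline. It shows the general idea of combining several evidence scores into a single confidence value for each proposed answer, then ranking the candidates:

```python
# Toy sketch of evidence-weighted answer scoring, in the spirit of the
# pipeline described above. All names and numbers are invented.

def confidence(evidence_scores, weights):
    """Combine per-source evidence scores into a single 0-1 confidence value."""
    total = sum(w * s for w, s in zip(weights, evidence_scores))
    return total / sum(weights)

# Hypothetical candidate answers, each scored by three evidence sources
# (say, passage match, answer-type match, source reliability), all in [0, 1].
candidates = {
    "Toronto": [0.40, 0.20, 0.90],
    "Chicago": [0.85, 0.90, 0.80],
}
weights = [0.5, 0.3, 0.2]  # learned in a real system; fixed constants here

# Rank candidates by confidence and surface the best-supported answer.
ranked = sorted(candidates.items(),
                key=lambda kv: confidence(kv[1], weights),
                reverse=True)
best, scores = ranked[0]
print(best, round(confidence(scores, weights), 3))  # Chicago 0.855
```

The point is not the arithmetic but the shape of the output: every answer comes with a confidence level, so a human can see how strongly the evidence supports it.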
Healthcare is expected to be one of the first beneficiaries of cognitive computing. For example, Johnson & Johnson is teaming with IBM and Apple to develop a “virtual coach” for patients recovering from knee surgery. In a September 25, 2015 article in The Wall Street Journal, Steven Norton quotes Johnson & Johnson CIO Stuart McGuigan stating that the system’s goal is to “predict patient outcomes, suggest treatment plans, and give patients targeted encouragement.”
New Supercomputer Architectures for a Data-Centric World
Data-driven applications won’t go anywhere without the processing power to crunch all that data. As part of the National Strategic Computing Initiative, IBM is working with the Oak Ridge National Laboratory and the Lawrence Livermore National Laboratory to develop a computer architecture designed specifically for analytics and Big Data applications. IBM Senior VP John E. Kelly III describes the company’s strategy in a July 31, 2015 post on A Smarter Planet Blog.
The goal of the Department of Energy’s Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) program is to develop the fastest supercomputers in the world. They will be based on IBM’s data-centric design that puts processing where the data resides to eliminate the overhead of shuttling data to the processor.
Another player in the smart machines field is Digital Reasoning, whose Synthesys machine-learning system is designed for fast analysis of email, chat, voice, social networking, and other digital communications. In an August 14, 2014 article, Fortune’s Clay Dillow describes how financial institutions are applying machine-learning technology that Digital Reasoning initially developed for the U.S. Department of Defense’s counterterrorism efforts.
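Digital Reasoning's actual methods are proprietary, but the general shape of the workflow, scanning communications and flagging risky messages for review, can be sketched with a deliberately simple rule-based scorer. Real systems use learned models rather than keyword lists; the messages and term weights below are invented:

```python
# Toy rule-based message scorer, standing in for the learned models a
# real communications-surveillance system would use. All data is invented.

RISK_TERMS = {"guarantee": 2, "off the books": 3, "delete this": 3}

def risk_score(message):
    """Sum the weights of risk terms found in a message (case-insensitive)."""
    text = message.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in text)

messages = [
    "Lunch at noon?",
    "I can guarantee the trade clears, keep it off the books.",
]

# Flag anything at or above a review threshold for a human analyst.
flagged = [m for m in messages if risk_score(m) >= 3]
print(len(flagged))  # 1
```

A learned model replaces the hand-written term list with patterns extracted from labeled examples, but the output is the same kind of thing: a ranked queue of communications for a human to review.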
Evolving the Role of Humans in the Business Decision-Making Process
The logical progression of machine intelligence leads to AI making better business decisions than human analysts. In a September 21, 2015 post on The New York Times Opinionator blog, Robert A. Burton explains that once the data in any area has been quantified (such as in the games of chess and poker, or the fields of medicine and law), we merely query the smart machine for the optimal outcome. Humans can’t compete.
However, humans will continue to play a key decision-making role that machines to date can’t duplicate: applying emotion, feelings, and intention to the conclusions derived from smart machines. For example, the image of a U.S. flag flying over a baseball park isn’t the same as the image of the same flag being raised by U.S. Marines on Iwo Jima.
For humans to apply their uniquely human sensibility to business decisions, they must have access to the data, which circles back to the need for data-analysis tools that run atop these smart-machine platforms. Computer Weekly’s Carolyn Donnelly writes in a September 18, 2015 article that the more people who are able to analyze the data, the more informed their decisions become, and the faster the technology will be integrated into day-to-day business operations.
As Donnelly writes, the businesses that are able to apply data science principles and self-service platforms will do a better job of collecting, managing, and analyzing the data. The ultimate result is more productive use of the organization’s valuable information.