In 2011, a two-game Jeopardy exhibition match stunned the world when the best Jeopardy players in the show's history squared off against IBM's Watson cognitive computing system and were soundly beaten. For many, this was the moment artificial intelligence became a very real thing in their minds; one contestant even scrawled "I, for one, welcome our new computer overlords" beneath his answer in his final losing round. He likely spoke for many in the audience.
Watson dominated a game where nuanced wordplay was intrinsic to the challenge of the contest, where contestants needed to provide the question that fit an answer shrouded in double meaning. For humans, Jeopardy is a unique cognitive exercise, as anyone playing along at home can attest, but for machines that can be thwarted by a reCAPTCHA challenge on a web page, Watson's success was a monumental achievement in computing that has implications for the future of practical, everyday technology.
Cognitive Computing vs. Artificial Intelligence
Calling cognitive computing a form of artificial intelligence isn’t wrong, but it misses a fundamental distinction that makes it so remarkable.
When we talk about artificial intelligence, we are often talking about something that is, necessarily, an incredibly sophisticated functional algorithm. That is, an AI is a very, very complex decision tree (one we may not even be able to follow ourselves) that, when given a specific input, will produce a predictable output.
This is how autonomous vehicles work, by taking in a starting point and a destination as input and navigating between the two according to a mind-bogglingly long sequence of if-else statements.
If the light is red, stop; otherwise, proceed. No human input needed.
This is a radical simplification, but this is essentially what most people are talking about when they talk about AI. An AI is something that finds the best possible way to do something within a given set of parameters and makes a decision or takes action as a result. This applies to autonomous vehicles as much as it does to high-speed trading platforms on Wall Street.
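The "complex decision tree" picture described above can be sketched in miniature. The rules below are a hypothetical toy, not any real vehicle's control logic, but they show the key property the article is pointing at: fixed rules mapping a given input to a predictable output.

```python
def next_action(light_color, obstacle_ahead):
    """Toy rule-based controller: a tiny decision tree.

    Given the same inputs, it always produces the same output —
    the defining trait of the if-else style of AI described above.
    """
    if obstacle_ahead:
        return "stop"          # safety rule overrides everything
    if light_color == "red":
        return "stop"
    if light_color == "yellow":
        return "slow"
    return "proceed"

print(next_action("red", False))    # a red light means stop
print(next_action("green", False))  # clear and green means proceed
```

A real system would have vastly more branches (and, in practice, learned models rather than hand-written rules), but the deterministic input-to-output shape is the same.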
What is Cognitive Computing?
If it isn’t just another form of AI, then what is cognitive computing?
Cognitive systems use the same machine learning, natural language processing, and data mining techniques that the AI described above uses, but they take things a step further and seek to emulate the way the human brain reasons and makes decisions, often with conflicting or outright contradictory information.
They crunch all of this data, consider all of the parameters and variables at play, and sort through them the way humans might choose which restaurant to eat at or which car to buy. It’s a much more subjective process than that of the typical AI system.
When it finishes its analysis, a cognitive computing system like IBM’s Watson will offer what it judges to be the best choice for a given problem from an array of possible solutions. This is not necessarily the right choice, however. It is left to the human using the system to decide what the right course of action is in a given situation.
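That "ranked options, human decides" behavior can be sketched as follows. The candidates and the scoring function here are made-up placeholders (this is not Watson's actual pipeline); the point is that the system returns all options with confidence values rather than silently acting on one.

```python
def rank_options(candidates, score):
    """Score each candidate solution and return all of them, best first.

    The system only recommends; the final choice is left to a human.
    """
    scored = [(c, score(c)) for c in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored

# Hypothetical example: restaurants with made-up suitability scores.
restaurants = {"Thai Place": 0.72, "Diner": 0.55, "Sushi Bar": 0.91}
ranked = rank_options(list(restaurants), restaurants.get)
for name, confidence in ranked:
    print(f"{name}: {confidence:.2f}")
```

Contrast this with the rule-based controller earlier, which commits to a single action: here the output is an ordered list of alternatives with confidences attached, which is what lets a person weigh the recommendation against context the machine doesn't have.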
Assisting Human Decision Making, Not Replacing It
The essential distinction between cognitive platforms and artificial intelligence systems is this: you want an AI to do something for you, while a cognitive platform is something you turn to for collaboration or for advice.
The applications for these platforms range from medicine to customer service.
Doctors can use these systems to assist in diagnosing patients, leveraging their ability to analyze a patient’s medical history against a vast library of medical literature and to identify possible diseases a doctor might never have considered, or even know about.
Businesses can use them to weigh all kinds of risk factors before receiving a recommendation about an investment or a location for a new satellite office.
The possibilities for this technology in the future are enormous, and no industry will be left untouched by it in the next decade.
Where Should We Expect to See the Biggest Impact?
Financial services is the sector where we’re most likely to see these systems make significant advances first. With the exponential growth of data across every part of the economy, there will be more and more ways to make money that may not be apparent to humans, who have no way to process and analyze the petabytes of data stored in the cloud.
This sort of analysis is precisely what these platforms are designed to do, and if anyone has the money to invest in this technology, it is financial services firms.
Health care and law also stand to gain significantly from this technology. With the millions of pages of case law that attorneys have to sift through when preparing lawsuits—or when defending a client from one—a whole army of paralegals could not provide the kind of analysis and assistance that these platforms could.
Customer service will see a massive benefit from these systems as well, both in retail and in corporate communications. Retail outlets already use AI to act as shopping assistants for customers, and this trend will only accelerate as the technology becomes more widespread.
Chatbots are already a rudimentary form of this kind of computing, fielding basic customer service inquiries. As they become more sophisticated, they could eventually replace entire call centers, routing only the most unconventional customer service issues to a human agent.
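A minimal sketch of that routing logic might look like the following. The intent classifier and the confidence threshold here are assumptions for illustration (no real chatbot framework is used); the idea is simply that low-confidence queries escalate to a person.

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; a real deployment would tune this

def classify(message):
    """Stand-in for a real intent model: returns (intent, confidence).

    A hard-coded lookup plays the role of the trained classifier.
    """
    known = {
        "reset my password": ("password_reset", 0.95),
        "where is my order": ("order_status", 0.88),
    }
    return known.get(message.lower(), ("unknown", 0.20))

def route(message):
    """Let the bot handle confident matches; escalate everything else."""
    intent, confidence = classify(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"bot handles: {intent}"
    return "escalate to human agent"

print(route("Reset my password"))
print(route("My parrot ate the warranty"))
```

The design choice worth noting is the threshold itself: raising it sends more conversations to humans but reduces bad automated answers, which is exactly the cost-versus-quality trade-off call centers would have to make.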
Considering the cost savings on this alone, we should expect to deal with far fewer humans on the phone than we do today.
Implications for the Future
When people first watched IBM’s Watson master the nuances of human language and beat the two greatest Jeopardy players back in 2011, there was considerable anxiety. A machine could suddenly beat us at something we thought only humans could do and beat us soundly.
When people express anxieties about artificial intelligence taking over entire industries and displacing millions of workers, this is the technology they are talking about. Whether those anxieties are exaggerated remains to be seen, but we’ll know soon enough: the cognitive computing revolution is already here, and there’ll be no going back.