IBM's Watson computer competes against Jeopardy!'s two most successful contestants – Ken Jennings and Brad Rutter – in a practice match in January 2011.

Supercomputers can do a lot more than play chess and win game shows. But you wouldn't necessarily know that from IBM PR stunts over the years – such as when the tech company pitted its Deep Blue computer against world chess champion Garry Kasparov in 1997, or the matchup between its Watson computer and two Jeopardy champions last year.

Next week, Eric Brown, of IBM's T.J. Watson Research Center, will speak about Watson's practical business applications at the World Congress on Information Technology conference in Montreal, of which The Globe and Mail is a co-sponsor. We spoke with Dr. Brown this week.

Watson is known as a QA, or question-answering, system. How is that applicable to more than a game show?

When you look at the challenges that our clients have in managing unstructured information – in particular, text – technology like Watson is aimed at addressing that issue. It's also representative of the next era of computing we at IBM envision, which we're calling cognitive computing. First, we're trying to tackle problems that have historically been the purview of human thinking and reasoning; and second, to build these kinds of systems you need to leverage automatic learning and machine learning in a variety of ways.

How is IBM working to adapt Watson technology for the U.S. health insurance provider WellPoint?

We're looking at ways to apply the technology to help in a decision-making process that's currently performed by humans in evaluating requests for procedures, and to support that with automated technology that can make the process more efficient and more cost-effective, so that decisions can be made more quickly.

Watson isn't the only system that can handle unstructured information. What's the big step forward, then?

There's been a lot of work and research done to understand natural human language, and one of the challenges has been to exploit that in a meaningful way. So what we've done is create an overall architecture that allows us to combine a wide variety of techniques: first, from an engineering standpoint, to easily integrate them into an overall system; and second, to automatically learn how to weigh and combine the results of all these different techniques. You can think of each technique as an expert about some aspect of natural-language understanding. When Watson is generating possible answers to a given problem, all of these different experts weigh in with their opinions, and we use a machine-learning technique to learn how to weigh and combine the opinions of all these different expert analytics.
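A rough picture of that weighing step, sketched in Python. This is illustrative only, not IBM's DeepQA code: the three "expert" analytics and their weights are made up for the example. Each expert scores every candidate answer, and a logistic model with learned weights folds those scores into a single confidence.

```python
import math

# Hypothetical weights for three "expert" analytics (e.g. an answer-type
# checker, a passage scorer, a popularity prior). In a real system these
# would be learned from labelled question/answer pairs.
WEIGHTS = {"type_match": 2.1, "passage_support": 1.4, "prior": 0.3}
BIAS = -1.8

def combine(scores):
    """Fold per-expert scores into a single confidence in [0, 1]."""
    z = BIAS + sum(WEIGHTS[name] * s for name, s in scores.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing

# Each candidate answer carries one score per expert.
candidates = {
    "Toronto": {"type_match": 0.2, "passage_support": 0.7, "prior": 0.9},
    "Chicago": {"type_match": 0.9, "passage_support": 0.8, "prior": 0.6},
}
best = max(candidates, key=lambda a: combine(candidates[a]))
print(best, round(combine(candidates[best]), 3))  # Chicago 0.801
```

The real system uses far more candidates and features, but the principle Brown describes is the same: no single analytic decides; the learned combination of all of them does.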

Since we're talking about understanding language, could you explain in simple language what the biggest challenge is?

As humans we very easily understand natural language; our brains have been trained from birth to do that. And what we don't realize is how ambiguous natural language can be, how implicit it can be, how tacit it can be, and how we're actually able to communicate as humans effectively in spite of the fact that we leave a lot unsaid. So, when you try to create computer systems that can understand natural language, given all of the nuance and the ambiguity, it becomes a very significant problem.

There are a number of things that we do, however – and it actually takes you back to your eighth-grade grammar class, where you learned how to diagram a sentence and identify parts of speech. There's a natural-language-processing technology called a parser that can do that with text, and it starts to understand the syntax of the natural language. Then we layer on top of that additional analytics that can do things such as identify entities – proper names, places, organizations – and also relationships between those entities.
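To make those layers concrete, here is what they look like using the open-source spaCy library – a stand-in for illustration, not IBM's own tooling. The parser produces part-of-speech tags and a dependency structure, the machine version of diagramming a sentence, and an entity recognizer picks out names, places and organizations.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("IBM's Watson beat Ken Jennings on Jeopardy in 2011.")

# Syntax layer: part-of-speech tags and dependency relations.
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Entity layer: proper names, organizations, dates.
for ent in doc.ents:
    print(ent.text, ent.label_)
```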

As you move into the medical domain, it becomes important to understand what diseases, symptoms and treatments are, and, when they're expressed in text, whether we're talking about a symptom being present or absent – so dealing with negation ends up being a bit of a challenge.
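A toy illustration of the negation problem – hypothetical, not Watson's actual approach: check whether a negation cue appears shortly before a symptom mention. Real clinical NLP systems, such as the classic NegEx algorithm, handle scope and phrasing far more carefully.

```python
NEGATION_CUES = {"no", "not", "denies", "without", "negative"}

def is_negated(text, symptom, window=3):
    """Return True if a negation cue appears within `window` words
    before the symptom mention. A toy stand-in for algorithms like
    NegEx; real systems also handle scope and double negation."""
    words = text.lower().split()
    if symptom not in words:
        return False
    i = words.index(symptom)
    return any(w in NEGATION_CUES for w in words[max(0, i - window):i])

print(is_negated("patient denies fever and chills", "fever"))   # True
print(is_negated("patient reports fever and chills", "fever"))  # False
```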

You're currently working with the U.S. health insurance provider WellPoint. What other domains might we be looking at in the future?

When you look at clinical-decision support, as it's used in the medical field, that whole approach to diagnosing problems and coming up with solutions actually maps onto a wide variety of business applications – from, say, contact centres, where people at a help desk are trying to understand a problem presented by a customer, diagnose that problem and then recommend a treatment; to the financial-services industry, where again they're trying to make decisions, looking at information from a variety of sources, trying to diagnose the problem and then come up with an appropriate solution or treatment.

Do you tend to think of Watson as "he," as "she" or "it"?

I tend to think of it as "it." It can be fun to anthropomorphize Watson and treat it like a personality, but I think that can be misleading about what we're trying to do here because, again, I very much think of it as a decision-support system. It's technology to help humans do their jobs better, faster, more effectively, more efficiently.

So in other words, people should not fear that Watson is going to ship jobs overseas and take all the work for himself – er, itself?

Correct. I like to think of it as the computer on Star Trek, where it's a tool that the humans are able to interact with much more effectively and use to make better decisions.
