

Live @ Information On Demand 2012: Watson’s Next Job


As I mentioned in my last post, yesterday was day 3 of Information On Demand 2012 here in Las Vegas.

There was LOTS going on out here in the West.

We started the day by interviewing keynote speaker Nate Silver (see previous post) just prior to his going on stage for the morning general session. Really fascinating interview, and going into it I learned that his book had reached #8 on The New York Times best-seller list.

[Caption: In the IOD 2012 day 3 general session, IBM Fellow Rob High explains how IBM’s Watson technology may soon help drive down call center costs by 50 percent, using the intelligence engine of Watson to help customer service reps respond faster to customer queries.]

So congrats, Nate, and thanks again for a scintillating interview.

During the morning session, we also heard from IBM’s own Craig Rinehart about the opportunity for achieving better efficiencies in health care using enterprise content management solutions from IBM.

I nearly choked when Craig explained that thirty cents out of every dollar spent on healthcare in the U.S. is wasted, and that despite spending more than any other country, the U.S. ranks just 37th in terms of quality of care.

Craig explained the IBM Patient Care and Insights tool was intended to bring advanced analytics out of the lab and into the hospital to help start driving down some of those costs, and more importantly, to help save lives.

We also heard from Rob High, IBM Fellow and CTO of IBM’s Watson Solutions organization, about some of the recent advancements made on the Watson front.

High explained the distinction between programmatic and cognitive computing, the latter being the direction computing is now taking, and an approach that provides for much more “discoverability” even as it’s more probabilistic in nature.

High walked through a fascinating call center demonstration, whereby Watson helped a call center agent respond more quickly to a customer query by filtering through thousands of possible answers in a few seconds, then homing in on the ones most likely to answer the customer’s question.
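To make High’s programmatic-versus-cognitive distinction a little more concrete: a programmatic system maps an input to a single hard-coded answer, while a cognitive system scores many candidate answers against the evidence and surfaces the most probable ones, confidence attached. Here’s a deliberately tiny Python sketch of that candidate-ranking idea; the scoring function and all the names are my own illustration, not Watson’s actual pipeline:

```python
import math
from collections import Counter

def evidence_score(query, candidate):
    """Toy evidence score: normalized term overlap between the query and a
    candidate answer. A real cognitive system would combine many weighted
    evidence features, not just one."""
    q = Counter(query.lower().split())
    c = Counter(candidate.lower().split())
    overlap = sum((q & c).values())
    return overlap / math.sqrt(max(len(query.split()) * len(candidate.split()), 1))

def top_answers(query, candidates, k=3):
    """Score every candidate answer, then keep the k most likely,
    each paired with its confidence score."""
    ranked = sorted(((evidence_score(query, c), c) for c in candidates), reverse=True)
    return ranked[:k]

candidates = [
    "Reset your modem by holding the power button for ten seconds.",
    "You can update your billing address from the account settings page.",
    "Our call centers are open from 8am to 8pm Eastern.",
]
print(top_answers("How do I reset my modem?", candidates, k=2))
```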

Next, we heard from Jeff Jonas, IBM’s entity analytics “Ironman” (Jeff also just completed his 27th Ironman triathlon last weekend), who explained his latest technology, context accumulation.

Jeff observed that context accumulation was the “incremental process of integrating new observations with previous ones.”

Or, in other words, developing a better understanding of something by taking into account more of the things around it.

Too often, Jeff suggested, analytics has been done in isolation, but that “the future of Big Data is the diverse integration of data” where “data finds data.”

His new method allows for self-correction and a high tolerance for disagreement, confusion, and uncertainty, and it lets new observations “reverse earlier assertions.”

For now, he’s calling the technology “G2,” and describes it as a “general purpose context accumulating engine.”
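As a thought experiment, here’s a minimal Python sketch of what a context-accumulating engine might look like, going by Jeff’s description: every new observation is integrated with the ones that came before, and a later, conflicting observation can reverse an earlier assertion without throwing away the old context. The class and method names are my own invention for illustration; this is not Jonas’s actual G2 engine:

```python
class ContextAccumulator:
    """Toy context accumulator: incrementally integrates each new observation
    with prior ones, and tolerates disagreement by keeping every observation
    rather than discarding the ones that conflict."""

    def __init__(self):
        # entity_id -> attribute -> list of observed values, oldest first
        self.context = {}

    def observe(self, entity_id, attribute, value):
        """Integrate a new observation with previous ones for this entity."""
        history = self.context.setdefault(entity_id, {}).setdefault(attribute, [])
        history.append(value)

    def current_assertion(self, entity_id, attribute):
        """The current belief is the latest observation, so newer evidence
        can 'reverse' an earlier assertion while the old context survives."""
        history = self.context.get(entity_id, {}).get(attribute)
        return history[-1] if history else None

engine = ContextAccumulator()
engine.observe("cust-42", "address", "123 Elm St")
engine.observe("cust-42", "address", "500 Oak Ave")   # new data self-corrects
print(engine.current_assertion("cust-42", "address"))  # -> 500 Oak Ave
```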

Of course, there was also the Nate Silver keynote, the capstone of yesterday’s morning session; for a taste of the ideas Nate discussed, I’ll refer you back to the interview Scott and I conducted. Your best bet is to buy his book if you really want to understand where he thinks we need to take the promise of prediction.

Written by turbotodd

October 25, 2012 at 5:38 pm
