Turbotodd

Ruminations on tech, the digital media, and some golf thrown in for good measure.

Posts Tagged ‘information on demand 2012’

Live @ Information On Demand 2012: Watson’s Next Job


As I mentioned in my last post, yesterday was day 3 of Information On Demand 2012 here in Las Vegas.

There was LOTS going on out here in the West.

We started the day by interviewing keynote speaker Nate Silver (see previous post) just prior to his going on stage for the morning general session. It was a really fascinating interview, and going into it I learned that his book had reached #8 on The New York Times best seller list.

In the IOD 2012 day 3 general session, IBM Fellow Rob High explains how IBM’s Watson technology may soon help drive down call center costs by 50 percent, using the intelligence engine of Watson to help customer service reps respond to customer queries faster.

So congrats, Nate, and thanks again for a scintillating interview.

During the morning session, we also heard from IBM’s own Craig Rinehart about the opportunity for achieving better efficiencies in health care using enterprise content management solutions from IBM.

I nearly choked when Craig explained that thirty cents of every dollar spent on healthcare in the U.S. is wasted, and that despite spending more than any other country, the U.S. ranks 37th in terms of quality of care.

Craig explained that the IBM Patient Care and Insights tool was intended to bring advanced analytics out of the lab and into the hospital, to help start driving down some of those costs and, more importantly, to help save lives.

We also heard from Rob High, IBM Fellow and CTO of IBM’s Watson Solutions organization, about some of the recent advancements on the Watson front.

High explained the distinction between programmatic and cognitive computing, the latter being the direction computing is now taking, and an approach that provides for much more “discoverability” even as it’s more probabilistic in nature.

High walked through a fascinating call center demonstration, in which Watson helped a call center agent respond more quickly to a customer query by filtering through thousands of possible answers in a few seconds, then homing in on those most likely to answer the customer’s question.
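If you’re curious what that filter-and-rank step looks like in spirit, here’s a minimal Python sketch. To be clear, this is my own toy illustration, not Watson’s actual pipeline; the overlap-based scoring function is a crude stand-in for Watson’s far more sophisticated evidence scoring.

```python
# Toy illustration of the filter-then-rank idea from High's demo:
# score many candidate answers against a query, keep the most likely few.
# This is NOT Watson's pipeline; the scorer is a crude stand-in.
import math

def score(query: str, candidate: str) -> float:
    """Crude relevance score: token overlap, damped by candidate length."""
    q = set(query.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / math.sqrt(len(c) + 1)

def top_answers(query: str, candidates: list[str], k: int = 3) -> list[str]:
    """Rank all candidate answers and return the k most likely."""
    return sorted(candidates, key=lambda c: score(query, c), reverse=True)[:k]

candidates = [
    "Your bill is due on the 15th of each month.",
    "To reset your router, hold the reset button for ten seconds.",
    "Late fees apply seven days after the due date on your bill.",
]
print(top_answers("when is my bill due", candidates, k=2))
```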

Next, we heard from Jeff Jonas, IBM’s entity analytics “Ironman” (Jeff also just completed his 27th Ironman triathlon last weekend), who explained his latest technology, context accumulation.

Jeff observed that context accumulation was the “incremental process of integrating new observations with previous ones.”

Or, in other words, developing a better understanding of something by taking more into account the things around it.

Too often, Jeff suggested, analytics has been done in isolation, but that “the future of Big Data is the diverse integration of data” where “data finds data.”

His new method allows for self-correction, and a high tolerance for disagreement, confusion and uncertainty, and where new observations can “reverse earlier assertions.”

For now, he’s calling the technology “G2,” and explains it as a “general purpose context accumulating engine.”
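For the technically inclined, here’s a toy sketch of the context accumulation idea as I understood it: each new observation is folded into everything seen before, and a prior assertion can be reversed when new evidence arrives. This is purely my own illustration in Python, with made-up record fields; it bears no resemblance to the internals of G2 itself.

```python
# Toy sketch of context accumulation: integrate each new observation with
# all previous ones, re-evaluating prior assertions as evidence accumulates.
# My own illustration only -- not the actual G2 engine.

class ContextAccumulator:
    def __init__(self):
        self.records = {}     # record id -> attributes merged so far
        self.assertions = {}  # frozenset({a, b}) -> "same" or "different"

    def observe(self, rid, **attrs):
        """Fold in a new observation, then re-evaluate every pairing."""
        self.records.setdefault(rid, {}).update(attrs)
        ids = list(self.records)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                ra, rb = self.records[a], self.records[b]
                if ra.get("name") != rb.get("name"):
                    continue  # no basis for asserting a match
                key = frozenset((a, b))
                dob_a, dob_b = ra.get("dob"), rb.get("dob")
                if dob_a and dob_b and dob_a != dob_b:
                    self.assertions[key] = "different"  # reversal!
                else:
                    self.assertions[key] = "same"

ctx = ContextAccumulator()
ctx.observe(1, name="J. Smith", dob="1970-01-01")
ctx.observe(2, name="J. Smith")              # asserted: same person
ctx.observe(2, dob="1985-06-30")             # new evidence reverses that
print(ctx.assertions)                        # {frozenset({1, 2}): 'different'}
```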

Of course, there was also the Nate Silver keynote, the capstone of yesterday’s morning session. For a summary taste of the ideas Nate discussed, I’ll refer you back to the interview Scott and I conducted. Your best bet is to buy his book if you really want to understand where he thinks we need to take the promise of prediction.

Written by turbotodd

October 25, 2012 at 5:38 pm

Live @ Information On Demand 2012: A Q&A With Nate Silver On The Promise Of Prediction


Day 3 at Information On Demand 2012.

The suggestion to “Think Big” continued, so Scott Laningham and I sat down very early this morning with Nate Silver, blogger and author of the New York Times bestseller “The Signal and the Noise” (you can read the Times’ review of the book here).

Nate, who is a youngish 34, has become our leading statistician through his innovative analyses of political polling, but he first made his name by building a widely acclaimed baseball statistical analysis system called “PECOTA.”

Today, Nate runs the award-winning political website FiveThirtyEight.com, which is now published in The New York Times and which has made Nate the public face of statistical analysis and political forecasting.

In his book, the full title of which is “The Signal and The Noise: Why Most Predictions Fail — But Some Don’t,” Silver explores how data-based predictions underpin a growing sector of critical fields, from political polling to weather forecasting to the stock market to chess to the war on terror.

In the book, Nate poses some key questions: What kinds of predictions can we trust? Are the “predictors” using reliable methods? And what sorts of things can, and cannot, be predicted?

In our conversation in the greenroom just prior to his keynote at Information On Demand 2012 earlier today, Scott and I probed along a number of these vectors, asking Nate about the importance of prediction in Big Data, statistical influence on sports and player predictions (a la “Moneyball”), how large organizations can improve their predictive capabilities, and much more.

It was a refreshing and eye-opening interview, and I hope you enjoy watching it as much as Scott and I enjoyed conducting it!

Live @ Information On Demand 2012: Smarter Marketing Analytics


Big Data is the digital convergence of structured and unstructured data. Those organizations that can capture and analyze their data, regardless of what type, how much, or how fast it is moving, can make more informed decisions. At Information On Demand 2012 today in Las Vegas, IBM announced a new digital marketing system to help CMOs conduct smarter marketing analytics.

The news dam has begun to break at the IBM Information On Demand And Business Analytics Forum here in Vegas.

One of the highlights of today’s announcement was IBM’s unveiling of a new digital marketing system and big data software designed to help organizations gain actionable insights.

These offerings tackle the most pressing big data challenges facing organizations today: accessing and extracting intelligence from the enormous stream of data generated by mobile, social and digital networks.

Big Data for Chief Marketing Officers 

The emergence of big data technologies is driving the transformation of marketing for every channel. Chief Marketing Officers (CMOs) are now responsible for analyzing consumer demands from social media, mobile devices, and traditional channels, and for aligning those demands with product development and sales.

The new IBM Digital Analytics Accelerator helps CMOs tap into consumer sentiment to create targeted advertising and promotions, avoid customer churn, and perform advanced Web analytics that predict customer needs.

Now, CMOs can bring advanced analytics to all their social media, web traffic, and customer communication behind their own firewall.

The industry’s first big data solution in the digital marketing arena is powered by Netezza and Unica technologies. With this integrated offering that includes the recently announced PureData System for Analytics, clients can run complex analytics on petabytes of data in minutes, and arm marketing professionals with instant insights.

CMOs can use new insights to accelerate marketing campaigns and better meet consumer needs based on the broadest range of data.

Trident Marketing: Gaining Visibility Into Consumer Behaviors

For Trident Marketing, a direct response marketing and sales firm for leading brands such as DIRECTV, ADT and Travel Resorts of America, performing analytics on big data has helped the company gain unprecedented visibility into consumers — from predicting the precise moment in which to engage with customers to anticipating the likelihood a customer will cancel service.

Working with IBM and partner Fuzzy Logix, the company has realized massive growth, including a tenfold increase in revenue in just four years, a ten percent increase in sales in the first 60 days, and a 50 percent decrease in customer churn.

Big Study On Big Data


Perfect timing.

In advance of IBM’s massive event next week in Las Vegas featuring all things information management, Information On Demand 2012, IBM and the Saïd Business School at the University of Oxford today released a study on Big Data.


The headline: Most Big Data initiatives currently being deployed by organizations are aimed at improving the customer experience, yet less than half of the organizations involved in active Big Data initiatives are currently collecting and analyzing external sources of data, like social media.

One reason: Many organizations are struggling to address and manage the uncertainty inherent in certain types of data, such as the weather, the economy, or the sentiment and truthfulness expressed by people on social networks.

Another? Social media and other external data sources are being underutilized due to the skills gap. Having the advanced capabilities required to analyze unstructured data — data that does not fit in traditional databases such as text, sensor data, geospatial data, audio, images and video — as well as streaming data remains a major challenge for most organizations.

The new report, entitled “Analytics: The real-world use of Big Data,” is based on a global survey of 1,144 business and IT professionals from 95 countries and 26 industries. The report provides a global snapshot of how organizations today view Big Data, how they are building essential capabilities to tackle Big Data and to what extent they are currently engaged in using Big Data to benefit their business.

Only 25 percent of the survey respondents say they have the required capabilities to analyze highly unstructured data — a major inhibitor to getting the most value from Big Data.

The increasing business opportunities and benefits of Big Data are clear. Nearly two-thirds (63 percent) of the survey respondents report that using information, including Big Data, and analytics is creating a competitive advantage for their organizations. This is a 70 percent increase from the 37 percent who cited a competitive advantage in a 2010 IBM study.

Big Data Drivers and Adoption

In addition to customer-centric outcomes, which half (49 percent) of the respondents identified as a top priority, early applications of Big Data are addressing other functional objectives.

Nearly one-fifth (18 percent) cited optimizing operations as a primary objective. Other Big Data applications are focused on risk and financial management (15 percent), enabling new business models (14 percent) and employee collaboration (4 percent).

Three-quarters (76 percent) of the respondents are currently engaged in Big Data development efforts, but the report confirms that most of those efforts (47 percent of all respondents) are still in the early planning stages.

However, 28 percent are developing pilot projects or have already implemented two or more Big Data solutions at scale. Nearly one quarter (24 percent) of the respondents have not initiated Big Data activities, and are still studying how Big Data will benefit their organizations.

Sources of Big Data

More than half of the survey respondents reported internal data as the primary source of Big Data within their organizations. This suggests that companies are taking a pragmatic approach to Big Data, and also that there is tremendous untapped value still locked away in these internal systems.

Internal data is the most mature, well-understood data available to organizations. The data has been collected, integrated, structured and standardized through years of enterprise resource planning, master data management, business intelligence and other related work.

By applying analytics, internal data extracted from customer transactions, interactions, events and emails can provide valuable insights.

Big Data Capabilities

Today, the majority of organizations engaged in Big Data activities start with analyzing structured data using core analytics capabilities, such as query and reporting (91 percent) and data mining (77 percent).

Two-thirds (67 percent) report using predictive modeling skills.

But Big Data also requires the capability to analyze semi-structured and unstructured data, including a variety of data types that may be entirely new for many organizations.

In more than half of the active Big Data efforts, respondents reported using advanced capabilities designed to analyze text in its natural state, such as the transcripts of call center conversations.

These analytics include the ability to interpret and understand the nuances of language, such as sentiment, slang and intentions. Such data can help companies, such as banks or telco providers, understand the current mood of a customer and gain valuable insights that can be used immediately to drive customer management strategies.
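As a point of reference for what even the simplest version of this looks like, here’s a deliberately tiny, lexicon-based sentiment scorer in Python. Real text analytics of the kind the report describes handle negation, slang and intent; this toy, with its made-up word lists, only shows the basic shape of scoring a call transcript.

```python
# Deliberately tiny lexicon-based sentiment scoring -- a toy showing the
# shape of the idea, not the nuance-aware analytics described above.
import re

POSITIVE = {"great", "thanks", "happy", "resolved", "helpful"}
NEGATIVE = {"cancel", "angry", "terrible", "frustrated", "slow"}

def sentiment(transcript: str) -> float:
    """Score in [-1, 1]; below zero suggests an unhappy customer."""
    words = re.findall(r"[a-z']+", transcript.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

call = "I am frustrated -- the service has been terrible and slow"
print(sentiment(call))  # -1.0: flag this caller for a retention offer
```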

You can download and read the full study here.

Update: Also check out the new IBM Big Data Hub, a compendium of videos, blog posts, podcasts, white papers, and other useful assets centering on this big topic!
