Turbotodd

Ruminations on tech, the digital media, and some golf thrown in for good measure.

Archive for October 2012

Watson Goes Back To School


In my capacity as a cheerleader for my virtual big brother, IBM's Watson technology, I've received a lot of questions along the way about how IBM plans to use the technology in industry, and how we can most effectively put Watson to work.

Great questions, and the answer is, it depends.

Yesterday, for example, we announced a new program in partnership with Cleveland Clinic in Cleveland, Ohio, a collaboration to advance Watson's use in medical training.

The IBM researchers who built Watson will work with Cleveland Clinic clinicians, faculty, and medical students to enhance the capabilities of Watson's Deep Question Answering technology for the field of medicine.

Calling Dr. Watson

Watson’s ability to analyze the meaning and context of human language and quickly process information to piece together evidence for answers can help healthcare decision makers, such as clinicians, nurses and medical students, unlock important knowledge and facts buried within huge volumes of information.

Watson has been gaining knowledge in the field of medicine, and Cleveland Clinic with IBM recognized the opportunity for Watson to interact with medical students to help explore a wide variety of learning challenges facing the medical industry today.

Rather than attempting to memorize everything in textbooks and medical journals (now acknowledged as an impossible task), students are learning through doing — taking patient case studies, analyzing them, coming up with hypotheses, and then finding and connecting evidence in reference materials and the latest journals to identify diagnoses and treatment options in the context of medical training.

This process of considering multiple medical factors and discovering and evidencing solution paths in large volumes of data reflects the core capabilities of the Watson technology.
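In spirit, that hypothesis-and-evidence loop is a candidate-scoring routine: generate possible answers, gather evidence for each, and rank by support. Here's a toy Python sketch of that idea — purely illustrative, not IBM's actual DeepQA pipeline, whose scoring models and sources are far richer and proprietary:

```python
# Toy hypothesis-and-evidence loop: generate candidate answers,
# score each against evidence passages, rank by total support.
# Illustrative only; the real DeepQA pipeline is far more sophisticated.

def score_candidates(candidates, evidence_passages):
    """Rank candidate diagnoses by naive keyword overlap with evidence."""
    scores = {}
    for candidate in candidates:
        terms = set(candidate.lower().split())
        support = 0.0
        for passage in evidence_passages:
            passage_terms = set(passage.lower().split())
            # Each passage contributes the fraction of candidate terms it mentions
            support += len(terms & passage_terms) / len(terms)
        scores[candidate] = support
    # Highest-scoring hypotheses first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical case: two candidate diagnoses, two evidence snippets
candidates = ["iron deficiency anemia", "hypothyroidism"]
evidence = [
    "low ferritin and microcytic red cells suggest iron deficiency",
    "anemia with fatigue reported",
]
ranked = score_candidates(candidates, evidence)
print(ranked[0][0])  # the best-supported hypothesis
```

The point of the exercise is the shape of the process, not the scoring: a student (or Watson) proposes hypotheses, then the evidence, not memorization, decides which ones survive.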

Watson Providing Problem-Based Learning Curriculum

Medical students will interact with Watson on challenging cases as part of a problem-based learning curriculum and in hypothetical clinical simulations.

A collaborative learning and training tool built on the Watson technology will help medical students learn to navigate the latest content, suggest and consider a variety of hypotheses, and find key evidence to support potential answers, diagnoses and possible treatment options.

“Every day, physicians and scientists around the world add more and more information to what I think of as an ever-expanding, global medical library,” said C. Martin Harris, M.D., Chief Information Officer of Cleveland Clinic. “Cleveland Clinic’s collaboration with IBM is exciting because it offers us the opportunity to teach Watson to ‘think’ in ways that have the potential to make it a powerful tool in medicine. Technology like this can allow us to leverage that medical library to help train our students and also find new ways to address the public health challenges we face today.”

Watson Will Learn From Medical Students

Students will help improve Watson’s language and domain analysis capabilities by judging the evidence it provides and analyzing its answers within the domain of medicine. Through engagement with this education tool and Watson, medical students and Watson will benefit from each other’s strengths and expertise to both learn and improve their collaborative performance.

The collaboration will also focus on leveraging Watson to process an electronic medical record (EMR) based on a deep semantic understanding of the content within an EMR.

The shift is clearly away from memorization and toward critical thinking: medical training programs will help students use powerful discovery and language-analysis tools like Watson to evaluate medical case scenarios and find evidence to carefully rationalize decisions. The physicians will rely on their own experience and expert critical thinking skills to read the evidence and make the final judgments.

“The practice of medicine is changing and so should the way medical students learn. In the real world, medical case scenarios should rely on people’s ability to quickly find and apply the most relevant knowledge. Finding and evaluating multistep paths through the medical literature is required to identify evidence in support of potential diagnoses and treatment options,” said Dr. David Ferrucci, IBM Fellow and Principal Investigator of the Watson project.

Over time, the expectation is that Watson will get “smarter” about medical language and how to assemble good chains of evidence from available content. Students will learn how to focus on critical thinking skills and how to best leverage informational tools like Watson in helping them learn how to diagnose and treat patients.

IBM and Cleveland Clinic will discuss the role of Watson for the future of healthcare and healthcare education this week at the Cleveland Clinic Medical Innovation Summit being held October 29-31, 2012 in Cleveland, OH.

I sat down recently at the IBM InterConnect event in Singapore to conduct a fascinating mid-year employee performance review for IBM’s Watson technology with Watson GM Manoj Saxena.  You can see the fruits of our discussion in the video below.

Sandy’s Data Center Impact


Well, I sat and watched the coverage of Superstorm Sandy last night, flipping between the major cable news networks and The Weather Channel, and also trying to keep up with my northeast friends via Facebook and Twitter.

You could almost mirror match the power outages with the suddenly disappearing Facebook and Twitter streams, as one friend after another dropped from the social radar screen.

Having lived in New York City and its surroundings for the better part of eight years of my life, I was completely sympathetic to their plight, and quite frankly, astonished at some of the images I was witnessing.

I’ve been out doing some research to try and understand the negative IT impact, and it didn’t take long.

This story indicated that the flooding had hobbled two data center buildings in Lower Manhattan, mainly because it took out diesel pumps (located in basements) that were needed to refuel generators.

Datagram's basement at 33 Whitehall was also inundated, taking out some major Web sites, including Gawker, Gizmodo, Buzzfeed, The Huffington Post, and Media. The attached screenshot shows the message I received when I tried going there just this morning.

Ars Technica also had a post detailing some of the outages, in which they suggested that "customers relying on hosting providers and cloud services may want to build systems that can fail over across multiple regions," but that "even the most extensive preparations may not be enough to stay online in the face of a storm like Hurricane Sandy."

IBM’s own Business Continuity Services had this message for IBM clients posted on its home page overnight:

The IBM Business Continuity and Resiliency team is monitoring the status of Hurricane Sandy and has activated our Emergency Operations Center to ensure we are prepared to assist our customers throughout the storm. Our delivery teams are assembled in BCRS recovery centers in Sterling Forest, NY, Gaithersburg, MD, and Boulder, CO, and all facilities are secure and ready to support all client declarations. We are proactively assessing the potential impact to our customers who are projected to be in the path of the storm, and our delivery program management team will provide regular updates to our clients as the storm progresses, and will be available to respond to any questions throughout the week. If you need to call IBM to place us on alert, or to declare a disaster, please call 1-877-IBM-REC1 (877-426-7321).

Written by turbotodd

October 30, 2012 at 7:43 pm

Think Small: IBM Researchers Demonstrate Carbon Nanotubes, Potential Silicon Successors


Since I posted about Hurricane Sandy earlier in the day, I’ve seen some pretty stunning pictures and video coming in, and heard more reports from friends in and around the New York City area.

The story of the crane toppling over on a very tall building being built on West 57th Street, between 6th and 7th Avenues (my old IBM office is at Madison and 57th, further east) was most stunning. You can find some of the pics or video on CNN.

While we wait to discover how big a problem Sandy presents to the northeast Atlantic coast, I’ll share with you a diversion focusing on a much smaller topic — but one with potentially huge implications.

IBM scientists recently demonstrated a new approach to carbon technology that opens up the path for commercial fabrication of dramatically smaller, faster and more powerful computer chips.

For the first time, more than ten thousand working transistors made of nano-sized tubes of carbon have been precisely placed and tested in a single chip using standard semiconductor processes.

These carbon devices are poised to replace and outperform silicon technology, allowing further miniaturization of computing components and leading the way for future microelectronics.

Four Decades Of Innovation

Aided by rapid innovation over four decades, silicon microprocessor technology has continually shrunk in size and improved in performance, thereby driving the information technology revolution.

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation.

Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

Carbon nanotubes represent a new class of semiconductor materials whose electrical properties are more attractive than silicon, particularly for building nanoscale transistor devices that are a few tens of atoms across.

Electrons in carbon transistors can move more easily than in silicon-based devices, allowing for quicker transport of data. The nanotubes are also ideally shaped for transistors at the atomic scale, an advantage over silicon.

These qualities are among the reasons to replace the traditional silicon transistor with carbon, which — coupled with new chip design architectures — will allow computing innovation on a miniature scale for the future.

The approach developed at IBM labs paves the way for circuit fabrication with large numbers of carbon nanotube transistors at predetermined substrate positions. The ability to isolate semiconducting nanotubes and place a high density of carbon devices on a wafer is crucial to assess their suitability for a technology — eventually more than one billion transistors will be needed for future integration into commercial chips.

Hardly A Carbon Copy

Until now, scientists have been able to place at most a few hundred carbon nanotube devices at a time, not nearly enough to address key issues for commercial applications.

Originally studied for the physics that arises from their atomic dimensions and shapes, carbon nanotubes are being explored by scientists worldwide in applications that span integrated circuits, energy storage and conversion, biomedical sensing and DNA sequencing.

This achievement was published today in the peer-reviewed journal Nature Nanotechnology.

Carbon, a readily available basic element from which crystals as hard as diamonds and as soft as the “lead” in a pencil are made, has wide-ranging IT applications.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotube forms the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power our data-crunching servers, high performing computers and ultra fast smart phones.

Earlier this year, IBM researchers demonstrated that carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers — some 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five- to ten-fold improvement in performance over silicon circuits is possible.

There are practical challenges for carbon nanotubes to become a commercial technology, notably, as mentioned earlier, the purity and placement of the devices. Carbon nanotubes naturally come as a mix of metallic and semiconducting species and need to be placed perfectly on the wafer surface to make electronic circuits. For device operation, only the semiconducting tubes are useful, which requires essentially complete removal of the metallic ones to prevent errors in circuits.

Also, for large scale integration to happen, it is critical to be able to control the alignment and the location of carbon nanotube devices on a substrate.

To overcome these barriers, IBM researchers developed a novel method based on ion-exchange chemistry that allows precise and controlled placement of aligned carbon nanotubes on a substrate at a high density — two orders of magnitude greater than previous experiments, enabling the controlled placement of individual nanotubes with a density of about a billion per square centimeter.

The process starts with carbon nanotubes mixed with a surfactant, a kind of soap that makes them soluble in water. The substrate comprises two oxides, with trenches made of chemically modified hafnium oxide (HfO2) and the rest of the surface silicon oxide (SiO2). The substrate is immersed in the carbon nanotube solution, and the nanotubes attach via a chemical bond to the HfO2 regions while the rest of the surface remains clean.

By combining chemistry, processing and engineering expertise, IBM researchers are able to fabricate more than ten thousand transistors on a single chip.

Furthermore, rapid testing of thousands of devices is possible using high-volume characterization tools, thanks to compatibility with standard commercial processes.

As this new placement technique can be readily implemented, involving common chemicals and existing semiconductor fabrication, it will allow the industry to work with carbon nanotubes at a greater scale and deliver further innovation for carbon electronics.

You can learn more in the animation below.

Written by turbotodd

October 29, 2012 at 8:02 pm

Sandy


Hurricane Sandy is rapidly approaching the Atlantic coast of the U.S. As of 8 a.m., the huge storm was producing sustained winds of 85 miles per hour after turning north-northwest toward the coastline of New Jersey, according to the National Hurricane Center. The center of the storm is now moving at 20 m.p.h., a significant speedup from earlier in the morning.

If it’s Monday, it must be time for a Hurricane.

And I’m not referring to the cocktail emanating from Pat O’Brien’s in New Orleans.

Hurricane Sandy is drifting up the Atlantic coast and is expected to make landfall later this afternoon, probably somewhere in New Jersey.

But as of 8:52 CST this morning, she's already having an impact well into New York City. I've already seen Twitpics of Battery Park City starting to surrender to the surge, which is truly frightening considering how much of the storm is still yet to come.

As an FYI, both The New York Times and The Wall Street Journal have eliminated their paywalls and are making their content free, if you’re looking for up-to-the-minute updates on the storm.

YouTube is also streaming The Weather Channel (where NBC’s Al Roker was just seen trying to stay vertical at Point Pleasant Beach, New Jersey).

I spoke with a good friend of mine who lives on the edge of Cobble Hill (in Brooklyn), and he indicated the water had not yet lapped over the piers there, but that it was likely only a matter of time. Forecasters are expecting a 6-to-11-foot surge when high tide strikes around 8 tonight.

New York governor Andrew Cuomo just held a press conference and announced the closing of both the Holland and Brooklyn Battery tunnels at 2 P.M. EST.

If you're interested in seeing more details about the storm, Google's offering up its "Crisis Map" here, and a more specific look at NYC here.

On Twitter, the National Hurricane Center is offering facts and tips at @NHC_Atlantic, and the Weather Channel can be followed at @weatherchannel.

I was living in NYC in 1985 during Hurricane Gloria, and that storm paled in comparison. So, please, be safe out there, stay away from the ocean, stay inside, and ride this sucker out as safely as you can!

UPDATE: I just built this Turbo Sandy Twitter list, with a list of followees from a variety of media and government sources, including the Weather Channel, NASA, FEMA, and a variety of others.

Written by turbotodd

October 29, 2012 at 2:57 pm

Live @ Information On Demand 2012: Craig Rhinehart On Predictive Healthcare


I made it back to Austin late last night, mostly no worse for the wear.

There were a number of key announcements made at Information On Demand 2012 over the course of the past few days in Las Vegas.

One of those that I mentioned in one of my keynote post summaries was IBM Patient Care and Insights, new analytics software based on innovations from IBM Labs that helps healthcare organizations improve patient care and lower operational costs by considering the specific health history of each individual patient.

This is a fascinating new capability with profound implications for healthcare providers.

The new IBM solution provides the core capabilities for devising predictive models of various health conditions, which can be used to identify early intervention opportunities to improve the patient’s outlook by minimizing or avoiding potential health problems.

It features advanced analytics and care management capabilities to help identify early intervention opportunities and coordinate patient care.

Providing Individualized Care

At the core of IBM Patient Care and Insights, developed by IBM’s software, research and services teams, are similarity analytics that help drive smart, individualized care delivery.

Born in IBM Research, IBM similarity analytics is a set of core capabilities and algorithms that allow healthcare professionals to examine thousands of patient characteristics at once — including demographic, social, clinical and financial factors along with unstructured data such as physicians’ notes — to generate personalized evidence and insights, and then provide care according to personalized treatment plans.

By way of example, physicians can make personalized recommendations to improve a patient’s outcome by finding other patients with similar clinical characteristics to see what treatments were most effective or what complications they may have encountered.

They can also perform patient-physician matching so an individual is paired with a doctor that is optimal for a specific condition. With this solution, caregivers can better tap into the collective memory of the care delivery system to uncover new levels of tailored insight or “early identifiers” from historical/long term patient data that enable doctors and others to help manage a patient’s healthcare needs well into the future.
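At its core, the "find patients like this one and see what worked" idea is a nearest-neighbor search over patient profiles. A minimal sketch in Python — purely illustrative, with made-up patients and features; IBM's similarity analytics also fold in unstructured data such as physicians' notes, which this toy ignores:

```python
import math

def similar_patients(target, cohort, k=2):
    """Return the k patients whose numeric feature vectors lie closest
    to the target patient's, by Euclidean distance."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(cohort, key=lambda p: distance(p["features"], target))
    return ranked[:k]

# Hypothetical cohort: features might encode age, BMI, a lab value, etc.
cohort = [
    {"id": "A", "features": [63, 29.1, 7.2], "treatment": "drug X"},
    {"id": "B", "features": [35, 22.0, 5.1], "treatment": "drug Y"},
    {"id": "C", "features": [61, 30.4, 7.5], "treatment": "drug X"},
]
# A new patient's profile: the closest matches hint at what has worked
# for clinically similar patients in the past.
neighbors = similar_patients([62, 28.5, 7.0], cohort)
print([p["id"] for p in neighbors])
```

With thousands of characteristics per patient, the interesting engineering is in scaling this search and weighting the features — but the basic shape of the question is the same.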

Craig Rhinehart, director for IBM's ECM Strategy and Market Development organization, sat down with Scott Laningham and me earlier this week to describe the challenges facing health care, and how IBM Patient Care and Insights can help improve health care by delivering dynamic, case-based, patient-centric electronic care plans and population analysis.

Go here for more information on IBM Patient Care and Insights and IBM Intelligent Investigation Manager.

Live @ Information On Demand 2012: Watson’s Next Job


As I mentioned in my last post, yesterday was day 3 of Information On Demand 2012 here in Las Vegas.

There was LOTS going on out here in the West.

We started the day by interviewing keynote speaker Nate Silver (see previous post) just prior to his going on stage for the morning general session. Really fascinating interview, and going into it I learned that his book had reached #8 on The New York Times best-seller list.


So congrats, Nate, and thanks again for a scintillating interview.

During the morning session, we also heard from IBM's own Craig Rhinehart about the opportunity for achieving better efficiencies in health care using enterprise content management solutions from IBM.

I nearly choked when Craig explained that thirty cents of every dollar spent on healthcare in the U.S. is wasted, and that despite spending more than any other country, the U.S. ranks 37th in terms of care.

Craig explained the IBM Patient Care and Insights tool was intended to bring advanced analytics out of the lab and into the hospital to help start driving down some of those costs, and more importantly, to help save lives.

We also heard from IBM Fellow and CTO of IBM Watson Solutions’ organization, Rob High, about some of the recent advancements made on the Watson front.

High explained the distinction between programmatic and cognitive computing, the latter being the direction computing is now taking, and an approach that provides for much more “discoverability” even as it’s more probabilistic in nature.

High walked through a fascinating call center demonstration, whereby Watson helped a call center agent respond more quickly to a customer query by filtering through thousands of possible answers in a few seconds, then homing in on those most likely to answer the customer's question.

Next, we heard from Jeff Jonas, IBM's entity analytics "Ironman" (Jeff also just completed his 27th Ironman triathlon last weekend), who explained his latest technology, context accumulation.

Jeff observed that context accumulation was the “incremental process of integrating new observations with previous ones.”

Or, in other words, developing a better understanding of something by taking into account more of the things around it.

Too often, Jeff suggested, analytics has been done in isolation, but that “the future of Big Data is the diverse integration of data” where “data finds data.”

His new method allows for self-correction, and a high tolerance for disagreement, confusion and uncertainty, and where new observations can “reverse earlier assertions.”

For now, he’s calling the technology “G2,” and explains it as a “general purpose context accumulating engine.”
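To make the idea concrete, here's a toy context accumulator in Python. It's a hedged illustration of the concept as Jeff described it, not his actual G2 engine: each new observation is integrated with prior ones, and a later observation that shares identifiers with two previously separate entities merges them, reversing the earlier assertion that they were different people.

```python
# Toy context accumulator: entities are sets of identifiers, and each
# new observation either joins an existing entity or creates a new one.
# An observation bridging two entities collapses them into one,
# demonstrating how "new observations can reverse earlier assertions."

class ContextAccumulator:
    def __init__(self):
        self.entities = []  # each entity is a set of identifier strings

    def observe(self, identifiers):
        """Integrate a new observation with everything seen so far."""
        identifiers = set(identifiers)
        matched = [e for e in self.entities if e & identifiers]
        merged = set(identifiers)
        for e in matched:
            merged |= e
            self.entities.remove(e)  # earlier separate entities collapse
        self.entities.append(merged)

acc = ContextAccumulator()
acc.observe({"name:J. Smith", "phone:555-0100"})
acc.observe({"name:John Smith", "email:js@example.com"})
# So far the evidence supports two distinct people...
assert len(acc.entities) == 2
# ...until a new observation ties the phone to the email.
acc.observe({"phone:555-0100", "email:js@example.com"})
print(len(acc.entities))
```

The tolerance for early "wrong" answers is the point: the accumulator doesn't need to be certain up front, because later context self-corrects the picture.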

Of course, there was also the Nate Silver keynote, the capstone of yesterday's morning session; for that, I'll refer you back to the interview Scott and I conducted for a summary taste of all the ideas Nate discussed. Your best bet is to buy his book if you really want to understand where he thinks we need to take the promise of prediction.

Written by turbotodd

October 25, 2012 at 5:38 pm

Live @ Information On Demand 2012: A Q&A With Nate Silver On The Promise Of Prediction


Day 3 at Information On Demand 2012.

The suggestion to "Think Big" continued, so Scott Laningham and I sat down very early this morning with Nate Silver, blogger and author of "The Signal and the Noise," now a New York Times bestseller (you can read the Times' review of the book here).

Nate, who is a youngish 34, has become our leading statistician through his innovative analyses of political polling, but made his original name by building a widely acclaimed baseball statistical analysis system called “PECOTA.”

Today, Nate runs the award-winning political website FiveThirtyEight.com, which is now published in The New York Times and which has made Nate the public face of statistical analysis and political forecasting.

In his book, the full title of which is “The Signal and The Noise: Why Most Predictions Fail — But Some Don’t,” Silver explores how data-based predictions underpin a growing sector of critical fields, from political polling to weather forecasting to the stock market to chess to the war on terror.

In the book, Nate poses some key questions, including: What kind of predictions can we trust? Are the predictors using reliable methods? And what sorts of things can, and cannot, be predicted?

In our conversation in the greenroom just prior to his keynote at Information On Demand 2012 earlier today, Scott and I probed along a number of these vectors, asking Nate about the importance of prediction in Big Data, statistical influence on sports and player predictions (a la “Moneyball”), how large organizations can improve their predictive capabilities, and much more.

It was a refreshing and eye-opening interview, and I hope you enjoy watching it as much as Scott and I enjoyed conducting it!
