Turbotodd

Ruminations on IT, the digital media, and some golf thrown in for good measure.

Posts Tagged ‘artificial intelligence’

IBM and MIT to Pursue Joint Research in Artificial Intelligence


IBM and MIT today announced that IBM plans to make a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab in partnership with MIT. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI.

The collaboration aims to advance AI hardware, software and algorithms related to deep learning and other areas, increase AI’s impact on industries, such as health care and cybersecurity, and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.

The new lab will be one of the largest long-term university-industry AI collaborations to date, mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab in Cambridge — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square, in Cambridge, Massachusetts — and on the neighboring MIT campus.

The lab will be co-chaired by IBM Research VP of AI and IBM Q, Dario Gil, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:

  • AI algorithms: Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems, and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.
  • Physics of AI: Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and also researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.
  • Application of AI to industries: Given its location at the IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and optimum treatment paths for specific patients.
  • Advancing shared prosperity through AI: The MIT-IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.

In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies that are developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster an adherence to the ethical application of AI.

Both MIT and IBM have been pioneers in artificial intelligence research, and the new AI lab builds on a decades-long research relationship between the two. In 2016, IBM Research announced a multi-year collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence.

The collaboration has brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision. In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and Genomics.

For more information, visit MITIBMWatsonAILab.mit.edu.

Written by turbotodd

September 7, 2017 at 9:09 am

Happy Anniversary, Deep Blue


It was fifteen years ago today that the IBM chess-playing supercomputer Deep Blue beat he-who-shall-remain-nameless, the reigning world champion, in a six-game match: two wins for IBM, one for the champion, and three draws.

The match, which concluded on May 11, 1997, lasted several days and received massive media coverage around the world. Behind the contest, however, was important computer science, pushing forward the ability of computers to handle the kinds of complex calculations needed to take computing to its next stage of evolution.

It was classic man-versus-machine, but underlying the mythology that enveloped the John Henry storyline was something far more important: The opportunity to push the frontiers of computer science, to push computers to handle the kind of complex calculations necessary for helping discover new pharmaceuticals; to conduct the kind of financial modeling needed to identify trends and do risk analysis; to perform the kinds of massive calculations needed in many fields of science.

Solving The Problem That Is Chess

Since artificial intelligence emerged as a concept alongside the first real computers in the 1940s, computer scientists have compared the performance of these “giant brains” with human minds, and many gravitated to chess as a way of testing the calculating abilities of computers. Chess presents a collection of challenging problems for minds and machines, yet it has simple rules, making it an excellent testbed for laying the groundwork for the “big data” era that was soon to come.

There’s no question that Deep Blue was a powerful computer programmed to solve the complex, strategic game of chess. But IBM’s goal was far deeper: to enable researchers to discover and understand the limits and opportunities presented by massively parallel processing and high-performance computing.

IBM Deep Blue: Analyzing 200 Million Chess Positions Per Second

If Deep Blue could explore up to 200 million possible chess positions per second, could this deep computing capability be used to help society handle the kinds of complex calculations required in those other areas?

Deep Blue did, in fact, prove that industry could tackle these issues with smart algorithms and sufficient computational power.

Earlier this year, I recalled in a blog post my own experience witnessing the Deep Blue chess match. It evoked a lot of nostalgia for me and so many others.

IBM’s Deep Blue supercomputer could explore up to 200 million possible chess positions per second on 510 processors. Juxtapose that with IBM Blue Gene’s ability, a few short years later, to routinely handle 478 trillion operations every second!

But Deep Blue also laid a foundation, paving the way for new kinds of advanced computers and breakthroughs, including IBM’s Blue Gene and, later, IBM Watson.

Forever In Blue Genes

Blue Gene, introduced in 2004, took on the next grand challenge in computing: it was both the most powerful supercomputer of its day and the most efficient, and it was built to help biologists observe the invisible processes of protein folding and gene development. Deep Blue was one of the earliest experiments in the supercomputing work that has made IBM a market leader in this space to this day.

Fifteen years on, we’ve seen epic growth in the volume and variety of data being generated around the planet: from business, from social media, and from the new sensor data instrumenting the physical world as part of IBM’s Smarter Planet initiative. We’ve created so much new data, in fact, that 90% of the data in the world today was created in the last two years alone!

Calling Doctor Watson

Most recently, IBM embarked upon the next wave of this computing progress with the development of IBM Watson, which can hold the equivalent of about one million books’ worth of information. But make no mistake: Watson’s significance lies not just in the amount of information it can process, but in a new generation of technology that uses algorithms to find answers in unstructured data more effectively than standard search technology, while also “understanding” natural language.
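To make the contrast with standard keyword search concrete, here is a toy sketch of retrieval-style question answering in Python: passages are scored by their overlap with the question, with rarer words weighted more heavily. The passages and the scoring scheme are illustrative assumptions only; Watson's actual DeepQA pipeline combined hundreds of natural-language analysis and evidence-scoring components.

```python
# Toy question answering over unstructured text: score each passage by
# word overlap with the question, weighting rarer words more heavily.
# Illustrative sketch only -- nothing like Watson's real DeepQA pipeline.
import math
import re
from collections import Counter

passages = [
    "Deep Blue was a chess-playing computer developed by IBM.",
    "Watson is a question answering system that competed on Jeopardy!",
    "Blue Gene was built to study protein folding and gene development.",
]

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

# Inverse document frequency: words found in fewer passages carry more signal.
df = Counter(w for p in passages for w in set(tokens(p)))
idf = {w: math.log(len(passages) / df[w]) for w in df}

def best_passage(question):
    q = set(tokens(question))
    scores = [sum(idf.get(w, 0.0) for w in q & set(tokens(p)))
              for p in passages]
    return passages[max(range(len(passages)), key=scores.__getitem__)]

print(best_passage("Which system competed on Jeopardy?"))
```

Even this tiny sketch shows the shift in emphasis: rather than matching a query string, the system weighs evidence across a corpus to pick the most likely answer source.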

The promise of IBM Watson is now being put to productive use in industry: as an online tool to assist medical professionals in formulating diagnoses; as a way to simplify the banking experience by analyzing customer needs in the context of vast amounts of ever-changing financial, economic, product, and client data; and, I’m sure, in other industries near you soon.

Those early chess matches were exciting, nail-biting even (and who’d have thought we’d ever say that about chess?)! But they pale in comparison to the productive work and problem-solving that IBM Watson, and other IBM technologies, are doing now and will continue to do as the world of big data matures and is adopted by an ever-increasing audience.

You can now visit Deep Blue, which ultimately was retired to the Smithsonian Museum in Washington, D.C.

But its groundbreaking contributions to artificial intelligence and computing in general continue, and now extend well beyond the confines of the chess board.

Impressions From SXSW 2012: “Conversational Commerce” with Opus Research’s Dan Miller


If you want to better understand the looming intersection between voice recognition and artificial intelligence, you don’t want to talk to HAL from “2001: A Space Odyssey,” or even IBM’s Watson.

You want to speak with Opus Research analyst and co-founder, Dan Miller, which is precisely what Scott Laningham and I did recently at SXSW Interactive 2012.

Dan has spent his 20+ year career focused on marketing, business development, and corporate strategy for telecom service providers, computer manufacturers, and application software developers.

He founded Opus Research in 1985, and helped define the Conversational Access Technologies marketplace by authoring scores of reports, advisories, and newsletters addressing business opportunities that reside where automated speech leverages Web services, mobility, and enterprise software infrastructure.

If you’re thinking about things like Siri, or voice biometrics identification, or the opportunity that your voice response unit has for automating marketing touches, then Dan’s your man.

We spent a good 10 minutes talking with Dan about the idea behind “conversational commerce,” and how important user authentication becomes in a world where the professional and personal are increasingly intertwined, and where IT staffs everywhere are suddenly confronted with new requirements brought about by the “BYOD” (Bring Your Own Device) movement into the enterprise.

Deep Blue Anniversary


The Atlantic Monthly online reminds us that it was sixteen years ago today that world chess grandmaster Garry Kasparov sat down to play the sixth game of his match against IBM’s Deep Blue supercomputer. Kasparov won that match, winning three games, drawing two, and losing one.

Garry Kasparov, right, faces off against IBM's Deep Blue in the final game of their six-game match on this day in 1996.

I recall in this December 2010 post what happened the following year.


Written by turbotodd

February 17, 2012 at 9:47 pm

Putting Watson To Work In Healthcare


If you’ve been wondering whether our IBM intelligent Q&A technology, Watson (no relation), was ever going to go out and get a real job, you need wait no longer.

Just as the Watson vs. Jeopardy! contest begins being rebroadcast here in North America this very day, IBM and WellPoint today announced an agreement to create the first commercial applications of the IBM Watson technology.

WellPoint is the nation’s largest health benefits company in terms of medical membership, with 34 million members in its affiliated health plans, and a total of more than 70 million individuals served through its subsidiaries.

Under the agreement, WellPoint will develop and launch Watson-based solutions to help improve patient care through the delivery of up-to-date, evidence-based health care for millions of Americans.

IBM will develop the foundational Watson healthcare technology on which WellPoint’s solution will run.

What Is Watson?

Watson, named after IBM founder Thomas J. Watson, is a computing system built by a team of IBM scientists who set out to accomplish a grand challenge: build a computing system that rivals a human’s ability to answer questions posed in natural language with speed, accuracy, and confidence.

Earlier this year, Watson competed against and beat two of the most celebrated players ever to appear on Jeopardy!. This historic match is being rebroadcast over three days, starting today.

Watson’s ability to analyze the meaning and context of human language, and quickly process vast amounts of information to suggest options targeted to a patient’s circumstances, can assist decision makers, such as physicians and nurses, in identifying the most likely diagnosis and treatment options for their patients.

In recent years, few areas have advanced as rapidly as health care. For physicians, incorporating hundreds of thousands of articles into practice and applying them to patient care is a significant challenge.

Watson can sift through the equivalent of about 1 million books, or roughly 200 million pages of data, analyze this information, and provide precise responses in less than three seconds.

Watson: Helping Doctors With Their Diagnostics

Using this extraordinary capability, WellPoint expects Watson to enable physicians to easily coordinate medical data programmed into Watson with specified patient factors, helping to identify the most likely diagnosis and treatment options in complex cases. Watson is expected to serve as a powerful tool in the physician’s decision-making process.

Medical conditions such as cancer, diabetes, chronic heart or kidney disease are incredibly intricate. New solutions incorporating Watson are being developed to have the ability to look at massive amounts of medical literature, population health data, and even a patient’s health record, in compliance with applicable privacy and security laws, to answer profoundly complex questions.

For example, we envision that new applications will allow physicians to use Watson to consult patient medical histories, recent test results, recommended treatment protocols and the latest research findings loaded into Watson to discuss the best and most effective courses of treatment with their patients.

“There are breathtaking advances in medical science and clinical knowledge; however, this clinical information is not always used in the care of patients. Imagine having the ability to take in all the information around a patient’s medical care — symptoms, findings, patient interviews and diagnostic studies. Then, imagine using Watson’s analytic capabilities to consider all of the prior cases, the state-of-the-art clinical knowledge in the medical literature and clinical best practices to help a physician advance a diagnosis and guide a course of treatment,” said Sam Nussbaum, M.D., WellPoint’s Chief Medical Officer.

“We believe this will be an invaluable resource for our partnering physicians and will dramatically enhance the quality and effectiveness of medical care they deliver to our members.”

Watson may help physicians identify treatment options that balance the interactions of various drugs and narrow among a large group of treatment choices, enabling physicians to quickly select the more effective treatment plans for their patients.

It is also expected to streamline communication between a patient’s physician and their health plan, helping to improve efficiency in clinical review of complex cases. It could even be used to direct patients to the physician in their area with the best success in treating a particular illness.

Depending on the progress of the development efforts, WellPoint anticipates employing Watson technology in early 2012, working with select physician groups in clinical pilots.

You can visit here to learn more about the IBM Watson technology.

Written by turbotodd

September 12, 2011 at 3:41 pm

Watson’s Webby


“Thank you for the honor.”

Those are all the words IBM’s Watson would be able to convey were it able to stand up on the stage and accept its Webby Award.

Watson was just named person of the year by the Webbys, which is an interesting way of categorizing the IBM supercomputer that outplayed Jeopardy! world champions back in February.

What’s all this, you say? Well, the fact is, Webby Award speeches have historically been limited to five (and typically very carefully chosen) words.

With all that brainpower, I’m sure Watson could come up with something better and much more clever than the five words I selected. I just wanted to make sure it didn’t seem like Watson was ungrateful.

Congratulations, Watson.  You earned every word.

If you’re interested in watching, the 15th annual Webby Awards ceremony will be held June 13 and hosted by Lisa Kudrow. The show will stream live on numerous outlets, including via Facebook and the Huffington Post.

Back in Orlando, Florida, Innovate 2011 is preparing to get going over the weekend.  I mentioned in a previous post that software guru Grady Booch will actually be speaking about Watson at the conference.

Of course, we’re giving Grady more than just five words, as he has quite a bit to say about the software methods behind Watson’s madness.

Written by turbotodd

June 3, 2011 at 1:55 pm

Deep Q&A: An Interview With IBM’s Dr. David Ferrucci On Watson Beyond Jeopardy!


During the recent SXSW Interactive fest here in Austin, developerWorks’ Scott Laningham and I had the opportunity to sit down and do an interview with the principal investigator behind the Watson Deep QA technology, Dr. David Ferrucci.

You may recognize Ferrucci from some of his recent TV appearances (or the IBM smarter planet TV spots addressing the power and opportunity the Watson technology presents).

Me, I was just glad to have the opportunity, along with Scott, to ask some specific questions that had been on my mind about Watson. And also to point out to Dr. Ferrucci that I had the last name Watson before our supercomputer did!

It was a fun and fascinating 13 minutes, and, for my money, one of the highlight interviews Scott and I have conducted in recent times.

Continued kudos to Ferrucci and his entire IBM team for such a great success with Watson.  Clearly, the Jeopardy! victory is just a launching point for the exciting new places where this new technology is likely to take us from here.
