Turbotodd

Ruminations on IT, the digital media, and some golf thrown in for good measure.

Posts Tagged ‘AI’

IBM and MIT to Pursue Joint Research in Artificial Intelligence


IBM and MIT today announced a 10-year, $240 million investment by IBM to create the MIT–IBM Watson AI Lab. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI.

The collaboration aims to advance AI hardware, software and algorithms related to deep learning and other areas, increase AI’s impact on industries, such as health care and cybersecurity, and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.

The new lab will be one of the largest long-term university-industry AI collaborations to date, mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab in Cambridge — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square, in Cambridge, Massachusetts — and on the neighboring MIT campus.

The lab will be co-chaired by IBM Research VP of AI and IBM Q, Dario Gil, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:

  • AI algorithms: Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems, and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.
  • Physics of AI: Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and also researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.
  • Application of AI to industries: Given its location in the IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and the optimum treatment paths for specific patients.
  • Advancing shared prosperity through AI: The MIT-IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.

In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies that are developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster an adherence to the ethical application of AI.

Both MIT and IBM have been pioneers in artificial intelligence research, and the new AI lab builds on a decades-long research relationship between the two. In 2016, IBM Research announced a multi-year collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence.

The collaboration has brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision. In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and Genomics.

For more information, visit MITIBMWatsonAILab.mit.edu.

Written by turbotodd

September 7, 2017 at 9:09 am

IBM Watson To Generate Match Highlights At The U.S. Open


IBM has announced it is launching IBM Watson Media, a new suite of AI-powered solutions on the IBM Cloud that analyze images, video, language, sentiment and tone, at the US Open.

By combining IBM Watson with IBM’s video capabilities, the United States Tennis Association (USTA) will be able to rapidly share highlight videos of more matches while engaging and informing fans more than ever before.

The US Open will use one of the first solutions available through IBM Watson Media, called Cognitive Highlights. Developed at IBM Research with IBM iX, Cognitive Highlights can identify a match’s most important moments by analyzing statistical tennis data, crowd sounds, and player reactions using both action and facial-expression recognition.

The system then ranks the shots from seven US Open courts and auto-curates the highlights, which simplifies the video production process and ultimately positions the USTA team to scale and accelerate the creation of cognitive highlight packages.
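IBM hasn’t published the details of that ranking model, but conceptually it comes down to fusing several per-moment signals into a single excitement score and keeping the top results. The sketch below is purely illustrative; the weights, field names, and the assumption that each component is already scored on a 0-to-1 scale are mine, not IBM’s:

    # Illustrative only: a toy version of multi-signal highlight ranking.
    # Assumes each component has already been normalized to a 0-1 score.
    WEIGHTS = {"crowd_noise": 0.40, "player_reaction": 0.35, "match_stats": 0.25}

    def excitement_score(moment):
        """Fuse the normalized component scores into one ranking value."""
        return sum(WEIGHTS[key] * moment[key] for key in WEIGHTS)

    def top_highlights(moments, n=10):
        """Return the n highest-scoring candidate moments across all courts."""
        return sorted(moments, key=excitement_score, reverse=True)[:n]

    # Two hypothetical candidate moments from different courts:
    candidates = [
        {"court": 7, "crowd_noise": 0.9, "player_reaction": 0.8, "match_stats": 0.6},
        {"court": 3, "crowd_noise": 0.4, "player_reaction": 0.5, "match_stats": 0.9},
    ]
    print(top_highlights(candidates, n=1))  # the court 7 moment scores higher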

The finished highlight videos will be available in four ways:

  • Each day, the USTA will post a Highlight of the Day, as ranked by Watson, on its Facebook page.
  • Fans who “favorite” players on the US Open apps will receive real-time push notification alerts about those players’ highlights. Fans on iOS 10 can play the highlights within the lock screen.
  • Video highlights will be available on player bio pages across all of the USTA’s digital platforms.
  • Onsite, highlights will play in the players’ lounge and in the fan-facing IBM Watson Experience on the plaza near Court 9.

“The US Open is packed with so much action across so many courts that even the fastest video team is challenged to keep pace with what’s happening,” said Noah Syken, IBM VP of Sports & Entertainment Partnerships. “To meet that challenge, Watson is now watching the matches alongside the USTA to help bring fans closer to the best moments across the courts shortly after they happen. We’re seeing this technology come to life through tennis, but the entire IBM Watson Media portfolio has the potential to impact many industries.”

Written by turbotodd

August 31, 2017 at 8:54 am

Posted in 2017, cognitive computing, ibm watson, us open


Codify Academy Uses IBM Cloud, Watson to Design Cognitive Chatbot


IBM recently announced that Codify Academy, a San Francisco-based developer education startup, tapped into IBM Cloud’s cognitive services to create an interactive cognitive chatbot, Bobbot, that is improving student experiences and increasing enrollment.

Using the IBM Watson Conversation Service, Bobbot fields questions from prospective and current students in natural language via the company’s website.

Since implementing the chatbot, Codify Academy has engaged thousands of potential leads through live conversation between the bot and site visitors, leading to a 10 percent increase in converting these visitors into students.

IBM Cloud with Watson provided Codify Academy with the speed and scale needed to immediately start building with cognitive intelligence. Bobbot can answer more than 200 common questions about enrollment, course and program details, tuition, and prerequisites, in turn enabling Codify Academy staff to focus on deeper, more meaningful exchanges.

For example, students can ask questions such as “What kind of job will I be able to find after I complete the program?” or “How do I apply, and what are tuition rates?”
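For the curious, the mechanics behind a bot like this are fairly simple: each visitor utterance is posted to a Watson Conversation workspace, and the service returns the bot’s reply along with a context object that carries the dialog state from turn to turn. The snippet below is a minimal sketch against the 2017-era Conversation REST API using Python’s requests library; the workspace ID, credentials, and helper name are placeholders, not Codify Academy’s actual code:

    import requests

    CONVERSATION_URL = ("https://gateway.watsonplatform.net/conversation/api"
                        "/v1/workspaces/{workspace_id}/message")
    VERSION = "2017-05-26"  # API version date in use at the time

    def ask_bobbot(question, workspace_id, username, password, context=None):
        """Send one visitor question to a Watson Conversation workspace and
        return the bot's reply text plus the updated dialog context."""
        resp = requests.post(
            CONVERSATION_URL.format(workspace_id=workspace_id),
            params={"version": VERSION},
            auth=(username, password),           # service credentials
            json={"input": {"text": question},   # the visitor's question
                  "context": context or {}},     # dialog state, passed back each turn
            timeout=10,
        )
        resp.raise_for_status()
        body = resp.json()
        return " ".join(body["output"]["text"]), body["context"]

    # Example (placeholder credentials):
    # reply, ctx = ask_bobbot("How do I apply, and what are tuition rates?",
    #                         "YOUR_WORKSPACE_ID", "username", "password")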

“We saw a huge spike in interest from potential students in the early days of our company, which is a fortunate problem to have, but made us realize we needed to quickly build a solution to help us scale,” said Matt Brody at Codify Academy. “IBM Cloud gave us the infrastructure and access to cognitive services, including Watson, that we needed to quickly build and deploy an intelligent and intuitive bot – in turn helping us to field all inquiries and significantly increase enrollment.”

Codify Academy runs on the IBM Cloud platform, which has become one of the largest open, public cloud deployments in the world. It features more than 150 tools and services, spanning categories of cognitive intelligence, blockchain, security, Internet of Things, quantum computing and DevOps.

“We have designed our cloud platform to serve as the best possible engine for cognitive apps such as chatbots,” said Adam Gunther, Director, IBM Cloud. “This enables companies to harness and fine-tune incoming data quickly to create highly tailored user experiences.”

To learn more about Codify Academy, visit http://codifyacademy.com/.

Written by turbotodd

August 4, 2017 at 1:42 pm

Bot to Bot


Facebook’s been in the news a fair amount this week.

Pivotal Research lowered its rating on Facebook to “sell” from “hold,” according to a report from CNBC, explaining that the company is “facing digital ad saturation risk as large companies are ‘scrutinizing’ their marketing budgets.”

This despite the fact that Facebook has been one of the best-performing large-cap stocks in the market, growing nearly 50 percent year to date.

Earlier today, Fortune reported that Facebook is amping up its artificial intelligence capabilities, buying Ozlo, a small bot specialist based in Palo Alto.

Ozlo focuses on “conversational” bots that talk to users, and most of the company’s employees will join Facebook’s Messenger team.

But the story that really seemed to grab the Facebook headlines this week was the one indicating that two of its bots, instead of just talking to humans, were talking to one another in a language the chatbots had “invented.”

Before you go all “Westworld” on me, let’s separate the fact from the fiction.

In an account from Karissa Bell at Mashable, Bell provided some much needed background to stifle the hype and get to the actual innovation. Bell wrote that “Facebook’s AI researchers published a paper back in June, detailing their efforts to teach chatbots to negotiate like humans. Their intention was to train the bots not just to imitate human interactions, but to actually act like humans.”

Which humans, we’re not yet sure. The Mooch? Kim Kardashian? Kid Rock (soon to be Senator Rock, to you)?

Unclear.

But Bell’s observation was that the real story wasn’t just the chatbots coming up with their own language. It was this: not only did the bots learn to act like humans, but actual humans were apparently unable to tell the difference between bots and humans.

Where the bot chatter went off the rails was in its use of English, whose grammar and syntax rules the bots were never instructed to follow. Hence shortcut phrases like “I can can I I everything else.”

In the meantime, Elon Musk has cried AI Chicken Little once again, suggesting all this neural networking could be the end of humankind once and for all and that Zuck doesn’t “fully understand” the potential danger posed by AI.

The truth probably rests somewhere in the vast middle ground between the two, a truth I imagine the bots are having a good chuckle over as they create the new digital Esperanto they’ll need to take over the world.

Written by turbotodd

August 1, 2017 at 10:59 am

Droning On A Bad Santa


Trying to get ready for the holidays?

You’re not the only ones.

United Parcel Service and FedEx Corp. are having a hard time keeping up with holiday shipping volumes that have “blown past expectations,” writes The Wall Street Journal. And the delayed delivery of millions of orders could rapidly become the Cyber Grinch that stole this Christmas.

Meanwhile, back at Santa’s workshop in Cambridge, U.K., Amazon has apparently made its first customer delivery by drone. Its cargo? Some popcorn and — of course — a Fire TV video-streaming device.

Also according to the Journal, the drone made the trip in about 13 minutes, well ahead of the promised 30-minute window for its “Prime Air” drone delivery service.

“But can it keep the pizza warm for that duration?” we ask.

If you’re tired of waiting for the drones to arrive, perhaps you’d like to learn more about our coming machine overlords?

The New York Times Magazine goes deep and long on the “Google Brain,” and the advances the company has made with its neural network capabilities for human language translation.

Before you get too excited about all these machines doing all this learning, however, you might want to take a second look at your vendor’s privacy policy.

As an example, Evernote has announced a new privacy policy slated to take effect on January 23, writes TechCrunch, one that is expected to “let its machine learning algorithms crunch your data” and also “let some of its employees read your notes so it can ensure that the machine learning is functioning properly.”

But worry not, Evernote responds, they’ve got someone watching the watchers: “Evernote claims that only a limited number of employees who have undergone background checks will be able to access user data and that users can encrypt notes they consider sensitive to prevent employees from reading them.”

How reassuring! If only I had a smart Amazon drone I could hire out to keep an eye on all those Evernote monitors!

Written by turbotodd

December 14, 2016 at 3:10 pm
