Turbotodd

Ruminations on tech, the digital media, and some golf thrown in for good measure.

Posts Tagged ‘AI’

Oculus Went, Atlas Shrugged


This has been quite the couple of weeks in technology.

First, Facebook’s F8 conference last week, then Microsoft Build and Google I/O this week.

Coupla things stand out.

First, Facebook’s introduction of Oculus Go, the company’s new, more friendly and accessible “everyperson” VR headset. Looking back, I think the announcement was almost understated (and yes, there was plenty of other news).

Like I always do when I’m considering a new tech purchase, I started searching for early reviews, and clearly Oculus/Facebook had done a good job getting devices into the hands of valued reviewers and sources. Almost to a one, they had mostly good things to say.

How often does that happen? So I bit, and my device arrived earlier this week.

Getting the Oculus Go was one of those moments you don’t forget: the first time I used the Netscape browser, the first time I used instant messaging, the first Tweet…Oculus Go was that moment for me where VR is concerned.

At SXSW two years ago I got to handle several of the leading VR goggles, and wasn’t really blown away by any of them — maybe it was all the umbilical cords and overweighted goggles. And maybe it was also the experiences themselves.

But when I got the Oculus Go last Friday, it just went. From the moment I turned it on to the quick setup to immediately blowing another $40 on a bundle of VR games and experiences, it was all easy peasy and sense surround.

Saturday morning, I downloaded a space shooter called “End Space.” It was so immersive that it had my brain convinced I was traveling through 360 degrees of space, a conviction that required a spinning chair and, later, produced the spinning head and dizziness to prove it.

Now THAT was the kind of VR experience I’d been waiting for. Like I said when I first got it, finally good enough is more than good enough. Sure, you can complain if you’d like about the resolution not being where we’d like it and the fact that it’s really only 180 degrees, but those are minor roadblocks.

And these are still early days.

No, if nothing else, Oculus Go opens one’s imagination as to all the possibilities of full VR immersion, from education to virtual travel to gaming (already a strong suit in VR) to remote work and collaboration and beyond.  You can’t smell it yet, but you sure can touch it and feel it, and it feels pretty cool.

The second thing was experiencing yet another of those AI aha moments.

My first was back in the Web’s early Dark Ages, in 1997, in the auditorium of the Equitable Center in NYC, where IBM’s Deep Blue defeated Garry Kasparov in chess. For those who saw it either there or online, it was an unmistakable leap forward: the machine beat the man. The next came when IBM’s Watson beat the world’s best at Jeopardy!, and when Google’s AlphaGo beat the best humans at Go.

This week, watching the Google I/O webcast, I saw the Google Duplex technology in action: a disembodied Google Assistant voice “smart” enough to call a hair salon and book an appointment over the phone, in a convincingly human voice. The salon attendant seemed none the wiser.

I don’t know whether that officially passes the Turing Test, but it’s pretty damned close.

And yet the very next day, I was attending a social media seminar given by our friends at Fleishman Hillard, where I was introduced to “Lil Miquela,” an Instagram influencer with over 1.1 million followers.

Lil Miquela supports Black Lives Matter and seems to have a keen fashion sense. Lil Miquela is also not real. “She” was invented by an influencer marketing company called Brud, which Crunchbase says is “a group of problem solvers specializing in robotics, AI, and their applications to media businesses.”

Brud is in the business of selling brands access to made-up influencers like Lil Miquela, and is backed by the likes of Sequoia Capital. And if you think about it, such a venture makes sense. As our Fleishman friends explained, “You don’t have to worry about Lil Miquela and her friends doing something in Vegas they shouldn’t be doing.” A, because Lil Miquela isn’t real, and B, because she has no “real” friends.

In other words, Lil Miquela and her ilk are “brand safe,” so why wouldn’t a big brand associate itself with her/it?

As I said in a room filled with actual real people, “We’re entering a wild, wild ‘Westworld’ where there are no rules and the boundaries aren’t clear…which makes for a nice petri dish in which just about anything and everything can be manipulated by digital, social and, now, AI and VR media.” One day you’re talking to a Google Assistant making your hair appointment; the next day you’re talking to a fake virtual IRS agent who’s taking control of your tax refund for you.

On October 30, 1938, renowned actor Orson Welles aired a radio broadcast on CBS based on H. G. Wells’ novel The War of the Worlds. 

Because the program aired on “Mercury Theatre on the Air,” a “sustaining” show without commercial interruptions, it ran for 30 minutes straight, and listeners across the country mistook the science fiction for an actual news broadcast. It caused panic across the country, and people took to the streets. The Martians had arrived at Grover’s Mill!

That was 56 years before we saw the advent of the commercial Internet, and 80 years before we witnessed the Google Duplex phone call.

The saying used to go, “Truth is stranger than fiction.” Now, VR goggles and AI algorithms in tow, truth is increasingly turning into fiction, and that may be the slipperiest slope of all.

Written by turbotodd

May 11, 2018 at 10:21 am

Perspectives on AI


MIT Technology Review’s “The Download” recently reported that China’s artificial intelligence startups scored more funding than America’s last year.

Of the $15.2 billion invested globally in AI in 2017, 48 percent (roughly $7.3 billion) went to China and 38 percent (about $5.8 billion) to America. That’s the first time China’s AI startups surpassed those in the U.S. in terms of funding.

But The Download also observes that competition continues to be fierce across the board: AI startup investment rose 141 percent in 2017, and 1,100 new AI startups appeared last year.

The R&D and overall AI market may, in fact, be moving too fast.

A separate report from Science magazine suggests that AI may be grappling with a replication crisis in its research:

AI researchers have found it difficult to reproduce many key results, and that is leading to a new conscientiousness about research methods and publication protocols….The most basic problem is that researchers often don’t share their source code. At the AAAI meeting, Odd Erik Gundersen, a computer scientist at the Norwegian University of Science and Technology in Trondheim, reported the results of a survey of 400 algorithms presented in papers at two top AI conferences in the past few years. He found that only 6% of the presenters shared the algorithm’s code. Only a third shared the data they tested their algorithms on, and just half shared "pseudocode"—a limited summary of an algorithm. (In many cases, code is also absent from AI papers published in journals, including Science and Nature.)

Why are researchers holding back?

The article offers a few explanations: some code is a work in progress, some is owned by a company, and some is held tightly by researchers eager to stay ahead of the competition.

IBM Research offered some assistance at the recent AAAI meeting: a tool for recreating unpublished source code automatically. Itself a neural network, it scans an AI research paper looking for a chart or diagram describing a neural net, parses those data into layers and connections, and generates the network as new code.
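IBM hasn’t published the tool’s internals here, but the final step (emitting network code from a parsed diagram) is easy to picture. Below is a minimal Python sketch of just that step; the layer-spec format, the templates, and the PyTorch target are my assumptions, not IBM’s.

```python
# Minimal sketch: emit PyTorch-style source from a layer spec that a
# diagram-parsing tool might produce. The spec format is hypothetical.

LAYER_TEMPLATES = {
    "conv":    "nn.Conv2d({in_ch}, {out_ch}, kernel_size={k})",
    "relu":    "nn.ReLU()",
    "pool":    "nn.MaxPool2d({k})",
    "flatten": "nn.Flatten()",
    "linear":  "nn.Linear({in_f}, {out_f})",
}

def emit_network(spec, name="RecoveredNet"):
    """Turn a parsed layer list into Python source for an nn.Sequential."""
    layers = ",\n    ".join(
        LAYER_TEMPLATES[layer["type"]].format(**layer) for layer in spec
    )
    return ("import torch.nn as nn\n\n"
            f"{name} = nn.Sequential(\n    {layers},\n)\n")

# A spec as it might come out of the figure parser (hypothetical values
# for a small 28x28-input convnet).
parsed_spec = [
    {"type": "conv", "in_ch": 1, "out_ch": 32, "k": 3},
    {"type": "relu"},
    {"type": "pool", "k": 2},
    {"type": "flatten"},
    {"type": "linear", "in_f": 32 * 13 * 13, "out_f": 10},
]

print(emit_network(parsed_spec))
```

The real tool, being a neural network itself, presumably learns that mapping from diagram to code rather than hard-coding templates, but the output is the same idea: a runnable reconstruction of a network the paper only drew.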

At this week’s Index | San Francisco conference, on Wednesday at 9 AM PST, New York Times journalist and author John Markoff will be hosting a session entitled "Perspectives on AI." You can register to watch the livestream here.

Written by turbotodd

February 19, 2018 at 10:11 am

Posted in 2018, AI, artificial intelligence


IBM and MIT to Pursue Joint Research in Artificial Intelligence


IBM and MIT today announced that IBM plans to make a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI.

The collaboration aims to advance AI hardware, software and algorithms related to deep learning and other areas, increase AI’s impact on industries, such as health care and cybersecurity, and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.

The new lab will be one of the largest long-term university-industry AI collaborations to date, mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab in Cambridge — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square, in Cambridge, Massachusetts — and on the neighboring MIT campus.

The lab will be co-chaired by IBM Research VP of AI and IBM Q, Dario Gil, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:

  • AI algorithms: Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems, and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.
  • Physics of AI: Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and also researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.
  • Application of AI to industries: Given its location alongside the IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and optimal treatment paths for specific patients.
  • Advancing shared prosperity through AI: The MIT-IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.

In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster adherence to the ethical application of AI.

Both MIT and IBM have been pioneers in artificial intelligence research, and the new AI lab builds on a decades-long research relationship between the two. In 2016, IBM Research announced a multi-year collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence.

The collaboration has brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision. In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and Genomics.

For more information, visit MITIBMWatsonAILab.mit.edu.

Written by turbotodd

September 7, 2017 at 9:09 am

IBM Watson To Generate Match Highlights At The U.S. Open


IBM has announced it is launching IBM Watson Media, a new suite of AI-powered solutions on the IBM Cloud that analyze images, video, language, sentiment and tone, debuting at the US Open.

By combining IBM Watson with IBM’s video capabilities, the United States Tennis Association (USTA) will be able to rapidly share highlight videos of more matches while engaging and informing fans more than ever before.

The US Open will use one of the first solutions available through IBM Watson Media, called Cognitive Highlights. Developed at IBM Research with IBM iX, Cognitive Highlights can identify a match’s most important moments by analyzing statistical tennis data, sounds from the crowd, and a player’s reactions, using both action and facial-expression recognition.

The system then ranks the shots from seven US Open courts and auto-curates the highlights, which simplifies the video production process and ultimately positions the USTA team to scale and accelerate the creation of cognitive highlight packages.
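IBM hasn’t said exactly how those signals get combined, but the fuse-and-rank step is easy to sketch. Here’s a toy version in Python; the signal names, weights, and sample scores are all hypothetical, not IBM’s model.

```python
# Toy sketch of multi-signal highlight ranking. All weights and sample
# values are hypothetical; the real system's scoring is unpublished.
from dataclasses import dataclass

@dataclass
class Shot:
    clip_id: str
    crowd_noise: float       # 0-1, from audio analysis
    player_reaction: float   # 0-1, from action/facial-expression recognition
    point_importance: float  # 0-1, from match statistics (e.g., break point)

WEIGHTS = {"crowd_noise": 0.4, "player_reaction": 0.35, "point_importance": 0.25}

def excitement(shot: Shot) -> float:
    """Weighted fusion of the per-shot signals into one excitement score."""
    return (WEIGHTS["crowd_noise"] * shot.crowd_noise
            + WEIGHTS["player_reaction"] * shot.player_reaction
            + WEIGHTS["point_importance"] * shot.point_importance)

def curate_highlights(shots, top_n=5):
    """Rank shots from every court and keep the top clips."""
    return sorted(shots, key=excitement, reverse=True)[:top_n]

shots = [
    Shot("court1_shot17", 0.9, 0.8, 0.7),
    Shot("court5_shot03", 0.4, 0.2, 0.9),
    Shot("court9_shot11", 0.7, 0.9, 0.3),
]
for s in curate_highlights(shots, top_n=2):
    print(s.clip_id, round(excitement(s), 3))
```

A production system would presumably learn those weights rather than hand-tune them, but the ranking step works the same either way.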

The finished highlight videos will be available in four ways:

  • Each day, the USTA will post a Highlight of the Day, as ranked by Watson, on its Facebook page.
  • Fans who “favorite” players on the US Open apps will receive real-time push notification alerts about those players’ highlights. Fans on iOS 10 can play the highlights right from the lock screen.
  • On the player bio page, video highlights will be available across all of the USTA’s digital platforms.
  • Onsite, highlights will screen in the players’ lounge and in the fan-facing IBM Watson Experience on the plaza near Court 9.

“The US Open is packed with so much action across so many courts that even the fastest video team is challenged to keep pace with what’s happening,” said Noah Syken, IBM VP of Sports & Entertainment Partnerships. “To meet that challenge, Watson is now watching the matches alongside the USTA to help bring fans closer to the best moments across the courts shortly after they happen. We’re seeing this technology come to life through tennis, but the entire IBM Watson Media portfolio has the potential to impact many industries.”

Written by turbotodd

August 31, 2017 at 8:54 am

Posted in 2017, cognitive computing, ibm watson, us open


Codify Academy Uses IBM Cloud, Watson to Design Cognitive Chatbot


IBM recently announced that Codify Academy, a San Francisco-based developer education startup, tapped into IBM Cloud’s cognitive services to create an interactive cognitive chatbot, Bobbot, that is improving student experiences and increasing enrollment.

Using the IBM Watson Conversation Service, Bobbot fields questions from prospective and current students in natural language via the company’s website.

Since implementing the chatbot, Codify Academy has engaged thousands of potential leads through live conversation between the bot and site visitors, leading to a 10 percent increase in converting these visitors into students.

IBM Cloud with Watson provided Codify Academy with the speed and scale needed to immediately start building with cognitive intelligence. Bobbot can answer more than 200 common questions about enrollment, course and program details, tuition, and prerequisites, in turn enabling Codify Academy staff to focus on deeper, more meaningful exchanges.

For example, students can ask questions such as “What kind of job will I be able to find after I complete the program?” or “How do I apply, and what are tuition rates?”
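For the curious, here’s roughly what a round-trip to the Watson Conversation Service’s v1 message endpoint looked like circa 2017, sketched in Python with the requests library. The credentials and workspace ID are placeholders, and this is a period sketch under my own assumptions (the service has since been folded into Watson Assistant), not Codify Academy’s actual integration.

```python
# Sketch of a Watson Conversation (v1, circa 2017) message round-trip.
# Credentials and workspace_id are placeholders; the service now lives on
# as Watson Assistant, so endpoints and auth have since changed.
import requests

CONVERSATION_URL = ("https://gateway.watsonplatform.net/conversation/api"
                    "/v1/workspaces/{workspace_id}/message")

def ask_bot(question, workspace_id, username, password, context=None):
    """Send one user utterance; return the bot's reply and the new context."""
    resp = requests.post(
        CONVERSATION_URL.format(workspace_id=workspace_id),
        params={"version": "2017-05-26"},
        auth=(username, password),
        json={"input": {"text": question}, "context": context or {}},
    )
    resp.raise_for_status()
    body = resp.json()
    # output.text holds the dialog responses; context carries state forward
    # so follow-up questions stay in the same conversation.
    return " ".join(body["output"]["text"]), body["context"]

reply, ctx = ask_bot("How do I apply, and what are tuition rates?",
                     workspace_id="YOUR_WORKSPACE_ID",
                     username="YOUR_USERNAME",
                     password="YOUR_PASSWORD")
print(reply)
```

The heavy lifting (intent classification, dialog flow, the 200-plus answers) lives in the workspace configuration; the application code is little more than this loop of question in, answer and context out.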

“We saw a huge spike in interest from potential students in the early days of our company, which is a fortunate problem to have, but made us realize we needed to quickly build a solution to help us scale,” said Matt Brody at Codify Academy. “IBM Cloud gave us the infrastructure and access to cognitive services, including Watson, that we needed to quickly build and deploy an intelligent and intuitive bot – in turn helping us to field all inquiries and significantly increase enrollment.”

Codify Academy runs on the IBM Cloud platform, which has become one of the largest open, public cloud deployments in the world. It features more than 150 tools and services, spanning categories of cognitive intelligence, blockchain, security, Internet of Things, quantum computing and DevOps.

“We have designed our cloud platform to serve as the best possible engine for cognitive apps such as chatbots,” said Adam Gunther, Director, IBM Cloud. “This enables companies to harness and fine-tune incoming data quickly to create highly tailored user experiences.”

To learn more about Codify Academy, visit http://codifyacademy.com/.

Written by turbotodd

August 4, 2017 at 1:42 pm

Bot to Bot


Facebook’s been in the news a fair amount this week.

Pivotal Research lowered its rating on Facebook to “sell” from “hold,” according to a report from CNBC, explaining that the company is “facing digital ad saturation risk” as large advertisers scrutinize their marketing budgets.

This despite the fact that Facebook has been one of the best-performing large-cap stocks in the market, growing nearly 50 percent year to date.

Earlier today, Fortune reported that Facebook is amping up its artificial intelligence capabilities, buying Ozlo, a small bot specialist based in Palo Alto.

Ozlo focuses on “conversational” bots that talk to users, and most of the company’s employees will join Facebook’s Messenger team.

But the story that really seemed to grab the Facebook headlines this week was the one that indicated two of its bots, instead of just talking to humans, were talking to one another and in a language that the chatbots “invented.”

Before you go all “Westworld” on me, let’s separate the fact from the fiction.

In an account from Karissa Bell at Mashable, Bell provided some much needed background to stifle the hype and get to the actual innovation. Bell wrote that “Facebook’s AI researchers published a paper back in June, detailing their efforts to teach chatbots to negotiate like humans. Their intention was to train the bots not just to imitate human interactions, but to actually act like humans.”

Which humans, we’re not yet sure. The Mooch? Kim Kardashian? Kid Rock (soon to be Senator Rock, to you)?

Unclear.

But Bell’s observation was that the story wasn’t just about the bots coming up with their own language. It was this: not only did the bots learn to act like humans, but actual humans were apparently unable to tell the bots and the humans apart.

Where the bot chatter went off the rails was in its use of the English language, whose grammar and syntax rules the bots were never instructed to follow. Hence shortcut phrases like “I can can I I everything else.”
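You can see the mechanism in miniature. Nothing in the bots’ training objective rewarded grammatical English, so the optimizer happily repeated whatever tokens moved the negotiation reward. Here’s a toy Python illustration of that failure mode (my own construction, nothing like Facebook’s actual models): a hill-climber composes a fixed-length message scored only on literal token counts, and English goes out the window.

```python
# Toy illustration (not Facebook's model): optimize a fixed-length message
# for task reward alone, with no grammar term anywhere in the objective.
import random

random.seed(0)
VOCAB = ["i", "can", "the", "ball", "hat", "book"]
ITEMS = {"ball": 2, "hat": 1, "book": 0}   # quantities the speaker must convey
MSG_LEN = 6

def reward(msg):
    """Listener decodes item counts literally; nothing scores grammar."""
    return -sum(abs(msg.count(item) - want) for item, want in ITEMS.items())

def hill_climb(steps=2000):
    msg = [random.choice(VOCAB) for _ in range(MSG_LEN)]
    for _ in range(steps):
        candidate = msg[:]
        candidate[random.randrange(MSG_LEN)] = random.choice(VOCAB)
        if reward(candidate) >= reward(msg):   # keep any non-worsening edit
            msg = candidate
    return msg

# Converges on something like "ball i hat i ball can": the item counts are
# right, the English is gone -- the same failure mode Mashable described.
print(" ".join(hill_climb()))
```

The fix, as the researchers noted, is to keep a pressure toward human language in the objective; remove it, and shorthand like the above is the rational outcome, not a robot uprising.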

In the meantime, Elon Musk has cried AI Chicken Little once again, suggesting all this neural networking could be the end of humankind once and for all and that Zuck doesn’t “fully understand” the potential danger posed by AI.

The truth probably rests somewhere in the vast middle ground between the two, a truth I imagine the bots are having a good chuckle over as they create the new digital Esperanto they’ll need to take over the world.

Written by turbotodd

August 1, 2017 at 10:59 am

Droning On A Bad Santa


Trying to get ready for the holidays?

You’re not the only ones.

United Parcel Service and FedEx Corp. are having a hard time keeping up with holiday shipping volumes that have “blown past expectations,” writes The Wall Street Journal. And the delayed delivery of millions of orders could rapidly become the Cyber Grinch that stole this Christmas.

Meanwhile, back at Santa’s workshop in Cambridge, U.K., Amazon has apparently made its first customer delivery by drone. Its cargo? Some popcorn and — of course — a Fire TV video-streaming device.

Also according to the Journal, the drone made the trip in about 13 minutes, well ahead of the promised 30-minute window for Amazon’s “Prime Air” drone delivery service.

“But can it keep the pizza warm for that duration?” we ask.

If you’re tired of waiting for the drones to arrive, perhaps you’d like to learn more about our coming machine overlords?

The New York Times Magazine goes deep and long on the “Google Brain,” and the advances the company has made with its neural network capabilities for human language translation.

Before you get too excited about all these machines doing all this learning, however, you might want to take a second look at your vendor’s privacy policy.

As an example, Evernote is slated to announce a new policy on January 23, writes TechCrunch, one that is expected to “let its machine learning algorithms crunch your data” and also “let some of its employees read your notes so it can ensure that the machine learning is functioning properly.”

But worry not, Evernote responds, they’ve got someone watching the watchers: “Evernote claims that only a limited number of employees who have undergone background checks will be able to access user data and that users can encrypt notes they consider sensitive to prevent employees from reading them.”

How reassuring! If only I had my smart Amazon drone that I could hire out to keep an eye on all those Evernote monitors!

Written by turbotodd

December 14, 2016 at 3:10 pm
