Ruminations on tech, the digital media, and some golf thrown in for good measure.

Archive for the ‘algorithms’ Category

No Laughing Matter


Did you hear the one about the personal voice assistant that, for no apparent reason whatsoever, started breaking into strange laughing noises at random?


Well, I heard about it firsthand, but apparently I missed the opportunity to hear any random guffawing from my own Amazon Tap.

According to Bloomberg, Amazon confirmed yesterday that in rare circumstances, the voice assistant can mistakenly hear the phrase “Alexa, laugh,” which under its normal programming would cause it to chuckle. 

Amazon has issued a fix for the problem, and is changing the trigger phrase for laughing to “Alexa, can you laugh?” instead.

A few moments ago, I tried the new command, and all I got from Alexa was a “Tee hee.”  

How very anti-climactic.

This quirk has been referred to in AI circles as a “false positive.”

Let’s just hope the voice commands for the AI algos running the armed drones have their laughs in order.

Written by turbotodd

March 8, 2018 at 9:02 am

Facebook AI to Detect Suicidal Posts


TechCrunch is reporting on Facebook’s new “proactive detection” artificial intelligence technology which will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or even contact local first-responders.

Facebook also will use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and tools to instantly surface local language resources and first-responder contact info. It’s also dedicating more moderators to suicide prevention, training them to deal with the cases 24/7, and now has 80 local partners like Save.org, National Suicide Prevention Lifeline and Forefront from which to provide resources to at-risk users and their networks.
– via TechCrunch

TechCrunch’s Josh Constine goes on to observe that this evolution could trigger “some dystopian fears about how else the technology could be applied” (scanning for political dissent or petty crimes, for example), but that Facebook had no comment on the possibility of such scans.

So how did they get to this point?

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “are you OK?” and “Do you need help?” “We’ve talked to mental health experts, and one of the best ways to help prevent suicide is for people in need to hear from friends or family that care about them,” Rosen says. “This puts Facebook in a really unique position. We can help connect people who are in distress connect to friends and to organizations that can help them.”
– via TechCrunch
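To get a feel for the approach, here's a minimal, purely hypothetical sketch of the kind of pattern matching Rosen describes: scoring a post higher when friends' comments contain concerned phrases like "are you OK?" The phrases and the scoring scheme below are my own illustrative assumptions, not Facebook's actual model, which presumably learns its patterns from manually reported posts rather than a hand-written list.

```python
# Toy illustration of comment-based risk scoring. Phrases and scoring
# are illustrative assumptions, not Facebook's actual (learned) model.
CONCERN_PHRASES = ["are you ok", "do you need help", "thinking of you"]

def risk_score(comments: list[str]) -> float:
    """Return the fraction of comments containing a concern phrase."""
    if not comments:
        return 0.0
    hits = sum(
        any(phrase in comment.lower() for phrase in CONCERN_PHRASES)
        for comment in comments
    )
    return hits / len(comments)

comments = ["Are you OK?", "Nice photo!", "Do you need help?"]
print(round(risk_score(comments), 2))  # 2 of 3 comments match -> 0.67
```

A real system would feed signals like this into a trained classifier alongside the post's own words and imagery, then route high-scoring posts to human moderators.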

Written by turbotodd

November 27, 2017 at 4:46 pm
