Turbotodd

Ruminations on tech, the digital media, and some golf thrown in for good measure.

Keep An Eye on Those Algos


If you want a fresh look into our collective AI future, look no further than ProPublica’s recent report on how it was able to target ad pitches to the news feeds of “Jew haters” and users with similar anti-Semitic propensities.

In its test, ProPublica said it paid $30 to target such groups with three “promoted” posts, which then displayed a ProPublica article or post in those users’ feeds.

ProPublica indicates that Facebook approved all three ads within 15 minutes.

They report that Facebook removed the categories immediately once contacted, but the most interesting part of the experiment was that the categories had originally been created by algorithms, not by us mere mortals.

Facebook’s Rob Leathern responded: “We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

Uh, yeah, little bit.

This is only one minor example of the role algorithms are going to play in the brave new world where fallible humans continue to shape policy and protocols for their algorithmic counterparts.

In December of last year, Harvard Business Review published a story entitled “Hiring Algorithms Are Not Neutral.” In that story, they wrote that “algorithms are, in part, our opinions embedded in code. They reflect human biases and prejudices that lead to machine learning mistakes and misinterpretations. This bias shows up in numerous aspects of our lives, including algorithms used for electronic discovery, teacher evaluations, car insurance, credit score rankings, and university admissions.”

Another example more specific to the HR focus of the piece:

At their core, algorithms mimic human decision making. They are typically trained to learn from past successes, which may embed existing bias. For example, in a famous experiment, recruiters reviewed identical resumes and selected more applicants with white-sounding names than with black-sounding ones. If the algorithm learns what a “good” hire looks like based on that kind of biased data, it will make biased hiring decisions. The result is that automatic resume screening software often evaluates job applicants based on subjective criteria, such as one’s name. By latching on to the wrong features, this approach discounts the candidate’s true potential.
– via Harvard Business Review
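To make that failure mode concrete, here’s a minimal sketch (with entirely hypothetical data, not drawn from the experiment HBR cites) of a naive screener that “learns from past successes” by scoring each feature value with its historical hire rate. Because the invented history favors white-sounding names, the model inherits that bias even though the name carries no merit signal:

```python
# Hypothetical illustration of bias inherited from training data.
from collections import defaultdict

# Invented past hiring decisions: (name_group, years_experience, hired)
history = [
    ("white-sounding", 2, True),  ("white-sounding", 1, True),
    ("white-sounding", 3, True),  ("white-sounding", 1, False),
    ("black-sounding", 3, False), ("black-sounding", 2, False),
    ("black-sounding", 4, True),  ("black-sounding", 2, False),
]

def hire_rate_by(feature_index):
    """Historical hire rate for each value of one feature."""
    hires, totals = defaultdict(int), defaultdict(int)
    for row in history:
        totals[row[feature_index]] += 1
        if row[2]:  # the recorded hiring decision
            hires[row[feature_index]] += 1
    return {k: hires[k] / totals[k] for k in totals}

# The name feature says nothing about ability, yet a model trained on
# this history scores white-sounding names three times higher:
print(hire_rate_by(0))  # {'white-sounding': 0.75, 'black-sounding': 0.25}
```

The point of the toy example is that nothing in the scoring logic is malicious; the skew lives entirely in the historical labels the model is told to imitate.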

So how do we avoid algorithmic bias as we start to allow the machines to conduct more of the necessary, but often mundane, tasks we humans prefer to avoid?

In the case of HR, they suggest organizations stop making hard screening decisions based solely on an algorithm, and instead encourage human review: ask experienced people who have been through bias training to oversee selection and evaluation.
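One way to read that advice in code: use the algorithm’s score to rank and route, never to auto-reject. A minimal sketch of the idea (the function name and threshold are hypothetical, not from the HBR piece):

```python
# Hypothetical human-in-the-loop routing: the algorithm can fast-track
# strong candidates, but it never rejects anyone on its own.
def route(candidate_score, auto_advance=0.8):
    """Return the next step for a candidate given an algorithmic score."""
    if candidate_score >= auto_advance:
        return "advance"       # strong signal: move to the next round
    return "human_review"      # everything else goes to a bias-trained reviewer

print(route(0.9))  # advance
print(route(0.3))  # human_review
```

Notice the asymmetry is the whole design choice: the machine is trusted to say yes quickly, but a “no” always costs a human a look.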

But doesn’t that divert unnecessary resources back to us humans when the machines are supposed to now do all the work?

Well, yes, but it’s kind of difficult to get hired if the perfect machine is using its algos to hire an imperfect human.


Written by turbotodd

September 15, 2017 at 9:18 am

Posted in 2017, AI, HR, machine learning
