Turbotodd

Ruminations on tech, the digital media, and some golf thrown in for good measure.

Archive for the ‘machine learning’ Category

That’s a Big Deal!


Happy Monday.

A big deal could hardly wait for Monday morning. In fact, it's the biggest tech deal ever attempted.

Broadcom Ltd. has offered roughly $105 billion for Qualcomm Inc., reports Bloomberg, “kicking off an ambitious attempt at the largest technology takeover ever in a deal that would rock the electronics industry.”

According to the report, Broadcom has offered $70 a share in cash and stock for Qualcomm, the world's largest maker of mobile phone chips, a roughly 28% premium over its pre-offer share price.

If the deal goes through and wins regulatory approval, it would make Broadcom the third-largest chipmaker, behind Intel and Samsung. And as Bloomberg estimates, the deal would dwarf the previous record, Dell's $67 billion acquisition of EMC in 2015.
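The back-of-the-envelope math is simple enough. A sketch only; it assumes the reported 28% premium is measured against Qualcomm's pre-offer closing price, which Bloomberg doesn't spell out:

# Back-of-the-envelope deal math (illustrative; assumes the 28% premium
# is measured against Qualcomm's pre-offer closing price).
offer_per_share = 70.00              # Broadcom's cash-and-stock offer
premium = 0.28                       # premium reported by Bloomberg

implied_prior_close = offer_per_share / (1 + premium)
print(f"Implied pre-offer price: ${implied_prior_close:.2f}")   # ~$54.69

broadcom_qualcomm = 105e9            # reported value of the offer
dell_emc = 67e9                      # Dell's 2015 acquisition of EMC
print(f"Bigger than Dell-EMC by ~${(broadcom_qualcomm - dell_emc) / 1e9:.0f}B")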

VentureBeat writes that “the deal would give Broadcom a foothold in the mobile communications market,” but it comes at a time when Qualcomm is still trying to close its own $38 billion bid for NXP Semiconductors, an acquisition Qualcomm wants to use to push into self-driving technology.

And as VentureBeat points out, Intel is certainly not sitting still as competitive pressure emanates from the likes of Nvidia, which is moving into the growing fields of machine learning and artificial intelligence, both of which demand higher-performance semiconductors.

No word at press time as to whether billionaire Prince al-Waleed bin Talal, a prominent member of Saudi Arabia’s royal family and one of the world’s wealthiest men, will be making an investment in this new venture.

The Prince was detained by Saudi authorities on Saturday night, according to the Wall Street Journal, along with at least 10 other princes of the Saudi royal family and more than two dozen current ministers. Mr. al-Waleed, a top investor in tech companies including Apple and Twitter, faces charges of money laundering.

Written by turbotodd

November 6, 2017 at 9:02 am

Keep An Eye on Those Algos


If you want a fresh look into our collective AI future, look no further than ProPublica's recent report on how it was able to target pitches to the news feeds of “Jew haters” and users with similar anti-Semitic propensities.

In its test, ProPublica said it paid $30 to target such groups with three “promoted” posts, which then displayed a ProPublica article or post in those users' feeds.

ProPublica indicates that Facebook approved all three ads within 15 minutes.

ProPublica reports that Facebook removed the categories immediately once contacted, but the most interesting part of the experiment was that the categories had originally been created by algorithms, not by us mere mortals.

Facebook's Rob Leathern responded: “We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

Uh, yeah, little bit.

This is only one minor example of the role algorithms are going to play in the brave new world where fallible humans continue to shape policy and protocols for their algorithmic counterparts.

In December of last year, Harvard Business Review published a story entitled “Hiring Algorithms Are Not Neutral.” In that story, they wrote that “algorithms are, in part, our opinions embedded in code. They reflect human biases and prejudices that lead to machine learning mistakes and misinterpretations. This bias shows up in numerous aspects of our lives, including algorithms used for electronic discovery, teacher evaluations, car insurance, credit score rankings, and university admissions.”

Another example more specific to the HR focus of the piece:

At their core, algorithms mimic human decision making. They are typically trained to learn from past successes, which may embed existing bias. For example, in a famous experiment, recruiters reviewed identical resumes and selected more applicants with white-sounding names than with black-sounding ones. If the algorithm learns what a “good” hire looks like based on that kind of biased data, it will make biased hiring decisions. The result is that automatic resume screening software often evaluates job applicants based on subjective criteria, such as one’s name. By latching on to the wrong features, this approach discounts the candidate’s true potential.
– via Harvard Business Review
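To make that mechanism concrete, here is a minimal sketch in Python (toy, randomly generated data; the feature names are hypothetical) of how a model trained on biased historical hiring labels will happily assign weight to a name-derived proxy feature:

# Toy illustration of bias leaking from historical labels into a model.
# All data and feature names here are made up for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

years_experience = rng.uniform(0, 10, n)      # genuine signal
white_sounding_name = rng.integers(0, 2, n)   # irrelevant proxy feature

# Historical "hired" labels: driven by experience, but with a biased
# bump for white-sounding names, mirroring the resume experiment above.
logits = 0.5 * years_experience - 3 + 1.5 * white_sounding_name
hired = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([years_experience, white_sounding_name])
model = LogisticRegression().fit(X, hired)

# The model dutifully learns the bias: the name feature gets real weight.
print(dict(zip(["experience", "white_sounding_name"], model.coef_[0].round(2))))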

So how do we avoid algorithmic bias as we start to allow the machines to conduct more of the necessary, but often mundane, tasks we humans prefer to avoid?

In the case of HR, HBR suggests organizations stop making hard screening decisions based solely on an algorithm. Instead, encourage human reviews, asking experienced people who have been through bias training to oversee selection and evaluation.
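In code, such a policy might look something like this (a sketch; the threshold and labels are hypothetical and would be tuned per organization). The model may flag strong matches, but it never issues a hard rejection on its own:

# Hypothetical human-in-the-loop screening policy: the algorithm may
# shortlist, but every other case (including every would-be rejection)
# is routed to a bias-trained human reviewer.
def screening_decision(model_score: float) -> str:
    SHORTLIST_THRESHOLD = 0.85       # illustrative; tuned per organization
    if model_score >= SHORTLIST_THRESHOLD:
        return "shortlist_for_human_review"
    return "route_to_human_reviewer"  # no hard algorithmic rejections

print(screening_decision(0.91))  # shortlist_for_human_review
print(screening_decision(0.40))  # route_to_human_reviewer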

But doesn't that divert resources back to us humans, when the machines are supposed to be doing all the work now?

Well, yes, but it’s kind of difficult to get hired if the perfect machine is using its algos to hire an imperfect human.

Written by turbotodd

September 15, 2017 at 9:18 am

Posted in 2017, AI, HR, machine learning

Mark III Systems Launches Cognitive Call Center With Watson


If you’re trying to keep up with all the AI and machine learning companies sprouting up on the landscape…yeah, well, you might need an algorithm to help you with that.

The next best thing might be Bloomberg Beta’s Shivon Zilis, who provides a very thorough rundown on “Machine Intelligence 3.0” here. There’s way too much AI meat on ‘dem bones for me to go into the details here, but Zilis summarizes it by explaining that “a one stop shop of the machine intelligence stack is coming into view.”

On the IBM AI front, IBM today announced that Texas-based IT solutions provider and IBM Business Partner Mark III Systems has built a platform using cognitive technologies on IBM Cloud to help call centers increase efficiency, improve employee productivity and make more informed decisions based on near real-time insights.

Most call centers record phone conversations as unstructured data, searchable only by manually entered “tags.” If a conversation is relevant to an audit, it must be transcribed manually, so reports can take weeks, which can mean decreased productivity and potentially decreased customer satisfaction.

Mark III Systems’ Cognitive Call Center platform transforms the traditional call center model by using IBM Cloud and Watson to help agents identify, filter, analyze and take actions on inbound and outbound calls.

The platform uses IBM Cloud Object Storage to manage the unstructured data, and it uses Watson APIs, specifically Watson Speech to Text and Watson Tone Analyzer, to automate the transcription and tagging of audio, provide near real-time analytics and actions and enable deeper analytics for audit situations.
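A rough sketch of what that pipeline might look like in Python against the public Watson REST APIs of the era follows. The endpoints, version date, and credentials here are assumptions drawn from IBM's documentation, not details of Mark III's actual implementation:

# Hypothetical sketch of a call-center audio pipeline on Watson APIs.
# Endpoints and version date are assumptions based on 2017-era public
# docs; the real BlueChasm platform's internals are not public.
import requests

STT_URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
TONE_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"
AUTH = ("apiuser", "apipassword")  # placeholder service credentials

def transcribe_call(wav_path: str) -> str:
    """Send recorded call audio to Watson Speech to Text."""
    with open(wav_path, "rb") as audio:
        resp = requests.post(
            STT_URL,
            auth=AUTH,
            headers={"Content-Type": "audio/wav"},
            data=audio,
        )
    resp.raise_for_status()
    results = resp.json()["results"]
    return " ".join(r["alternatives"][0]["transcript"] for r in results)

def tag_tone(transcript: str) -> dict:
    """Tag the transcript with Watson Tone Analyzer for search and audits."""
    resp = requests.post(
        TONE_URL,
        auth=AUTH,
        params={"version": "2017-09-21"},
        json={"text": transcript},
    )
    resp.raise_for_status()
    return resp.json()

# transcript = transcribe_call("inbound-call-0001.wav")
# tones = tag_tone(transcript)  # store alongside the audio object's metadata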

Mark III's development unit, BlueChasm, leveraged virtually the entire IBM development-to-deployment stack to create the cloud-based platform with an open API. With a highly repeatable, flexible solution, Mark III aims to shake up the call center market by giving its clients cognitive business insights in near real time.

IBM has also launched the Watson Build, a new challenge designed to support its channel partners as they bring a cognitive solution to market. The deadline to apply is May 15, 2017, and businesses can learn more here.

Written by turbotodd

May 2, 2017 at 9:13 am

IBM’s PowerAI Distribution To Support Google TensorFlow


IBM has announced that its PowerAI distribution, which packages popular open-source machine learning and deep learning frameworks for the POWER8 architecture, now supports TensorFlow 0.12, the framework originally created by Google.

TensorFlow support through IBM PowerAI provides enterprises with another option for fast, flexible, and production-ready tools and support for developing advanced machine learning products and systems.
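For a sense of what developers run on such a platform, here is a minimal graph-mode example in the TensorFlow 0.12-era API. Nothing in it is PowerAI-specific; PowerAI supplies a tuned build of the same framework:

# Minimal TensorFlow 0.12-era (graph-mode) example: one linear layer.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4], name="features")
W = tf.Variable(tf.random_normal([4, 1]), name="weights")
b = tf.Variable(tf.zeros([1]), name="bias")
y = tf.matmul(x, W) + b

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(8, 4).astype(np.float32)
    print(sess.run(y, feed_dict={x: batch}).shape)  # (8, 1)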

As one of the fastest growing fields of machine learning, deep learning makes it possible to process enormous datasets with millions or even billions of elements and extract useful predictive models. Deep learning is transforming the businesses of leading consumer Web and mobile application companies, and it is quickly being adopted by more traditional business enterprises as well.

IBM developed PowerAI as an enterprise distribution of, and support channel for, the open-source machine and deep learning frameworks used to build cognitive applications. PowerAI helps reduce the complexity and risk of deploying these open-source frameworks for enterprises on the Power architecture.

PowerAI is tuned for performance and comes with enterprise support on the IBM Power Systems S822LC for HPC platform, used by thousands of developers in commercial, academic, and hyperscale environments. These systems pair IBM's POWER8 processor, which integrates NVIDIA NVLink, with NVIDIA's Pascal-based Tesla P100 GPU accelerators over the high-speed NVLink interface. The CPU-to-GPU and GPU-to-GPU NVLink connections give deep learning and analytics applications a performance boost.

Deep learning and other machine learning techniques are also being deployed across a wide range of industry sectors, including banking, automotive, and retail.

IBM also added the Chainer deep learning framework to the latest release of PowerAI.

PowerAI now includes Caffe, Chainer, TensorFlow, Theano, Torch, NVIDIA DIGITS, and several other machine and deep learning frameworks and libraries. It is available for download from https://www.ibm.com/us-en/marketplace/deep-learning-platform.

Written by turbotodd

January 27, 2017 at 11:55 am
