Turbotodd

Ruminations on tech, the digital media, and some golf thrown in for good measure.

Archive for the ‘artificial intelligence’ Category

Nvidia’s New Chips

Nvidia announced its largest-ever acquisition today, offering $6.9 billion for data center chip technology maker, Mellanox.

Bloomberg is reporting that Nvidia beat out rivals that included Intel in the bidding process for the American-Israeli company. Nvidia’s founder, Jensen Huang, built his company’s data center business in under three years by “persuading owners of data centers that his graphics chips are the right solution for processing the increasingly large amounts of information needed for artificial intelligence work, such as image recognition.”

The growing reams of data being generated mean that work on AI and large databases needs to be split among multiple computers; simply using a faster processor isn’t enough, Huang said. To deal with this, data centers in the future will be built as though they are single giant computers “with tens of thousands of compute nodes,” requiring interconnections that let them work in parallel. Nvidia will use its newly acquired technology to make those giant warehouses full of machinery more efficient and effective, he said.

The deal may signal a resumption of consolidation in the $470 billion semiconductor industry, which has been reshaped over the past five years as companies combined to gain scale while battling rising costs and shrinking customer lists.

The deal will require regulatory approval.

ZDNet’s take

Nvidia’s purchase of Mellanox for $6.9 billion will translate into a broader data center play, beyond the graphics and high-performance computing markets.

  • Nvidia’s rivalry with Intel hits a new stage.
  • Nvidia gets more revenue diversification and data center sales.
  • Mellanox gives Nvidia more entries into high performance computing and the data center.
  • As artificial intelligence workloads become the norm, Nvidia with Mellanox becomes more of an architecture play.

And SiliconANGLE spoke with analyst Patrick Moorhead about the deal:

“Both Nvidia and Mellanox are big in the high performance computing, machine learning, automotive, public cloud and enterprise data center markets, and could bring even more value to customers when [their technologies are] combined.”

“Scale and diversification is everything in the chip business, and Nvidia gets both with this acquisition,” added Holger Mueller, principal analyst and vice president of Constellation Research Inc. “It allows the company to scale and diversify from its existing graphics, gaming and AI use cases. Getting in the data center is vital with the overall move to the public cloud, so if this goes through, it means Nvidia will become more relevant for both executives and developers alike.”

The financial spin: Nvidia is paying a 15% premium to Mellanox’s Friday closing level, and indicated the purchase would be immediately accretive to earnings, margins and cash flows.

Written by turbotodd

March 11, 2019 at 2:03 pm

Think Again

So much has happened since last I wrote!

I got sick as a dog (apparently spurred on by the early blooming of the Texas juniper cedars), but then I got well enough to travel out to San Francisco for IBM’s Think event (more on that in a moment).

The Mars Opportunity rover finally decided to call it quits after surviving 5,111 sols (the Martian equivalent of a day), when it was expected to last only 90! Talk about underpromising and overdelivering.

Opportunity, thanks for all the pics of the Martian surface and the memories — and please send Marvin our regards.

I can’t even remember what all else happened, so let me get back to Think.

I’ve attended more IBM conferences over the years than I care to count, and this was the first time in 10 years that our signature event wasn’t held in Las Vegas.

San Francisco was a nice break from the desert landscape, but we also got lots of rain in the middle of the week.

No sooner had I landed on Monday afternoon than I made my way over to Think land around the Moscone Center so that I could watch the Project Debater debate.

Some history: I was in the audience in NYC for one of the Deep Blue vs. Kasparov matches in 1997, and followed the chess action closely via our (then) brand new Java applet.

In 2011, I watched with amazement when IBM Watson beat the world’s best in “Jeopardy!”

But for my two cents, Project Debater takes things to the next level — using machine learning and AI to form both arguments and rebuttals in debates with a human opponent.

You can read more about Project Debater here.

If you want to learn more about the technology, check out this interview IBM Developer conducted with Dr. Ranit Aharonov, the project’s team lead.

Dr. Aharonov explains that IBM’s “third grand challenge” was to develop a system that can hold its own in a full debate with a human.

One week ago Project Debater proved that she was up to the challenge.

Written by turbotodd

February 18, 2019 at 12:22 pm

AI Survey: More Harm Than Good?

Happy Friday.

So yesterday I wrote about the beginnings of an AI backlash vis-à-vis some of the tests Waymo has been doing in Arizona.

Then today this AI study hits my in-box, featured in the MIT Technology Review and conducted by the Center for the Governance of AI and Oxford University’s Future of Humanity Institute.

The headline: of the Americans surveyed in the study, a higher percentage of respondents support than oppose AI development, yet more respondents than not also believe high-level machine intelligence would do more harm than good for humanity.

The report goes on to ask respondents to rank their specific concerns; they list the weakening of data privacy and the increased sophistication of cyberattacks as the issues of most concern and the ones most likely to affect many Americans within the next 10 years.

They’re also concerned about other key issues, including autonomous weapons, hiring bias, surveillance, digital manipulation, and, interestingly further down the list, technological unemployment.

So, more than 8 in 10 believe that AI and robotics should be “managed carefully.”

But as MIT observes in its article, that’s easier said than done “because they also don’t trust any one entity to pick up that mantle.”

I’m assuming that also means no one wants to leave it up to the Director from “Travelers” (you’ll have to go watch the show on Netflix to understand the reference…I don’t want to give any plot points away).

Where do they put the most trust in building AI?  University researchers, the US military, and tech companies, in that order.

Allan Dafoe, director of the center and coauthor of the report, says the following about the findings:

“There isn’t currently a consensus in favor of developing advanced AI, or that it’s going to be good for humanity,” he says. “That kind of perception could lead to the development of AI being perceived as illegitimate or cause political backlashes against the development of AI.”

“I believe AI could be a tremendous benefit,” Dafoe says. But the report shows a main obstacle in the way of getting there: “You have to make sure that you have a broad legitimate consensus around what society is going to undertake.”

Like any life-changing technology, it all comes down to trust…or the lack thereof.

Written by turbotodd

January 11, 2019 at 3:38 pm

But Is It Art?

I think we’re about to jump the AI shark. And that’s before the shark has hardly even begun to swim.

A new work of art entitled “Portrait of Edmond de Belamy” is going on sale at Christie’s tonight, and according to a report by Quartzy, at first glance it looks like the handiwork of a long-dead Old Master.

Quartzy reports that it has a few smudges, a lightness in the brush strokes, some negative space at the edge of the canvas, and even a subtle chiaroscuro.

But, in fact, the picture of a man in a black shirt is not the work of any painter, living or dead.

No, it’s the result of an artificial intelligence algorithm.

“Portrait of Edmond de Belamy” will be the first algorithm-made artwork to go on auction in the world of fine art.

So how was the painting produced?

The humans behind the AI, a Parisian art collective called “Obvious,” first fed 15,000 images of paintings from between the 14th and 20th centuries into an open-source generative adversarial network, or “GAN”: 

This sort of neural network works in two parts: one generates the picture using the data available, and the other “discriminates,” essentially telling it whether it’s done a good job or whether the finished images are still obviously the work of a machine. It’s not clear exactly how many images the network served up in total, but this is the one that won out. Obvious members then printed it on canvas, framed in gilt—and put it up for sale.
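For the curious, that two-part loop can be sketched in miniature. The toy GAN below works on one-dimensional numbers rather than paintings: real samples come from a normal distribution centered at 4, the generator learns to produce values that look like they came from that distribution, and the discriminator grades real versus fake. Every network size, learning rate, and step count here is an assumption for illustration only; it says nothing about the actual open-source network Obvious used.

```python
# A minimal GAN sketch on toy 1-D data (real samples drawn from N(4, 1)).
import numpy as np

rng = np.random.default_rng(0)

# Generator: turns a noise scalar z into a fake sample, fake = w_g * z + b_g.
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w_d * x + b_d),
# estimating the probability that x is a real sample.
w_d, b_d = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.01
for step in range(2000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    real = rng.normal(4.0, 1.0)
    z = rng.normal()
    fake = w_g * z + b_g
    d_real, d_fake = sigmoid(w_d * real + b_d), sigmoid(w_d * fake + b_d)
    # Gradients of the binary cross-entropy loss w.r.t. w_d and b_d.
    grad_w = (d_real - 1.0) * real + d_fake * fake
    grad_b = (d_real - 1.0) + d_fake
    w_d -= lr * grad_w
    b_d -= lr * grad_b

    # --- Generator update: push D(fake) toward 1 (fool the discriminator) ---
    z = rng.normal()
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    # Chain rule through the discriminator into the generator's parameters.
    g = (d_fake - 1.0) * w_d
    w_g -= lr * g * z
    b_g -= lr * g

# After training, fake samples should drift toward the real mean of 4.
fakes = w_g * rng.normal(size=1000) + b_g
print(round(float(fakes.mean()), 1))
```

The same adversarial push and pull scales up to images: swap the scalar generator for a deconvolutional network and the logistic discriminator for a convolutional one, and you have the family of models behind the Belamy portrait.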

Will anybody buy it?

Quartzy reports that Christie’s is banking on somebody biting, probably with a final sale price of between $7,000 and $10,000.

No word yet on whether the first AI-produced painting will shred itself after the sale, but knowing the arrogance of those AI algorithms, there’s a good chance it will instead attempt to replicate itself.

Written by turbotodd

October 25, 2018 at 9:39 am

A $1B AI School

Happy Tuesday. It’s still raining here in Austin, and about 80 miles west of us the Llano River has reached a 40-foot flood stage. Please stop the rain, at least for a little while. We’ve had enough.

If you need a ride away from the floods, or were simply wondering what’s been going on with Uber, The Wall Street Journal is reporting that the company could be valued at $120 billion in an IPO as early as 2019, which would nearly double its valuation from just two months ago.

As the Journal story points out, that “eye-popping” figure would make Uber worth more than General Motors, Ford Motor, and Fiat Chrysler Automobiles combined.

While Uber is focused on making smarter car rides, Paperspace has scored $13 million in investment for its AI-fueled application development platform.

According to a report from TechCrunch, Paperspace wants to help developers build AI and machine learning apps with a software and hardware development platform powered by GPUs and other powerful chips.

Last spring, the company released Gradient, a serverless tool to make it easier to deploy and manage AI and machine learning workloads.

By making Gradient a serverless management tool, customers don’t have to think about the underlying infrastructure; instead, Paperspace handles all of that for them, providing resources as needed. “We do a lot of GPU compute, but the big focus right now, and really where the investors are buying into with this fundraise, is the idea that we are in a really unique position to build out a software layer and abstract a lot of that infrastructure away [for our customers].”

In other news, the Massachusetts Institute of Technology announced yesterday it was creating a new college focused on better preparing students to adapt to the increasingly disruptive AI wave through a planned $1 billion investment, $350 million of which came from private equity guru Stephen Schwarzman.

According to a report in The New York Times:

Mr. Schwarzman said he hoped that the M.I.T. move might trigger others to invest in America’s A.I. future, not just commercially. He points to the major push the Chinese government is making, and notes the fruits of United States government-funded research in the past — technologies that helped America take the global lead in industries from the personal computer to the internet.

Just last month, IBM and MIT announced a 10-year, $240 million investment to create the MIT-IBM Watson AI Lab, which will carry out fundamental AI research and seek to propel scientific breakthroughs that unlock the potential of AI.

Written by turbotodd

October 16, 2018 at 12:10 pm

Big Fines and Big Pipes

Happy Monday.

First off, a hearty congratulations to the victorious European Ryder Cup team. They left the U.S. team babbling in Le Golf National’s dust, where U.S. captain Jim Furyk couldn’t see the forest for the fescue.

Meanwhile, tech-related news hardly stopped just because there was a not-so-exciting golf tournament going on outside Paris.

Remember that August tweet Tesla’s Elon Musk sent about taking his company private at $420?

Yeah, well, he paid for that one when the SEC fined both him personally and Tesla the company $20 million apiece over the weekend.

Though Musk admitted no guilt, he did have to resign as chairman of Tesla for three years, as well as appoint two new independent directors. He will also be required to have his communications, including his social media activity, monitored on an ongoing basis.

We also learned that the state of California is being sued by the Trump Administration in an effort to block what some have described as the toughest net neutrality law ever enacted in the United States.

On Sunday, California became the largest state to adopt its own rules requiring internet providers like AT&T, Comcast and Verizon to treat all web traffic equally.

Only hours after California’s proposal became official, senior Justice Department officials told the Washington Post they would take the state to court on the grounds that the federal government, not state leaders, has the exclusive power to regulate net neutrality.

That is the lowdown of the showdown in preparation for the big pipes throwdown.

Written by turbotodd

October 1, 2018 at 9:34 am

Explaining AI Decisions

IBM’s Institute for Business Value recently issued a new report on the implementation of AI. According to its survey of 5,000 executives, 60 percent of those polled said they were concerned about being able to explain how AI is using data and making decisions, in order to meet regulatory and compliance standards.

According to a story in The Wall Street Journal, there’s concern that:

AI decisions can sometimes be black boxes, both for the data scientists engineering them and the business executives touting their benefits. This is especially true of deep learning tools such as neural networks, which are used to identify patterns in data and whose structure roughly tries to mimic the operations of the human brain.

But just as in high school geometry, the question arises of how to show one’s work. That is to say, how to reveal the way the AI system arrived at a specific conclusion.

The Journal describes measures IBM took last week, which include cloud-based tools that can show users which factors led to an AI-based recommendation.

The tools can also analyze AI decisions in real-time to identify inherent bias and recommend data and methods to address that bias. The tools work with IBM’s AI services and those from other cloud services providers including Google, said David Kenny, senior vice president of cognitive solutions at IBM.
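To get a feel for what “which factors led to a recommendation” means in practice, consider permutation importance, a simple model-agnostic technique: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below uses made-up data and a stand-in “black box” model purely for illustration; it is not how IBM’s cloud tools are implemented.

```python
# Permutation importance on a toy binary decision.
import numpy as np

rng = np.random.default_rng(42)

# Toy data: feature 0 drives the label, feature 1 is irrelevant noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

# Stand-in "black box": a fixed linear scorer that relies almost
# entirely on feature 0.
def model_predict(X):
    return (X @ np.array([2.0, 0.1]) > 0).astype(int)

def accuracy(X, y):
    return float((model_predict(X) == y).mean())

base = accuracy(X, y)
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
    importances.append(base - accuracy(Xp, y))  # accuracy lost without it

print({f"feature_{j}": round(imp, 3) for j, imp in enumerate(importances)})
```

Shuffling feature 0 costs the model most of its accuracy while shuffling feature 1 barely matters, which is exactly the kind of factor-level story an explainability tool surfaces to a compliance-minded executive.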

You can learn more about those measures in this blog post.

Written by turbotodd

September 27, 2018 at 12:13 pm
