Turbotodd

Ruminations on tech, the digital media, and some golf thrown in for good measure.


Happy Anniversary, Deep Blue


It was fifteen years ago today, on May 11, 1997, that the IBM chess-playing supercomputer Deep Blue beat he-who-shall-remain-nameless, the reigning world champion grandmaster, in a six-game match: two wins for IBM, one for the world champion, and three draws.

The match lasted several days and received massive media coverage around the world.

It was classic man-versus-machine, but underlying the mythology that enveloped the John Henry storyline was something far more important: The opportunity to push the frontiers of computer science, to push computers to handle the kind of complex calculations necessary for helping discover new pharmaceuticals; to conduct the kind of financial modeling needed to identify trends and do risk analysis; to perform the kinds of massive calculations needed in many fields of science.

Solving The Problem That Is Chess

Since artificial intelligence emerged as a concept along with the first real computers in the 1940s, computer scientists have compared the performance of these “giant brains” with human minds, and many gravitated to chess as a way of testing the calculating abilities of computers. Chess represents a collection of challenging problems for minds and machines, but has simple rules, which made it an excellent testbed for laying the groundwork for the “big data” era that was soon to come.

There’s no question that Deep Blue was a powerful computer programmed to tackle the complex, strategic game of chess. But IBM’s goal was far deeper: To enable researchers to discover and understand the limits and opportunities presented by massively parallel processing and high performance computing.
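To make that a little more concrete, here’s a minimal sketch of alpha-beta search, the family of game-tree algorithms that chess programs, Deep Blue included, build on. Deep Blue’s real search ran largely on custom chess chips with a far more sophisticated evaluation; the moves() and evaluate() functions below are placeholders you would supply for an actual game.

```python
# Toy alpha-beta game-tree search. moves(state) should return the
# successor states of `state`; evaluate(state) should return a score
# from the maximizing player's point of view. Both are placeholders.

def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Return the best achievable score from `state`, searching `depth` plies."""
    successors = moves(state)
    if depth == 0 or not successors:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for child in successors:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, moves, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:   # prune: the opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in successors:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, moves, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

In Deep Blue’s case, the equivalent of evaluate() ran in dedicated hardware, which is a big part of how it reached hundreds of millions of positions per second.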

IBM Deep Blue: Analyzing 200 Million Chess Positions Per Second

If, in fact, Deep Blue could explore up to 200 million possible chess positions per second, could this deep computing capability be used to help society handle the kinds of complex calculations required in the other areas mentioned above?

Deep Blue did, in fact, prove that industry could tackle these issues with smart algorithms and sufficient computational power.

Earlier this year, I recalled in a blog post my own experience witnessing the Deep Blue chess match. It evoked a lot of nostalgia for me and for so many others.

IBM’s Deep Blue supercomputer could explore up to 200 million possible chess positions per second on 510 processors. Juxtapose that with IBM Blue Gene’s ability a few short years later to routinely handle 478 trillion operations every second!
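For a rough sense of scale (bearing in mind that chess positions evaluated per second and floating-point operations per second are not the same unit), a quick back-of-the-envelope comparison:

```python
# Order-of-magnitude comparison only: Deep Blue's figure counts chess
# positions evaluated per second, while Blue Gene's counts
# floating-point operations per second, so the units differ.
deep_blue_positions_per_sec = 200e6   # 200 million positions/sec
blue_gene_ops_per_sec = 478e12        # 478 trillion operations/sec

ratio = blue_gene_ops_per_sec / deep_blue_positions_per_sec
print(f"{ratio:,.0f}x")               # -> 2,390,000x
```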

But Deep Blue also laid a foundation, paving the way for new kinds of advanced computers and breakthroughs, including IBM’s Blue Gene and, later, IBM Watson.

Forever In Blue Genes

Blue Gene, introduced in 2004, took on the next grand challenge in computing: it was at once the most powerful supercomputer in the world and the most efficient, and it was built to help biologists observe the invisible processes of protein folding and gene development. Deep Blue, meanwhile, was one of the earliest experiments in supercomputing, helping propel IBM to the market leadership in this space that it holds to this day.

Fifteen years on, we’ve seen epic growth in the volume and variety of data being generated around the planet: from business, from social media, and from the new sensor data instrumenting the physical world as part of IBM’s Smarter Planet initiative. We’ve created so much new data that, in fact, 90% of the data in the world today was created in the last two years alone!

Calling Doctor Watson

Most recently, IBM embarked upon the next wave of this computing progress with the development of IBM Watson, which can hold the equivalent of about one million books’ worth of information. But make no mistake: Watson’s significance wasn’t just the amount of information it could process, but rather a new generation of technology that uses algorithms to find answers in unstructured data more effectively than standard search technology, while also “understanding” natural language.
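To illustrate the distinction (and only illustrate it; Watson’s actual DeepQA pipeline involved parsing, evidence scoring, and confidence estimation far beyond this), here’s a toy sketch of ranking passages of unstructured text as candidate answers, rather than simply matching strings:

```python
# Toy contrast between literal string matching and scored retrieval
# over unstructured text. The passages and scoring function are
# illustrative stand-ins, not anything from Watson itself.
passages = [
    "Deep Blue defeated the world chess champion in 1997.",
    "Blue Gene was built to study protein folding.",
    "Watson competed on the quiz show Jeopardy! in 2011.",
]

def score(question, passage):
    q = set(question.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q)    # fraction of question terms covered

question = "what was blue gene built to study"
best = max(passages, key=lambda p: score(question, p))
print(best)   # -> the protein-folding passage, despite no exact match
```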

The promise of IBM Watson is now being put to productive use in industry: as an online tool to assist medical professionals in formulating diagnoses; as a way to simplify the banking experience by analyzing customer needs in the context of vast amounts of ever-changing financial, economic, product, and client data; and, I’m sure, in other industries near you soon.

Those early chess matches were exciting, nail-biting even (and who’d have thought we’d ever say that about chess?)! But they pale in comparison to the productive work and problem-solving that IBM Watson, and other IBM technologies, are involved with now and will continue to be as the world of big data matures and is adopted by an ever-increasing audience.

You can now visit Deep Blue, which was ultimately retired to the Smithsonian in Washington, D.C.

But its groundbreaking contributions to artificial intelligence and to computing in general continue, and now extend well beyond the confines of the chess board.

Big States (And Countries) Need Big Computers


IBM’s been on a roll with the supercomputer situation of late.

Last week, we announced the installation of a Blue Gene supercomputer at Rutgers, and earlier today came word that an IBM Blue Gene supercomputer is coming to my great home state of Texas.

Specifically, IBM announced a partnership with Houston’s Rice University to build the first award-winning IBM Blue Gene supercomputer in Texas.

Rice also announced a related collaboration agreement with the University of Sao Paulo (USP) in Brazil to initiate the shared administration and use of the Blue Gene supercomputer, which allows both institutions to share the benefits of the new computing resource.


Now, you all play nice as you go about all that protein folding analysis!

Rice faculty indicated they would be using the Blue Gene to further their own research and to collaborate with academic and industry partners on a broad range of science and engineering questions related to energy, geophysics, basic life sciences, cancer research, personalized medicine and more.

“Collaboration and partnership have a unique place in Rice’s history as a pre-eminent research university, and it is fitting that Rice begins its second century with two innovative partnerships that highlight the university’s commitments to expanding our international reach, strengthening our research and building stronger ties with our home city,” said Rice President David Leebron about the deal.

USP is Brazil’s largest institution of higher education and research. “The joint utilization of the supercomputer by Rice University and USP, much more than a simple sharing of high-tech equipment, means the strength of an effective partnership between both universities,” explained USP President Joao Grandino Rodas, who said the agreement represents an important bond between Rice and USP.

Unlike the typical desktop or laptop computer, which has a single microprocessor, supercomputers typically contain thousands of processors. This makes them ideal for scientists who study complex problems, because jobs can be divided among all the processors and run in a matter of seconds rather than weeks or months.
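In spirit, that division of labor looks something like this minimal Python sketch, where simulate_cell() is a hypothetical stand-in for whatever independent chunk of work a real scientific code would farm out to each processor:

```python
# Minimal illustration of splitting one big job across many processors:
# each worker handles a slice of the input independently, and the
# results are combined at the end.
from multiprocessing import Pool

def simulate_cell(cell_id):
    # Placeholder for an expensive, independent unit of work.
    return sum(i * i for i in range(10_000))

if __name__ == "__main__":
    cells = range(1_000)          # the big problem, split into pieces
    with Pool() as pool:          # one worker per available CPU core
        results = pool.map(simulate_cell, cells)
    print(f"combined result: {sum(results)}")
```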

Supercomputers are used to simulate things that cannot be reproduced in a laboratory — like Earth’s climate or the collision of galaxies — and to examine vast databases like those used to map underground oil reservoirs or to develop personalized medical treatments.

USP officials said they expect their faculty to use the supercomputer for research ranging from astronomy and weather prediction to particle physics and biotechnology.

In 2009, President Obama recognized IBM and its Blue Gene family of supercomputers with the National Medal of Technology and Innovation, the most prestigious award in the United States given to leading innovators for technological achievement.

Including the Blue Gene/P, Rice has partnered with IBM to launch three new supercomputers during the past two years that have more than quadrupled Rice’s high-performance computing capabilities.

The addition of the Blue Gene/P doubles the number of supercomputing CPU hours that Rice can offer. The six-rack system contains nearly 25,000 processor cores that are capable of conducting about 84 trillion mathematical computations each second. When fully operational, the system is expected to rank among the world’s 300 fastest supercomputers as measured by the TOP500 supercomputer rankings.
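As a quick sanity check on those figures, dividing the quoted aggregate speed by the quoted core count gives the approximate per-core throughput:

```python
# Rough per-core throughput implied by the announcement's numbers.
cores = 25_000                  # "nearly 25,000 processor cores"
total_ops_per_sec = 84e12       # ~84 trillion computations per second

per_core = total_ops_per_sec / cores
print(f"~{per_core / 1e9:.1f} billion operations per second per core")
# -> ~3.4 billion operations per second per core
```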

Written by turbotodd

March 30, 2012 at 6:54 pm
