Posts Tagged ‘ibm research’
IBM released its annual “5 in 5” list yesterday, the seventh year in a row in which IBM scientists have identified a list of innovations that have the potential to change the way people work, live and interact during the next five years.
The IBM 5 in 5 is based on market and societal trends, as well as emerging technologies from IBM’s R&D labs around the world. This year, the 5 explores innovations that will be underpinnings of the next era of computing, what IBM has described as “the era of cognitive systems.”
This next generation of machines will learn, adapt, sense, and begin to experience the world as it really is, and this year’s predictions focus on one element of this new era: the ability of computers to mimic the human senses — in their own manner, to see, smell, touch, taste and hear.
But before you try and spoon-feed your iPad some vanilla yogurt, let’s get more practical.
These new sensing capabilities will help us become more aware and more productive, and will help us think — but not do our thinking for us.
Rather, cognitive systems will help us see through and navigate complexity, keep up with the speed of information, make more informed decisions, improve our health and standard of living, and break down all kinds of barriers — geographical, language, cost, even accessibility.
Now, on to our five senses.
1) Touch: You will be able to touch through your phone. Imagine using your smartphone to shop for your wedding dress and being able to feel the satin or silk of the gown, or the lace on the veil, from the surface of the screen. Or to feel the beading and weave of a blanket made by a local artisan halfway around the world. In five years, industries like retail will be transformed by the ability to “touch” a product through your mobile device.
IBM scientists are developing applications for the retail, healthcare and other sectors that use haptic, infrared and pressure-sensitive technologies to simulate touch, such as the texture and weave of a fabric — as a shopper brushes her finger over the image of the item on a device screen. By utilizing the vibration capabilities of the phone, every object can be given a unique set of vibration patterns that represents the touch experience: short fast patterns, or longer and stronger strings of vibrations. The vibration pattern will differentiate silk from linen or cotton, helping simulate the physical sensation of actually touching the material.
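To make that idea concrete, here is a toy sketch of how fabrics might map to vibration patterns. The fabric names and pattern encodings are purely illustrative, not IBM’s actual scheme:

```python
# Hypothetical sketch: simulating texture via phone vibration patterns.
# Each pattern is a list of (pulse_ms, pause_ms) vibration pulses;
# the specific values below are invented for illustration.

VIBRATION_PATTERNS = {
    "silk":   [(20, 80)] * 3,   # short, fast, light pulses
    "linen":  [(60, 40)] * 4,   # longer, coarser pulses
    "cotton": [(40, 60)] * 3,
}

def pattern_for(fabric: str) -> list[tuple[int, int]]:
    """Return the vibration pattern that simulates touching `fabric`."""
    return VIBRATION_PATTERNS.get(fabric, [(30, 70)])  # generic default buzz

def total_duration_ms(pattern: list[tuple[int, int]]) -> int:
    """Total playback time of a pattern: pulses plus pauses."""
    return sum(on + off for on, off in pattern)
```

A shopper brushing over “silk” would trigger three quick 20 ms pulses, while “linen” would feel noticeably heavier.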
2) Sight: A pixel will be worth a thousand words. We take some 500 billion photos a year, and 72 hours of video is uploaded to YouTube every minute. But computers today only understand pictures by the text we use to tag or title them; the majority of the information — the actual content of the image — is a mystery.
In the next five years, systems will not only be able to look at and recognize the contents of images and visual data, they will turn the pixels into meaning, making sense of them much as a human views and interprets a photograph. In the future, “brain-like” capabilities will let computers analyze features such as color, texture patterns or edge information and extract insights from visual media, having a potentially huge impact on industries ranging from healthcare to retail to agriculture.
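For a flavor of the low-level features mentioned above, here is an illustrative sketch that computes an intensity distribution and a crude edge measure on a toy grayscale image. This is not IBM’s system, just the kind of raw signal such analysis starts from:

```python
# Toy feature extraction on a grayscale image, represented as a list of
# rows of pixel intensities (0-255). Illustrative only.

def intensity_histogram(image, bins=4):
    """Count pixels falling into `bins` equal-width intensity buckets."""
    hist = [0] * bins
    for row in image:
        for px in row:
            hist[min(px * bins // 256, bins - 1)] += 1
    return hist

def horizontal_edge_strength(image):
    """Sum of absolute intensity differences between horizontal neighbors."""
    return sum(
        abs(row[i + 1] - row[i])
        for row in image
        for i in range(len(row) - 1)
    )

# A tiny image: a dark left half meeting a bright right half (one edge).
toy = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
```

On this toy image the histogram splits evenly between the darkest and brightest buckets, and all the edge strength comes from the single dark-to-bright boundary.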
But please, no Escher drawings, at least for now…that’s just plain mean.
3) Hearing: Computers will hear what matters. Ever wish you could make sense of all the sounds around you and be able to understand what’s not being said? Within five years, distributed systems of clever sensors will detect elements of sound such as sound pressure, vibrations and sound waves at different frequencies.
These systems will interpret such inputs to predict when trees will fall in a forest or when a landslide is imminent. Such a system will “listen” to our surroundings and measure movements, or the stress in a material, to warn us if danger lies ahead.
I’m ever hopeful such systems will be able to “listen” to my golf swing and help me course correct so I can play more target golf!
4) Taste: Digital taste buds will help you to eat smarter. What if we could make healthy foods taste delicious using a different kind of computing system built for creativity? IBM researchers are developing a computing system that actually experiences flavor, to be used by chefs to create tasty and novel recipes. It will break down ingredients to their molecular level and blend the chemistry of food compounds with the psychology behind the flavors and smells humans prefer.
By comparing this with millions of recipes, the system will be able to create new flavor combinations that pair, for example, roasted chestnuts with other foods such as cooked beetroot, fresh caviar, and dry-cured ham.
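One popular heuristic in computational gastronomy, which may or may not be what IBM’s system uses, is to pair ingredients that share aroma compounds. Here is a toy sketch; the compound lists are invented for illustration:

```python
# Hypothetical flavor pairing by shared aroma compounds.
# The ingredient-to-compound mapping below is made up for the example.

INGREDIENT_COMPOUNDS = {
    "roasted chestnut": {"furfural", "pyrazine", "maltol"},
    "cooked beetroot":  {"geosmin", "pyrazine"},
    "dry-cured ham":    {"furfural", "hexanal"},
}

def shared_compounds(a: str, b: str) -> set[str]:
    """Compounds two ingredients have in common; more overlap suggests
    (by this heuristic) a more harmonious pairing."""
    return INGREDIENT_COMPOUNDS[a] & INGREDIENT_COMPOUNDS[b]
```

A real system would compare millions of recipes and thousands of compounds, but the core set-intersection idea is the same.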
“Top Tasting Computer Chefs,” anyone?
5) Smell: Computers will have a sense of smell. During the next five years, tiny sensors embedded in your computer or cell phone will detect if you’re coming down with a cold or other illness. By analyzing odors, biomarkers and thousands of molecules in someone’s breath, doctors will have help diagnosing and monitoring the onset of ailments such as liver and kidney disorders, asthma, diabetes, and epilepsy by detecting which odors are normal and which are not.
Already, IBM scientists are sensing environmental conditions to preserve works of art, and this innovation is starting to be applied to tackle clinical hygiene, one of the biggest healthcare challenges today. In the next five years, IBM technology will “smell” surfaces for disinfectants to determine whether rooms have been sanitized. Using novel wireless mesh networks, sensors will gather and measure data on various chemicals, and the system will continuously learn and adapt to new smells over time.
Watch the video below to listen to IBM scientists describe some of these new innovations and their potential impact on our world.
It’s Monday, and here in Austin, Texas, it officially got cold overnight.
Yesterday, it was partly cloudy and almost steamy warm. And this morning, it’s like I was transplanted back to IBM’s Somers, New York, location, where the wind streams across the Westchester landscape and chills native Texans like me to their core.
But enough talk about the weather. I want to get to the topic of the day: Making little things that move information faster.
Earlier today, IBM announced a major advance in the ability to use light instead of electrical signals to transmit information for future computing.
The breakthrough technology — called “silicon nanophotonics” — allows the integration of different optical components side-by-side with electrical circuits on a single silicon chip using, for the first time, sub-100nm semiconductor technology.
Silicon nanophotonics takes advantage of pulses of light for communication and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large data centers, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.
Big Light, Bigger Data
The amount of data being created and transmitted over enterprise networks continues to grow due to an explosion of new applications and services.
Silicon nanophotonics, now primed for commercial development, can enable the industry to keep pace with increasing demands in chip performance and computing power. Businesses are entering a new era of computing that requires systems to process and analyze, in real-time, huge volumes of information known as “big data.”
Silicon nanophotonics technology provides answers to big data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart, and moving terabytes of data via pulses of light through optical fibers.
Building Proof Beyond Concept
Building on its initial proof of concept in 2010, IBM has solved the key challenges of transferring the silicon nanophotonics technology into the commercial foundry.
By adding a few processing modules to a high-performance 90nm CMOS fabrication line, IBM can integrate a variety of silicon nanophotonics components, such as wavelength division multiplexers (WDM), modulators, and detectors, side by side with CMOS electrical circuitry.
As a result, single-chip optical communications transceivers can be manufactured in a conventional semiconductor foundry, providing significant cost reduction over traditional approaches.
IBM’s CMOS nanophotonics technology has demonstrated transceivers exceeding a data rate of 25Gbps per channel. In addition, the technology can feed a number of parallel optical data streams into a single fiber by utilizing compact on-chip wavelength-division multiplexing devices.
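Some back-of-the-envelope arithmetic shows why multiplexing matters. Using the 25 Gbps-per-channel figure above (the channel count here is illustrative, not from the announcement):

```python
# Aggregate throughput of a WDM link at 25 Gbps per wavelength channel.
# The per-channel rate comes from the text; channel counts are examples.

GBPS_PER_CHANNEL = 25

def aggregate_gbps(num_channels: int) -> int:
    """Total data rate when `num_channels` wavelengths share one fiber."""
    return num_channels * GBPS_PER_CHANNEL

def seconds_to_move_terabytes(tb: float, num_channels: int) -> float:
    """Time to move `tb` terabytes over the link (1 TB = 8e12 bits)."""
    bits = tb * 8e12
    return bits / (aggregate_gbps(num_channels) * 1e9)
```

With eight wavelengths on one fiber, the link carries 200 Gbps and moves a terabyte in about 40 seconds, which is how “terabytes between distant parts of computer systems” becomes plausible.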
Learning More About Nanophotonics
The ability to multiplex large data streams at high data rates will allow future scaling of optical communications capable of delivering terabytes of data between distant parts of computer systems.
“This technology breakthrough is a result of more than a decade of pioneering research at IBM,” said Dr. John E. Kelly, Senior Vice President and Director of IBM Research. “This allows us to move silicon nanophotonics technology into a real-world manufacturing environment that will have impact across a range of applications.”
Further details will be presented this week by Dr. Solomon Assefa at the IEEE International Electron Devices Meeting (IEDM) in the talk titled, “A 90nm CMOS Integrated Nano-Photonics Technology for 25Gbps WDM Optical Communications Applications.”
You can learn more about IBM silicon integrated nanophotonics technology here.
Since I posted about Hurricane Sandy earlier in the day, I’ve seen some pretty stunning pictures and video coming in, and heard more reports from friends in and around the New York City area.
The story of the crane toppling over on a very tall building being built on West 57th Street, between 6th and 7th Avenues (my old IBM office is at Madison and 57th, further east) was most stunning. You can find some of the pics or video on CNN.
While we wait to discover how big a problem Sandy presents to the northeast Atlantic coast, I’ll share with you a diversion focusing on a much smaller topic — but one with potentially huge implications.
IBM scientists recently demonstrated a new approach to carbon technology that opens up the path for commercial fabrication of dramatically smaller, faster and more powerful computer chips.
For the first time, more than ten thousand working transistors made of nano-sized tubes of carbon have been precisely placed and tested in a single chip using standard semiconductor processes.
These carbon devices are poised to replace and outperform silicon technology, allowing further miniaturization of computing components and leading the way for future microelectronics.
Four Decades Of Innovation
Aided by rapid innovation over four decades, silicon microprocessor technology has continually shrunk in size and improved in performance, thereby driving the information technology revolution.
Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation.
Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.
Carbon nanotubes represent a new class of semiconductor materials whose electrical properties are more attractive than silicon, particularly for building nanoscale transistor devices that are a few tens of atoms across.
Electrons in carbon transistors can move more easily than in silicon-based devices, allowing for quicker transport of data. The nanotubes are also ideally shaped for transistors at the atomic scale, an advantage over silicon.
These qualities are among the reasons to replace the traditional silicon transistor with carbon; coupled with new chip design architectures, they will allow computing innovation on a miniature scale for the future.
The approach developed at IBM labs paves the way for circuit fabrication with large numbers of carbon nanotube transistors at predetermined substrate positions. The ability to isolate semiconducting nanotubes and place a high density of carbon devices on a wafer is crucial to assess their suitability for a technology — eventually more than one billion transistors will be needed for future integration into commercial chips.
Hardly A Carbon Copy
Until now, scientists have been able to place at most a few hundred carbon nanotube devices at a time, not nearly enough to address key issues for commercial applications.
Originally studied for the physics that arises from their atomic dimensions and shapes, carbon nanotubes are being explored by scientists worldwide in applications that span integrated circuits, energy storage and conversion, biomedical sensing and DNA sequencing.
This achievement was published today in the peer-reviewed journal Nature Nanotechnology.
Carbon, a readily available basic element from which crystals as hard as diamonds and as soft as the “lead” in a pencil are made, has wide-ranging IT applications.
Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotube forms the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power our data-crunching servers, high performing computers and ultra fast smart phones.
Earlier this year, IBM researchers demonstrated that carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers — 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that a five- to ten-fold improvement in performance over silicon circuits is possible.
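The size comparison checks out with a little arithmetic, using only the figures in the text:

```python
# Sanity check: a sub-10 nm nanotube channel, 10,000 times thinner than
# a human hair, implies a hair diameter of about 100 micrometers, which
# matches the commonly cited typical value.

CHANNEL_NM = 10
FACTOR = 10_000

hair_nm = CHANNEL_NM * FACTOR   # 100,000 nm
hair_um = hair_nm / 1_000       # convert nm to micrometers
```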
There are practical challenges for carbon nanotubes to become a commercial technology, notably, as mentioned earlier, the purity and placement of the devices. Carbon nanotubes naturally come as a mix of metallic and semiconducting species and need to be placed perfectly on the wafer surface to make electronic circuits. For device operation, only the semiconducting tubes are useful, which requires essentially complete removal of the metallic ones to prevent errors in circuits.
Also, for large scale integration to happen, it is critical to be able to control the alignment and the location of carbon nanotube devices on a substrate.
To overcome these barriers, IBM researchers developed a novel method based on ion-exchange chemistry that allows precise, controlled placement of aligned carbon nanotubes on a substrate at high density — two orders of magnitude greater than in previous experiments — enabling the controlled placement of individual nanotubes at a density of about a billion per square centimeter.
The process starts with carbon nanotubes mixed with a surfactant, a kind of soap that makes them soluble in water. The substrate comprises two oxides: trenches of chemically modified hafnium oxide (HfO2), with the rest of the surface silicon oxide (SiO2). The substrate is immersed in the carbon nanotube solution, and the nanotubes attach via a chemical bond to the HfO2 regions while the rest of the surface remains clean.
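The density figures above imply some useful geometry. Assuming, for illustration, that the tubes sat on a uniform square grid:

```python
# Sanity check on the placement-density numbers in the text: about a
# billion nanotubes per square centimeter, two orders of magnitude
# denser than earlier experiments.

DENSITY_PER_CM2 = 1e9        # reported placement density
IMPROVEMENT_FACTOR = 100     # "two orders of magnitude"

previous_density = DENSITY_PER_CM2 / IMPROVEMENT_FACTOR  # ~1e7 per cm^2

def average_spacing_nm(density_per_cm2: float) -> float:
    """Average center-to-center spacing on an idealized square grid.
    1 cm = 1e7 nm, so the area per tube is (1e7 nm)^2 / density."""
    return (1e14 / density_per_cm2) ** 0.5
```

At a billion per square centimeter, the tubes would sit roughly 300 nm apart on average, showing how far the technique advances toward the billion-transistor chips mentioned earlier.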
By combining chemistry, processing and engineering expertise, IBM researchers are able to fabricate more than ten thousand transistors on a single chip.
Furthermore, rapid testing of thousands of devices is possible using high volume characterization tools, due to their compatibility with standard commercial processes.
As this new placement technique can be readily implemented, involving common chemicals and existing semiconductor fabrication, it will allow the industry to work with carbon nanotubes at a greater scale and deliver further innovation for carbon electronics.
You can learn more in the animation below.
This morning on the IBM InterConnect stage, IBM general manager for the IBM Watson Solutions organization, Manoj Saxena, explained to the gathered audience in Singapore how IBM has taken Watson out of its “Jeopardy!” TV show playground and put Watson to work!
I last discussed Watson with Manoj this past April at the IBM Impact event, when Watson had just matriculated into the workforce, getting jobs in both the healthcare and financial services industries.
During our interview yesterday here at IBM InterConnect, Manoj and I conducted a mid-year performance review for Watson, and the evaluation was overwhelmingly positive — Watson will continue to stay gainfully employed, but as with any cutting edge technology, there are always areas for improvement.
We discussed all of this, and how Manoj’s team has made Watson smaller and smarter, during our interview here in Singapore. Manoj also explained how Watson has really become a demonstrable example of “one of the most dramatic shifts we’re going to see in our lifetimes,” the shift from transactional to cognitive computing.
You can view the interview here.
Over the past several years, you’ve probably noticed that IBM has become much more active on the African continent.
IBM’s continued investment in this emerging and important continent was expanded upon yesterday, when IBM announced that Africa would be the next frontier for innovation in IBM Research.
IBM Research – Africa will open its first location in Nairobi, Kenya, in collaboration with the Ministry of Information, Communication, and Technology through the Kenya ICT Board.
This new venture will conduct basic and applied research focused on solving problems relevant to Africa and contribute to building a science and technology base for the continent.
Key areas of research will include the following:
- Next Generation Public Sector: Governments have a mission-critical role to play in growth and sustainable development in Africa. With the right kind of ICT, including big data solutions, advanced analytics, and cloud technologies, government organizations can draw insights and benefit from the vast amounts of data held by any number of government agencies. This can help advance e-government capabilities such as reducing the cost of social services, improving efficiency and productivity, deterring fraud and abuse, improving citizen access to services, and enabling digital interaction between citizens and the public sector.
- Smarter Cities – with initial focus on water and transportation: Rates of urbanization in Africa are the highest in the world. The single biggest challenge facing African cities is improving access to and quality of city services such as water and transportation. IBM, in collaboration with government, industry and academia, plans to develop Intelligent Operation Centers for African cities – integrated command centers – that can encompass social and mobile computing, geo-spatial and visual analytics, and information models to enable smarter ways of managing city services. The initial focus will be on smarter water systems and traffic management solutions for the region.
- Human Capacity Development: A skills shortage is hindering the leadership and innovation of new industry in Africa. The IBM Research – Africa lab, while carrying out research, will help to elevate the level of ICT and related scientific skills in Kenya by working in collaboration with select universities, government agencies and companies. Boosting the innovation culture in Kenya and engaging local entrepreneurs and innovators in developing solutions that matter to the people of Kenya and the region may also assist in accelerating economic development.
“In today’s world, innovation is the main lever for a competitive national economy, is a source of employment, and has the potential to improve lives,” said Dr. Bitange Ndemo, Permanent Secretary, Ministry of Information, Communication and Technology. “The IBM research lab will not only rubber-stamp Kenya as Africa’s leader in ICT, but will help the country transform into a knowledge-based economy.”
Operations at IBM Research – Africa will commence immediately. Expansion into other parts of Africa may be considered in a second phase.
IBM Investment in Africa
IBM is making a significant investment in Africa, ramping up its profile on the continent as part of its focus on emerging markets. The expansion program is part of a major business plan to increase IBM’s presence in growth markets and support global strategy. The company is present in more than 20 African countries and recognizes the huge potential of research and smarter systems in transforming business, government and society across the continent.
Alongside its day-to-day business of providing advanced technologies and services to clients in Africa, IBM has deployed an array of programs aimed at building economic capacity such as IBM’s employee volunteer program, Corporate Service Corps, which is modelled on the U.S. Peace Corps. For example, IBM is working with the Postal Corporation of Kenya (PCK) to review the country’s changing economic landscape and develop a plan to deliver financial services to rural areas.
IBM Research – Africa will join existing labs in Australia, Brazil, China, India, Ireland, Israel, Japan, Switzerland and the United States.
IBM Research laboratories are credited with the creation of many of the foundations of information technology, including the invention of the relational database, disk storage, DRAM memory and much more. IBM Research has been recognized with five Nobel Prize Laureates, and many leading scientific and technical medals and awards.
Recently IBM Research created a question-answering supercomputing system called Watson that defeated the champions of a major televised quiz show, showing its ability to match humans in answering a wide range of free text questions.
Steve Lohr with The New York Times has gone long on “big data.”
In his piece, Lohr explains how big data has gone mainstream, using IBM’s Watson computer that beat “Jeopardy!” world champions last year as a key inflection point in its evolution, and quotes IBM exec and technical fellow Rod Smith.
Rod Smith: “Big Data is really about new uses and new insights, not so much about the data itself.”
And on Watson: “The Watson computer from I.B.M. that beat human “Jeopardy” champions last year was a triumph of Big Data computing. In theory, Big Data could improve decision-making in fields from business to medicine, allowing decisions to be based increasingly on data and analysis rather than intuition and experience.”
I mentioned in some prior posts the upcoming Smarter Commerce Global Summit IBM will be hosting at the Walt Disney World Swan and Dolphin Resort (which you can learn more about and register for here).
Just out of curiosity, I went and did a query to see if any sessions would include “big data” as a featured topic, and as it turns out, there were four, including “Crunch Big Data for Digital Analytics Using Netinsight on Premises and Netezza,” and “Big Data, Big Campaigns: Using Unica Campaign Management & IBM Netezza Data Warehousing Appliances.”
So, it’s pretty clear that the era of “big data” is certainly upon us with respect to marketing as well.
I also wanted to highlight some news just emerging from our friends in IBM Research.
Yesterday, they announced a new breakthrough that has potential impact for semiconductor transistor manufacturing.
With the announcement, they revealed the first-ever direct mapping of the formation of a persistent spin helix in a semiconductor, an effort jointly conducted by IBM researchers and scientists at ETH Zurich.
Until now, it was unclear whether electron spins possessed the capability to preserve the encoded information long enough before rotating.
But through this new experiment, they demonstrated that synchronizing electrons extends the spin lifetime of the electron by 30 times, to 1.1 nanoseconds — the same time it takes for an existing 1 GHz processor to complete one cycle.
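The numbers line up with a quick calculation, using only the figures reported above:

```python
# Arithmetic behind the comparison: a 1 GHz clock has a 1 ns period,
# and synchronization stretched the spin lifetime ~30x to 1.1 ns, so
# the spins now survive slightly longer than one clock cycle.

CLOCK_HZ = 1e9
SPIN_LIFETIME_NS = 1.1
IMPROVEMENT = 30

clock_period_ns = 1e9 / CLOCK_HZ                       # 1.0 ns per cycle
baseline_lifetime_ns = SPIN_LIFETIME_NS / IMPROVEMENT  # ~0.037 ns before
```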
Why do we care?
Well, today’s computing technology encodes and processes data by the electrical charge of electrons. But that technique is limiting, as the semiconductor dimensions continue to shrink to the point where the flow of electrons can no longer be controlled. Spintronics could surmount this approaching impasse by harnessing the spin of electrons instead of their charge.
This new understanding in “spintronics” not only gives scientists unprecedented control over the magnetic movements inside devices, but also opens up new possibilities for creating more energy efficient electronics.
However, this effort could get colder before it warms up and leads to massive technology transfer into the marketplace: Spintronics research takes place at very low temperatures at which electron spins interact minimally with the environment.
In the case of this particular research, IBM scientists worked at 40 Kelvin (-233 Celsius, -387 Fahrenheit)!!!
The full scientific paper, “Direct mapping of the formation of a persistent spin helix,” by M.P. Walser, C. Reichl, W. Wegscheider and G. Salis, was published online in Nature Physics, DOI 10.1038/NPHYS2383 (12 August 2012).