Saturday, June 8

Interactive holographic displays and self-driving cars are in our future.

SAN FRANCISCO -- While consumers are marveling at this year's new tablets and smartphones, researchers are hard at work developing the next wave of computer technologies that will change our lives.
For starters, parallel computing is yielding extraordinary results. The once-experimental technology, which allows computers to work on multiple problems simultaneously, is driving breakthroughs in graphics rendering, language translation and even facial recognition. Kurt Keutzer, a research professor at Berkeley's Par Lab, predicts that parallel computing will bring enormous advances in speed and power to every kind of electronics, from video game consoles to handheld devices.
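To make the idea concrete, here is a minimal sketch of data-parallel processing in Python; the tile data and the brighten_tile function are illustrative stand-ins, not code from the Par Lab. Independent chunks of work are simply handed out to whichever processor cores are free.

```python
# Minimal sketch: split a job into independent chunks and process them on
# several CPU cores at once. All data and names here are illustrative.
from multiprocessing import Pool

def brighten_tile(tile):
    # Toy per-tile computation standing in for a real rendering kernel.
    return [min(255, pixel + 40) for pixel in tile]

if __name__ == "__main__":
    # Fake image data: 8 tiles of 4 pixel values each.
    tiles = [[10 * i + j for j in range(4)] for i in range(8)]
    with Pool() as pool:
        # Each tile is processed on whichever core is free.
        results = pool.map(brighten_tile, tiles)
    print(results)
```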


In addition to producing more powerful machines, new research is making it easier to work with computers. "The most exciting thing that's happened in [user interface systems] is the rise of multi-touch screens, like the Wii [controller], and more recently the Microsoft Kinect," says Scott Klemmer, a researcher at Stanford's Human-Computer Interaction Group. "As opposed to writing down a set of textual commands ... we envision people telling a computer, 'When someone gestures like this, then the computer should do this.'"
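A rough sketch of that "when someone gestures like this, do this" idea follows; the gesture names and the recognizer hook are hypothetical stand-ins for the output of a system such as the Kinect, not a real API.

```python
# Sketch of mapping recognized gestures to actions instead of typed commands.
# Gesture names and the recognizer hook are hypothetical, not a real API.
from typing import Callable, Dict

actions: Dict[str, Callable[[], None]] = {}

def when_gesture(name: str, action: Callable[[], None]) -> None:
    """Register an action to run whenever the named gesture is recognized."""
    actions[name] = action

def on_gesture_recognized(name: str) -> None:
    """Would be called by a gesture recognizer (e.g., a Kinect pipeline)."""
    handler = actions.get(name)
    if handler:
        handler()

# Example: a demonstrated "swipe left" gesture turns the page.
when_gesture("swipe_left", lambda: print("next page"))
on_gesture_recognized("swipe_left")
```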

New technologies changing your life:


Interactive holographic displays are also being developed to let people see real and virtual objects in true 3-D. These displays will let architects evaluate models that exist only in a computer's memory, medical students examine the body without needing a cadaver, and shoppers see the look and size of a product without leaving their homes. In 2009, scientists at the University of Tokyo combined a holographic display, Wii Remotes rigged as motion detectors and a grid that produced ultrasonic sound waves to create a high-tech interface that let people "feel" a virtual object when they tried to touch the image floating in front of the screen.
New sensors will let computers take on new tasks, such as providing more accurate real-world data. For example, "augmented reality" programs rely on GPS data and online information to let smartphone owners view street names and reviews overlaid on live video of their surroundings. But because of GPS' roughly 10-meter margin of error, the match between the video image and the information superimposed on it is currently rough and unreliable. An array of GPS satellites developed by Boeing can pinpoint people's locations on the globe with much greater accuracy, and should make augmented reality much more useful.
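As a rough illustration (not any particular AR app), placing such an overlay amounts to comparing the phone's GPS fix and compass heading with the coordinates of a nearby point of interest; the coordinates, heading and field-of-view figure below are made up.

```python
# Toy sketch: decide where an AR label belongs, given a GPS fix, a compass
# heading and the coordinates of a point of interest. Values are illustrative.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

phone_lat, phone_lon, heading = 37.7749, -122.4194, 90.0   # phone facing east
poi_lat, poi_lon = 37.7750, -122.4180                      # a reviewed restaurant

offset = bearing_deg(phone_lat, phone_lon, poi_lat, poi_lon) - heading
if abs(offset) < 30:  # assume roughly a 60-degree camera field of view
    print(f"draw review label {offset:+.1f} degrees from the center of the frame")
```

A 10-meter positioning error shifts that computed bearing by many degrees when the point of interest is close by, which is exactly why today's overlays drift against the video.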
Keutzer says better sensors and faster processors will give rise to super-smart handheld devices that can "learn" their owner's behaviors, keeping track of where they go and what they buy online. For instance, researchers at Dartmouth have tested software that analyzes sounds from smartphone microphones to identify voices, music and even the places the phone's owner visited.
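As a toy illustration of sound analysis (not the Dartmouth software), even one hand-made feature, the rate at which a waveform crosses zero, can separate crude classes of audio; real systems learn far richer features from the data. The threshold and sample buffers below are invented.

```python
# Toy sketch: label a microphone buffer with a single hand-made feature, the
# zero-crossing rate. Thresholds and sample data are illustrative only.
import math

def zero_crossing_rate(samples):
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return crossings / max(len(samples) - 1, 1)

def label_snippet(samples):
    # Rapidly alternating signals (noisy, speech-like audio) cross zero far
    # more often than a slow, sustained tone.
    return "speech-like" if zero_crossing_rate(samples) > 0.1 else "tone-like"

tone = [math.sin(2 * math.pi * 3 * t / 100) for t in range(100)]  # slow sine wave
noisy = [((-1) ** t) * (t % 7) for t in range(100)]               # rapidly alternating
print(label_snippet(tone), label_snippet(noisy))  # tone-like speech-like
```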
Perhaps the most interesting area of research is in machines capable of functioning largely on their own. Research in autonomous machines has come a long way from primitive robots that can solve mazes. For example, Google's self-driving cars use image-recognition software, light-detection technology, GPS and proximity-detecting radar to almost entirely replace human drivers. And the U.S. Defense Advanced Research Projects Agency's new "Deep Learning" program aims to teach computers to interpret and classify images.
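A greatly simplified sketch of that sensor-fusion idea appears below; it is not Google's system, and the data structure, thresholds and decisions are invented purely for illustration of how several sensor readings can feed one driving decision.

```python
# Greatly simplified sketch of fusing several sensor readings into a single
# driving decision. All fields, thresholds and outputs are illustrative.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera_sees_pedestrian: bool    # from an image-recognition model
    lidar_min_distance_m: float     # nearest obstacle from light detection
    radar_closing_speed_mps: float  # how fast the gap ahead is shrinking

def decide(frame: SensorFrame) -> str:
    # Thresholds are illustrative, not calibrated values.
    if frame.camera_sees_pedestrian and frame.lidar_min_distance_m < 20:
        return "brake"
    if frame.radar_closing_speed_mps > 5 and frame.lidar_min_distance_m < 40:
        return "slow down"
    return "maintain speed"

print(decide(SensorFrame(True, 12.0, 2.0)))   # -> brake
print(decide(SensorFrame(False, 35.0, 8.0)))  # -> slow down
```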
Such research brings humanity closer to realizing what we've read about in science fiction--true, practical artificial intelligence. The future may come sooner than we think.
