As computers become more sophisticated, they sometimes seem almost human – especially when they refuse to load a page while you’re in a hurry. At the Intel Developer Forum in San Francisco, Intel revealed that it is taking that a step further by giving its new line of Ultrabooks “human-like senses to perceive the user's intentions,” thanks to a new generation of processors.
Intel sees the new processors not only as the way to faster, lighter, thinner, cooler, and more secure Ultrabooks, but also as a means of opening the door to a raft of new mobile designs and interactive software. One idea that Intel is actively promoting is to make its Ultrabooks into highly interactive platforms with advanced senses that allow for more intuitive, natural interaction between computer and user.
As part of this effort, Intel will release a beta of its Intel Perceptual Computing Software Development Kit to developers early next year. The idea behind perceptual computing is to give the Ultrabook human-like senses that allow the computer to perceive the user's intentions. Key to this is Intel’s Creative camera, a screen-mounted system that combines a low-power HD 720p image sensor designed to work at close range with a 3D depth sensor and a dual-microphone array.
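To give a flavor of what such a kit might look like from a developer's seat, here is a minimal C++ sketch of an event-driven perceptual pipeline. Every name in it (PerceptualPipeline, GestureEvent, onGesture, onSpeech) is an illustrative assumption; Intel's actual beta API had not been published at the time of writing.

// Hypothetical sketch of a perceptual-computing event loop.
// None of these names come from Intel's SDK; they are assumptions
// standing in for whatever the beta kit actually exposes.
#include <functional>
#include <iostream>
#include <string>

struct GestureEvent { std::string name; float x, y, z; };  // e.g. "swipe_left"

class PerceptualPipeline {                 // assumed wrapper over camera + mics
public:
    void onGesture(std::function<void(const GestureEvent&)> cb) { gestureCb_ = cb; }
    void onSpeech(std::function<void(const std::string&)> cb)   { speechCb_ = cb; }
    void run() { /* would poll the depth camera and microphones here */ }
private:
    std::function<void(const GestureEvent&)> gestureCb_;
    std::function<void(const std::string&)>  speechCb_;
};

int main() {
    PerceptualPipeline pipe;
    pipe.onGesture([](const GestureEvent& g) {
        std::cout << "gesture: " << g.name << " at depth " << g.z << "\n";
    });
    pipe.onSpeech([](const std::string& phrase) {
        std::cout << "heard: " << phrase << "\n";
    });
    pipe.run();  // blocks, dispatching events as the sensors report them
}

The design point any such API would have to get right is asynchrony: gestures, speech, and faces arrive on their own schedules, so callback registration of this sort is a natural fit.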
With the Creative camera, Intel hopes that software developers will come up with advanced applications that enhance user interaction. One area is speech recognition, which Intel wants to move beyond today’s "barking orders" stage toward more natural functions, such as real-time language translation. Another is facial analysis: recognizing faces, estimating a user’s age and gender, tracking facial features, and even something as simple as knowing whether the user is smiling.
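A facial-analysis module along those lines might hand applications a small bundle of per-face attributes, roughly as in the hypothetical sketch below; the FaceAttributes struct and detectFaces call are stand-ins invented for illustration, not Intel's interface.

#include <iostream>
#include <vector>

// Hypothetical per-face result a facial-analysis module might return.
struct FaceAttributes {
    int   estimatedAge;    // rough age estimate from the image sensor
    bool  isSmiling;       // expression flag
    float trackX, trackY;  // face position for frame-to-frame tracking
};

// Assumed detector call; a real SDK would consume a camera frame here.
std::vector<FaceAttributes> detectFaces() {
    return { {34, true, 0.5f, 0.4f} };  // canned sample data for illustration
}

int main() {
    for (const FaceAttributes& face : detectFaces()) {
        std::cout << "age ~" << face.estimatedAge
                  << (face.isSmiling ? ", smiling\n" : ", not smiling\n");
    }
}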
Other uses for perceptual computing involve giving users a virtual interface. With close-range tracking, users can reach out and manipulate objects on screen, or control applications with simple gestures. Related to this is 2D/3D tracking that allows user images to blend into the action on the screen. As the computer tracks the user’s hands or facial features, users will be able to go beyond manipulating objects to doing things like trying on glasses in a virtual environment.
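In outline, mapping a tracked hand onto an on-screen object could work something like the following sketch; the HandPose type, the pinch flag, and sampleHand are again assumptions made for the sake of the example.

#include <cmath>
#include <iostream>

struct HandPose    { float x, y, z; bool pinching; };  // normalized coordinates
struct ScreenObject { float x, y; bool grabbed; };

// Assumed per-frame hand sample; a real tracker would read the depth camera.
HandPose sampleHand() { return {0.5f, 0.5f, 0.3f, true}; }

void update(ScreenObject& obj, const HandPose& hand) {
    float dx = hand.x - obj.x, dy = hand.y - obj.y;
    bool nearObject = std::sqrt(dx * dx + dy * dy) < 0.05f;
    if (hand.pinching && nearObject) obj.grabbed = true;   // pinch to grab
    if (!hand.pinching)              obj.grabbed = false;  // open hand releases
    if (obj.grabbed) { obj.x = hand.x; obj.y = hand.y; }   // drag with the hand
}

int main() {
    ScreenObject icon{0.5f, 0.5f, false};
    update(icon, sampleHand());
    std::cout << (icon.grabbed ? "icon grabbed\n" : "icon free\n");
}

A pinch-to-grab rule like the one above is just one plausible convention; the depth (z) channel from the camera is what would let an application distinguish a deliberate reach toward the screen from a hand merely passing in front of it.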
These capabilities are still more wish list than reality, but if you should someday see someone in a coffee shop gesturing wildly at their laptop, keep in mind that they aren’t necessarily raving mad.