Intel Pitches Visual Computing Power, Platform Power Savings

Santa Clara (CA) - Intel is hosting its traditional Research Day event for analysts and journalists today, giving insight into current research projects underway at the company. This year, Intel said it is highlighting 70 "futuristic projects and concepts" covering the areas of environment, healthcare, visual computing, and wireless mobility.

Chief technology officer Justin Rattner is hosting this year’s show-and-tell, providing a glimpse of how Intel is spending its $6 billion-a-year research budget to come up with products scheduled to launch within a 5-year time frame.

Among a myriad of demonstrations raining down on attendees, we found three research areas to be especially noteworthy.

There is undeniable excitement in the IT industry about visual computing and floating-point acceleration, which could bring the most significant performance increases in virtually every computing area in more than a decade. Visual computing, according to Intel, will bring life-like 3-D environments, immediate real-world analysis of video feeds, and more natural ways for people to interact with their devices.

Applications falling into this segment - for example, hand input that is rendered and recognized via a webcam in real time, or speech recognition that combines audio analysis with visual tracking of lip movements - require massive amounts of computing horsepower. That horsepower could come either from tapping into the floating-point performance of graphics cards or, in Intel’s strategy, from much more capable x86 many-core architectures.

At its research day, Intel showed a future car application with cameras serving as eyes and multi-core processor-based computers as the brain. The company believes that future cars will be able to identify other vehicles and pedestrians that get too close much more accurately, alert drivers, or take safe actions on their own to prevent accidents.

The car demonstration took advantage of Intel’s Ct programming research, a C/C++ language extension created in Intel’s labs, which enabled the program to scale seamlessly from two to eight cores to conduct its accident prevention work without requiring developers to write additional code or modify compilers. At its core, the programming approach is similar to Nvidia’s CUDA technology, which is also based on C/C++ extensions and lets developers tap into massively parallel hardware via a high-level programming language.
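
Intel has not published Ct’s syntax in detail, so the following plain C++ sketch only illustrates the underlying idea: the source expresses the data-parallel work once, and the runtime decides how many cores to spread it across. The pixel-threshold kernel and all names here are our own assumptions, not Ct code.

    // Hypothetical sketch only: Ct's actual syntax is not public. This
    // plain C++ version illustrates how the same source code scales to
    // however many cores the runtime detects.
    #include <algorithm>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Apply a per-pixel threshold; the work is chunked at run time based
    // on the detected core count, not hard-coded in the source.
    void threshold(std::vector<float>& pixels, float cutoff) {
        unsigned cores = std::max(1u, std::thread::hardware_concurrency());
        std::size_t chunk = (pixels.size() + cores - 1) / cores;
        std::vector<std::thread> workers;
        for (unsigned c = 0; c < cores; ++c) {
            std::size_t begin = c * chunk;
            std::size_t end = std::min(pixels.size(), begin + chunk);
            if (begin >= end) break;
            workers.emplace_back([&pixels, cutoff, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    pixels[i] = pixels[i] > cutoff ? 1.0f : 0.0f;
            });
        }
        for (auto& w : workers) w.join();
    }

    int main() {
        std::vector<float> frame(1920 * 1080, 0.5f);
        threshold(frame, 0.25f);  // runs on 2, 8, or more cores unchanged
        std::cout << "Used " << std::thread::hardware_concurrency()
                  << " hardware threads\n";
    }

The point of the design, in both Ct and CUDA, is that the kernel above never mentions a core count; moving from a dual-core to an eight-core machine changes the chunking at run time, not the source.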

On the other end of the spectrum, Intel has more thoughts on how to make computing platforms much more efficient. More than eight years after the debut of Intel’s SpeedStep power management technology, the company outlined an idea to monitor changes in a computer’s operation and reduce power to - or turn off entirely - portions of the system that are not in use, such as the radio or USB ports. This idea is not completely new: Rattner has been talking publicly about this research since the 2005 Intel Developer Forum, when he told conference attendees that power management would be extended to the complete PC platform, including chipsets.
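
Intel did not detail the mechanism, but conceptually this amounts to a policy loop that watches per-subsystem activity and gates whatever has been idle too long. The sketch below is a hypothetical C++ illustration; the subsystem names and the 30-second idle budget are our assumptions, not Intel’s design.

    // Hypothetical sketch of a platform-wide power management policy.
    // Subsystem names and thresholds are illustrative assumptions.
    #include <chrono>
    #include <iostream>
    #include <string>
    #include <vector>

    using std::chrono::seconds;

    // One gateable platform block, e.g. a radio or a USB controller.
    struct Subsystem {
        std::string name;
        seconds idle_for;     // time since last observed activity
        bool powered = true;
    };

    // Power-gate anything idle past its budget; wake anything active.
    // Real hardware would toggle clock and power gates here.
    void apply_policy(std::vector<Subsystem>& blocks, seconds budget) {
        for (auto& b : blocks)
            b.powered = (b.idle_for < budget);
    }

    int main() {
        std::vector<Subsystem> blocks = {
            {"radio", seconds(45)},
            {"usb", seconds(2)},
            {"display", seconds(0)},
        };
        apply_policy(blocks, seconds(30));  // assumed 30 s idle budget
        for (const auto& b : blocks)
            std::cout << b.name << ": "
                      << (b.powered ? "on" : "gated off") << "\n";
    }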

However, the company appears to be making progress: Intel claims early demonstrations of this work have shown power savings of more than 30% when a system is idle or lightly active. In the next few years, Intel researchers expect to extend these advances and demonstrate reductions in power consumption of 50% in any usage scenario. The company believes this technology will impact all of its platforms, ranging from mobile Internet devices (MIDs) to servers.

MIDs also are expected to get user-aware computing capabilities, in which a device is aware of its surroundings and interacts accordingly. Intel thinks speech interfaces are particularly suitable for small mobile devices because of their limited physical input and output channels. Intel researchers demonstrated a speech interface that handles the task of creating connections between two mobile devices and a wireless display, with the goal of sharing resources and services. In such a scenario, a user would be able to speak commands in a natural manner to, for example, sync a mobile device with a large-screen television to share recent photos.
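
Behind such a demo sits, at minimum, a layer that maps recognized utterances to pairing actions. The C++ sketch below is purely our own illustration of that dispatch step; the intents and actions are invented for the example.

    // Hypothetical sketch: map recognized speech intents to device
    // pairing actions. Intents and actions are invented for illustration.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::map<std::string, std::function<void()>> actions = {
            {"share photos",
             [] { std::cout << "Pairing device with TV to share photos\n"; }},
            {"disconnect",
             [] { std::cout << "Tearing down the connection\n"; }},
        };

        // In the demo this string would come from the speech recognizer.
        std::string intent = "share photos";
        auto it = actions.find(intent);
        if (it != actions.end())
            it->second();
    }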