Nvidia Announces New Drive CX And PX Automotive Tech At CES

As it turns out, Nvidia will be doing quite a bit in the automotive space, and the way we see it, it may even be exactly what the automotive industry needs.

The first announcement in the category was the Nvidia Drive CX, which the graphics card maker calls a "Digital Cockpit Computer." The idea is a single central computing system that drives all of the displays inside the car. Nvidia believes that future cars will have more and more screens built in, and that managing them all from one computer is what will make the concept shine. Today's high-tech cars only push around 700 thousand pixels, which isn't much, yet Nvidia built the Drive CX to be powerful enough to drive up to 16.6 million pixels. That makes sense: add a couple of passenger displays and you can easily end up with a very high pixel count. The Drive CX is based on the just-announced Tegra X1.
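
For a rough sense of scale, 16.6 million pixels works out to about two 4K panels. The resolutions below are our own illustrative assumptions, not Nvidia's stated panel mix:

```python
# Illustrative pixel budgets (assumed resolutions, not Nvidia's figures).
cluster_today = 1280 * 540          # a plausible current digital cluster, ~0.7 MP
drive_cx_budget = 2 * 3840 * 2160   # roughly two 4K displays

print(f"today: ~{cluster_today / 1e6:.1f} MP")        # ~0.7 MP
print(f"Drive CX: ~{drive_cx_budget / 1e6:.1f} MP")   # ~16.6 MP
```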

The most impressive part of the Nvidia Drive CX isn't the hardware, though, but what Nvidia intends to do with it. For it, the company built a runtime called Nvidia Drive Studio, which is essentially a game engine for car interfaces. It offers a huge range of customization options, and Nvidia showed off some of what it is capable of; it's nothing short of impressive. The demos included navigation with 3D maps, dynamic lighting to keep you focused on what matters, ambient occlusion, shadows, and more. Nvidia also demonstrated how customizable the gauges in the instrument cluster would be, and indicated that a car could hold multiple profiles, one for each driver. Because the Drive CX is so powerful, some of the instrument skins even offered advanced lighting features, including sub-surface scattering for simulating surfaces like car paint or carbon fiber, which can't simply be pasted over a polygon as a flat texture.

Naturally, all of it tied in nicely with the onboard infotainment system, which also offered a wide array of customization options. The idea behind Nvidia Drive Studio, however, isn't to provide a ready-to-go system, but to serve as a platform that car manufacturers can use to create their own experience for drivers. Certainly, once the APIs open up, or someone cracks them, there will be plenty of third-party modding going on.

Of course, what Nvidia can do in the automotive space doesn't end there. The company also announced the Nvidia Drive PX, a very advanced auto-pilot for cars. It, too, is based on the Tegra X1 SoC, but rather than using just one, it uses two, giving the Drive PX a total of 2.3 teraflops of computational power. The system is designed to work with 12 cameras and can process up to 1.3 gigapixels per second.

The Drive PX takes a fundamentally different approach to an auto-pilot compared to what we've seen before. Rather than relying on lasers, radar, and cameras, it uses those twelve cameras together with a massively complicated "Deep Neural Network." Using the two in tandem, the Drive PX is able to recognize various objects on the road, including pedestrians (even partially occluded ones), cars, vans, and more. It can also recognize speed cameras and police cars, which, we don't need to tell you, can be very useful.
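
To make the idea concrete: a neural network is, at its core, a learned function that maps image features to object classes. The sketch below is our own toy stand-in (a single linear layer with made-up weights, not Nvidia's deep network, which runs many convolutional layers on the GPU), but it shows the shape of the problem:

```python
import math

# Object classes the article mentions the Drive PX can recognize.
CLASSES = ["pedestrian", "car", "van", "speed camera"]

# Hypothetical learned weights: one row of per-feature weights per class.
# A real deep network learns millions of such weights from training images.
WEIGHTS = [
    [2.0, -1.0, 0.5],   # pedestrian
    [-0.5, 2.5, 0.0],   # car
    [-0.5, 1.5, 1.0],   # van
    [0.0, -1.0, 2.5],   # speed camera
]

def softmax(scores):
    # Turn raw class scores into probabilities that sum to 1.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    scores = [sum(w * f for w, f in zip(row, features)) for row in WEIGHTS]
    probs = softmax(scores)
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    return CLASSES[best], probs[best]

label, confidence = classify([1.8, -0.2, 0.1])  # made-up features
print(label)  # pedestrian
```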

The neural network is also constantly being updated. If a car doesn't recognize an object, or if an object turns out, on closer inspection, to be something different from what the Drive PX originally thought, the car sends the image data to Nvidia, which processes it and adds the new link to the neural network in the next update of the Drive PX system. That means that if one car on the road doesn't recognize a situation, the lesson gets contributed to the brains of every car using the system. This deep learning system is fully automated, too. With today's technology, a car can be sufficiently trained after about 40 hours, whereas in the past this would take months, if not years.
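
A much-simplified sketch of that fleet-learning loop (our illustration, not Nvidia's actual pipeline): a car that fails to recognize something uploads the image data, the central service folds it into the model, and the next update reaches every car in the fleet.

```python
class CentralService:
    def __init__(self):
        self.model = {"pedestrian", "car", "van"}  # stand-in for trained weights

    def report(self, unknown_object):
        # Stand-in for re-training the network on the reported image data.
        self.model.add(unknown_object)

    def latest_model(self):
        return set(self.model)

class Car:
    def __init__(self, service):
        self.service = service
        self.model = service.latest_model()

    def see(self, obj):
        if obj in self.model:
            return "recognized"
        self.service.report(obj)  # upload the image data to the service
        return "reported"

    def apply_update(self):
        # Stand-in for the next over-the-air Drive PX system update.
        self.model = self.service.latest_model()

service = CentralService()
car_a, car_b = Car(service), Car(service)

print(car_a.see("moose"))   # reported
car_b.apply_update()
print(car_b.see("moose"))   # recognized, though car_b never saw one itself
```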

The Drive PX system also includes a mapping system for a surround view. Using the multiple cameras, the computer can generate a single image of what's going on around the car and show it on a display from a chasing point of view, as you would see in a game. It wouldn't be safe to rely on while driving, of course, but for maneuvers such as parking it could be extremely useful.
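
As a toy illustration of the stitching idea (our own sketch, with character grids standing in for camera frames), placing the four sides around the car produces the single top-down composite a chase-camera view can then be rendered from:

```python
# Each "camera frame" is a tiny character grid; a real system would warp
# and blend actual camera images instead.
front = ["FF", "FF"]
back  = ["BB", "BB"]
left  = ["LL", "LL"]
right = ["RR", "RR"]
car   = ["CC", "CC"]   # the vehicle itself, drawn rather than photographed

def stitch(front, back, left, right, car):
    empty = ["..", ".."]  # corners no camera covers
    top    = [empty[i] + front[i] + empty[i] for i in range(2)]
    middle = [left[i] + car[i] + right[i] for i in range(2)]
    bottom = [empty[i] + back[i] + empty[i] for i in range(2)]
    return top + middle + bottom

for row in stitch(front, back, left, right, car):
    print(row)
```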

Because the Drive PX system also spatially maps the entire environment around itself, it can drive by itself, too. In fact, it can handle almost any situation, from long stretches of highway to city driving to parking. It even has an Auto-Valet system, with which the car can venture into an unknown parking garage, find an open spot, and park itself. Unlike BMW's system, which the German auto-maker is showing off this week as well, Nvidia's Drive PX won't need a map of the garage.

One thing we find noteworthy about this auto-pilot technology is how different groups take fundamentally different approaches to solving the same problem. Some companies use lasers, radar, and ultrasound to detect obstacles and map the environment around them, while Nvidia, in this case, has opted, rather radically, not to. The company didn't explicitly say it won't use those technologies, but from the demonstrations and the information we got at the press conference, it's more than clear that the neural network is the heart of its technology. The other systems may complete the same task, but they don't appear to have any learning technique in place and are simply programmed by hand to recognize various objects. Nvidia's neural network appears to make the sensors that other companies use redundant.

To finish off, we'd like to mention why this technology matters. The fact that Nvidia uses a hyper-advanced neural network or a highly customizable infotainment system isn't even the most important thing we learned today. What we consider most important is that Nvidia is doing this in the first place: the platform will be available for all car manufacturers to use, so a single system can be developed to whole new heights rather than the wheel being re-invented every time a new car model comes out. A system adopted by the industry at large is exactly what is needed to bring this level of technology into mainstream cars, because it also offers an upgrade path for the future. A car is good for 10, 15 or more years, while the technology inside it becomes obsolete after just a few. Heck, who wants to buy a car today that still has a phone built in? (BMW's autonomous parking demo: https://www.tomshardware.com/news/bmw-i3-autonomous-parking,28222.html.)

Follow Niels Broekhuijsen @NBroekhuijsen. Follow us @tomshardware, on Facebook and on Google+.

  • RCguitarist
    Ugh, I hope this crashes and burns for Nvidia. Just what everyone needs: more electronics and computers for people to mess with while driving. Not to mention that their demo gauges look very hard to read at a glance. Not everything has to look like a video game.

    Also, I look at computer screens all day at work and love gaming in my free time but I want at least some time where I can look at something real, like a gauge or knobs in my car.
  • clonazepam
    Interesting tech but really not excited to have more high-priced replacement parts that won't last the life of the car... of course, for those worried about price, hope analog is still around for a long long time.
  • Duckferd
    There's a specific reason why the other automakers are using things beyond cameras for their autopilot concepts.

    What happens at night once it is too dim for cameras? What about inclement weather? How far can a camera see when you're cruising at 160 km/h+, as on the autobahn? Realistically you need a suite of sensors for any sort of failsafe.

    I can see emerging autopilot tech as a good companion for driving to safeguard against possible accidents and provide useful information to the driver, and perhaps it will transform inner city public transportation (autonomous taxis, to go along with rail). I just don't see "autonomous" driving, as people imagine it, being realistic for real world use cases outside of maybe mundane freeway driving.
  • CoolDark
    RCguitarist said it best. Theoretically, more touch screens or illuminated displays means requiring more attention from the driver, which results in one of two outcomes: 1) the driver distributes their attention appropriately and slows down or drives cautiously while paying extra attention to the center touch display, or 2) the driver tells themselves "what's the point of all this new tech if I need to drive extra conservatively" in order to use it without crashing the car. As you can imagine, most people will fall under number 2.

    Personally, I don't believe that people need to adapt to tech; it should be tech that adapts to us. People want knobs and some tactile feel. A touch screen requires you to look away from the road; a knob does not. Placing and replacing your fingers on a sensitive touch screen as the car is bouncing around at over 115 km/h on the freeway is not as easy as it sounds. For every bump the hand moves, and for every hand movement the driver looks down to re-position it.

    Lastly, the simpler something is, the faster the mind can understand it. The fact is that a simple digital display with a needle and numbers made with basic graphics is easier to understand than one with complex graphics. Easier to read means faster for the mind to comprehend.

    Just because someone CAN do something, doesn't mean they SHOULD.
  • none12345
    Agree with the other posters. Especially in a car, anything that requires a second or extra glance while driving is BAD. The average person is already a danger on the road; they don't need more distractions.

    For gauges: speed should be a 1-3 digit number. A gauge takes far longer to read than a digital number. For things like oil pressure, who the hell needs to know that while driving? You only need to know if it's good or bad. Good doesn't even need an indicator; bad should be a colored icon. Same with the battery (in a fuel-powered car). A tach is useless in an automatic car, so don't even display it; in a manual it's useful, and a colored slider-style bar showing optimal shift points would help there.

    A touch screen should never be used for any operation made while driving. You need tactile response so you can do it without taking your eyes off the road. If tactile response isn't practical (i.e., looking at a map), then you shouldn't be allowed to operate it while in motion. Humans suck at driving and shouldn't be given more distractions to suck more.
  • Misunderstanding
    Neural networks have to do with how the information from the sensor is "processed" and how the decision is made. They have nothing to do with using cameras versus radar... and many of these other systems probably use neural networks. It happens that a lot of the research on neural networks has been done on optical images for internet search, so Nvidia can borrow from this. Automated cars will require other sensors because optical cameras cannot see in 100% of conditions.