Tim Sweeney: GPGPU Too Costly to Develop

Epic Games' chief executive officer Tim Sweeney recently delivered the keynote at the High Performance Graphics 2009 conference, saying that it is "dramatically" more expensive for developers to create software that relies on GPGPU (general-purpose computing on graphics processing units) than to create comparable programs for CPUs.

He offered an example: if it costs some amount "X" to develop an efficient single-threaded algorithm for CPUs, a multithreaded version costs twice that amount, a Cell/PlayStation 3 version three times that amount, and a current GPGPU version a whopping ten times that amount. Anything over 2X, he said, is simply "uneconomic" for most software companies. To harness today's technology, companies must lengthen development time and pour more money into the project, two costs that few companies can currently afford.
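To put the multipliers in perspective, here is a minimal sketch, not from Sweeney's talk and with all names hypothetical, of the same one-line computation written first for a CPU and then as a CUDA version. The extra scaffolding the GPU version needs (device allocation, copies across the bus, a launch configuration) hints at where the added cost comes from.

```cuda
#include <cuda_runtime.h>

// CPU version: one line of real logic.
void scale_cpu(float* data, int n, float k) {
    for (int i = 0; i < n; ++i) data[i] *= k;
}

// GPU version: the same logic, plus the scaffolding GPGPU requires.
__global__ void scale_kernel(float* data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;  // guard threads past the end of the array
}

void scale_gpu(float* host_data, int n, float k) {
    float* dev_data = nullptr;
    cudaMalloc(&dev_data, n * sizeof(float));            // allocate device memory
    cudaMemcpy(dev_data, host_data, n * sizeof(float),
               cudaMemcpyHostToDevice);                  // ship input across the bus
    int threads = 256;
    int blocks = (n + threads - 1) / threads;            // choose a launch configuration
    scale_kernel<<<blocks, threads>>>(dev_data, n, k);
    cudaMemcpy(host_data, dev_data, n * sizeof(float),
               cudaMemcpyDeviceToHost);                  // ship results back
    cudaFree(dev_data);
}
```

The arithmetic is identical in both versions; everything else is overhead that has to be written, debugged, and tuned for each GPU architecture, which is one plausible reading of where the 10X figure comes from.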

But according to X-bit Labs, Sweeney spent most of his speech predicting the death of GPUs (graphics processing units) in general, or at least as we know them today. This isn't the first time he has predicted the technology's demise: he offered similar predictions of doom in an interview last year. Basically, the days of DirectX and OpenGL are coming to a close.

"In the next generation we'll write 100 percent of our rendering code in a real programming language--not DirectX, not OpenGL, but a language like C++ or CUDA," he said last year. "A real programming language unconstrained by weird API restrictions. Whether that runs on Nvidia hardware, Intel hardware or ATI hardware is really an independent question. You could potentially run it on any hardware that's capable of running general-purpose code efficiently."
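As a rough illustration of what rendering in a general-purpose language might look like (this is a hypothetical sketch, not Epic's code), a CUDA kernel can shade pixels straight into a plain memory buffer with no DirectX or OpenGL calls at all:

```cuda
#include <cuda_runtime.h>

// One thread per pixel writes an RGBA value into an ordinary buffer;
// no graphics API is involved in the shading itself.
__global__ void shade(uchar4* framebuffer, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    unsigned char r = (unsigned char)(255 * x / width);   // horizontal gradient
    unsigned char g = (unsigned char)(255 * y / height);  // vertical gradient
    framebuffer[y * width + x] = make_uchar4(r, g, 64, 255);
}
```

Presenting that buffer on screen still takes some platform glue, but the shading logic itself is ordinary code, unconstrained by any graphics API.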

  • ravewulf
    I'm not going to comment on the economics since I don't know enough about it (though I'd guess those figures are a bit inflated), but the benefits of multithreading must be weighed to determine whether it's a good fit for the application. Video compression needs it; a simple text editor, less so.

    As for the "death" of GPUs, I doubt that will happen anytime soon. Far off in the future, probably.
  • C++ or CUDA? Is Nvidia sponsoring this guy? If CUDA were so freaking wonderful in its present state, there'd be more applications that use it. The fact of the matter is that 99.999% of applications run fast enough on a modern CPU without any good reason to run them on a GPGPU.

    What's more absurd is him making that ridiculous rant without giving a nod to OpenCL, which aims to do everything he talks about...
  • eyemaster
    Well, he has a valid point: writing straightforward code for a CPU is much simpler than writing against an API like DirectX or OpenGL. Then again, those APIs do a good job of hiding the video card hardware behind a common interface. So that's a major con and a major pro for video cards and their APIs.

    Until processors are fast enough to replace everything today's video cards can do, at the same speed, I don't see video cards going anywhere anytime soon. And by the time CPUs are fast enough, GPUs will have advanced far enough to still make a big difference; they progress together. Where games are concerned, though, I can see the CPU going away or becoming less significant than the video card.
  • DXRick
    Reminds me of the CEO of my last company. He would tell us that he had no clue what we did every day, then go on to tell us that it had to be faster and cheaper.

    Sweeney obviously has no clue what DirectX and OpenGL are, but is convinced there are better ways to do graphics processing. I know how the programmers at Epic feel.
  • deltatux
    I would rather listen to John Carmack talk about the state of gaming technology than listen to Tim Sweeney's baseless talk.
  • Blessedman
    I think Tim is just wrong. I mean, maybe(!) when CPUs have 32 cores (2016?) you could afford to gobble up 20 or so for rasterization and vertex setup. But doesn't that kind of push into the territory of programming for a GPGPU? I thought that's what DirectX and OpenGL were for, so developers didn't have to keep reinventing the wheel... This is the perfect time for a small team of highly motivated young programmers to spend a few summers in their basement and bang out the next-generation engine for GPGPUs.
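    A minimal sketch of that core-splitting idea, with every name hypothetical: a software renderer on a many-core CPU could hand each core a horizontal strip of the framebuffer, which is essentially the data-parallel style GPUs already use.

    ```cpp
    #include <algorithm>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Fill one horizontal band of the framebuffer; real code would
    // rasterize triangles here instead of clearing pixels.
    void render_band(uint32_t* fb, int width, int y0, int y1) {
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < width; ++x)
                fb[y * width + x] = 0xFF000000u;  // placeholder: opaque black
    }

    // Split the screen into one band per core and render them in parallel.
    void render_parallel(uint32_t* fb, int width, int height, int cores) {
        std::vector<std::thread> workers;
        int band = (height + cores - 1) / cores;
        for (int c = 0; c < cores; ++c) {
            int y0 = c * band;
            int y1 = std::min(height, y0 + band);
            if (y0 < y1) workers.emplace_back(render_band, fb, width, y0, y1);
        }
        for (auto& t : workers) t.join();  // wait for every band to finish
    }
    ```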
  • ptroen
    Well, for starters you have a PCI bus that is just, well, slow. The significance of this is that a developer needs to write instructions (i.e., code) that take this into account and then call the API to do stuff (shader code, etc.). To complicate matters further, you have PPU code (physics code, which is really just glorified collision detection) that may sit on the graphics card, but not necessarily. There is also the sound card, which will trigger positional acoustic events in 3D space. In effect, all of the game's code has to be load-balanced with the PCI bus working overtime.

    Going back to the GPU compiler topic: what would be nice is to just use C++ templates or the .NET CLR, stick in some templates, and churn out quick, load-balanced CPU/GPU code. However, regardless of the language at hand, the developer will still have to construct a good object design, which takes time. The worst case is a bit of code duplication because of the different languages, which is what we have right now; honestly, it's not that bad unless you don't understand the architecture, in which case creep sets in. For example, within DirectX you have constant buffers and vertex types where you define the structures that communicate information back and forth between CPU land and GPU land. Since the primitive types are standardized (IEEE 754 32-bit floats), it's pretty trivial for a programmer to know what's going where; a rough sketch of that shared-structure idea follows. However, I must agree it's quite annoying to try to integrate a physics API with the GPU.
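    A minimal illustration of that shared-structure point, with all names hypothetical: in CUDA, one struct definition compiles for both the CPU and the GPU, so both sides agree on the layout, and the per-frame upload is exactly the kind of bus traffic the comment says has to be budgeted.

    ```cuda
    #include <cuda_runtime.h>

    // One definition shared by host and device code, in the spirit of a
    // DirectX constant buffer. 32-bit IEEE floats mean the same thing on
    // both sides, so the layout matches.
    struct FrameConstants {
        float view_proj[16];  // 4x4 view-projection matrix
        float light_dir[4];   // xyz direction plus padding
        float time;           // elapsed seconds
        float _pad[3];        // keep the struct 16-byte aligned
    };

    __constant__ FrameConstants g_frame;  // GPU-side copy in constant memory

    // One small host-to-device copy per frame.
    void upload_constants(const FrameConstants& cpu_side) {
        cudaMemcpyToSymbol(g_frame, &cpu_side, sizeof(FrameConstants));
    }
    ```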
  • hellwig
    This all comes down to a lack of understanding of the underlying architecture. I worked at a company that enforced what it called 3-View design. The problem with that design system was that determining what the system should do and determining what the system should be made of (i.e., hardware) were independent processes. That meant you developed a system without knowing the limitations of the hardware it would run on. I pointed this out to the instructor teaching the class they offered at work, and he couldn't even respond.

    A good example of this is Crysis. How much money did the producers put into giving that game cutting-edge graphics and effects, only to find that consumers needed a multi-thousand-dollar computer to benefit from all that hard work, so most people would never see it?
  • CEOs tend to be business people with degrees in Business Administration who rarely know the details of what they manage. This guy clearly doesn't understand code; he's taking "recommendations" and "data, in an executive format" that have been regurgitated up the chain of command a few times, combined with some arrogance and self-importance.

    We used to be a nation where inventors founded a company to create and sell their invention, now we have a bunch of spoiled, rich-kid schmucks running "established, brand name" companies. It's nearly impossible to start a new company now, and any person with brilliant ideas has to find a job at an established company, and then have their ideas "managed" by a bunch of ignorant MBAs. Then we wonder what happened to America...
  • frozenlead
    Frozenlead: Developers too lazy to learn to multithread/GPGPU optimize code.

    Since when in the tech field do people complain about moving forward? If you can't keep up with the train, you lose.