
Tim Sweeney: GPGPU Too Costly to Develop

Source: Tom's Hardware US | 34 comments

Epic Games' chief executive officer Tim Sweeney recently spoke during the keynote presentation of the High Performance Graphics 2009 conference, saying that it is "dramatically" more expensive for developers to create software that relies on GPGPU (general-purpose computing on graphics processing units) than to create programs for CPUs.

He provided an example: if it costs some amount "X" to develop an efficient single-threaded algorithm for CPUs, a multithreaded version costs roughly double, a Cell/PlayStation 3 version three times as much, and a current GPGPU version a whopping ten times as much. Anything over 2X, he said, is simply "uneconomic" for most software companies. Harnessing today's technology means longer development times and bigger budgets, two luxuries few companies can currently afford.

But according to X-bit Labs, Sweeney spent most of his speech predicting the death of GPUs (graphics processing units) in general, or at least as we know them today. This isn't the first time he has predicted the technology's demise: he offered similar predictions of doom in an interview last year. In short, the days of DirectX and OpenGL are coming to a close.

“In the next generation we’ll write 100-percent of our rendering code in a real programming language--not DirectX, not OpenGL, but a language like C++ or CUDA," he said last year. "A real programming language unconstrained by weird API restrictions. Whether that runs on Nvidia hardware, Intel hardware or ATI hardware is really an independent question. You could potentially run it on any hardware that's capable of running general-purpose code efficiently."

Comments
  • 3 Hide
    ravewulf , August 14, 2009 7:40 PM
    I'm not going to comment on the economics as I don't know enough about it (although I would guess the figures are inflated a bit), but the benefits of multithreading must be weighed to determine whether it is a good fit for the application. Video compression needs it; a simple text editor, less so.

    As for the "death" of GPUs, I doubt that will happen anytime soon. Far off in the future, probably.
  • 20 Hide
    Anonymous , August 14, 2009 7:49 PM
    C++ or CUDA? Is Nvidia sponsoring this guy? If CUDA was so freaking wonderful in its present state, there'd be more applications that use it. The fact of the matter is that 99.999% of applications run fast enough on a modern CPU without any good reason to run them on GPGPU.

    What's more absurd is him making that ridiculous rant without giving a nod to OpenCL, which aims to do everything he talks about...
  • 0 Hide
    eyemaster , August 14, 2009 8:08 PM
    Well, he has a valid point: writing straightforward code for a CPU is much simpler than writing for an API like DirectX or OpenGL. But those APIs do provide a good way of hiding the hardware differences between video cards behind a common interface. So, a major con and a major pro for video cards and their APIs.

    Until processors are fast enough to replace everything today's video cards can do, at the same speed, I don't see video cards going anywhere anytime soon. And by the time CPUs are that fast, GPUs will also have advanced enough to still make a big difference; they progress together. Where games are concerned, I can sooner see the CPU becoming less significant than the video card.
  • 4 Hide
    DXRick , August 14, 2009 8:21 PM
    Reminds me of the CEO of my last company. He tells us that he has no clue what we do every day and then goes on to tell us that it must be faster and cheaper.

    Sweeney obviously has no clue what DirectX and OpenGL are, but is convinced there are better ways to do graphics processing. I know how the programmers at Epic feel.
  • 10 Hide
    deltatux , August 14, 2009 8:23 PM
    I would rather listen to John Carmack talk about the state of gaming technology than to listen to Tim Sweeney's baseless talks.
  • 1 Hide
    Blessedman , August 14, 2009 8:27 PM
    I think Tim is just wrong. Maybe(!) when CPUs have 32 cores (2016?) you could afford to gobble up 20 or so for rasterization and vertex setup, but doesn't that push into the area of programming for a GPGPU anyway? I thought that's what DirectX and OpenGL were for, so developers didn't need to keep reinventing the wheel... This is the perfect time for a small team of highly motivated young programmers to spend a few summers in their basement and bang out the next-generation engine for GPGPUs.
  • 2 Hide
    ptroen , August 14, 2009 8:36 PM
    Well, for starters you have a PCI bus that is, well, slow. The significance of this is that a developer needs to write instructions (i.e. code) that take this into account and then call the API to do stuff (shader code, etc.). To complicate matters further, you have PPU code (physics code, which is really just glorified collision detection) that may sit on the graphics card, but not necessarily. There is also the sound card, which will trigger positional acoustic events in 3D space. In effect, all of the game's code has to be load-balanced, with the PCI bus working overtime.

    Going back to the GPU compiler topic: what would be nice is to just use C++ templates or the .NET CLR, stick in some templates, and churn out quick load-balanced CPU/GPU code. Regardless of the language at hand, though, the developer will still have to construct a good object design, which takes time. The worst case is a bit of code duplication because of different languages, which is what we have right now; honestly, it's not that bad unless you don't understand the architecture, and then creep sets in. For example, within DirectX you have constant buffers and vertex types where you can define the structures that communicate information back and forth between the CPU/motherboard and GPU land. Since the primitive types are standardized (IEEE 32-bit floats), it's pretty trivial for a programmer to know what's going where. However, I must agree it's quite annoying to try to integrate a physics API with the GPU.
  • 12 Hide
    hellwig , August 14, 2009 8:50 PM
    This all comes down to a lack of understanding of the underlying architecture. I worked at a company that was enforcing what they called 3-View design. The only problem with this design system was that determining what the system should do and determining what the system should be made of (i.e. hardware) were independent processes. This meant you developed a system without knowing the limitations of the hardware it's running on. I pointed this out to the instructor who was teaching the class they offered at work, and he couldn't even respond.

    A good example of this is Crysis. How much money did the producers put into that game to give it cutting-edge graphics and effects, only to find out consumers needed a multi-thousand dollar computer to benefit from that hard work, thus most people would never see it?
  • 6 Hide
    Anonymous , August 14, 2009 8:58 PM
    CEOs tend to be business people with degrees in Business Administration who rarely know the details of what they manage. This guy clearly doesn't understand code; he's taking "recommendations" and "data, in an executive format" that have been regurgitated up the chain of command a few times, combined with some arrogance and self-importance.

    We used to be a nation where inventors founded a company to create and sell their invention, now we have a bunch of spoiled, rich-kid schmucks running "established, brand name" companies. It's nearly impossible to start a new company now, and any person with brilliant ideas has to find a job at an established company, and then have their ideas "managed" by a bunch of ignorant MBAs. Then we wonder what happened to America...
  • 5 Hide
    frozenlead , August 14, 2009 9:14 PM
    Developers are too lazy to learn to multithread or to optimize code for GPGPU.

    Since when in the tech field do people complain about moving forward? If you can't keep up with the train, you lose.
  • 2 Hide
    falchard , August 14, 2009 9:37 PM
    For a company that develops the most-used engine in video games, that's a poor ideology. Two cores is too much money? If a competitor develops a GPGPU version, they will definitely face a backlash in engine sales.
  • 0 Hide
    Wayoffbase , August 14, 2009 11:47 PM
    deltatux: "I would rather listen to John Carmack talk about the state of gaming technology than to listen to Tim Sweeney's baseless talks."

    That's a tough call. I'll choose option 3 if I can: ignore both.
  • 0 Hide
    Uncle Meat , August 15, 2009 12:26 AM
    http://en.wikipedia.org/wiki/Tim_Sweeney_(game_developer)

    Not exactly someone who I would consider a clueless CEO.
  • 1 Hide
    omnimodis78 , August 15, 2009 12:43 AM
    _barraCUDA: "C++ or CUDA? Is Nvidia sponsoring this guy? If CUDA was so freaking wonderful in its present state, there'd be more applications that use it. The fact of the matter is that 99.999% of applications run fast enough on a modern CPU without any good reason to run them on GPGPU. What's more absurd is him making that ridiculous rant without giving a nod to OpenCL, which aims to do everything he talks about..."

    Well, I can tell you first-hand that when I enable CUDA in CoreAVC for my HD movies, CPU utilization drops from about 10-20% to mostly 1%. Yes, same speed, but why not utilize the power of the GPU for tasks that it can very easily perform?
  • 0 Hide
    LORD_ORION , August 15, 2009 2:11 AM
    Carmack has a unique position: he surrounds himself with the most elite programmers, and is one of the most elite programmers himself. I get the impression that Sweeney has more business acumen, and thus approaches the situation from that perspective.

    In the end, I agree with Sweeney... having a unified programming architecture is more cost-effective... and I see Larrabee's architecture ultimately dominating mainstream PC gaming.
  • 3 Hide
    Anonymous , August 15, 2009 3:06 AM
    Having done some GPGPU work myself, I can agree that it's a significant amount of work to port general purpose code to the GPU. The amount of effort depends on exactly what you're trying to do, and sometimes the whole exercise can end up producing a much smaller speedup than expected.

    The biggest hurdle is the fact that GPGPU is still not standardized (DirectX 11/compute & OpenCL are just starting out), so there are several standards to work with. The algorithm still needs to be written for the CPU, as there are still users out there without proper GPGPU hardware support. All of that adds up to a lot of risk, which the financial guys don't like much - so the 10x figure doesn't seem too crazy.

    Of course, when a GPGPU'd algorithm works well, it's pretty incredible.
  • 0 Hide
    bk420 , August 15, 2009 4:06 AM
    GPGPU is the future. OpenCL will be the greatest programming standard ever to happen to Linux and Windows.
  • 2 Hide
    ash9 , August 15, 2009 4:21 AM
    Sweeney's still pissed at ATI for pre-releasing Quake 3

    asH
  • 0 Hide
    ash9 , August 15, 2009 4:36 AM
    Sweeney can only be talking about Larrabee, which is conceptually a bunch of CPUs strapped together. If i7s are $500 and up, that one will cost $3,000 in lots of 1,000. Reduced wafer size or not, it's the hyperthreading and all that overhead that's costly, and I don't see signs of Intel spending on R&D lately.
    asH
  • 0 Hide
    MamiyaOtaru , August 15, 2009 6:59 AM
    Right, because Larrabee will be made of a bunch of their most expensive processors. I'm pretty sure it's actually a bunch of shrunk P1s or P2s, isn't it?