We've heard rumors that Nvidia has been dipping its toe into the x86 CPU market, and today the graphics company made an announcement related to that – but it's not what you think.
Nvidia CEO Jen-Hsun Huang revealed that the company will bring its CUDA programming language to "any computer, or any server in the world," with the help of the Portland Group (PGI).
Specifically, this means that systems without Nvidia GPUs will be able to process CUDA code, giving the company its answer to Microsoft's DirectCompute and the more open OpenCL.
Nvidia says that CUDA without a GPU will run best on multicore CPUs and will be ideal for servers.
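For context, CUDA kernels are written as annotated C functions. The sketch below is illustrative only (it is not from PGI's or Nvidia's announcement); it shows the kind of source a CUDA-to-x86 compiler would retarget, mapping thread blocks onto CPU cores instead of GPU multiprocessors:

```cuda
// Illustrative CUDA kernel: element-wise vector addition.
// The same source compiles for a GPU with nvcc; under a CUDA-x86
// compiler the thread blocks would be scheduled across CPU cores.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

// Host-side launch syntax is unchanged between targets:
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

The point of the approach is that existing CUDA source would not need rewriting; the compiler decides whether the grid of threads runs on a GPU or on host cores.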
What's the point of CUDA without a GPU? Multicore or not.
Well, now you can run CUDA code WITHOUT the need for an Nvidia graphics card, and it will also allow them to compete with OpenCL and DirectCompute.
And maybe it's cheaper to use what you already have (in this case, say, a supercomputer with 100 CPUs); it would be cheaper to simply use those 100 CPUs instead of spending more cash on GPUs.
It's all in the text.
They've got the best hardware, and now they need the whole software community to be able to lend a helping hand. It's okay; two steps ahead, they plan to have a GPU that can run without a CPU. A greater CUDA crowd will be beneficial.
joytech22: "...allow them to compete with OpenCL and DirectCompute."
Except that OpenCL and DirectCompute are compatible with all GPUs. CUDA is useless without GPU acceleration.
joytech22: "Well now you can run CUDA code WITHOUT the need for an Nvidia graphics card and will also allow them to compete with OpenCL and DirectCompute. Think about it... it's all in the text."
CUDA was touted as Nvidia's answer, giving exceptional processing power over x86, after Jen-Hsun Huang bashed x86 for so long.
Now they port THEIR pride and joy to the thing they bagged for so long?
What I meant was: now that CUDA is supported on CPUs as well as GPUs, it allows more people to use the language without needing to spend thousands. It's just a money saver for some, and it lets others mess around with CUDA without strict hardware requirements. It should work, just not as fast.
Cuda is Aussie slang for two in the pink and one in the stink... Americans may know this as "shocker": http://www.urbandictionary.com/define.php?term=cuda
CUDA also happens to be the best. The bee's knees. The duck's nuts. In AU, anyway.
joytech22: "Well now you can run CUDA code WITHOUT the need for an Nvidia graphics card and will also allow them to compete with OpenCL and DirectCompute. And maybe it's cheaper to use what you have already (in this case, say, a supercomputer with 100 CPUs); it would be cheaper to simply use those 100 CPUs instead of spending more cash on GPUs. It's all in the text."
4chan made something called Tripper for CUDA. It runs tripcodes on CUDA, arguably the best way to get the tripcodes you want. A single-core CPU can do, I believe, 1-5 million trips a second; an i7 920 can do, I think, 22 million; a GTX 285 is capable of almost 2 billion, and if it's not faked I have seen numbers up to 15 billion. I know for a fact this is the BEST use of the GPU in practice as a GPGPU. Without GPU: slow. With GPU: fast.
Point being that 100 CPUs are outdone by a quad-SLI setup; if you have the means to get 100 CPUs, just get 4 GPUs.
Is it me, or are they saying that it's possible to run CUDA on ATI/AMD as well? :))