Epic Demos Samaritan on Nvidia's Next-Gen Kepler GPU
During this year's Game Developers Conference, Epic showed the famous Samaritan demo running on Nvidia's next-generation Kepler GPU.
At GDC 2011, Epic introduced the Samaritan demo, which gave attendees a look at the next generation of videogame graphics. The demo utilized a host of advanced rendering techniques to create a realistic environment. The catch in 2011 was that it took three GeForce GTX 580s to run the demo in real time. At this year's GDC, Epic showed the demo running on just one next-generation Nvidia Kepler GPU.
In addition to the power of the Kepler GPU, the demo benefits from Nvidia's Fast Approximate Anti-Aliasing (FXAA), a technique developed as a faster alternative to Multisample Anti-Aliasing (MSAA), the form of anti-aliasing most commonly seen in today's games. FXAA smooths out jagged edges to improve visual fidelity, and anti-aliasing is key to creating the incredible sights of Samaritan.
Image Credit: GeForce
"Without anti-aliasing, Samaritan's lighting pass uses about 120MB of GPU memory. Enabling 4x MSAA consumes close to 500MB, or a third of what's available on the GTX 580. This increased memory pressure makes it more challenging to fit the demo's highly detailed textures into the GPU's available VRAM, and leads to increased paging and GPU memory thrashing, which can sometimes decrease framerates. FXAA is a shader-based anti-aliasing technique, however, and as such doesn't require additional memory, so it's much more performance-friendly for deferred renderers such as Samaritan," according to Ignacio Llamas, a Senior Research Scientist at Nvidia who worked with Epic on the FXAA implementation.
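The memory figures Llamas quotes line up with simple back-of-envelope arithmetic: MSAA stores multiple samples per pixel in every render target, so buffer memory scales with the sample count. The sketch below uses illustrative assumptions (a 2560x1440 frame and four RGBA16F render targets), not Epic's actual G-buffer layout:

```python
# Rough estimate of why 4x MSAA inflates a deferred renderer's
# render-target memory. The resolution and buffer layout here are
# illustrative guesses, not Samaritan's real configuration.

def gbuffer_bytes(width, height, targets, bytes_per_texel, msaa_samples=1):
    """Total memory for a set of render targets at a given MSAA level."""
    return width * height * msaa_samples * targets * bytes_per_texel

# Assume a 2560x1440 frame with four RGBA16F render targets (8 bytes each).
no_aa = gbuffer_bytes(2560, 1440, targets=4, bytes_per_texel=8)
msaa4 = gbuffer_bytes(2560, 1440, targets=4, bytes_per_texel=8, msaa_samples=4)

print(f"No AA:   {no_aa / 1e6:.0f} MB")   # ~118 MB, near the quoted 120MB
print(f"4x MSAA: {msaa4 / 1e6:.0f} MB")   # ~472 MB, near the quoted 500MB
```

Since FXAA is a post-process shader that reads the already-resolved image, it skips this multiplied storage entirely, which is exactly why it suits a memory-hungry deferred renderer.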
Image Credit: GeForce
As reported earlier, we recently saw the first images of the Nvidia GK104 Kepler card, and now we are starting to see some of Kepler's visual performance benefits. Most reports have Kepler slated for release later this month, though there is still no official word from Nvidia.
VERY good graphics in that vid by the way... I'm very impressed.
go nvidia go!
This would be awesome for all of us...these prices would be fair
Also, the video is just that: a video. Not real gameplay.
But honestly, I expect good things from Kepler. Cannot wait to see its benchmarks!
I highly doubt the new Nvidia cards will be able to surpass what AMD has done at this stage. At best they'll hope to match them and beat them here and there through better drivers and optimization. If the GTX 680 were going to perform so much better, there'd be samples out there already to convince customers not to buy AMD cards right now.
Still a good idea to wait another month and see what makes the best bargain before buying; AMD card prices will probably drop slightly.
Have to totally agree with this. Although I believe that Nvidia will hold a few surprises, AMD has still done very well with their 7xxx series.
I "FAIL"to see how a card that is 10% faster in BF3 is a "FAIL". Sounds like a victory to me.
Looking forward to detailed benchmarks. Until then, there's no point in insulting anyone's fanboy sensibilities.
Anyway, I am very excited to see what happens with pricing and performance during this go-around with AMD/Nvidia. Time to upgrade soon!
GK104 is supposed to have 1536 cores WITHOUT hot-clocking, making them effectively a LOT slower than a Fermi CUDA core. Besides, GK100/110 is supposed to have more than that.
That card clearly has a GK-104, NOT a GTX 680 GPU. 680 will likely have a GK-100 or 110. I'm not too good with Nvidia GPU names so I'm not sure which it will be, but it's probably one of those two. That is something like a GTX 660 or 660 TI. GK-104 is supposed to have 1536 cores, NOT the high end GPUs in the 680, 670, maybe 660 TI. Of course, Nvidia might change names around too, so who knows?
Also, GK-104 is about the same size as Tahiti, so it is fairly similar in performance; AMD and Nvidia get similar performance per square millimeter of die. Considering that, and the increased power usage over the 7970, the 7970 and this GTX-whatever probably overclock to about the same performance. The Nvidia card listed at that site is about 10% faster and uses about 20% more power, which seems fairly reasonable if the main difference is clock rates (remember, pushing clock rates up increases power superlinearly, since voltage usually has to rise along with frequency, while performance only increases linearly).
Basically, the 7970 and this Nvidia card can probably run at similar performance while using similar amounts of power. However, it clearly states that the card is only UP TO 10% faster and is usually slower in 3DMark, so it probably doesn't beat the 7970 anyway, in which case Nvidia could have failed. Then again, all of this assumes that this link of yours isn't fake. Since it has little to say that even convinces me it's true (it partially contradicts what Nvidia has been saying), I'm leaning toward it being BS.
For example, it says that this card has hot-clocking despite Nvidia claiming that Kepler doesn't, and the power usage of 1536 CUDA cores that still have hot-clocking would be through the roof EVEN on the 28nm node. Then we see it has much less memory bandwidth than the 7970. The picture also has no defining features to tell us anything about the card, save for it having a fan, being black, and having Nvidia on it. For all we know, it's some old card. Nothing on the card indicates otherwise.