MSI Introduces its GeForce GTX 770 4 GB Graphics Card

With the upcoming next-gen consoles having effectively 8GB of VRAM (although obviously no game will use that much, since the VRAM is shared with the rest of the system), 4GB should be useful IMO for playing upcoming console ports. Titanfall, for example, is slated to use 5GB of VRAM, so even this card won't be able to handle the full-resolution textures of that particular title.
 
Seems the RAM in the Xbox One will be 8GB of DDR3 while the PS4's will be 8GB of GDDR5. I can't seem to find an official bus width, but I see specs saying 176GB/s on the PS4, while a Tahiti-based video card runs at well over 260GB/s. Also keep in mind that part of the 8GB in these systems will be reserved for the OS alone, probably roughly 2GB to 3GB.
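For reference, here's where those bandwidth figures come from; a minimal back-of-the-envelope sketch (the 256-bit/5.5 GT/s PS4 and 384-bit/5.5 GT/s Tahiti numbers are the commonly reported specs, plugged in here as assumptions):

```python
# Peak GDDR5 bandwidth: bus width in bytes * effective data rate (GT/s).
def bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gtps

print(bandwidth_gbs(256, 5.5))  # PS4: 256-bit @ 5.5 GT/s -> 176.0 GB/s
print(bandwidth_gbs(384, 5.5))  # Tahiti (HD 7970 class): 384-bit @ 5.5 GT/s -> 264.0 GB/s
```

That 264GB/s is the stock HD 7970 figure; overclocked Tahiti boards are where "well over 260GB/s" comes from.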
 

jrstriker12

Honorable
Jun 13, 2013
Wonder how much difference 4GB will make vs. the 2GB version, and whether it's worth the extra cash. Most reviews of the standard 770 don't seem to indicate that RAM is a bottleneck.
 

SNA3

Honorable


4GB is for three-screen setups, or the coming 4K monitors (using quad SLI, ofc lol).

2GB is enough for one monitor running at 1920x1080.
 

madogre

Distinguished
Dec 7, 2008
A GTX 770 with 4GB of VRAM is a waste; you get no benefit from having more than 2GB due to the 256-bit memory bus. The 8GB of VRAM in the consoles is probably as useless as tits on a boar, but we will have to wait and see.
While we're at it, a 780 with 6GB will give you the same as one with 3GB. Truth is the Titan only needs 3GB as well; the 384-bit memory bus just can't use the 6GB effectively.
 

SNA3

Honorable


Prove what you just said.
 
Guest
MSI has been getting great reviews on its video cards. I haven't really seen ASUS or EVGA reviews for the 700 series.
 

madogre

Distinguished
Dec 7, 2008
26
0
18,530


Sigh, can you not use Google yourself?
#1 http://hexus.net/tech/reviews/graphics/43109-evga-geforce-gtx-680-classified-4gb/?page=13
The Good
Cool, near-silent, and quick
Completely non-standard design begs to be pushed hard
Healthy factory-based GPU overclock
Can be made into a beast with over-voltage and better cooling
Remains power-efficient

The Bad
No out-of-the-box memory overclocking
Usefulness of 4GB memory buffer is questionable

#2 http://www.guru3d.com/articles_pages/palit_geforce_gtx_680_4gb_jetstream_review,26.html
Final words and conclusion
The 4GB -- Realistically there was not one game that we tested that could benefit from the two extra GB's of graphics memory. Even at 2560x1600 (which is a massive 4 Mpixels resolution) there was just no measurable difference.

#3 http://hexus.net/tech/reviews/displays/57849-asus-pq321q-4k-gaming-tried-tested/?page=7
Having larger framebuffers remains more of a marketing tool than a real-world benefit for even enthusiast gamers... there simply won't be many harder cases than rendering to a 4K screen at ultra-quality settings and with a semi-pointless 4x MSAA invoked. Should you really want to have pristine edges in games and aren't happy with the default render from an 8.3MP screen, we'll doubtless see other, more efficient techniques such as FXAA take over.

 

Are they equally sure it will be of NO USE DOWN THE ROAD? That, IMO, is what having 4GB of VRAM is about: NOT for current games, obviously, but for future next-gen titles (i.e., over the next few years). Massive texture loads in particular will surely make effective use of more RAM. The issue is that there are currently ZERO titles on which to test this. That will change. You would argue that the 5GB of VRAM that Titanfall is reportedly going to use won't actually work properly because the memory bandwidth will be too low. Can you prove that?

 

asukafan2001

Distinguished
Apr 21, 2010


According to the developer, it technically uses more than 5GB when you factor in the cloud computing that Microsoft is doing with the Xbox One.
 

madogre

Distinguished
Dec 7, 2008

Yes, I am sure. When that happens, the GPU horsepower of the current card will not be able to keep up with it. There is a reason reference cards stick with memory amounts matched to the memory bus width. With GDDR5 you are able to run a 256-bit bus with 2GB of VRAM. Back in the old days you saw a 256-bit bus with 512MB because it was GDDR2, like in the GeForce 5900 cards; once they moved up to GDDR3 you saw a 256-bit bus with a massive 1GB of VRAM. Now we have GDDR5, and it has the speed to work well with 2GB on a 256-bit bus, but once you move up to 3GB of GDDR5 you really need the 384-bit bus to utilize it, like with the GTX 780 and Titan.
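To put rough numbers on that intuition, here's a quick sketch comparing peak bandwidth per gigabyte of VRAM across these configurations (the data rates are the commonly quoted reference specs, plugged in as assumptions, and bandwidth-per-GB is my own crude proxy, not an official metric):

```python
# Peak bandwidth per GB of VRAM: a crude proxy for how quickly a card
# can actually touch everything it stores.
configs = [
    ("GTX 770 2GB (256-bit @ 7.0 GT/s)", 256, 7.0, 2),
    ("GTX 770 4GB (256-bit @ 7.0 GT/s)", 256, 7.0, 4),
    ("GTX 780 3GB (384-bit @ 6.0 GT/s)", 384, 6.0, 3),
    ("Titan 6GB (384-bit @ 6.0 GT/s)", 384, 6.0, 6),
]
for name, bus_bits, gtps, vram_gb in configs:
    bw = bus_bits / 8 * gtps  # peak bandwidth in GB/s
    print(f"{name}: {bw:.0f} GB/s total, {bw / vram_gb:.0f} GB/s per GB of VRAM")
```

Doubling the VRAM on the same bus halves the bandwidth available per gigabyte, which is the core of the argument above.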

The Titan was not really intended for gamers, and had AMD not dropped the ball with the 7970, letting NVIDIA use the mid-range GK104 chip as the 680, we would have seen reference cards with 3GB instead of 6GB, like the workstation counterparts it's made from.

 

SNA3

Honorable


Who said you will get a benefit today?

This is about the future. If the game's buffers and data don't need more than 2GB, of course you won't see a difference, but when they start to need it, you will. And you can still use three monitors at higher resolutions, like 1600p or 1440p monitors.

Yes, most people won't need more than 2GB of GDDR5, BUT there are people who pay $10,000 to $20,000 for their machines, and they will make use of it.
 

yyk71200

Distinguished
Mar 10, 2010


By the time you need more VRAM, you will need a stronger card anyway.
 

SNA3

Honorable


lol, don't worry, four GTX 770 4GBs in SLI will be enough for that :)

This stuff is for the rich =)
 
If I remember correctly, the 770 is a reworked 680, so the card itself is likely suffering from a lack of bandwidth at the PCIe slot... though the extra buffer will massively help dual- and triple-monitor users.
 
Some selective quoting is being done on the usefulness of 4GB. I use 2GB for single-screen builds, 4GB for multi-monitor setups.

http://www.guru3d.com/articles_pages/palit_geforce_gtx_680_4gb_jetstream_review,26.html

Now the setup could benefit from triple monitor setups at 5760x1080 (which is a 6 Mpixels resolution), but even there I doubt if 4 GB is really something you'd need to spend money on. It might make a difference at 16xAA and the most stringent games, or if you game in 3D Stereo and triple monitor gaming -- I mean sure -- at any point graphics memory can and will run out. There's one exception to the rule, and that's Skyrim all beefed, tweaked and modded upwards. But the universal question remains, is it worth it investing in that extra memory?

http://hexus.net/tech/reviews/graphics/43109-evga-geforce-gtx-680-classified-4gb/?page=8

Of more interest is the 2,204MB framebuffer usage when running the EVGA card, suggesting that the game, set to Ultra quality, is stifled by the standard GTX 680's 2GB.
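For perspective on why resolution alone rarely fills a 2GB card, here's a rough render-target estimate (my own back-of-the-envelope math: 4 bytes per pixel per buffer, scaled by the MSAA sample count; textures, geometry, and driver overhead all come on top):

```python
# Approximate render-target memory: width * height * 4 bytes (RGBA8),
# multiplied by the MSAA sample count. Texture assets come on top of this.
def render_target_mb(width: int, height: int, msaa: int = 1) -> float:
    return width * height * 4 * msaa / 1024**2

for label, w, h in [("1920x1080", 1920, 1080),
                    ("2560x1600", 2560, 1600),
                    ("5760x1080 surround", 5760, 1080),
                    ("3840x2160 (4K)", 3840, 2160)]:
    print(f"{label}: {render_target_mb(w, h):.0f} MB plain, "
          f"{render_target_mb(w, h, 4):.0f} MB at 4x MSAA")
```

Even 4K at 4x MSAA is only on the order of a hundred-odd MB per render target; it's the texture assets (hello, modded Skyrim) that actually push past 2GB.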
 

OcelotRex

Honorable
Mar 4, 2013

You stopped on the Hexus.net article right before it explained this:

"This graph shows that while the average and per-second framerate of the EVGA card is better than the 6GB-equipped Sapphire TOXIC, both produce a near-identical number of sub-33ms frames. Of more interest is the 2,204MB framebuffer usage when running the EVGA card, suggesting that the game, set to Ultra quality, is stifled by the standard GTX 680's 2GB. We ran the game on both GTX 680s directly after one another and didn't feel the extra smoothness implied by the results of the 4GB-totin' card."

So the difference was only noticeable in terms of FPS, not to the reviewers' eyes.

The argument that the consoles have 8GB of RAM and that more RAM will therefore be necessary in the future is specious. The consoles have 8GB of total system RAM; most PCs today have 4, 8, or 16GB of system RAM to go along with the 2, 3, 4, or 6GB of video RAM on their GPU. That's systems ranging from 6-22GB of combined RAM, much higher than the consoles.

Both consoles run a 256-bit system memory bus, and the PS4 has the most shaders at 1152, the Xbox One at 768. While shaders are not everything per se, the specifications of the GPU on the consoles' SoCs are quite similar to the GTX 760. AMD's competing product in that arena is the 7950, which enjoys a higher shader count, a 384-bit memory bus, and 3GB of RAM; both perform very similarly in real-world tests. All this is to say that there's no real reason to believe, at this point, that the consoles' 8GB of system RAM on a 256-bit memory bus is going to drive VRAM requirements above 2GB on a 256-bit bus at 1080p.

Of course these are my theories - don't take it from me. Instead take it from this article:

http://www.pcper.com/news/Editorial/Epic-Games-disappointed-PS4-and-Xbox-One

Back in 2011, the Samaritan demo was created by Epic Games to persuade console manufacturers; it represented how Epic expected the next generation of consoles to perform. They said, back in 2011, that the demo would theoretically require 2.5 teraFLOPs of performance for 30FPS at true 1080p; ultimately it ran on a PC with a single GTX 680, approximately 3.09 teraFLOPs.

This required performance, again approximately 2.5 teraFLOPs, is higher than what is theoretically possible on the consoles, which deliver less than 2 teraFLOPs. The PC may have more overhead than consoles, but the PS4 and Xbox One would be too slow even with zero overhead.
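For anyone wondering where those teraFLOP figures come from, here's the standard peak-FP32 arithmetic (shader counts and clock speeds are the commonly cited specs, plugged in as assumptions):

```python
# Peak FP32 throughput: shaders * 2 ops per clock (fused multiply-add) * clock.
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(peak_tflops(1536, 1.006))  # GTX 680:  ~3.09 TFLOPs
print(peak_tflops(1152, 0.800))  # PS4:      ~1.84 TFLOPs
print(peak_tflops(768, 0.853))   # Xbox One: ~1.31 TFLOPs
```

Both console figures land under the 2.5 teraFLOPs Epic asked for, which is exactly the shortfall the article describes.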

Now, of course, this does not account for reducing quality where it will be the least noticeable and other cheats. Developers are able to reduce particle counts and texture resolutions in barely-noticeable places; they are also able to render below 1080p or even below 720p, as was the norm for our current console generation, to save performance for more important things. Perhaps developers might even use different algorithms which achieve the same, or better, quality for less computation at the expense of more sensitivity to RAM, bandwidth, or what-have-you.

But, in the end, Epic Games did not get the ~2.5 teraFLOPs they originally hoped for when they created the Samaritan Demo. This likely explains, at least in part, why the Elemental Demo looked a little sad at Sony's press conference: it was a little FLOP.
 

tajisi

Distinguished
Jan 15, 2011
It's an incremental step in the right direction. That being said, being able to load large texture packs would be useful. :) As for there being no benefit to the extra memory, well, that gets said every two or so years. Then a game pulls a GTA IV and starts gobbling up the VRAM. Fidelity increases are not necessarily tied to polygon count or overall scene complexity. Tech luddites are always griping that we can't make use of this, that extra of that is pointless. Ignore it and move on.

I had a friend who, around the time of GTA IV, griped that quad-core CPUs were useless due to blah blah blah threading and bandwidth. I could play the game. He couldn't. :)
 