Hexus.net benchmarks Nehalem - Page 3

August 19, 2008 11:14:36 PM

NMDante said:


Quote:
In fact, Johan has a fairly good article up over at AT, where he explicitly states this is not a gamer's CPU.

So, what does it mean when the gaming benchmarks surpassed those of the Phenom 9950 BE? That it is a better gaming CPU than the Phenom it was compared to?


Well, if you are directing that at me, it means I'll remove my Agena from my AM2+ mobo, drop in a Deneb when it's released, and away I'll go. Yourself? I assume you are willing to spend a lot of money in exchange for zero gaming improvements?
August 19, 2008 11:25:38 PM

Hmm, I wasn't aware they were falling behind. Last time I checked, even poor old K8 processors could keep up for gaming, so I fail to see why we need a 50% increase in IPC. When CPUs can't keep up, then I'll be interested in your argument, but until that happens, i7 is an improvement over Core2, so stop whining.
August 19, 2008 11:27:00 PM

Look, it doesn't matter what anyone says about Intel CPUs; the AMD fanbois will start another argument. So I will be the first to say that Intel CPUs don't boot faster than AMD's. (Not true, and it doesn't make any difference, but I'll be the first to say it.)
August 19, 2008 11:29:39 PM

Who cares about Phenom? They aren't leading. They aren't the best. They don't hold all the influence. And more people care about the proper usage of a GPU than you're letting on. So I'm disappointed. And people's perceptions have changed. If you don't see it, then it's just my opinion; it won't change a thing, as far as I can see. While GPUs increase in ability 100% in 20 months, and CPUs gain maybe 40% over that same time, and today we have games and graphics cards that still await a CPU able to run those games without a bottleneck, I still ask my question, and if I'm disappointed that no one knows, it'll be apparent next year. Before, I said it'd happen, that CPUs would bottleneck games, and it's happened. This is a trend. Intel wants everything to be multithreaded. It simply isn't going to happen in everything, especially gaming. It'll cost too much. You think the CPU market is flaky? Here today, gone tomorrow is more the mantra in game dev. If the trend doesn't change, we're going to see a slowdown in actual game dev, again because of multithreading. It is important that Intel puts out a faster CPU for gamers. Trust me. It's only going to get worse.
August 19, 2008 11:31:53 PM

piesquared said:
Well, if you are directing that at me, it means I'll remove my Agena from my AM2+ mobo, drop in a Deneb when it's released, and away I'll go. Yourself? I assume you are willing to spend a lot of money in exchange for zero gaming improvements?


When did Intel ever state this was a "gaming" chip? But regardless of that, the lower memory latency will help on the id and Epic engines; both are sensitive to memory latency.

The fact of the matter is that this is a refresh to address the Core uArch's lack of scalability on 2+ sockets. Core uArch chips tend to be top dog in single-socket machines, but in HPC-like environments the poor little fella is memory starved. Intel has solved that with this revision of the Core uArch, so why is it so hard for some of you guys to get that through your heads? Stop, review the current solution, review older solutions, review competitors' solutions, then come to a conclusion. It's pretty easy to tell from the actual specifications of the processor what it's targeting; if they wanted more "gaming" performance, there would have been changes to the integer units again.

All in all, if the numbers hold even at 15% overall in single-socket configurations, that's a pretty good deal for us. But where it's going to count is the multi-socket setups, and that is where we are going to see these 40% claims: not code-specific gains, but simply raw throughput in those configurations.

Word, Playa.
August 19, 2008 11:37:33 PM

Spud, will the next iterations be more integer-oriented?
August 19, 2008 11:39:48 PM

piesquared said:
Well, if you are directing that at me, it means I'll remove my Agena from my AM2+ mobo, drop in a Deneb when it's released, and away I'll go. Yourself? I assume you are willing to spend a lot of money in exchange for zero gaming improvements?


Easy. I won't need to upgrade my CPU, even after you upgrade yours. I am not an early adopter of new technology. Again, what is your point? Where did I mention anything about upgrading or how you would upgrade? Why change your argument to the upgrade path instead of gaming results?

So I simply ask again: if you think Nehalem is merely a server CPU, based on one blog (in which the author even admitted to not having tested or tried a Nehalem CPU) and the Hexus review, what does that make the Phenom 9950 BE, which it was compared to, and beat, in those same gaming benchmarks?
August 19, 2008 11:43:38 PM

JAYDEEJOHN said:
Who cares about Phenom? They aren't leading. They aren't the best. They don't hold all the influence. And more people care about the proper usage of a GPU than you're letting on. So I'm disappointed. And people's perceptions have changed. If you don't see it, then it's just my opinion; it won't change a thing, as far as I can see. While GPUs increase in ability 100% in 20 months, and CPUs gain maybe 40% over that same time, and today we have games and graphics cards that still await a CPU able to run those games without a bottleneck, I still ask my question, and if I'm disappointed that no one knows, it'll be apparent next year. Before, I said it'd happen, that CPUs would bottleneck games, and it's happened. This is a trend. Intel wants everything to be multithreaded. It simply isn't going to happen in everything, especially gaming. It'll cost too much. You think the CPU market is flaky? Here today, gone tomorrow is more the mantra in game dev. If the trend doesn't change, we're going to see a slowdown in actual game dev, again because of multithreading. It is important that Intel puts out a faster CPU for gamers. Trust me. It's only going to get worse.


And which games would those be? I'm quite interested in this...

Oh, and in case you haven't noticed, Intel makes its money on servers, just like AMD. Gamers make up a very, VERY small part of their market. It's not who they cater to.
August 19, 2008 11:47:31 PM

JAYDEEJOHN said:
Spud, will the next iterations be more integer-oriented?


Not the slightest clue. If Intel would talk more about their future products that are, say, 4-7 years from being announced, we could have a really cool conversation about it, but sadly they don't. For **** and giggles, the revision after the 32nm refresh would be my guess, but that's just me; I'd gauge my guess off the product, and I have to assume FPU performance will be the defining "handicap" of this current revision. But like I said, that's just my thoughts on the matter.

Word, Playa.
August 19, 2008 11:51:38 PM

Crysis, AoC, others. It used to be like this: I don't have the link, but Tom's did an article on it. At 2.2 GHz, a K8 would max out all gfx cards. No matter what you changed, the games wouldn't produce higher fps, and that was using an FX60. That changed a little when the 8800GTX came out; then, having a C2D at 2.6 was adequate. Now we have the 4870X2. Getting the actual highest framerates in a game is no longer solely determined by the GPU. You have to OC the CPU, and in some games it doesn't stop: the more you OC, the more fps you get. Next-gen GPU? And the next? You see where I'm going with this?
August 20, 2008 12:00:08 AM

Like I said, I'm NOT downing Nehalem. For what it's meant for, it's great, as I said. It's just not the benchmark improvements I wanted to see, so I'm disappointed. Maybe we are all wrong and it'll show better than what we've seen. Who knows. I hope you're getting my point, and not reading anything anti in what I've said, because other than disappointment, it's good to see Nehalem coming out.
August 20, 2008 12:12:11 AM

Here's an example where a CPU is OCed to 4.4 GHz and the reviewer is still talking CPU limitations. http://www.driverheaven.net/reviews.php?reviewid=609&pa... As gaming matures, the demands on the CPU are also higher, with physics, AI, etc. Plus, the speed of the cards is growing at a faster rate than CPUs. It's apparent in this gen especially, and we will see more of it in the future, only worse. At these resolutions and prices, people don't want less than 60 fps, period. That's also why I said earlier that the Hexus review was crap: higher res doesn't mean no CPU bottleneck, as they implied.
August 20, 2008 12:24:18 AM

cjl said:
You keep calling X48 boards $400.

Let's see how true that is:

Asus Rampage Formula = $299
DFI LP LT X48 = $250
Asus Rampage Extreme = $399
Gigabyte GA-X48 = $225
Intel reference X48 board = $250
Asus P5E deluxe X48 = $219
MSI X48C Platinum = $230


I'm only seeing one $400 board out of that sample from the top-rated X48 boards on Newegg. More importantly, the boards can easily be found in the $200 range (and a fairly decent selection too, not just a single cheap board). Likely the X58 will be the same, with the top-end boards coming in around the $400 mark but easily attainable, and still very good boards available in much lower price brackets.



Those are X48 boards, and I said CLOSE TO $400. You obviously didn't find Foxconn's board, which is about $360 or so. The point is that people are saying that X58 boards will be more expensive, so any Strikers, etc. will be close to $500. I'm not knocking them; I'm saying that a $235 Phenom and a $179 790FX will allow play up to 2560 with the right GPU.
August 20, 2008 3:29:46 AM

gallag said:
IDF Nehalem news: http://www.techreport.com/articles.x/15344

1:29PM: Nehalem demo of Lost Planet: Colonies. Yorkfield vs. Nehalem. Nehalem is over 50% faster thanks to eight cores, faster cores.

Now Cinebench with overclocked Nehalem and Yorkfield. Over 30% faster on Nehalem. 45850 Cinebench R10 rendering score.


Nice find. Interesting how no one commented on it. Oh well.

piesquared said:
Well, if you are directing that at me, it means I'll remove my Agena from my AM2+ mobo, drop in a Deneb when it's released, and away I'll go. Yourself? I assume you are willing to spend a lot of money in exchange for zero gaming improvements?


You... look at the above quote. What's that show ya?

BaronMatrix said:
Those are X48 boards, and I said CLOSE TO $400. You obviously didn't find Foxconn's board, which is about $360 or so. The point is that people are saying that X58 boards will be more expensive, so any Strikers, etc. will be close to $500. I'm not knocking them; I'm saying that a $235 Phenom and a $179 790FX will allow play up to 2560 with the right GPU.


But that will be very GPU limited, especially @ 1920x1200+. And the funny thing is, you always talk about Intel's highest-end mobo but never take into account their mainstream mobos, like whatever, say, the P55 (just a guess) would sell for. The P45 just came out a month ago and there are already mobos out cheaper than my P5K-E that are equivalent (well, better, cuz PCIe 2.0 and such); a good example is the P5Q Pro.

Everyone jumped on the "rumor" that Nehalem will not OC, and it changed from "the Lynnfield chips won't" to "only the X58 mobos will." Well, the problem is that Intel just went and showcased a Nehalem chip self-OCing for single-threaded apps. It shuts off the unneeded cores and OCs the chip as high as it can while staying within the thermal limits, to help complete the task as fast as possible.

I would love to buy a Core i7 rig to mess with, but I am waiting to see what the next step (Westmere, or whatever AMD has planned) brings. Personally, I feel 32nm will improve the heat a lot, and that's what I am looking forward to.
August 20, 2008 3:57:57 AM

I saw that post on Anand's, and it was pulled for some reason. Not sure as to why. I still have hope that Nehalem will be a great gamer's CPU. I just wish I knew why that was pulled.
August 20, 2008 6:18:42 AM

BaronMatrix said:
Those are X48 boards, and I said CLOSE TO $400. You obviously didn't find Foxconn's board, which is about $360 or so. The point is that people are saying that X58 boards will be more expensive, so any Strikers, etc. will be close to $500. I'm not knocking them; I'm saying that a $235 Phenom and a $179 790FX will allow play up to 2560 with the right GPU.

I wasn't trying to find every board, just a relatively random sampling of them. I'm sure there are several priced higher than the average for the group I found, and several lower. The point stands that X58 will likely be much cheaper than you were trying to imply. Also, people are saying X58 will be more expensive, but is there any real reason to think so? If anything, the chipset itself will be cheaper, due to the memory controller being on the CPU. The six memory slots add a bit of complexity, though. I would be surprised if an average "lower-end" X58 board (in quotes because no X58 board could really be considered low end) came in much above $250, and even the top boards shouldn't exceed the $400 or so price point that the top-end ones are at right now.
August 20, 2008 8:23:52 AM

Just touching on the topic of CPU scaling with the faster GPUs:
http://www.pcgameshardware.com/aid,647744/Reviews/GT200...

It appears a C2D @ 3.6GHz is the 'sweet spot' as far as the GTX280 is concerned, though minimum framerates do keep rising in Crysis even at 4GHz. Average framerates stay practically the same from 3GHz to 4GHz though, so it must be a short scene that only lasts a few seconds, like a massive explosion or something.

Now, I'm not sure exactly what speed Nehalem will need to run at to match a C2D @ 3.6GHz or 4GHz in games, though I am pleasantly surprised by the Lost Planet benchmarks. But it just goes to show you really need a massively multithreaded game to truly take advantage of Nehalem.
August 20, 2008 12:54:27 PM

JAYDEEJOHN said:
I saw that post on Anand's, and it was pulled for some reason. Not sure as to why. I still have hope that Nehalem will be a great gamer's CPU. I just wish I knew why that was pulled.



I didn't see the one at Anand, but it's here at Tom's: http://www.tomshardware.com/news/Turbo-Mode-Intel,6193....

Quote:
Leaked information also indicates that production CPUs will self-overclock by up to two speed bins, for example jumping from 3 GHz to 3.2 GHz or even 3.4 GHz.

With this kind of headroom, it will be interesting to see how far enthusiasts will be able to push Core i7 processors. Even Intel indicated to us in June that Core i7 silicon is extremely healthy. Our own tests revealed that Core i7 processors will have a considerable amount of headroom in terms of clock speeds.
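As a rough illustration of the turbo behaviour that quote describes, here is a minimal sketch. The bin size, the boost rule, and the function itself are illustrative assumptions, not Intel's actual algorithm; only the "up to two bins, within thermal limits" idea comes from the article.

```python
# Toy model of the "turbo" behaviour quoted above: power down idle cores,
# then boost the active ones by up to two speed bins while thermal headroom
# remains. Bin size and boost rule are assumptions for illustration only.

BIN_MHZ = 133  # hypothetical bin granularity

def turbo_mhz(base_mhz: int, active_cores: int, total_cores: int,
              has_thermal_headroom: bool) -> int:
    """Clock the chip might run at for the current workload."""
    if not has_thermal_headroom:
        return base_mhz                  # no headroom: stay at the base clock
    idle = total_cores - active_cores
    # Fewer active cores leave more thermal budget, so allow more bins.
    bins = 2 if idle >= total_cores // 2 else 1
    return base_mhz + bins * BIN_MHZ

# Single-threaded app on a quad: three cores idle, so two bins of boost.
print(turbo_mhz(2933, active_cores=1, total_cores=4, has_thermal_headroom=True))
# -> 3199, i.e. roughly the two-bin jump the article describes
```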

August 20, 2008 2:29:05 PM

Well, currently, with the overall improvements in Nehalem, even barring multithreading, 3.4 GHz seems fairly close to optimal, though that's with the current G280, a non-OCed 65nm card; by December the 55nm part comes out, and this isn't even mentioning the 4870X2, which in certain games no CPU will max out, even Nehalem. I'd seen the PCGH article before, and was reading it when I came to the CPU section just now. Glad you posted it; I was about to. I'm still under the impression that at some point next year, with many demanding games out by then, and much more powerful GPUs out, CPUs in some scenarios will actually, truly be a real bottleneck.
August 20, 2008 2:38:19 PM

Here's the most telling argument about Hexus' comments on res and CPUs vs. GPUs, and the theory that at higher res the GPU is the bottleneck: http://www.pcgameshardware.com/?menu=browser&article_id... Notice what happens when the clocks on these CPUs are cranked. Surely it's a GPU bottleneck? The average framerates are stuck at 17, meaning no further improvement; thus a GPU bottleneck. But look further, at the MINIMUM fps. As the CPU is OCed, we see continuous scaling to better framerates, up to the max the CPU was OCed. To truly reach the max, the minimum would be barely below the average, but such a CPU doesn't exist. This is, as I said, just the beginning of this. We need to understand a little bit more about it, but it's here. Four months ago, I wasn't even taken seriously with my comments on this. Everyone said, "What games?" Well, there are some now, and with better cards, it's showing up. As time goes along, will the CPU keep up? Will there be more Crysis-type games out, though this time everyone will be waiting for a better CPU?
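To make that min-vs-average argument concrete, here is a toy frame-time model (my own construction, with made-up numbers, not data from the PCGH article): each frame takes as long as the slower of its CPU and GPU work, so a GPU-bound average stays flat while the CPU-heavy worst-case frames, which set the minimum fps, keep scaling with clock speed.

```python
# Toy model: frame time = max(cpu_time, gpu_time). All numbers are invented
# purely to illustrate flat average fps alongside rising minimum fps.

def fps(cpu_ms_at_2ghz: float, gpu_ms: float, cpu_ghz: float) -> float:
    cpu_ms = cpu_ms_at_2ghz * (2.0 / cpu_ghz)  # CPU work shrinks with clock
    return 1000.0 / max(cpu_ms, gpu_ms)        # the slower unit sets the pace

typical = dict(cpu_ms_at_2ghz=20.0, gpu_ms=58.0)    # GPU-bound scene
worst   = dict(cpu_ms_at_2ghz=120.0, gpu_ms=25.0)   # CPU-bound spike (physics, AI)

for ghz in (2.0, 3.0, 4.0):
    print(f"{ghz:.1f} GHz: avg ~{fps(**typical, cpu_ghz=ghz):.0f} fps, "
          f"min ~{fps(**worst, cpu_ghz=ghz):.0f} fps")
# 2.0 GHz: avg ~17 fps, min ~8 fps
# 3.0 GHz: avg ~17 fps, min ~12 fps
# 4.0 GHz: avg ~17 fps, min ~17 fps   <- minimum climbs toward the flat average
```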
August 20, 2008 3:03:15 PM

JAYDEEJOHN said:
Well, currently, with the overall improvements in Nehalem, even barring multithreading, 3.4 GHz seems fairly close to optimal, though that's with the current G280, a non-OCed 65nm card; by December the 55nm part comes out, and this isn't even mentioning the 4870X2, which in certain games no CPU will max out, even Nehalem. I'd seen the PCGH article before, and was reading it when I came to the CPU section just now. Glad you posted it; I was about to. I'm still under the impression that at some point next year, with many demanding games out by then, and much more powerful GPUs out, CPUs in some scenarios will actually, truly be a real bottleneck.


It could well be, but the problem is that the age of massive clockspeed gains with each node is long gone; the Netburst era is now nothing more than a distant memory. Multicore is the future, but making games massively multithreaded is apparently very difficult (I'm no programmer, just going by what I've read from game developers). We're slowly getting there, though: Lost Planet shows it's possible, so it's a start. From what I've read, Valve seems to be embracing multithreading as well, and we know the UT3 engine can take advantage of quads, and a lot of upcoming games will be based off that engine.

I guess the onus is on programmers to find a way to harness all the extra cores that go mainly unused today, because we sure ain't gonna get any massive IPC or clockspeed gains in the next 2 years, until the next 'tock' or Nehalem replacement.
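As a sketch of what "harnessing the extra cores" can look like, here is a minimal task-parallel game-loop skeleton. The subsystem names and the assumption that they are independent within a frame are mine, purely for illustration; real engines do something far more elaborate.

```python
# Minimal sketch of task-level parallelism in a game loop: independent
# per-frame jobs (AI, physics, particles) run on a pool, and the renderer
# waits for all of them. Subsystem functions are illustrative stand-ins.
from concurrent.futures import ThreadPoolExecutor

def update_ai(tick):        return ("ai", tick * 2)
def update_physics(tick):   return ("physics", tick + 1)
def update_particles(tick): return ("particles", tick)

def run_frame(pool, tick):
    futures = [pool.submit(job, tick)
               for job in (update_ai, update_physics, update_particles)]
    return dict(f.result() for f in futures)  # join point before rendering

with ThreadPoolExecutor(max_workers=4) as pool:
    for tick in range(3):
        print(run_frame(pool, tick))  # merged frame state for the renderer
```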
August 20, 2008 3:18:26 PM

It may open the door for many GPU apps as well, or PhysX-on-GPU scenarios. It's not just hard for game devs; it's mainly the cost. Question: will the auto-OC in Nehalem be able to be overridden, so there's still the ability to manually OC?
August 20, 2008 3:20:35 PM

JAYDEEJOHN said:
Crysis, AoC, others. It used to be like this: I don't have the link, but Tom's did an article on it. At 2.2 GHz, a K8 would max out all gfx cards. No matter what you changed, the games wouldn't produce higher fps, and that was using an FX60. That changed a little when the 8800GTX came out; then, having a C2D at 2.6 was adequate. Now we have the 4870X2. Getting the actual highest framerates in a game is no longer solely determined by the GPU. You have to OC the CPU, and in some games it doesn't stop: the more you OC, the more fps you get. Next-gen GPU? And the next? You see where I'm going with this?


Honestly, I don't think we can assume games are being CPU bottlenecked, jaydee. Many games don't even show a super performance increase when going from an X2 5600+ to a C2Q QX9770 with some new GPUs. While you may argue that's because "development has stopped," I would say it's because it doesn't matter that much.

Take a look at this: http://www.techreport.com/articles.x/14573/4

Heck, even Supreme Commander doesn't show much love for highly clocked quad-cores (and fanboys say it's such a "quad-core optimized" game).

The difference between CPUs in games is usually pathetic unless you game at a very low resolution. Anyway, we saw the GTX 280 (which is said to be "CPU bottlenecked" just because it's a turkey) double the scores of the 9800 GTX. The 4870X2 also shows its power. How come they are being CPU bottlenecked? If so, why do we still see big improvements when changing the GPU but not the CPU? Wouldn't DAMMIT or Nvidia be shouting about this "bottleneck thing" by now if it truly existed?

Until there is clear evidence that our CPUs can't keep up with the new GPUs, that statement will remain meaningless.
August 20, 2008 3:29:01 PM

JAYDEEJOHN said:
Here's the most telling argument about Hexus' comments on res and CPUs vs. GPUs, and the theory that at higher res the GPU is the bottleneck: http://www.pcgameshardware.com/?menu=browser&article_id... Notice what happens when the clocks on these CPUs are cranked. Surely it's a GPU bottleneck? The average framerates are stuck at 17, meaning no further improvement; thus a GPU bottleneck. But look further, at the MINIMUM fps. As the CPU is OCed, we see continuous scaling to better framerates, up to the max the CPU was OCed. To truly reach the max, the minimum would be barely below the average, but such a CPU doesn't exist. This is, as I said, just the beginning of this. We need to understand a little bit more about it, but it's here. Four months ago, I wasn't even taken seriously with my comments on this. Everyone said, "What games?" Well, there are some now, and with better cards, it's showing up. As time goes along, will the CPU keep up? Will there be more Crysis-type games out, though this time everyone will be waiting for a better CPU?


Sorry, but did you happen to take a look at the other screenshots?

Like this one: http://www.pcgameshardware.com/&menu=browser&mode=artic...

COD4 min. FPS with CPU at 4.0 GHz: 70
COD4 min. FPS with CPU at 2.0 GHz: 63

Another one: http://www.pcgameshardware.com/&menu=browser&mode=artic...

Prey min. FPS with CPU at 4.0 GHz: 135
Prey min. FPS with CPU at 2.0 GHz: 107

Sorry, but in my opinion that just shows that Crysis is a poorly coded game.
August 20, 2008 3:32:20 PM

The basic flaw in your idea is that a game such as SupCom, which IS CPU dependent, isn't taking the GPU into account. I'm talking about certain games where the game engine both lets the GPU fly AND, because of the game, stresses the CPU as well. When that combination arrives, that's when the CPUs start to show they're slow. And it doesn't matter what the res is. Sure, a smaller res would let the GPU fly, but listen to this: old games, ones where you can get 300+ fps, sure, they're CPU bottlenecked, but who cares? With the complexity of today's games, the more powerful GPUs, and not much growth from CPUs, the workload is shifting more and more to the CPU, and not just because of physics.
August 20, 2008 3:37:24 PM

As I've said, we're just starting to see this. It'll show much more by next year, when newer games and newer cards arrive. If A is growing at 33% a year, and B is growing at 20%, it's only a matter of time before A catches up and surpasses B. We are at that threshold now.
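Putting rough numbers on that 33%-vs-20% argument (the growth rates are from the post above; the starting gap is an assumption of mine, purely for illustration):

```python
# Compound-growth crossover: A (GPU-driven demand) grows 33%/year, B (CPU
# performance) grows 20%/year. Starting values are arbitrary illustrative
# units, with B given a 30% head start.
a, b = 70.0, 100.0
for year in range(6):
    marker = "  <- A ahead" if a > b else ""
    print(f"year {year}: A = {a:6.1f}, B = {b:6.1f}{marker}")
    a *= 1.33
    b *= 1.20
# The A/B ratio grows by 1.33/1.20 ≈ 1.108x per year, so B's 30% head start
# is gone in log(100/70) / log(1.108) ≈ 3.5 years.
```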
August 20, 2008 3:37:37 PM

I have NEVER had my CPU utilization go above 60%, and it only goes that high when it is loading something. Not even Crysis loads my CPU that much.

We could possibly say that it's not what you think, jaydee. It could be that the game engines do not take advantage of the CPU power that's given to them (this is the case 90% of the time) and the minimum FPS goes higher only due to a faster clock speed.

I don't think it's the CPUs bottlenecking; truthfully, I think it's the GPUs at high res, and the games themselves.
August 20, 2008 3:38:46 PM

My evidence is the past. This NEVER occurred before. It is happening now, and will become more apparent.
August 20, 2008 3:41:43 PM

It is true game engines don't stress a CPU like SuperPi does; they're not made that way. Having the CPU max out isn't a true indicator of CPU limitations in games, or in many other apps. That'd be like saying the CPU isn't limiting WinRAR because it isn't maxed out.
August 20, 2008 3:51:22 PM

Another flaw is here: http://www.techreport.com/articles.x/14573/2 which exactly brings home my point: what gfx cards were used? The top card is an 8800GTX? That's maybe half as fast as the 4870X2. Can we see what'll happen if we reduce CPU speeds by half? Need I go there? Poor example. I'm trying to find an old article written here at Tom's, which will help clarify our current situation.
August 20, 2008 4:00:52 PM

JAYDEEJOHN said:
Another flaw is here: http://www.techreport.com/articles.x/14573/2 which exactly brings home my point: what gfx cards were used? The top card is an 8800GTX? That's maybe half as fast as the 4870X2. Can we see what'll happen if we reduce CPU speeds by half? Need I go there? Poor example. I'm trying to find an old article written here at Tom's, which will help clarify our current situation.


I remember the "at least 2.6 GHz" article. However, that THG article doesn't seem to agree with the links you posted yourself, and they used cards older than a GTX 280, from what I remember. The differences between 4.0 GHz and 2.0 GHz with a GTX 280 looked pretty much pathetic to me, except for Crysis, the most well-coded game.
August 20, 2008 4:05:55 PM

jaydeejohn does have a point in that GPU performance is increasing at a far greater rate than CPU performance. What was once more than sufficient for an 8800GTX (say, a 3GHz C2D) may no longer be enough for a GTX280 or faster GPU.

There really is no way around this, though; it's been an obvious trend for years now. It hasn't really been a problem in the past few years, as most games were mainly shader intensive, which put the onus of performance on the GPU. Now, instead of just pretty pixels, advanced physics effects and smarter AI are becoming important as well, and thus the CPU is starting to play a bigger role in performance than in the past.
August 20, 2008 4:09:52 PM

As regards the PCGH showings, it's using the G280, a card that's currently around 25% slower than the 4870X2. And what about G280s in SLI? They're around 30% faster yet.
August 20, 2008 4:16:21 PM

If the GPU requires the CPU to work faster, then at some point there's a wall. Simple as that. The GPUs don't do it by themselves, as we all know. And it's somewhat cavalier to think that a CPU will automatically keep up, at least on the CPU side of things. I'm not slamming anyone's opinions or anything like that, so please don't take it so; it's just hard for some people to get their head around this. This IS new.
August 20, 2008 4:29:06 PM

JAYDEEJOHN said:
As regards the PCGH showings, it's using the G280, a card that's currently around 25% slower than the 4870X2. And what about G280s in SLI? They're around 30% faster yet.


GTX280 SLI (or even 4870X2s) is mainly for hardcore gamers who run at 2560 x 1600, though. Of course at lower resolutions it'll be CPU bottlenecked, but for its intended market it's really not much of a problem.

2560 x 1600 = ~4.1 million pixels

1680 x 1050 = ~1.76 million pixels

So a GTX280 SLI / 4870X2 at its intended resolution has a workload roughly 2.3x that of the standard 'single GPU' res of 1680 x 1050. If a 3.6GHz C2D is sufficient for a single GTX280 @ 1680 x 1050, it shouldn't be bottlenecking a GTX280 SLI setup at 2560 x 1600 either.
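Spelling out that pixel arithmetic (just the resolutions from the post above, recomputed):

```python
# Per-frame pixel workload at the two resolutions discussed above.
def megapixels(width: int, height: int) -> float:
    return width * height / 1e6

high = megapixels(2560, 1600)  # ~4.10 MP
std  = megapixels(1680, 1050)  # ~1.76 MP
print(f"{high:.2f} MP vs {std:.2f} MP -> {high / std:.2f}x the pixels per frame")
# 4.10 MP vs 1.76 MP -> 2.32x
```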
August 20, 2008 4:35:58 PM

Some games are best run at 19x12 though, even for these cards, whether it's the 4870X2 or even the GTX280 SLI, and, I point out, the res doesn't go any higher. Then what? My point here helps clarify our current situation as well. At 25x16, we see no, or only rare, CPU slowdowns. That wasn't the case a few months ago, let alone what it'll be 6 months from now. In other words, a short while ago 16x10 was the res where you'd be guaranteed no CPU bottleneck; that's no longer the case now that we see them at 19x12. So, as we've asked the GPU to do much greater work, it has complied, and now it's at its final resolution before we start to see CPU bottlenecks there as well.
August 20, 2008 4:58:02 PM

JAYDEEJOHN said:
Some games are best run at 19x12 though, even for these cards, whether it's the 4870X2 or even the GTX280 SLI, and, I point out, the res doesn't go any higher. Then what? My point here helps clarify our current situation as well...



I believe you may be confusing a limitation of the software with a limitation of the hardware. If the game isn't coded to provide/handle resolutions greater than 19x12, swapping out cards isn't going to make it so.
August 20, 2008 4:58:56 PM

JAYDEEJOHN said:
Some games are best run at 19x12 though, even for these cards, whether it's the 4870X2 or even the GTX280 SLI, and, I point out, the res doesn't go any higher. Then what? My point here helps clarify our current situation as well. At 25x16, we see no, or only rare, CPU slowdowns. That wasn't the case a few months ago, let alone what it'll be 6 months from now. In other words, a short while ago 16x10 was the res where you'd be guaranteed no CPU bottleneck; that's no longer the case now that we see them at 19x12. So, as we've asked the GPU to do much greater work, it has complied, and now it's at its final resolution before we start to see CPU bottlenecks there as well.


What game, Crysis maybe?! :p  OTOH I can't think of any other game that won't run at 2560 x 1600 on a 4870X2, let alone a GTX280 SLI.

There was really never a rule set in stone that 1600 x 1200 (or whatever) was guaranteed to be GPU limited; it depends entirely on the game engine anyway. Of course, due to the recent ~2x increase in GPU performance (thanks to the GTX280 and HD4870), traditionally 'GPU bound' resolutions are starting to show CPU scaling as well, and I do understand your point on that. However, the same thing happened with the Radeon 9700, the 8800GTX, and basically every time a new GPU arrived that blew existing offerings away, so it shouldn't really come as a surprise to anyone by now.
August 20, 2008 5:09:41 PM

JAYDEEJOHN said:
If the GPU requires the CPU to work faster, then at some point there's a wall. Simple as that. The GPUs don't do it by themselves, as we all know. And it's somewhat cavalier to think that a CPU will automatically keep up, at least on the CPU side of things. I'm not slamming anyone's opinions or anything like that, so please don't take it so; it's just hard for some people to get their head around this. This IS new.


It is not new, at least in gaming. In this case I must support JDJ. GPUs have been evolving at a faster pace than CPUs for some time now. That is noticeable when you upgrade a GPU, and in how long it lasts. Let me explain myself so I am not misunderstood.

In gaming, with every GPU upgrade the performance jumps considerably. GPUs usually have a much shorter life cycle, about 6 months (the G80 was the exception), always with gains of 20% or above. The latest generation, the K10, was a 10% gain over the K8 architecture. Nehalem benchies are already out (some of them at least) and they show a 10% to 34% improvement... but not in gaming. Don't talk about theoretical FLOPS; look at workbench results. Every chip maker has long known that looking good on paper doesn't mean working well in practice:

Netburst.
The VSA100 chip.
The Kyro II chip.

And I guess if you look further, you will find more.

With the same specs, the jump between the 3870 and the 4870 is really big, a bit bigger than 34%, and the drivers aren't mature yet. So we can expect even bigger gains.

The GPU world is advancing much faster than the CPU world. The CPU world is talking about dual to octo core now, and the GPU world broke that barrier a long time ago.

Like when we Europeans partied over Y2K, the Chinese were already in the year 5000-something...
August 20, 2008 5:42:45 PM

JAYDEEJOHN said:
Like I said, I'm NOT downing Nehalem. For what it's meant for, it's great, as I said. It's just not the benchmark improvements I wanted to see, so I'm disappointed. Maybe we are all wrong and it'll show better than what we've seen. Who knows. I hope you're getting my point, and not reading anything anti in what I've said, because other than disappointment, it's good to see Nehalem coming out.


I did say a few months ago I would be shocked to see more than a 10% gain. Maybe with the memory controller, DDR3 RAM could produce better results, but CPUs are reaching their limits. That's why multicore programming is starting to become so important.
August 20, 2008 6:00:21 PM

Then not only Intel but M$ needs to get off their arses and contribute money or more resources to this. An app CAN be enjoyable (when done) even if it takes a while. But gaming is now; it has to happen now, or it's a poor experience. I just see this, as with a few apps, as being very important to getting some kind of growth through the CPU, whether it's through software going multithreaded or through the CPUs themselves.
August 20, 2008 6:04:51 PM

It is undeniable: the workload for the GPU has increased much more, and at a faster rate, than for CPUs in gaming. What we saw 2 years ago, on the 12x10 screens most people owned then, was no CPU bottleneck or slowdown, or rarely one. Can't even say that today. Today we EXPECT there to be CPU bottlenecks at those resolutions, and to an extent even higher, at 16x10. Like I said, we only go to 25x16; so far, that's it.
August 20, 2008 6:21:55 PM

JAYDEEJOHN said:
It is undeniable: the workload for the GPU has increased much more, and at a faster rate, than for CPUs in gaming. What we saw 2 years ago, on the 12x10 screens most people owned then, was no CPU bottleneck or slowdown, or rarely one. Can't even say that today. Today we EXPECT there to be CPU bottlenecks at those resolutions, and to an extent even higher, at 16x10. Like I said, we only go to 25x16; so far, that's it.


It's all relative, though: 3 years ago 1600 x 1200 was considered high res; now it's mainstream. Nowadays 2560 x 1600 is considered high res; maybe in a few years we'll be talking about 3200 x 2000?

PS. Who here remembers people going nuts over a 3D accelerated Quake running at 30fps @ 640 x 480 on a Voodoo1? Look how far PC gaming has come in a little more than a decade!
August 20, 2008 6:36:03 PM

I still have my model 64. Wanna buy it? heheh
August 20, 2008 6:42:07 PM

On second thought, I'm keeping it, say for another 10 years or so; by then it'll be a classic heheh and I can eBay my retirement heheh
August 20, 2008 9:33:42 PM

I have a Pentium w/MMX and a Cyrix (remember them?) 486 DX.

Game engines have a lot to do with how well they take advantage of a PC's hardware.

Let's look at my favorite example: Source. Source is a well-optimized, well-built game engine. @ 1680x1050 with everything maxed out, including AA/AF and HDR enabled, on a Q6600 and an HD2900 Pro 1GB, I get about 150FPS in both the CS:S stress test and the Lost Coast stress test (tried this on my 40" TV). A guy here on the forums (3Ball, I think) got an HD4870 and a C2D E6600 @ 3.4GHz; he gets 300FPS (that's the max Source will show).

Now, the CPU I have is at 3GHz and his is at 3.4GHz. Mine is a quad; his is a dual. He has a generation-ahead card that, IF the CPU were truly limiting it, should have gotten FPS in the test near mine, maybe a bit higher. But he easily got 2x my FPS using a 2-year-old CPU.

So in the end it truly depends on the game maker and engine. Look at Crysis: it's poorly optimized, it had a memory leak on release, and it doesn't scale well at all. But Source is 4 years old (probably older with development time) and scales very well, from older DX7 GPUs and Pentium IIIs all the way to the newest hardware. Well, except The Orange Box, or Source 2007, which scales from DX8 and low-end Pentium 4s up to the highest end available.
August 20, 2008 11:41:34 PM

I'm curious how people here would feel if Intel were found to be tampering with public benchmarks. Would they be given another free pass?
August 21, 2008 12:10:28 AM

What makes you think Intel is tampering?
August 21, 2008 12:23:49 AM

jimmysmitty said:
I have a Pentium w/MMX and a Cyrix (remember them?) 486 DX.

Game engines have a lot to do with how well they take advantage of a PC's hardware.

Let's look at my favorite example: Source. Source is a well-optimized, well-built game engine. @ 1680x1050 with everything maxed out, including AA/AF and HDR enabled, on a Q6600 and an HD2900 Pro 1GB, I get about 150FPS in both the CS:S stress test and the Lost Coast stress test (tried this on my 40" TV). A guy here on the forums (3Ball, I think) got an HD4870 and a C2D E6600 @ 3.4GHz; he gets 300FPS (that's the max Source will show).

Now, the CPU I have is at 3GHz and his is at 3.4GHz. Mine is a quad; his is a dual. He has a generation-ahead card that, IF the CPU were truly limiting it, should have gotten FPS in the test near mine, maybe a bit higher. But he easily got 2x my FPS using a 2-year-old CPU.

So in the end it truly depends on the game maker and engine. Look at Crysis: it's poorly optimized, it had a memory leak on release, and it doesn't scale well at all. But Source is 4 years old (probably older with development time) and scales very well, from older DX7 GPUs and Pentium IIIs all the way to the newest hardware. Well, except The Orange Box, or Source 2007, which scales from DX8 and low-end Pentium 4s up to the highest end available.


Word. As much as Valve as a company annoys me, they have built a fantastic engine that can do some impressive effects. The only real difference is that ID3 and Unreal3 support a more "advanced" shader model, but finding a genuine visual difference is somewhat moot. Source, in my honest opinion, is more developer-"friendly" than either id's or Epic's engines; it's sad that their developer support is lackluster in comparison to Carmack's on-site help and Sweeney's interesting open-source consortium.

But that's not to say the other engines don't benefit from a faster CPU or memory subsystem either; it's just that I saw Jimmy's post and figured I would word it up :lol: .

Oh ya, Crysis sucks *lights DVD on fire*

Word, Playa.
August 21, 2008 12:30:39 AM

cjl said:
What makes you think Intel is tampering?


I think piesquared is talking about the Cinebench score of ~45000 reported at IDF.

http://www.xtremesystems.org/forums/showthread.php?t=19...

Basically, the new Cinema4D 11 engine renders some ~2.5x faster than the previous engine used in Cinebench 10. Apparently you can modify CB10 to use the C4D 11 engine and get much higher scores.

The thing is, apart from a one-liner quoting the ~45k score from sites reporting IDF 'live', such as Anandtech, Tech-report, etc., we really don't have any additional info on this matter at this point. But it'll make for an interesting follow-up article, I'm sure, since there was almost universal surprise at the extremely high score attained.
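A quick back-of-envelope check on that theory (my own arithmetic, assuming the ~2.5x engine speedup carries straight over to the score):

```python
# If the 45,850 CB10 score were produced with the ~2.5x faster Cinema4D 11
# engine swapped in, the equivalent stock-CB10 score would be roughly:
reported_score = 45850
engine_speedup = 2.5
print(f"Equivalent stock CB10 score: ~{reported_score / engine_speedup:.0f}")
# -> ~18340: high, but within reach of an overclocked 8-thread chip rather
#    than a miracle result.
```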