Intel’s Core i7 processor, formerly referred to as Bloomfield, has been architecturally dissected one bit at a time for the past two years. Talk about a rollercoaster of anticipation for gaming enthusiasts. Gone is the super-quick L2 cache that helped propel Core 2 processors so far ahead of AMD’s Phenom offerings in a myriad of games. In its place is a much smaller L2, a large L3, a HyperTransport-like processor-to-processor interconnect, an integrated memory controller, and the reemergence of Hyper-Threading. If we didn’t know any better, we’d suggest that Intel is going after a different market entirely.
Ah, but we do know better, and it is. At this fall’s Intel Developer Forum (IDF), Intel representatives gave us our closest look yet at production specs for its upcoming Core i7 lineup, along with preliminary performance numbers. In addition to exceptional results in video editing, media conversion, and professional workstation titles, we also received a healthy dose of reality about what we’d see when testing gaming performance. Quite simply, the Nehalem micro-architecture incorporates a lot of improved technology, which is reflected in the speed-ups seen throughout a cherry-picked benchmark suite. However, it reportedly wouldn’t gain much in the way of gaming—presumably a result of the tradeoffs made as Intel’s engineers optimized their efforts to retake the enterprise computing space.
It’s A Platform Story Now
Thus, our attention shifted to the X58 platform, Intel’s chipset complementing Core i7 at launch. Initial rumblings were that X58 would extend Intel’s support for CrossFireX—AMD’s multi-card rendering technology—through divisible PCI Express links. A number of existing Intel chipsets already accommodated two or more cards, so it was hardly a surprise that core logic armed with two x16 PCI Express 2.0 links would continue that trend.
Then the rumors started bubbling up that X58 might also work with SLI, just like Intel’s “Skulltrail” does.
First, we were led to believe the feature would require Nvidia’s nForce 200 companion chip. But then, at its NVISION ’08 show in San Jose, the company made it clear that X58 would support SLI natively—in two- and three-way configurations. We’ll leave alone discussion of SLI and the supposed hardware requirements that kept it from appearing on non-Nvidia platforms previously. By now, it’s a foregone conclusion that the technology needed to enable SLI isn’t a chipset-specific feature, but rather a matter of licensing. The company shared with us the “updated” requirements for what it’d take to enable SLI, and gave us some idea of the configurations made possible by combining Intel’s X58 and the company’s own nForce 200.
- Tempered Expectations
- X58 And SLI Get Busy
- Caveats, Realism, And Driver Hell
- Hardware Enthusiast’s Paradise
- Benchmark Results: World in Conflict
- Benchmark Results: Supreme Commander: Forged Alliance
- Benchmark Results: Crysis 64-bit
- Benchmark Results: Crysis: Warhead
- Benchmark Results: Company of Heroes
- Benchmark Results: Unreal Tournament 3
- Benchmark Results: Far Cry 2
- Benchmark Results: 3DMark Vantage
- Averaging It All Out
- Conclusion


Hey you even got a "First" in there Randomizer!
And modest old me didn't even mention it.
Go check out the benchmark pages, man! Every one with one, two, and four 4870s. The 2x and 4x configs are achieved with X2s, too.
Oh, and latest drivers all around, too. Crazy, I know! =)
I found it, just read the article too quickly. - My bad.
"A single Radeon HD 4870 X2—representing our 2 x Radeon HD 4870 scores—is similarly capable of scaling fan speed on its own. "
Hope to see driver updates like you said.
Cheers.
The conclusion I draw from this and other tests is that Core i7 is great, but you need to spend big money on graphics cards to make it a gamer's choice, or to justify it from a gaming performance-per-dollar perspective. As it stands now, before future releases of mid-range CPUs (or unless AMD unexpectedly releases some scary monster), I foremost see Core i7 as a solid solution for serious work. In rendering and other CPU-dependent tasks it might be a blessing to cut some 40% off processing time.
Another observation is that if the current scenario doesn't change in the near future, we could well be back to the old-school overclocking culture, when money and availability set the limits. I'm not against that, but in recent years we've seen more of a yuppie overclocker culture, where money and availability haven't been an issue. To be frank, what we have here is two ways of setting priorities: one option is an AMD system, which gives you a 790FX motherboard + CPU + RAM for the same price as a single Intel Core i7. If you're not planning to play at resolutions higher than 1600x1200, and probably won't buy anything beyond a single X2 graphics card, it could well be the best offer for the money.
Options are good, and even with AMD well behind, it opens up many different choices. Some never really use their monster system but enjoy knowing they have one, others only buy exactly what gives the best value for money, some build specialized systems for certain tasks with a cost-conscious approach, and some don't have a clue. Every choice is good as long as the user is happy (and the spending doesn't hurt the family budget).
Oh, that's a lot of text. In conclusion, I'm more interested in whether the X58 platform is ill-suited (or just less good) for AMD graphics cards at the moment, and in proof either way.
nice @ first : )
But now I've got one big question about this review.
On page 12 you have a nice overview of the 3DMark benchmark.
What I don't understand is the CPU score of the i7 and C2Q EX on Nvidia versus ATI graphics boards.
There's such a huge difference in the CPU score just from changing graphics boards???
How can that be?
I mean, the CPU score is based on the CPU, right? Or does Futuremark test something else with this CPU score than just the CPU itself?
I don't see why just changing from Nvidia to ATI (or the other way) could have such a huge effect on the CPU score.
Maybe one of you could explain it to me; otherwise I'd start to think there's something wrong with this benchmark, or maybe with all of the benchmarks in this review.
Thx in advance : )
It was stated in the article on that page, at the top: the default run with PhysX enabled artificially inflates the CPU scores.
You should do this benchmark again in a couple of months when the drivers give more accurate results, and in that one... lose the Phenom.
However, I didn't see the CPU clocks specified, so I presume that all three CPUs tested were run at stock speed. Although I have little doubt that Phenom X4 would still lose to both the Ci7 965 EE and the QX9770, I'm just curious to see how a Phenom X4 at 3.2GHz would perform.
The i7 is really a cool system, much like the Intel SSD. Intel is on fire.
Why not just revert to the system everyone else uses with a simpler scroll-down bar?