
Nehalem SuperPi score: this time it's real!

June 11, 2008 3:02:36 AM

Expreview shows the Intel Nehalem at 2.4 GHz running SuperPi 1M in 17 seconds.

Expreview also got Intel's latest roadmap. It shows that Nehalem will enter the mainstream in Q4 this year at 2.66 GHz, with the higher-end parts at 2.93 GHz and 3.2 GHz. All three Nehalems will have the same amount of L3 cache: 8 MB. We know from AnandTech that the 2.66 GHz Nehalem is faster than the QX9650, which is good news for mainstream users.
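For context on what a SuperPi run actually measures: a single thread computing pi to a fixed number of digits, timed end to end. Here is a rough sketch of that kind of workload in Python; this is NOT SuperPi's actual algorithm (SuperPi reportedly uses a Gauss-Legendre/AGM method), just an illustration using Machin's formula and the standard `decimal` module.

```python
import time
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) via its Taylor series, to roughly `digits` digits."""
    getcontext().prec = digits + 10           # guard digits
    eps = Decimal(10) ** -(digits + 5)        # stop once terms are negligible
    power = total = Decimal(1) / x            # current (1/x)^(2n+1) term
    x2, n, sign = x * x, 1, -1
    while power > eps:
        power /= x2
        total += sign * power / (2 * n + 1)
        n += 1
        sign = -sign
    return total

def machin_pi(digits):
    """pi = 16*arctan(1/5) - 4*arctan(1/239) (Machin's formula)."""
    getcontext().prec = digits + 10
    return +(16 * arctan_inv(Decimal(5), digits)
             - 4 * arctan_inv(Decimal(239), digits))

start = time.perf_counter()
pi = machin_pi(1000)                          # SuperPi 1M does 1,000,000 digits
print(str(pi)[:12], f"({time.perf_counter() - start:.2f}s)")
```

Pure Python is orders of magnitude slower than SuperPi's hand-tuned x86 code, but the shape of the benchmark is the same: one core, lots of arbitrary-precision arithmetic, and the wall-clock time is the score.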


June 11, 2008 3:25:04 AM

I saw the latter but didn't see the top one...

My 3.2 GHz dual core finishes it in 16 seconds, so I wonder how much faster a Nehalem can do it once OC'd. I was expecting it to be faster than that, though... I don't know why.
June 11, 2008 3:34:56 AM

Can't wait for the GTX 280 SuperPi results
June 11, 2008 3:47:01 AM

This seems interesting.
June 11, 2008 3:48:50 AM

Unfortunately SuperPi is single-threaded and inefficient. They should have tried wPrime :) 

CPU-Z needs to be updated to read the vcore correctly.
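The difference between the two benchmarks comes down to parallelism: wPrime splits its workload (verifying square roots via Newton's method) across as many threads as you have cores, while SuperPi runs on one. A minimal sketch of the wPrime-style approach, with hypothetical chunking; note this is an illustration, not wPrime's actual code, and in CPython the GIL prevents pure-Python threads from showing a real speedup, whereas a compiled benchmark like wPrime scales with core count.

```python
from concurrent.futures import ThreadPoolExecutor

def newton_sqrt(n, iters=30):
    """Approximate sqrt(n) with Newton's iteration x -> (x + n/x)/2."""
    x = n / 2.0 if n > 1 else 1.0
    for _ in range(iters):
        x = 0.5 * (x + n / x)
    return x

def check_chunk(start, end):
    """Return the worst relative error of newton_sqrt over [start, end)."""
    return max(abs(newton_sqrt(i) ** 2 - i) / i for i in range(start, end))

N, WORKERS = 20_000, 4
bounds = [(1 + k * N // WORKERS, 1 + (k + 1) * N // WORKERS)
          for k in range(WORKERS)]

# Each worker gets an equal slice of the range, like wPrime's per-thread split.
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    worst = max(pool.map(lambda b: check_chunk(*b), bounds))
print(f"max relative error: {worst:.2e}")
```

Because the chunks are independent, the work divides cleanly across cores, which is exactly why a quad-core looks much better in wPrime than in SuperPi at the same clock.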
June 11, 2008 4:36:27 AM

I hit 18.812 seconds on the 1M. Q6600 @ 3.0 GHz

I would love to see the score of that CPU when OC'd.
June 11, 2008 4:44:25 AM

Intel needs a more complex processor chart.

ed: I was just thinking as I posted that, you know one thing Intel and Nvidia definitely have in common - they both have a ridiculously large and redundant product lineup =D
June 11, 2008 5:19:53 AM

At least they both have competitive products.
June 11, 2008 5:41:46 AM

Nvidia is a fair company making good products. I don't understand why people bash the company. Just because they're always a step ahead of ATI doesn't make them bad.

Intel, on the other hand, uses dirty tricks to flush out the competition, hence why I dislike them. I still use their CPUs, though, for the simple fact they're faster.
June 11, 2008 6:01:11 AM

People don't like Nvidia because they aren't very innovative. Two years later, we are only now seeing something other than a G80 refresh.
June 11, 2008 6:29:23 AM

And those who bought the original 8800GTS or GTX models got shafted. I mean no PureVideo while even the cheaper 8500GT had it? WTH?!

NVidia has no problem shafting its customers.
June 11, 2008 6:38:20 AM

Hey, and actually paying for the purevideo app does suck...but I am not a home theater fanatic, so it wasn't a big deal for me feature wise.
June 11, 2008 6:50:30 AM

The 8800s were targeted as "gaming" cards anyway.
June 11, 2008 7:18:15 AM

Something like video playback is a feature you'd expect to find in a higher-end card first
June 11, 2008 7:21:22 AM

Randomizer, true, but it's kind of like saying don't expect your Merc S600 to have all the same 'good' features as the Merc S320, but do expect to pay more.
June 11, 2008 7:29:30 AM

The 8800s didn't need an update for those 2 years. Nothing challenged the cards except Crysis, and the competition had nothing to compete with. So why not work on a brand new GPU and leave the 8800s out there for the time being?

Nvidia is playing it smart. I have an 8800 Ultra and even now I can play all the games on the market at high frame rates. It's been out for 2+ years, and it still plays all of today's games.

Tells you something.
June 11, 2008 7:32:59 AM

Sacre, true, but what's not good about it is that they leave us out in the cold. Newer, faster cards force down the price of older cards. After all, not everybody can afford an Ultra; most people can barely afford the GT.

Edit: Imagine if Intel just sat back, said the Q6600 is fast enough, and carried on charging $500 for 2 years. A lot of us would still be on Celerons...
June 11, 2008 7:40:25 AM

I think Nvidia wanted to get in early with the performance king and worry about additional features later. It may annoy some people, but from a market opportunity perspective it was the right choice IMO.
June 11, 2008 7:41:59 AM

How long has the C2D been around?
June 11, 2008 7:44:13 AM

jaydeejohn, Core 2 Duo was launched June 2006, Core 2 Quad in November 2006.
June 11, 2008 7:46:25 AM

So it's been around longer than the 8xxx series from nVidia.
June 11, 2008 7:49:14 AM

@jaydeejohn Yes, but since then Intel has released the 45nm Core 2 Duos and Quads, bringing down the prices of the 65nm chips quite drastically and offering lower temps, better performance at lower clocks, and better energy efficiency.

Most importantly, regardless of the 45nm node, Intel still brought their prices down.
June 11, 2008 7:52:37 AM

The 9600 is far superior to the 8600. nVidia has shrunk its process as well, and severely reduced its prices: compare the Ultra vs. the 8800GTS.
June 11, 2008 7:54:23 AM

All I'm saying is, Intel hasn't done more than nVidia regarding their own products within the same timeframe.
June 11, 2008 7:59:08 AM

jaydeejohn, yes, but only because ATI forced their hand. If it wasn't for the 3870 being so close performance-wise but much cheaper, we would never have seen those cards.

At this stage NVidia is just sitting back and reacting to ATI, not actually doing much of its own accord. I am hoping it's because they were dedicating their resources to the GTX 260 and GTX 280, but somehow I fear that the competition has not only been forcing NVidia's hand but also putting more into their own cards. I have a feeling that if it wasn't for the ATI 4870 we wouldn't be seeing the new cards; rather, NVidia would be milking their older technology for a while longer.
June 11, 2008 8:01:10 AM

Yeah, I'm really looking forward to seeing the full benches for the new ATI and NVidia cards. Can't be long now!
June 11, 2008 8:01:27 AM

G92.5 :lol: 
June 11, 2008 8:03:25 AM

Possibly. And there's been some talk in that direction regarding Nehalem, because of AMD's lack of competition, as well. Who really knows; I'm just saying what I've seen in the market to date regarding both Intel and nVidia, as to what we have currently and how it's gone, not why.
June 11, 2008 8:05:12 AM

Randomizer, spot on. I've even heard rumours that the new chips are effectively just two G92s in a single socket using SLI (but not sharing memory like the new 4870X2). So if that rumour holds, the new cards are just the 9800GX2 rehashed, which I could see happening. It would also explain the power/heat...

@jaydeejohn, Intel has been saying H2 '08 for a while now. These rumours basically just say the single-socket version will be released earlier in H2 '08 and the dual-socket version later on, but still in H2 '08. I think that's more AMD fanboys and girls trying to make noise about something that may or may not happen, to draw attention away from AMD, but I doubt it's going to be a year late and deliver far less than promised.
June 11, 2008 8:14:06 AM

More than likely true about the release dates. I just don't think it's right to discount what nVidia's done and praise Intel, when essentially they've both done pretty much the same thing, for whatever reasons.
June 11, 2008 8:40:32 AM

I'm not praising Intel, just pointing out that despite their lack of competition they still went out, reduced prices (which helps), and created new products. NVidia has basically been sitting back and only responding.

Neither company's business practices are worth praising though.
June 11, 2008 8:53:58 AM

The release pricing of the C2Ds was very good also. I understand what you're saying, and I agree for the most part; it was the Celeron thing mostly. Anyway, it'll be interesting to see what GPUs can do in SuperPi, and against each other as well. Market-driven is what we saw in the GPUs. Intel's pricing? Market share? Again, who knows, but it would have been better for AMD to have the pricing higher, or left where it was, though then Intel would've had a hard time entering the mainstream with C2D. Like I said, it's all down to the markets, what they can handle. Looks like, if you truly have a great discrete GPU, you can make some serious money; thus Larrabee?
June 11, 2008 8:59:10 AM

Yeah, but what everybody seems to be forgetting is that GPUs, although really fast, are not nearly as dynamic as a CPU. On most systems sold the CPU 'is' the sound card and network card. Has anybody seen a GPU doing that kind of thing?

I see a great future where the CPU manages the system and fills in for missing hardware as it currently does, but the bulk of the processing is done on the graphics card. Larrabee containing cores with x86 support is a huge step in this direction.
June 11, 2008 9:14:16 AM

As is the marriage of VIA and nVidia, and Fusion as well. I think CPUs' role will be limited as GPUs do more, better and faster. Put it all together and CPUs can concentrate more on what they do best, and do it better, just like GPUs are doing now. This is something I'm not sure Intel is being honest about right now. They know it, I know it, a lot of people do. So it'll only get better from here.
June 11, 2008 9:17:30 AM

Well, the main problem with GPUs is that they don't support x86, so all of our software would have to be redesigned, or the graphics card would. Until then the CPU is our only option.
June 11, 2008 9:23:03 AM

Just like SuperPi uses only the CPU. We will see video encoding on the GPU soon, and there'll be others as well. This is growth: better, faster. The software to run these things is easier to write than multithreading on x86; otherwise it wouldn't be at our doorstep so quickly. Give it time, as we will all benefit from this. That's why Intel is heading in this direction.
June 11, 2008 9:34:03 AM

Power-savings-wise it also makes sense. Have a low-power CPU run sound, networking and other services, and under light load such as office work the GPU (or most of it) can sleep, saving the power usage of the networking card, sound card and GPU. Massive drop in power requirements.

Load picks up and the GPU becomes more active. Perhaps a few trees could be saved; after all, the paperless age sure as hell failed at that.
June 11, 2008 9:36:42 AM

sacre said:
I hit 18.812 seconds on the 1M. Q6600 @ 3.0 GHz

I would love to see the score of that CPU when OC'd.

E6600 @ 3200/1600 gets 15 seconds flat (XP), and 20 seconds at stock (2400/1066)

Although SuperPi is a single-core/single-threaded benchmark, it can give us an indication of how well a CPU can crunch numbers per core (unoptimized for SSE and other features) compared to its predecessors. As an overall benchmark it's not that great; only Intel fanboys supply SuperPi scores, because they always work out better on an Intel. But as a benchmark comparing, say, core revisions, it's great.

The score and improvement are great considering the architecture hasn't really changed process-wise (from the benchmarks we have seen) apart from the cache structure and the IMC (and, with that, the FSB/connectivity).

Another factor we have to remember is power efficiency. We've seen AMD come from DDR1 to DDR2 with improved power figures, and now Intel is replacing its ancient FSB design, which is probably responsible for poor power figures from trying to maintain high frequencies it probably wasn't designed for in the first place (1600+), with a new design. Then again, perhaps it may consume more power too.

Comparing a GPU to a CPU is like comparing a human to a calculator: the calculator doesn't have enough power to figure out human movement, sight, smell, touch, logic and balance, which for us is everyday nothing, yet a calculator can figure out 435.34 + 56664.96 x 0.23 in a split second, which takes the average human 10,000 times longer. Does that make us 10,000 times weaker than a calculator? Hell no.
June 11, 2008 9:58:27 AM

The human brain: multithreading at its best.
Nehalem: a powerful calculator with a bad name.
June 11, 2008 10:31:33 AM

Human brain, heh. Add emotions and choices to a robot with an Intel/AMD chip and our brain will be obsolete.

Anywho, Nvidia's pricing seems insane, but back when I decided to save for a new graphics card and bought the Ultra, it was pricey. But when I played my graphics-intensive games they ran smooth, and even to this day all games run smooth, so I don't regret it at all. I will have this card running games nicely for another year or so easily. So, spending $600 and running the same card for years is, in my mind, worth it.

Also, with GPUs, if a card can run everything smoothly, without a hitch, over 60 FPS, why bring a new one out? Sure, you could pump out a card that does 150+ FPS, but why?

Hence GPU companies can take their time, because GPUs are designed mainly for one thing: games. If those run smooth, that's it. When competition grows, they make something new. If games get more demanding, they release better cards.

As for CPU's they do more, and there is always room for improvement.

blah blah blah i'm rambling now lol
June 11, 2008 10:41:58 AM

ovaltineplease said:
Intel needs a more complex processor chart.

ed: I was just thinking as I posted that, you know one thing Intel and Nvidia definitely have in common - they both have a ridiculously large and redundant product lineup =D

Would you prefer the penis chart?

June 11, 2008 10:59:09 AM

They could have made it rectangular, but noooo. Intel must have a bunch of immature 1st year marketing students. :kaola: 
June 11, 2008 12:15:46 PM

The release chart above wasn't supposed to look like penises, but rather like bullets in AMD's proverbial head. Also, it's spelled Nehalem but pronounced "Nail 'Em".
June 11, 2008 12:30:31 PM

JDocs said:
And those who bought the original 8800GTS or GTX models got shafted. I mean no PureVideo while even the cheaper 8500GT had it? WTH?!

NVidia has no problem shafting its customers.

No, it's actually the guys who just bought a Radeon 38xx or GeForce 98xx that got shafted, since they're buying old technology on the heels of new architectures that will make those lines obsolete.

Those of us who bought 8800GTXs and such almost TWO years ago, and can still push over 18,000 in 3DMark06 and walk all over just about any new card released since then, are the ones who have gotten MORE than our money's worth. PureVideo? Whatever, dude; Blu-ray players are for watching movies, not your PC.
June 11, 2008 12:40:07 PM

Thought I would touch on the subject a bit: both PureVideo and Avivo suck. You can get a MUCH better-looking video with just a simple AviSynth script. The only thing that's nice about the newer cards is the hardware decoder, which IIRC only does Main Profile decoding.
June 11, 2008 12:45:18 PM

The old GTX and Ultra are still King.

I can't believe the n00bs don't realise they just bought a cheaper GPU on a process shrink with less memory bandwidth.

The newer NV cards are cheap imitations of the original GTX and Ultra.

The 8800GT dies in the A$$ on hot days because they run too hot and have cheap (insufficient) single-slot coolers.

Those who bought the 8800GTX and (to a lesser extent the Ultra) two years ago still have the best cards.

NV has done nothing innovative in 2 years, bar schmoozing the losers who look at the pictures on the box of the brand-new card they bought.

But those of you with a brain already know that.

The last two years for NV have been all about making the chip cheaper to produce and leveraging increased profits.

While ATI has bumbled about a bit... that's another bedtime story.


P.S. What was that fake SuperPi score about again?
June 11, 2008 12:59:00 PM

ATI have been sweeping up the crumbs in the lower price brackets, where us alcoholics spend our money!
June 11, 2008 1:26:59 PM

Reynod said:
I can't believe the n00bs don't realise they just bought a cheaper GPU on a process shrink with less memory bandwidth.

When I bought my GT with a dual-slot cooler it cost R2500 (about $333), while the Ultra cost about R7000 (just shy of $1000), so the question becomes: did I waste my money or make a good decision? Hell, when I bought my 19-inch monitor, a 24-inch easily cost upwards of R6000, but on the more affordable 19-inchers (read: a third of the price) my GT will go head to head with an Ultra and do well.

Not everybody has tons of money to spend on those cards or monitors that actually allow them to show their superiority.
June 11, 2008 2:00:57 PM

The SuperPi results seem strange. I ran 1M on my E8400 at stock (3.0) and got 15 seconds.
I see posted results of Q6600 @ 3.0 = 18.812 (why don't I get decimal places?),
Q6600 @ 3.2 = 15 sec,
Q6600 @ stock (2.4) = 20 sec.
I also ran an E6300 at stock (1.86) in 29 seconds.

Why the differences??

I suspect that quad cores have inefficiencies compared to dual cores. This would support the contention that using a dual-core CPU is more beneficial in games that are not quad-core optimized, even at equivalent clock speeds.

Where Nehalem will shine is in the multi-core application arena.
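One quick way to compare these scores across clock speeds: for a single-threaded, cache-resident workload like SuperPi 1M, time should scale roughly as 1/frequency, so seconds x GHz is approximately constant per microarchitecture. A back-of-the-envelope check in Python, using only the figures quoted in this thread:

```python
# SuperPi 1M times quoted in this thread: (chip, clock in GHz, seconds).
scores = [
    ("E8400 @ 3.0 GHz",  3.0,  15.0),
    ("Q6600 @ 3.0 GHz",  3.0,  18.812),
    ("Q6600 @ 3.2 GHz",  3.2,  15.0),
    ("Q6600 @ 2.4 GHz",  2.4,  20.0),
    ("E6300 @ 1.86 GHz", 1.86, 29.0),
]
for name, ghz, secs in scores:
    # seconds * GHz ~= total clock cycles consumed; lower = more work per clock
    print(f"{name}: {secs * ghz:.1f} GHz*s")
```

The Q6600 at 2.4 and 3.2 GHz both come out to 48.0 GHz*s, i.e. they scale cleanly with clock, but the 18.812 s result at 3.0 GHz works out to 56.4 GHz*s, so that run is the outlier; the posted numbers really are inconsistent, not just a quad vs. dual effect.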