AMD 45nm Deneb benchmarks

  1. The 3.2GHz one seems pretty nice, but it doesn't look like the 2.3GHz one is even worth buying if you own a current Phenom.
  2. Makes me want to get a Deneb and an AM2+ board to play around with :(
  3. So just like others said, we'll see a small but decent clock-for-clock improvement, while most of the advantages come from the smaller node.
  4. Well, through a little work, I've found out that the 3.2 GHz version is on par with the QX6800 from Intel. Also, it performs the same in Fritz 11 as the Q9450.
    Hopefully the performance improves. I was kinda hoping for a little more.
  5. Eh?? Those are the same old benchies from August; it.anand had some new ones:
    http://it.anandtech.com/IT/showdoc.aspx?i=3456
    server stuff, but hey, it's a server chip. And at least for perf/watt it beats the 'new' Xeon quite clearly.
  6. Kari said:
    Eh?? Those are the same old benchies from August; it.anand had some new ones:
    http://it.anandtech.com/IT/showdoc.aspx?i=3456
    server stuff, but hey, it's a server chip. And at least for perf/watt it beats the 'new' Xeon quite clearly.


    It'd be nice if they had the Xeon at a lower speed grade for comparison's sake.

    Word, Playa.
  7. Lol, sorry guys, I just had to do it after seeing the Shanghai benchmark that makes it look good under server conditions. It's already #1 on Google for "Deneb benchmarks." *dies* :na:
  8. I was kinda wondering why the benchmarks were in Chinese?
  9. the last resort said:
    I was kinda wondering why the benchmarks were in Chinese?


    It's a Chinese site. :p
  10. the last resort said:
    I was kinda wondering why the benchmarks were in Chinese?


    Because they usually get the ES chips and boards first, given that they're the manufacturing base after all. :bounce:
  11. I edited to place the Roadmap in its own thread. It's worth its own discussion.

    I'll wait to see Deneb, Propus, Heka and Rana benchmarks before I make a final decision on how they compare to Agena. Right now, my Toliman's good enough. What I really need this year is a 24" LCD. I'm getting tired of a 17" CRT.

    I don't expect Deneb to do as well on the desktop as Nehalem, but I expect it to catch up to the more recent Core 2 quads and overtake Kentsfield at best.
  12. http://www.techradar.com/reviews/computing/components/processors/amd-opteron-2384-shanghai--484339/review

    In German
    http://www.tecchannel.de/server/prozessoren/1776972/test_amd_opteron_2384_cpu_benchmarks_performance_shanghai/

    From the look of it, the Opteron 2384 still performs somewhat worse than Penryn in desktop applications. However, the CFD calculation result is intriguing.
  13. I'd really like to see some gaming benchies with multi-GPU setups, like what we saw recently with Ci7. In fact, I'd like to see a side-by-side comparison of the two across a 'wide variety of (gaming) workloads' :).
  14. chaohsiangchen said:
    http://www.techradar.com/reviews/computing/components/processors/amd-opteron-2384-shanghai--484339/review

    In German
    http://www.tecchannel.de/server/prozessoren/1776972/test_amd_opteron_2384_cpu_benchmarks_performance_shanghai/

    From the look of it, the Opteron 2384 still performs somewhat worse than Penryn in desktop applications. However, the CFD calculation result is intriguing.


    As I said before, server results generally don't translate to desktop results. So we are still in the dark until a full official review comes out, and according to Yips it will be about January when Deneb's NDA lifts.
  15. Why can't they just run 'normal' apps on it???

    ppff

    I remember... K10 Barcy in server space beat Kenty.
  16. amdfangirl said:
    Why can't they just run 'normal' apps on it???

    ppff

    I remember... K10 Barcy in server space beat Kenty.


    Because that would make sense?

    Have you not noticed the trend lately? Everyone does what makes no sense.

    Just like ppl who vote for the wrong reasons or without knowing a damn thing just because they think the person is cool!!!!

    It's just the way it is. Nothing makes sense anymore.

    You, my dear fangirl, live in Senseville. We are all stuck in Nonsenseville.
  17. fazers_on_stun said:
    I'd really like to see some gaming benchies with multi-GPU setups, like what we saw recently with Ci7. In fact, I'd like to see a side-by-side comparison of the two across a 'wide variety of (gaming) workloads' :).



    So AMD is going to have mobos with CF and SLI support on the same platform like the i7? I wasn't aware of this. Gj AMD! :love:
  18. roofus said:
    So AMD is going to have mobos with CF and SLI support on the same platform like the i7? I wasn't aware of this. Gj AMD! :love:


    I dunno. But since Nvidia is letting Intel's X58 boards use SLI, why wouldn't they let AMD do the same with their boards? After all, AMD could withhold some key technology from Nvidia, like Intel threatened to do with QPI, unless Nvidia cooperates. Nvidia already had one can of "whoopass" explode in its face - it doesn't need another :kaola:
  19. jimmysmitty said:
    Because that would make sense?

    Have you not noticed the trend lately? Everyone does what makes no sense.

    Just like ppl who vote for the wrong reasons or without knowing a damn thing just because they think the person is cool!!!!

    It's just the way it is. Nothing makes sense anymore.

    You, my dear fangirl, live in Senseville. We are all stuck in Nonsenseville.


    OBAMA!!!!

    Word, Playa.
  20. fazers_on_stun said:
    But since Nvidia is letting Intel's X58 boards use SLI, why wouldn't they let AMD do the same with their boards?


    I seriously doubt this will ever happen, as it would most likely kill Nvidia's own chipset division. Other than SLI, is there really any compelling reason to buy an Nvidia chipset?
  21. Just_An_Engineer said:
    I seriously doubt this will ever happen, as it would most likely kill Nvidia's own chipset division. Other than SLI, is there really any compelling reason to buy an Nvidia chipset?


    They won't. I knew it when I responded to the post I did. I think it would be a smart move and one that would grab some respect, but they won't do it. They are way too "in tune" with what everyone wants lol.
  22. Too late for that; Nvidia chipsets are already dead. If Nvidia does put out boards/chipsets for the i7, only an idiot would buy them. There's literally no reason to buy one if you can get an Intel board with SLI support.
  23. Very VERY true. Intel does make the best chipsets for Intel: they run cooler, are much more stable, and have more headroom for overclocking. I've never had problems with my 780i, but I know that's an exception, not the rule.
  24. Buying SLI right now would make no sense, or Nvidia for that matter.

    AMD/ATI 4xxx GPUs with 8.12 drivers and beyond will have stream computing built into the drivers. This means that every app/game capable of using a GPGPU will automatically use those new features of ATI's drivers/cards.

    On top of that, Nvidia's CUDA is a high-level driver versus ATI's low-level one. This means that if a developer writes something to make use of CUDA, they have to rely on Nvidia's drivers to make it work; in other words, something may break in future driver releases.

    ATI's low-level (or CTM) approach will allow developers to write their program and forget about it. It will ALWAYS work the way it was designed in all future Catalyst updates. Nvidia would have to freeze their driver to get that kind of reliability out of CUDA... or make it much more bloated and slower. AMD/ATI's approach is much better for developers.
  25. AMD has the same problem it has had for years:

    Intel sells a 3GHz CPU that runs at 4-5GHz, while AMD sells a 3GHz CPU that might barely run just over 3GHz.

    Perception is everything, and this die shrink and these updates are good, but not good enough to keep market share from slipping.

    Hardcore AMD fans, hang in there!
  26. grndzro said:
    Buying SLI right now would make no sense, or Nvidia for that matter.

    AMD/ATI 4xxx GPUs with 8.12 drivers and beyond will have stream computing built into the drivers. This means that every app/game capable of using a GPGPU will automatically use those new features of ATI's drivers/cards.

    On top of that, Nvidia's CUDA is a high-level driver versus ATI's low-level one. This means that if a developer writes something to make use of CUDA, they have to rely on Nvidia's drivers to make it work; in other words, something may break in future driver releases.

    ATI's low-level (or CTM) approach will allow developers to write their program and forget about it. It will ALWAYS work the way it was designed in all future Catalyst updates. Nvidia would have to freeze their driver to get that kind of reliability out of CUDA... or make it much more bloated and slower. AMD/ATI's approach is much better for developers.


    No thanks. I may be game for ditching the Nvidia chipset in favor of the X58, but I won't consider ATI cards for at least a year, year and a half (provided they maintain good offerings like they currently do). I try to maintain discipline with my investment in video cards because I used to spend wayyy too much money playing that game.
  27. roofus said:
    They won't. I knew it when I responded to the post I did. I think it would be a smart move and one that would grab some respect, but they won't do it. They are way too "in tune" with what everyone wants lol.


    I guess if AMD wanted to force the issue, they would have to figure out which would be the lesser loss - fewer Deneb sales vs. increased 48XX GPU sales... However, if they did nothing, it would be a bit of an embarrassing admission that they don't swing the same weight as Intel does. Wonder how much of an ego Dirk Meyer has, compared say to "Mr. Whoopass" Jen-Hsun Huang?
  28. Those benchmarks don't show the true performance of the Deneb CPUs, as the tests were run on a bottlenecking AM2+ board with DDR2.
  29. sonar610 said:
    Those benchmarks don't show the true performance of the Deneb CPUs, as the tests were run on a bottlenecking AM2+ board with DDR2.


    Um, so having a faster link and DDR3 should make it super amazingly incredibly ultra faster?

    Meh. The last time AMD switched memory types (DDR to DDR2), we saw either worse performance or maybe 1-2% gains. I doubt DDR3 will make it much faster, considering that current DDR3 latencies are pretty high.
  30. The latencies aren't bad at all on DDR3. Sure, CL8 at 1600 MHz sounds high, but it's actually the same latency as CL4 DDR2-800.
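
    For the arithmetic (a quick sketch in Python, assuming the usual DDR convention that the transfer rate is twice the I/O clock):

        # Absolute CAS latency in ns = CAS cycles x I/O clock period.
        def cas_latency_ns(cl_cycles, transfer_rate_mts):
            io_clock_mhz = transfer_rate_mts / 2  # DDR: two transfers per I/O clock
            return cl_cycles * 1000 / io_clock_mhz

        print(cas_latency_ns(8, 1600))  # DDR3-1600 CL8 -> 10.0 ns
        print(cas_latency_ns(4, 800))   # DDR2-800 CL4  -> 10.0 ns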
  31. I believe AMD has claimed a 5% increase on the AM3s.
  32. jimmysmitty said:
    Um, so having a faster link and DDR3 should make it super amazingly incredibly ultra faster?

    Meh. The last time AMD switched memory types (DDR to DDR2), we saw either worse performance or maybe 1-2% gains. I doubt DDR3 will make it much faster, considering that current DDR3 latencies are pretty high.


    There is a reason why Intel has opted for DDR3 on their Core i7. Just like DDR2 when it first came out, there are almost no performance gains now, but over time DDR3 will show some noticeable performance gains, especially when the bandwidth of DDR3 can work with the cache and other component bandwidths. One day we will have RAM, CPU and GPU (probably via PCI-E) bandwidth all at the same speed, I guess ;)
  33. cjl said:
    The latencies aren't bad at all on DDR3. Sure, CL8 at 1600 MHz sounds high, but it's actually the same latency as CL4 DDR2-800.


    I understand all that. But it really depends on how well AM3 works with DDR3 and at what voltages their CPU will work with such high-speed memory.

    I was just pointing out that a memory switch alone has never shown performance improvements for AMD or Intel, except in Intel's case where they added another memory channel, which does help performance a bit.

    Anonymous said:
    There is a reason why Intel has opted for DDR3 on their Core i7. Just like DDR2 when it first came out, there are almost no performance gains now, but over time DDR3 will show some noticeable performance gains, especially when the bandwidth of DDR3 can work with the cache and other component bandwidths. One day we will have RAM, CPU and GPU (probably via PCI-E) bandwidth all at the same speed, I guess ;)


    Someday we will have everything on one chip. Soon it will be GPU and CPU. Then it will be memory, CPU and GPU. One step in that direction is Fusion and Intel's equivalent. Another step is Intel's Terascale chip, which is extremely modular to the point that each core can be either a GPU, PPU or CPU.

    Of course it's still a bit away, but I am sure we will have it some day.
  34. sighQ2 said:


    As I have said before, there is no way to tell if this isn't just a cherry-picked chip being used for that purpose.

    Considering the last time AMD showed off a Phenom, you might want to take this with a grain of salt. Or dive right into the hype and end up getting let down. It's your choice.
  35. sighQ2 said:


    Of course, when you posted this, did you realize this is the same exact event that keeps getting re-posted? Read it carefully. This was done by AMD, not an unbiased third party. As encouraging as it is, I will reserve judgment for now.