AMD CPU speculation... and expert conjecture - Page 200

Tags:
  • AMD
  • CPUs
December 9, 2013 9:56:02 AM

juanrga said:


noob and company I notice how you are again silent to people here mentioning and discussing ARM.


I don't care if ARM is discussed. It's the way it's been discussed that people take issue with. As if ARM is the magical solution to everything just because it's newer. The grass is always greener in the other person's yard when the scalability issues are all conveniently ignored.
December 9, 2013 10:05:04 AM

gamerk316 said:


Well, nothing is stopping you from making a GPU using X86 cores; Intel tried it (Larrabee/Knights Peak)...oh wait, it drew a ton of power and hasn't actually made it as a product yet. Never mind.


Actually that's still going on and shipping now. They redubbed it Xeon Phi.

They're still dumping money into it and expanding it with on-package memory (HBM).
December 9, 2013 10:27:44 AM

gamerk316 said:
Ags1 said:
gamerk316 said:
noob2222 said:
http://www.headline-benchmark.com/results/1cbd380e-ff9f...

ill play with some settings later and see how that affects things. Don't remember right now where I have everything set at as far as nb, htt, fsb.

waiting for my 290x water block still, might put it back in just to test it again if i can do it without unhooking the 6970 hoses.

^^ also looking at Gamerk's bench, his memory testing carried all the way down to 2mb and yuka's dropped off at 128k. Did GK test with a xeon to have that much caching?


noob2222 said:
^^ could be, his cpu and memory aren't detected either.

Checking some others, looks more like a haswell system.

http://www.headline-benchmark.com/results/0f004cfc-1942...

or ... now im confused, yuka's seems to be the one thats off with the drop after 128mb.

http://www.headline-benchmark.com/results/93a871e9-4a76...


Standard 2600k, at stock. 8GB DDR3 (forget the brand offhand). Might be the fact I've highly tuned my Windows installation for performance, and something is getting messed up settings wise...I'll toy around over the weekend and see if i can figure out why the CPU/RAM isn't getting detected right, but the GPU is...


I altered the maths on the site slightly to resolve the issues of variance you could see in some scores. The system scores were giving a lot of weight to the performance found at around 4 threads (ostensibly to simulate gaming loads) but I have now dropped that as the results were a bit unreliable and varied a lot from run to run. Also, a user informed me that the 4-thread results could be massively influenced by playing with the process priority, but now I am using data points that are largely immune to that.


You could significantly affect the results via priority, especially when the number of threads is greater or equal to the number of CPU cores. In this case, the higher the priority, the less of a chance of another background application bumping one of your application threads, which, if they are using every CPU core, WILL affect the final result.

I suspect that's why my system fared so well; I have a very customized Win7 install with almost no background tasks running.

...see how fun it is making "unbiased" benchmarks yet?


Actually, the results from threads > logical cores are not influenced by priority - at that point the app is consuming approximately 100% of the CPU and there is nothing extra to squeeze out. It might make a difference if you are running other things in the background, but the app specifically requests users to shut down background processes! And really, if you are going to run a benchmark in parallel to youtube and video rendering, you can't expect sane results, regardless of what you set the priority to :-)

The place where priority affected the results was where threads < logical cores, as higher priority seemed to incline Windows to schedule the threads more efficiently. But once the cores are saturated, there is basically no difference. Here are Yuka's experiments with priority (high priority compared to normal priority):

http://www.headline-benchmark.com/results/cef25b47-463f...

As you can see, the priority possibly gives a boost for light threading, but by the time cores are saturated there is no real difference.
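Ags1's scoring change can be sketched roughly like this. This is a hypothetical illustration: the function name `system_score` and the simple averaging are made up for the sketch, not the site's actual maths; the idea shown is just the one described above, dropping the priority-sensitive ~4-thread data points and scoring only from runs that saturate the CPU.

```python
# Hypothetical sketch: ignore the priority-sensitive light-threading results
# and score only runs where thread count >= logical cores, i.e. the
# saturated data points that are largely immune to process priority.
def system_score(results, logical_cores):
    """results maps thread count -> measured throughput."""
    saturated = [v for n, v in results.items() if n >= logical_cores]
    if not saturated:
        raise ValueError("no saturated data points to score from")
    return sum(saturated) / len(saturated)

# Example with made-up numbers: only the 8- and 16-thread runs count.
print(system_score({1: 100, 4: 350, 8: 400, 16: 410}, logical_cores=8))
```

With this weighting, bumping the process priority can still lift the 1- and 4-thread numbers, but the score itself only looks at runs where every core is already busy.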

December 9, 2013 10:43:52 AM

Ags1 said:
Actually, the results from threads > logical cores are not influenced by priority - at that point the app is consuming approximately 100% of the CPU and there is nothing extra to squeeze out. It might make a difference if you are running other things in the background, but the app specifically requests users to shut down background processes! And really, if you are going to run a benchmark in parallel to youtube and video rendering, you can't expect sane results, regardless of what you set the priority to :-)

The place where priority affected the results was where threads < logical cores, as that seemed to incline Windows to schedule the threads more efficiently. But once the cores are saturated, there is basically no difference. Here are Yuka's experiments with priority (high priority compared to normal priority):

http://www.headline-benchmark.com/results/cef25b47-463f...

As you can see, the priority gives a boost for light threading, but by the time cores are saturated there is no real difference.



Depends on a few things honestly. Intel/AMD may differ here, due to how CMT operates. Going to be Linux/Windows scheduling differences on this one too...For the Windows case:

In the case where NumThreads < NumCores, I'd expect a speedup via priority boost, as this would reduce the chance of another system thread booting one of the threads for your application, instead booting a lower priority thread from a different core. This also has the secondary effect of greatly reducing the chances of threads bouncing between cores, which can eat performance.

For the NumThreads > NumCores case, depending on CPU arch and scheduling, a few things could happen. On one hand, you'd have a bottleneck where no matter what you do, some of your threads can't run, and thus priority boosts really won't affect performance. On the other hand, in an HTT/CMT system, getting threads done even slightly faster can have a significant impact if it results in getting another thread off a shared HTT/CMT core (which costs you performance). That was the case I was thinking of above.

The Linux case would be the most interesting, since the default scheduler (CFS) tends to run threads in such a way as to ensure each gets roughly the same amount of total execution time. As a result, I'd expect total execution time to rise as the number of threads in the system increases (background tasks, etc.). I'd expect you'd be able to measure differences between a heavy and a light Linux distro in purely CPU-bound benchmarks.

FYI, you should be able to invoke the WinAPI and manually set priority to individual threads; should be trivial to make them all the highest priority on Windows, which should remove any such issues in the future.
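Gamerk's WinAPI suggestion can be sketched via Python's ctypes. SetThreadPriority, GetCurrentThread, and THREAD_PRIORITY_HIGHEST are real Win32 names, but the wrapper below (its name and its no-op fallback off Windows) is just an illustration, not the benchmark's actual code:

```python
import ctypes
import sys

# THREAD_PRIORITY_HIGHEST is the documented Win32 constant (value 2).
THREAD_PRIORITY_HIGHEST = 2

def boost_current_thread_priority():
    """Best-effort: raise the calling thread's priority on Windows.

    Returns True if SetThreadPriority succeeded, False on failure or on
    non-Windows platforms (where this sketch simply does nothing).
    """
    if sys.platform != "win32":
        return False
    kernel32 = ctypes.windll.kernel32
    # GetCurrentThread returns a pseudo-handle for the calling thread.
    handle = kernel32.GetCurrentThread()
    return bool(kernel32.SetThreadPriority(handle, THREAD_PRIORITY_HIGHEST))
```

A benchmark worker would call this at the top of each thread's entry point, so every compute thread runs at the same elevated priority regardless of what the user set for the process.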
December 9, 2013 10:44:38 AM

juanrga said:
The leaked roadmap is not official. Someone leaked it from a non-public presentation given by AMD. This is what "leaked" means. I can see in the bottom part of the slide the word "CONFIDENTIAL". Therefore I am not sure why people insist on saying that it is not an official roadmap. Official roadmaps don't have the word "CONFIDENTIAL" printed in them.

And how difficult would it be to insert the text "CONFIDENTIAL" on a slide, in a document or an image...?

Printing business cards saying "Title: Emperor of the Galaxy" doesn't make it true, but it is simple enough to make them... And a lot of fun...
December 9, 2013 11:13:45 AM


It's a bit OT, so I PM'd you gamerk.

For the sake of posterity, I believe Cazalan below is referring to the porn spam that has since been deleted, not this post... lol
December 9, 2013 2:37:29 PM

Wow that's just a bit off topic.
December 9, 2013 2:41:09 PM

kviksand81 said:
juanrga said:
The leaked roadmap is not official. Someone leaked it from a non-public presentation given by AMD. This is what "leaked" means. I can see in the bottom part of the slide the word "CONFIDENTIAL". Therefore I am not sure why people insist on saying that it is not an official roadmap. Official roadmaps don't have the word "CONFIDENTIAL" printed in them.

And how difficult would it be to insert the text "CONFIDENTIAL" on a slide, in a document or an image...?

Printing business cards saying "Title: Emperor of the Galaxy" doesn't make it true, but it is simple enough to make them... And a lot of fun...


I think even AMD isn't sure what their roadmap will be 2 years out. It will depend on their Kaveri and other APU sales and which nodes GF finally has available. They can always tape out parts and not make them depending on the economics of it at the time.

They even said they would release a version of the 8 core APU used in PS4/XBone but that's not on any roadmap yet either.
December 9, 2013 2:42:16 PM

We broke the 10k barrier!
December 9, 2013 3:31:59 PM

Ags1 said:
We broke the 10k barrier!


And only 90% of it was off topic! Good hustle guys!
December 10, 2013 1:46:50 AM

http://www.fudzilla.com/home/item/33361-no-amd-is-not-k...

"AMD Manager of APU/CPU Product Reviews James Prior told Gamers Nexus that the slide was fake and that FX parts aren’t going anywhere."

You have to ask yourself: why would someone fake a roadmap?
Why would someone show AMD going big on ARM (something I said was not happening, as it's a "side project")?
December 10, 2013 2:38:31 AM

Cazalan, my comment was not addressed to you.

kviksand81, you can believe what you want. The PR representative has also said that the roadmap presented at APU13 is official, and you continue believing it is not official because you cannot find it on the AMD website...

jdwii, except that James Prior didn't say the roadmap was fake; he only said: "I've never seen that slide before, I don't know where that came from". And this is the same James Prior with whom I spent days arguing on Twitter when I leaked the Kaveri diagram just before publishing the BSN* article. I got the diagram from an official talk given by one of his chiefs at AMD, yet for days he claimed he was familiar with neither the talk nor the slide I was mentioning to him. Finally, he was unable to answer my question about the slide with a simple YES or NO.

8350rocks said:

Bulk is dead past 28nm. PERIOD.

FinFET on bulk will be more expensive than planar FD-SOI


20nm bulk is ready for volume production. The first processors have been taped out on 16nm bulk FinFET, and the first tests of 10nm bulk FinFET are already under way.
December 10, 2013 3:16:18 AM

@juan
What Cazalan said is correct. Mentioning ARM a few times here and there is one thing. Talking about ARM in everything you post is something different entirely.

Good luck on GF's 20nm being on time.
December 10, 2013 4:21:05 AM

^^^ I didn't say if he was correct or not, merely said that my comment was not addressed to him. Also talking about other posters in everything you post here is something different entirely.
December 10, 2013 5:06:03 AM

@juanrga:

20nm bulk is a pipe dream...it will be FD-SOI for CPUs past 28nm. 20nm bulk is only ULP for ARM cores, etc. that operate below 2 GHz.

16nm is a hybrid process with 14nm FEOL and 20nm BEOL. It is an XM process and will be geared toward FinFETs, probably on FD-SOI...though if they actually manage to get FinFETs to work, they may be able to do without FD-SOI. However, FinFET on bulk is quite a bit more expensive and complex.

14nm will be FD-SOI or FinFET, and 10nm will only be FD-SOI FinFETs because they will have to eliminate all complexities...or did you not read that even Intel is going to FD-SOI past 14nm?
December 10, 2013 5:13:00 AM

jdwii said:
http://www.fudzilla.com/home/item/33361-no-amd-is-not-k...

"AMD Manager of APU/CPU Product Reviews James Prior told Gamers Nexus that the slide was fake and that FX parts aren’t going anywhere."

You have to ask your self why would someone fake a roadmap?
Why would someone show Amd going big on Arm(something i said was not happening as its a "side project")


Not going anywhere != New Product. Read between the lines on this one.
December 10, 2013 12:21:04 PM

gamerk316 said:
Not going anywhere != New Product. Read between the lines on this one.


Agreed
December 10, 2013 1:35:15 PM

szatkus said:
Hi guys.
Take that:
http://cdn3.wccftech.com/wp-content/uploads/2013/12/AMD...
And that:
http://www.benchmark.pl/uploads/backend_img/a/fotki_new...

About ~10% better than Richland. Quite nice.


I'll be "that guy" and point out the obvious: neither of those are CPU intensive benchies.

As much as I want the new APUs to do fine, credit is due when it's earned. In this case, I want to see how it fares not only in an "avg joe" scenario (PCMark does that, I believe), but also in the OC scenario.

I don't know how to express it well, but they put themselves at the same level as an i5K, so I might judge them at that level. Just like they put themselves at the same level as the 980X with BD1. We all know how that went, lol.

Cheers!
December 10, 2013 2:12:24 PM

8% uplift over the i5-4670K on CPU-only or CPU/GPU total score? I really don't believe Kaveri can even compete with the 3570K, much less the 4670K. They should show some CPU-only benchmarks; maybe they are using the GPU to boost the total score... I don't like this. I am starting to feel Kaveri will be a disappointment in CPU-only workloads.
December 10, 2013 2:32:04 PM

Off topic, but I installed W8.1 on my AMD A8-3520M Llano and got 15% lower scores in the CPU multicore benchmark in Cinebench, but a 5% boost in the single-core test. In wPrime I got the same scores, maybe 1-2% slower on 8.1 compared to 7. But in the Fritz chess benchmark I got a 13% boost in the 1-thread test compared to 7, and about a 1% boost using all the cores.

I noticed W8.1 seems to use turbo more efficiently even on the older Llano laptop, but I can't explain the lower scores in Cinebench and the slightly lower scores in wPrime (tested 3 times).

Going to try gaming next, see how that feels on this thing.
December 10, 2013 3:10:14 PM

etayorius said:
8% uplift over i5 4670k on CPU only or CPU/GPU total score? i really don`t believe Kaveri can even compete with the 3570k, even less with the 4670k, they should show some CPU only benchmarks, maybe they are using the GPU to boost total score... i don`t like this, i am starting to feel Kaveri will be a disappointment on CPU only workloads.


PCMark scores include a gaming component, so almost certainly the iGPU is included in the 4670K comparison. Also, the next bullet point on the slide calls out the GCN cores.
December 10, 2013 3:39:47 PM

PCMark 8 measures APU performance. It is measuring CPU+GPU of Kaveri against the CPU+GPU of i5-4670k. PCMark 8 uses code from ordinary applications that are accelerated by the iGPU: Handbrake, Photoshop, VLC player...
December 10, 2013 4:57:05 PM

jdwii said:
Off topic but i installed W8.1 on my Amd A8 3520M Llano and i got 15% lower scores in CPU Multicore benchmark on Cincebench but i have a 5% boost in the Single core test. In Wprime i got the same scores maybe 1-2% slower on 8.1 compared to 7. But in Fritz chess benchmark i got 13% boost using the test with only 1 thread compared to 7 and i got about a 1% boost using all the cores.

I noticed W8.1 seems to use turbo a more efficiently even on the older Llano laptop but i can't explain the lower scores in Cinebench and the slightly lower scores in Wprime(tested 3 times)

Going to try gaming next see how that feels on this thing.


Check on the drivers. They changed a lot of stuff around, so your answer might lie there.

juanrga said:
PCMark 8 measures APU performance. It is measuring CPU+GPU of Kaveri against the CPU+GPU of i5-4670k. PCMark 8 uses code from ordinary applications that are accelerated by the iGPU: Handbrake, Photoshop, VLC player...


Ok, sounds better then. Does GCN include a 10bit/H265 decoder? :p 

Cheers!
December 10, 2013 6:15:35 PM

The PCMark slide isn't Kaveri.

It does show that PCMark 8 is heavily influenced by the iGPU.

@juan

You should take your own advice and quit trying to belittle people in every post.
December 10, 2013 7:46:33 PM

So is Kaveri being built at TSMC or GlobalScrewndies? I had the impression that AMD decided to go with TSMC for Kaveri.

Can anyone confirm or deny this?
December 11, 2013 6:28:57 AM

Well, Intel's 10% improvement doesn't mean much, as their iGPU still sucks compared to Trinity and Richland. 10% faster on a pretty decent iGPU from AMD is quite nice.
December 11, 2013 7:14:19 AM

gamerk316 said:
szatkus said:
Hi guys.
Take that:
http://cdn3.wccftech.com/wp-content/uploads/2013/12/AMD...
And that:
http://www.benchmark.pl/uploads/backend_img/a/fotki_new...

About ~10% better than Richland. Quite nice.


Intel Improves 10%, improves GPU = Intel Sucks
AMD Improves 10%, improves GPU = Quite Nice


I thought AMD was aiming for at least 30% GPU performance and 20% CPU... 10% in GPU over Richland seems rather slow, and it's probably fake.

Kaveri's GPU is 512 GCN cores, Richland's is 384 VLIW4, and GCN is faster even with the same number of cores. Kaveri's performance should range between the HD 7850 and HD 7870, and it should be much, much faster with faster RAM.
December 11, 2013 7:30:28 AM

juanrga said:
From by BSN* article (abstract):

Quote:
Combining all this data, I predict that the CPU of the top Kaveri APU will be about 26% faster than the top Trinity APU and about 17% faster than the top Richland APU. This would put the multi-threaded performance of the CPU of the new quad-core Kaveri APU at the same level as an Intel quad-core i5 or a six-core AMD FX with traditional software.


The next slide has been recently leaked:



For the GPU, I assumed a minimum gain of 33%. But I was then assuming higher GPU frequencies, based on the 1050 GFLOPS figure. Kaveri comes with lower GPU frequencies; the lower frequencies are compensated by the better architecture, and the Kaveri GPU is expected to be ~30% faster than Richland.

Finally, it seems my simulations are not far off...


etayorius said:
I Thought AMD was aiming at at least 30% GPU performance and 20% CPU... 10% in GPU over Richland seems rather slow and it´s probably fake.

Kaveri GPU is a 512 GCN, Richland is 384 WVLI4 and GCN is faster even with the same numbers of Cores, performance of Kaveri should range between HD7850-HD7870 and should be much much faster with faster Ram.
In games with Mantle support it might be possible, but for everything else expect 7730-7750 territory. DDR3 is going to hurt Kaveri much more than it did Trinity or Richland.

December 11, 2013 10:19:56 AM

8350rocks said:


20nm bulk is a pipe dream...it will be FD-SOI for CPUs past 28nm. 20nm bulk is only ULP for ARM cores, etc. that operate below 2 GHz.



We'll just have to see what they can do with 20nm. Xilinx is shipping some 20nm parts from TSMC now. It is supposedly a mix of LP/HP as there is only 1 process being used.

They're touting 2x speed increase over their 28nm parts. 400G applications. That's some serious bandwidth.
December 11, 2013 10:36:42 AM

logainofhades said:
Well Intel's 10% improvement doesn't mean much as their IGP still sucks compared to Trinity and Richland. 10% faster on a pretty decent IGP from AMD is quite nice.


Intel increased GPU performance on Haswell by about 30-40%, depending on the benchmark. CPU performance went up 10%. And note the never-ending amount of "Hasfail" references.

Hence my point: if AMD does worse than that, it means Intel is closing the performance gap on the GPU side while increasing the gap on the CPU side.
December 11, 2013 10:41:03 AM

Quote:
The next slide has been recently leaked:



For the GPU, I assumed a minimum gain of 33%. But I was then assuming higher GPU frequencies, based on the 1050 GFLOPS figure. Kaveri comes with lower GPU frequencies; the lower frequencies are compensated by the better architecture, and the Kaveri GPU is expected to be ~30% faster than Richland.

Finally, it seems my simulations are not far off...


"Up to" is the key point here; take 66% of that, and get about 14% performance CPU side, or basically what I predicted.

GPU side is the same, so figure 20% gains typical, assuming proper bandwidth and non-memory based benchmarks.
December 11, 2013 10:43:54 AM

gamerk316 said:
Intel Improves 10%, improves GPU = Intel Sucks
AMD Improves 10%, improves GPU = Quite Nice


I never said that Intel sucks. Also, for Kaveri that's a nice achievement, because it's clocked 10% lower than Richland.

And I know that PCMark is not a good benchmark, but at least it's something.
Oh, we also have this: http://browser.primatelabs.com/geekbench3/223722
December 11, 2013 10:54:18 AM

^^ Never said you did; I was referring to all the "Hasfail" references that occurred after Haswell released. Just pointing out that a lot of the same people who claimed 10% CPU improvement was bad for Intel would just as quickly claim it is good for AMD.

Never mind that 10% of a higher number is larger than 10% of a lower number.
December 11, 2013 11:20:22 AM

Again, you guys are reading this wrong: that's not Kaveri, that is the AMD A10-6800K, and I'm sure that is with their new driver and Windows 8.1. The other site did not post any real performance figures to compare it to the A10-6800K. That said, I will continue to say 15% more performance on average in CPU benchmarks and a 25-30% boost in GPU performance at most.

That is an impressive thing to do on bulk. I will continue to say that AMD fixed the scaling a bit with this design; you should get close to an FX-6300 in multithreading performance and around a 15% boost in single-core performance compared to the FX-8350. Which is still around 15% slower per core compared to the Haswell i5.

Again, Gamer, this was with a driver update and Windows 8.1! Not their new APU.
December 11, 2013 11:24:52 AM

However, certain responses aside, this has been a good page so far compared to the last 30. For one, we found out that "FX isn't going anywhere."
We found out that AMD's weakest point is now improving, and no, it's not their IPC performance (what instruction set is being used in the test?), it's their collaboration with software companies. AMD is working with them now; when I first bought their products this didn't happen. Now we are seeing more Gaming Evolved logos, more OpenCL support, and a decent amount of the industry looking into Mantle. Next we need them to use the newer instruction sets on the FX/APU, as well as see more cores being used; this will happen.


December 11, 2013 11:32:25 AM

Yuka said:
juanrga said:
PCMark 8 measures APU performance. It is measuring CPU+GPU of Kaveri against the CPU+GPU of i5-4670k. PCMark 8 uses code from ordinary applications that are accelerated by the iGPU: Handbrake, Photoshop, VLC player...


Ok, sounds better then. Does GCN include a 10bit/H265 decoder? :p 

Cheers!


It goes the other way. ;) H.265 has been designed to benefit from HSA acceleration. There was a discussion of that at APU13.

H.265 is so complex that you need a 16-core Sandy Bridge Opteron to get 30fps @ 1080p. It remains to be seen how the Kaveri APU performs.
December 11, 2013 11:36:30 AM

gamerk316 said:
^^ Never said you; I was referring to all the "Hasfail" references that occurred after Haswell released. Just pointing out that a lot of the same people who claimed a 10% CPU gain was bad for Intel would just as quickly claim it is good for AMD.

Never mind that 10% of a higher number is larger than 10% of a lower number.


Yup, if AMD goes up 10% and Intel goes up 10%, then AMD has fallen even further behind Intel. And Intel isn't even competing on raw performance these days; they're competing on performance per watt.
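The point above is plain arithmetic. A tiny Python sketch makes it concrete; the 100 and 140 baseline scores below are hypothetical, chosen only to illustrate the ratio, not real benchmark numbers:

```python
# Hypothetical single-thread scores; only the ratio between them matters.
amd_base, intel_base = 100.0, 140.0
gain = 0.10  # both vendors improve by 10%

amd_new = amd_base * (1 + gain)
intel_new = intel_base * (1 + gain)

# Equal percentage gains widen the absolute gap: from 40 points to about 44.
print(intel_new - amd_new)
```

Same relative gap, bigger absolute gap, which is why matching Intel's percentage improvement doesn't close anything.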
December 11, 2013 11:58:48 AM

gamerk316,

Haswell's CPU was up to 6% better than IB, while increasing power consumption and running about 15 ºC hotter.

Kaveri's CPU will be up to ~30% better than Richland (some preliminary benchmarks show that). And AMD achieves that with lower clocks, lower power consumption, and less thermal dissipation.

My BSN* prediction claims "about 17%" for the average.
a b à CPUs
December 11, 2013 12:06:14 PM

^ See, before, you were screaming 30% all over the place while I continued to say just that: 15% (CPU) and 25-30% (GPU), based on averages.
December 11, 2013 6:10:00 PM

Hey guys.

Sorry for jumping in again, but I thought this would be a good topic to discuss.

Before we find a true victor, let's talk overclocking.

I'll give you an example. Right now I have my i5-3570k @ 4.7 GHz on 1.750 v-core (which I don't think is too bad).

With this overclock, my scores in the benchies have been exceeding those of AMD's "best processors."

Even at 4.7, this processor shows no sign of giving up. With 4 instances of Prime95 running, the highest it has gotten with my H100i is 72 ºC, and at the previous 4.5 GHz and 1.105 v-core it maxed at 68 ºC. With the barrier at ~110 ºC (I think), there is totally room to go, since I'm getting a 3770k soon.

So those people that usually max out at 4.8 GHz, and the AMD people who say their 8350 can go to 5.5-6 GHz: think again. Some of those wafers out there are the chosen ones. ^_^
a b à CPUs
December 11, 2013 8:01:17 PM

GOM3RPLY3R said:
Hey guys.

Sorry for jumping in again, but I thought this would be a good topic to discuss.

Before we find a true victor, let's talk overclocking.

I'll give you an example. Right now I have my i5-3570k @ 4.7 GHz on 1.750 v-core (which I don't think is too bad).

With this overclock, my scores in the benchies have been exceeding those of AMD's "best processors."

Even at 4.7, this processor shows no sign of giving up. With 4 instances of Prime95 running, the highest it has gotten with my H100i is 72 ºC, and at the previous 4.5 GHz and 1.105 v-core it maxed at 68 ºC. With the barrier at ~110 ºC (I think), there is totally room to go, since I'm getting a 3770k soon.

So those people that usually max out at 4.8 GHz, and the AMD people who say their 8350 can go to 5.5-6 GHz: think again. Some of those wafers out there are the chosen ones. ^_^


If you can keep the FX-8350 cool and give it enough voltage through the board, I'm sure it will go that high. You said 1.750 V??? Really, that high? Pretty sure an FX-8350 can get to 4.8 GHz with just 1.5 V.
December 11, 2013 8:41:56 PM

GOM3RPLY3R said:
Hey guys.

Sorry for jumping in again, but I thought this would be a good topic to discuss.

Before we find a true victor, let's talk overclocking.

I'll give you an example. Right now I have my i5-3570k @ 4.7 GHz on 1.750 v-core (which I don't think is too bad).

With this overclock, my scores in the benchies have been exceeding those of AMD's "best processors."

Even at 4.7, this processor shows no sign of giving up. With 4 instances of Prime95 running, the highest it has gotten with my H100i is 72 ºC, and at the previous 4.5 GHz and 1.105 v-core it maxed at 68 ºC. With the barrier at ~110 ºC (I think), there is totally room to go, since I'm getting a 3770k soon.

So those people that usually max out at 4.8 GHz, and the AMD people who say their 8350 can go to 5.5-6 GHz: think again. Some of those wafers out there are the chosen ones. ^_^


Dude, are you serious? 1.75 vcore? I burned out a 90nm Pentium 4 at 1.55 V, and you're supposed to lose about 0.1 V of overvolting headroom per full node shrink.

Temps don't matter on Intel bulk. Intel's processes (I have two dead Intel chips, 32nm and 90nm) are extremely fragile and will break from overvolting even if you keep them plenty cool.
a b À AMD
a c 84 à CPUs
December 12, 2013 12:55:24 AM

^^ won't the intel cpu start degrading at that vcore? or maybe gom3r mistyped 1.175...?
still, at 4.7ghz it will only be close to or slightly ahead of stock 8-core or 4c/8t cpus in really 'wide' tasks. (had to comment on it after seeing a vcore that high in a cpu with higher thermal density and a poorer t.i.m.)

Analyst firm predicts the PS4 will conquer the next-gen market
http://vr-zone.com/articles/analyst-firm-predicts-ps4-w...
i see 4 million jaguar+gcn powered devices... mmmm...

now for something stupid, from wccefghireallydon'tknowwhatistech.com
http://wccftech.com/amds-flagship-a10-kaveri-apu-bundle...
the bf4 bundle is good news, but the rest of it is rather rubbish. wccfghineververifypricesoftech.com parrots shopblt's preorder prices. even then, subtracting the bf4 retail price (amd will be paying ea much less through various deals) puts the a10-7850k at roughly $120-130. kaveri will fly off the shelves at launch no matter how high amd prices it. but after the rush ends (and it will end soon after launch), those might not be worth buying at $189.

a b à CPUs
December 12, 2013 1:39:33 AM

"Analyst firm predicts the PS4 will conquer the next-gen market"
Well duh, i could have said that; in fact i did, several times. It's more powerful and cheaper, and the PS4 focuses more on the gamer vs the casual user.

Note that i'm a PC gamer and a Nintendo fan, and if PC gaming did not exist i would buy a PS4 over an Xbox One any day.
a b à CPUs
December 12, 2013 4:51:11 AM

jdwii said:
GOM3RPLY3R said:
Hey guys.

Sorry for jumping in again, but I thought this would be a good topic to discuss.

Before we find a true victor, let's talk overclocking.

I'll give you an example. Right now I have my i5-3570k @ 4.7 GHz on 1.750 v-core (which I don't think is too bad).

With this overclock, my scores in the benchies have been exceeding those of AMD's "best processors."

Even at 4.7, this processor shows no sign of giving up. With 4 instances of Prime95 running, the highest it has gotten with my H100i is 72 ºC, and at the previous 4.5 GHz and 1.105 v-core it maxed at 68 ºC. With the barrier at ~110 ºC (I think), there is totally room to go, since I'm getting a 3770k soon.

So those people that usually max out at 4.8 GHz, and the AMD people who say their 8350 can go to 5.5-6 GHz: think again. Some of those wafers out there are the chosen ones. ^_^


If you can keep the FX-8350 cool and give it enough voltage through the board, I'm sure it will go that high. You said 1.750 V??? Really, that high? Pretty sure an FX-8350 can get to 4.8 GHz with just 1.5 V.


But here's the thing: Intel has more OC headroom than AMD and a higher base IPC, so Intel gains more than AMD overall when OCing to max frequency. If you get into comparing max-OC results, AMD looks even worse than it normally would.
December 12, 2013 7:13:00 AM

Wasn't Intel using bulk for their CPUs? That should massively limit the OC capabilities of their chips.
December 12, 2013 10:29:53 AM

AMD owns several world-wide OC records thanks to SOI, but those are under extreme cooling. On air, the average OC of an i5-3570k is 4663 MHz, which is slightly higher than the average OC of an FX-6350: 4645 MHz. On water the tendency inverts, and the FX achieves slightly better frequencies than the i5: 5063 MHz (FX) vs 4819 MHz (i5). In any case these are differences of under 1% on air and about 5% on water. Thus for average overclockers, bulk is not massively limiting anything.
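As a quick sanity check on those averages, here is a small Python sketch; the frequencies are the ones quoted above, and the script itself is purely illustrative:

```python
# Average overclock frequencies quoted above, in MHz.
air = {"i5-3570K": 4663, "FX-6350": 4645}
water = {"i5-3570K": 4819, "FX-6350": 5063}

def lead_pct(winner, loser):
    """Percentage by which the winner's average clock exceeds the loser's."""
    return (winner - loser) / loser * 100

print(round(lead_pct(air["i5-3570K"], air["FX-6350"]), 2))      # ~0.39% lead on air
print(round(lead_pct(water["FX-6350"], water["i5-3570K"]), 2))  # ~5.06% lead on water
```

So the i5's air-cooled lead is well under 1%, while the FX's water-cooled lead is about 5%; neither gap suggests bulk is the limiting factor for typical overclocks.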

Kaveri will not break world records, but it will OC well.