AMD CPU speculation... and expert conjecture - Page 100

Tags:
  • AMD
  • CPUs
June 25, 2013 8:13:58 PM

palladin9479 said:
Meh, most tasks can be done in parallel; you just need to rethink the process flow. We tend to design in bottlenecks without knowing it, and then complain that we can't do it any faster. Unfortunately, methods that organize data such that it can be worked on in parallel also tend to be less efficient when forced to work in serial. It's mostly in task and process organization. And gamer, I have no idea how you can claim not to use GCC and maintain a straight face. I can attest that GCC is heavily used in the server industry, and our primary desktop product uses it for binary code (most of our desktop apps are written in Java and designed to be modular).


Define what you mean by "most" and remember that we're not discussing servers.
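
To pin down what "rethinking the process flow" looks like, here is a minimal sketch in Python (the toy workload and worker count are arbitrary assumptions, not anything palladin9479 specified): the same reduction written serially, then reorganized into independent chunks that can run in parallel. The chunking itself is exactly the kind of extra organization that costs you when the code is forced back to serial.

    from concurrent.futures import ProcessPoolExecutor

    def work(x):
        # stand-in for a CPU-bound step
        return x * x

    def serial_sum(data):
        # straightforward serial version
        return sum(work(x) for x in data)

    def parallel_sum(data, workers=4):
        # reorganize the data into independent chunks; no chunk depends
        # on another's result, so the chunks can run concurrently
        chunks = [data[i::workers] for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(serial_sum, chunks))

    if __name__ == "__main__":
        data = list(range(100000))
        assert serial_sum(data) == parallel_sum(data)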
June 25, 2013 8:26:30 PM

hafijur said:
It's not 50 watts if it's on full load, as it is normally 80-100 watts on peak loads. The other thing is I had an IBM R40 and it had 8-10 hour battery life web browsing on a 15-inch SXGA screen. Also, I know for a fact that 1.3GHz and 1.5GHz Pentium Ms were much faster than most P4s at gaming; in Need for Speed Underground and Flash-based games they destroyed a 2.66GHz P4 and a 1.6GHz P4. Pentium M CPUs were way ahead of their time; heck, they are better performers than Atoms, with similar battery life and form factor and only slightly more power consumption.

The Pentium M CPUs were definitely better than the K8s. I was amazed how good Pentium Ms were; with the IBM R40 I could not even hear it at 100% CPU load, it was such a low-power chip. The Pentium Ms are probably still as efficient as Intel Atoms, performance-per-watt-wise.

Intel can release CPUs with more cores if they want, they just don't have to. It makes no sense for them to, as they can still mark up their 6-core CPUs in price. They make 8-core Xeons already, and made the 6-core 980X a few years ago. It's not hard for them, but because the FX-8350 isn't faster than the 3770K, 4770K or 2600K, Intel doesn't have a reason to add more cores yet, and we all know Intel likes to make easy money.

Last thing: I only mention power as that is how I judge whether a CPU is better than what came before. Most technology companies always go on about increasing performance per watt, but AMD has not moved forward since like 2009 in the desktop market, considering that at 32nm they should have improved a lot more.

I get your power consumption point, but personally I like the best performance-per-watt components at the time, as it's more future-proof, or shall I say it won't look ancient in a few years' time, and I like to buy from companies that are innovating instead of releasing slapdash CPUs like Intel's P4 NetBurst chips.


Until you explain why this matters when it's completely stable, a question you continue to avoid, you are nothing but a troll.
June 25, 2013 8:28:31 PM

Seriously guys... STOP FEEDING THE TROLL.

This useless thread hijack by that user is getting really annoying.

Cheers!
June 25, 2013 8:35:54 PM

hafijur said:
It's not 50 watts if it's on full load, as it is normally 80-100 watts on peak loads.


I said the power draw difference at load is around 50 watts, to use a round number. The actual power consumption difference between the FX-8350 and the i7-4770K over THG's entire benchmark run was only 35 watts, despite the AM3+ MB being a higher-end unit than the LGA1150 unit (actually closer to the LGA2011 unit used with the i7-3930K). So I even overestimated a bit just to get a round number.
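
For scale, a quick back-of-the-envelope on what that 35 W gap costs; the hours-at-load and electricity rate below are assumptions, not measurements:

    delta_w = 35          # load-power gap from THG's benchmark run
    hours_per_day = 4     # assumed hours at full load per day
    rate = 0.12           # assumed electricity price, $/kWh

    kwh_per_year = delta_w / 1000.0 * hours_per_day * 365
    print("%.1f kWh/yr, ~$%.2f/yr" % (kwh_per_year, kwh_per_year * rate))
    # -> 51.1 kWh/yr, ~$6.13/yr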




Quote:
The other thing is I had an IBM R40 and it had 8-10 hour battery life web browsing on a 15-inch SXGA screen.


I don't believe it unless you had an external battery. Notebooks in those days were considered to be very, very good if they got 4-5 hours on a giant battery. Most got 2-3 hours on a regular-sized battery. IBM claimed 4 hours of runtime with an idle computer and dimmed screen on a single battery (see here), which is very consistent with battery runtimes of the era. Various user reviews on sites like Notebook Review say 3-4 hours with one battery and 6-ish with both batteries installed. So I call BS on that claim of 8-10 hours.

Quote:
Also, I know for a fact that 1.3GHz and 1.5GHz Pentium Ms were much faster than most P4s at gaming; in Need for Speed Underground and Flash-based games they destroyed a 2.66GHz P4 and a 1.6GHz P4.


It is very difficult to directly compare the Pentium Ms and Pentium 4s in gaming since you are talking about laptops. More than likely you are comparing how much RAM and what IGP/GPU each machine had, more than anything else. I don't doubt that the 1.6 P4 was slower than the 1.3 or 1.5 P-M, but a 2.66 P4B with equivalent RAM/GPU should be faster than a 1.5 P-M in gaming. To really test them you would need a rare standalone motherboard or socket adapter for a PGA479 CPU, so that you can run desktop RAM and a desktop GPU with the mobile CPU and compare directly to the P4. The ASUS CT479 socket adapter allowed that, and there were some tests that pitted the Dothan 533 against various 90 nm Pentium 4s and A64s. (Here is one.) The fastest Pentium M (780) did decently in 1024x768 gaming, on par with the A64s of similar clock speed, but finished midpack to bottom of the list in other programs.

Quote:
The Pentium M CPUs were definitely better than the K8s. I was amazed how good Pentium Ms were; with the IBM R40 I could not even hear it at 100% CPU load, it was such a low-power chip. The Pentium Ms are probably still as efficient as Intel Atoms, performance-per-watt-wise.


The Pentium Ms were pretty efficient but they weren't really better than the K8 as far as performance was concerned. Look at the link above where a Pentium M was benched against K8s. It would be impossible to do a direct power comparison on a current Atom vs. a Pentium M since they use wildly different platforms. If you just did a "plug a board that supports either into the wall and let 'er rip," I would strongly suspect the Atom would do better, considering it's a 32 nm SoC setup vs. a 90 nm CPU with an older two-chip chipset. Total performance of the Atom would be lower on single-threaded stuff since the Atom isn't too fast but a dual-core Atom should outperform a Pentium M using multithreaded code and it would use less power for sure since the platform TDP is a lot lower.

Quote:
Intel can release CPUs with more cores if they want, they just don't have to. It makes no sense for them to, as they can still mark up their 6-core CPUs in price. They make 8-core Xeons already, and made the 6-core 980X a few years ago. It's not hard for them, but because the FX-8350 isn't faster than the 3770K, 4770K or 2600K, Intel doesn't have a reason to add more cores yet, and we all know Intel likes to make easy money.


Intel probably could drop the 6-core SB-E to the $300-ish mark, but that's all they could do today for a 6-core chip. The only dies Intel currently makes which can yield a CPU with more than four cores are the 435 mm^2 8-core SB-E die and the 513 mm^2 10-core Westmere-EX die. Both are huge dies and are going to be expensive to produce, even as 6-core salvaged parts. The best Intel could do beyond that would be to introduce a dedicated 6-core die, which would be in the ~300-ish mm^2 range on 32 nm and obsolete even before it came out. As I said above, I am not sure Intel can yet make a 6-core-sized die on 22 nm due to process immaturity.

You made a remark about the 6-core Westmere chips. Intel never sold one for less than around $500, but it did have its own dedicated 248 mm^2 die, which Intel used extensively in the Xeon 5600 line as the top-core-count chip for the LGA1366 platform. I suppose the argument you would be better off making is that Intel feels no pressure to release the full 8-core SB-E as a desktop chip due to a lack of pressure from AMD, as AMD can't touch the 8-core E5s with an AM3+ chip, plus the chip would likely retail in the $1500+ range, out of reach of all but the most loaded enthusiasts.

AMD, on the other hand, is using an 8-core die 3/4 of that size, and yields are exponentially better with smaller die sizes. AMD also uses that 4-module FX die in every single non-APU part it sells, so there is a significant economy of scale. Intel only uses the 6-core-capable die in a small high-end desktop line and in midrange servers. I'd hazard a guess AMD sells considerably more 4-module dies than Intel sells SB-E dies, considering the E3s use the 4-core die and the E7s use the 10-core Westmere-EX die.
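
The "yields are exponentially better" point can be made concrete with the simple Poisson yield model Y = exp(-D x A). The defect density below is an assumed round number (real fab numbers aren't public); the die areas are the ones from the post, with the FX die taken as 3/4 of 435 mm^2:

    from math import exp

    D = 0.0025  # assumed defect density: 0.25 defects/cm^2 = 0.0025 per mm^2

    dies = {
        "8-core SB-E": 435.0,
        "10-core Westmere-EX": 513.0,
        "4-module FX": 435.0 * 0.75,  # "3/4 of that size" per the post
    }

    for name, area in dies.items():
        # Poisson model: yield falls exponentially with die area
        print("%s (%.0f mm^2): %.0f%% yield" % (name, area, 100 * exp(-D * area)))

At the assumed defect density that works out to roughly 44% for the FX die vs. 34% and 28% for the two big Intel dies, so the smaller die really does pay off disproportionately.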

Quote:
Last thing: I only mention power as that is how I judge whether a CPU is better than what came before. Most technology companies always go on about increasing performance per watt, but AMD has not moved forward since like 2009 in the desktop market, considering that at 32nm they should have improved a lot more.


Performance in a few benchmarks with BD/PD has not improved relative to power use compared to Stars (K10), but most benchmarks have, especially anything multithreaded. If you really want to look at power use, look at how much AMD cleaned up the idle power use of BD/PD. They introduced clock gating, and BD/PD are *much* better at idle than any previous chip ever was. My Bulldozer-based Opteron 6234 runs a little hotter than the Stars-based 6128 it replaced (because the 6234 uses the entire 115 W TDP due to Turbo CORE and is significantly faster than the 6128), but it is significantly cooler at idle. Most desktop CPUs idle the vast majority of the time, so you should be very happy with these more meaningful reductions in power use at idle.

Quote:
I get your power consumption point, but personally I like the best performance-per-watt components at the time, as it's more future-proof, or shall I say it won't look ancient in a few years' time, and I like to buy from companies that are innovating instead of releasing slapdash CPUs like Intel's P4 NetBurst chips.


I betcha AMD's CPUs will look much better in the future compared to the present Intel competition. The future is clearly more multithreaded, and BD/PD's forte is multithreading. Intel is hanging on with a bunch of fairly low-core-count CPUs, trying to flog single-threaded performance. Think of it this way: would you rather use a Phenom X4 9850BE or a Core 2 Duo E6850 today? Back in the day the E6850 was considered much faster and the Phenom X4 was considered to be crap as poorly-threaded performance was all that mattered...
June 25, 2013 8:36:40 PM

"instead of releasing slap dash cpus like intel p4 netburst cpus" Cough Haswell,Ivy, Sandy all of these CPU's measured in a 15% increase in CPU performance From Sandy-Haswell so i still find no reason for a person with a 2500K to upgrade on an Intel machine. Intel is no different and in fact worse since they have the money to improve quicker and are worth much more.

On a laptop a 35-watt TDP CPU is fine and stable, and on a desktop a 125-watt one is stable. It's funny how you have fallen for marketing and care more about performance per watt than performance, which is really nothing more than a gimmick for desktops. Again you avoid the question: why is power consumption so important to you?
June 25, 2013 8:40:04 PM

"would you rather use a Phenom X4 9850BE or a Core 2 Duo E6850 today? Back in the day the E6850 was considered much faster and the Phenom X4 was considered to be crap as poorly-threaded performance was all that mattered..."

He explains how AMD CPUs are so far behind when looking into the future, when your statement actually proves him wrong; and according to game developers, the FX-8350 is going to do the same thing to a 4-core HT CPU.
June 25, 2013 9:22:46 PM

jdwii said:
"would you rather use a Phenom X4 9850BE or a Core 2 Duo E6850 today? Back in the day the E6850 was considered much faster and the Phenom X4 was considered to be crap as poorly-threaded performance was all that mattered..."

He explains how AMD CPUs are so far behind when looking into the future, when your statement actually proves him wrong; and according to game developers, the FX-8350 is going to do the same thing to a 4-core HT CPU.


Based on things like Unreal Engine 4, CryENGINE 3, Frostbite, and several other cutting-edge game engines, it is already doing it. The fruit is forthcoming... we just don't have it yet.
June 26, 2013 12:00:37 AM

The past few pages have been battling with a troll. Never argue with trolls; they will bring you down to their level and beat you with experience. (Obviously adapted from a more simply stated quote.)

Back to the subject at hand: a full-fledged Steamroller CPU vs. a Steamroller APU. How significant would the performance difference be? The APU seems like it will be the only cat in town sooner rather than later, so it may be best to look at it from that perspective. Last time I saw gaming numbers, the APU+GPU wasn't far behind a traditional AMD CPU+GPU.

Please, for the love of everything: no compiler issues, older proc comparisons, and so on and so forth.
June 26, 2013 12:49:37 AM

mlscrow said:
Okay, so the ultimate conclusions of this thread can be:

A) hafijur is a complete waste of worldly resources.
B) The 220W TDP of the 5GHz FX9590 is spot-on, as it's simply an overclocked FX8350, which we all know is based on Piledriver, which we all know requires exponentially more power as you overclock it beyond 4GHz. 220W is actually surprisingly low for a 5GHz OC (compared to an FX8350 OC'd to 5GHz), but we can attribute that to process maturity and AMD lowballing.
C) High-end performance enthusiasts don't care about power consumption anyway.
D) The FX-8350 provides better performance per dollar than its Intel counterparts.
E) The FX-8350 provides worse performance per watt than its Intel counterparts.
F) The point of this thread is supposed to be Steamroller anyway, so it's dumb to battle the troll in this virtual stadium; let's get back on topic.
G) Steamroller should put AMD back in direct competition with Intel in the high-end.
H) AMD only shows a quad-core Steamroller available for 2014, which is very, very sad, and hopefully they'll realize their mistake sooner rather than later, because we all want an 8-core Steamroller FX for 2014, don't we?




B) The FX9590 isn't simply an overclocked FX8350. We don't have details, but either it is a chip with extraordinary thermal and electrical tolerances preselected from the main FX8350 line, or it includes some 2.0 revision like Richland. In any case, the FX9590 will consume less power than an OC'd FX8350.

E) True compared with the ordinary i7 line such as the 3770K or 4770K; compared with the extreme series such as the 3960X or 3970X, I am not so sure.

H) I don't know of any updated desktop roadmap for 2014. I only know the server roadmap and the desktop roadmap for 2013. If Steamroller is 2 threads per module I would wait for some 4-module FX, but if Steamroller is 4 threads per module, as speculated before, then a 2-module FX makes more sense for the general public.


June 26, 2013 3:21:25 AM

ohyouknow said:


Back to the subject at hand: a full-fledged Steamroller CPU vs. a Steamroller APU. How significant would the performance difference be? The APU seems like it will be the only cat in town sooner rather than later, so it may be best to look at it from that perspective. Last time I saw gaming numbers, the APU+GPU wasn't far behind a traditional AMD CPU+GPU.


It's truly difficult to guess how Kaveri will perform. There are several big changes that diverge from the most recent Richland, and it's on a new process on top of that. It will have higher CPU performance and higher GPU performance (512 GCN shaders), but will still be marred by memory bandwidth issues.

It will be interesting to see what the enthusiasts do between the choice of a 4-core Steamroller APU or an 8-core Piledriver for the desktop. SR doesn't come close to a 100% per-core gain, so 8 Piledriver cores will still win out over 4 SR cores simply on numbers. SR will be more efficient per watt, but anyone already on an 8350 won't care about that.

Some will just want the new SR cores and go discrete, using the 512 GCN shaders as a heatsink and OCing the CPU cores. Then it comes down to the new 28nm node and how well that will OC. Too many variables still. Is it the new FD-SOI or older PD-SOI? It has the potential to be the new overclock king.

As usual, AMD is stuck between a rock and a hard place. They spent a fortune in fines to GF to allow production at TSMC, and now they're getting screwed by Apple's wad of cash stealing the front of the TSMC fab line.
June 26, 2013 8:31:07 AM

hafijur said:
The Pentium M 765 and 780 are faster in single-threaded Cinebench (single-core CPU) than the N2800 with 4 threads in multithreaded Cinebench, the Pentium Ms are like 2x faster in SuperPi, and they score similarly on the 3DMark06 CPU test, in the 900s. Also, the link you gave me saw the Pentium M take 27W in a burn test and 14W at idle. My Acer M3 idles at like 10-14W as well with a new screen. http://www.xbitlabs.com/articles/cpu/display/pentiumm-7...

Also, in the link you gave me the Pentium M was better at gaming but slightly worse in other tasks than the P4s or Athlon 64s.

Yes, the Atom N2800 probably will take 13W less than the Pentium M, but the Pentium M is not only faster but like 9-year-old technology (for the one I am basing this on), and the Pentium M's faster single-threaded performance means it will be snappier.

Considering the 14W idle of a top-end Pentium M, it's no surprise my IBM R40 lasted 8-10 hours on a 15-inch SXGA screen. On NotebookCheck the top-end Pentium Ms are faster than the N2800 and only take 13W more, from like 9 years ago. You can click on the archived ones and you will see this for yourself. http://www.notebookcheck.net/Mobile-Processors-Benchmar...

Finally, power efficiency is important, as you get basically double the performance at the same power every 4 years, it seems. For example, my 5930G, bought in January 2009 (a mid-2008 model), scores 5200 in 3DMark06 and takes 70W while gaming. My new Acer M3, bought in December 2012 and received in January 2013 (a mid-2012 model), takes 65W and the CPU is 2.2x faster, with a 3DMark06 score just over 10000. The 3317U CPU is 2.2x faster than the P7350 C2D on most things like Cinebench and video encoding, and my P7350 setup, because of the 9600M GT, took 50W in a 100% CPU test vs. 28-30W for my 3317U.

Question to you is: why buy inferior technology? AMD has a lot of catching up to do. It's actually quite funny: Intel's NetBurst desktop CPUs from 2000-2005 were worse than their laptop CPUs in terms of power efficiency. Now AMD is in a similar position: their laptop CPUs are more advanced than their desktop CPUs in terms of power efficiency, i.e. performance per watt. So the moral of this story is that Intel has been ahead in the mobile market since basically 2003, by far.


If you had 10 hours of battery back then... why are people thrilled about getting 7 hours of battery life with an XL battery in current laptops? Why isn't Intel making a CPU with 20 hours of web-browsing battery life if they could make one that got 10 hours 9 years ago? If Intel is so great... then surely such an awesome and powerful company has a CPU that can run for a week of web browsing without charging the battery!! By your logic, 9 years ago they had one that ran 10 hours, and Moore's law says that technology gets twice as complex every 2 years... that means in the 9 years since, your "10 hour CPU" technology has doubled in effectiveness 4.5 times. By that logic, Intel should have a current mobile laptop CPU that runs over 200 hours of web browsing without charging the battery.

However, they don't...why is that?

Because your battery didn't really last 10 hours unless you were plugged into a wall for ~6 of them.

I invite you to show me a documented test showing your same configured laptop with that CPU lasting 10 hours web browsing not plugged in. If it was an IBM laptop...it should be pretty common to find battery life benchmarks on your favorite site, notebookcheck.

Benchmarks or it didn't happen. From this point forward, I would like to see you post a benchmark to support anything you say in this thread, because everything you've said so far has been wildly off base with no evidence to support it.
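
Spelling out that compounding arithmetic (a quick sketch; the 2-year doubling cadence is the assumption the argument itself grants):

    claimed_hours = 10
    years = 9
    doublings = years / 2.0            # 2-year doubling cadence, as granted
    projected = claimed_hours * 2 ** doublings
    print("%.0f hours" % projected)    # -> 226 hours, nothing like what ships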

June 26, 2013 9:30:35 AM

"Finally power efficiency is important as you get basically double the performance at same power it seems every 4 years. For example my 5930g I bought in january 2009 mid 2008 model scores 5200 in 3dmark06 and takes 70w while gaming. My new acer m3 bought in december 2012 got in january 2013 a mid 2012 model takes 65w and the cpu is 2.2x faster and the 3dmark06 score is just over 10000. The 3317u cpu on most things like cinebench video encoding is 2.2x faster then the p7350 c2d and my p7350 because of 9600m gt idles took 50w for cpu vs 28-30w for my 3317u for cpu 100% test."


Reading this, all I can say is: who cares? Are you a green energy guy or something? The most energy-efficient design is not automatically the best; if it were, AMD's Jaguar would be considered the best design in 20 years. Besides bragging rights I still see no reason to think the way you do, but it's your own belief and you are entitled to it; just don't be a troll going around thinking it's right.

"Question to you is why buy inferior technology. AMD have a lot of catching upto do. Its actually quite funny intel desktop cpus with netburst cpus from 2000-2005 were worse then there laptop cpus in terms of power efficiency. Now AMD are in a similar way there laptop cpus are more advanced then there desktop cpus in terms of power efficiency i.e. performance per watt. So the moral of this story is intel have been ahead basically in the mobile market since 2003 by far."

It wouldn't be that way if those laptop CPUs had to compete with desktop CPUs in performance (clocked higher); they would run HOT and require a LOT of power. Laptop CPUs are fun for looking at performance per watt, which you seem to have fallen for; honestly it's just a way to lower performance figures each generation, and at the end of the day a gimmick. And finally, AMD is improving performance per watt: they are making their CPUs and APUs faster while improving efficiency.
June 26, 2013 3:49:02 PM

Anyone else see this M$ restructuring article? Somehow I doubt this will be significant in any sense beyond changing the way they try to ram crap down our throats about how good Win8 is supposed to be...even though it isn't.

http://www.tomshardware.com/news/Restructure-Steve-Ball...

Some Kaveri speculation from BSN:

http://www.brightsideofnews.com/news/2013/6/1/amd-updat...

If you look at the product roadmap .pdf here, it shows that HD 8XXX series are due in Q3 this year...so likely before end of summer we'll see HD 8XXX series GPUs hit:

http://phx.corporate-ir.net/phoenix.zhtml?c=74093&p=iro...
June 26, 2013 5:08:56 PM

All right guys, it was fun to battle and all, but we really should get back on topic. How about that Steamroller?

I'll start. Steamroller is apparently only announced on FM2(+) currently, judging from the server roadmaps. The server die is used in the high-end desktop as well and 2014 server is going to be a tweaked Piledriver, not Steamroller. Thoughts?
June 26, 2013 5:21:20 PM

Yeah the current desktop roadmap shows no Steamroller through 2013, but AMD has been quiet about the 1H 2014 roadmap on desktop, while the others are starting to peek out on APUs and Servers.
June 26, 2013 5:32:47 PM

But are they going to do scrapping for the "high end" desktop parts or just binning? Maybe both?

I'd like to see some AM3+ love with a new chipset to be honest... The correct way should be DDR4 and PCIe 3 with the "high end" desktop from AMD, just like Intel leaked some time ago for HW-E.

Cheers!
June 26, 2013 5:38:45 PM

Yuka said:
But are they going to do scrapping for the "high end" desktop parts or just binning? Maybe both?

I'd like to see some AM3+ love with a new chipset to be honest... The correct way should be DDR4 and PCIe 3 with the "high end" desktop from AMD, just like Intel leaked some time ago for HW-E.

Cheers!


I would think binning would be enough; they don't seem to be having the same yield problems they had before.

Agreed on a new chipset. 1090X/FX
June 26, 2013 7:22:27 PM

Personally, if they are going to be doing a new chipset (1090X/FX) I would like to see AM4. It would be nice to see a Steamroller processor for AM3+ 990FX, but top-of-the-line Steamroller processors running AM4 1090FX. If they are going to come out with a new chipset, requiring a new motherboard anyway, I would like to see AM4, which would hopefully mean "high end" FX processors through Excavator. It would be a shame if AMD dropped its high-end FX line and only produced APUs.

I know I read somewhere a while back that AMD was "unifying" its line, which I took to mean producing only APUs and the FX line ending. I personally want to see the high-end FX line continue and keep nipping at Intel's heels until they are able to surpass Intel. A commitment like AM4 1090FX would in my mind show AMD is committed to continuing its FX line, hopefully through Excavator.
June 26, 2013 7:35:04 PM

cowboy44mag said:
Personally, if they are going to be doing a new chipset (1090X/FX) I would like to see AM4. It would be nice to see a Steamroller processor for AM3+ 990FX, but top-of-the-line Steamroller processors running AM4 1090FX. If they are going to come out with a new chipset, requiring a new motherboard anyway, I would like to see AM4, which would hopefully mean "high end" FX processors through Excavator. It would be a shame if AMD dropped its high-end FX line and only produced APUs.

I know I read somewhere a while back that AMD was "unifying" its line, which I took to mean producing only APUs and the FX line ending. I personally want to see the high-end FX line continue and keep nipping at Intel's heels until they are able to surpass Intel. A commitment like AM4 1090FX would in my mind show AMD is committed to continuing its FX line, hopefully through Excavator.

No, AM3 is a legend like 939 and must live on until the very death of a non-APU CPU :3. I believe AM3+ 990/1090FX should stay until the end of Steamroller. From there, I think AMD will have a fully matured APU with 8/9870ish integrated graphics and FX will die.
June 26, 2013 9:47:03 PM

The Q6660 Inside said:
This is completely irrelevant to the topic at hand.


I'd say he's just a regular spammer, haha.

Anyway, in regards to the SR-without-iGPU talk... Is it going to exist at all? If so, they should move it to a timeframe that allows them to have DDR4 (mainly) in their MoBos. Man, I wish they'd do just that for the "high end" SR parts.

APUs would benefit the most from it, but the move to GDDR5 will be enough for a good while. In all honesty, 8GB of RAM for Windows computers is still enough, and Win7 (not sure about 8) sits around the 2GB mark, 1.5GB when "optimized", so there's still a lot of memory pool left for games and all the rest of the stuff. I wonder if they'll use GDDR5 sticks or what, haha.

Cheers!
June 26, 2013 11:18:09 PM

hafijur said:
Question to you is: why buy inferior technology? AMD has a lot of catching up to do. It's actually quite funny: Intel's NetBurst desktop CPUs from 2000-2005 were worse than their laptop CPUs in terms of power efficiency. Now AMD is in a similar position: their laptop CPUs are more advanced than their desktop CPUs in terms of power efficiency, i.e. performance per watt. So the moral of this story is that Intel has been ahead in the mobile market since basically 2003, by far.

That is because Kabini notebooks came out before the new Kabini-based All-in-One ITX systems, which are markedly better than the old E-350s they replace across the board. So if a true ITX setup running low power and giving ample performance is what you are looking for, then you may want to look at those :D

Quote:
I'd say he's just a regular spammer, haha.

Anyway, in regards to the SR-without-iGPU talk... Is it going to exist at all? If so, they should move it to a timeframe that allows them to have DDR4 (mainly) in their MoBos. Man, I wish they'd do just that for the "high end" SR parts.

APUs would benefit the most from it, but the move to GDDR5 will be enough for a good while. In all honesty, 8GB of RAM for Windows computers is still enough, and Win7 (not sure about 8) sits around the 2GB mark, 1.5GB when "optimized", so there's still a lot of memory pool left for games and all the rest of the stuff. I wonder if they'll use GDDR5 sticks or what, haha.

Cheers!


At this point we cannot assume DDR4 or GDDR5 for Kaveri, but what we do know is that on both the x86 and IGP sides Kaveri will be more up to date. Current APUs run a hybrid Turks Radeon core, which we know is not good at tessellation nor at compute; Kaveri features a hybrid Pitcairn core along with a new x86 arch, and these two advances alone will see notable performance gains even if limited to DDR3 2400-2800 bandwidths. I contrast this to Intel's Iris Pro: the HD 5200 is a beast at low resolutions, where the potent i7 core is capable of driving its frame rates, but as soon as you touch details or bump resolutions to 16x9/10 or 19x10, performance is slower than an APU's despite 2x the bandwidth. I will say that yes, bandwidth is important, but if the rest of the architecture on the CPU+IGP side is flimsy, even that is a fallacy.

Soon I will be doing a demonstration of an i7-4770K vs. A10-5800K Battlefield 3 showdown at 1680x1050: same settings, same RAM, same multiplayer maps, just to show how playable the APU is at HD res and how much more fluid it is. This follows a person who bought a 4770 from me claiming that the HD 4600 is nothing like its reviews, and by that I mean not good at all, with complaints of massive stutters and lags.





June 27, 2013 5:01:10 AM

sarinaide said:

...
At this point we cannot assume DDR4 or GDDR5 for Kaveri, but what we do know is that on both the x86 and IGP sides Kaveri will be more up to date. Current APUs run a hybrid Turks Radeon core, which we know is not good at tessellation nor at compute; Kaveri features a hybrid Pitcairn core along with a new x86 arch, and these two advances alone will see notable performance gains even if limited to DDR3 2400-2800 bandwidths.
....

trinity - not turks. cayman-derivative, with vce taken from gcn gpus. the apu is called trinity because of this 3-in-1 design. edit: also because it matches some river's name. amd was happy cuz the codename fit perfectly.
if kaveri has a pitcairn igpu, that'd be a gcn 1.0, unlike gcn 1.1 in bonaire gpus and unlike kabini's igpu or the kaveri igpu (based on gcn 2.0 arch) that has been suggested by the rumors and leaks so far.
are you absolutely sure kaveri has a pitcairn igpu? what would be so hybrid about the igpu? doesn't seem like you're speculating...

edit2: a bit googling with the jaguar-successor's 'beema' codename directed me to 'bhima river'. i wonder if fudzilla got the name wrong. bhima is another indian river like kaveri and kabini.
http://www.fudzilla.com/home/item/30108-amds-2014-kabin...
if fudz are wrong, they might have used the wrong pun... :whistle:  :sol: 
June 27, 2013 8:04:21 AM

Nvidia Shield thoughts from semiaccurate:

http://semiaccurate.com/2013/06/27/an-encounter-with-nv...

AMD OCP "Roadrunner" systems for Servers drop cost of VDI Slices from $91 (Intel) to $38.

http://www.theregister.co.uk/2013/05/15/amd_roadrunner_...

The systems were part of facebook's "open compute challenge" and were designed in collaboration with large financial institutions to meet their needs.

Sounds like AMD's going after the big dogs in the server market with this...I think they regain a large portion of market share in the next 12 months.
June 27, 2013 8:09:28 AM

de5_Roy said:
sarinaide said:

...
At this point we cannot assume DDR4 or GDDR5 for Kaveri, but what we do know is that on both the x86 and IGP sides Kaveri will be more up to date. Current APUs run a hybrid Turks Radeon core, which we know is not good at tessellation nor at compute; Kaveri features a hybrid Pitcairn core along with a new x86 arch, and these two advances alone will see notable performance gains even if limited to DDR3 2400-2800 bandwidths.
....

trinity - not turks. cayman-derivative, with vce taken from gcn gpus. the apu is called trinity because of this 3-in-1 design. edit: also because it matches some river's name. amd was happy cuz the codename fit perfectly.
if kaveri has a pitcairn igpu, that'd be a gcn 1.0, unlike gcn 1.1 in bonaire gpus and unlike kabini's igpu or the kaveri igpu (based on gcn 2.0 arch) that has been suggested by the rumors and leaks so far.
are you absolutely sure kaveri has a pitcairn igpu? what would be so hybrid about the igpu? doesn't seem like you're speculating...

edit2: a bit googling with the jaguar-successor's 'beema' codename directed me to 'bhima river'. i wonder if fudzilla got the name wrong. bhima is another indian river like kaveri and kabini.
http://www.fudzilla.com/home/item/30108-amds-2014-kabin...
if fudz are wrong, they might have used the wrong pun... :whistle:  :sol: 


Speculative, based on an article out last week.

Caymans were the HD 6900 family, and I seriously doubt Trinity is anything remotely close to a Cayman core. I remember reading that it was Turks-based, as used in the HD 6500/6600 families.

June 27, 2013 8:19:00 AM

sarinaide said:

Speculative, based on an article out last week.

Caymans were the HD 6900 family, and I seriously doubt Trinity is anything remotely close to a Cayman core. I remember reading that it was Turks-based, as used in the HD 6500/6600 families.


http://www.tomshardware.com/reviews/a10-4600m-trinity-p...
June 27, 2013 8:43:15 AM

I am calling that a head-scratcher, notwithstanding the sheer impossibility of it.
June 27, 2013 9:05:12 AM

It makes some sense, in the way that if you're using fewer cores, you'd want the most potent cores to get the best performance out of the least hardware.
June 27, 2013 9:08:35 AM

http://www.tomshardware.com/news/Xbox-One-Chat-Headset-...

Just as M$ looked like they were going to be competitive in consoles again, with policy changes after gamers made an outcry, they shoot themselves in the foot again!

Why does this not at all surprise me...?

Steve Ballmer is clearly a moron.
June 27, 2013 9:33:41 AM

8350rocks said:
http://www.tomshardware.com/news/Xbox-One-Chat-Headset-...

Just as M$ looked like they were going to be competitive in consoles again, with policy changes after gamers made an outcry, they shoot themselves in the foot again!

Why does this not at all surprise me...?

Steve Ballmer is clearly a moron.


Well, M$ has been on a roll with stupid decisions since Windblows 8. That is why this is not surprising at all...
June 27, 2013 10:03:11 AM

The Q6660 Inside said:
8350rocks said:
http://www.tomshardware.com/news/Xbox-One-Chat-Headset-...

Just as M$ looked like they were going to be competitive in consoles again, with policy changes after gamers made an outcry, they shoot themselves in the foot again!

Why does this not at all surprise me...?

Steve Ballmer is clearly a moron.


Well, M$ has been on a roll with stupid decisions since Windblows 8. That is why this is not surprising at all...


I admit, you got a chuckle for "Windblows 8"....lol.
June 27, 2013 10:32:25 AM

8350rocks said:
http://www.tomshardware.com/news/Xbox-One-Chat-Headset-...

Just as M$ looked like they were going to be competitive in consoles again, with policy changes after gamers made an outcry, they shoot themselves in the foot again!

Why does this not at all surprise me...?

Steve Ballmer is clearly a moron.


not like the ps4's included headset is anything spectacular.


i'd say it's pretty chintzy at best.
June 27, 2013 10:35:01 AM

Yes, but M$ is promoting a headset as a "must have" for a system that comes without it, and is already $100 more than the more powerful competitor that includes a flimsy headset.

EDIT: Can you say blatant money grab?
June 27, 2013 11:02:00 AM

either one is just a marketing gimmick. you don't actually need one since the kinect that comes with the system has an integrated mic. the PS4 comes with a "headset" even though it's just a wire and an earpiece. Gamers will opt to buy a real "must-have" headset, but hey, they can brag about including one.

http://reviews.cnet.com/must-have-gadgets/

gotta buy one of each since they are must-haves?

But M$ is stupid with the proprietary mic jack that hooks to the controller.

selling point for me is bluetooth = PS4. Logitech H800 ftw.

XB one went wifi direct and no bluetooth, which means all your bt xb 360 devices are useless.
June 27, 2013 11:15:25 AM

noob2222 said:
either one is just a marketing gimmick. you don't actually need one since the kinect that comes with the system has an integrated mic. the PS4 comes with a "headset" even though it's just a wire and an earpiece. Gamers will opt to buy a real "must-have" headset, but hey, they can brag about including one.

http://reviews.cnet.com/must-have-gadgets/

gotta buy one of each since they are must-haves?

But M$ is stupid with the proprietary mic jack that hooks to the controller.

selling point for me is bluetooth = PS4. Logitech H800 ftw.

XB one went wifi direct and no bluetooth, which means all your bt xb 360 devices are useless.


Well, the issue is interference on the Kinect. M$ is basically all but saying...if you're going to talk while playing, you want a headset because Kinect sucks.

I personally don't care one way or the other...but I know what my kid is getting for Christmas, and it won't be from M$.
June 27, 2013 12:58:42 PM

ECS FM2+ mini-ITX board spotted @ Computex 2013

If the boards are already made, then that means that Kaveri is right around the corner. Additionally, this would be a great HTPC platform. It uses an A78 chipset, which correlates to an updated A75 chipset. Unless they got it backwards and it's an A87 chipset, though I don't think the board would be large enough to support full blown CF like the A85 chipset did (though an ATX version would easily be able to do so).

Either way, interesting read. The website is natively German, so google translate helps but the English is rough in a few places.
June 27, 2013 1:09:52 PM

8350rocks said:
ECS FM2+ mini-ITX board spotted @ Computex 2013

If the boards are already made, then that means that Kaveri is right around the corner. Additionally, this would be a great HTPC platform. It uses an A78 chipset, which correlates to an updated A75 chipset. Unless they got it backwards and it's an A87 chipset, though I don't think the board would be large enough to support full blown CF like the A85 chipset did (though an ATX version would easily be able to do so).

Either way, interesting read. The website is natively German, so google translate helps but the English is rough in a few places.

I would assume Kaveri will release in October or November. Still, I prefer a retooled, old FX-6X or Core 2 Quad/Extreme in an HTPC case :3.
June 27, 2013 2:21:29 PM

Kaveri is supposedly just waiting on GF 28nm to ramp. Both AMD/GF have been dead silent.

July 18th is AMD's next earnings announcement. Expect some news then.
June 27, 2013 4:09:30 PM

juanrga said:
mlscrow said:
Okay, so the ultimate conclusions of this thread can be:

A) hafijur is a complete waste of worldly resources.
B) The 220W TDP of the 5GHz FX9590 is spot-on, as it's simply an overclocked FX8350, which we all know is based on Piledriver, which we all know requires exponentially more power as you overclock it beyond 4GHz. 220W is actually surprisingly low for a 5GHz OC (compared to an FX8350 OC'd to 5GHz), but we can attribute that to process maturity and AMD lowballing.
C) High-end performance enthusiasts don't care about power consumption anyway.
D) The FX-8350 provides better performance per dollar than its Intel counterparts.
E) The FX-8350 provides worse performance per watt than its Intel counterparts.
F) The point of this thread is supposed to be Steamroller anyway, so it's dumb to battle the troll in this virtual stadium; let's get back on topic.
G) Steamroller should put AMD back in direct competition with Intel in the high-end.
H) AMD only shows a quad-core Steamroller available for 2014, which is very, very sad, and hopefully they'll realize their mistake sooner rather than later, because we all want an 8-core Steamroller FX for 2014, don't we?




B) The FX9590 isn't simply an overclocked FX8350. We don't have details, but either it is a chip with extraordinary thermal and electrical tolerances preselected from the main FX8350 line, or it includes some 2.0 revision like Richland. In any case, the FX9590 will consume less power than an OC'd FX8350.

E) True compared with the ordinary i7 line such as the 3770K or 4770K; compared with the extreme series such as the 3960X or 3970X, I am not so sure.

H) I don't know of any updated desktop roadmap for 2014. I only know the server roadmap and the desktop roadmap for 2013. If Steamroller is 2 threads per module I would wait for some 4-module FX, but if Steamroller is 4 threads per module, as speculated before, then a 2-module FX makes more sense for the general public.



B) Agreed, we don't have the details, but all Richland was, outside of the GPU, was higher clocks due to the maturing of the process and slight power-management tweaks. In the end, it's still the same arch, hence my reasoning that all the FX9590 will be is an overclocked FX8350. I would still prefer 28nm SR cores for a new FX, wouldn't you?

E) I did say "counterparts". To me, the counterpart to an AMD 4m/8t CPU is a 4c/8t CPU from Intel, thus the 2700K, 3770K, 4770K series. Once you go to the E series (with the 3820 as maybe the exception), AMD doesn't have any counterpart.

H) Historically speaking, the FX line was based on AMD's 1P server platform offerings. If there was an 8-core part made for the 1P server platform, we would eventually see an 8-core desktop FX version. So, with that, one can speculate that since AMD shows no 8-core 1P server part with SR cores, but only a 4-core part, the best we can expect to see in terms of SR-core parts available for desktops is 4-core, meaning 2-module, parts for 2014. And although I agree that a 4-core part does make sense for the general public, it's the enthusiasts, such as myself and most people here, who are sorely disappointed that there is no mention of an 8-core part. Fingers crossed for the future.
June 27, 2013 4:55:21 PM

MU_Engineer said:
All right guys, it was fun to battle and all, but we really should get back on topic. How about that Steamroller?

I'll start. Steamroller is apparently only announced on FM2(+) currently, judging from the server roadmaps. The server die is used in the high-end desktop as well and 2014 server is going to be a tweaked Piledriver, not Steamroller. Thoughts?


FAKE... as it seems...

http://www.abload.de/img/9gtasti18.jpg

It's the first time I've seen the logic part get an astounding shrink: the BLACK BOXES in that image show a ~50% shrink (edit) for the same functional structures, *yet the SRAM macros of the cache (L2 in this case) don't scale at all* (that 2MB L2 should be visibly smaller than in PD, and 4MB is not likely at all; it would mean a 50% overall shrink, and that is not 28nm for sure) LOL

And it's not due to superimposition: the "PINK" first L2 arrays, 16 in total, were already present in the first *module* image, in the same position they are in the above image of an SR module on top of a PD module.

https://securecdn.disqus.com/uploads/mediaembed/images/...

It would be nice, anyway, to think 4 ALUs + 4 AGUs, but what the hell for, if not for a 4-thread module? The old ALU+AGU scheme of the Athlon/K10 was deprecated when OoO (out-of-order) memory ops, STL (store-to-load) forwarding and "data speculation" were introduced (no real need to calculate every address)... that is also a head-scratcher. The double FlexFPU is nicer; I think AMD will end up doing it, but not with 4 MMX + 4 FMAC each as in that picture (2 MMX + 4 FMAC is more likely with those high-density libraries).

Also, I think AMD will end up introducing "2 fetch front-end" engines; decode is not the biggest concern if you have a small decoded-op cache after decode, and a stack-offloading pipe after fetch (like K10 did). Besides, the rumor from the developer guides, based on revelations analyzed at RWT, is that there will be the same 4 decode pipes (not 8 like that picture presents)... only perhaps arranged differently in the "vertical multithreading" scheme of things: 2 dedicated input buffers (1 per thread) and 2 dedicated output dispatch buffers can reasonably simulate 2 decode engines.

In the end we've got to applaud: whoever spent weeks doing that is very good at computer graphics art.
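
A quick geometry check backs up the "fake" verdict, assuming ideal linear scaling between the two nodes:

    linear = 28.0 / 32.0                             # ideal linear shrink, 32 nm -> 28 nm
    print("ideal area factor: %.2f" % (linear ** 2)) # -> 0.77

    # A ~50% area cut would beat even perfect scaling (0.77x), and SRAM
    # normally scales worse than logic -- exactly why the unchanged L2
    # arrays give the mock-up away.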





June 27, 2013 5:03:52 PM

cowboy44mag said:
Personally, if they are going to be doing a new chipset (1090X/FX) I would like to see AM4. It would be nice to see a Steamroller processor for AM3+ 990FX, but top-of-the-line Steamroller processors running AM4 1090FX. If they are going to come out with a new chipset, requiring a new motherboard anyway, I would like to see AM4, which would hopefully mean "high end" FX processors through Excavator. It would be a shame if AMD dropped its high-end FX line and only produced APUs.

I know I read somewhere a while back that AMD was "unifying" its line, which I took to mean producing only APUs and the FX line ending. I personally want to see the high-end FX line continue and keep nipping at Intel's heels until they are able to surpass Intel. A commitment like AM4 1090FX would in my mind show AMD is committed to continuing its FX line, hopefully through Excavator.


Yes, it usually means a new motherboard lol... yet they can use the same sockets... one doesn't invalidate the other.

And unifying is a good suspect... the Berlin APU for servers, if the I/O block is off the die, could function in the same scheme as an AM3+ socket; the central interconnect for HSA is HyperTransport, the base of IOMMU v2.5 (an HSA standard)... not crap PCIe...

Perhaps AM4 could have CPUs and APUs in the same socket, with no difference... and have PCIe 3 + HTX 4.0 combo slots directly from the socket dies (IT'S IN THE PATENTS)... much better than PCIe alone like Intel.

June 27, 2013 5:10:53 PM

Yuka said:
The Q6660 Inside said:
This is completely irrelevant to the topic at hand.


I'd say he's just a regular spammer, haha.

Anyway, in regards to the SR-without-iGPU talk... Is it going to exist at all? If so, they should move it to a timeframe that allows them to have DDR4 (mainly) in their MoBos. Man, I wish they'd do just that for the "high end" SR parts.

APUs would benefit the most from it, but the move to GDDR5 will be enough for a good while. In all honesty, 8GB of RAM for Windows computers is still enough, and Win7 (not sure about 8) sits around the 2GB mark, 1.5GB when "optimized", so there's still a lot of memory pool left for games and all the rest of the stuff. I wonder if they'll use GDDR5 sticks or what, haha.

Cheers!


Don't count on GDDR5 for now. If you do the right accounting, 3 channels of DDR3 2400 have double the bandwidth of 2 channels of DDR3 1866 (the officially supported configuration)... and AMD could sell those with AMP profiles. It would be grand if Kaveri could have 2x the performance of Trinity (which I doubt will happen).

It's quite possible Kaveri will have eSRAM on board like the Wii U or XB One... it's more scalable... if not, 3 channels of DDR3 at higher freq/bandwidth (with perhaps a small L3 block... 4MB maybe) (edit).
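
Spelling out those accounts (standard 64-bit DDR3 channels, peak theoretical numbers only):

    def bw_gb_s(mt_s, channels):
        # peak bandwidth = transfers/s * 8 bytes per 64-bit channel
        return mt_s * 8 * channels / 1000.0

    print("%.1f GB/s" % bw_gb_s(1866, 2))  # 29.9 GB/s: the officially supported config
    print("%.1f GB/s" % bw_gb_s(2400, 3))  # 57.6 GB/s: roughly double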

June 27, 2013 5:29:12 PM

Quote:
In the end, it's still the same arch, hence my reasoning that all the FX9590 will be is an overclocked FX8350. I would still prefer 28nm SR cores for a new FX, wouldn't you?


Not if it comes with Turbo Core 3.0... and probably a tweaked memory controller too. Centurion could be based on Richland, which got on average 5% better IPC than Trinity on the CPU side (the GPU gains more, especially in games and compute, where it can go >20%)... Then Centurion would be Vishera 2.0.

http://m.hexus.net/tech/news/cpu/56653-amd-set-release-...

(5% clock-for-clock would be half of Hasfail lol... not bad for such a minor revision... plus 20% more due to clock lol)
June 27, 2013 6:28:47 PM

hcl123 said:
Don't count on GDDR5 for now. If you do the right accounting, 3 channels of DDR3 2400 have double the bandwidth of 2 channels of DDR3 1866 (the officially supported configuration)... and AMD could sell those with AMP profiles. It would be grand if Kaveri could have 2x the performance of Trinity (which I doubt will happen).

It's quite possible Kaveri will have eSRAM on board like the Wii U or XB One... it's more scalable... if not, 3 channels of DDR3 at higher freq/bandwidth (with perhaps a small L3 block... 4MB maybe) (edit).


But going tri- or quad-channel DDR will make the expense of the boards skyrocket. AMD doesn't need expensive MoBos IMO, especially when they're not the perf kings of the hill.

As it stands now, dual-channel DDR3 is more than plenty for CPUs without an iGPU (yes, including Intel's -E series), so DDR4 or GDDR5 only makes sense for APUs, or makes sense to rush into that territory, which enthusiasts don't crave that much; so going tri- or quad-channel doesn't make any sense to me, to be honest, from a price/performance point of view. Nice to have, but not really needed IMO.

AMD APUs are still horribly constrained by heat draw, so including an even bigger IMC isn't a great idea at first glance.

Cheers!
June 27, 2013 7:12:40 PM

No, it could even make it cheaper... offer boards with only 3 DIMM slots instead of the usual 4, or have the same 4, in which case only one channel is expanded to 2 slots, which should be the same price.

DRAM upgrades could be in the DIMMs themselves, not in more DIMMs; AMD has good support for dual-bank (both sides) DIMM topologies... perhaps you could match 4GB with 8GB DIMMs without problems.

Besides, Intel did it before with Nehalem... the problem then was that DRAM was much more expensive per GB, and then you needed all 6 DIMM slots yet were usually restricted to 16GB total... now you can have, not very expensively, 24GB with only 3 DIMMs... 48GB if this supports 8Gbit DRAM devices in dual rank/"dual bank".

UPGRADE
DDR4 will be much more expensive, a price nightmare for expansion: you need a buffer chip (as if it were "registered" DRAM) in those DIMMs for expansion... otherwise you'll get only 1 DIMM per channel. DDR4 has a point-to-point topology that doesn't even support the "clamshell" topology of GDDR5.

Besides, the DDR4 due in 2014 is 2133 and 2400... perhaps 2666 by end of 2014... hardly any improvement in bandwidth compared to DDR3, only power (but latencies are comparatively worse), and very far from the top 4266 that is to be certified by JEDEC.
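
The capacity arithmetic checks out, assuming standard unbuffered dual-rank DIMMs built from sixteen x8 DRAM devices:

    def dimm_gb(device_gbit, devices=16):
        # dual rank x eight x8 devices per rank = 16 devices per DIMM
        return device_gbit * devices / 8.0

    print(dimm_gb(4) * 3)   # -> 24.0 GB: 8 GB DIMMs in three slots
    print(dimm_gb(8) * 3)   # -> 48.0 GB: 16 GB DIMMs in three slots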
June 27, 2013 7:23:32 PM

anestarks said:
I'm just posting this to sub to this.

Gah, what is with these flags and faces! Any explanation? On the topic of FM2+ APUs, I think they may be waiting for 28nm FD-SOI from GF to ramp up.
June 27, 2013 7:31:34 PM

Quote:
AMD APUs are still horribly constrained by heat draw, so including an even bigger IMC isn't a great idea at first glance.


You have the Richland A10-6700 at 65W for the whole chip, yet above 4.0GHz... the 6800 is less than 100W for sure (the K versions are always deliberately over-rated).

I think facts contradict your statement. Besides the IMC is not the worst offender, not even close, not even with 6 channels would be, the GPU is. period
June 27, 2013 7:58:12 PM

The Q6660 Inside said:

Gah, what is with these flags and faces! Any explanation? On the topic of FM2+ APUs, I think they may be waiting for 28nm FD-SOI from GF to ramp up.


It could be; GF's plans clearly look far forward in this respect, down to the 10nm node.
http://www.advancedsubstratenews.com/2013/04/gfs-two-fl...
(THAT IS AN EXCLUSIVE INTERVIEW WITH A TOP EXECUTIVE OF GF)

Besides, porting a design from "planar" bulk to "planar" FD-SOI is the least complicated of all the transitions so far... that has been one of STMicro's strong points on the marketing front. Planar "bulk" and "FD-SOI" share the same BEOL (back end of line) steps... and some of the middle too...

But I think it was already revealed (I don't remember where, but I read it) in an exclusive interview with AMD that Kaveri will debut on a "bulk" process. If it's low-power, low-clock mobile parts first, as Richland also launched mobile first, that is not a bad bet.

EDIT:
50% better at Vdd = 1 volt and 200% at 0.6V... Kaveri could gain much more than 60% perf/power, clearly ahead of Intel. As the cherry on top of the cake to "knock your socks off", "forward back-gate biasing" was able to get 40% more clock than without it... simple (it is a relatively simple process) is beautiful lol

http://www.advancedsubstratenews.com/2013/06/fully-depl...
(just read the head lines of those VLSI (kyoto) conference presentations)

And they will catch Intel at 14nm; they even plan to get there first (FinFET is quite a bit harder to push and do right)... a bold plan by any metric...

http://www.advancedsubstratenews.com/2013/06/which-will...


June 27, 2013 8:47:00 PM

hcl123 said:
Quote:
AMD APUs are still horribly constrained by heat draw, so including an even bigger IMC isn't a great idea at first glance.


You have the Richland A10-6700 at 65W for the whole chip, yet above 4.0GHz... the 6800 is less than 100W for sure (the K versions are always deliberately over-rated).

I think the facts contradict your statement. Besides, the IMC is not the worst offender, not even close; it wouldn't be even with 6 channels. The GPU is. Period.


Well, my own fact is sitting under my TV right now, underclocked for the very same reason I told you about, haha.

I agree with the quad-channel, 4-DIMM approach though. I wonder if AMD could do that to extend the use of DDR3. On the other hand, wasn't DDR4 supposed to be the second coming of Jesus "DDR" Christ? Why make it closer to ECC RAM instead of keeping it unbuffered? Are you sure they'll keep server- and desktop-grade RAM buffered? You've got me confused.

Cheers!
June 27, 2013 11:13:39 PM

For servers the "buffer" is no problem... they need it... the idea was to prop up the market at all levels, "client/DT" included, but it came too late for Elpida, which went belly up.

DDR4 will have plenty of success in servers... more cores need more bandwidth. On the client side, especially with the push of integrated GPGPUs, not even DDR4 will be enough; the "top GPU" performance levels of 4 years ago are clearly at hand, so these parts will go for HBM (Wide I/O, Memory Cube) instead (that, or very large on-die caches like eSRAM, CW).
June 28, 2013 12:11:56 AM

Yuka said:
hcl123 said:
Don't count on GDDR5 for now. If you do the right accounting, 3 channels of DDR3 2400 have double the bandwidth of 2 channels of DDR3 1866 (the officially supported configuration)... and AMD could sell those with AMP profiles. It would be grand if Kaveri could have 2x the performance of Trinity (which I doubt will happen).

It's quite possible Kaveri will have eSRAM on board like the Wii U or XB One... it's more scalable... if not, 3 channels of DDR3 at higher freq/bandwidth (with perhaps a small L3 block... 4MB maybe) (edit).


But going tri- or quad-channel DDR will make the expense of the boards skyrocket. AMD doesn't need expensive MoBos IMO, especially when they're not the perf kings of the hill.

As it stands now, dual-channel DDR3 is more than plenty for CPUs without an iGPU (yes, including Intel's -E series), so DDR4 or GDDR5 only makes sense for APUs, or makes sense to rush into that territory, which enthusiasts don't crave that much; so going tri- or quad-channel doesn't make any sense to me, to be honest, from a price/performance point of view. Nice to have, but not really needed IMO.

AMD APUs are still horribly constrained by heat draw, so including an even bigger IMC isn't a great idea at first glance.

Cheers!

i keep finding information saying sram is much more expensive than dram (and gddr5), and wiiu has edram (the one on the package). won't it be better to add just 4-6 MB of L3 cache (afaik, made of sram) instead of adding 32MB of esram? 32MB of sram seems like 6-8 times the cost, among other issues.

apus started as entry level consumer devices which also migrated to servers. they're still consumer devices in essence. tri/quad channel seems like a move suitable for high end enthusiast level hardware that will increase motherboard costs and may not help with mini itx or micro atx form factors. it's bad enough that there is so little demand for amd mini itx motherboards...

hcl123 said:
No, it could even make it cheaper... offer boards with only 3 DIMM slots instead of the usual 4, or have the same 4, in which case only one channel is expanded to 2 slots, which should be the same price.

DRAM upgrades could be in the DIMMs themselves, not in more DIMMs; AMD has good support for dual-bank (both sides) DIMM topologies... perhaps you could match 4GB with 8GB DIMMs without problems.

Besides, Intel did it before with Nehalem... the problem then was that DRAM was much more expensive per GB, and then you needed all 6 DIMM slots yet were usually restricted to 16GB total... now you can have, not very expensively, 24GB with only 3 DIMMs... 48GB if this supports 8Gbit DRAM devices in dual rank/"dual bank".

UPGRADE
DDR4 will be much more expensive, a price nightmare for expansion: you need a buffer chip (as if it were "registered" DRAM) in those DIMMs for expansion... otherwise you'll get only 1 DIMM per channel. DDR4 has a point-to-point topology that doesn't even support the "clamshell" topology of GDDR5.

Besides, the DDR4 due in 2014 is 2133 and 2400... perhaps 2666 by end of 2014... hardly any improvement in bandwidth compared to DDR3, only power (but latencies are comparatively worse), and very far from the top 4266 that is to be certified by JEDEC.

did you count the slowly rising cost of desktop ddr3 ram? for budget/entry level, higher cost could become an issue. in addition to that, all the quad/tri channel memory kits i've seen cost a lot more than dual channel kits. i assume it's because of the validation process.

edit:
Kabini laptops available in US
http://www.fudzilla.com/home/item/31811-kabini-laptops-...
AMD leaks model numbers of future Kabini APUs
http://www.cpu-world.com/news_2013/2013062701_AMD_leaks...
Microsoft Xbox One Cracked Opened: First Photos of Hardware Inside Xbox One Emerge.
http://www.xbitlabs.com/news/multimedia/display/2013062...