AMD CPU speculation... and expert conjecture - Page 40
Tags: AMD, CPUs
gamerk316 said:
I'm not convinced AMD is going to make a lot of profit off the consoles. Especially since WiiU sales are already falling, and I suspect the PS4/Xbox Next sales aren't going to do that well either.
Then again, I'm very pessimistic on consoles in general at this point. I'm convinced Smartphones/Tablets have more or less made them obsolete.
tbh, i've had a suspicion like that as well. when i saw ps vita specs and prices, i said it'd do badly. people jumped on me singing sony and console praise. smartphones and tablets have successfully disrupted the traditional pc and console ecosystems in many ways. but the current consoles are aiming for the specific areas where smartphones and tablets fall short. for example: party games on a large high-def screen (1080p tv, 3d - gimmicky as it is) @ 60 fps, 4k playback, hardcore fps, multiplayer. at least the ps4 has stronger media consumption capability than tablets. sony can even couple the psv/ps2 with the ps4 like the wii u does with its tablet-sticks. devs seem to like the ps4 as well... well, more than win 8 and the wii u, i think. without jobs, ios will have a harder and harder time reinventing itself. the ps4 can easily do well in japan. the u.s. might be a tougher sale, but keep expectations low and you won't be disappointed.
still, amd can make less money if the deals themselves aren't made in amd's favor. i think this has happened with ati before. then i'll blame the people who made those deals... wait, amd already fired them.
-
Reply to de5_Roy
gamerk316 said:
I'm not convinced AMD is going to make a lot of profit off the consoles. Especially since WiiU sales are already falling, and I suspect the PS4/Xbox Next sales aren't going to do that well either.
Then again, I'm very pessimistic on consoles in general at this point. I'm convinced Smartphones/Tablets have more or less made them obsolete.
It will come down to marketing and what games are available. With a 5 year upgrade cycle the new consoles have a decent chance of creating excitement for that sector again. Of course Microsoft or Sony could kill that with draconian DRM and always online requirements.
-
Reply to Cazalan
viridiancrystal
March 12, 2013 1:08:56 PM
mayankleoboy1 said:
blackkstar said:
Why drive down a crowded street when you can drive down one that's barely used?

More like: would you rather go in a sports car on a crowded highway, or walk on an empty street? When the highway is crowded, you are still pretty fast. And when it's open, there is simply no comparison.
The devs would have to check this hypothesis in every situation: whether the CPU off-loading is worth it, or whether it would still be faster to just let the GPU do everything. There has to be a clear benefit to doing the parallel work on the CPU rather than the GPU.
I am all for using multiple cores for AI, memory management, stacks, syncing, engine housekeeping and the hundreds of other serial/barely-parallel tasks that go on continuously. But using the CPU for textures/graphics work is foolish IMO. This approach sounds a lot like ticking a checkbox for marketing, just to show "PC is our main platform".
You are actually not making any sense right now. The GPU could, in theory, compute the problem faster, but if it is already working at full capacity rendering textures/shadows, then making the CPU do the extra work instead is more efficient.
Imagine you are cooking omelettes with two different pans. One pan can cook an omelette in 5 minutes; the other takes 20 minutes. If the first pan is constantly cooking (at 100% workload), then waiting to cook all the omelettes in that one pan is not faster than also using the other pan. By using both pans, you will finish 5 omelettes in 80% of the time it would take the first pan alone.
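The analogy's arithmetic can be checked with a tiny sketch (the numbers are the analogy's own; the scheduler is a hypothetical greedy one that hands each omelette to whichever pan will finish it soonest):

```python
# Greedy scheduler: each omelette goes to the pan that will finish it soonest.
def finish_time(jobs, pan_minutes):
    free_at = [0.0] * len(pan_minutes)  # when each pan next becomes free
    for _ in range(jobs):
        # choose the pan that would complete this omelette earliest
        i = min(range(len(pan_minutes)), key=lambda p: free_at[p] + pan_minutes[p])
        free_at[i] += pan_minutes[i]
    return max(free_at)

fast_only = finish_time(5, [5])      # 25 minutes on the fast pan alone
both_pans = finish_time(5, [5, 20])  # 20 minutes: fast pan cooks 4, slow pan cooks 1
print(both_pans / fast_only)         # 0.8 -> exactly the 80% in the analogy
```

The slow pan only "pays off" because the fast pan is saturated, which is the whole point being made about the GPU.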
On topic: Does anyone know when we may see some Richland benchmarks?
-
Reply to viridiancrystal
gamerk316 said:
Quote:
It isn't that hard of a concept. If you have a GPU at 100% load and a CPU at 25% load, what do you want to shift your load to?

What CPU and what GPU? What if I purchase the game two years from now, with my i7-2600k and NVIDIA 890 GTX? Oh, guess what? The CPU is bottlenecked while my GPU is at 50%. Whoops, what did we actually solve here?
See the problem? You aren't actually "solving" anything; you're just moving the bottleneck around. The code in question is typically run on the GPU because, guess what? The GPU executes it FASTER than the CPU does.
And that's the problem I have: in the absence of any bottlenecks, the code is sub-optimal, and will run slower than if it were coded the "traditional" way.
So in other words you don't like new ways of writing code. Fight the future, make any excuse possible to avoid making the cpu work harder (and essentially making dual-core cpus obsolete); instead let's rely on going sli/crossfire so the gpu can handle the extra workload ... oh, that's right, you don't like those either ...
Let's look at your theory in another way: say you get game X in the future with a geforce 890. At medium settings you're at 100% gpu usage, pushing 120 fps, with the cpu using 2 cores at 75%. Let's crank it to ultra and make the gpu do the calculating: the gpu stays at 100%, fps drops to 30, and the cpu is still idling 6 cores (4 being HT) with 2 working ...
So now, to get the game to actually be playable, we need to run sli, since the cpu isn't overworked and the i3 is still usable.
"We can't alienate our dual-core cpus." <--- This way of thinking needs to die in order for programming to move forward. Yes, I know that means your product will not sell to those with i3 cpus; after all, it's all about making money and not looking to the future.
-
Reply to noob2222
mayankleoboy1
March 12, 2013 9:40:17 PM
I have a redesigned, highly optimized Pentium 4 processor, with power gating on each register. Plus, each part of the chip has a separate voltage plane, so each part of the chip can boost its speed according to the load. It can boost 20% over the existing P4s for longer periods of time.
It performs in best-of-class category.
Would you care to buy?
/trolling.
-
Reply to mayankleoboy1
noob2222 said:
So in other words you don't like new ways of writing code.

No, I'm against ways of coding that artificially slow down processing. Because that's exactly what the Crysis devs did.
The rendering code in question that was offloaded to the CPU will execute faster on a GPU. GPUs are gaining performance faster than CPUs. So how does offloading that code to the CPU make ANY sense whatsoever?
Quote:
Fight the future, make any excuse possible to avoid making the cpu work harder ... instead let's rely on going sli/crossfire so the gpu can handle the extra workload ... oh, that's right, you don't like those either ...

Because of the latency problems, which sites are now starting to reveal. How many threads over the years have been along the lines of "I'm getting 80 FPS on my SLI/CF setup, so why does the game feel choppy?" As long as SLI/CF is implemented by copying one card's VRAM to another, thus introducing unacceptable latency into the processing, I don't view it as a "good" implementation.
Secondly, I'm all for using the power CPUs have, provided it DOESN'T SLOW DOWN THE APPLICATION, THE OPERATING SYSTEM, OR OTHER APPLICATIONS THAT MAY OR MAY NOT BE RUNNING.
Quote:
Let's look at your theory in another way: say you get game X in the future with a geforce 890. At medium settings you're at 100% gpu usage, pushing 120 fps, with the cpu using 2 cores at 75%. Let's crank it to ultra and make the gpu do the calculating: the gpu stays at 100%, fps drops to 30, and the cpu is still idling 6 cores (4 being HT) with 2 working ...

Except the GPU wouldn't be that loaded to begin with, especially since my 570 isn't hitting 100% load at medium (V-sync enabled). So those performance numbers you pulled out of your rear aren't even remotely valid.
Secondly, follow this REALLY simple logic:
1: Year-over-year performance increases in GPUs are significantly greater than year-over-year performance increases in CPUs.
2: GPU's excel when executing parallel code.
3: Rendering code is very parallel.
Therefore, which component should handle the processing for rendering?
So look at it this way instead: two years from now, CPUs have increased in power by about 20-25% (not unreasonable at current trends). Meanwhile, GPU performance has doubled (not unreasonable either; look at the performance increases of the past two GPU generations). So now when you look at that render code that was offloaded to the CPU, guess what? You have a CPU bottleneck for all non-8-core processors (and even those are taxed to capacity); meanwhile, the GPU is running at ~50% load, not rendering anything because it's waiting on the CPU to finish its share of the rendering.
Congrats. You just slowed down your application for no reason whatsoever for 90% of all CPUs on the market, with no performance increase for the other 10%.
So, if I understand you right, coding in a way that REDUCES FPS is a GOOD thing.
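That projection can be put into a toy model (arbitrary units; the ~25% CPU and 2x GPU growth figures are the assumptions stated above, not measurements). Frame time is set by whichever side finishes last:

```python
# Toy model: with a fixed CPU/GPU work split, frame time is set by the
# slower side, and the other side idles for the difference.
def bottleneck(cpu_work, gpu_work, cpu_speed, gpu_speed):
    cpu_t = cpu_work / cpu_speed
    gpu_t = gpu_work / gpu_speed
    frame = max(cpu_t, gpu_t)
    side = "CPU" if cpu_t >= gpu_t else "GPU"
    return side, round(min(cpu_t, gpu_t) / frame, 3)  # busy side, other side's utilization

# Launch day: the chosen work split saturates both sides.
print(bottleneck(1.0, 1.0, 1.0, 1.0))   # ('CPU', 1.0) -> balanced, nothing idles

# Two years on: CPU ~1.25x faster, GPU ~2x faster, same split.
print(bottleneck(1.0, 1.0, 1.25, 2.0))  # ('CPU', 0.625) -> CPU-bound, GPU well under load
```

Under these assumptions the GPU ends up at roughly 60% utilization, which is the migration of the bottleneck being argued here.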
Quote:
"We can't alienate our dual-core cpus." <--- This way of thinking needs to die in order for programming to move forward. Yes, I know that means your product will not sell to those with i3 cpus; after all, it's all about making money and not looking to the future.

Great. How about this then: we remove all GPUs and move back to executing all code on the CPU, in order to force CPUs to move to 64 cores? There's a reason the entire rendering stack was moved to GPUs in the first place: because at rendering, GPUs are faster than CPUs.
Now, if some dev can make a game engine that is more parallel in a way that makes programming sense, I'm all for it. But moving code back to the CPU from the GPU, for tasks that execute faster on the GPU in the first place, and causing a CPU bottleneck instead of a GPU bottleneck, is the WRONG approach.
-
Reply to gamerk316
Quote:
No, I'm against ways of coding that artificially slow down processing. Because that's exactly what the Crysis devs did. The rendering code in question that was offloaded to the CPU will execute faster on a GPU. GPUs are gaining performance faster than CPUs. So how does offloading that code to the CPU make ANY sense whatsoever?
Gamer, you're better than this. Their implementation resulted in their program running faster and utilizing system resources more efficiently than it otherwise would have. Nobody can deny this, as it's rather evident from the various tests and performance profiles that have since been done. For a long time it's been commented that multi-core CPUs were underutilized, with most of their capability going to waste. This is most evident when an i3 matches an i5/i7 at the same clock speed even though it only has 50% of the CPU resources. At the same time, GPUs are being pushed harder and harder, mostly because their workload is highly conducive to parallel operations. Crysis 3 is no exception: lots and lots of benchmarks have demonstrated that it scales perfectly with additional GPU resources.
Any sane, rational individual can take a look at this scenario and see that the balance between CPU and GPU usage is highly lopsided towards the GPU. That's what happened in your i3 + 570 scenario (please tell me you didn't actually build this), where you're taking a budget CPU and mating it with a mainstream performance GPU, or in some people's cases even a high-end GPU. The C3 devs just balanced it out by pushing more onto the CPU to free the GPU up to do more work. In effect they turned the CPU into a co-processor for the GPU, which while hilarious is somewhat fitting considering the context.
So while you may take a purist position, and that's your right, you cannot in good faith claim it is an implementation that results in less performance. Benchmarks and performance profiling have already empirically demonstrated otherwise.
-
Reply to palladin9479
mayankleoboy1
March 13, 2013 9:25:25 AM
gamerk316 said:
No, I'm against ways of coding that artificially slow down processing. Because that's exactly what the Crysis devs did.

I think the Crytek devs had to resort to using the CPU for rendering because the GPU in the consoles was getting maxed out. Contrary to what the Crytek devs say, I don't think the PC was really their main platform (and who can blame them, when console sales are bigger?)
Anyone remember the Crytek devs saying that "after Crysis 3, there won't be even 1% of spare computation power left in the consoles"?
Regarding doing rendering on 6/8-core processors: PLEASE read up on the llvmpipe drivers (and how they are good for 2D, but awfully inadequate for even semi-serious 3D work). There is a reason the industry went to 1000 tiny 'cores' rather than 6 big cores.
Quote:
palladin9479 said: So while you may take a purist position, and that's your right, you cannot in good faith claim it is an implementation that results in less performance. Benchmarks and performance profiling have already empirically demonstrated otherwise.
What do benchmarks show? That Crysis 3 gets performance gains from using more cores and more clock. But they do not show that if the same work were done on the GPU, the FPS would be higher.
Plus, in the next gen of GPUs, expect a conservative ~30-50% perf increase. When was the last time you saw a 30% improvement in CPU processing? Reason: it's sort of trivial to add moar coares to the GPU, and the workload scales effortlessly.
-
Reply to mayankleoboy1
Quote:
What do benchmarks show? That Crysis 3 gets performance gains from using more cores and more clock. But they do not show that if the same work were done on the GPU, the FPS would be higher.
Look at previous sources posted in the thread. On anything with four or more cores the game scales with GPU performance, this is definitely a "GPU limited" game. It really does seem to use the CPU as a "coprocessor".
http://www.gamegpu.ru/action-/-fps-/-tps/crysis-3-test-...
http://www.overclock.net/t/1362591/gamegpu-crysis-3-fin...
http://www.techspot.com/review/642-crysis-3-performance...
-
Reply to palladin9479
OK, after doing some digging around, I can see what they did and what makes this different from previous games.
Games tend to have one primary render thread along with a handful of other threads for various other tasks that need to be done. Most games implement those secondary threads in a very lock-step, serialized fashion: each one needs data from the previous one to do its job. So even though you've created other threads, rarely will they be able to run at the same time. This is where the "games only ever use, and will only ever use, two threads" idea comes from.
What C3 did was implement those additional threads in a non-serialized manner. There is still one primary render thread that uses as much CPU as it possibly can, and it often needs data from those other threads to do its job. It doesn't need all the data all the time, though, and the exact scene you're looking at dramatically determines how much extra CPU power is required. The more grass, vegetation, environmental effects and physics effects present, the more additional CPU power you're going to need to keep that primary thread fed, which in turn keeps your GPU fed.
That explains why dual-core CPUs fall so hard here: they simply do not have the raw processing resources required to keep the GPU fed with data. Once you have four cores, it comes down to the exact scene and how much additional work needs to be done; the more work, the more raw CPU resources are needed (vs single-threaded performance). It also explains the disparity between benchmarks: different scenes, resolutions, detail settings and camera angles result in different amounts of work needing to be done.
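The contrast described above can be sketched in a few lines (a hypothetical toy, not actual engine code): the same four scene-prep jobs run first as a serial chain, then as independent jobs on a thread pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def prep(name, ms):
    """Stand-in for per-frame work (grass, vegetation, physics, ...)."""
    time.sleep(ms / 1000.0)
    return name

jobs = [("grass", 30), ("vegetation", 30), ("physics", 30), ("particles", 30)]

# Lock-step serialized style: each step waits on the previous one.
t0 = time.perf_counter()
for name, ms in jobs:
    prep(name, ms)
serial_s = time.perf_counter() - t0

# Non-serialized style: independent jobs run concurrently and the
# render thread picks up results as they become ready.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    done = list(pool.map(lambda j: prep(*j), jobs))
parallel_s = time.perf_counter() - t0

print(f"serial {serial_s*1000:.0f} ms, parallel {parallel_s*1000:.0f} ms")
```

(Because the stand-in work is sleep(), Python's GIL doesn't serialize it here; real CPU-bound work would need processes or real engine threads to overlap like this.)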
-
Reply to palladin9479
-
Reply to jdwii
palladin9479 said:
Games tend to have one primary render thread along with a handful of other threads for various other tasks. Most games implement those secondary threads in a very lock-step, serialized fashion, each one needing data from the previous one to do its job. So even though you've created other threads, rarely will they be able to run at the same time. This is where the "games only ever use two threads" idea comes from.

More or less correct. The render thread is typically done VERY early in processing, so the data can be passed to the GPU. While the GPU is then busy creating the current frame, the audio, physics, AI, and UI engines do their work, which will be reflected in the next in-game frame. Now, since sound, AI, physics, and UI typically aren't high-workload threads, this results in one or two threads doing the vast majority (>90%) of the work.
Quote:
What C3 did was implement those additional threads in a non-serialized manner. There is still one primary render thread that uses as much CPU as it possibly can, and it often needs data from those other threads to do its job. The more grass, vegetation, environmental effects and physics effects present, the more additional CPU power you're going to need to keep that primary thread fed, which in turn keeps your GPU fed.

There are downsides though. You get into issues with folding the other threads into the main render thread. Specifically: do the worker threads send the main thread data (in which case the main thread risks stalling if it executes too fast), or does the main thread request data from the workers (same problem, in reverse)? So you need to create a very robust thread-management system to make a scheme like this work well. Keeping the main thread fed in this scheme is a LOT harder to accomplish. [Basically, this is very close to coding for the PS3. Keeping the 6 SPEs fed with data is probably the hardest challenge of coding for the PS3.]
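The feed/stall problem described here is the classic producer/consumer trade-off. A minimal sketch (purely illustrative, nothing to do with the engine's actual mechanism) using a bounded queue shows both failure modes: the worker blocks when the buffer is full, and the render loop blocks when it is empty:

```python
import queue
import threading

frame_data = queue.Queue(maxsize=2)  # small buffer bounds latency but risks stalls
rendered = []

def worker():
    for frame in range(5):
        # put() blocks while the buffer is full -> the worker stalls
        frame_data.put(f"scene data for frame {frame}")
    frame_data.put(None)  # sentinel: no more frames coming

def render_loop():
    while True:
        item = frame_data.get()  # get() blocks while empty -> the renderer stalls
        if item is None:
            break
        rendered.append(item)

t = threading.Thread(target=worker)
t.start()
render_loop()
t.join()
print(len(rendered))  # 5: every frame arrives, throttled through the 2-slot buffer
```

Sizing that buffer is exactly the robustness problem being described: too small and one side stalls, too large and you add latency.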
And again, I dislike moving rendering code off the GPU. That's its own separate debate.
There are other things that should be threaded but typically aren't, due to simplistic implementations. Take Sim City: the game uses really simple pathfinding for traffic, specifically "shortest distance", with a small weight towards highways when applicable. This results in some hilarious traffic patterns, especially when you put two highways side by side: one is gridlocked, and the other never used. "Shortest time" pathfinding would solve this issue, but it's a LOT more computationally expensive, since each path would need to be analyzed for EVERY traffic element. While each is simple to compute, the sheer workload will affect FPS due to the overhead. Something like this is a perfect threading opportunity, as each car and each possible path is more or less independent of the rest. But in today's implementations? Just a simple "shortest distance" equation that takes up next to no processing power.
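The Sim City point is easy to demonstrate on a toy road graph (made-up weights). The same Dijkstra search gives opposite answers depending on whether edges are weighted by distance or by current travel time:

```python
import heapq

def cheapest(graph, start, goal, cost_of):
    """Dijkstra: cheapest path cost from start to goal under a chosen edge cost."""
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, edge in graph.get(node, []):
            c = cost + cost_of(edge)
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(heap, (c, nxt))
    return float("inf")

# Two parallel highways from A to B: one shorter but gridlocked, one slightly
# longer but empty.
roads = {"A": [("B", {"km": 10, "minutes": 45}),   # the jam
               ("B", {"km": 12, "minutes": 15})]}  # the empty road

print(cheapest(roads, "A", "B", lambda e: e["km"]))       # 10.0 -> every car picks the jam
print(cheapest(roads, "A", "B", lambda e: e["minutes"]))  # 15.0 -> traffic spreads out
```

Since each car's query is independent of every other car's, this is exactly the kind of per-agent work that threads across cores.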
It's things like this that annoy me, because they actually affect gameplay. That's the type of stuff I want fixed going forward.
-
Reply to gamerk316
gamerk316 said:
[...] So you need to create a very robust thread-management system to make a scheme like this work well. Keeping the main thread fed in this scheme is a LOT harder to accomplish. [...] It's things like this that annoy me, because they actually affect gameplay. That's the type of stuff I want fixed going forward.
Yet we have indications that... it's working exactly as intended. They somehow "solved" all the problems you indicated, and the result is a high-performance game that actually takes advantage of modern CPU capabilities. On several occasions I've said that in order for programming to move forward, people need to rethink and redefine the problem from the beginning, not try to take an already-serialized methodology and make it parallel. Expect more of this over the next few years. The era of "you just need a dual core" is now ending.
-
Reply to palladin9479
palladin9479 said:
Yet we have indications that... it's working exactly as intended. They somehow "solved" all the problems you indicated, and the result is a high-performance game that actually takes advantage of modern CPU capabilities. [...] The era of "you just need a dual core" is now ending.

Haha, you have just earned a quote in my sig.
Cheers!
-
Reply to Yuka
sarinaide said:
Got a few Richland notebooks and ultrabooks in. Think these are going to be very good offerings.

My next project that I've been trying to convince myself to do is a mini-ITX set-top gaming box for my living room. There won't be any room for a dGPU, and the power budget is 180 W maximum, so it's pretty obvious I'd use an APU with DDR3-2133 memory. The purpose of this box will be to play older games that I want to enjoy in the big room.
So knowing this, would richland be worth waiting for?
-
Reply to palladin9479
palladin9479 said:
sarinaide said:
Got a few Richland notebooks and ultrabooks in. Think these are going to be very good offerings.

My next project that I've been trying to convince myself to do is a mini-ITX set-top gaming box for my living room. There won't be any room for a dGPU, and the power budget is 180 W maximum, so it's pretty obvious I'd use an APU with DDR3-2133 memory. The purpose of this box will be to play older games that I want to enjoy in the big room.
So knowing this, would richland be worth waiting for?
In my own experience with the A8-3850, waiting is always a good idea when the next product is in sight. Unless current stock is cheap enough to sway your decision, wait a little bit. Also, the A8 runs VERY hot, so keep in mind that with DDR3-2133 memory it'll run even hotter. I had to underclock mine a little to make it stable inside the case I chose (TT SD200), and that's even using the original Phenom II cooler, which is very good. I have it with DDR3-1600 and had to lower the multiplier by 2 or 3. It's running fine now, but still with scary temps of over 60°C at full load.
Point is, if Richland excels in power management over Trinity as advertised, then yes, DO wait for it.
Cheers!
EDIT: I'm thinking of getting this baby here: the CM Gemin II M4 for the A8. I had the original Gemin II for the Athlon64 X2 and the Phenom II before, and it worked amazingly well for being a low-profile non-tower HSF.
EDIT 2: Looking around a little more (hope the table looks good, haha):
Manufacturer    Model               Fan Speed   125 W Thermal Test (°C)   Noise Level (dBA)
Zalman          CNPS8900 Extreme    high        17.2                      48.2
Zalman          CNPS8900 Extreme    low         24.2                      34.8
Noctua          NH-L12              high        16.3                      44.7
Noctua          NH-L12              low         19.1                      41.3
Coolermaster    Gemin II M4         high        26.5                      43.5
Coolermaster    Gemin II M4         low         54.2                      28.2
-
Reply to Yuka
But Kaveri is very far away as it stands now, AFAIK. Besides, it might not even have all the goodies on FM2 if the FM3 socket (and new chipset) rumour is true. Richland might be a stopgap, but between getting a Trinity now and waiting a month or so for Richland, I'd suggest waiting for Richland.
Cheers!
-
Reply to Yuka
fm3? did i miss something? i'm still fumbling through the new forum and all..
i read here or somewhere else that kaveri may fit into both fm2 and fm3 motherboards; you'd only lose the fm3-specific features. sorta like if you use a sandy bridge cpu in a z77 mobo: your pcie x16 gfx slot runs at gen 2.0 speed. maybe fm2 loses gddr5 compatibility. i think overclockability will stay, and ddr3 ram support will stay as well. imo the integrated pcie controller shouldn't be a problem; the a85x chipset already allows pcie lane splitting, so discrete-card cfx shouldn't be an issue on the platform side.
yeah, the delay is problematic. imo if you get richland now, richland's cost just adds to the final kaveri rig, since the two are launching so close together (if kaveri launches by q3-q4).
-
Reply to de5_Roy
mayankleoboy1
March 14, 2013 7:27:35 AM
-
Reply to de5_Roy
mayankleoboy1
March 14, 2013 8:19:40 AM
-
Reply to mayankleoboy1
Cazalan said:
gamerk316 said:
And again, I dislike moving rendering code off the GPU. Thats its own separate debate.
Ideally GPUs would do all the work. The world is moving to APUs though and the GPUs are going to be relatively constrained compared to discrete.
Disagree. GPUs excel in massively parallel workloads, but their long memory read times (high latency) and slow per-thread execution (for a single shader) make them very unsuited for non-parallel work.
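An illustrative sketch (my own example, not from the thread) of the point above: a latency-bound dependency chain gives a GPU's thousands of slow shader cores nothing to overlap, while independent per-element work maps cleanly onto them.

```python
def serial_chain(x, steps):
    # Each step needs the previous result, so the work cannot be split
    # across cores; total time is dominated by per-step latency.
    for _ in range(steps):
        x = (x * 3 + 1) % 1_000_003
    return x

def parallel_friendly(values):
    # Every element is independent; this is the shape of work a GPU
    # can spread over thousands of threads.
    return [(v * 3 + 1) % 1_000_003 for v in values]
```

The function names and the toy arithmetic are illustrative only; the point is the dependency structure, not the math.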
griptwister said:
My thoughts: if you're using a GTX 890, you're not going to be using a 2600K. Sooo, any more excuses? Or are we going to accept the fact that games will be using more than 4 cores now?
Haswell won't have enough new CPU performance to justify moving to a new motherboard. And it's not like IB was a significant enough upgrade to warrant an upgrade either. I typically only upgrade the CPU every 4-5 years or so, simply because you don't see significant performance jumps across generations anymore. [GPU upgrades, by contrast, are near mandatory every 2 years now]
-
Reply to gamerk316
gamerk316 said:
de5_Roy said:
temash, kabini and the console deals are gonna be the moneymakers this year.
I'm not convinced AMD is going to make a lot of profit off the consoles. Especially since WiiU sales are already falling, and I suspect the PS4/Xbox Next sales aren't going to do that well either.
Then again, I'm very pessimistic on consoles in general at this point. I'm convinced Smartphones/Tablets have more or less made them obsolete.
I'm sorry, but consoles and smartphones/tablets have different markets and usages/purposes. To even fathom the thought of a smartphone/tablet performing the function of a console as a gaming port, home theatre and entertainment device is beyond ludicrous. I just can't see myself wanting to play Gears of War on a phone or tablet, you know, with the lack of practicality.
palladin9479 said:
sarinaide said:
Got a few Richland notebooks and ultrabooks in. Think these are going to be very good offerings.
My next project that I've been trying to convince myself to do is a mini-ITX set-top miniature gaming box for my living room. There won't be any room for a dGPU and the power budget is 180W maximum, so it's pretty obvious that I'd use an APU with DDR3-2133 memory. The purpose of this box will be to play older games that I want to enjoy in the big room.
So knowing this, would richland be worth waiting for?
It has been answered, wait for Kaveri. If you cannot wait then your best option would be the A10 6800K/6700.
I have a HTPC build using;
- Silverstone Sugo7/Streamline 550w Bronze PSU that came with the chassis
- ASRock FM2A85X ITX
- A10 5800K
- G.Skill TridentX DDR3-2400
Runs perfectly, cold and quiet.
For gaming, I have managed to max the following at full HD on the HD7660D:
Diablo 3
F1 2010-2012
NFS Shift 2
ARMA II Combined Operations with DAYZ mod
Dead Space 1 and 2
Mass Effect 1 and 2, 3 is a bit iffy
Fifa 2012 - 2013
Shogun 2
Max Payne 3
Sleeping Dogs
Dirt 3 gets sluggish at times at 1920x1080; drop to 16:10/16:9 and the frames jump.
L4D2
Football Manager 2013
MLB2K11 - 12
Mirrors Edge
Stalker and Stalker Clear Sky.
Doom BFG edition.
Games playable on Medium or lower at 16:10/16:9 or lower:
BF3 multiplayer: turn settings to low, meshing at ultra, and the frame rates jump to 40 FPS.
Skyrim: Medium settings are the best option here, very playable.
Dishonored
Metro 2033
STALKER Call of Pripyat
de5_Roy said:
fm3? did i miss something? i am still fumbling through the new forum and all..i read here or somewhere else that kaveri may fit into both fm2 and fm3 motherboards. only you lose fm3-specific features. sorta like if you use a sandy bridge cpu in a z77 mobo, your pcie x16 gfx slot runs at gen 2.0 speed. may be fm2 loses gddr5 compatibility. i think overclockability will stay, ddr3 ram support will stay as well. imo integrated pcie controller shouldn't be a problem. a85x chipset already allow pcie lane splitting, so discreet card cfx shouldn't an issue on the platform side.
yeah, the delay is problematic. imo if you get richland now, richland's cost just adds to the final kaveri rig since these two are launching so closely (if kavery launches by q3-q4).
From what I read, all FM2 parts are compatible with FM3 (undecided whether FM3 or FM2+ will be used); the only difference is FM3 boards will have integrated GDDR5 support while FM2 doesn't. But on features AMD don't skimp like Intel do, so whatever Kaveri can do, Trinity can do too, just at a lower level.
-
Reply to sarinaide
sarinaide said:
I'm sorry but Console and smartphones/Tablets have different markets and usage/purposes. To even fathom the thought of a smartphone/tablet performing the function of a console as a gaming port and home theatre and entertainment device is beyond ludicrous. I just cant see myself wanting to play gears of war on a phone or tablet you know with the lack of practicality.
if you look at market share and sales, you'll notice that tablets have successfully taken away laptop display share, ssd share, overall laptop share, mobile ram share and so on. smartphones have almost killed off point-and-shoot digital cameras and standalone portable music players, and are taking users away from the portable console market. tablets and smartphones have also put a dent in desktop markets as odms set aside more share for mobile devices. that's why desktop ram prices have started to climb.
it's irrelevant what you can and cannot do on a tablet/smartphone. manufacturers are frustrated with stagnant markets like dt ram, pc monitors and prebuilt pcs, and are focusing more on tablets and smartphones.
you've been in the minority for a long time now.
sarinaide said:
From what I read or learned was that all FM2 parts are compatible with FM3(undecided whether FM3 or FM2+ will be used), the only difference is FM3* boards will have integrated GDDR5 support while FM2 doesnt but in features AMD don't skimp like Intel do so whatever the Kaveri can do the Trinity can just at a lower level.
did you hear if kaveri apus will fit fm2 motherboards or not? if not, then we may have an fm1-fm2 style transition again.
that makes sense - fm2 apus being forward compatible with the fm3 socket. i seem to have incorrectly assumed that kaveri, if it's fm3, will be backwards compatible with fm2. if that's true, it essentially locks trinity owners out of a 'proper' upgrade. users only get to recycle apus to new fm3 motherboards... and nothing much else. since the memory controller is inside the apu (has been for a long time), a trinity apu on an fm3 mobo may not be able to benefit from higher memory bw.
-
Reply to de5_Roy
Well, considering the only date I've found was SA's 2014 delay, waiting for Kaveri is too much to ask IMO.
Richland won't be the best product ever seen from AMD, yes, but it's hardly the worst or a bad buy. Since palladin wants a *new* HTPC and Richland is very near (1 month, maybe a little more), why buy a Trinity APU or even wait till 2014? Unless they're DIRT cheap, it makes no sense to build a Trinity HTPC right now. Also, it IS a better revision of Trinity. Just like when Phenom II C3 came out, no one in their right mind was telling people to get a C2. This is the same.
So, I do not agree to wait for Kaveri. At all.
Cheers!
EDIT: Correction.
-
Reply to Yuka
2014... that would make it almost a year away. that certainly makes richland more viable. upgradability may be suspect, but for the near term it should be fine.
edit: but think about this - 2014 will bring broadwell, an igpu tock, 14nm and full-on socs vs a kaveri apu @ 28nm. broadwell will have an easier time scaling down to cash-cow tablet and smartphone socs. new atoms will be out as well, bay trail... iirc. desktop and laptop markets will shrink more, and ram (and possibly other component) prices might go further up. delaying kaveri to 2014 will make it harder for amd.
-
Reply to de5_Roy
easy answer. the way it should be programmed and the way they program it are 2 different things. we need devs to make all software 100% multi-threaded, meaning 2+ cores at least, so we can start seeing good performance increases. until then, 4, 6 and 8 cores aren't doing zip.
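A hedged sketch of the point above (my own illustration, not from the post): software only spreads across cores when the developer explicitly splits the work into independent chunks. In CPython, CPU-bound code would want ProcessPoolExecutor instead, since the GIL serializes pure-Python threads; the chunking pattern is the same either way.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    # The per-worker task: completely independent of other chunks.
    return sum(chunk)

def threaded_sum(data, workers=4):
    # Split the list into one slice per worker, then combine the results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))
```

The design point is the split/merge structure: a game engine that keeps everything on one thread gets nothing from extra cores, no matter how many are present.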
hopefully steamroller will be a success, but I already invested in an intel platform so no amd for me. but who knows, if steamroller is in top shape I might change my mind.
-
Reply to iceclock
de5_Roy said:
sarinaide said:
I'm sorry but Console and smartphones/Tablets have different markets and usage/purposes. To even fathom the thought of a smartphone/tablet performing the function of a console as a gaming port and home theatre and entertainment device is beyond ludicrous. I just cant see myself wanting to play gears of war on a phone or tablet you know with the lack of practicality.
if you look at marketshare, sales you'll notice that tablets have successfully taken away laptop display shares, ssd shares, overall laptop shares, mobile ram shares and so on. smartphones have almost killed off point-and-shoot digital cameras, standalone portable music players and are taking away users from portable console market. tablet and smartphones have also put dent in desktop markets as odms are putting aside more share for mobile devices. that's why desktop ram prices have started to climb.
it's irrelevant what you can and cannot do on a tablet/smartphone. manufacturers are frustrated with stagnant markets like dt ram, pc monitors, prebuilt pcs and are focussing more on tablets and smartphones.
you've in the minority for a long time now.
sarinaide said:
From what I read or learned was that all FM2 parts are compatible with FM3(undecided whether FM3 or FM2+ will be used), the only difference is FM3* boards will have integrated GDDR5 support while FM2 doesnt but in features AMD don't skimp like Intel do so whatever the Kaveri can do the Trinity can just at a lower level.
did you hear if kaveri apus will fit fm2 motherboards or not? if no, then we may have a fm1-fm2 transition again.
that makes sense - fm2 apus being forward compatible with fm3 socket. i seem to have incorrectly assumed that kaveri, if it's fm3, will be backwards compatible with fm2. if it's true, that essentially locks out a trinity owners from a 'proper' upgrade. users only get to recycle apus to new fm3 motherboards...and nothing much else. since the memory controller will be inside the apus (has been for a long time), trinity apu on an fm3 mobo may not be able to benefit from higher memory bw.
on the FM2 question: yes, Kaveri is compatible with the FM2 socket. AMD has stated this, and ASRock have told me that FM2 will be compatible with Kaveri with no BIOS update needed, which makes me wonder if the vendors didn't know this beforehand. The only benefits to FM3 are:
1) DDR3 3000+ speeds I believe are open, but a kit of DDR3 2800 is just expensive.
2) PCIe 3.0/3.1 in Tri-fire or 3 way SLI support.
3) High Speed USB 3.0.
4) GDDR5.
5) First boards with DDR4 support.
6) Hybrid Memory support.
A lot of people are really excited about Kaveri and its iGPU, some suggesting around 60% faster with 100% higher bandwidth. I am turning over 41GB/s with DDR3 2400+ and there is some serious grunt in these Devastators. Another interesting thing is what Richland will also offer: AMD will support Dual Graphics mode with select VLIW4, GCN and GCN 2.0 cards, and the HD8650/8670 will release at the end of the year with Kaveri; it's twice as fast as the HD6670, with GDDR5 support.
AMD are taking hybrid processing to a completely different level, the way it's meant to be done. AMD is very big news right now for the right reasons: with Richland notebooks the iGPU just got serious and, as expected, puts HD 4000 well out of the rear-view mirror; GT3 may close the gap, but that's about it. In ultrabooks with no discrete cards, if a gaming laptop/ultrabook is your thing and you buy Intel, then you are probably reading Tom's and AnandTech's Intel myopia pieces and can live with the choppiness at a much higher cost. They have also released what is probably the most potent tablet on the market if spiffed-up graphics and interactive features are your thing, there is talk of ARM partnerships for smartphone processors coming soon, and then of course Richland desktop, which is a nice upgrade over Trinity; if you are not already on Trinity and want to buy now, Richland is very much at the forefront of any builder's choice.
Quote:
2014.... will make it almost a year away. that certainly makes richland more viable. upgradability may be suspect but for near term, it should be fine.edit: but think about this - 2014 will bring - broadwell, igpu tock, 14nm and full on soc vs kaveri apu @ 28nm. broadwell will have easier time to scale down to cash-cow tablet and smartphone socs. new atoms will be out as well, bay trail...iirc. desktop and laptop markets will shrink more, ram (and possibly other component) prices might go further up. delaying kaveri to 2014 will make it harder for amd.
Yeah, 32nm to 22nm yielded more troubles than gains: Sandy to Ivy brought basically no improvement, Ivy to Haswell will only really yield an iGPU improvement but has had a lot of troubles up to now, Ivy Bridge-E is delayed, and there were the USB 3.0 failures. What is Intel's solution? Start feeding you Broadwell nonsense on 14nm and yet another socket, because why not. The real issue here is that Intel have become desperate: brick-walling x86, shrinking dies without any tangible benefits, and an iGPU which is like turning a CPU into a GPU with no magic pixie dust or miracle cure. Desktop is a thriving market and will be for a very long time despite the bull Intel spew; this sentiment is shared by many in the review world.
Cherish their FABs; it's what has always kept them ahead of AMD. If AMD had Intel's FABs and resources, Intel would be out of ideas and out of business in no time.
I am sure I read that GF will have a 14nm process by sometime in 2014, but AMD will not be rapidly descending down the die shrinks because it makes no sense. I guess for Intel the good news is the most expensive GT part may sort of match Llano, give or take; that little beauty is 22nm paradise right there, and 10:1 costs in development sound impressive, don't they.
-
Reply to sarinaide
@sarinaide:
heh. i'll revisit your claims, such as getting 41gb/s from devastator (iirc that's what trinity igpus are called) igpus with ddr3 2400+ ram, when benches come out.
ddr3 2400 ram is too expensive for an entry-level platform like the apus anyway.
the rest of the post looks like some kind of anti-intel statement. i'll try to reply to the parts i understood:
broadwell might not need a new socket... but that remains to be seen.
desktop isn't as thriving as it was before. it'll live on like it has so far, but at a smaller size, as long as people give money to smartphones and tablets. after those fads are over, maybe dt and laptops make a comeback. although, that probably won't happen in 2013-14.
reviewers, unfortunately, don't run markets. i think most hardware reviewers are pc enthusiasts who favor... pcs. in fact, desktops do rule in terms of performance. but the average public has tasted the poison of affordable, portable, touch-based computing (i.e. tabs and phones and phablets) in the sub $500-700 price range in this bad economy, and they seem to want moar of those.
sounds like intel's die shrinks are sour, lol.
intel is making bad marketing decisions regarding ultrabooks, especially with pricing. that's ye olde news.
intel really is desperate, has been since qualcomm started to emerge and since it looked like apple might dump them. strangely, they seem to have been taking the right steps with their smartphone socs.
afaik, glofo's 14nm process may be a hybrid solution, not a true 14nm shrink - a low-power mobile soc process called 14nm xm (extreme mobility), not one for high-performance desktop asics. i don't think i've seen anything like a glofo desktop/cpu roadmap recently.
if amd does start making arm socs with radeon igpus, the biggest losers would be the enthusiasts who have become accustomed to playing with cheap, customizable amd cpus. amd will make money like always.
-
Reply to de5_Roy
Hmm, maybe some confusion. The purpose of this box isn't to play any high-end games; that is what my FX-8350 + 580 Hydro x2 rig is for. I often play emulators or other "old" software, things like Might and Magic VII or other games from the '90-2005 era. A few from the 2008-2009 era also. I would like to do this while sitting in my living room with my rather comfortable setup. My office is rather cluttered with various projects and doesn't really provide for a laid-back atmosphere.
My idea so far was
A10-5800K
8GB DDR3-2133 memory
FM2A85X-ITX MB
PicoPSU 160-XT
Debating on a 256GB SSD or a larger 7200RPM HDD. Leaning towards the SSD as I store most of my home data on my server. Also I'm working inside a VERY tight power budget here, every little bit I can shave off helps. There will be no optical media as I can just mount DVD-ROM images if need be. Was looking for a good sleek case and low profile cooler. Thanks for the links.
Since Richland is so close I'll probably wait for it.
-
Reply to palladin9479
palladin9479 said:
Hmm maybe some confusion. The purpose of this box isn't to play any high end games, that is what my fx8350 + 580 Hydro x2 rig is for. I often play emulators or other "old" softer, things like Might and Magic VII or other games from the 90~2005 era. A few from the 2008~2009 era also. I would like to do this while sitting in my living room with my rather comfortable setup. My office is rather cluttered with various projects and doesn't really provide for a laid back atmosphere.My idea so far was
A10-5800K
8GB DDR3-2133 memory
FM2A85X-ITX MB
PicoPSU 160-XT
Debating on a 256GB SSD or a larger 7200RPM HDD. Leaning towards the SSD as I store most of my home data on my server. Also I'm working inside a VERY tight power budget here, every little bit I can shave off helps. There will be no optical media as I can just mount DVD-ROM images if need be. Was looking for a good sleek case and low profile cooler. Thanks for the links.
Since Richland is so close I'll probably wait for it.
Was looking for a good tight
Well, if you're very tight on power, then all the more reason to wait for Richland. Also, get the APU with the most shaders (A10, most likely), of course, then if you need to, underclock the CPU. Well, you already know how to fine-tune the APU, so it won't be anything weird for you to do.
And as a side note, it's very hard to find fresh information about Kabini or Kaveri, hahaha. Every link and piece of info I found is already in this thread. That's good, I suppose.
Cheers!
-
Reply to Yuka
de5_Roy said:
@sarinaide:heh. i'll revisit your claims such as getting 41gb/s from devastator (iirc that's what trinity igpus are called) igpus with ddr3 2400+ ram when benches come out.
ddr3 2400 ram is too expensive for entry level platform like the apus anyway.rest of the post looks like some kind of anti intel statement. i'll try to reply to the ones i understood:
broadwell might not need a new socket.... but that remains to be seen.
desktop isn't as thriving as it was before. it'll live like it has so far, but in a smaller size as long as people give money to smartphones and tablets. after those fads are over, may be dt and laptops make a come back. although, that probably won't happen in 2013-14.
reviewers, unfortunately, don't run markets. i think most hardware reviewers are pc enthusiasts who favor ... pcs. in fact, desktops do rule, in terms of performance. but average public has tasted the poison of affordable, portable, touch-based computing (i.e. tabs and phones and phablets) at sub $500-700 price range in this bad economy and they seem to want moar of those.
sounds like intel's die shrinks are sour, lol.
intel is making bad marketing decisions regarding ultrabooks, especially with pricing. that's ye olde news.
intel really is desperate, has been since qualcomm started to emerge and since it looked like apple might dump them. strangely, they seem to have been taking the right steps with their smartphone socs.
afaik, glofo's 14nm process may be a hybrid solution, not a true 14nm shrink, for low power mobile socs called 14nm xm (eXtreme Mobility), not for high performance desktop asics. i don't think i've seen anything like a glofo's desktop/cpu roadmap recently.
if amd does start making arm socs with radeon igpus, the biggest loser would be enthusiasts who have become accustomed to playing with cheap, customizable amd cpus. amd will make money like always.
1) DDR3 1866 - 30GB/s, DDR3 2000 - 33GB/s, DDR3 2133 - 34GB/s, DDR3 2400 - 37GB/s, DDR3 2800 - 41GB/s.
2) Yes the 14nm is intended for ULV mobility parts which AMD/ARM are pushing, but DT will remain on 28nm and whatever is after that.
3) Notebooks and ultrabooks are proficient for on-the-move people, and granted, businesses do move to mobility and its after-market support more these days, but the standard home and office PC is still prominent; as a professional workbench and gaming or enthusiast platform, desktop rules the roost. With other technologies evolving alongside desktop, it is far from dead, maybe just evolving.
Smartphones are really just cell phones with access to wireless networks, letting people reach internet and email on the move, but they are not good for much more than that. Composing a full legal document on a smartphone's clumsy keypad probably takes 3 to 4 times longer than on a notebook or desktop, so while people buy phones, they are far from ever matching the utility of DT, and frankly there is no phone, tablet or ultrabook that matches a desktop when it comes to gaming. Although AMD's new tablet PC is just amazing, with full HD resolution and playable frame rates; I can see them selling by the boatload.
4) Why are enthusiasts the losers when AMD is improving? Should SR produce the 20-25% computing performance improvement, that's about nip and tuck with Intel's current line; then you throw in the iGPU, which is likely to be really good, and that kinda suggests AMD are pushing towards something big. Excavator is rumoured to be a unified socket with high-end x86 performance and a mainstream iGPU; I dunno how that is not appealing to an enthusiast. As an overclocker, you take AMD over Intel every day to Sunday.
5) Intel are desperate to control everything while the quality of product slips; I guess when you have the fabs, who needs to worry about the quality of the brand. Intel chasing mobility is purely to provide expensive chips to a market with high annual turnover, while abandoning their bread-and-butter market. Haswell is being pushed back, there are reports of problems, heck, reports of Broadwell's problems... gee, let's just get one product working before we start jumping to the next. To me, Intel are in a rush to nowhere, and they are taking the desktop market for a ride.
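A quick sanity check (my own arithmetic, not from the post) on the DDR3 figures in point 1: theoretical peak for dual-channel DDR3 is the transfer rate (MT/s) times 8 bytes per transfer times 2 channels. Measured bandwidth like the quoted numbers sits somewhat below these ceilings.

```python
def ddr3_dual_channel_peak_gbs(mt_per_s):
    # 64-bit (8-byte) bus per channel, two channels, decimal GB/s.
    return mt_per_s * 8 * 2 / 1000

# Print the theoretical ceilings for the speed grades mentioned above.
for rate in (1866, 2133, 2400, 2800):
    print(f"DDR3-{rate}: {ddr3_dual_channel_peak_gbs(rate):.1f} GB/s peak")
```

This gives roughly 29.9, 34.1, 38.4 and 44.8 GB/s respectively, so the quoted 30/34/37/41 figures are plausible as real-world results a little under peak.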
palladin9479 said:
Hmm maybe some confusion. The purpose of this box isn't to play any high end games, that is what my fx8350 + 580 Hydro x2 rig is for. I often play emulators or other "old" softer, things like Might and Magic VII or other games from the 90~2005 era. A few from the 2008~2009 era also. I would like to do this while sitting in my living room with my rather comfortable setup. My office is rather cluttered with various projects and doesn't really provide for a laid back atmosphere.My idea so far was
A10-5800K
8GB DDR3-2133 memory
FM2A85X-ITX MB
PicoPSU 160-XT
Debating on a 256GB SSD or a larger 7200RPM HDD. Leaning towards the SSD as I store most of my home data on my server. Also I'm working inside a VERY tight power budget here, every little bit I can shave off helps. There will be no optical media as I can just mount DVD-ROM images if need be. Was looking for a good sleek case and low profile cooler. Thanks for the links.
Since Richland is so close I'll probably wait for it.
Was looking for a good tight
160W is not going to work.
-
Reply to sarinaide
wh3resmycar
March 14, 2013 4:30:13 PM
hmm, i believe i found a reasonable scenario that'll make a PD (currently, and probably a same-range steamroller in the future) viable... and seeing how an i3 3220 sucks at crysis 3, which would be the game i'm planning to build a new rig around, a setup with a PD 4300 + 2x hd7850 + asrock 970 extreme4 makes a bit of sense. thoughts? i almost forgot that amd offers pcie lanes at a much cheaper price.
-
Reply to wh3resmycar
mayankleoboy1
March 14, 2013 6:51:57 PM
sarinaide said:
2) Yes the 14nm is intended for ULV mobility parts which AMD/ARM are pushing, but DT will remain on 28nm and whatever is after that.
14nm is just 14nm; it's not "intended" for anything except making transistors smaller. It's the final implementation that matters. That implementation can be for high-end 100W processors, or it can be for mobile devices.
Since the market has irrevocably turned to mobile (and has been for a long time), the focus of every new node is to reduce idle power consumption.
Haswell and Broadwell are basically mobile parts, really and sadly.
Quote:
and USB 3.0 failures. What is intels solution, start feeding you broadwell nonsense on 14nm and yet another socket because why. The real issue here is that intel have become desperate, brick walling x86, shrinking dies without any tangible benefits and a iGPU which is like turning a CPU into a GPU with no magic pixie dust or miracle cure.
I would say that increasing x86 IPC is getting difficult, simply because all the possible ways have already been tapped. So basically you have two options left:
1. Increase instruction-level parallelism: in the form of SSE3/4/AVX/AVX2. Or make the CPU more like a GPU.
2. Increase clocks. Which AMD has been doing, because they don't have enough money to research an advanced architecture.
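A toy sketch of option 1 above (my own illustration; the function names emulate, not call, real SSE/AVX intrinsics): data-level parallelism packs several lanes into one instruction, so one "vector add" covers the work of four scalar adds.

```python
def vadd4(a, b):
    # One emulated 4-lane vector instruction: all four lane additions
    # happen in a single "operation".
    return [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]

def scalar_adds(a, b):
    # The same work expressed as four separate scalar operations.
    return [x + y for x, y in zip(a, b)]
```

Same result either way; the win is that real SIMD hardware retires the four-lane version as one instruction instead of four.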
-
Reply to mayankleoboy1
And that is why any form of higher usage of CPUs should be viewed as a step in the right direction.
If it can be done here and there, it can be done elsewhere, and using a heterogeneous approach only widens such opportunities, even and especially at this early phase.
We can core or stack our way to heaven, but be limited by our SW.
Average Joe doesn't see the use for a hex or octo core except for a few new games, with more on the horizon.
Will this prompt Intel to widen its SKU count and pricing to include such chips for average Joe to use then?
And again, that's only if SW can make its needed contributions.
So far, at least in gaming, it looks like it may.
-
Reply to JAYDEEJOHN
mayankleoboy1 said:
sarinaide said:
2) Yes the 14nm is intended for ULV mobility parts which AMD/ARM are pushing, but DT will remain on 28nm and whatever is after that.
14nm is just 14nm, its not "intended" for anything, excepet making transistors smaller. Its their final implemenmtation that matters. That implementation can be for high end 100W processors. Or it can be for mobile devices.
Since the market has irrevocably turned to mobiles, (and has been since a long time), the focus of every new node is to reduce idle power consumption.
Haswell and Broadwell are basically mobile parts, really and sadly.
Quote:
and USB 3.0 failures. What is intels solution, start feeding you broadwell nonsense on 14nm and yet another socket because why. The real issue here is that intel have become desperate, brick walling x86, shrinking dies without any tangible benefits and a iGPU which is like turning a CPU into a GPU with no magic pixie dust or miracle cure. I would say that increasing x86 performance IPC is getting difficult, simply because all the possible ways have already been tapped. So basically you have two options left :
1. Increase Instruction level parallelism : in the form of SSE3/4/AVX/AVX2 . Or make CPU more like a GPU
2. Increase clocks. Which AMD has been doing, because they dont have enough money to research advanced architecture.
1) The 8350 at much higher clocks consumes a little less power than the FX-8150 in idle and load states, so by deductive logic, if the 8350 were clocked at 3.3GHz it would consume considerably less power than the FX-8150 and the 1100T, even comparable to Intel's hex-core parts. This is without any die shrink, on the exact same process. This suggests to me that Intel's reckless drive to 10nm will not really yield the gains intended, and if it is Intel's endeavor to leave DT and go to mobility, even at that level they will not be able to compete with Samsung, Qualcomm, Nvidia etc. So again, die shrinks don't bring performance gains; it is just Intel trying to drive its fabs onto a universal 14nm/10nm process without diversifying it depending on the market.
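The clock/power deduction above can be sketched with the textbook dynamic-power relation P ≈ C·V²·f. All numbers below are illustrative placeholders, not measured FX figures:

```python
# Rough dynamic-power model: P ~ C * V^2 * f.
# The capacitance, voltages and clocks here are assumed, not FX-8350 data.

def dynamic_power(cap: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power for a given effective capacitance,
    core voltage and clock frequency."""
    return cap * volts ** 2 * freq_ghz

# Baseline: a hypothetical chip at 4.0 GHz / 1.30 V.
base = dynamic_power(1.0, 1.30, 4.0)

# Downclocked to 3.3 GHz; lower clocks usually allow a lower voltage,
# say 1.15 V (again, an assumed figure).
down = dynamic_power(1.0, 1.15, 3.3)

print(f"relative power at 3.3 GHz: {down / base:.2f}")  # roughly 0.65
```

Because voltage enters squared, even a modest voltage drop alongside the lower clock cuts dynamic power by about a third in this sketch, which is why downclocked parts look so much more efficient.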
AMD has no necessity to drive die sizes down, yet they are putting out more and making faster chips without breaking the thermals the company deems acceptable. Now, with AMD making some potent stuff in notebook, ultrabook and tablet forms, I think it will start becoming a problem for Intel, who are still very dependent upon AMD and Nvidia for any value.
2) AMD are doing both: apart from improving IPC per product, they are a) bumping clocks while not pushing thermals and b) adapting their parts to hybrid processing. By Excavator we will have a fully fledged x86 processor working in tandem with an iGPU core packing some serious grunt, existing in harmony with each other, and by then HSA will be more prominent.
Intel's iGPU will be nothing more than an iGPU with absolutely no heterogeneous support, so they need to come up with the next big thing or squeeze more blood from the x86 rock.
3) Some other AMD news: AMD and Patriot are releasing new AMD RAM with tighter latencies, higher SPDs and, more importantly, more aggressive AMP profiles. AMD is also releasing SSDs soon, which will receive support on all AMD chipsets to work in unison with AMD RAM disks for faster pre-caching.
And they said AMD is dying.
-
Reply to sarinaide
mayankleoboy1
March 15, 2013 12:30:54 AM
^ The problem with x86 now is that to get even a 10% IPC improvement, the needed branch prediction, faster/larger caches and improved pipelines require a lot more transistors, which means shrinking the node is the best step.
Last I checked, a 4-module (8-'core') PD CPU had more transistors than Intel's 4-core processor with an iGPU, and the HD 4000 takes a considerable amount of die space (about 30-40%?).
My point: for pure x86 performance, AMD is using more transistors and much more power (50% more?) than Intel to get similar performance in multithreaded integer apps, somewhat less in FP apps, and a lot less in single-threaded apps.
To reach Intel's IPC, AMD would need to add quite a lot more logic, which means quite a lot more transistors. Assuming AMD has to work within the laws of physics, a node shrink is their best option.
Quote:
1) 8350 at much higher clocks consumes a little less power than the FX 8150 in idle and load states, so by deductive logic is the 8350 is clocked at 3.3ghz it will consume considerably less power than the FX8150 and the 1100T even comparible to Intels Hexcore parts. This is without much in way of die shrinks even on the exact same process. This to me suggests that Intels reckless drive to 10nm will not really yield the gains intended and if it is Intels endeavor to leave DT and

Your argument is facetious. An 8350 at 3.3GHz would perform extremely poorly. Reason: long pipelines. So:
1. Initial latency is high.
2. On a branch-prediction miss, the whole pipeline is flushed and refilled at only 3.3GHz. So, super slowness.
Long pipelines are very good for throughput if:
1. Your clock speeds are high enough to compensate for a) the initial delay and b) pipeline flushes.
2. Branch prediction is excellent.
#1 means increased power consumption: dynamic power scales with voltage squared times frequency, and higher clocks need higher voltage. So either you remain stuck on the current node, or you move to a smaller node. Intel's "reckless drive to 10nm" is actually logical and practical.
#2 needs extensive CS research, which needs money AND scientists AND engineers.
Would you care to benchmark a mere Intel hexcore against a super 8-core AMD Piledriver @ 3.3GHz?
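The pipeline-flush argument above can be made concrete with a textbook effective-CPI model. All parameters below are illustrative assumptions, not measured Piledriver or Intel figures:

```python
# Toy effective-CPI model for branch mispredictions (illustrative numbers only).

def effective_cpi(base_cpi: float, branch_frac: float,
                  mispredict_rate: float, flush_penalty_cycles: int) -> float:
    """Average cycles per instruction once misprediction flushes are included.
    flush_penalty_cycles grows with pipeline depth, which is the point:
    a longer pipeline pays more for every miss."""
    return base_cpi + branch_frac * mispredict_rate * flush_penalty_cycles

# Hypothetical shorter pipeline: 14-cycle flush penalty.
short_pipe = effective_cpi(1.0, 0.20, 0.05, 14)
# Hypothetical longer pipeline: 20-cycle flush penalty, same predictor.
long_pipe = effective_cpi(1.0, 0.20, 0.05, 20)

print(short_pipe, long_pipe)  # the long pipeline loses more CPI per miss
```

At equal clocks the deeper pipeline is simply slower here; it only wins if the extra depth buys a correspondingly higher clock or a better predictor, which is exactly the trade-off being argued.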
Quote:
go to mobility, even at that level they will not be able to compete with Samsung, Qualcom, Nvidia etc in mobility, so again die shrinks don't bring performance gains it is just Intel trying to drive its FABs into a universal 14nm/10nm process without diversifying it depending on the market.

Intel's problem in the mobile space is not technical but the old mindset of its management. They just can't 'allow' consumers to get a fully functional chip; they have to trim features and quote ridiculously high prices.
Plus, there is the architectural difference between x86 and ARM to consider. AMD is failing worse than Intel in the smartphone/tablet market. So this means:
1. x86 is not the best arch for low power.
2. There is no x86 software for mobiles, so users don't need an x86 CPU.
3. AMD's Temash is basically banking on its iGPU performance to win customers. And the GPU is API-driven, not tied to x86.
4. To become competitive with ARM in both power and performance, x86 needs to be manufactured on a much smaller node.
5. If Intel made ARM SoCs on its 14nm node, do you think Sammy/QC/Apple would have a chance? No. But Intel can't do that, because the market is already biased toward mobility (which is ARM-dominated), and if they themselves make ARM, x86 has no chance in the mobile market.
Your argument sounds like it's OK for TSMC and GloFo to move to a smaller node, but if Intel does it, it's somehow wrong. Plus, what Intel has already achieved by moving to 22nm FinFET alone, the consortium of IBM, GloFo and Sammy has not yet reached.
Intel already has working 14nm chips, which will come in both low-frequency versions for mobiles and high-frequency versions for DT. The high-frequency chips are much harder to research and fabricate.
The consortium is working to make only mobile chips, which are typically low-frequency.
I sense a major case of butthurt because AMD can't do for at least another year what Intel did a year ago.
Still, I applaud AMD for their perseverance and for bringing HSA to the market. But you can't appreciate Intel for anything.
Quote:
Now with AMD making some potent stuff in notebook, ultrabook and tablet forms, I think it will start becoming a problem for Intel who are still very dependant upon AMD and Nvidia for any value.

Intel has about 80% of the notebook market, and 100% of the Ultrabook market.
Last I checked, AMD's ultrathin concept hasn't gotten off the ground. Tablets? x86 has about 1% of the tablet market. Let AMD and Intel fight for that; nobody can claim victory here. I suspect x86 can't get a bigger share.
Quote:
Intel's iGPU will be nothing more than a iGPU with absolutely now Heterogeneous support

The advantage of a node win is that you can keep throwing more and more transistors at the iGPU to gain performance. Your arch may be shitty, but you will still get good performance.
Quote:
adapting their parts to hybrid processing, by excavator we will have a fully fledged x86 processor working in tandem with a iGPU core packing some serious grunt existing in a space of harmony with each other, by then HSA will be more prominant.

I really wish HSA becomes more prominent. But software devs need AMD's support to use HSA. It's easy to get parallel processing out of an embarrassingly parallel problem (like video or rendering); it's the other, more common, general software that is difficult to parallelize. Unless AMD can do some automagic here, HSA will remain a gimmick at best.
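The embarrassingly-parallel point above can be illustrated with a toy sketch (`brighten` and `running_total` are made-up helpers on made-up data): a per-pixel filter has fully independent iterations and maps cleanly onto many cores or GPU lanes, while a running total has a loop-carried dependency and does not.

```python
# Embarrassingly parallel: every pixel is independent, so the work
# splits across cores (or HSA GPU lanes) with no coordination at all.
def brighten(pixel: int) -> int:  # hypothetical per-pixel filter
    return min(pixel + 50, 255)

frame = [0, 100, 180, 240]
bright = [brighten(p) for p in frame]  # each call could run anywhere

# Loop-carried dependency: step i needs the result of step i-1,
# so the iterations cannot simply be handed to separate cores.
def running_total(xs):
    total, out = 0, []
    for x in xs:
        total += x
        out.append(total)
    return out

print(bright)                # [50, 150, 230, 255]
print(running_total(frame))  # [0, 100, 280, 520]
```

Video filters and renderers look like the first loop, which is why they were the early HSA/OpenCL wins; most general software looks like the second, which is the "automagic" problem the post refers to.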
Quote:
AMD is also releasing SSD's soon which will recieve support on all AMD chipsets to work in unison with AMD RAM disks for faster pre caching.

(This is not aimed at you, but at another member.) How is this different from the "monopolistic, bloody, capitalist, manipulative, cheating, bastard" Intel?
-
Reply to mayankleoboy1