Decided to upgrade my PC and finally came down to 2 choices for the CPU:
FX-8350 and i5-3570K. Which one should I go for? This is regarding gaming only.
The E-350 scores about 770, the E2-1800 about 850. The lowest-end current Pentium dual core, the G620T, scores in the 2130 range, well over twice as fast. To call them close is delusional.
Even at 850, if you increase it by 20% and scale it perfectly so there's zero overhead in getting 8 cores all working at 100% efficiency, you're still only looking at 850 x 1.2 x 4 = 4080... well under where the i3-3220 or FX-4300 sit.
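For anyone who wants to check that arithmetic, here's a quick Python sketch of the best case. The assumptions are mine: 850 is the dual-core E2-1800 score, the per-core uplift is a generous 20%, and 8 cores scale perfectly with no overhead.

# Hypothetical best-case score for an 8-core chip in this class.
# Assumptions: 850 = dual-core E2-1800 baseline, +20% per core, zero scaling overhead.
dual_core_score = 850
per_core_uplift = 1.20
cores = 8

best_case = dual_core_score * per_core_uplift * (cores / 2)
print(best_case)  # 4080.0, still below where the i3-3220 and FX-4300 land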
Citation needed. The CPU will be working on AI and frame data; the GPU will be working on textures and pixel pumping. Both will need access to the memory. Sure, they're both working on the game, so you can sort of twist it into saying they're working on the same thing, but in no way, shape or form are they actually working on the same tasks. To say otherwise is idiotic.
Here is an interesting test. I grabbed a little fill-rate tester to measure my GPU's memory bandwidth and ran it to simulate a couple of scenarios. The first run was with nothing running on my PC but the test itself, and it spits out:
GPU fill rate, single-texture (16/0): 7669 Mtexels/sec
GPU fill rate, multi-texture (16/0): 15345 Mtexels/sec
So I ran it again with Prime95 running on all of my CPU cores to max them out and see what kind of impact that would have. You would think it would be very little, but as it turns out it hurts quite a bit:
GPU fill rate, single-texture (16/0): 6344 Mtexels/sec (Dropped 17.2%)
GPU fill rate, multi-texture (16/0): 12678 Mtexels/sec (Dropped 17.3%)
So then I ran it again with Unigine Heaven running on my second monitor to simulate multiple things accessing the GDDR5, since Heaven isn't very CPU or system RAM heavy . . .
GPU fill rate, single-texture (16/0): 587 Mtexels/sec (Dropped 92.3%)
GPU fill rate, multi-texture (16/0): 437 Mtexels/sec (Dropped 97.1%)
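If anyone wants to sanity-check those percentages, the math is just (baseline - loaded) / baseline. A quick Python sketch using the numbers above (the last digit may differ a touch from my quoted figures due to rounding):

def drop(baseline, loaded):
    # Percentage drop in fill rate relative to the idle baseline run.
    return (baseline - loaded) / baseline * 100

print(f"Prime95, single-texture: {drop(7669, 6344):.1f}%")    # ~17%
print(f"Prime95, multi-texture:  {drop(15345, 12678):.1f}%")  # ~17%
print(f"Heaven,  single-texture: {drop(7669, 587):.1f}%")     # ~92%
print(f"Heaven,  multi-texture:  {drop(15345, 437):.1f}%")    # ~97%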
So then I thought to myself . . . hmmm, how about DDR3? I have that in my machine, so I looked for a similar way to test it. I grabbed PassMark's PerformanceTest and ran just the memory test section standalone, and it spit out a memory mark of 2688. Then I ran it with Unigine Heaven, since that mostly avoids CPU usage, and it spit out 2534, an impact of 5.7%. Then I ran Prime95 on all my cores, making sure it was on the test that hits as much memory as possible, and PassMark spits out 1966, an impact of 26.8%.
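Same formula for the DDR3 side, again using my numbers above:

def drop(baseline, loaded):
    # Percentage drop in the PassMark memory mark relative to the standalone run.
    return (baseline - loaded) / baseline * 100

print(f"DDR3 + Heaven:  {drop(2688, 2534):.1f}%")   # ~6%, light contention
print(f"DDR3 + Prime95: {drop(2688, 1966):.1f}%")   # ~27%, heavy contention

So under the worst contention I could generate, the DDR3 memory mark lost roughly a quarter of its throughput, versus the 92-97% drops in the GDDR5 fill-rate runs above.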
Still think GDDR5 is far superior when multiple things try to use it? Sony is using it as a marketing gimmick at best, and I'm not convinced it's the best choice for a shared-access scenario. Like I said before, I'm sure it will be a great gaming machine, and incredibly fun. But to think it's going to blow gaming PCs away or that its hardware is anything magical is just plain silly.
Precisely: FX chips are used in both the PS4 and the next Xbox development kits.