Solved

Is Nvidia better than ATI?

OK, so to my knowledge, from what I've heard from people and whatnot, Nvidia is better than ATI.

Now, I'm only assuming GeForce and Radeon have tiers, e.g. R7 250, R7 260, R9 270, etc., and GTX 750/760/770/780, etc.

It's my assumption that the R9 270 is Radeon's answer to the GeForce 780.

Let's take these two GPUs, at similar prices:
http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1483&products_id=23837
http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1555&products_id=26486

From what we can gather, the differences are:
Radeon:
512-bit memory bus
4 GB
GPU clock 1040 MHz
Memory clock 5000 MHz

GeForce:
384-bit memory bus
3 GB
GPU clock 954 MHz
Memory clock 6008 MHz

Now, the Radeon beats the GeForce in many of those specs, so why is the Nvidia considered better? Or is what I heard about GeForce being better wrong?
  1. ATI no longer exists; they were bought out by AMD and rebranded. Excluding that, see below:

    This may be a useful resource: http://www.tomshardware.com/reviews/gaming-graphics-card-review,3107-7.html

    Note that specs do not always translate directly into benchmarks.

    There are some who consider Nvidia to be more of a 'quality'/luxury brand, though personally I don't really agree.
  2. Someone Somewhere said:
    ATI no longer exists; they were bought out by AMD and rebranded. Excluding that, see below:

    This may be a useful resource: http://www.tomshardware.com/reviews/gaming-graphics-card-review,3107-7.html

    Note that specs do not always translate directly into benchmarks.

    There are some who consider Nvidia to be more of a 'quality'/luxury brand, though personally I don't really agree.


    OK, so I looked at the link, and that's just confirming my suspicion about tiers.

    But why don't the specs translate into benchmarks?
    Like with processors: say you get a 6-core i7 at 4 GHz with 8 MB cache, and a dual-core i3 at 2 GHz with 4 MB cache, *for example*.
    The i7 in theory should kick the i3's ass. It's got more cores, faster speed, bigger cache, etc.
    So why wouldn't the R9 290 beat the 780 (despite the 780 having a faster memory clock)? Or is it just that the 780 is worse at everything else and the memory clock brings it back up or 'redeems' it to make it equal with the R9 290?
  3. The R9 290X is the better of the two: faster GPU, more memory. AMD drivers and support were only bad in the ATI days, so no one can fairly compare the two on drivers. The 780 would run a tiny bit cooler, but not enough to make a difference. The R9 290X will be faster in most games; in some games the 780 will be faster, but almost all of those games aren't worth playing anyway. I see you're Australian; same here, mate, and I know what I'm talking about.

    Generally, people say Nvidia is better because it runs cooler, consumes less power and "eventually" brings out a faster variant of an AMD card (that last reason can also be vice versa), but overall AMD is very stable and a strong company. The R9 290X is compared to the 780 Ti a lot, but I can't justify paying $300+ for literally 1 FPS when you can overclock the R9 290X to match that or better; very rarely does it get to 5 FPS more, and I've only ever seen it be 10 FPS more once. Also, games like Battlefield 4, Crysis 3 and Watch Dogs use 3 GB+ of VRAM on ultra or at 1440p+ resolutions, so the 4 GB of VRAM will result in more immersive and overall solid gameplay.

    Right now the 290X's are super cheap; if you're not overclocking, get the XFX R9 290X for $549, I think it was. If you have any more questions, ask away; I'm around quite often :)
  4. http://gpuboss.com/gpus/Radeon-R9-290X-vs-GeForce-GTX-780

    There are many things to compare, and saying one company is better than the other is just a statement made with limited understanding.
  5. iron8orn said:
    http://gpuboss.com/gpus/Radeon-R9-290X-vs-GeForce-GTX-780

    There are many things to compare, and saying one company is better than the other is just a statement made with limited understanding.


    DO NOT USE GPU BOSS!! THEY ARE LITERALLY INTEL- AND NVIDIA-BIASED, AND THIS HAS BEEN PROVEN MANY TIMES. I HAVE NOT ONE OUNCE OF TRUST FOR THAT WEBSITE. KEEP AWAY!
  6. With Maxwell (750), I would say the technical superiority ball is clearly in Nvidia's camp at the moment: roughly half the TDP of similar-performing AMD GPUs.
  7. Yeah, that GPU boss site is definitely biased. If a site doesn't tell you exactly what benchmarks they are using to create their composite score then they can pick and choose ones that make the results turn out how they'd like.

    Take a look at the AnandTech comparison between the 290X and 780; you'll notice a pattern where the 290X almost always wins, unlike what that GPUboss site tried to show with 6 tests...
    http://www.anandtech.com/bench/product/1036?vs=1056
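
To make the composite-score point concrete, here is a minimal sketch with entirely made-up FPS numbers (no real benchmark data from either site); it shows how the "winner" of an averaged score flips depending on which games are included:

```python
# Hypothetical average-FPS numbers for two cards in six games.
# Everything here is made up purely to show how cherry-picking benchmarks
# changes a composite score; these are NOT real results.
results = {
    "Game A": {"290X": 95,  "780": 88},
    "Game B": {"290X": 72,  "780": 70},
    "Game C": {"290X": 60,  "780": 66},   # a title that happens to favour the 780
    "Game D": {"290X": 110, "780": 101},
    "Game E": {"290X": 45,  "780": 48},   # another 780-friendly title
    "Game F": {"290X": 81,  "780": 74},
}

def composite(card, games):
    """Arithmetic-mean composite score over a chosen set of games."""
    return sum(results[g][card] for g in games) / len(games)

all_games = list(results)
cherry_picked = ["Game C", "Game E"]  # only the titles where the 780 wins

for games, label in [(all_games, "all six games"), (cherry_picked, "cherry-picked subset")]:
    print(f"{label}: 290X = {composite('290X', games):.1f} FPS, "
          f"780 = {composite('780', games):.1f} FPS")
```

With these invented numbers the 290X wins the full average but loses the cherry-picked one, which is exactly why a composite score is meaningless unless the site lists the benchmarks behind it.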
  8. From a pure performance standpoint, Nvidia is generally a slight bit faster than the AMD equivalent. However, you'll pay more money for it, and it's generally not worth the extra money spent for that performance gain.

    In your example, the Radeon 290X actually competes with the Nvidia 780 Ti, not the 780. The 290X performs better than the 780, for only $20 more, so it's the better buy. This is the same-branded card that competes with the 290X, though: http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1577&products_id=26525 It's also $899.00, and not worth the price for what little performance you do gain.

    As far as specs go, only compare them within the same brand. The Nvidia specs don't really translate to AMD specs, because, like you've pointed out, there are vast disparities in the speeds.

    Like the article that was linked earlier says, the *tiers* are like this (Nvidia - AMD equivalent): 780 Ti - 290X, 780 - 290, 770 - 280X. After that it gets pretty muddy because the prices really start to diverge.
  9. InvalidError said:
    With Maxwell (750), I would say the technical superiority ball is clearly in Nvidia's camp at the moment: roughly half the TDP of similar-performing AMD GPUs.


    The 750 is the only card with that kind of TDP for a while, because the smaller manufacturing process is being delayed; Nvidia's silicon/GPU supplier can't get it to work. The new GTX 880 is going to have the same process and won't be much more energy efficient. Also, the TDP difference between Nvidia and AMD isn't that big; it's only the CPUs that have a big difference. Also take into account that AMD is a LOT cheaper than Nvidia.
  10. I'm pretty sure the 750 is 28nm Maxwell, not 20nm. Same as Kepler.
  11. Swordkd said:
    From a pure performance standpoint, Nvidia is generally a slight bit faster than the AMD equivalent. However, you'll pay more money for it, and it's generally not worth the extra money spent for that performance gain.

    In your example, the Radeon 290X actually competes with the Nvidia 780 Ti, not the 780. The 290X performs better than the 780, for only $20 more, so it's the better buy. This is the same-branded card that competes with the 290X, though: http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1577&products_id=26525 It's also $899.00, and not worth the price for what little performance you do gain.

    As far as specs go, only compare them within the same brand. The Nvidia specs don't really translate to AMD specs, because, like you've pointed out, there are vast disparities in the speeds.

    Like the article that was linked earlier says, the *tiers* are like this (Nvidia - AMD equivalent): 780 Ti - 290X, 780 - 290, 770 - 280X. After that it gets pretty muddy because the prices really start to diverge.


    Someone Somewhere said:
    I'm pretty sure the 750 is 28nm Maxwell, not 20nm. Same as Kepler.


    Because it can't be done yet. My point may have come across as saying that it was 20nm; I tend to type things wrong sometimes and it doesn't come out how I meant it, but speculation says the GTX 860 will hopefully have 20nm with more research.
  12. I have much doubt about a 20nm GPU. The next phase is 14nm/DX12.
  13. PC-Noobist said:
    Because it can't be done yet. My point may have come across as saying that it was 20nm; I tend to type things wrong sometimes and it doesn't come out how I meant it, but speculation says the GTX 860 will hopefully have 20nm with more research.

    Well, the 750 proves that there is substantial power-saving potential in tweaking things even at 28nm, and 28nm is available from UMC, TSMC, GF, etc. NOW, so Nvidia does not really need to wait for 20nm to mature before releasing more Maxwell chips... the same performance for half the electrical power sounds like a good enough reason to launch a new lineup. Certainly beats re-branding old designs with minor alterations.
  14. iron8orn said:
    I have much doubt about a 20nm GPU. The next phase is 14nm/DX12.


    14nm is the next step for Intel. Not for TSMC, who are currently on 28nm and very close to 20nm.

    DirectX version has very little correlation with node size.
  15. I think there is little actual info about it at the moment, but several people have speculated about it on here, and from what they have found it will be 14nm come early 2016.

    Really just my speculation about DX12, but... what else would they be?
  16. Nvidia says that DX12 will run on all DX11 class GPUs, including Fermi, Kepler, and Maxwell: http://blogs.nvidia.com/blog/2014/03/20/directx-12/

    Sixth paragraph.
  17. no drp.. you should argue with your mama kid... omg is it really backwards compatible? so do you want 1 cookie or 2?
  18. iron8orn said:
    no drp.. you should argue with your mama kid... omg is it really backwards compatible? so do you want 1 cookie or 2?


    I normally think I'm fairly good at parsing poor English, but this...
  19. Someone Somewhere said:
    iron8orn said:
    no drp.. you should argue with your mama kid... omg is it really backwards compatible? so do you want 1 cookie or 2?


    I normally think I'm fairly good at parsing poor English, but this...


    I agree. I'm lost o.O What do cookies have to do with anything here? I don't get it. Also, you seriously have the nicest English on the internet...
  20. Jacob Bowerman said:
    OK, so to my knowledge, from what I've heard from people and whatnot, Nvidia is better than ATI.

    Now, I'm only assuming GeForce and Radeon have tiers, e.g. R7 250, R7 260, R9 270, etc., and GTX 750/760/770/780, etc.

    It's my assumption that the R9 270 is Radeon's answer to the GeForce 780.

    Let's take these two GPUs, at similar prices:
    http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1483&products_id=23837
    http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1555&products_id=26486

    From what we can gather, the differences are:
    Radeon:
    512-bit memory bus
    4 GB
    GPU clock 1040 MHz
    Memory clock 5000 MHz

    GeForce:
    384-bit memory bus
    3 GB
    GPU clock 954 MHz
    Memory clock 6008 MHz

    Now, the Radeon beats the GeForce in many of those specs, so why is the Nvidia considered better? Or is what I heard about GeForce being better wrong?


    It really depends on what games you play. AMD is better in Mantle-based games and is usually cheaper than Nvidia for similar performance. In my personal experience I prefer Nvidia; I find their drivers are better, their software is better, their temps are better, and they run quieter than the Radeons. It's personal preference at the end of the day. Check reviews for the relevant games you play or intend to play, and compare against your budget. For me, the cooler card that overclocks better is the winner... Video RAM should be considered also; for 1440p you really want more than 3 GB, otherwise 3 GB is more than enough at the moment.
  21. Someone Somewhere said:
    Nvidia says that DX12 will run on all DX11 class GPUs, including Fermi, Kepler, and Maxwell: http://blogs.nvidia.com/blog/2014/03/20/directx-12/

    Sixth paragraph.


    Apparently they learnt from the mistakes of DX10... Hehe. Yes, I have read multiple sources saying the same thing: Nvidia are claiming DX12 will be backwards compatible, unlike Mantle...
  22. Retaliator said:
    Hehe. Yes, I have read multiple sources saying the same thing: Nvidia are claiming DX12 will be backwards compatible, unlike Mantle...

    Mantle is a whole new API so that cannot be helped.

    DX12's biggest benefits are allowing persistent buffers and bypassing redundant draw steps. Since drivers were already responsible for macroscopic GPU resource management, it is not so surprising that most of those DX12 features can be implemented on older GPUs with a few relatively simple driver tweaks.

    If OGL and DX draw performance could be improved this much with such simple changes, it boggles the mind how they waited so long to sanity-check and simplify the draw process.
  23. Best answer
    Jacob Bowerman said:
    OK, so to my knowledge, from what I've heard from people and whatnot, Nvidia is better than ATI.

    Now, I'm only assuming GeForce and Radeon have tiers, e.g. R7 250, R7 260, R9 270, etc., and GTX 750/760/770/780, etc.

    It's my assumption that the R9 270 is Radeon's answer to the GeForce 780.

    Let's take these two GPUs, at similar prices:
    http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1483&products_id=23837
    http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1555&products_id=26486

    From what we can gather, the differences are:
    Radeon:
    512-bit memory bus
    4 GB
    GPU clock 1040 MHz
    Memory clock 5000 MHz

    GeForce:
    384-bit memory bus
    3 GB
    GPU clock 954 MHz
    Memory clock 6008 MHz

    Now, the Radeon beats the GeForce in many of those specs, so why is the Nvidia considered better? Or is what I heard about GeForce being better wrong?


    You can't compare specs, just like with CPUs. You can only compare performance in games from (hopefully non-biased) reviews. Clocks mean nothing, the memory interface width doesn't tell the whole story, and 3GB vs 4GB only really comes into play with multi-monitor setups.

    Nvidia's CUDA cores are generally more efficient than AMD's stream processors, so again, you can't compare core counts.

    Just look at which card is in your price bracket, check how each performs in the games you want to play (in reviews), and get the one that generally performs the best.

    AMD Mantle can help, but generally only when you have a weak AMD CPU, as it takes some load off it.
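
As a small illustration of why single spec lines mislead, the bus width and memory clock from the original post only become comparable once you combine them into theoretical bandwidth, and even that number says nothing about game frame rates. A quick sketch using the figures quoted above:

```python
def memory_bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Theoretical peak bandwidth in GB/s: (bus width in bytes) x (transfers per second)."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# Figures from the two cards in the original post (effective memory clocks).
radeon = memory_bandwidth_gb_s(512, 5000)    # ~320 GB/s
geforce = memory_bandwidth_gb_s(384, 6008)   # ~288 GB/s

print(f"Radeon: {radeon:.0f} GB/s, GeForce: {geforce:.0f} GB/s")
# The wider bus and the faster memory clock partly cancel each other out, and
# even the combined bandwidth figure is still not a benchmark result.
```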
  24. Sarcasm... children...

    Your article had nothing to do with what I said. All you wanted to do was take a shot at me, so you get a cookie for an epic fail.
  25. InvalidError said:
    Retaliator said:
    Hehe. Yes, I have read multiple sources saying the same thing: Nvidia are claiming DX12 will be backwards compatible, unlike Mantle...

    Mantle is a whole new API so that cannot be helped.

    DX12's biggest benefits are allowing persistent buffers and bypassing redundant draw steps. Since drivers were already responsible for macroscopic GPU resource management, it is not so surprising that most of those DX12 features can be implemented on older GPUs with a few relatively simple driver tweaks.

    If OGL and DX draw performance could be improved this much with such simple changes, it boggles the mind how they waited so long to sanity-check and simplify the draw process.


    The problem is most likely Nvidia's arrogance. They seem not to care too much until AMD outshine them; then they make an effort. This is why good competition is needed to keep companies like Nvidia in check. I like both companies; from the current offerings I do prefer Nvidia's cards, but I find it hard to justify the money.
  26. If you compare the fanboys, Nvidia fanboys are much worse than AMD fanboys *phew*; try meeting a hardcore Nvidia fanboy xD. They will say the most ridiculous things about AMD, like "Nvidia is 10x better all the time", whereas the AMD fanboy (who usually has 10% more knowledge of graphics cards) will only go on about Nvidia's overpricing and about how AMD beats them even at a lower price. I think it's stupid to be a fanboy, though. I'm a fan of both; I have an old 8800 GT and an R9 270X. I would say Nvidia really was dominating in 2007-2010. My old Nvidia 8800 GT can still run plenty of new games on low settings anyway :) So this is what I conclude:

    Nvidia pros: premium quality, longer lasting, slightly quieter/cooler for the same (or lower, or higher) performance.
    Nvidia cons: often priced higher, if not much higher, than an AMD card that gives identical performance. Fanboys.
    AMD pros: much cheaper. Great performance, often on par with Nvidia. Very budget friendly, with a huge range of cards spanning from low-end to ultra performance. Suitable for every budget.
    AMD cons: FX 9000 series fanboys. Often slightly higher (or much higher) TDP than Nvidia. Cooling on the reference cards is bad; that's where companies like Sapphire come in, though. Half a con.

    I hope that helped. (Nvidia is still dominating, just not as much)
  27. CAaronD said:
    Nvidia pros: premium quality, longer lasting, slightly quieter/cooler for the same (or lower, or higher) performance.

    How reliable, loud, or quiet GPU cards are is up to the individual card manufacturers to decide.

    In many cases, the cost difference between building a component with a 1-year warranty and another with a 5-year warranty is only ~$1 worth of higher-quality parts.
  28. Jacob Bowerman said:
    OK, so to my knowledge, from what I've heard from people and whatnot, Nvidia is better than ATI.

    Now, I'm only assuming GeForce and Radeon have tiers, e.g. R7 250, R7 260, R9 270, etc., and GTX 750/760/770/780, etc.

    It's my assumption that the R9 270 is Radeon's answer to the GeForce 780.

    Let's take these two GPUs, at similar prices:
    http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1483&products_id=23837
    http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_1555&products_id=26486

    From what we can gather, the differences are:
    Radeon:
    512-bit memory bus
    4 GB
    GPU clock 1040 MHz
    Memory clock 5000 MHz

    GeForce:
    384-bit memory bus
    3 GB
    GPU clock 954 MHz
    Memory clock 6008 MHz

    Now, the Radeon beats the GeForce in many of those specs, so why is the Nvidia considered better? Or is what I heard about GeForce being better wrong?


    You can't compare those numbers and actually get a meaningful result. You have to compare actual benchmarks on sites like Tom's in order to determine real performance.

    One company or the other is not "better." It's just a matter of who offers a more compelling product in your particular price range.

    Nvidia has a larger market share and spends a lot more heavily on marketing; hence the perception among the less informed that Nvidia offers a more "premium" product.

    InvalidError said:
    With Maxwell (750), I would say the technical superiority ball is clearly in Nvidia's camp at the moment: roughly half the TDP of similar-performing AMD GPUs.


    In terms of technology, the GTX 750 Ti is fairly impressive. That's not particularly useful information, though. We buy the card that is the best for our money; we don't cheerlead the company that is ahead technologically.

    The GTX 750 Ti is almost $50 more than the R7 260X, which is just as fast. It may be a relative engineering feat, but it's an extremely niche product that should be irrelevant to most of us.
  29. oxiide said:
    The GTX 750 Ti is almost $50 more than the R7 260X, which is just as fast. It may be a relative engineering feat, but it's an extremely niche product that should be irrelevant to most of us.

    You might want to check prices... the 260X is a fair bit more expensive than it used to be.

    Most R7-260X cards are currently priced at $140 and up, while the 750 Ti cards are priced at $160 and up, so the price difference is currently down to about $25.
  30. And the R9 270 beats the 750 Ti by plenty, though. And it's only $10-20 more, excluding TDP. The 750 Ti is great in old computers, but for a custom build you might as well go for an R9 270.
  31. CAaronD said:
    And the R9 270 beats the 750 Ti by plenty, though.

    ~20% better benchmark results for ~15% extra cost and 150% more TDP... so just barely even on performance-per-buck before counting the need for a beefier PSU and energy costs over time.

    AMD has a horrible lot of catching-up to do on power-efficiency.

    Personally, I like my PCs to be cool and quiet. I have never owned a Nvidia GPU before (all my graphics cards have been ATI/AMD starting with an EGA Wonder in my 8088) but if I had to upgrade right now, I would be really tempted to get a 750. I'm still happy with my current 1GB HD5770 though (sort-of itching for a 2GB upgrade but more out of geek pride than actual need), so I'll wait until the 20nm GPUs are out to decide what I'm going to get next.
  32. The R9 270 still has much more performance per dollar. Besides, the TDP difference is so little that the average user would only need to pay an extra $10 or so per year, provided he doesn't run his PC 24/7 at full load. And also, the R9 270 is only $20 more. The 750 Ti is really only useful for ultra-low-TDP builds or old desktops with bad PSUs. The performance per dollar is still more than 10% better on the R9 270. Besides, what is $10 per year? A normal person would change his card every few years anyway, assuming they don't have ridiculously high energy prices.
  33. The cheapest 750 Ti is the MSI version, priced at $119, and the cheapest R9 270 is the PowerColor, priced at $139.
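
To put rough numbers behind the perf-per-dollar and electricity arguments in the last few posts, here is a small sketch. The prices are the ones quoted just above, and the TDPs are the published figures for these two cards (60 W and 150 W); the relative performance, hours of load per day, and electricity price are assumptions for illustration, not claims from the thread:

```python
# A rough sketch of the performance-per-dollar and running-cost argument above.
# Prices are from the post above; TDPs are the published board TDPs; the
# relative-performance ratio, hours of load, and electricity price are assumptions.
cards = {
    "GTX 750 Ti": {"price": 119, "tdp_watts": 60,  "relative_perf": 1.0},
    "R9 270":     {"price": 139, "tdp_watts": 150, "relative_perf": 1.2},  # ~20% faster, per the thread
}

HOURS_PER_DAY = 2      # assumed hours of gaming load per day
PRICE_PER_KWH = 0.15   # assumed electricity price in $/kWh

for name, c in cards.items():
    perf_per_dollar = c["relative_perf"] / c["price"]
    yearly_cost = c["tdp_watts"] / 1000 * HOURS_PER_DAY * 365 * PRICE_PER_KWH
    print(f"{name}: {perf_per_dollar:.4f} perf/$, roughly ${yearly_cost:.0f}/year in electricity")
```

With these assumptions the running-cost gap works out to roughly $10 a year, in line with the estimate above; heavier use or pricier electricity shifts the balance toward the 750 Ti.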
  34. CAaronD said:
    Besides, what is $10 per year? A normal person would change his card every few years anyway.

    I consider myself an 'average' person (more precisely a geeky non-enthusiast) and still cannot really be bothered to upgrade my HD5770 at this point in time since GPUs around my $150 sweet-spot are only about 50% faster.

    Smells like stagnation on GPU progress to me. A 100% performance improvement is usually my minimum threshold to consider upgrades.

    BTW, I do run my PC under some degree of CPU and GPU load close to 24/7.
  35. Well then, if you run 24/7 (I don't know for what reason, since the 750 Ti isn't that great at mining? :P), then I suppose the 750 Ti would be the better value for money. Also, unlike the 1990s, the jumps aren't so huge; the jumps in price-to-performance are small hops at a time nowadays, I would say.
  36. CAaronD said:
    The jumps in price-to-performance are small hops at a time nowadays, I would say.

    Yup, the non-enthusiast mainstream is about as boring as CPUs now: 1.5X the performance in ~5 years is roughly 8%/year compounded; almost the same as Intel's ~7%/year.

    I think the next big GPU shake-down could actually come from Intel: GT3e has the potential to become scary stuff if Intel decides to push their IGP like they mean it but for now, it is hamstrung by its tiny 128MB local memory. Bump that to 512MB and things should get more interesting. Make it dual-channel for 1GB eDRAM and 200GB/s aggregate bandwidth, scale the IGP accordingly and now the fun begins.

    With (e)DRAM, NAND, SoC and other stuff getting stacked in one way or another for various reasons in phones, tablets, ultrabooks, embedded platforms, etc., it would not be too surprising to see AMD, Nvidia, Intel and others start integrating (more) graphics or otherwise performance-critical memory in their CPU/APU/SoC/GPU packages.
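
The ~8%/year figure above is just the compound rate implied by a 1.5x gain over about five years; a one-liner to check it:

```python
# Compound annual improvement implied by ~1.5x performance in ~5 years.
total_gain, years = 1.5, 5
annual_rate = total_gain ** (1 / years) - 1
print(f"{annual_rate:.1%} per year")   # ~8.4%, matching the rough 8%/year quoted above
```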
  37. CAaronD said:
    Well then, if you run 24/7 (I don't know for what reason, since the 750 Ti isn't that great at mining? :P), then I suppose the 750 Ti would be the better value for money. Also, unlike the 1990s, the jumps aren't so huge; the jumps in price-to-performance are small hops at a time nowadays, I would say.


    The 750 Ti is a great little miner, one of the best in terms of performance/power usage.
  38. InvalidError said:
    CAaronD said:
    The jumps in price-to-performance are small hops at a time nowadays, I would say.

    Yup, the non-enthusiast mainstream is about as boring as CPUs now: 1.5X the performance in ~5 years is roughly 8%/year compounded; almost the same as Intel's ~7%/year.

    I think the next big GPU shake-down could actually come from Intel: GT3e has the potential to become scary stuff if Intel decides to push their IGP like they mean it but for now, it is hamstrung by its tiny 128MB local memory. Bump that to 512MB and things should get more interesting. Make it dual-channel for 1GB eDRAM and 200GB/s aggregate bandwidth, scale the IGP accordingly and now the fun begins.

    With (e)DRAM, NAND, SoC and other stuff getting stacked in one way or another for various reasons in phones, tablets, ultrabooks, embedded platforms, etc., it would not be too surprising to see AMD, Nvidia, Intel and others start integrating (more) graphics or otherwise performance-critical memory in their CPU/APU/SoC/GPU packages.


    But then again, the price would increase if they did that, right? And most people are using dedicated cards, not the IGP. I suppose that would greatly benefit the non-gamers, though; for non-gamers, integrated is the way to go: media, streaming, etc.
  39. CAaronD said:
    But then again, the price would increase if they did that, right? And most people are using dedicated cards, not the IGP.

    Maybe, maybe not.

    A 512MB (4Gbit) DRAM chip currently costs $4.40 so if the "Crystalwell-512MB" custom eDRAM chip cost twice as much, that would be $20 with some of that cost already present with the current Crystalwell-128MB. Add another $10 to cover other costs and we have a ~$30 production cost premium on this souped-up IGP. Add Intel's 200% markup and we have a ~$100 premium over regular IGPs, which I think seems fair enough considering the cheapest GT3e chip is the BGA1364 i7-4770R which costs only $55 more than the standard OEM i7-4770 instead of the $200-300 premium Intel charges for mobile GT3e variants.

    If you take the R9-280X's 352sqmm 28nm die and optimistically shrink it to 14nm, it becomes 88sqmm and the 770's 294sqmm becomes 74sqmm, which are much smaller than Haswell GT3/3e's current ~170sqmm on 22nm so there seems to be plenty of room to accommodate extra graphics processing power on 14nm if Intel wanted to - which they certainly will simply because the die size is becoming too small to put all the necessary power and IO micro-BGA balls under the die anyway - Intel does not seem to like having dies much below 140sqmm.

    This hypothetical souped-up $100 IGP would have bandwidth and processing power roughly on par with a R9-280 and GTX770: 200GB/s from dual Crystalwell-512MB chips + 48GB/s from DDR4-3200 system RAM. How many people would bother getting a $300-500 discrete GPU when they can get an IGP that performs just about as fast for a $100 premium on their CPU?

    If Intel decided they wanted to grab GPU mindshare using chips featuring this hypothetical souped-up IGP, they would price them at irresistible price points and that could hurt. Intel is running out of useful things they can throw more transistors at to keep the CPU die from becoming too small and the IGP is the easiest thing to scale up in some meaningful way until more mainstream software catches up with quad/hex/octo-core/thread/whatever CPUs to justify adding more cores or threads in mainstream CPUs.
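
The die sizes and the ~$100 premium in the post above follow from two bits of arithmetic: ideal area scaling with the square of the feature size (real shrinks never scale that perfectly, which is why the post calls it optimistic), and the eDRAM cost build-up as stated in the post. A sketch of both:

```python
def ideal_shrink(area_mm2, from_nm, to_nm):
    """Ideal die area after a process shrink: area scales with the square of the feature size."""
    return area_mm2 * (to_nm / from_nm) ** 2

# Die figures quoted in the post: R9 280X (352 mm^2) and GTX 770 (294 mm^2), both 28nm parts.
print(f"R9 280X shrunk to 14nm: ~{ideal_shrink(352, 28, 14):.0f} mm^2")  # ~88 mm^2
print(f"GTX 770 shrunk to 14nm: ~{ideal_shrink(294, 28, 14):.0f} mm^2")  # ~74 mm^2

# Cost build-up for the hypothetical souped-up IGP, using the figures stated above.
edram_cost = 20                     # the post's estimate for the custom eDRAM chip
other_costs = 10                    # the post's allowance for other costs
production_premium = edram_cost + other_costs       # ~$30
retail_premium = production_premium * 3              # the post's ~200% markup, ~$90-100
print(f"Production premium ~${production_premium}, retail premium ~${retail_premium}")
```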
  40. I only understood 40% of what you said ^_^ hehe, I'm too ignorant haha. :( You seriously know more than the people who take up computer engineering at university >.< I have never met anybody with as much computer knowledge as you... (Probably because I live in Brunei, a small country on the island of Borneo.)

    Seems like a GREAT idea, though; I wouldn't mind having to pay an extra $100 for that powerful integrated graphics. That would be able to smash AMD's APUs if they did it. It would also save plenty of money for many people. Instead of getting a 770/R9 280 for $300-400, they could just pay an extra $100 and get a really powerful processor with powerful integrated graphics at $500? That's my guess for the price if they made one.
  41. Which again proves that going to university to study computer engineering is useless when you can just use Google search/Tom's Hardware. LOL XD
  42. University is rarely a good path to a career in IT; professional qualifications and experience are far better, in my experience.
  43. CAaronD said:
    I only understood 40% of what you said ^_^ hehe, I'm too ignorant haha. :( You seriously know more than the people who take up computer engineering at university >.< I have never met anybody with as much computer knowledge as you...

    Seems like a GREAT idea, though; I wouldn't mind having to pay an extra $100 for that powerful integrated graphics. That would be able to smash AMD's APUs if they did it.

    Well, I do have about three years of actual experience in digital ASIC and programmable logic (FPGA/CPLD) engineering. Probably helps a little.

    Basically, the scary thing is that, due to Intel being over two years ahead of most others on process technology, they can afford to put something equivalent to mid/high-end AMD/Nvidia GPUs in their CPUs' waste space if they wanted to, and now that methods for tightly coupling memory with other ASICs are becoming more economically viable, memory bandwidth as a major bottleneck to entry-level graphics is about to crumble.

    AMD will buff their APU's IGPs too when they finally gain access to 20-22nm manufacturing and they could put some eDRAM on that if they wanted. A single 512MB Crystalwell-style chip would enable them to raise the entry-level bar from R7-250 to R7-260X standards and be quite acceptable for tons of casual gamers.