The Truth about Woodcrest revealed?

Hmmmm. Seems like Intel has been pulling our chains again. All the 30% SYNTHETIC benchmarks are only showing 5-15% real-world advantages when comparing a 3.0GHz Woodcrest to a 2.6GHz Opteron. Heck, clock for clock, they are suddenly looking very even. Gotta love marketing.

http://www.dailytech.com/article.aspx?newsid=2487
  1. Cool, so Intel is basically dead in the water with Woodcrest for servers because AMD has its secret weapon, HyperTransport. :twisted: And with K8L things should even out, right?
  2. Quote:
    Hmmmm. Seems like Intel has been pulling our chains again. All the 30% SYNTHETIC benchmarks are only showing 5-15% real-world advantages when comparing a 3.0GHz Woodcrest to a 2.6GHz Opteron. Heck, clock for clock, they are suddenly looking very even. Gotta love marketing.


    Gotta love skeptics. Go over to Tech Report and see everything. You'll see the advantages are 10-40% for most applications, with a 27-29% advantage in 3ds Max 8.

    LAME MP3 encoding: 30-35% faster. Speech recognition: 30-35% faster.

    Are they blind?? How is 279s for Woodcrest versus 357.7s for Opteron only 5-15%? That's about a 28% speedup.
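
    For what it's worth, here's that arithmetic spelled out (a minimal Python sketch; the two times are the ones quoted above):

    # Task times from the review (seconds; lower is better)
    woodcrest_s = 279.0
    opteron_s = 357.7

    # The same gap stated two ways:
    time_saved = (opteron_s - woodcrest_s) / opteron_s * 100   # ~22% less time
    speedup = (opteron_s / woodcrest_s - 1) * 100              # ~28% faster
    print(f"Woodcrest: {time_saved:.1f}% less time, {speedup:.1f}% higher throughput")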
  3. Yeah, but doesn't HyperTransport help AMD make up some of the slack?
  4. LOL
    In the long term, maybe. As long as the Intel FSB is not slowing the CPU down, no.
  5. Wait a minute, doesn't Intel have that dual independent bus thingy coming out soon?
  6. My point is that it seems the new Core uarch is optimized for desktop applications and that it will not be as good a SERVER chip as the hype would have suggested. This doesn't seem very surprising, as INTEL NEEDED to regain desktop footing. But the server battle is just beginning. Again, these reviews are 3.0GHz versus 2.6GHz. The 2.8GHz AMD is just beginning to ship and 3.0GHz will be here shortly.
  7. Quote:
    My point is that it seems the new Core uarch is optimized for desktop applications and that it will not be as good a SERVER chip as the hype would have suggested. This doesn't seem very surprising, as INTEL NEEDED to regain desktop footing. But the server battle is just beginning. Again, these reviews are 3.0GHz versus 2.6GHz. The 2.8GHz AMD is just beginning to ship and 3.0GHz will be here shortly.


    The majority of x86 servers are application servers and AD servers. Most database work is handled by RISK processors. Therefore Intel is strong where it needs to be. Do you guys actually work in the IT field, or do you just post here?
  8. Most people here do.
  9. I think it is quite obvious that the server market is moving AWAY from RISK systems, because it is simpler to have one server setup do ANY needed task and x86 servers are significantly cheaper than any current RISK competitor. I think streamlining would be the proper term. Personally, I feel that x86 should have died 10-15 years ago and a more efficient architecture should have taken over, but what I think doesn't affect IBM/SUN/INTEL/HP/DELL/AMD/etc.
  10. x86 dead? Never!
  11. Your reasoning on why it is still alive is....?

    I'm not saying it's terrible, but it could be much more efficient. Look at the bandwidth RISC processors can pump out at 300-500MHz. Plenty of other applications out there prove the same point. x86 is great for compatibility; that's about it. If the big-wigs could get together and agree on a new, highly compatible standard, we could see some HUGE gains in efficiency down the road. But I don't see that happening.
  12. Quote:
    Your reasoning on why it is still alive is....?

    I'm not saying it's terrible, but it could be much more efficient. Look at the bandwidth RISC processors can pump out at 300-500MHz. Plenty of other applications out there prove the same point. x86 is great for compatibility; that's about it. If the big-wigs could get together and agree on a new, highly compatible standard, we could see some HUGE gains in efficiency down the road. But I don't see that happening.
    I like x86; the name is nice.
  13. I have worked at several companies that do heavy database work. None have used anything Risk, and most are now using AMD (even the ones that used to standardize on Dell are now using IBM and HP). They all prefer clustering to trying to move things over to different systems.
  14. Quote:
    My point is that it seems the new Core uarch is optimized for desktop applications and that it will not be as good a SERVER chip as the hype would have suggested. This doesn't seem very surprising, as INTEL NEEDED to regain desktop footing. But the server battle is just beginning. Again, these reviews are 3.0GHz versus 2.6GHz. The 2.8GHz AMD is just beginning to ship and 3.0GHz will be here shortly.


    The majority of x86 servers are application servers and AD servers. Most database work is handled by RISK processors. Therefore Intel is strong where it needs to be. Do you guys actually work in the IT field, or do you just post here?
  15. They don't. Haven't you figured it out yet?
  16. Woodcrest, Conroe, and all the rest of the Core 2-based CPUs' engineering-sample reviews just tell us what we already knew from benching the Core Duo against the X2s and dual-core Opterons: clock for clock, the Core uarch is roughly 15% more efficient, and the K8 depends more on memory latency than on memory bandwidth.

    The real issue will be the price of the entire CPU/board setup once both Conroe and AM2 are in stock. Intel's 975X chipset boards (the only ones out so far that support Conroe) are extremely expensive, whereas the RD580 and nForce 570/590 are supposed to cost roughly as much as NF4/RD480 boards. If you have to spend $150 more to get the 2.6GHz AMD that matches a 2.33GHz Intel in performance, you come out even in the end.
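
    To put rough numbers on the "come out even" claim (a minimal sketch; the ~15% clock-for-clock figure is the one from this post, and the dollar totals are made-up placeholders, not real prices):

    CORE_VS_K8 = 1.15              # assumed clock-for-clock advantage of the Core uarch

    conroe_ghz = 2.33
    k8_equivalent = conroe_ghz * CORE_VS_K8
    print(f"2.33GHz Core ~ {k8_equivalent:.2f}GHz K8-equivalent")   # ~2.68, near a 2.6GHz AMD

    # Hypothetical platform totals: the cheaper AMD board is cancelled out
    # by the pricier CPU needed to reach performance parity
    intel_total = 600.0            # placeholder: expensive 975X board + 2.33GHz CPU
    amd_total = 750.0              # placeholder: cheap board + 2.6GHz CPU
    print(f"price delta at parity: ${amd_total - intel_total:.0f}")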
  17. I think the Tech Report's article on this topic was better coverage, especially since the DailyTech article referred to it. Woodcrest is surely compelling, especially the Blackford MCH. With Blackford, I did not notice any mention of a southbridge for subsystems and I/O, as it seems the PCIe bus is controlled by Blackford itself. Truly though, that design reminds me very much of nVidia's single-chip design for AMD... hmmm, coincidence?

    Some good ideas going on there; curious to see if and how Intel develops the Blackford MCH for the desktop. They hinted at a workstation version, the Green Creek, but offered nothing beyond a mention of it. Would be interested in seeing how the Green Creek PCIe connectivity performs with a PCIe x16 GPU or an SLI setup.
  18. Quote:
    I have worked at several companies that do heavy database work. None have used anything Risk, and most are now using AMD (even the ones that used to standardize on Dell are now using IBM and HP). They all prefer clustering to trying to move things over to different systems.


    By heavy database I assume you mean SQL on x86? If that's the case, then I wouldn't consider it heavy database work. For at least the next 5-10 years, that is the realm of Oracle.
  19. Tech Report benches:

    I was going to pull this data from the site, but someone had already done it, so I just copied it. I think the one thing that stands out is the power consumption on the Woodcrest. The author writes, "That CPU just isn't drawing much power at all. Under load, the difference is 59 W—much less than the 80 W TDP of the 3GHz Woodcrest, and under the 65 W target for the lower-speed Woodcrest processors." WOAHHH, talk about OC potential. (There's a quick sanity check of the percentages at the end of this post.)

    Cinebench render x64 multi: 12.5%
    Cinebench render x64 single: 17.3%
    Cinebench render x86 multi: 22.2%
    Cinebench render x86 single: 25.8%
    Cinebench shading x64: 15.2%
    Cinebench shading x86: 11.9%
    3ds Max render: 22%
    3ds Max mental ray: 21.5%
    Windows Media Encoder: 5.5%
    DivX 6: 15.9%
    LAME CBR MS single: 27.7%
    LAME CBR MS multi: 26.1%
    LAME CBR Intel single: 28.6%
    LAME CBR Intel multi: 28%
    LAME VBR MS single: 17.2%
    LAME VBR MS multi: 5.2%
    LAME VBR Intel single: 24.1%
    LAME VBR Intel multi: 16.6%
    Sphinx MS: 23%
    Sphinx Intel: 24.8%
    picCOLOR overall: 34%

    2.6GHz Opteron -> 3.0GHz at 100% clock scaling: 15.4%

    Memory
    Latency: 24.5% worse on Woodcrest

    Power consumption
    Idle without power management: 19.3% better on Woodcrest
    100% load: 5% better on Woodcrest
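
    A quick sanity check on the figures above (a minimal sketch; the percentages are the ones listed in this post):

    from statistics import geometric_mean

    # Woodcrest 3.0GHz advantage over Opteron 2.6GHz, per benchmark (percent)
    advantages = [12.5, 17.3, 22.2, 25.8, 15.2, 11.9, 22.0, 21.5, 5.5, 15.9,
                  27.7, 26.1, 28.6, 28.0, 17.2, 5.2, 24.1, 16.6, 23.0, 24.8, 34.0]

    # Typical advantage = geometric mean of the per-test speedup ratios
    typical = (geometric_mean([1 + a / 100 for a in advantages]) - 1) * 100
    print(f"typical Woodcrest advantage: {typical:.1f}%")

    # What perfect clock scaling from 2.6GHz to 3.0GHz would buy the Opteron
    print(f"clock headroom: {(3.0 / 2.6 - 1) * 100:.1f}%")   # ~15.4%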
  20. The dual independent bus is to run dual-socket CPUs from one chipset, sending one FSB to each socket. Before, Intel Xeon DP chipsets used a single shared FSB to feed both CPUs, which is why they were beaten so badly by AMD. DIB could be used on a desktop chip, with each core having its own FSB, but apparently Intel thinks a 1066 or 1333MHz FSB is sufficient to feed two cores, so it uses a single shared FSB. I'd be inclined to believe them, as the Core Solo with a 667MHz FSB and the Pentium M with its 533MHz FSB benchmark just about the same at the same clock speed.

    Now I would expect Intel to use DIB to feed each pair of cores on quad-core chips, as a 1333MHz shared FSB would leave each core only a 333MHz share of the bus. Sure, they could jack the bandwidth up a lot, but running a lot of high-frequency lines on the motherboard leads to other issues, like capacitance, inductance, and other RF-related problems. Intel also likes multi-chip modules for manufacturing ease, and using two independent FSBs on one package would let Intel just plop two dual-core dies into a processor package without any rework whatsoever. It's an inelegant but cheap and probably very workable solution, at least for a little while.
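
    The per-core arithmetic there, spelled out (a minimal sketch; the post's "MHz" figures are really transfer rates, but the ratio is the point):

    fsb_rate = 1333                            # quad-pumped FSB, MT/s
    cores = 4

    shared_per_core = fsb_rate / cores         # one bus shared by all four cores
    dib_per_core = fsb_rate / (cores / 2)      # DIB: one bus per dual-core die

    print(f"single shared FSB: {shared_per_core:.0f} MT/s per core")   # ~333
    print(f"DIB, two FSBs:     {dib_per_core:.0f} MT/s per core")      # ~667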

    Intel is known for using inferior, inelegant methods to get the job done (their photolithography, silicon processing, use of FSBs instead of IMCs, multi-chip modules, shared FSBs in server chipsets and in dual-core chips), but it means they can generally do the job much cheaper than AMD can with its technically superior processes. Intel may have made some mistakes in the past, but they are certainly not stupid, and they have profit margins roughly 10 times AMD's (AMD's 3-4% vs. Intel's ~40%) because of this, among other factors.
  21. Quote:
    The dual independent bus is to run dual-socket CPUs from one chipset, sending one FSB to each socket. Before, Intel Xeon DP chipsets used a single shared FSB to feed both CPUs, which is why they were beaten so badly by AMD. DIB could be used on a desktop chip, with each core having its own FSB, but apparently Intel thinks a 1066 or 1333MHz FSB is sufficient to feed two cores, so it uses a single shared FSB. I'd be inclined to believe them, as the Core Solo with a 667MHz FSB and the Pentium M with its 533MHz FSB benchmark just about the same at the same clock speed.

    Now I would expect Intel to use DIB to feed each pair of cores on quad-core chips, as a 1333MHz shared FSB would leave each core only a 333MHz share of the bus. Sure, they could jack the bandwidth up a lot, but running a lot of high-frequency lines on the motherboard leads to other issues, like capacitance, inductance, and other RF-related problems. Intel also likes multi-chip modules for manufacturing ease, and using two independent FSBs on one package would let Intel just plop two dual-core dies into a processor package without any rework whatsoever. It's an inelegant but cheap and probably very workable solution, at least for a little while.

    Intel is known for using inferior, inelegant methods to get the job done (their photolithography, silicon processing, use of FSBs instead of IMCs, multi-chip modules, shared FSBs in server chipsets and in dual-core chips), but it means they can generally do the job much cheaper than AMD can with its technically superior processes. Intel may have made some mistakes in the past, but they are certainly not stupid, and they have profit margins roughly 10 times AMD's (AMD's 3-4% vs. Intel's ~40%) because of this, among other factors.
    Thank you, man. BTW, do you know when Intel will use the dual independent bus?
  22. It is being used on the dual-processor Woodcrest chipsets that are supposed to be out by the end of the year. I don't know when or if it will show up on the desktop, but I wouldn't expect it until we start to see quad-core desktop chips, as Intel has been very firm that a 1066/1333MHz shared FSB is enough for its desktop dual-cores.
  23. Thank you. You are one of the smartest guys here, you know that?
  24. I'm sorry, but the role of AMD fanboy is already filled here.
  25. Quote:
    I'm sorry, but the role of AMD fanboy is already filled here.
    Me?
  26. No, you're the forum idiot/noob.

    9-inch is the lead AMD fanboy (now that LMM has departed); the horde are the junior AMD fanboys.
  27. Quote:
    No, you're the forum idiot/noob.

    9-inch is the lead AMD fanboy (now that LMM has departed); the horde are the junior AMD fanboys.
    Thank you. But leave all the other noobs alone, OK?
  28. Quote:
    (...) Intel also likes multi-chip modules for manufacturing ease, and using two independent FSBs on one package would let Intel just plop two dual-core dies into a processor package without any rework whatsoever. It's an inelegant but cheap and probably very workable solution, at least for a little while.


    Unless you know better, Woodcrest/Conroe/Merom are not individual dies linked together but true dual-core dies. As regards multi-core, glueless-logic engineering, yes, it's tougher to merge two or more cores into a single die; that's why IBM's POWER5 is an MCM, for instance, and it's tougher for every chip manufacturer. As for yield, I'm sure a four-core single die would be more densely packed than a 2+2 die solution...

    Quote:
    Intel is known for using inferior, inelegant methods to get the job done (their photolithography, silicon processing, use of FSBs instead of IMCs, multi-chip modules, shared FSBs in server chipsets and in dual-core chips), but it means they can generally do the job much cheaper than AMD can with its technically superior processes. Intel may have made some mistakes in the past, but they are certainly not stupid, and they have profit margins roughly 10 times AMD's (AMD's 3-4% vs. Intel's ~40%) because of this, among other factors.


    Can you prove Intel's manufacturing processes are inferior? A more complex (and sometimes less efficient) manufacturing process is not equivalent to a better process (from sand to chip); an IMC is far better (though not necessarily more complex!) than a FSB. Now, do you have any idea how hard it will be to implement 2-4 IMCs in a single 4-core die?

    My point: AMD is - by no means - an inferior company. Being about 10 times smaller than Intel, it has, together with IBM, reached admirable technological heights. It is still an admirable company.

    Does that mean Intel is technologically inferior and, hence, cheaper?


    Cheers!
  29. Intel's Presler is a multi-chip module: two single-core Cedar Mill dies stuck together. Smithfield and Conroe are single-die dual-cores. Intel wanted to get Smithfield to market fast to beat AMD's Athlon X2 series and undercut it in price as well. Once Smithfield was shipping, the MCM approach let Intel get good yields on its new 65nm process right away with Presler.

    The MCM approach can give better yields on a new manufacturing process: you can take known-good single-core dies, put them together, and be assured of a working dual-core chip. If one core on a true dual-core die goes bad, you are left with a functional single-core chip and your dual-core yield suffers. But yes, once you have the manufacturing process down pat, better yields are had with the more densely packed solution, as silicon real estate is expensive.
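
    The yield trade-off is easy to put numbers on (a minimal sketch; the defect density and die area are made-up illustrative values, not anyone's real figures):

    import math

    defect_density = 0.5       # killer defects per cm^2 (hypothetical)
    core_area = 0.8            # cm^2 per single-core die (hypothetical)

    # Poisson yield model: chance a given core comes out defect-free
    core_yield = math.exp(-defect_density * core_area)     # ~67%

    # Monolithic dual-core: both cores on the die must be good
    mono_dual = core_yield ** 2                            # ~45%

    # MCM: single-core dies are tested first and only known-good ones are
    # paired, so every good core can ship in a working dual-core part
    print(f"per-core yield:            {core_yield:.0%}")
    print(f"monolithic dual-core dies: {mono_dual:.0%} fully good")
    print(f"MCM: all of the {core_yield:.0%} good cores are pairable")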

    Intel's manufacturing technology is less advanced than AMD's. They simply use strained silicon as their substrate, not strained silicon-on-insulator like AMD and IBM do. This means they have more current leakage at the same transistor gate size. Hence Intel ran out of headroom on 90nm much earlier than AMD did: Intel never clocked a short-pipeline 90nm chip above 2.26GHz (the Pentium M 775), whereas AMD is making 3.0GHz Opterons on 90nm with the same 12-stage pipeline as that Pentium M. Intel has announced a 3.0GHz Conroe, but that is on 65nm, and Conroe's pipeline is two stages longer.

    And yes, Intel has cheaper chips, and their manufacturing methods promote this. That's how they can make a profit on $130 dual-core CPUs.
  30. Quote:
    Intel's Presler is a multi-chip module: two single-core Cedar Mill dies stuck together. Smithfield and Conroe are single-die dual-cores. Intel wanted to get Smithfield to market fast to beat AMD's Athlon X2 series and undercut it in price as well. Once Smithfield was shipping, the MCM approach let Intel get good yields on its new 65nm process right away with Presler.

    The MCM approach can give better yields on a new manufacturing process: you can take known-good single-core dies, put them together, and be assured of a working dual-core chip. If one core on a true dual-core die goes bad, you are left with a functional single-core chip and your dual-core yield suffers. But yes, once you have the manufacturing process down pat, better yields are had with the more densely packed solution, as silicon real estate is expensive.


    Certainly. When I addressed the manufacturing process (from sand to chip), I was referring to the best case, where both manufacturers have achieved a mature output yield and optimum RAS, as you mention in your last paragraph. Moreover, there will always be die issues; if a core "dies" during manufacturing, the chip will still be sold as a downgraded version, as you know. Again, this is true for any manufacturer. Actually, JumpingJack has a very enlightening analysis of manufacturing vs. yields vs. costs (http://forumz.tomshardware.com/hardware/Conroe-amp-AM2-Discussionftopic-178020-days0-orderasc-825.html), if you care to have a peek.

    Quote:
    Intel's manufacturing technology is less advanced than AMD's. They simply use strained silicon as their substrate, not strained silicon-on-insulator like AMD and IBM do. This means they have more current leakage at the same transistor gate size. Hence Intel ran out of headroom on 90nm much earlier than AMD did: Intel never clocked a short-pipeline 90nm chip above 2.26GHz (the Pentium M 775), whereas AMD is making 3.0GHz Opterons on 90nm with the same 12-stage pipeline as that Pentium M. Intel has announced a 3.0GHz Conroe, but that is on 65nm, and Conroe's pipeline is two stages longer.

    And yes, Intel has cheaper chips, and their manufacturing methods promote this. That's how they can make a profit on $130 dual-core CPUs.


    Actually, when you get down to basics, the manufacturing process subdivides into a myriad of processes, from sand to chip, as I put it.
    Some of the drawbacks of the IBM/AMD SOI process are related to its difficult (& costly) implementation, and, as good as SOI is as a dielectric, it also does a fine job as a thermal insulator, which means heat accumulates at the transistor level; moreover, the IBM/AMD process technology needs four stressor steps, while Intel needs two (more stressor steps add cost, complexity & time-to-market...). AMD achieved a magnificent technological plateau with the K8 uArch while Intel was losing time on an experimental microarchitecture, "NetBurst". But that was/is at the uArch level; at the transistor process level (again, from sand to chip), Intel always beat IBM/AMD at the same node, 90nm: http://www.realworldtech.com/includes/templates/articles.cfm?ArticleID=RWT123005001504&mode=print (page 14, Table 2).
    As you can see for yourself, Intel (2003) was able to achieve an Idsat (Ion) largely superior to IBM/AMD's (2004), meaning a faster transistor switch and also, indirectly, allowing faster clock speeds (Intel's performance was hampered by the chosen uArch approach, not by being technologically inferior). Among many other (costly & time-consuming) issues with the IBM/AMD process, Intel also managed to produce 8-metal-layer chips against AMD's 10 layers; and even the current node transition to 65nm was performed with the same 193nm-wavelength "dry" lithography Intel used to produce 90nm transistors.
    So, when you state that «Intel ran out of headroom on 90nm much earlier than AMD did: Intel never clocked a short-pipeline 90nm chip above 2.26GHz (the Pentium M 775), whereas AMD is making 3.0GHz Opterons on 90nm with the same 12-stage pipeline as that Pentium M. Intel has announced a 3.0GHz Conroe, but that is on 65nm, and Conroe's pipeline is two stages longer.», it comes across as something of a contradiction: if Intel had chosen a wider uArch at 90nm, it would surely have beaten the IBM/AMD 90nm node process, simply because Intel had the better 90nm process technology at the time, using plain strained silicon!

    As for the near future, I'm sure both manufacturers will surprise us, bringing new materials, techniques & processes that increase both performance and power savings at "reasonable" cost, pushing chip technology to the next level.
    Meanwhile, Intel is ahead on almost all fronts (1H 2006).


    Cheers!
  31. Yes, I know that mobile chips generally don't get clocked very high. But might it not make sense to push the clock a little on certain higher-TDP chips destined for the huge 17" gaming notebooks? Those run desktop Pentium 4 chips, for crying out loud! You would have to clock a Pentium M far above 2.26GHz to equal the heat output of the 3.2-3.8GHz Prescotts (or Athlon 64 X2s and FXs) that were being stuck in those beasts.

    I am not saying that Intel couldn't produce chips at that speed on the 90nm node, but it is odd that they did not even try, for the aforementioned reason.
  32. They probably will do that now that they've got one architecture.
  33. Action Man:
    Quote:
    No, you're the forum idiot/noob.


    dvdpiddy's response:
    Quote:
    Thank you.


    @Action Man:
    You crack my $h!7 up, man.

    @dvdpiddy:
    I like you just as much as I do Action Man. I'm not laughing at you, I'm laughing with you. (I hope.)
  34. Quote:
    The dual independent bus is to run dual-socket CPUs from one chipset, sending one FSB to each socket. Before, Intel Xeon DP chipsets used a single shared FSB to feed both CPUs, which is why they were beaten so badly by AMD. DIB could be used on a desktop chip, with each core having its own FSB, but apparently Intel thinks a 1066 or 1333MHz FSB is sufficient to feed two cores, so it uses a single shared FSB. I'd be inclined to believe them, as the Core Solo with a 667MHz FSB and the Pentium M with its 533MHz FSB benchmark just about the same at the same clock speed.

    Now I would expect Intel to use DIB to feed each pair of cores on quad-core chips, as a 1333MHz shared FSB would leave each core only a 333MHz share of the bus. Sure, they could jack the bandwidth up a lot, but running a lot of high-frequency lines on the motherboard leads to other issues, like capacitance, inductance, and other RF-related problems. Intel also likes multi-chip modules for manufacturing ease, and using two independent FSBs on one package would let Intel just plop two dual-core dies into a processor package without any rework whatsoever. It's an inelegant but cheap and probably very workable solution, at least for a little while.

    Intel is known for using inferior, inelegant methods to get the job done (their photolithography, silicon processing, use of FSBs instead of IMCs, multi-chip modules, shared FSBs in server chipsets and in dual-core chips), but it means they can generally do the job much cheaper than AMD can with its technically superior processes. Intel may have made some mistakes in the past, but they are certainly not stupid, and they have profit margins roughly 10 times AMD's (AMD's 3-4% vs. Intel's ~40%) because of this, among other factors.



    That could not possibly be any further from the truth. I'd elaborate, but Jack already took care of the light work =P I need to go check my wafers on the wet station. . .
  35. I must not have gotten it from as reliable a source as you guys did. :oops:
  36. Quote:
    I must not have gotten it from as reliable a source as you guys did. :oops:


    That's OK. At least you don't seem to be unconditionally biased, like many in this forum.
    I'm more often wrong than right, overall; anyway, it's fantastic to learn how things work & speculate about it... even if it's just for the fun of it.
    Of course, I get - pertinently - corrected most of the time. :wink:

    One of the IBM/AMD process stressor steps in particular, stress memorization, seems amazingly complex:

    The stress memorization technique to enhance channel was discussed in depth in a paper published in the VLSI Technology Symposium, 2004 by Chen et. al. from TSMC [8]. In the stress memorization technique, a nitride layer is selectively and temporarily deposited on top of the gate electrode to provide a high level of tensile stress to the channel in NMOS transistors. The high-tensile nitride layer is eventually removed, but only after source and drain activation and the wafer undergoes a carefully controlled poly amorphorization and recrystallization procedure. In essence, the nitride layer acts as a stressing agent that holds the conduction channel in a stressed state. The silicon wafer is then heated and allowed to cool down in a carefully controlled annealing process. The annealing process locks in the poly silicon crystals in a state that continues to hold the stress on the conduction channel even after the original stressing agent, the selectively deposited nitride layer, is removed. The silicon is thus said to have “memorized” the stressed state.
    http://www.realworldtech.com/page.cfm?ArticleID=RWT123005001504&p=7


    Cheers!
  37. No, I'm not very biased. In my brother's words, I'm a "fair-weather fan": I tend to support whoever currently makes the goods that are most effective for what I need to do.

    Well, I guess I am biased about one thing, though: I much prefer open, standard specifications for hardware, software, and data, even if a proprietary spec may be a bit better. I'd much rather put up with a slightly less effective product that works with anything than something a little more effective but overly locked up and locked in, and likely a ton more expensive.
  38. It's no big deal, dude, but to make your name MU_Engineer and then blindly spit something like that out of your keyboard makes me question your communication skills, at the least. Get what I'm putting down here?
  39. Quote:
    No, I'm not very biased. In my brother's words, I'm a "fair-weather fan": I tend to support whoever currently makes the goods that are most effective for what I need to do.

    Well, I guess I am biased about one thing, though: I much prefer open, standard specifications for hardware, software, and data, even if a proprietary spec may be a bit better. I'd much rather put up with a slightly less effective product that works with anything than something a little more effective but overly locked up and locked in, and likely a ton more expensive.


    Yeah, that's the word: preference... which most people mix up with blind fanaticism.


    Cheers!
  40. We are all biased towards certain things. I bet that you and most others here are biased towards building and messing with computers yourselves versus buying a unit off the shelf and paying somebody else to fix or upgrade it, am I not correct? Otherwise we probably would not be here.

    But there is a continuum of preference, anywhere from "it would be a little nicer, but if I can't get it, oh well" to "there's no way I would get that if it were the last thing on earth." I simply get frustrated when I can't get things to work together because one part is some funky design and you have to pay $200 for a $50 PSU that will work with that particular computer, or my music player of choice won't play a certain file, etc.
  41. Quote:
    We are all biased towards certain things. I bet that you and most others here are biased towards building and messing with computers yourselves versus buying a unit off the shelf and paying somebody else to fix or upgrade it, am I not correct? Otherwise we probably would not be here.

    But there is a continuum of preference, anywhere from "it would be a little nicer, but if I can't get it, oh well" to "there's no way I would get that if it were the last thing on earth." I simply get frustrated when I can't get things to work together because one part is some funky design and you have to pay $200 for a $50 PSU that will work with that particular computer, or my music player of choice won't play a certain file, etc.


    In general, we're all biased towards this or that; that's not the point, though. I suppose that the more you learn about something of interest to you, the less biased you become (even if you still have a preference!); this should hold universally.
    However, when you (in the abstract) blindly insist on a subject you hardly know about, and it leads to extremes just because, that's not merely "biased"; that's voluntary stupidity.


    Cheers!
  42. Quote:
    Action Man:
    No, you're the forum idiot/noob.


    dvdpiddy's response:
    Quote:
    Thank you.


    @Action Man:
    You crack my $h!7 up, man.

    @dvdpiddy:
    I like you just as much as I do Action Man. I'm not laughing at you, I'm laughing with you. (I hope.)

    Word.
  43. Quote:

    The majority of x86 servers are application servers and AD servers. Most database work is handled by RISK processors. Therefore Intel is strong where it needs to be. Do you guys actually work in the IT field, or do you just post here?


    Guys, are you all talking about RISC processors? :roll: