Is Intel Gonna Screw Us With The LGA 2011 CPUs?

January 19, 2011 4:27:23 PM

I hope the LGA 2011 CPUs can be OC'd like the current i7 CPUs.
January 19, 2011 4:49:47 PM

And how about the X68 chipset? Will we again be restricted to dual x8 mode in SLI/CrossFire? I'm planning on running a 3x24" monitor setup with tri-SLI/CrossFire, which will be bottlenecked at higher resolutions. I wish they'd go with x16/x16/x8, or better yet all three at x16 PCIe. I could have built a 1155 rig with the Asus M4E, but I'd like to use a discrete sound card, as I don't like the integrated one.
January 19, 2011 5:00:05 PM

Why would Intel restrict either overclocking or PCI-E lanes on their high end chipset?

Look at LGA1366 and X58 if you want to see what Intel does with the high end. They overclock like mad, and have more than enough PCI-E lanes to go around.
January 19, 2011 5:38:02 PM

The high end is usually fine. Intel seems to have taken a leaf out of AMD's book with regards to OC'ing and seems to be getting more relaxed about it all. Who knows, maybe one day no CPUs will be locked :)
January 19, 2011 6:13:27 PM

kinth said:
The high end is usually fine. Intel seems to have taken a leaf out of AMD's book with regards to OC'ing and seems to be getting more relaxed about it all. Who knows, maybe one day no CPUs will be locked :)


Intel is more relaxed about overclocking? That's totally not the case. Yes, Intel is giving us unlocked chips, but the cheapest one they offer costs $215. If you want to buy a cheap CPU for ~$100-$150 and overclock it to extract value, you get screwed by the locked CPU multipliers and locked BCLK. Intel is only offering the unlocked SB CPUs because, if they didn't, there would have been a revolt, since otherwise no SB CPU could be overclocked at all.

I still think Sandy Bridge CPUs are quite good, but if you're a budget builder, your options for an Intel CPU are now severely limited with SB.

LGA 2011 sounds like it will be easier to overclock because, from what I've heard, the BCLK won't be tied to things like the SATA bus and PCIe bus, so BCLK overclocking will be an option on LGA 2011. I'm worried, though, that the entry-level CPUs on LGA 2011 are going to cost around $280 like they did with LGA 1366.
January 19, 2011 6:31:49 PM

^+1

I agree, and when I say relaxed I mean they actually have some sort of unlocked chip these days, which is relaxed for Intel :p

When people are paying out the nose for a CPU, though, they should be able to do what they want with it. But I guess Intel wants to make *** loads of money, and they know that as long as they have the enthusiast market they won't have to change.
January 19, 2011 7:05:57 PM

jprahman said:
Intel is more relaxed about overclocking? That's totally not the case. Yes, Intel is giving us unlocked chips, but the cheapest one they offer costs $215. If you want to buy a cheap CPU for ~$100-$150 and overclock it to extract value, you get screwed by the locked CPU multipliers and locked BCLK. Intel is only offering the unlocked SB CPUs because, if they didn't, there would have been a revolt, since otherwise no SB CPU could be overclocked at all.

I still think Sandy Bridge CPUs are quite good, but if you're a budget builder, your options for an Intel CPU are now severely limited with SB.

LGA 2011 sounds like it will be easier to overclock because, from what I've heard, the BCLK won't be tied to things like the SATA bus and PCIe bus, so BCLK overclocking will be an option on LGA 2011. I'm worried, though, that the entry-level CPUs on LGA 2011 are going to cost around $280 like they did with LGA 1366.


LGA 2011 easier to overclock? People are consistently getting 4.8 with just a mouse click in the EFI BIOS, and 5.2 without the easy-overclock function. Hell, I don't need my computer to levitate or become self-aware; I just want some really good performance. The fastest I've been on my i7-860 is 4.2 because I'm limited to air, but when Newegg finally gets the Asus Maximus Extreme back in, I'll be putting that i7-2600K under water.

Please explain how it's easier.

With LGA 1155 having great features on mobos, I'm really interested to see what LGA 2011 is going to do to earn the "enthusiast" label. It's already been proven that without a full 16 lanes of PCI Express you only lose the 2 to 5% that shows up in benchmarks, so a full 16 lanes means nothing to my SLI setup.

January 19, 2011 7:41:37 PM

I hate when people do this... even if LGA 2011 beats 1155, is somehow magically even cheaper than LGA 1155, and outperforms it... LGA 1155 is STILL going to be a good mobo/CPU, it is STILL going to last you for many years, and with the right GPU it is STILL going to max out all games out right now. So no, nobody is getting screwed.

It's just technology evolving, and every improvement is a good thing. If you want to keep stuff that lasts and goes up in value with age, get into antiques and collecting baseball cards or something... if you're into computers you should WELCOME change and improvement.
January 19, 2011 7:52:50 PM

So it looks like Intel is going to use the 1155 socket for a bit. Am I correct? Because I really want to get the i7-2600K.
January 19, 2011 8:20:33 PM

I'm talking about people building $700 gaming machines who want to save money on a CPU so they can spend more on graphics and other components. I don't mind the prices on Sandy Bridge CPUs personally; it's just that Sandy Bridge doesn't offer a lot of options for anybody who wants an entry-level gaming system. That's where AMD comes in.

As to LGA 2011 being more easily overclocked, I didn't make it clear enough. What I meant is that it's more "accessible" because you don't have to buy a CPU with an unlocked multiplier to overclock. You have a greater level of flexibility, but obviously it's going to be pricey. The multiplier-unlocked LGA 1155 CPUs are easier to overclock, but you're stuck using only multiplier-unlocked CPUs.

And believe me, I'm very happy that Intel took a hint from AMD and released multiplier-unlocked CPUs; it's just a shame they only did it at the higher end and took away budget overclocking.
January 19, 2011 8:36:41 PM

binoyski said:
And how about the X68 chipset? Will we again be restricted to dual x8 mode in SLI/CrossFire? I'm planning on running a 3x24" monitor setup with tri-SLI/CrossFire, which will be bottlenecked at higher resolutions. I wish they'd go with x16/x16/x8, or better yet all three at x16 PCIe. I could have built a 1155 rig with the Asus M4E, but I'd like to use a discrete sound card, as I don't like the integrated one.


Actually, LGA 2011, from what I can find, will be hosting 40 PCIe 3.0 lanes. A PCIe 3.0 x16 slot will have 2x the bandwidth of a PCIe 2.0 x16 slot, much like it takes PCIe 1.0 x16 to equal PCIe 2.0 x8. So with 40 lanes you could run x16/x16/x8, which would still give better bandwidth than a full x16/x16/x16 PCIe 2.0 setup.
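To put rough numbers on that (a back-of-the-envelope sketch; the per-lane rates are the commonly quoted usable figures, not anything Intel has confirmed for LGA 2011):

```python
# Approximate usable bandwidth per PCIe lane, per direction, in GB/s.
PER_LANE_GBS = {
    "1.0": 0.25,  # 2.5 GT/s with 8b/10b encoding
    "2.0": 0.5,   # 5.0 GT/s with 8b/10b encoding
    "3.0": 1.0,   # 8.0 GT/s with 128b/130b encoding (~0.985, rounded)
}

def slot_bandwidth_gbs(gen, lanes):
    """Usable bandwidth of one slot in GB/s."""
    return PER_LANE_GBS[gen] * lanes

# x16/x16/x8 on PCIe 3.0 vs. x16/x16/x16 on PCIe 2.0:
pcie3_tri = sum(slot_bandwidth_gbs("3.0", n) for n in (16, 16, 8))   # 40.0
pcie2_tri = sum(slot_bandwidth_gbs("2.0", n) for n in (16, 16, 16))  # 24.0
print(pcie3_tri, pcie2_tri)  # 40 GB/s vs. 24 GB/s total
```

So even the x8 slot in that layout would match a PCIe 2.0 x16 slot.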

Still, there are no real GPUs out there that even tap the full bandwidth of a PCIe 1.0 x16 slot, apart from dual-GPU boards, and even then it's marginal. Hell, LGA 2011 is not even set to have Intel HD graphics; it's set to be just the CPU itself.

As for overclocking, it's hard to say. I am sure it will have the same setup as LGA 1155, with all the parts being on-die and tied to the BCLK, but there will probably mainly be unlocked versions, since the high end is always known for overclocking.

As for discrete sound, you can always use a discrete card. Just disable the onboard one in the BIOS like I did when I got my Creative X-Fi.

The other advantage that LGA 2011 will have is tri-channel DDR3, so memory-intensive applications will love it. It's supposed to have something like 51GB/s of memory bandwidth, due to using QPI and the third channel instead of DMI 2.0 and dual channel. But other than memory-loving applications, LGA 1155 will probably be a good setup. Even with just x8/x8/x8 PCIe 2.0 it would be fine, as there is pretty much no difference between that and full x16 PCIe 2.0 on current GPUs, and probably won't be for a good while.
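For reference, the ~51GB/s figure works out if you assume tri-channel DDR3-2133 (a sketch; the actual supported memory speed is still speculation):

```python
# Peak DDR3 bandwidth: transfers/s x bytes per transfer x channels.
def ddr3_bandwidth_gbs(mt_per_s, channels, bus_width_bits=64):
    return mt_per_s * 1e6 * (bus_width_bits // 8) * channels / 1e9

print(ddr3_bandwidth_gbs(2133, channels=3))  # ~51.2 GB/s, tri-channel
print(ddr3_bandwidth_gbs(1333, channels=2))  # ~21.3 GB/s, typical LGA 1155
```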

Hell, AGP 8X wasn't even a bottleneck when PCIe 1.0 came out, and probably didn't become one until the nVidia 8 / ATI HD3K series, really.
January 19, 2011 8:41:30 PM

I've even heard that LGA2011 will have quad channel memory, but that may have been just a rumor.
January 19, 2011 8:45:09 PM

So unless you're making The Incredibles 2 in your basement, why would you pay for LGA 2011?
January 19, 2011 8:51:20 PM

Bragging rights. lol. Seriously, if you're running 3 or more video cards, or want 8 cores for rendering, video transcoding, photo and video editing, and other things like that, LGA 2011 would be worth it.
January 19, 2011 8:51:52 PM

LGA 2011 will have a quad-channel variant. For the high-end desktop (basically the LGA 1366 replacement) it will have tri-channel, but for the workstation and server market it will have quad-channel, plus 2x QPI instead of 1x QPI.

Of course, it's all rumored, so we will have to wait for Intel to release official specs.

silky salamandr said:
So unless you're making The Incredibles 2 in your basement, why would you pay for LGA 2011?


Because you can. Same reason why LGA 1366 is pointless for almost everything but will probably last longer than LGA 1156 in terms of performance down the road, due to higher bandwidth and faster PCIe lanes.

LGA2011 will be the same.
January 19, 2011 8:53:46 PM

It's all about ROI. As you go for that last bit of performance, you suffer the law of diminishing returns. The cost ratio of the top CPU to the 2nd best is always much larger than their relative performance... the top video card that gets you 10% more performance is not going to cost 10% more; it's always going to cost 50% more.

Same thing here... the general consensus a year ago was that the impact of x16/x16 over x8/x8 was about 2%, and today's top-end cards are faster than they were a year ago. However, when an x16/x16 P67 mobo is $30 more than an x8/x8 one, that's an increase of 1.5% on a $2k system. So can you argue that it's foolish to pay 1.5% more for a system to go 2% faster? I can't.
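Putting those numbers together (using the figures from this post, not new measurements):

```python
# Cost vs. performance for the x16/x16 board premium discussed above.
system_cost = 2000.0   # the $2k system in the example
mobo_premium = 30.0    # x16/x16 P67 board over an x8/x8 one
perf_gain = 0.02       # ~2% from x16/x16 over x8/x8

cost_increase = mobo_premium / system_cost
print(f"cost +{cost_increase:.1%}, performance +{perf_gain:.0%}")
# cost +1.5%, performance +2% -> paying 1.5% more for ~2% more speed
```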

January 19, 2011 9:04:56 PM

binoyski said:
And how about the X68 chipset? Will we again be restricted to dual x8 mode in SLI/CrossFire?


You're not restricted on the 1155; you can do x16/x16 SLI now as long as the board has an NF200 chip:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

Quote:
I'm planning on running a 3x24" monitor setup with tri-SLI/CrossFire, which will be bottlenecked at higher resolutions. I wish they'd go with x16/x16/x8, or better yet all three at x16 PCIe. I could have built a 1155 rig with the Asus M4E, but I'd like to use a discrete sound card, as I don't like the integrated one.


With two cards, I'm pretty comfy with SLI or Xfire, and today's cards from both camps are pretty well matched. At 3 cards, I'd stick w/ SLI.

http://www.guru3d.com/article/radeon-hd-6850-6870-cross...

Quote:
The one recommendation we always gave you guys is to keep it simple at 2 GPUs maximum, as after 2 GPUs in a CrossfireX setup you quickly run into weird anomalies that can be irritating...

So over the years multi-GPU support has improved quite a bit; AMD still isn't up to snuff at the level of NVIDIA though. Multi-GPU support still literally and directly remains the Achilles heel of ATI's Catalyst drivers...

It's like this with ATI: once you pass 2 GPUs you'll often find yourself compromising a lot with new game titles versus multi-GPU support.

January 19, 2011 9:19:08 PM

Answer to the topic question: Yes, and we're going to jump on it like a b!tch in heat.
Quote from WHAT movie?
January 19, 2011 9:24:59 PM

Quote:
Jimmy, 2011 will have quad-channel memory and a max memory bandwidth of 56 GB/s


I know the memory bandwidth is supposed to be insane, but I have heard that there will be two configurations for LGA 2011: one for the high-end desktop using 3 channels of DDR3, and one for the workstation and high-end server market with 4 channels of DDR3. That makes sense, since one will have 1x QPI and the other 2x QPI.

Of course, I will wait for official news from Intel, but it's nice to speculate about these things unless you have an article.
January 19, 2011 9:38:07 PM

Quote:
They have taken away the option of buying a cheaper chip and then overclocking the *** out of it for a big performance gain. Now you buy the unlocked one; there's no need to OC it, it's a faster design which already runs past 3.2GHz and boosts itself to 3.8GHz. So the days of the E5200 are over until Gigabyte finds a way to circumvent it...


They didn't just take it away; they had to. Integrating the PCIe, SATA, and a whole slew of other controllers meant they would be tied to the BCLK, which is set to 100MHz. If AMD does the same, they will end up in the same position for OCing, with only unlocked-multiplier versions, which I would assume might be why they are planning FX versions for Bulldozer.
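Here's a rough sketch of why (the 100MHz base and the idea that everything derives from it are from public reports; the multiplier values are just illustrative):

```python
# On Sandy Bridge, the CPU, PCIe, and SATA clocks all derive from
# one base clock, so raising the BCLK overclocks everything at once.
def derived_clocks(bclk_mhz, cpu_mult=34):
    return {
        "cpu_mhz": bclk_mhz * cpu_mult,  # 3.4 GHz at stock multiplier
        "pcie_mhz": bclk_mhz,            # PCIe reference clock
        "sata_mhz": bclk_mhz,            # SATA controller clock
    }

print(derived_clocks(100))  # stock: every bus within spec
print(derived_clocks(107))  # +7% BCLK: CPU at 3.64 GHz, but PCIe and
                            # SATA now run 7% out of spec and fall over
```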
January 19, 2011 10:18:39 PM

I don't know a significant amount about electrical engineering, but couldn't they have integrated two BCLKs? One for the processor, RAM, and other components that were based off the BCLK in the past, and another for SATA, PCIe, and the other timing-sensitive components. There may have been cost or technical concerns that prevented doing so that we don't know about, but doing that would have allowed for integration of those components while also allowing overclocking without requiring a multiplier-unlocked CPU.
January 19, 2011 10:58:04 PM

Could they have? Yes. It would be more expensive though, and potentially more power hungry. Intel made the decision to lower costs and reduce power at the price of bclk overclocking, and honestly, I can't blame them. For the vast, vast majority of their market, they made the right choice.
January 20, 2011 12:02:52 AM

Quote:
i7-920 was high end?

Yes, actually. Consider that the cheapest processor available for the platform was $300, and you needed a $200 board to go with it.
January 20, 2011 1:42:53 AM

Quote:
Oh, OK, I thought it was the mainstream version, because the 965 Extreme Edition was the high-end one where you could mess with the individual turbo settings.

Well, it was the most mainstream of the 1366 offerings. 1156 was more of the mainstream socket for Nehalem though.

(And yes, you're right - the 965 was the only fully unlocked one)
January 20, 2011 5:38:09 AM

cjl said:
Could they have? Yes. It would be more expensive though, and potentially more power hungry. Intel made the decision to lower costs and reduce power at the price of bclk overclocking, and honestly, I can't blame them. For the vast, vast majority of their market, they made the right choice.

Remember, we are an almost insignificant portion of the market. But good performance trickles down to the mainstream market.
January 20, 2011 5:56:36 AM

cjl said:
Why would Intel restrict either overclocking or PCI-E lanes on their high end chipset?

Look at LGA1366 and X58 if you want to see what Intel does with the high end. They overclock like mad, and have more than enough PCI-E lanes to go around.
They're moving the primary PCIe controller to the CPU, as with current mainstream solutions, but there's a reason for the extra pins beyond the extra memory channel: I've been told the CPU will have 40 PCIe 2.0 lanes, to support x16/x16/x8 solutions natively. That's more than the 36 on the X58 Northbridge.

Edit: Someone mentioned PCIe 3.0. I've "heard" that the number of lanes that can support it will be limited, just as the number of SATA 6Gb/s ports on the current P67 is limited. I'm certain that all 40 lanes will support PCIe 2.0, but I'm uncertain how many of those same lanes will also support PCIe 3.0.
January 20, 2011 6:00:03 AM

Quote:
You lose less than 1% between x16 and x8. There are no cards that can even max out the x8 bandwidth
Try 5-8% for x16 to x8, or at least try reading the PCIe and CrossFire scaling article from like, 2009! Talk about outdated advice...
Quote:
You lose less than 1% between x16 and x8. There are no cards that can even max out the x8 bandwidth


Quote:
^+1


-1 for listening to someone who's clearly wrong by at least 13 months.
January 20, 2011 6:33:37 AM

Quote:
2009? Please. I'd rather quote from an article posted 8 months ago

http://www.techpowerup.com/reviews/NVIDIA/GTX_480_PCI-E...

Quote:
The theory couldn't be more wrong, as seen by the mere 2% performance loss going from x16 to x8

Well then here's one that's only a year old:

http://www.tomshardware.com/reviews/p55-pci-express-sca...

4% using SLOWER cards. EDIT: I found the problem, though; they're using Nvidia cards while I pointed to an article using ATI.
January 20, 2011 9:16:48 AM

Quote:
"2% performance loss going from x16 to x8 (which reduces bandwidth by 50%). To cite results from one of the latest and resource-heavy games in our bench, Collin McRae DiRT 2, that translates into something like 63.2 FPS vs. 62.1 FPS, at 2560 x 1600 pixels resolution – barely a difference. “

http://www.techpowerup.com/reviews/NVIDIA/GTX_480_PCI-E...


Looks trustworthy to me. What's the gain anyone would get from rigging a PCIe bandwidth test?


That's hardly a bandwidth-restricted game; try something with really big maps that need to be cached to RAM. The most bandwidth-challenging game in the year-old test was Far Cry 2, and I'm certain you could find a current title that has similarly complex distance rendering.
January 20, 2011 10:07:51 AM

TechPowerUp is unreliable?? Lolerskates. By that logic, GPU-Z is very unreliable software as well.
January 20, 2011 10:22:11 AM

Crashman said:
That's hardly a bandwidth-restricted game; try something with really big maps that need to be cached to RAM. The most bandwidth-challenging game in the year-old test was Far Cry 2, and I'm certain you could find a current title that has similarly complex distance rendering.


Is this from the link you posted or from psyscho's link? Because as far as bandwidth is concerned, Far Cry 2 is not a very good test, since most cards are scoring sufficient fps (above 60).

You guys are aware that you're comparing two different setups (one is a multi-GPU configuration from Tom's, the other is a single-card setup from TPU)?
January 20, 2011 7:36:20 PM

Everybody else's site is untrustworthy when you don't get paid by them.
January 20, 2011 8:00:07 PM

wh3resmycar said:
Is this from the link you posted or from psyscho's link? Because as far as bandwidth is concerned, Far Cry 2 is not a very good test, since most cards are scoring sufficient fps (above 60).

You guys are aware that you're comparing two different setups (one is a multi-GPU configuration from Tom's, the other is a single-card setup from TPU)?
Tom's Hardware's test is inclusive; the link I pointed to was the 1-card test. The other half of the article is multi-card testing.
January 20, 2011 8:03:45 PM

silky salamandr said:
Everybody else's site is untrustworthy when you don't get paid by them.
Why would I trust results that contradict those I've generated myself?

EDIT: OK, I finally went there and found the problem. The comparison between their results and Tom's Hardware's is invalid because they used Nvidia cards and Tom's Hardware used ATI cards.

Seriously, ATI cards are more easily bottlenecked, and this was even noticed in Tom's Hardware's PCIe and SLI scaling article.
January 20, 2011 8:22:02 PM

cjl said:
Could they have? Yes. It would be more expensive though, and potentially more power hungry. Intel made the decision to lower costs and reduce power at the price of bclk overclocking, and honestly, I can't blame them. For the vast, vast majority of their market, they made the right choice.


I would also imagine that having two BCLKs would increase latency and reduce performance overall. Having one BCLK means the CPU only has to look in one place to access everything instead of having to go to two places. It's a lot like Intel's L3 cache, which stores a copy of everything so the CPU doesn't have to check the L1 or L2 cache or even request it again from memory.

Quote:
the Xeons already got 2 QPIs


Not the same. That's the server socket. LGA 1366-based Xeons only have one QPI link. LGA 2011 is meant for high-end to extreme users and will have the ability to have two QPI links. I would imagine the server market might have more.

Crashman said:
They're moving the primary PCIe controller to the CPU, as with current mainstream solutions, but there's a reason for the extra pins beyond the extra memory channel: I've been told the CPU will have 40 PCIe 2.0 lanes, to support x16/x16/x8 solutions natively. That's more than the 36 on the X58 Northbridge.

Edit: Someone mentioned PCIe 3.0, I've "heard" that the number of lanes that can support it will be limitted, just as the number of SATA 6Gb/s ports on the current P67 is limitted. I'm certain that all 40 lanes will support PCIe 2.0, but I'm uncertain how many of those same lanes will also support PCIe 3.0.


I am just going off of what I can find, which is why I can't say it's 100% certain.

http://www.tomshardware.com/reviews/pci-express-3.0-pci...

That's an article about it coming out around summer this year, which would probably coincide with Intel's LGA 2011 release.

http://www.bit-tech.net/hardware/cpus/2010/04/21/intel-...

That details that LGA 2011 should have 32 lanes of PCIe 3.0 on the CPU itself. Of course it's still early, but I'm hopeful, as that would be a nice feature.

Here is an interesting article about Patsburg, X58's successor:

http://www.semiaccurate.com/2010/08/12/intels-patsburg-...

Still, I wouldn't be surprised if Intel has PCIe 3.0 on LGA 2011. They do tend to push new technology as fast as possible.
January 20, 2011 8:50:10 PM

jimmysmitty said:
I would also imagine that having two BCLKs would increase latency and reduce performance overall. Having one BCLK means the CPU only has to look in one place to access everything instead of having to go to two places. It's a lot like Intel's L3 cache, which stores a copy of everything so the CPU doesn't have to check the L1 or L2 cache or even request it again from memory.

Not the same. That's the server socket. LGA 1366-based Xeons only have one QPI link. LGA 2011 is meant for high-end to extreme users and will have the ability to have two QPI links. I would imagine the server market might have more.

I am just going off of what I can find, which is why I can't say it's 100% certain.

http://www.tomshardware.com/reviews/pci-express-3.0-pci...

That's an article about it coming out around summer this year, which would probably coincide with Intel's LGA 2011 release.

http://www.bit-tech.net/hardware/cpus/2010/04/21/intel-...

That details that LGA 2011 should have 32 lanes of PCIe 3.0 on the CPU itself. Of course it's still early, but I'm hopeful, as that would be a nice feature.

Here is an interesting article about Patsburg, X58's successor:

http://www.semiaccurate.com/2010/08/12/intels-patsburg-...

Still, I wouldn't be surprised if Intel has PCIe 3.0 on LGA 2011. They do tend to push new technology as fast as possible.
Yes, PCIe 3.0 is still speculative at the market level, and if I knew anything more I probably couldn't tell you. Rumors about it have ranged from it not being there, to 16 or 32 lanes having it. Regardless, I think all of these rumors agree that there will be 40 CPU lanes and that all of those will support at least PCIe 2.0.
January 20, 2011 8:58:30 PM

Crashman said:
Yes, PCIe 3.0 is still speculative at the market level, and if I knew anything more I probably couldn't tell you. Rumors about it have ranged from it not being there, to 16 or 32 lanes having it. Regardless, I think all of these rumors agree that there will be 40 CPU lanes and that all of those will support at least PCIe 2.0.


From what I have read on the official PCIe group's site, PCIe 3.0 will be backwards compatible, so yeah, all 40 should support PCIe 2.0. But I still won't be surprised if LGA 2011 comes with at least 32 PCIe 3.0 lanes. It's been in testing since 2008, and back in the summer of 2010 they fixed the backwards compatibility issue that was pretty much the largest setback of them all.

We will see, though. Although I don't see any real use for PCIe 3.0 right now, it's the way the industry moves. By 2012, Intel's Haswell CPUs will probably sport DDR4.
January 20, 2011 11:28:31 PM

Quote:
Crashman, before any tests were done Intel released a PDF file explaining PCIe scaling. They said once x8 goes over 2 GPUs the scaling goes down to 71 percent. Look at that post. Three cards. And they did it with no GPUs. Why?
I'll try to keep this simple: Tom's Hardware did a couple of articles based on the same theme, PCIe and CrossFire scaling, then PCIe and SLI scaling. The first article showed an average difference of 4%, while the second showed an average difference that was far smaller. I've gone through a few more tests that consistently show increased PCIe bottlenecking on ATI compared to Nvidia, so if you're looking for the "worst case scenario" you always have to use ATI/AMD graphics.

On the other hand, if you're trying to disprove the "worst case scenario", you use Nvidia. But if disproving the worst case is your point, you're starting off with a bias. I say that because Radeons are not rare cards; they are quite common, so to say the "worst case scenario" is theoretical is to say that nobody really uses Radeons.

If I stick to a discussion of practice rather than theory, I can focus on results rather than causes.
January 21, 2011 2:50:41 AM

cjl said:
(And yes, you're right - the 965 was the only fully unlocked one)

What about the 975? Or 980X?

jimmysmitty said:
Not the same. That's the server socket. LGA 1366-based Xeons only have one QPI link. LGA 2011 is meant for high-end to extreme users and will have the ability to have two QPI links. I would imagine the server market might have more.

LGA1366 Xeons can have two QPI links. http://ark.intel.com/Product.aspx?id=47920&processor=X5670&spec-codes=SLBV7
January 21, 2011 4:43:27 AM

jimmysmitty said:
That's a strange one. I could have sworn Intel kept those on the server socket only. Still, what's the use of 2 QPI links, since the biggest use would be for 2 processors so they don't have to share bandwidth?

The whole point of 2x QPI is for dual processors, and as far as I know it's necessary for that. Those CPUs are probably for workstations rather than servers, mostly.
January 21, 2011 1:59:28 PM

PreferLinux said:
What about the 975? Or 980X?

Not available at launch - those came later (though you're right, they were fully unlocked as well)
January 21, 2011 3:11:02 PM

When I said "at higher resolutions" I meant 5760x1200 in a 6-monitor Eyefinity setup or in a 3-monitor rig. I think I read a review here on TH or on a different site (can't remember) that at very high resolutions dual x8 will be bottlenecked.
January 21, 2011 4:03:23 PM

binoyski said:
When I said "at higher resolutions" I meant 5760x1200 in a 6-monitor Eyefinity setup or in a 3-monitor rig. I think I read a review here on TH or on a different site (can't remember) that at very high resolutions dual x8 will be bottlenecked.

I'm fairly sure that the x16 becomes less important at high resolution, not more important (so long as the card has enough VRAM). If the card doesn't have enough VRAM for the resolution and settings, then x16 becomes really important, but the card also takes a nosedive in performance (usually not enough to be playable, x16 or otherwise).
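Some rough framebuffer math backs that up (a sketch; real VRAM usage is dominated by textures and other assets on top of this):

```python
# Approximate VRAM for the color buffers alone, in MB.
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=3):
    return width * height * bytes_per_pixel * buffers / 1024**2

print(framebuffer_mb(1920, 1200))  # one 24" monitor:  ~26 MB
print(framebuffer_mb(5760, 1200))  # 3x24" Eyefinity:  ~79 MB
# The buffers themselves stay small; it's the texture working set
# growing with resolution/settings that exhausts VRAM and forces
# traffic back across the PCIe bus.
```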
January 21, 2011 7:05:32 PM

Quote:
That sounds logical, but it's not entirely true.

Latency plays an important role in PCIe performance and should be considered in relationship to bandwidth. For example, if the round-trip latency for a 128-byte read is 400ns, then the read bandwidth would be 2.5Gb/s. If the latency is increased to 800ns, then the read bandwidth decreases to 1.25Gb/s, a one-to-one relationship.

That's not necessarily true. If the round-trip latency for a 128-byte read is 400ns, but the bus can service one of those calls every 10ns, then the read bandwidth would be about 102 Gb/s. Similarly, if the round-trip latency for a 128-byte read is 400ns, and the bus can only serve one call every 400ns, but that call can be up to 1024 bytes, then the bandwidth would be about 20 Gbps. You're making the mistake of assuming that every call is the full width of the bus, and that multiple calls cannot be occurring simultaneously.
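A quick sketch of that point (illustrative numbers only):

```python
# With multiple reads in flight, throughput is bytes in flight divided
# by latency (Little's law), not bytes per read divided by latency.
def throughput_gbps(read_bytes, latency_ns, reads_in_flight):
    bits_in_flight = read_bytes * 8 * reads_in_flight
    return bits_in_flight / latency_ns  # bits per ns == Gb/s

print(throughput_gbps(128, 400, 1))   # serialized reads:  ~2.6 Gb/s
print(throughput_gbps(128, 400, 40))  # 40 outstanding:   ~102 Gb/s
print(throughput_gbps(1024, 400, 1))  # larger reads:     ~20.5 Gb/s
```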

Oh, and none of this has to do with the actual benchmarks, which show the gap between x16 and x8 decreasing with increasing resolution (at least from what I've seen).
January 23, 2011 8:00:49 AM

^That sounds interesting. One thing I know is that in almost every Intel chipset mobo I have bought, Intel's RAID controllers are present, and I have yet to have one fail on me (my oldest RAID is a RAID 0 on an 845PE chipset using Intel's RAID technology, pushing on 8 years now).

But still, dual CPUs sporting 8 GPUs? That would be one hell of a tall case. It would probably need at least dual PSUs.
January 23, 2011 9:35:08 AM

Not going to upgrade until I see native USB3/SATA3, PCI-E 3.0 and quad-channel DDR4.
January 23, 2011 7:47:39 PM

I thought LGA1356 had one QPI, so it could only support one CPU.
January 23, 2011 8:44:22 PM

PreferLinux said:
I thought LGA1356 had one QPI, so it could only support one CPU.
No, it has two. They kill the other link on desktop processors so they can't be used in dual-CPU configurations on dual-socket server boards. You can, however, put the dual-socket CPU on a single-socket board.
February 7, 2011 3:17:43 AM

silky salamandr said:
Hell, I don't need my computer to levitate or become self-aware...


^+1 No, you don't want that!