32Bit PCIe

a c 87 V Motherboard
October 30, 2011 9:19:39 AM

Why are motherboards still stuck on the 32-bit bus speed in the PCI slots section of the BIOS, and why aren't they migrating to 64-bit or 128-bit, with quad and hexa cores out there?


a c 97 V Motherboard
October 30, 2011 11:00:03 AM

Uhhhh, PCIe? PCIx? We still have PCI slots for backwards compatibility.
a c 87 V Motherboard
October 30, 2011 12:20:27 PM

Nah!! I didn't mean that. I mean, why aren't they increasing the bus capacity any further than 32-bit?
a c 97 V Motherboard
October 30, 2011 12:46:06 PM

Again, why should they? They did make a 64-bit "PCI"; it's called PCIx. And PCIe is faster than PCI or PCIx. PCIe basically is a better/faster PCI.
a c 87 V Motherboard
October 30, 2011 12:58:49 PM

But it still has the max lanes set to 32........
a c 97 V Motherboard
October 30, 2011 1:15:36 PM

?

I really don't get what you're getting at. They made a 64-bit PCI: PCIx. They gave us a bus with even more bandwidth later: PCIe. We basically have a faster PCI, so what difference does it make how many bits wide it is? I really don't understand what's to be upset over.
a b V Motherboard
October 30, 2011 1:37:28 PM

I think he's confusing PCI and PCIe :) 

The reason they don't make PCIe any faster (they will eventually) is because we don't have powerful enough hardware to make the jump worth the cost of expensive materials and licensing for the design schematics :)  We have PCIe 3.0 despite most cards on the market not being able to take advantage of even 2.0 to its full capacity; they barely even reach 2.0's capacity (2.0 x8).

If you're not confusing the two, then it's because PCI is dead tech, so to speak (it doesn't need upgrading). PCIe is faster than normal PCI.
a c 87 V Motherboard
October 30, 2011 3:25:45 PM

None of the above, actually; we're talking about the same thing with different names. What I'm driving at is the physical lanes to and from the PCI... PCIx... to the NB/SB/CPU circuitry...
a c 97 V Motherboard
October 30, 2011 4:00:47 PM

You need to figure out what EXACTLY you're complaining about. PCI, PCIx, and PCIe are all DIFFERENT things. You started off saying something about wanting a faster version of PCI, like a 64-bit version. As I replied, they did make one; it's called PCIx. As mouse24 pointed out, they are making faster versions of PCIe as well. If 64-bit PCI (PCIx, actually) does exist and they are making faster versions of PCIe, what is your problem? I am seriously not understanding the point of your rant.
a c 87 V Motherboard
October 30, 2011 5:00:06 PM

Hey!! I ain't complaining... just wondering why all the bleeding BIOSes are stuck at a 32-bit level for their PCI settings, inclusive of whatever number of x's you like to add to them.
a c 109 V Motherboard
October 30, 2011 5:01:56 PM

He means like how PCI is 32-bit and then had an enhanced version released, PCIx, which is 64-bit. There are many more ways to improve performance than just increasing bus width; that's why PCIe is 32-bit but is better than PCIx, which is 64-bit. The reason PCIe hasn't been replaced with a 64-bit connection is that there is no need for that amount of bandwidth. We can still increase PCIe bandwidth by increasing the speed, like with PCIe 3.0 (there are also a number of other changes). But even the top cards show very little performance difference from current 2.0 x8 to x16. When the need arises, I'm sure there will be a 64-bit version of PCIe, or a new connection.
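To put rough numbers on that width-vs-speed point, here's a quick back-of-the-envelope sketch (the peak figures are the commonly quoted textbook ones, assumed for illustration, not anything measured in this thread):

    # Rough peak rates: parallel buses = width (bytes) x clock; PCIe = lanes x per-lane rate.
    pci   = (32 / 8) * 33e6    # classic PCI: 32-bit @ 33 MHz  -> ~133 MB/s, shared by all slots
    pcix  = (64 / 8) * 133e6   # PCI-X:       64-bit @ 133 MHz -> ~1 GB/s, still shared
    pcie2 = 16 * 500e6         # PCIe 2.0 x16: 16 lanes x ~500 MB/s per lane, per direction -> ~8 GB/s
    for name, bw in [("PCI", pci), ("PCI-X", pcix), ("PCIe 2.0 x16", pcie2)]:
        print(f"{name}: {bw / 1e9:.2f} GB/s")

So the narrow serial link still ends up far ahead of the wide parallel one, which is the point being made here.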
a b V Motherboard
October 30, 2011 5:02:03 PM

-.- I am so confused... you are saying, why aren't they making faster PCI/PCIe/PCIx etc., and I'm saying it's because the current technology can't even use a quarter of the bandwidth of PCIe 3.0, so what's the point of making it?

dang K11 beat me to it D:
a c 87 V Motherboard
October 30, 2011 5:08:12 PM

No no, mouse, I agree with you; they are making stuff that's a lot faster, and I agree that we have not yet been able to make full use of the present capacity of the slots either.
What I am trying to say is: with processors changing from a single core to 8 cores, why aren't they increasing the number of physical lanes used to access and send data to and from these slots to separate cores, to increase efficiency? They ultimately end up increasing the computing capacity at either side of the bridge but are not increasing the flow capacity of the lanes.

Check the BIOS settings; it'll ask for a PCI bus speed, and the max is 32 in any given BIOS, if it allows the setting at all.
The computing capacity of GPUs is supposed to be way ahead of the CPU these days... and the CPU itself is millions of times faster than the first few 386s we've seen, but between the 386 and now, we haven't seen that particular bus speed change.
That's what is keeping me wondering.
a c 109 V Motherboard
October 30, 2011 5:11:53 PM

The number of cores makes no difference to bus width and efficiency; they never directly interact and never will/can.
a c 87 V Motherboard
October 30, 2011 5:15:42 PM

It's not meant to be taken that literally; it's the concept. Like the memory controller was never supposed to be on the chip, but SB did it... I guess.
Wow. You guys need to think out of the box, K1114 (never will/can?????? That's heavy).

a c 109 V Motherboard
October 30, 2011 5:44:51 PM

The other northbridge functions are also integrated on SB, which includes the PCIe I/O. It has nothing to do with thinking outside the box; even though it's integrated on the CPU, it's still a "separate" part.


a b V Motherboard
October 30, 2011 5:50:23 PM

I don't see how making the pathways bigger/adding more would help any... wouldn't just raising the FSB/northbridge frequencies work just as well?

assuming you're not trying to quadfire 6990s
a c 87 V Motherboard
October 30, 2011 5:57:54 PM

Exactly what I was thinking; that's what we do in an OC... increase the NB speeds.
If they have the same number of lanes for a single-core processor, why is the number the same for a quad-core processor... when it could calculate the same stuff in 1/4 of the time, given the CAPACITY OF DATA that went to and fro was increased to 4 times... right?
Logic... says that.

Logic also states that it is easier to manipulate a PCB than a microchip in terms of manufacturing, designing and production. All three steps are cheaper to execute at the PCB level.
a b V Motherboard
October 30, 2011 6:03:19 PM

ah, I see what you're saying; however, the time it takes for the bits to traverse the FSB/northbridge is insignificant enough that redesigning all the current tech isn't worth it... however, they are updating the FSB/northbridge frequencies, you just have to shop around a bit :) 

but most of the time is spent on the CPU actually calculating rather than on the data being transported
a c 87 V Motherboard
October 30, 2011 6:06:27 PM

Now comes the next step: if a CPU with an 8-core setup takes a little less time than a 6-core to compute the same quantity of data, would it not be wiser to dedicate the 2 extra cores to controlling the same traffic more efficiently, or, say, to reducing the travel time across the bridge, or to bridging new I/Os to and fro?
a b V Motherboard
October 30, 2011 6:14:26 PM

ah, I did a bit of reading up on the FSB... it seems that it's semi out of the picture, and each manufacturer is using their own variant of it... hmmm, interesting... must do more reading on the wiki :) 

here's a quote that I think sums it up nicely

"The front-side bus was criticized by AMD as being an old and slow technology that limits system performance.[8] More modern designs use point-to-point connections like AMD's HyperTransport and Intel's QuickPath Interconnect (QPI).[9] FSB's fastest transfer speed was 1.6 GT/s, which provided only 80% of the theoretical bandwidth of a 16-bit HyperTransport 3.0 link as implemented on AM3 Phenom II CPUs, only half of the bandwidth of a 6.4 GT/s QuickPath Interconnect link, and only 25% of the bandwidth of a 32-bit HyperTransport 3.1 link. In addition, in an FSB-based architecture, the memory must be accessed via the FSB. In HT- and QPI-based systems, the memory is accessed independently by means of a memory controller on the CPU itself, freeing bandwidth on the HyperTransport or QPI link for other uses."
a c 109 V Motherboard
October 30, 2011 6:23:21 PM

But increasing the speed is what I was getting at, instead of bus width, which is what you originally stated. The speed is easier to change, so that is what increases until it just can't go any faster; then we change the width.
a b V Motherboard
October 30, 2011 6:32:04 PM

k1114 said:
But increasing the speed is what I was getting at, instead of bus width, which is what you originally stated. The speed is easier to change, so that is what increases until it just can't go any faster; then we change the width.



hrmmm yeah, I always thought that FSB speeds were kinda like PCIe in that they needed current processors to catch up...

well... I was wrong lol, I really need to read up on this type of stuff.
a c 87 V Motherboard
October 30, 2011 6:32:43 PM

k1114, you're going back to the same square. Increasing speed is what we all do; the point is, is it really required? The same thing would be attainable much more efficiently if the simple step of increasing the width would suffice. Then you could OC and get a totally new volume of computing power in your hands.
a b V Motherboard
October 30, 2011 6:57:40 PM

yeah, but then they would need all new instruction sets, right? I mean, from the company's perspective it's cheaper to just bump the FSB frequency when it needs it instead of working out a whole new architecture.
a c 109 V Motherboard
October 30, 2011 7:52:54 PM

But it's not a simple task to increase width, and yes, we're back to square one; it's not necessary. X58 was capable of x16/x16 and so is the nf200. But why is P67/Z68 still x8/x8? X58 doesn't cost much more, so ignore cost. Why are we just now switching to a 64-bit OS? DDR1 is 64-bit, and so is DDR3; why don't we increase that? Then what about the physical size increase? See how the PCI connector is smaller than a PCIx one? Do we want bigger mobos?

And since it's not necessary, what's the point of having 64 lanes when only 8 are being used? Increased efficiency is true only when all lanes are being used. A mobo with an nf200 is slower than a board without one if it's not using those lanes in a single-card setup. There are many more implications than just simply increasing speed.

And all of this has nothing to do with CPU cores or CPUs being 64-bit (why don't we increase this?), the same way graphics cards can be 256-bit when PCIe is 32-bit.
a c 97 V Motherboard
October 31, 2011 1:57:01 AM

I want to test something.

Alyoshka, how can the 5770 with a 128-bit memory bus nearly keep up with the 4870, which has a 256-bit memory bus? Wouldn't the wider bus make the 4870 twice as fast? I get the feeling you have an idea, but due to your lack of knowledge you don't realize it's worthless. No offense, btw. At least you stopped to ask.
a c 87 V Motherboard
October 31, 2011 5:01:03 AM

No, I didn't stop asking, and the picture is still not getting through to you. I take no offense if I get answers that do not pertain to the picture; I just call it someone's opinion about a different thing that I ain't talking about.

I stopped asking??? Nope, it was 4 in the AM here...

You can't be serious in asking me that question unless you think you're dealing with a school kid... it'd be basically stupid to answer you when I am asking for a logical answer... it's not good to answer a question with a question, did you know that?
The idea, in simple words, is: high-end CPU ------ across a bridge that carries data traffic in single-lane, double-lane, twelve-lane and 16-lane widths ------- high-end GPU.

There is no need for a size increase in the physical parts... we OC the crap out of them and know the lines are capable of carrying the traffic.

Why do we speak of bottlenecks???? At the processor end? At the GPU end? At the mobo level...
The only thing that can cause a bottleneck anywhere in the rig is literally a gate not being able to send enough data across in a smooth flow, which ends up breaking the flow of data throughout the whole layout.

We all agree that we are not utilizing the PCIe bandwidth to its fullest... but which one is the reason:
that we do not have equipment (cards) capable of using the PCIe bandwidth to its full extent?
or mobos not able to transport data at the full rate calculated by the PCIe add-on?
or the processor not able to compute and communicate the data sent to it from the PCIe at its fullest?
a c 97 V Motherboard
October 31, 2011 5:17:19 AM

Quote:
At least you stopped to ask.


Does that help? I never said you stopped asking, I said you stopped to ask/ask why. HUGE difference, and one that speaks to your intellect.

Quote:
it's not good to answer a question with a question


As I said, I'm checking something. I can't see if I'm right until you answer.
a c 109 V Motherboard
October 31, 2011 5:25:49 AM

Bashing aside...

The cards are not powerful enough; this is evident when testing dual-GPU cards like the 6990 and 590 in PCIe scaling tests vs single-GPU cards in PCIe scaling tests.

http://www.tomshardware.com/reviews/pci-express-scaling...

You can google a benchmark for the 6990/590; I just have this link that I can easily google.
a c 87 V Motherboard
October 31, 2011 5:27:58 AM

oops, sorry, yeah, glad I stopped to ask, you're welcome.

Well, you're right, and you can stop asking if you want to feel you're right and that I don't know the answer...
A simple "different generation, different computing capacity" ought to answer that, 'cos I don't want to divert the main frame of the discussion. The least you could have done was to at least use a similar series or generation of hardware to make the answer more challenging... you can't ask why a Fiat runs at half the speed of a Lotus even though it's on the autobahn.
a b V Motherboard
October 31, 2011 2:33:40 PM

hmm, maybe they aren't doing it because they can't? maybe the copper is too thin? or the chipset would simply be too hot, so they have to think of more efficient designs?

whatever the case may be, they obviously have a good reason for it, and yes, they are pushing out faster speeds; it's not that that is what is limiting us. Just take a look at server motherboards: they are using 18+ cores and aren't FSB limited.
a c 97 V Motherboard
October 31, 2011 4:36:18 PM

It's OK to admit you don't know; that was the answer I was hoping for/expecting. The reason I asked is that it's another way of looking at the same problem you mentioned.

The "bitness" on the video card relates to how much memory bandwidth a card will have. Lets look at the 5770 vs 4870 for a bit.

http://www.tomshardware.com/reviews/radeon-hd-5770,2446...

Scroll down to where they show the 5770, 5750, and 4870 specs. Notice that the 4870 and the 5770 are nearly identical, save the memory bandwidth (~77 GBps vs 115). If you read that page, they talk about the 4870 having a 256-bit memory bus while the 5770 has *only* a 128-bit bus. If you do the math, half the bus should have half the bandwidth, but that's not the case (half of 115 is 57.5). So why does the 5770 have 77 GBps and not 57.5?

The bitness is only part of the equation. Another part is clock speed. A 64-bit bus clocked at 1 GHz will be able to transfer twice as much data as a 64-bit bus that is clocked at only 500 MHz, assuming the same memory type. DDR3/GDDR5 can do 4 transfers per clock, while all the others can do only 2. You don't need to increase the size of the bus (increase "bitness"), as you can get what you want by increasing the clock speed. Read the wiki on memory bandwidth for more info.

http://en.wikipedia.org/wiki/Memory_bandwidth
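As a rough worked example of that (a sketch; the memory clocks are the reference-card values as I recall them, so treat them as assumptions):

    # Memory bandwidth = bus width in bytes x memory clock x transfers per clock (GDDR5 = 4).
    hd5770 = (128 / 8) * 1200e6 * 4 / 1e9   # 128-bit @ 1200 MHz GDDR5 -> ~76.8 GB/s
    hd4870 = (256 / 8) *  900e6 * 4 / 1e9   # 256-bit @  900 MHz GDDR5 -> ~115.2 GB/s
    print(hd5770, hd4870)                   # ~77 vs ~115, matching the spec table above

Half the width but a higher clock is how the 5770 gets to ~77 GBps instead of 57.5.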

So when you ask "why aren't they increasing the bus capacity any further than 32-bit," the answer is because they don't need to. PCI was clocked at 33 MHz; PCIe is clocked at 100 MHz. You also asked why they aren't increasing it with more cores, and I think K1114 answered that with his link.

Quote:
maybe they aren't doing it because they can't?


They could if there was a need. We could have giant buses if we wanted. As the above link shows, we couldn't use them, however. At that point it's a waste, as all that money was spent putting the copper in and it can't be used.
a b V Motherboard
October 31, 2011 4:59:56 PM

oh sorry, I was talking about increasing the bus speed, not lanes :), but anyways, thanks for the useful info, man <3 I honestly never knew how teraflops were calculated; I figured it was just a number put out by the company :D 
a c 87 V Motherboard
November 1, 2011 4:19:49 AM

OK, let me put it in an even simpler way: if we have to go the GBps way, be it 32-bit or 64-bit, we are sitting the same processor on the same mobo; that's the first concrete thing.
So, what's the max I/O in GBps that the CPU socket can handle?
a c 97 V Motherboard
November 1, 2011 6:08:48 AM

We haven't found it yet. Read the link showing PCIe scaling again. Why put a 64bit or 128bit bus in if we can't max out what a 32bit bus will do? All it will do is increase prices as you'll need the copper for the extra lanes.
a c 109 V Motherboard
November 1, 2011 4:19:32 PM

Yup, and that's why we can OC a CPU and see a performance boost: because we aren't using all the socket's bandwidth. The bandwidth of the other components is really the limitation (example: GPU), and as these continue to come out with newer, faster versions, we will only change the connection's bandwidth (example: PCIe) when we need to.


I think I know the perspective you're trying to get at. Let's give a CPU an arbitrary bandwidth: 100 GB/s. Info goes through PCIe 2.0 x16 at 16 GB/s, and let's say we've got a 560 Ti at 128 GB/s. So it's 100->16->128; PCIe looks to be a bottleneck, right? Wrong. More info is being computed than communicated. Like (2+2=4): only the (4) needs to be communicated, so in this example only 1/5 of the info is communicated. So in essence, the CPU gets more powerful, the first number changes; the GPU gets more powerful, the last number changes; we only change the middle if we see a bottleneck, but we can't calculate the PCIe's actual saturation (which has been asked multiple times on the internet). We can only change the first and last numbers to see if a bottleneck occurs. There are other variables affecting saturation, such as the game or resolution (how much info needs to be communicated), but you should get the picture.

Now I want to go back to the equation to calculate bandwidth. One version is (speed x width x multiplier x channels). So it really doesn't matter which variable you change; each is a power of 1, so each has an equal effect on the total bandwidth. So changing the bus width doesn't give you "a totally new volume of computing power" any more than changing the speed does. If we could increase all the variables at once, that would be awesome, but we can't.
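To make that equation concrete (a sketch; dual-channel DDR3-1333 is just a convenient example, not something from this thread):

    # bandwidth = speed x width x multiplier x channels
    base   = 666.67e6 * 8 * 2 * 2 / 1e9    # dual-channel DDR3-1333   -> ~21.3 GB/s
    wider  = 666.67e6 * 16 * 2 * 2 / 1e9   # double the width         -> ~42.7 GB/s
    faster = 1333.33e6 * 8 * 2 * 2 / 1e9   # double the speed instead -> ~42.7 GB/s
    print(base, wider, faster)             # doubling either factor has the same effect

Doubling the width and doubling the speed land you in exactly the same place, which is why the cheaper change wins.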
a c 97 V Motherboard
November 1, 2011 11:56:59 PM

I think we can. But it's not cost-effective, so we don't.
a c 109 V Motherboard
November 2, 2011 3:03:43 AM

Now that I think about it, SB-E is pretty close to doing that vs Nehalem.

Edit: Hmmm, this statement doesn't sound logical; I need sleep, I think.