Understanding FSB

I don't quite understand Front Side Bus.

Is it, or is it not, the maximum amount of data your processor can receive in a given amount of time?

Your FSB is the connection from your northbridge and southbridge (or single controller chip) to your CPU, right?

I was under the impression that everything on your motherboard gets to your processor via the FSB - so if this runs at a certain speed (like 133MHz), whatever is being sent to your processor from any of your components cannot exceed that, and your processor has to wait to receive the data?

If that is the case, it makes absolutely no sense to have a 166MHz memory clock and a 133MHz FSB clock, because the memory clock exceeds what the processor can handle; furthermore, it would leave absolutely no room for the other components to send data, because the RAM would be using all 133MHz (and 33MHz more besides).

So if that is not the way FSB works, how does it?

I just recently read something that gave me the idea that it is not a measurement of how much your processor can take in, but the maximum it can send back to the rest of your computer. So instead of read speed, it would be more like write speed - how much your CPU can get back to your interfaces after it has processed the data?

If that is the case, and your CPU is taking in data and processing it faster than it can send it back to the northbridge, what happens to the data it is receiving? Where is it stored? That seems to me like backing up a water hose, where you have too much pressure at the end and the flow is held back, slowed down, or even blocked.

Can someone explain exactly what FSB is, please?


Benchmarks don't lie :-)
  1. The FSB is the interface between the CPU and memory. An FSB of 266 is a maximum speed of 2.1GB per second, 333 is 3.3GB per sec, etc.

    If its not broken, you're not trying hard enough.
  2. Um, that didn't answer my question at all.

    Some corrections.

    FSB is the connection between your CPU and all other components that reach your processor, not just your RAM. I understand that much.

    333MHz bandwidth is 2.7GB per second, not 3.3
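    A quick sanity check on those figures (a sketch, assuming the usual 64-bit data path, i.e. 8 bytes per transfer; the function name is just for illustration):

```python
# Peak FSB bandwidth = effective transfers per second x bus width in bytes.
# Assumes a 64-bit (8-byte) data path, as on these CPU buses.
def fsb_bandwidth_gb_s(effective_mt_s, bus_bytes=8):
    return effective_mt_s * 1_000_000 * bus_bytes / 1e9

print(round(fsb_bandwidth_gb_s(266), 1))  # 2.1 GB/s
print(round(fsb_bandwidth_gb_s(333), 1))  # 2.7 GB/s, not 3.3
```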

    And your CPU speed is your FSB combined with a multiplier factor.

    No offense, but please don't post a response if it is not going to be of benefit.


    Benchmarks don't lie :-)
  3. Did I say 3.3? I meant 2.7. What is your question? The FSB also goes to the northbridge and several other components, yes, I'm aware of that. And CPU speed is equal to multiplier times FSB. Don't give me sh**, I'm not the one asking a stupid question.

    If its not broken, you're not trying hard enough.
  4. This is exactly why DMA (Direct Memory Access) can significantly speed up performance: components do not have to go through the processor in order to access memory; they can access it directly.
    The FSB is the connection between the North Bridge and the CPU. All signals to the processor are carried back and forth through this channel. Normally, the thing the processor communicates with the most is main memory; the other components usually access memory or each other instead. For instance, the video card (unless you have fast writes enabled) retrieves all its data from and sends all its data to main memory, and the processor then takes that data from main memory. This is why it is sometimes beneficial to have faster memory but not necessarily a faster processor - most components access memory a lot more than they do the CPU.

    "We are Microsoft, resistance is futile." - Bill Gates, 2015.
  5. The Front Side Bus goes between the chipset and the CPU. Running your memory at a data rate faster than that of your CPU does nothing for you. But running your memory at a faster CLOCK rate can give you less latency. No motherboard has a Cas1 or Cas1.5 setting, so the fastest you can run your memory at is Cas2. Now convert that into nanoseconds: Cas2 at 100MHz is 20ns, Cas2 at 133MHz is 15ns. So while your CPU doesn't benefit from the increased bandwidth, it will benefit from the reduced wait time.

    You see, Cas Latency is the number of memory cycles your CPU has to wait before the chipset can send or receive data. Reducing wait time is beneficial.
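    That cycles-to-nanoseconds conversion is easy to sketch (a hypothetical helper, just the arithmetic above):

```python
# Wait time in ns = Cas cycles x cycle time; one cycle at f MHz lasts 1000/f ns.
def cas_wait_ns(cas_cycles, clock_mhz):
    return cas_cycles * 1000 / clock_mhz

print(cas_wait_ns(2, 100))         # 20.0 ns
print(round(cas_wait_ns(2, 133)))  # ~15 ns
```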

    OK, now Double Data Rate and Quad Data Rate come into play. While the AMD processor is perfectly paired up with a DDR memory type (both have 64-bit paths, both busses are DDR), the P4 runs on a QDR bus. So DDR at 133MHz has only half the bandwidth of the CPU's QDR bus at 133MHz. This is why there is a push for Dual Channel DDR on the P4: it gives two 64-bit modules a parallel structure to make the bus 128 bits wide. Now a 128-bit bus is twice as fast, so DDR266, aka PC2100, which runs at 133MHz actual clock, would be perfectly paired up with Intel's QDR533 CPU bus, which also operates at 133MHz. But you would still see the benefit of reduced Cas Latency by running the memory at an even faster rate, just as before.
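    The pairing argument comes down to multiplying base clock, transfers per clock, and bus width (a sketch under those assumptions; the function name is just for illustration):

```python
# Peak bandwidth = base clock x transfers per clock x bus width in bytes.
def bandwidth_gb_s(clock_mhz, transfers_per_clock, bus_bits):
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

ddr_single = bandwidth_gb_s(133, 2, 64)   # one DDR266 channel
p4_bus     = bandwidth_gb_s(133, 4, 64)   # P4 QDR533 front side bus
ddr_dual   = bandwidth_gb_s(133, 2, 128)  # dual-channel DDR266
print(ddr_single, p4_bus, ddr_dual)       # one DDR channel is half the P4 bus;
                                          # dual channel matches it exactly
```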

    This being the case, you might ask why the nForce2 chipset gets a benefit from running Dual Channel memory. Well, the nForce2 does some tricky things with those two channels to reduce latency even further: it makes the memory capable of handling more simultaneous transfers. While it's waiting for one bus, it can initiate a transfer on the other!

    What does the CPU do when it has to wait for the data to come in or leave? Not much! It waits! This is why we are seeing DDR and QDR technologies emerging for this bus! Of course the P4 can do a lot of things while it waits, like take care of all those branch mispredictions, using its internal memory, called cache.

    <font color=blue>You're posting in a forum with class. It may be third class, but it's still class!</font color=blue>
  6. Man, you're up late. I'm going to have to stop at your place sometime when I go to the Eagle. I'm in Lansing.

    Signature, I'm still learning & having fun doing it!!!!!!
  7. Wow, I used to go down there to shop all the time. Remember when ValueCity had all those IBM Softouch keyboards for $9.97 and then had a super redtag sale? I bought all of them for $5 each.

    <font color=blue>You're posting in a forum with class. It may be third class, but it's still class!</font color=blue>
  8. Let me know the next time you come down.

    Signature, I'm still learning & having fun doing it!!!!!!
  9. Quote:
    No motherboard has a Cas1 or Cas1.5 setting, so the fastest you can run your memory at is Cas2.

    Actually, Crash, my motherboard (Albatron PEVpro) does in fact have a Cas 1.5 setting, and I think there are other PE mobos that do the same, though I'm not sure of this... I have tried the setting but it doesn't run really stable, so I just settle for the Cas 2 setting...
  10. Let me rephrase for Crash: no motherboard or memory can do a stable 1.5 Cas latency, and certainly not 1 Cas latency.

    Instead of Rdram, why not just merge 4 Sdram channels...
  11. Yes, you are correct, but his sentence says that there are no Cas 1.5 settings on any motherboard... maybe he just phrased it wrong?
  12. I've simply never seen a Cas Latency of 1.5. Surely PC3200 would be even faster on an AMD platform if it were set at PC2700 speed and Cas1.5 than at PC3200 speed and Cas2, given that these higher memory speeds only benefit the AMD CPU with less latency!

    <font color=blue>You're posting in a forum with class. It may be third class, but it's still class!</font color=blue>
  13. So what is Column Address Strobe anyway?

    Crashman, are you saying the only way to decrease access times is to increase clock speed? When you say 15ns, you are referring to 15 nanoseconds, the time it takes the RAM to find the requested data and start sending it to the CPU - just as a read head on a hard drive could have a seek time of 9ms (is 1,000,000 nanoseconds equal to a single millisecond?), correct?

    Actually, I think there was some PC133 SDRAM that functioned at 1.5 Cas Latency; I believe I saw it on Crucial.com (could be wrong though, but I think I saw it somewhere).

    You are saying the actual speed in clock cycles winds up reducing the delay in access times, instead of increasing the speed of a streaming data transfer in progress?

    That's why I never understood why chipsets like the KT333 and the 745 offer a 133 FSB and a 166 memory bus. I think you'd have a hell of a time OCing a processor designed for 133 FSB up to 166. But if it reduces the latency time, it appears to be more clear now.

    But why would things, like say a mouse attached by USB, go to your RAM first instead of going directly to your processor? Doesn't going to your RAM first simply slow the signal down if its final destination is the processor? I thought your RAM simply stores and sends data; it can't calculate numbers?

    I thought the purpose of system memory was simply to provide temporary storage space that is much faster to access than virtual memory on your hard drive?

    If you are constantly doing something active, like moving your mouse around or typing on your keyboard, every one of those actions has to get to your CPU - why send it through memory first instead of going directly to your CPU?


    Benchmarks don't lie :-)
  14. I've never seen a setting in BIOS for Cas 1.5, but it may exist on some boards I haven't used. None of those would be PC133 boards.

    As you suggested earlier, the RAM can't send/receive data to/from the CPU faster than the CPU bus allows. But 200MHz at Cas3 has the same delay time as 133MHz at Cas2: 15ns either way.

    Most stuff does go to your CPU. But I believe some AGP data is cached to RAM without being processed by the CPU. The video card has its own processor and doesn't need the CPU to do everything.

    <font color=blue>You're posting in a forum with class. It may be third class, but it's still class!</font color=blue>
  15. I copied this from another forum:

    In earlier days, the CPU connected to the North Bridge chip via the "host bus". In 1995, Intel introduced the Pentium Pro CPU with the L2 cache chip packaged together with the CPU core. This was the very first time the terms "front side bus" and "back side bus" entered the x86 world. Front Side Bus indicates the bus through which the CPU communicates with the North Bridge chip, as the Host Bus did before. Back Side Bus is the channel through which the CPU core communicates with the L2.

    Back Side Bus is rarely mentioned, because its speed is fixed at a certain ratio of the CPU speed. For example, the Intel Pentium Pro's L2 runs at the same speed as the CPU, while the Pentium II's L2 runs at 1/2 the speed of the CPU core. After that, the L2 cache was integrated into the CPU die, thanks to the advancement of semiconductor technology.

    Front Side Bus has always been an important issue for system performance, because it is the only way the CPU can fetch data from memory and devices. The 486 used a 25MHz, 33MHz or 50MHz FSB, and Intel advanced to 60MHz and 66MHz with the Pentium CPU, while Cyrix introduced a 75MHz FSB in their 6x86/M1 products. Later, Intel introduced a 100MHz FSB in the 440BX/GX chipsets paired with the Pentium II and Pentium III CPUs, followed by VIA's PC133 specification and 133MHz FSB support.

    The latest technology for increasing FSB speed is the DDR (Double Data Rate) and QDR (Quad Data Rate) signaling that the AMD K7 and Intel Pentium 4 CPUs apply. Faster peripherals and faster CPUs require a broader channel to communicate with each other. A broadened FSB allows them to exchange data in a shorter time. With the acceleration from faster FSB speeds, modern systems leap past their ancestors in performance.

    It's not important to know all the answers, as long as you know how to contact someone who does.
  16. So you agree with my theory that an integrated MCH will kill DMA.

    Now what to do??
  17. No. The Hypertransport link is there dedicated to allowing I/O devices to access memory, and DMA will still allow memory to be accessed by these I/O devices without having to be controlled by the processor (one less memory instruction).

    "We are Microsoft, resistance is futile." - Bill Gates, 2015.
  18. I mean not be in charge, but this data will move inside the core, raising traffic inside the chip (on the request interface that connects the HT link to the MCH). The data will have to make many hops. When I say "kill it", I mean losing its advantage of low-latency access.

    Now what to do??
  19. The advantage of DMA has always been less processor utilization. I don't see why this is not true now. The processor still doesn't have to handle memory access instructions and hence has room for other instructions. Latency for I/O devices is usually not critical.

    "We are Microsoft, resistance is futile." - Bill Gates, 2015.
  20. Any request to any PCI/AGP storage component will have to go through a bunch of HT tunnels or bridges to get back inside the CPU, where traffic will be routed to its destination: MCH, L2, HT, or core. I/O is not critical, far from it; the issue comes when an 8-drive 15K SCSI RAID 5 array is used for database storage and the system loads the database. As the CPU needs this data, it will force a turnaround on the MCH - a good 10-cycle loss for the turnaround, breaking the chain of the burst mode on the disk and HT tunnel. I expect AMD to have changed DMA to add a feature that sends a copy directly to the CPU, reducing the load on RAM.

    Now what to do??