
Optical Motherboards

Last response: in Overclocking
May 16, 2008 8:23:04 AM

I would like to gauge interest in optical motherboard technologies that would replace metal (Cu) traces with optical waveguides and lasers to facilitate communication between the CPU, northbridge, and memory (the FSB). Let's assume for a minute that all the technology to do this exists and that it is possible to push the fundamental limits of the physical FSB into, say, the hundreds-of-GHz range.

I am having a really difficult time finding information on:
1) What is currently the fastest anyone has run an FSB on a motherboard?
2) What is the limiting factor? What prevents pushing it beyond this limit? CPU speed/heat? DDR3 heat? Cross-talk/signal attenuation on the metal interconnects?
3) Is it useful for an FSB to be this fast? (I'm not talking about web surfing, but rather data-intensive apps like HD video rendering, scientific computing, HPC, etc.) I know that 100 GHz would be overkill, but essentially this would eliminate any question of the FSB's physical layer limiting performance.
4) If this technology were available today, how fast could you theoretically run a "computer system" using existing chipsets/memory?

With Intel projecting 50- to 100-core processors in the next 5-7 years, CPU clock speeds will not have to be blazing fast to get amazing processing capability. The onus will be on the rest of the system, namely the hundreds of tiny metal wires radiating at GHz frequencies that connect the CPU to the memory and other critical components. An optical chip-to-chip interconnect technology may be inevitable, but I think the timing will be key. At what point will we really need to replace the metal wire with optical connections?

Any thoughts? Is there anyone out there that really knows/understands the bandwidth limitations of existing technologies? Your help/feedback would be greatly appreciated!

BTW, I am asking because I am trying to evaluate the commercialization potential of some technology. I am interested in finding some qualified people to help me in this regard, as I am a scientist and am new to the overclocking/CPU performance world. I apologize if the answers to my questions are buried in a thread somewhere; I tried but couldn't find anything that clearly answered them. Thanks in advance!


May 16, 2008 5:02:05 PM

I know a few years ago MIT was looking at making an optical computer; they were even getting serious about testing it. But it would require a lot of technology that is far too expensive these days, and since you are still limited by the CPUs and such, it would just create a huge bottleneck. If you send information around too fast, you just move the bottleneck elsewhere. It's a great idea, but just not feasible with today's technology.
May 16, 2008 6:42:31 PM

shadow -
I just read up on Nehalem and it is indeed interesting. Thanks for the heads up on that. From what I can tell, Intel will use their "QuickPath" interconnect technology to address the bandwidth limitation between the CPU and memory. This is even more evidence to me that the FSB (both the physical layer of metal interconnects as well as any latencies, etc. on the chip side) is one of the major limitations of today's computer architectures.

However, it seems that even QuickPath will only increase the bandwidth by a factor of 2 (quote from Wikipedia):

"Initial Nehalem Implementation uses a 20-bit wide 25.6 GB/s link (as reported in the Intel Nehalem Speech on IDF). This 25.6 GB/s link provides exactly double the amount of theoretical bandwidth as Intel's FSB 1600 used in the X48 Chipset."

I think that when they attempt to push this to higher and higher speeds, they will run into the fundamental limits of metal interconnects at around 15-20 GHz clock speeds (note that the 25.6 GB/s figure is for a 20-bit-wide bus operating at 4-6 GT/s).
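As a sanity check on the quoted figure, here's the arithmetic. These are my own illustrative assumptions (6.4 GT/s transfer rate, 16 of the 20 lanes carrying payload, and counting both directions of a full-duplex link), not numbers from Intel's spec sheet:

```python
# Rough QuickPath bandwidth arithmetic. Assumed numbers: 6.4 GT/s,
# 16 payload bits out of the 20-bit link, both directions counted.
transfers_per_sec = 6.4e9       # transfers per second (6.4 GT/s)
payload_bits = 16               # data bits per transfer
directions = 2                  # full-duplex link

gb_per_sec = transfers_per_sec * payload_bits / 8 * directions / 1e9
print(gb_per_sec)               # -> 25.6, matching the quoted 25.6 GB/s
```

Under those assumptions the 25.6 GB/s figure falls out directly, which suggests the clock rate (not the lane count) is what would have to scale to reach higher bandwidths.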
May 16, 2008 6:53:02 PM

blacksci -
Thanks for the note. I agree with you that there is no way existing chipsets can run at hundreds of GHz today. I am not suggesting that we run the system at those speeds. All I am saying is that if we can remove any question of physical limitation in the interconnects of the bus (whether it's FSB or QuickPath), then how fast can we run our CPUs before they break down?

Also, on a side note, in reference to shadow's comment about checking out the Top500 list: I think the HPC and supercomputer systems on that list are simply clusters of really high-end systems that integrate hundreds of thousands of processors, so of course they can reach teraflops of performance. I am trying to distill it down to a single-CPU architecture and understand the fundamental limits of each unit. Once we understand the limitations of a single unit, we can easily scale to hundreds of thousands of units working in parallel.

Any more thoughts?
May 16, 2008 6:54:58 PM

BTW, nice article shadow, thanks for the link.
May 16, 2008 6:55:37 PM

Well, it is essentially the major limiting factor in today's distributed computing environments and in architectures with multiple CPU sockets and co-processors. That is one of the main reasons for shoving everything onto the die; if we had much better throughput at the board level, there would be no need.

I would love optical motherboards as well as some type of optical storage. The fact that we are still using DRAM with an FSB today is just absurd.
May 16, 2008 7:32:44 PM

Excellent! :D 
May 16, 2008 7:37:36 PM

There are so many more inefficiencies in current computer systems beyond the physical limitations of electricity. Eventually I think it would be cool to have optical everything, but right now an optical motherboard, even in a gigantic multicore system, would only help so much. Optical is a great thing in communications over long distances right now. I just think everything in the computer would need to run a whole lot faster in order to see the benefit of the speed of light across 4 inches of motherboard.
May 16, 2008 8:13:21 PM

Another thing is the software: no matter how powerful the hardware is, if the software is inefficient then there is no point. That is the case with the current market (very little software is multithreaded).
May 16, 2008 8:21:08 PM

FHDelux said:
There are so many more inefficiencies in current computer systems beyond the physical limitations of electricity. Eventually I think it would be cool to have optical everything, but right now an optical motherboard, even in a gigantic multicore system, would only help so much. Optical is a great thing in communications over long distances right now. I just think everything in the computer would need to run a whole lot faster in order to see the benefit of the speed of light across 4 inches of motherboard.

Actually, it wouldn't; you could see a benefit right away: higher speeds due to no EMI, and those higher speeds could be used instead of multiple paths and layers, which would eventually be cheaper to make.
May 16, 2008 9:05:03 PM


Thanks for the posts. I respectfully disagree with FHDelux as well. The issue is not how fast electrons move through metal wires; it is how fast you can modulate the voltage (in other words, the clock) through them. Electrons move very quickly through metal materials (almost the speed of light); however, frequency is another issue. Not to mention the hundreds to thousands of tiny metal wires sitting at sub-millimeter pitch from one another: they will act like antennas and disrupt the integrity of the data being transmitted, so a 0 might look like a 1 on the other side, and so on.

FHD does have a point, though, that we are in a new paradigm. We are no longer dealing with long-range telecom fiber-optic transceivers. Those needed to be high power and super fast because they were transmitting over just a few data lines across km distances. For on-board optical interconnects, you must now think in terms of ultra-short-range, low-power, highly parallel optical transceiver technologies. The old paradigm simply does not scale to address these needs.

And, as RCrown mentioned, I believe that optical waveguides will be cheaper to fabricate in the long run; we all know how expensive copper is these days. Imagine getting rid of most of the copper lines (you will still need some for power, etc.) as well as all those tiny little capacitors and so on needed to impedance-match each individual line. You can make optical waveguides out of plastics!

Anyway, let's say that this technology exists. How much more would you pay for this kind of motherboard? Say an average high-end motherboard costs $300. Would you pay $400? $500? Or not even consider buying one? I know that the actual realizable bandwidth gain might be minimal at this point due to other factors such as CPU speeds, RAM, etc., but let's say something like this is available in 5 years when you will need it. I don't think I have framed the question properly, but I'm just curious to know what you all think.
May 16, 2008 9:14:56 PM

I wanted to correct my earlier statement that electrons move at almost the speed of light through metals. What I meant to say is that the electromagnetic wave propagates through metals at nearly the speed of light. Electrons actually travel relatively slowly through metals, but that is not what you need to communicate digital signals; it is the EM propagation velocity that matters, not the actual electrons.
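To put rough numbers on that distinction, here's a quick calculation. The values are textbook ballpark assumptions (1 A through a 1 mm² copper wire, a free-electron density of about 8.5e28 per m³, and a signal velocity of roughly 0.6c), not measurements of any particular board:

```python
# Electron drift velocity vs. EM signal velocity in copper.
# Assumptions: 1 A through a 1 mm^2 wire, n ~ 8.5e28 free electrons/m^3,
# signal propagation ~0.6c on a typical board trace.
I = 1.0                 # current, amperes
A = 1e-6                # wire cross-section, m^2 (1 mm^2)
n = 8.5e28              # free-electron density of copper, per m^3
q = 1.602e-19           # electron charge, coulombs
c = 3.0e8               # speed of light, m/s

v_drift = I / (n * q * A)     # drift velocity: on the order of 0.1 mm/s
v_signal = 0.6 * c            # EM wave velocity: on the order of 1e8 m/s
print(v_drift, v_signal)
```

The individual electrons crawl along at a fraction of a millimeter per second, while the electromagnetic wave they carry moves roughly twelve orders of magnitude faster, which is exactly the point above.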
May 16, 2008 9:30:26 PM

It would depend on the relative performance increase as a whole. I wouldn't buy one unless current high-end performance could be doubled, and probably for no more than $400.00 or so.

I would say I am not much of a hardware junkie, though; I usually buy around the price/performance sweet spot or a bit above it.
May 16, 2008 9:48:03 PM

Well, what I heard is that the biggest reason we can't run at 10+ GHz is that doing so would require more power, thus creating more heat. Optical generates little to no heat and barely needs power, and this would let us push to amazing speeds.

This is an interesting read.
May 16, 2008 10:18:53 PM

Two things (both of which have been touched upon by other posters already):
1) The FSB is not really limiting system performance right now: you can always buy an unlocked CPU. Your CPU speed can then be increased without increasing the FSB, and you can still increase the FSB, at least until your memory can no longer keep up. That said, there probably would be performance gains (medium-sized ones, not huge ones) if the FSB were not a limiting factor.
2) The real key here, as another poster pointed out, is not decreasing the delay or increasing the bandwidth between the CPU and other components. Instead, the key is integrating the other components into one chip. Why should our sound, GPU, and network chips (not to mention plenty of others) all be on separate chips? The answer is that, presently, it's cheaper to use a network chip from this company, a video chip from that company, etc., and a motherboard to connect them all. Perhaps in the not-so-distant future your 32-core CPU will have a single core that just delivers the functionality of your network card, GPU, or sound card. You will still need something akin to a motherboard to connect components to that CPU, but a slightly lower-latency connection between the source of your video and the actual monitor doesn't matter much. Likewise, if the connection between your CPU (with networking built in) and the port for your Cat5e cable adds a millisecond of latency (and it would likely add far less), that doesn't matter when the network connection itself has 10+ milliseconds of latency most of the time (and often much more; there's just no way at present for communication across great distances to transcend light-speed limitations).
Where that latency really does matter is when data is being manipulated by multiple components. If the data goes back and forth between your GPU and your CPU a few times (even if it's just some of the data), reducing that delay by integration will not only reduce costs (fewer components, less material to connect them, less power use) but also improve performance, and it will beat the improvements you might gain by using optical connections between separate components.
IMO, at least.
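A quick comparison of the two delays described above, under assumed numbers (a 4-inch trace, a signal velocity of about 0.5c on typical board material, and a 10 ms network round trip; all illustrative, not measured):

```python
# Propagation delay across 4 inches of trace vs. a typical network delay.
# Assumptions: ~0.5c on-board signal velocity, 10 ms network latency.
c = 3.0e8                     # speed of light, m/s
trace_len = 4 * 0.0254        # 4 inches, converted to meters
v_board = 0.5 * c             # assumed on-board signal velocity

trace_delay = trace_len / v_board    # well under a nanosecond
network_delay = 10e-3                # 10 milliseconds
print(network_delay / trace_delay)   # network delay dominates by ~1e7x
```

Under these assumptions the trace delay is under a nanosecond, some seven orders of magnitude below the network latency, which is why shaving board-level delay buys nothing on the network path but can matter for chip-to-chip traffic.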
May 16, 2008 10:24:47 PM

Also, I'm sure someone would sue if you didn't allow their third-party components to be used on your mobo when you offered an all-in-one solution. As much as I hate to say it, the world revolves around money right now, and someone would get upset. Asus, for one: they don't like to be screwed, even if they did it to themselves.
May 16, 2008 11:52:52 PM

MattC said:
...FSB is not really limiting system performance right now - you can always buy an unlocked cpu. Your CPU speed can then be increased without increasing the FSB...

No one was talking about FSB limitations on clock speeds :| The limits of the FSB architecture are inherently a HUGE data-delivery bottleneck in the server/multi-socket world.
May 17, 2008 4:00:46 AM

In this case the bottleneck would be the RAM and the GPU.
May 17, 2008 2:36:31 PM

Well, the GPU would be changed to optical. The big problem would be the RAM and hard drive, as stated by someone else. But HP has come up with the memristor, which is like DRAM but can hold more data and doesn't suffer from data loss when the system is powered down.

You can read about it here.

I think this would replace the hard drive, and everything would be installed on these RAM chips. But I bet it's very costly $$$

Still, this RAM will probably remain a bottleneck, but it would get rid of the bottleneck with the hard drive.

Is electricity 2/3 the speed of light???
May 17, 2008 6:19:11 PM

Only until it goes through a medium, such as a wire.
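The 2/3 figure comes from the dielectric around the conductor: the signal velocity on a transmission line is roughly c divided by the square root of the dielectric constant of the insulator. A quick sketch, using assumed textbook dielectric constants (about 2.25 for polyethylene coax, about 4.4 for FR-4 board material):

```python
# Signal velocity on a transmission line: v ~ c / sqrt(er), where er is
# the dielectric constant of the insulator around the conductor.
import math

c = 3.0e8  # speed of light, m/s
for name, er in [("polyethylene coax", 2.25), ("FR-4 PCB", 4.4)]:
    v = c / math.sqrt(er)
    # coax comes out near 0.67c (the familiar 2/3 figure);
    # FR-4 board traces come out closer to 0.48c
    print(name, round(v / c, 2))
```

So "2/3 the speed of light" is about right for coax cable, while signals on typical motherboard traces are somewhat slower still.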
January 24, 2013 4:34:50 PM

The #1 limiting factor, from what I have researched, is the fact that everything is still going through copper traces. The metal alone limits how fast signals can travel through the motherboard.

Once IBM perfects and mass-produces the optical interconnect, we will be seeing HUGE boosts in performance, upwards of 100 GHz per core.

We're still 3-5 years away from anything remotely similar to that, though. Needless to say, the next 5-10 years are going to be AWESOME for computers!!! :D