Mac and PC users are never going to agree on which platform has the best operating system. When it comes to hardware, though, the PC world has an undisputed advantage. We have far more choice when we pick our processors, graphics cards, and motherboards. If you're using a Mac, you have to wait for Apple to add driver support for the device you want (if it ever happens at all).
Thunderbolt violates the rule that PCs get the coolest technologies first. For almost a year, Mac users have been enjoying Thunderbolt, developed by Intel in collaboration with Apple. Power users with PCs were forced to sit and wait, though a dearth of client devices made it more tolerable to watch the Mac guys get their feet wet with Thunderbolt.
MSI recently released the first available motherboard with Thunderbolt support, its Z77A-GD80, ending Apple’s monopoly on what could be considered the coolest interface since the original USB standard. The platform we received is essentially identical to the Z77A-GD65 we reviewed in Six $160-220 Z77 Motherboards, Benchmarked And Reviewed, aside from a 10 Gb/s Thunderbolt port on the rear I/O panel (replacing DVI output), along with a new 14-phase voltage regulator.

If you aren't yet familiar with Thunderbolt or its implications, we definitely believe that the technology is an interface you're going to want on the next system you put together, even if the ecosystem of compatible devices remains fairly small today.
Of course, Thunderbolt is a name for an Intel initiative originally code-named Light Peak—an optical physical layer used to connect peripherals. Back when Intel first showed off its Light Peak project at IDF 2009, it was thought that optical would enable 10 Gb/s throughput. However, a version employing copper wiring turned out better than expected, allowing Intel to drop costs and deliver up to 10 W of power to attached devices.

The big objection from most enthusiasts is going to be that we already have USB 3.0 showing up as a value-added extra in AMD and Intel chipsets. Why do we need to pay for yet another interface? After all, at 5 Gb/s, a third-gen USB port is almost able to accommodate the peak performance of a modern SSD. Thunderbolt isn't just another peripheral interface, though. It combines DisplayPort and PCI Express into a serial data stream, enabling very powerful connectivity combinations (along with innovative product ideas like MSI's GUS II).
Manufacturers have toyed with USB-based graphics expansion over the years, but none truly succeeded because USB's command set simply wasn't designed to facilitate high-performance graphics I/O. Thunderbolt's low-latency, high-bandwidth interface is, however, making it a robust transport technology with extremely accurate time synchronization support that's ideal for external video and audio devices.
How Does Thunderbolt Work?

Systems with Thunderbolt controllers attach them one of two ways: either the controller is connected directly to PCI Express lanes originating from a Sandy or Ivy Bridge-class processor, or it derives connectivity from the Platform Controller Hub's available PCIe lanes.
We suspect that, on the desktop, most motherboard vendors will hook up through the PCH in order to avoid monopolizing processor-based lanes, which are generally needed for add-in graphics. Such a configuration does open up the potential for a bottleneck, since the DMI connection between processor and chipset is theoretically good for around 2 GB/s of bi-directional throughput. If you have a lot of SATA-attached storage cranking away, it's conceivable that the maximum performance of Thunderbolt could be constrained.
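A little back-of-the-envelope arithmetic shows how quickly chipset-attached storage can eat into the headroom a PCH-attached Thunderbolt controller relies on. This is only a sketch using the approximate figures above, not measured values:

```python
# Back-of-the-envelope DMI budget. All figures are approximations from
# the discussion above, not measurements.
DMI_GBPS = 2.0          # ~2 GB/s usable DMI 2.0 throughput, each direction
TB_CHANNEL_GBPS = 1.25  # one 10 Gb/s Thunderbolt channel = 1.25 GB/s raw

def dmi_headroom(sata_load_gbps):
    """DMI bandwidth left over for Thunderbolt after SATA traffic."""
    return max(0.0, DMI_GBPS - sata_load_gbps)

# Two fast SATA 6Gb/s SSDs reading at ~0.5 GB/s apiece already leave
# less headroom than a single Thunderbolt channel can carry:
print(dmi_headroom(2 * 0.5) < TB_CHANNEL_GBPS)  # True
```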
In the image above, you can see that DisplayPort data routes between the Thunderbolt controller and the PCH's Flexible Display Interface, since that's where display connectors attach. The FDI is its own pathway, specifically reserved for carrying display information, and it does not impact the bandwidth available through DMI 2.0.

PCIe and DisplayPort signals enter the Thunderbolt controller separately, are multiplexed, travel through a Thunderbolt cable, and are de-multiplexed at the other end.
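Conceptually, the controller tags each packet with its protocol before interleaving, so the far end can split the combined stream back apart. A toy illustration of that idea (real Thunderbolt framing is far more sophisticated than this):

```python
# Toy model of the mux/demux step: tag packets by protocol, combine
# them onto one stream, and separate them again at the far end.

def mux(pcie_packets, dp_packets):
    return [('pcie', p) for p in pcie_packets] + [('dp', p) for p in dp_packets]

def demux(stream):
    pcie = [p for proto, p in stream if proto == 'pcie']
    dp = [p for proto, p in stream if proto == 'dp']
    return pcie, dp

print(demux(mux([1, 2], ['frame'])))  # ([1, 2], ['frame'])
```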

Thunderbolt requires active cables, which is why they're so expensive (in the $50 range). Each cable end sports two tiny, low-power Gennum GN2033 transceiver chips that are responsible for boosting the signal passing through to enable 10 Gb/s data rates over runs as long as three meters.
Originally, Thunderbolt was going to be enabled using an optical physical layer and optical fiber cabling. But Intel discovered that it could achieve its 10 Gb/s per channel target at a lower cost using copper wiring. Plans for an optical-based implementation are still on the table, and we expect to see optical cables enabling even longer-distance connections in the future. As we already mentioned, though, copper cabling delivers up to 10 W of power to attached devices. When optical cables do emerge, attached devices will require their own power supplies.

Despite Thunderbolt's many unique attributes, the interface shares certain capabilities with other technologies. For example, it supports hot-plugging. And, like FireWire, it is designed to work in daisy chains. Machines that come armed with Thunderbolt will include either one or two ports, each supporting up to seven chained devices, two of which can be DisplayPort-enabled monitors. So, you end up with the ability to attach:
- Five devices and two Thunderbolt-based displays
- Six devices and one Thunderbolt-based display
- Six devices and one display via mini-DisplayPort adapter
- Five devices, one Thunderbolt-based display, and one display via mini-DisplayPort adapter
Of course, daisy-chaining requires that each device (except for the last one) has two Thunderbolt ports. So, when you attach a display that doesn't have a Thunderbolt port, necessitating a mini-DisplayPort adapter, or only has one port, there is no way to pass the signal on to the next device in the chain. As such, displays go to the end when you're linking multiple components.
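Those chain rules are simple enough to write down as a checker. The sketch below encodes only our reading of the rules above, with hypothetical device labels:

```python
# Check a proposed daisy chain against the rules above: at most seven
# devices per port, at most two displays, and a display attached via a
# mini-DisplayPort adapter (no second Thunderbolt port) must come last.

def valid_chain(devices):
    """devices: sequence of 'device', 'display_tb', or 'display_mdp',
    listed in chain order."""
    if len(devices) > 7:
        return False
    if sum(d.startswith('display') for d in devices) > 2:
        return False
    # No pass-through on an adapter-attached display, so it must be last.
    return 'display_mdp' not in devices[:-1]

print(valid_chain(['device'] * 5 + ['display_tb', 'display_mdp']))  # True
print(valid_chain(['display_mdp', 'device']))                       # False
```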

The Thunderbolt connector itself is physically compatible with mini-DisplayPort, so turning the connector into a display output is particularly easy.
Are there caveats to putting PCIe and DisplayPort data on the same cable? In theory, no. Apple and Intel resolved display quality issues encountered on early hardware through a firmware update in 2011. The interface employs two data channels, each capable of pushing 10 Gb/s in each direction. The solution used one channel for device I/O and the other for display signaling. Even so, we cite 10 Gb/s as Thunderbolt's official spec, since performance is not additive.

Thunderbolt was designed with a number of usage models in mind, one of which is high-bandwidth, low-latency data transfer for audio and video professionals. That has sequential transfers written all over it. And so, we're able to fire up Iometer and cram as many 128 KB blocks through the interface as possible in order to gauge Thunderbolt's potential performance.
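A minimal stand-in for that workload looks something like the following. The target path is hypothetical, and a real run would use Iometer's outstanding-I/O settings rather than this single-threaded loop:

```python
import time

def sequential_read_mbps(path, block_size=128 * 1024, total_mb=256):
    """Stream 128 KB sequential reads and report throughput in MB/s."""
    to_read = total_mb * 1024 * 1024
    done = 0
    start = time.perf_counter()
    with open(path, 'rb', buffering=0) as f:  # unbuffered, raw reads
        while done < to_read:
            chunk = f.read(block_size)
            if not chunk:
                break  # hit end of file/device early
            done += len(chunk)
    elapsed = time.perf_counter() - start
    return (done / (1024 * 1024)) / elapsed

# print(sequential_read_mbps(r'E:\testfile.bin'))  # hypothetical target
```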

In our quest to test the limits of external storage interfaces, we rounded up a handful of external RAID enclosures (subsequently disabling caching).
We got our hands on LaCie's 4big Quadra to use with FireWire 400/800, USB 2.0, and eSATA. It was a little harder to track down a USB 3.0-capable solution, but we managed to snag a DriveStation Quad USB 3.0 from Buffalo Technology. Promise sent us its Pegasus R6 with Thunderbolt compatibility. All enclosures were loaded up with Hitachi DeskStar 7K3000 drives.

Thunderbolt wins hands-down in a raw performance comparison, with the hard drive-based Pegasus R6 maxing out at up to ~925 MB/s at high queue depths. Because the cable's second Thunderbolt channel is used for display data, that ~925 MB/s figure is very close to the interface's 1 GB/s theoretical ceiling in one direction. Even brushing up against that ceiling, Thunderbolt simply destroys the other five interface options.

Notice in that chart above that there are lines for hard drives and lines for SSDs. Crucial lent us six m4 SSDs, just in case the hard drives failed to saturate our connections. What we saw, though, was that the DriveStation Quad and 4big Quadra didn't speed up after replacing disks with SSDs. Throughput from the Pegasus R6 did increase to 965 MB/s, though.
This small performance delta confirms that we're saturating the Thunderbolt interface with six hard drives in RAID 0. With four disks (Pegasus R4), performance tops out at 600 MB/s using Thunderbolt. We also see that the SSD-equipped Pegasus R6 achieves better performance at lower queue depths than the version with hard drives.

The above chart represents peak throughput from our sequential results, derived from testing a single device attached to a Thunderbolt port. According to Promise, as you add devices, aggregate performance slowly starts to slide due to the protocol overhead required to manage multiple devices. Consequently, you're better off with one high-speed device compared to several slower peripherals if you're trying to tax interface bandwidth. Naturally, when we add devices to a USB 2.0 hub or FireWire daisy chain, the aggregate performance of those devices drops as well.
Despite impressive sequential results, Thunderbolt's random I/O performance is substantially weaker, which is often the case when you're working with external interfaces. Dropping an internal SATA drive into an enclosure with some sort of bridge chip negatively affects the disk's native potential. This can be attributed to the interface itself. For example, USB and FireWire completely discard command queuing, resulting in benchmarks that would seem to reflect a queue depth of one at all times. The graph below illustrates:

It is no surprise to see the hard drives deliver low throughput in a test involving random reads. But we'd expect to see SSDs doing better. Of course, our expectation there is based on the performance of a drive attached via native SATA (a 240 GB Vertex 3 should hit ~325 MB/s) at high queue depths. With one outstanding command, the Vertex 3 falls closer to ~70 MB/s. But the USB- and FireWire-based solutions come up short of even that number. What's going on?
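The queue-depth effect is easy to model: per-request latency caps throughput at a queue depth of one, while deeper queues let the drive reach its native limit. The numbers in this toy calculation are illustrative, not measurements:

```python
# Why losing command queuing hurts: with one outstanding request,
# throughput = block size / round-trip latency, regardless of how fast
# the drive is. Figures below are illustrative only.

def random_read_mbps(queue_depth, latency_ms=0.056, block_kb=4, cap_mbps=325):
    per_request = (block_kb / 1024) / (latency_ms / 1000)  # MB/s per slot
    return min(queue_depth * per_request, cap_mbps)

print(round(random_read_mbps(1)))   # ~70 MB/s, latency-bound
print(round(random_read_mbps(32)))  # 325 MB/s, drive-bound
```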
Let's examine the random I/O performance of our external RAID enclosures. If these connectivity technologies cannot queue up commands, can we compensate through the use of multiple drives? After all, these RAID devices have their own controllers to manage I/O requests, hence the support for hardware-based RAID.

Random I/O still looks pretty bad, even with multiple SSDs in RAID; we cannot achieve the same performance possible with a native SATA connection. Even the Pegasus R6 equipped with six Crucial m4 SSDs cannot seem to get past 80 MB/s. Although we're seeing generally poor handling of random I/O by external interfaces, there are two exceptions where this wouldn't be the case.
First, a non-RAID eSATA drive should be able to achieve native SATA 3Gb/s performance, as long as it does not support any other interface. It must be non-RAID and exclusively eSATA because adding support for RAID and other interface technologies requires controller hardware. LaCie's 4big Quadra, for example, cannot achieve native SATA performance via eSATA because it uses Oxford Semiconductor's OXUFS936QSE, a universal interface-to-quad-SATA storage controller (supporting eSATA, FireWire 800, FireWire 400, and USB 2.0). The RAID controller within the Oxford Semiconductor chip is implemented after the eSATA switch, affecting random I/O performance. Unfortunately, only a handful of external enclosures support eSATA and only eSATA.
Non-RAID Thunderbolt devices are also an exception. Inside them, you'll likely find a PCIe-to-SATA controller. This is very similar to the topology motherboard vendors used to add SATA 6Gb/s support to their platforms before it was integrated into chipsets, employing third-party Marvell and ASMedia controllers attached to the core logic through one PCIe link.

In this case, though, non-RAID Thunderbolt drives employing third-party SATA controllers underperform native SATA connections. Seagate's GoFlex Thunderbolt adapter, for example, uses ASMedia's ASM1061 SATA controller, which coincidentally is also on-board our MSI Z77A-GD80. Theoretically, random performance should be nearly identical from both devices. But the GoFlex Thunderbolt adapter only delivers 120 MB/s, whereas we can achieve 160 MB/s with a direct connection to the motherboard's ASM1061.
According to ASMedia, the performance of its ASM1061 depends on vendor-specific BIOS optimization. Creating a product for a broader range of applications, like the GoFlex, means less of the tuning you'd find on a piece of hardware tweaked for a certain motherboard model.

Sequential performance isn't as sensitive to those optimizations, which is something we also see in our SSD reviews. While we keep our BIOS, drivers, and SSD firmware up-to-date, sequential numbers rarely change. Not surprisingly, then, we see identical performance from our Vertex 3 in Seagate's GoFlex Thunderbolt adapter and MSI's Z77A-GD80. Both deliver a maximum of 400 MB/s in sequential reads.
Five Flavors, All Intel
Systems supporting Thunderbolt technology are not all created equal. Although Intel currently holds a monopoly on Thunderbolt-capable controllers, the tech is actually available in several flavors.
| Controller Code Name | Model Number | Thunderbolt Channels | DisplayPort | PCIe Interface | Package Size | TDP | Purpose |
|---|---|---|---|---|---|---|---|
| **Cactus Ridge 4C** | DSL3510L | 4 | 2 outputs | x4 Gen2 | 12 x 12 mm | 2.8 W | PCs, Daisy-chainable devices |
| **Cactus Ridge 2C** | DSL3310 | 2 | 1 output | x4 Gen2 | 12 x 12 mm | 2.1 W | PCs |
| **Port Ridge** | DSL2210 | 1 | N/A | x2 Gen2 | 6 x 5 mm | 0.7 W | Endpoint devices |
| Light Ridge | CV82524EF/L | 4 | 2 outputs | x4 Gen2 | 15 x 15 mm | 3.2 W | PCs, Daisy-chainable devices |
| Eagle Ridge | DSL2310 | 2 | 1 output | x4 Gen2 | 8 x 9 mm | 1.8 W | PCs, Endpoint devices |

*Note: bold indicates a second-gen controller.*
When Apple originally launched its Sandy Bridge-based Macs, many of them supported Thunderbolt using Intel's Eagle Ridge controller, which enables a single Thunderbolt port, support for one DisplayPort device, and a four-lane PCI Express 2.0 interface to the host platform. If you want to start daisy chaining peripherals, however, you need the more expensive Light Ridge controller, which offers two Thunderbolt ports.

But Intel is not sitting idle. With the introduction of its Ivy Bridge-based platforms, Intel’s partners plan to use a second-generation Thunderbolt controller family referred to as Cactus Ridge that comes in two- and four-channel variants. Because a single port requires two channels, the higher-end 4C model supports two ports, while the 2C SKU enables one Thunderbolt port.
According to Intel's partners, Ultrabooks will employ the single-port Cactus Ridge controller due to its lower power consumption. Enthusiast-oriented desktop systems and daisy-chainable devices will employ the Cactus Ridge 4C for its dual-port support. Both Cactus Ridge parts utilize a four-lane PCIe 2.0 interface. It was previously thought that the 2C version would monopolize two lanes, but we've confirmed that those reports were mistaken.

Intel’s Port Ridge controller is also a second-gen development. However, it's specifically designed to enable Thunderbolt-based endpoints. Devices employing the Port Ridge controller have to be set at the end of your daisy chain (if there is one) or used individually. Elgato’s Thunderbolt-based SSD, a portable 2.5” external SSD with a single Thunderbolt port, is a good example of an endpoint device. And because Thunderbolt delivers up to 10 W of power, you won't have to worry about a separate power cable for products like this one.
Why all of the differentiation in Thunderbolt controllers? Intel is trying to make the technology more affordable where it can. We hear that Light Ridge costs somewhere between $25 and $30, and Eagle Ridge supposedly runs around half of that. Port Ridge is, in effect, half of an Eagle Ridge controller, removing the Thunderbolt channel used for DisplayPort signaling. Thus, as a single-port, single-channel controller, Port Ridge allows vendors to partially mitigate the expense of enabling Thunderbolt on their endpoint devices.
Dual Display Support
The Cactus Ridge 4C and Light Ridge controllers both feature dual DisplayPort outputs. On the desktop, one pipeline comes from the Sandy or Ivy Bridge-based HD Graphics engine. The other requires interaction with an add-in graphics card. Of course, the option to attach a second screen is important on high-end systems, which is why enthusiast-class Z77 motherboards will feature the four-channel Cactus Ridge controller. Implementation-wise, that's going to look a little strange because you need a DisplayPort loop-back cable between discrete graphics and the motherboard itself. But it's the only way to establish a second path to the Cactus Ridge 4C controller.
"Why not just hook your monitor up to the graphics card and forget about all of that hassle?" you ask. Remember: Thunderbolt uses an active cable.
An active cable allows Thunderbolt to communicate with displays physically farther away without compromising signal integrity. A long DisplayPort cable isn't necessarily a good option because the signal degrades after two meters. DVI uses only passive cables, causing resolution and refresh rate reductions as length increases (that's why DVI boosters exist). Thunderbolt solves those problems and simplifies display connectivity.
| Thunderbolt-Supported Macs | TB Controller | Thunderbolt Ports | Integrated Graphics | Discrete Graphics | Max # Of Connected Displays |
|---|---|---|---|---|---|
| MacBook Air (Mid 2011) | Eagle Ridge | 1 | Y | N | 1 |
| MacBook Pro (13-inch, Early 2011) | Light Ridge | 1 | Y | N | 1 |
| Mac mini (Mid 2011) 2.3 GHz | Eagle Ridge | 1 | Y | N | 1 |
| Mac mini Lion Server (Mid 2011) | Eagle Ridge | 1 | Y | N | 1 |
| MacBook Pro (15- and 17-inch, Early 2011) | Light Ridge | 1 | Y | Y | 2 |
| iMac (Mid 2011) | Light Ridge | 2 | Y | Y | 2 |
| Mac mini (Mid 2011), 2.5 GHz | Light Ridge | 1 | Y | Y | 2 |
Provided that a mobile system with Thunderbolt includes the Light Ridge or Cactus Ridge 4C controller, you can enable dual-monitor output using only integrated graphics. The 13.3" MacBook Pro serves as a good example.
All MacBook Pros feature the Light Ridge controller. On the 15" and 17" MBPs, specifically, Apple routes a second DisplayPort signal from discrete graphics to the on-board Thunderbolt controller. Yet, the 13.3" model only comes with Intel's HD Graphics 3000. In the case of the smaller MacBook, Apple routes both DisplayPort signals from the integrated graphics subsystem to the Light Ridge controller. The catch is that HD Graphics 3000 only supports two displays. And that's why the 13.3" panel goes blank if you plug in a second 27" Thunderbolt Display.
Ivy Bridge's HD Graphics 4000 engine offers up to three independent displays. So, configurations lacking add-in graphics, but equipped with Light Ridge/Cactus Ridge 4C, still have the potential to drive two Thunderbolt screens without sacrificing the notebook's display.
If your notebook has an Eagle Ridge or Cactus Ridge 2C controller, you can only drive one Thunderbolt display. This is a limitation of the controller, so even if you have discrete graphics, a second Thunderbolt display cannot be attached.
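Our reading of those display rules reduces to a small decision table. The sketch below encodes only what's described in this article; real products may add their own constraints:

```python
# How many Thunderbolt displays can run while the notebook's own panel
# stays lit, per the rules above. Assumptions: HD 3000 drives two
# displays total, HD 4000 drives three, and only Light Ridge and
# Cactus Ridge 4C expose two DisplayPort streams.

def tb_displays_keeping_panel(controller, igp_displays=2, discrete=False):
    ports = 2 if controller in ('Light Ridge', 'Cactus Ridge 4C') else 1
    if discrete:
        return min(2, ports)  # the second stream comes from the add-in GPU
    return min(ports, igp_displays - 1)  # one IGP pipe stays on the panel

print(tb_displays_keeping_panel('Light Ridge', igp_displays=2))  # 13.3" MBP: 1
print(tb_displays_keeping_panel('Light Ridge', igp_displays=3))  # HD 4000: 2
print(tb_displays_keeping_panel('Eagle Ridge'))                  # 1
```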

Technically, it is possible to drive two displays via Thunderbolt using Intel's integrated graphics subsystem on a desktop, but the machine has to meet several requirements for this to happen.
- The motherboard must have a Light Ridge or Cactus Ridge 4C controller.
- Your motherboard must have a DisplayPort input port to route the second display signal.
- Your motherboard must have a built-in DisplayPort output (from Intel HD Graphics 3000/4000), which is looped back to the input port.
Even though it's more work to hook up a loop-back cable, there's a legit reason for its existence. The cable gives you the option to drive a second screen using discrete graphics. Without that, you'd have no way to run a Thunderbolt monitor from a high-performance graphics card.
As a technology, Thunderbolt operates similarly, regardless of the controller you're using. For the sake of our technical discussion, we're using Intel's previous-generation two-port Light Ridge chip.

The Thunderbolt controller on your motherboard is always in host mode, with a second-gen PCI Express interface to either a Sandy/Ivy Bridge-based CPU or PCIe-equipped chipset (and one or more available DisplayPort inputs).
Inside the controller, you find a PCI switch and a collection of DMA engines collectively referred to as the Native Host Interface (NHI). The PCI switch enables connectivity for downstream devices, while the NHI is used for software protocols and device discovery (Plug and Play detection). The Thunderbolt switch marries, if you will, the DisplayPort and PCIe inputs into a single connection.
Recall that each Thunderbolt port requires two channels, one for device I/O and another for display signaling. In the case of Intel's Light Ridge, four channels output to two Thunderbolt ports.
When you have a daisy chain or an endpoint device, Intel's Thunderbolt controller chip provides a PCIe 2.0 x4 downlink. However, the company also enables broader flexibility for attaching multiple components. With four connected, for example, you could configure the downlink as four individual PCIe 2.0 x1 links. According to Intel, Cactus Ridge (2C/4C) can be configured in the following ways:
- 1 * x4: one device of four lanes
- 4 * x1: four devices of one lane each
- 2 * x2: two devices of two lanes each
- 1 * x2 + 2 * x1: One device of two lanes and two devices of one lane
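Those four options are exactly the ways to carve an x4 link into power-of-two widths, which a short enumeration confirms. This is our own sketch, not an Intel-documented algorithm:

```python
# Enumerate every way to split a four-lane downlink into power-of-two
# link widths, widest first, without repeating orderings.

def partitions(lanes, max_width=4):
    if lanes == 0:
        return [[]]
    result = []
    for width in (4, 2, 1):
        if width <= lanes and width <= max_width:
            result += [[width] + rest for rest in partitions(lanes - width, width)]
    return result

print(partitions(4))  # [[4], [2, 2], [2, 1, 1], [1, 1, 1, 1]]
```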
Most of the time you'll see one device attached to one Thunderbolt controller, yielding a 1 * x4 configuration. However, there are situations where a single Thunderbolt controller might control multiple devices.

Apple's 27" Thunderbolt Display is a good example. Its controller is responsible for communicating with a USB hub, a FireWire 800 port, Gigabit Ethernet, and a FaceTime camera. Each device requires an interface to the Light Ridge controller, with its internal PCI switch divided into four single-lane links when it's in switch mode. Each lane is then mapped to a device controller (USB, FireWire, Ethernet, and the camera). This setup doesn't negatively affect the display itself because, remember, I/O and DisplayPort are on different channels.

In its current form, Thunderbolt employs PCIe fanout mapping. This means that daisy-chained Thunderbolt devices are routed through the internal PCI switch of the controller ahead of it in the sequence. As a result, the first device in the chain always enjoys the lowest latency.
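Under fanout mapping, the latency penalty grows linearly with chain position. A trivial model makes the point; the per-switch figure here is a made-up placeholder, and only the linear growth matters:

```python
# Under fanout mapping, each device's traffic traverses the PCI switch
# of every controller ahead of it in the chain.

SWITCH_LATENCY_US = 0.5  # hypothetical per-switch latency

def chain_latency_us(position):
    """Added switch latency for the device at `position` (1 = first)."""
    return position * SWITCH_LATENCY_US

print([chain_latency_us(p) for p in (1, 2, 3)])  # [0.5, 1.0, 1.5]
```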

The PCI Express protocol also influences latency. For example, a storage device on a desktop PC might negotiate precedence over a capture card, and you'd assume that Thunderbolt should probably operate the same way since it uses PCI Express signaling. However, each device plays a role in the PCI arbitration of devices connected downstream. Thus, throttling is quite noticeable if every piece of hardware in a Thunderbolt daisy chain operates simultaneously.

A downside of device arbitration is wasted bandwidth due to inefficient management. This is potentially an issue with fanout mapping because the PCI switch located in the preceding controller manages downstream devices. It's possible to circumvent the drawbacks to fanout mapping by using PCI direct mapping, illustrated in the diagram above. This method passes the Thunderbolt signal through each controller's internal switch, completely bypassing the PCI pathway. It'd ultimately impose a greater negotiation burden on the first system's PCI switch, but it delivers the benefit of greater control over bandwidth/resource allocation.
Thunderbolt controller firmware, as it is implemented by Intel and Apple, uses fanout mapping. Direct mapping is possible and is fully compatible with the Thunderbolt standard. But there's no word yet on if or when it might be a user-selectable option.
Thunderbolt as an interface on Apple's Macs has an advantage with regard to integration and validation. The company's control over the ecosystem (from the operating system to the software to the hardware) ensures the technology works exactly like it should. That's not the case with PCs, though. Right now, Windows-based systems face two specific problems relating to daisy-chaining and hot-plugging.
Both issues relate to how Thunderbolt detects system devices, and are complicated by Windows 7 (x86 and x64) driver limitations that prevent PCIe hot-plugging. This becomes a problem because Thunderbolt relies on PCIe signaling to connect devices. Intel’s answer is special BIOS code that works around the issue. But Intel's partners must also make the requisite changes, either in hardware or through a BIOS update.
Ironing Out Thunderbolt's Kinks
In preparing for this piece, we ran across a handful of different issues. Our MSI Z77A-GD80 motherboard required two BIOS updates to help address them. The first one (v1.0) actually caused a BSOD when a Thunderbolt-based device was removed from a running system. The second one (v1.1 B1), intended to resolve the first, created a problem of its own.

More specifically, BIOS v1.0 required you to connect your Thunderbolt device before powering up. Attaching it to a running machine caused the system to incorrectly detect the device. The only solution was to restart.

In this case, unplugging the incorrectly-detected LaCie Little Big Disk resulted in a reproducible blue-screen. The problem would always resolve itself after restarting with the Thunderbolt device attached.

Updating our system to BIOS v1.1 B1 fixed the hot-plugging problem. However, Windows would then incorrectly detect the Native Host Interface (illustrated in the first picture on this page), which is used for Thunderbolt software protocols and device discovery, resulting in random instability. It is possible to disable the NHI in the BIOS, but that'd be self-defeating, since the NHI is required for hot-plugging.
One other problem with daisy-chaining materialized when we'd restart or wake the machine up from sleep mode, often causing a device in the chain to disappear. Any component downstream from the missing device is subsequently affected, since it's supposed to be attached to that peripheral's PCI root.
We committed to testing three Thunderbolt devices for this story (Promise's Pegasus RAID R6, LaCie's Little Big Disk, and Seagate's GoFlex Thunderbolt). However, our daisy-chaining issue appears to affect devices downstream of the second Thunderbolt device. Each party involved believes they know who's to blame. But rather than point fingers, we'll just say this illustrates the conundrum of Thunderbolt in the PC space, compared to Apple's more tightly-controlled fiefdom.
Fortunately, Intel claims that any of the problems we encountered can be addressed at a software (BIOS) level. We've been told that there is no need to wait for Windows 8 for robust Thunderbolt support from an operating system, since the technology centers on two already-well-established standards: PCIe and DisplayPort. Driver support is seamless. If a PCIe driver exists, you can use it for Thunderbolt.
You might not think that an external connectivity solution would have thermal issues to worry about, but Thunderbolt is quite literally a hot technology.

An infrared image of where a Thunderbolt cable plugs into our motherboard reveals temperature readings above 110° F, even when downstream devices are idle. Under load, we see the cable exceed 120°.
Of course, those temperatures are a result of the active Thunderbolt cable, with two Gennum GN2033 chips on each end. As information moves through the cables, the hard-working data transmission chips heat up and cause those more extreme readings.

Not surprisingly, more space-constrained applications, like our 13.3" MacBook Pro, demonstrate even more alarming thermal properties. In the shot above, the Thunderbolt cable is the one up in the 120°+ range. Next to it, on the left, you can see a FireWire 800 cable. On the other side, there's a USB 2.0 cable. Although those two interfaces look like they're giving off heat as well, they're actually being warmed by the Thunderbolt cable. Fortunately, only the ends of the cable heat up; everything in between stays cool.
Those lofty temperatures aren't a problem if you're using a mini-DisplayPort adapter. The display signal is already demuxed by the controller before it hits the adapter.

So, in comparison to USB and FireWire, Thunderbolt cables get pretty darned hot. But the heat dissipated only makes the plug uncomfortable to handle for any significant length of time; it won't burn you (the same conclusion we reached about gaming on an iPad 3 at maximum brightness).

Despite a less-than-picturesque debut on the PC, Thunderbolt's raw performance is impressive. Boasting throughput close to 1 GB/s, ultra-speedy next-generation external storage solutions are now a reality. Much more than a simple enabler for big disks sitting outside of your machine, Thunderbolt also extends the PCIe bus beyond your motherboard, enabling innovation in some ways we've seen and others that will no doubt surprise us in the next year.
Perhaps Thunderbolt's most glaring weakness is that it's not well-suited to a value audience. Reminiscent of FireWire 800's debilitating premium, Seagate's Thunderbolt-based GoFlex adapter costs a staggering $190. In contrast, formerly-expensive FireWire 800 adapters hover in the $80 range and USB 3.0 adapters sell for a mere $30. You can thank the high price of Intel's Thunderbolt controllers for that, particularly since vendors don't include cables with their Thunderbolt-based devices. Plan on spending another $50 just to make a connection between your new toy and your motherboard.
Intel, however, says it's making a concerted effort to drive down costs with less expensive second-gen Thunderbolt controllers (Cactus Ridge and Port Ridge controllers), and it's providing subsidies for its technology partners to help cover costs.
Despite Thunderbolt’s technological sensibility and resulting performance advantages, enthusiasts should stick to their lower-cost storage controllers, SATA-based SSDs, and internal graphics cards. The number of applications we can think of that require Thunderbolt's capabilities is still tiny. You can achieve high-speed external storage using JBODs, and most folks don't find the limits of DVI cables to be all that constraining. Right now, Thunderbolt is a very niche technology on PC desktops, attractive to high-end audio and video professionals looking for a low-latency, high-bandwidth interface for moving large amounts of data quickly.

Thunderbolt is arguably more promising in the mobile space. We love our notebooks for their mobility. But they typically give up a lot in the way of performance and flexibility for those compact form factors. By externalizing PCI Express and DisplayPort, Thunderbolt has the potential to add fast storage, graphics upgrades, and nice big monitors to small computing devices that couldn't accommodate them before.
There's no doubt that Thunderbolt addresses many of the weaknesses plaguing today's external interfaces. And because of the standards on which Thunderbolt is based, the technology can do things outside of the chassis (be it a desktop or mobile form factor) that simply weren't possible before.