Intel’s X-Lab: Tomorrow’s Network Happens Here

The X-Lab Unlocked

Meet Pete. Pete Cibula, Jr. has a job that most geeks would kill for. His daytime existence is spent making sure that when you want fast service from your cloud service provider (whether that’s OnLive, Amazon, or Google), the speed is there. When you want to transfer half of a terabyte over your LAN, Pete’s work is part of why that process doesn’t take several days. Cars, tanks, slot machines, space shuttles, TVs, and plenty more all share the common thread of Ethernet. Pete’s job is to make sure that when the world needs to move beyond 1 Gb/s this year, the bandwidth will be there, the ports will work, the cabling won’t crush data streams, and the world’s communications will continue to advance.

Pete isn’t alone, obviously. Several manufacturers work in the networking silicon space. But Intel has a position of prominence and legacy that is unique. Most of us hardly give a second thought to Intel’s role in Ethernet development, but the efforts of Pete and his colleagues are literally helping to shape the networking experiences every one of us will enjoy and depend on in the near future.

The work they do happens in several rooms of the Jones Farm 3 building at one of Intel’s Hillsboro, OR campuses. As in our prior Western Digital venture, Tom's Hardware teamed up with pro photographer Gary Wilson to explore and reveal the little-known world of…Intel’s X-Lab.

The Need For Speed

The story behind the X-Lab isn’t just about the race to 10GBASE-T, or about making sure that every server or workstation that needs more than one gigabit port has a device able to reliably supply that bandwidth. The X-Lab exists to pave the way for the future of networking. In more than one way, networking scales alongside CPUs and Moore’s Law. As systems are able to process more data, they need to exchange those greater data loads more quickly in order to maintain real-time functionality. Also consider the role of virtualization in networking. As companies condense five or ten servers into one physical machine, the networking load of those old servers gets jammed into a single box. What might have been 2-4 Gb/s of bandwidth spread across ten systems now has to flow in and out of only one, and that requires fresh infrastructure.
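To put rough numbers on that consolidation effect, here is a back-of-the-envelope sketch in Python. The per-server traffic figures are hypothetical placeholders, not measurements from Intel, but they show why a single gigabit port stops being enough once ten workloads land on one host.

```python
# Back-of-the-envelope look at how virtualization consolidates network load.
# The per-server traffic numbers below are hypothetical, purely for illustration.
per_server_gbps = [0.2, 0.4, 0.3, 0.5, 0.2, 0.3, 0.4, 0.6, 0.5, 0.6]  # ten old servers

aggregate_gbps = sum(per_server_gbps)
print(f"Aggregate load on the one consolidated host: {aggregate_gbps:.1f} Gb/s")

# Compare against a single gigabit port and a single 10GbE port.
for link_gbps in (1, 10):
    verdict = "fits" if aggregate_gbps <= link_gbps else "does NOT fit"
    print(f"  {link_gbps:>2} GbE link: {verdict} ({aggregate_gbps / link_gbps:.0%} utilized)")
```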

Since 1978, Intel has led the industry through multiple speed transitions, from 10 megabit Ethernet through 100 megabit and gigabit Ethernet and now on to 10 gigabit Ethernet (10GbE). The IEEE may be responsible for the creation and supervision of the underlying specifications, but someone has to get their hands dirty and do the years of costly hardware development and validation. A huge chunk of that work gets done by Intel.

X-Lab Inside

Pete (right) and Joe Edwards (left) guided us into Jones Farm 3, past security, and through the labyrinthine nest of laboratories and cubicle farms that is home to Intel’s LAN Access Division (LAD).

The LAD is a global operation, with silicon design centers in Israel and Austin, Texas; software development in Oregon and Poland; network interface card design and operations in Oregon; and several design centers in Asia and Europe. Knowing the magnitude of the work done here, and also knowing that the X-Lab is the epicenter of Intel’s 10GBASE-T networking platform test and validation efforts, I went in expecting something like a data center—some expansive room with raised flooring and rack after rack of test equipment. What I encountered was something closer to the equipment closet in the hall outside that data center.

The X-Lab, as Pete described it, is a “compact but efficient command-and-control center for twisted-pair Ethernet conformance testing.” Half of the chamber was dominated by chrome baker’s racks laden with what seemed an endless supply of Ethernet cabling. There were racks supporting test equipment and ever more bundles of cables. But I’d be stunned if the room was larger than 1000 square feet. Six of us standing amidst the benches and tools were rubbing elbows and finding it awkward to maneuver. Remodeling to expand the LAD’s test capabilities was going on in the adjacent room, so our conversations were constantly punctuated with the sporadic rhythm of hammering.

Yet, even in those cramped quarters, geek humor prevails. Pete has a Guy Fawkes mask perched atop one instrument, and a crown from some buffalo wing establishment adorns another shelf. Some years ago, one of Pete’s young children drew him a poster-sized landscape in crayon, scrawled unabashedly in green and yellow. The X-Lab crew still have it taped to the inside of the lab’s only door. They shrug with acceptance of their long hours and joke that it’s the most sunshine they usually see in a day.

Big Effort, Little Chip

If our tour was any indication, this is the current big Kahuna in the X-Lab: Intel’s 40 nm, 10GbE Twinville silicon. We photographed it here in both its LAN-on-motherboard (LOM) and PCI Express network interface card (NIC) incarnations. As we proceed, you’ll begin to get a sense of the scale and resources Intel is devoting to making 10GbE happen.

“Ethernet is important for all servers, PCs and workstations, and it is the critical backbone of today’s datacenters,” said Pete. “That’s one of the reasons Intel reorganized in 2009 to combine Intel’s networking, server, and storage groups into a single data center group. 10GbE has already been deployed in many data centers.

“Ethernet can be deployed over different types of physical interconnects. When a new, faster version of Ethernet is introduced, it starts out using expensive fiber optic connections. Eventually, as costs and power decline, it moves to the less expensive, copper interconnects the industry refers to as ‘BASE-T.’ These BASE-T connections use the familiar RJ-45 connector, which looks like a phone plug, only a little larger. This is the connection you see on nearly all servers, desktops, and laptops or in the Ethernet switches scattered throughout your favorite LAN party.”

To succeed in the market, 10GBASE-T needs low-cost cabling, run lengths of up to 100 meters, and backwards compatibility with gigabit Ethernet networks. The amount of silicon development, interface testing, power analysis, software validation, and everything else required to bring a new networking platform into production is almost overwhelming.

10GBASE-T...And Beyond

Twinville is Intel’s next-generation 10GBASE-T product, first shown at the Intel Developer Forum last September. It will be the industry’s first single-chip, fully integrated, dual-port 10GBASE-T controller. Why does that matter? Because until now, 10GBASE-T designs have included at least two chips and a myriad of support components. This single-chip design will allow 10GBASE-T solutions to finally be small, cheap, and power-efficient enough to be integrated onto server motherboards. Expect this to happen in 2011.

Let’s take it a step further. Why do we specifically need 10GBASE-T? Because today, if you want to go faster than gigabit Ethernet, 10GBASE-T offers the easiest upgrade path. Other flavors of 10GbE and different network fabrics require major infrastructure changes (new switches, cabling, and so on). 10GBASE-T’s backward compatibility with existing GbE networks means you can install 10GbE adapters that will work with your existing equipment, and when you’re ready to upgrade switches and cabling, the move is easy and seamless. In fact, the transition is so smooth that the Dell’Oro Group predicts 10GBASE-T port shipments will grow from 5 million units in 2011 to more than 25 million by 2014.

By the time 10GbE becomes that prominent, the world will start to need terabit Ethernet switches, and we’ll be repeating the same situation we have today with another zero added to each number. Fortunately, the just-released 40GbE and 100GbE standards will help ease the transition.

NIC Evolution

“Twinville will be used on our fourth-generation 10GBASE-T adapter product,” said Pete. “Our first-generation card here at the top was a single-port, 10GBASE-T adapter. It used the 82598, had a third-party PHY, and it came in right at the top of the 25 W limit for PCI Express. The PHY itself burned approximately 14 W of that power budget and required an active cooling solution. That product was introduced in 2007. We then ‘upgraded’ that 82598 MAC by coupling it with a second-generation, 65 nm 10GBASE-T PHY. With that single-port solution, we were able to lose the active cooling. Losing active cooling is critical for LAN-on-motherboard solutions. You don’t want more fans…although blue LEDs might be nice! Anyway, we called this the WWF heatsink because it almost looks like the WWF logo. That design gave us enough surface area to dissipate the heat without a fan, and the total power was significantly lower—about 16 W.

“Then, using a similar 65 nm device, we were able to use the same passive [PHY] heatsink but go to a dual-port solution. Total power bumped up to approximately 20 W, which is still less power than the first-generation device, but with two ports. It uses the 82599 media access controller and has less board complexity. There are fewer power components, for example.

“And now, moving to our 40 nm device, Twinville, you can see that the MAC is integrated with the PHY. Actually, it’s not just one physical interface—it’s two. Also, it supports three speeds—100 Mb, 1 G, and 10 G—whereas the others only supported two speeds. And it’s expected to consume about 10 W. The Twinville we have in testing now uses an active heatsink, but by production it’ll be passive, just like the second-gen card.”

We Like To Conform

Pete struck me as a humble guy and a true team player. In the several hours I spent with him, he only once mentioned a personal accomplishment: “My one claim to fame is probably that I’ve figured out how to automate a lot of this signal testing. If it’s not automated, it’s very time-consuming.”

One of the many signal tests done in the X-Lab involves physical layer transmitter conformance. This characterizes the transmitted signal quality, or, as Pete put it, “makes sure that the signals sent out on the wire are wiggling correctly.” Intel tests these parameters using a mix of equipment, including RF power meters, RF spectrum analyzers, RF vector network analyzers, and oscilloscopes. Quite often, the X-Lab team has had to develop its own custom software tools to ensure accurate and repeatable test results.

With the lab’s automated test implementation, a basic 10 Mb/100 Mb/1 Gb test pass can be completed in about six hours. Only about 30 minutes of this requires hands-on operator interaction. Using the prior, manual test bench methods, the same test pass would take about two weeks for 10 Mb and 100 Mb alone. Similar time savings are realized in the lab’s 10GBASE-T testing.
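The X-Lab’s tools are custom and in-house, but bench automation of this sort typically drives instruments over GPIB or LAN using SCPI commands. The sketch below, written against the open-source PyVISA library, is a hypothetical illustration only; the instrument address, SCPI strings, test frequencies, and pass/fail limit are all assumptions, not part of Intel’s actual test suite.

```python
# Hypothetical sketch of an automated transmitter conformance sweep.
# Instrument address, SCPI commands, frequencies, and limits are illustrative
# assumptions only; exact commands vary by spectrum analyzer model.
import pyvisa

def measure_peak_dbm(resource_name: str, center_hz: float, span_hz: float) -> float:
    """Point a spectrum analyzer at a band, sweep once, and return the peak level."""
    rm = pyvisa.ResourceManager()
    sa = rm.open_resource(resource_name)          # e.g. "TCPIP0::192.0.2.10::INSTR"
    try:
        sa.write(f"FREQ:CENT {center_hz}")        # set center frequency
        sa.write(f"FREQ:SPAN {span_hz}")          # set span
        sa.write("INIT:IMM; *WAI")                # trigger a sweep and wait for it
        sa.write("CALC:MARK1:MAX")                # move marker 1 to the highest peak
        return float(sa.query("CALC:MARK1:Y?"))   # read the peak level in dBm
    finally:
        sa.close()

if __name__ == "__main__":
    LIMIT_DBM = -10.0                             # placeholder conformance limit
    for freq in (31.25e6, 62.5e6, 125e6):         # illustrative tone frequencies
        level = measure_peak_dbm("TCPIP0::192.0.2.10::INSTR", freq, 1e6)
        status = "PASS" if level <= LIMIT_DBM else "FAIL"
        print(f"{freq / 1e6:7.2f} MHz  {level:6.1f} dBm  {status}")
```

A script along these lines can walk through every pair, speed, and test point unattended, which is how the two-week manual effort collapses into a six-hour run with half an hour of operator time.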

Signal Analysis

Wrist-deep in Twin Pond with A1-stepping Twinville silicon, we moved through some signal integrity measurements. Specifically, we examined transmitter droop, the decay of the transmitted waveform over time, using an oscilloscope. Next, we switched to the spectrum analyzer. At this point, Pete stopped in his tracks, realizing that he’d forgotten to lecture on the channel requirements for 10GBASE-T and how those played into the change in instrumentation.

“CAT6a has a nominal channel bandwidth of 500 MHz. Some manufacturers extend that to 650 MHz,” he explained. “So, it’s a very broad range. The signal encoding is so complex that it’s very difficult to make a measurement on a time domain signal. To address that issue, the IEEE specifies a series of test measurements that are defined more in the RF realm than in the time domain. Right now, we’re selecting one of the four twisted pairs, routing it into the spectrum analyzer, which is acquiring the signal for us, and then we do some offline analysis of the signal. In this case, the specific measurement is called spurious-free dynamic range. We’re using the analyzer to acquire a two-tone test signal and we’re looking at differences between the signal’s minimum and maximum frequency peaks. We do that with multiple pairs of tones, and taken together, the measurements define the linearity of the system.”
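For readers curious what that offline analysis looks like in rough form, here is a short NumPy sketch that estimates spurious-free dynamic range from a two-tone capture. The sample rate, tone frequencies, and distortion level are synthetic stand-ins for a real analyzer capture, and the definition used here (strongest spur relative to the carrier) is one common convention; the X-Lab’s own analysis is certainly more involved.

```python
# Rough illustration of a spurious-free dynamic range (SFDR) estimate.
# Synthetic two-tone data stands in for a real spectrum-analyzer capture;
# the sample rate, tone frequencies, and spur level are arbitrary.
import numpy as np

fs = 1.6e9                                                # sample rate (Hz)
t = np.arange(65536) / fs
f1, f2 = 200e6, 210e6                                     # the two test tones
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
signal += 1e-3 * np.sin(2 * np.pi * (2 * f2 - f1) * t)    # small intermodulation spur
signal += 1e-5 * np.random.randn(t.size)                  # noise floor

# Window the capture, transform it, and normalize to the strongest bin.
spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
power_db = 20 * np.log10(spectrum / spectrum.max())
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The carrier is the strongest tone; the worst spur is the strongest bin
# sitting well away from both test tones.
tone_mask = (np.abs(freqs - f1) < 2e6) | (np.abs(freqs - f2) < 2e6)
carrier_db = power_db[tone_mask].max()
spur_db = power_db[~tone_mask].max()
print(f"Estimated SFDR: {carrier_db - spur_db:.1f} dBc")
```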

Hot And Cold

Networks have to function everywhere—from the frigid Subarctic to the enclosed swelter of an engine room. X-Lab techs call this Thermonics temperature-forcing device the “Elephant Arm,” and it can drive component temperatures below -55°C and above 250°C. The idea is to focus a heated or chilled air stream onto the case of the device under test.

Techs needed extreme cooling when working on first-generation devices owing to their 17 W to 25 W power envelopes. The amount of heat generated under load required significant air conditioning to get the chip down to 0°C. Pete would have demonstrated, but the machine is loud enough to require ear protection, and we didn’t have any on hand.

CAT Fight

CAT5e cabling is very common. I have it running through my own walls for gigabit structured wiring. The cabling is specified to 100 MHz and simply uses four pairs of loosely twisted copper wiring. CAT6 steps up to 250 MHz, features tighter twisting, and adds a dielectric spline that separates the four twisted pairs and helps prevent energy from one pair bleeding into its neighbors. For 10GBASE-T, the target is CAT6a cabling, which includes even more stringent control over pair twist, as well as manufacturer-specific design features for improving immunity to alien crosstalk noise.

Pete set 10GBASE-T cabling in historical perspective to illustrate some of the increasing problems networking engineers face.

“10BASE-T has been around for a long time, and it’s even been demonstrated to work over barbed wire. There’s that much signal-to-noise ratio margin, even on a very lossy channel like barbed wire. It’s 6 V peak to peak, best-case, and the pulses are very wide. It takes a lot of bad things happening in the channel for a receiver not to see it.

“100BASE-TX requires you to do some funky things to the signal. Instead of the two-level signal in 10BASE-T, there’s a three-level signal. If you look at the signal energy, it’s a 2 V peak to peak system, so there’s less power, but all of this scrambling and pulse shaping gets it to work. In 1998, that all wasn’t very straightforward, but with modern signal processing, it’s pretty easy to do 100BASE-TX.

“Now, for gigabit, you start to get into some magic. There is more noise power than signal power, meaning we have a negative signal-to-noise ratio. That means if there’s a lot of background noise—like in this room now—and if we get that hammering noise so loud that you can’t hear me, that’s like a gigabit Ethernet noise environment. In 2000, 2001, there were some signal processing techniques applied, some special encoding and decoding that, at a high level, means you’re taking a best guess. You know what you’re sending out, and the receiver knows that there are certain expected combinations that will be coming back. So the system takes a best guess, to put it crudely, at what that data is. It makes the right guess in all but roughly one in 10¹⁰ tries.

“But gigabit sucked up a lot of power and required, for the time, a lot of gates. At the gigabit inflection point, you started to have more gates than analog circuitry because we’re sending highly-encoded analog signals—those wiggly things with an amplitude and everything else. To encode and decode that properly requires a lot of logic gates. For 10GBASE-T, you just carry that concept to the next level. If gigabit is a whisper in a rock concert, 10GBASE-T would be like a whisper in a nuclear blast. It’s that much more noise power compared to the signal power. But with today’s digital signal processing techniques, you can make a signal have more apparent power. That’s one way to think of it. Again, the ratio of analog content to gates in 10GBASE-T is—wow. It’s very significant, with much, much more digital than analog content. This is good because it suddenly becomes very Moore’s Law-friendly, plus you get the advantage of power savings as you go to each new process node. In the lab here, we have 90, 65, and 40 nm technologies. The power savings associated with each generation has been key for our 10 Gb NIC products and the broader 10GBASE-T deployment in servers and switches.”
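Pete’s whisper analogies boil down to simple decibel and error-rate arithmetic. The figures below are illustrative assumptions chosen only to mirror his description (more noise power than signal power, and an error rate on the order of one in 10¹⁰); they are not measured values from the lab.

```python
# Decibel and error-rate arithmetic behind the "whisper in a rock concert" analogy.
# All power figures and the processing-gain number are illustrative assumptions.
import math

def to_db(power_ratio: float) -> float:
    """Convert a power ratio to decibels."""
    return 10 * math.log10(power_ratio)

signal_power = 1.0   # arbitrary units
noise_power = 4.0    # more noise than signal, as Pete describes for gigabit

raw_snr_db = to_db(signal_power / noise_power)
print(f"Raw SNR on the wire: {raw_snr_db:.1f} dB")         # negative, about -6 dB

# Echo/crosstalk cancellation and coding effectively buy back SNR;
# the gain figure here is a placeholder, not a real 10GBASE-T number.
processing_gain_db = 12.0
print(f"Effective SNR after DSP: {raw_snr_db + processing_gain_db:.1f} dB")

# An error rate of 1e-10 at 1 Gb/s works out to roughly one wrong
# "best guess" every ten seconds of fully loaded traffic.
bit_rate = 1e9
bit_error_rate = 1e-10
print(f"Seconds per bit error: {1 / (bit_rate * bit_error_rate):.0f}")
```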

Miles Of Cabling

A lot of the X-Lab’s testing is done in different ways over various segments of cable. The lab contains cabling from seven manufacturers spanning 14 cable types. Pete said that he stopped doing the math at seven miles and two tons of cable, “just because…it’s a lot.”

Intel has to look at so many varieties because they’re constructed differently, and so have different characteristics. The group needs to ensure that its products will work over whatever is commonly installed. “We don’t have every manufacturer in the world in here,” said Pete, “but we asked the cable distributors who is the most popular and widely deployed, and that’s what we got.”

As you wonder at all of that cabling, think about your feet. In particular, think about your feet dragging across a carpet, then touching the light switch with your fingertip. Friction followed by static electricity buildup and discharge, right? Now imagine snaking those miles of cables through walls and crawlspaces, the friction causing electrostatic buildup on the cabling jackets. What do you suppose might happen when someone goes to plug that cable into a patch panel? Maybe nothing…or maybe not.

“I was a cable discharge skeptic until we actually pulled cables into the lab and I got zapped by a cable,” said Pete. “Guys were working next door to us, pulling cable through conduits and onto racks and trays. So you pull the cable, you plug it into a port—well, the switch is going to provide a passive ground for that charge. We’re verifying the immunity of a networking device—a port—to that type of a discharge. It’s different than the typical ESD testing we do anywhere else in Intel or at any semiconductor company. Those involve looking at the movement of people, like across a carpet, or the movement of machines building up a charge as they operate. But this is a special type of ESD that really only appears in the networking world.”

  • dogman_1234
    Anyone else notice the Guy Fawkes mask in the background?
  • super_tycoon
    dogman_1234: Anyone else notice the Guy Fawkes mask in the background?
    Its existence is noted in the text for pic3, though I can only wonder why he has it. Is it good taste to associate yourself with 4chan and anon nowadays?
  • gmoney86
    I am not sure if I ever saw the sign to the X-Lab when working at Jones Farm, but I did always wonder what went on in the labs that were similar to it. They kind of looked like IT work rooms to me, though it makes sense to have a need for oscilloscopes, soldering irons, networking tools, etc. for certain R&D projects.
  • CvP
    In picture #4 (elephant arm) :D

    Thanks Toms for this article.
  • scook9
    Awesome article! I just finished my BSEE degree and now work at a network company where I help engineer servers, so this is right up my alley!
  • This all started in 1990 with the creation of EtherExpress 16 by a handful of people led by a visionary leader, Steve Kassel.
  • williamvw
    super_tycoon: Its existence is noted in the text for pic3, though I can only wonder why he has it. Is it good taste to associate yourself with 4chan and anon nowadays?
    My guess is that it was just a fun-looking mask someone had brought to the lab, perhaps because they also enjoyed "V for Vendetta." (I did!) I'd wager that the X-Lab crew had no idea of the mask's fleeting association with 4chan's anti-Scientology protests, much less the religious motivations behind Fawkes's attempted regicide. Let's not accuse good people without cause.
  • chovav
    Excellent article Tom (William actually). Nice reading, informative and geeky, just the way I like it. Amazing to see that they transfer 76TB in just one test (500,000,000*1518*100). Good job!
  • dEAne
    I love this article - thanks for this info.
  • williamvw
    chovav: Excellent article Tom (William actually). Nice reading, informative and geeky, just the way I like it. Amazing to see that they transfer 76TB in just one test (500,000,000*1518*100). Good job!
    Yeah, I was stunned. I honestly expected some automated tests, maybe a few guys with scopes taking occasional signal readings -- NOTHING like what I saw. I'd assumed that a technology as old as Ethernet was pretty much a done deal and didn't require much hand holding at this point. I couldn't have been more wrong.