Quad-core vs. single core for compiling software faster

badger101101

Distinguished
Nov 28, 2006
72
0
18,630
I am a software engineer, and I hate waiting for my builds to compile. I will soon be working on a C++ project that is over 1 million lines of code, and I was wondering if anyone here has any data showing how much faster a quad-core such as the QX6700 compiles compared to a single core such as my Athlon 64 3400+ (Newcastle). Generally 2 threads are utilized per core when compiling, so I am thinking that running 8 threads over 4 cores should theoretically improve compile times significantly. Any thoughts or personal experiences?

I think it would be a great addition to the THG CPU charts.
 

Dr_asik

Distinguished
Mar 8, 2006
607
0
18,980
Regardless of multi-threading, the sheer speed of the faster Core 2 Duos can certainly make a remarkable difference over your Athlon 64. As all C2Ds are already dual-core processors, I don't think you'd see a big difference from going quad-core, except a larger hole in your wallet.
 

mr_fnord

Distinguished
Dec 20, 2005
207
0
18,680
Are you using a multithreaded compiler? How are your library dependencies structured? If you happen to have a multithreaded compiler and a widely branched dependency tree, you could well see significant performance gains with a 4-core processor. Like everyone is saying, you should also see a pretty significant performance improvement moving to a C2D, even in single-threaded apps.

Why are you recompiling the entire application? I sincerely hope your million-line project is logically divided into libraries and effectively segmented; otherwise you'll have worse problems than waiting a couple of minutes for a build. You should also be able to build selectively, rebuild a single module, or use any of a number of other options, depending on your compiler, to speed up build times.
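As a sketch of what that segmentation buys you (the library and directory names here are made up), a per-module build layout lets the build tool recompile only the pieces whose sources changed:

```make
# hypothetical layout: each library lists its own sources as prerequisites,
# so `make` rebuilds a library only when one of its files has changed
app: main.cpp libcore.a libnet.a
	$(CXX) main.cpp -L. -lcore -lnet -o app

libcore.a: core/*.cpp core/*.h
	$(CXX) -c core/*.cpp && ar rcs libcore.a *.o

libnet.a: net/*.cpp net/*.h
	$(CXX) -c net/*.cpp && ar rcs libnet.a *.o
```

Touch one file under net/ and only libnet.a plus the final link get redone; the clean-build pain only shows up when something low-level that everything depends on changes.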
 

badger101101

Distinguished
Nov 28, 2006
72
0
18,630
I am using VS 2005, which can compile on multiple threads as long as you tell it how many to use. I realize that a C2D alone would offer a great improvement, but I'm interested to see how a QX6700 running 8 threads would stack up against, say, an overclocked C2D X6*00 running 4 threads. Does compiling scale as well across threads/cores as it does with higher clock speeds?

And yes, most of my builds are incremental and selective on a per module basis. However, there are times where I have to do a clean build on a very large portion of the application and it takes much longer than I would like it to.

I want to put together a new system soon and want to maximize the performance for my needs. Thanks.
 
Depends on the build setup and how the code is broken up. GCC itself isn't multithreaded, but if you drive it with make you can tell make to spawn as many parallel compiler processes as you want (make -j). However, if you have just one huge monolithic .cpp file, or your dependencies force everything to build in sequence (a.h is included in b.h and b.h is included in c.h), then you'll only see one process doing real work at a time.

But generally, you will see a pretty decent improvement going from 1 to 4 threads. I usually see both cores on my dual-core box at 95-100% during compiles with 3 threads spawned. I've also heard from people with dual quad-core workstations that compiles are *really* fast thanks to the parallelization. I'd suggest a pair of low-end quad-core Xeon 5300s over a QX6700 if you're aiming to compile things quickly; the pair isn't much more expensive than a QX6700 but will be a good bit faster.
 
Two Xeon 5310 1.6 GHz CPUs cost $910 for the pair. A good dual-socket 771 motherboard is about $400, and four 1GB sticks of DDR2-667 FB-DIMM memory are about $500, so CPUs + board + RAM comes to $1810. A single QX6700 is ~$1000, a good P965 motherboard is $120 or so, and four 1GB sticks of DDR2-800 are about $500, for a total of $1620. That's a little less expensive, but in well-threaded workloads eight 1.6 GHz cores perform roughly on a par with four ~3.0-3.1 GHz cores. The eight Xeon cores are a little more expensive but also a little faster.
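A quick sanity check of those totals and the aggregate-clock comparison (all prices are the rough estimates quoted above, and summing core clocks assumes a perfectly parallel compile):

```python
# price totals quoted above
xeon_total = 910 + 400 + 500   # two Xeon 5310s + dual-socket 771 board + 4x1GB FB-DIMM
qx_total = 1000 + 120 + 500    # QX6700 + P965 board + 4x1GB DDR2-800
print(xeon_total, qx_total)    # -> 1810 1620

# crude aggregate clock for perfectly parallel work
print(8 * 1.6)                 # -> 12.8 (GHz total across eight Xeon cores)
print(4 * 2.67)                # -> 10.68 (GHz total across four QX6700 cores)
```

12.8 GHz spread over four cores is 3.2 GHz per core, which is where the "roughly equivalent to four ~3.0-3.1 GHz cores" figure comes from.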

Not to mention that the socket 771 setup is very upgradeable: the Xeon 5310s are the absolute slowest quads for that socket, while the QX6700 is as fast as anything currently made for 775. Maybe there will be a 3.33 GHz quad on 775 before it's retired for LGA1366, but that's not much upgrade headroom. Two 3.33 GHz quad-core Xeons, on the other hand, would be a BIG upgrade on the 771 board: they would double the speed of the compiles without requiring a new board and RAM.
 

badger101101

Distinguished
Nov 28, 2006
72
0
18,630
Distributed building ehh? That's a cool concept. How does it handle dependencies?

In regards to the quad-core Xeons vs. the QX6700... does the Xeon outperform because of its architecture/cache, etc., or are you suggesting them because they scale well on a two-socket motherboard? Clock for clock, are the Xeons that much better? I don't really understand the difference in architecture; for example, why is a Xeon superior in a server environment?

Would a pair of QX6700s (if a motherboard even supports that) perform just as well as the pair of Xeons? I'm attracted to the higher clock speed of the QX6700, but if the Xeons really are that superior, I will probably go with them.
 
Uh, it all depends on your particular setup. There are dual-socket 771 boards in regular ATX as well as Extended ATX/SSI MEB form factors that are bigger than ATX. The ATX boards are like any other ATX board and will fit in the same cases; the EATX ones need a larger case. Either way, with chips that dissipate 80+ W each, I'd aim for a bigger case, or at least make sure you have plenty of airflow. Also be aware that server-chip HSFs are very loud, as they spin at 5000+ RPM.

The Xeon boards need an 8-pin EPS12V connector, though some of the larger ATX12V PSUs have this connector as well. As far as sizing goes, 500-550 W is generally recommended for a DP server. That will cover the CPUs, fans, the 8-12 sticks of RAM the boards take, and several SCSI hard drives. Most server boards have IGPs, so adjust that figure for whatever GPU you use and however many disks are in your system.

Servers are not all that difficult to build and configure; I've done it. Just realize that you're building a working machine, not a toy, and choose parts accordingly.

badger101101: Distributed compiling is pretty neat. I use distcc on my little home network so that my 5-year-old notebook can do most of its compiling on my X2 4200+ desktop. You can even cross-compile if you want to; that's what I do. Distcc works by having the host machine (the one requesting the build) send out work packets containing the code to be compiled, along with the headers it needs, to the helper machines. The helpers can then simply crunch through the compiling without needing the headers and such locally. This generates a lot of network traffic, as some packets can be several megabytes in size, and it's common to send several a second if there are several helpers or the helper machines are fast. A 100 Mbit LAN should still be fine, though; gigabit is massive overkill. If you want some more info on distcc, look here.
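For reference, a minimal distcc setup looks something like this (the host name, subnet, and job counts below are made up for illustration):

```shell
# on the helper machine: run the distcc daemon, accepting jobs from the LAN
distccd --daemon --allow 192.168.0.0/24

# on the machine whose code is being built: list helpers, then build in parallel
# ("desktop/4" means the desktop accepts up to 4 concurrent jobs)
export DISTCC_HOSTS="localhost desktop/4"
make -j8 CC="distcc gcc" CXX="distcc g++"
```

The -j count should roughly match the total number of cores across all the listed hosts, so slow local preprocessing doesn't starve the fast remote compilers.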

The Xeons are the same chips as the Core 2s; the advantage is more sockets. A single Xeon X5355 at 2.67 GHz will perform rather similarly to a QX6700, also at 2.67 GHz. The X5355 will be a tad slower because its FB-DIMM memory has much higher latency than the unbuffered DDR2 the QX6700 uses, but you get my point: same chips, just more sockets.

One cannot run the QX6700 in pairs. Nobody has wired socket 775 to a Xeon 5000-series chipset, which is what's needed for DP operation. I'd be willing to bet that if somebody rewired a QX6700 for socket 771, it just might work: the QX6700 and Xeon X5355 are otherwise identical silicon, apart from a slightly different FSB speed and multiplier. It's not like the Opteron line, where the chips are actually physically different: the UP chips have 1 HTT link, while the DP/MP chips have 3 links and use a different socket.