
Motherboard: Quintessential Supermicro

Supermicro 5046A-XB: X58 Workstation Barebones

The C7X58 motherboard tied into the SuperWorkstation 5046A-XB is typical Supermicro fare. That is to say, the board is somewhat plain to look at, yet meticulously laid out, passively cooled, and clearly designed with reliability in mind.

Because Intel isn’t yet shipping the single-socket “Nehalem-WS” Xeon family, Supermicro’s first-generation workstation design must center on the X58 chipset and accommodate Core i7 processors. It’s worth noting here that Intel is expected to continue supporting the workstation market with X58 once Nehalem-WS emerges. However, that single-socket Xeon will support ECC memory, enhancing the platform’s reliability story beyond what either Intel or Supermicro can boast right now. Supermicro is claiming support for all Core i7 and upcoming Nehalem architecture-based processors, so we assume that this machine will include single-socket Xeon compatibility when the time comes. With that being said, we wouldn’t hesitate to use Core i7 today, even in a true workstation build.

Like most enthusiast boards out there, the C7X58 comes armed with six DDR3 memory slots in a triple-channel arrangement. Official support caps out at 24 GB of DDR3-1600/1333/1066/800 using ECC or non-ECC modules.

The X58 I/O hub offers 36 lanes of PCI Express (PCIe) 2.0 connectivity and the ICH10 controller wields six lanes of PCIe 1.1. Supermicro uses the chipset’s available PCIe support to enable two 2.0 x16 slots (presumably for graphics cards) and one x8 slot (wired for x4 operation at 1.1 link speeds). There’s also a standard PCI slot, although it and the x8 PCIe connector would both be covered if you opted for two dual-slot graphics cards. This is undoubtedly a limitation for anyone who was also considering an add-in storage controller or a PCI-based sound card.

The good news is that, even though Supermicro’s C7X58 is a workstation board, it does support both AMD’s CrossFireX and Nvidia’s SLI multi-card rendering technologies. Would this make an ideal platform for gaming? We think not—selling for $800 online, a gamer could easily find a motherboard/chassis/power supply combination for less that’d still be able to work with both technologies and likely boast more tuning knobs and switches.

On the other hand, Nvidia has done a lot of work on its Quadro drivers, making SLI a more marketable feature in its workstation card lineup. SLI currently runs in one of three user-selectable modes: SLI Frame Rendering, which teams two cards and presents a single adapter to the operating system; SLI Multi-View, which allows multiple displays to render 3D independently; and SLI FSAA, which sounds a lot like Frame Rendering to us, with an emphasis on enhancing image quality through anti-aliasing (AA) rather than performance. It’s absolutely feasible that a professional might want to leverage Supermicro’s SLI license to take advantage of the feature.

AMD, on the other hand, seems to have put very little into CrossFire support on its FirePro cards, neglecting to even mention it as a supported feature. The FirePro V8700 we put into our test bed here has the requisite connectors, but they remain unutilized. As a result, a professional workstation seems to be overkill for anyone looking to employ a pair of Radeons in CrossFire.

As mentioned previously, the C7X58 centers wholly on Intel’s X58 and ICH10 chipset components, so you get six SATA 3 Gb/s ports with software-based RAID 0, 1, 5, and 10; a 7.1-channel Realtek audio codec; eight back-panel USB 2.0 ports (with two more internal); and a floppy controller. There is no legacy parallel ATA support, and Supermicro doesn’t add the third-party logic needed to resurrect it. The company does, however, add FireWire 400 and a pair of Intel 82574L Gigabit Ethernet controllers.


  • -4
    LightWeightX , March 6, 2009 11:39 AM
    I have not used the system in the review; however, based on my experience and that of other IT professionals with SuperMicro, we have renamed the company SuperCrapo.
  • 0
    WyomingKnott , March 6, 2009 12:23 PM
    Hmm. Aside from LightWeightX's comment, this appeals to me. I'd appreciate your feedback on my reasoning.

    I want to build a new home computer. I'm less of a gamer and more of a tinkerer, with occasional heavy audio processing. I like to play with things like raid arrays when I have spare time.

    I am sufficiently concerned with unavoidable memory errors that I would like to use ECC memory - I would sacrifice some speed to avoid random crap happening while I am working.

    I believe that, now that the front-side bus and memory bandwidth bottlenecks have been addressed, the next key point is disk speed and I/O rates. For this reason, I want to play with a RAID array: five disks so that I can have a hot spare, plus a single disk for the system, and the removable hard drive that I use for backup.

    Are RAID and ECC going just too far for an enthusiast system, or will they meaningfully increase performance and reliability? If you would RAID, would you buy an (expensive!) controller with hardware XOR processing?

    Thanks.
  • 0
    jeffunit , March 6, 2009 3:23 PM
    Having the power supply point to its certification is nice, but 80% efficiency is nothing spectacular. Antec's EarthWatts line has been 80% efficient for several years. Enermax has a line with an 82% minimum efficiency. Dell sells a 90% minimum efficiency power supply; that's noteworthy.

    As for 'workstation': perhaps it will be one with the yet-to-be-announced Xeons that support ECC. It certainly isn't now. Though the i7 is crazy fast, memory errors are a concern to anyone doing serious computation. That's why I got a Phenom II and enabled chipkill ECC and memory scrubbing every 8 hours for my memory. If a BIOS doesn't have options like that, you may have a fast gamer's rig, but you don't have something for serious data processing.
  • 3
    enealDC , March 6, 2009 3:26 PM
    Well, this IT professional has been in the business for more than 10 years, and let me say that SM is not a crappy company...
  • 1
    Anonymous , March 6, 2009 4:17 PM
    I would think the Supermicro X8SAX would be a better board to use as a workstation. More slots: two PCIe 2.0, two 64-bit PCI-X, and one 32-bit PCI.
  • 0
    cdillon , March 6, 2009 4:53 PM
    WyomingKnott, I think you are justified in your concern about memory errors. The more RAM we install in our systems, the greater the chance that memory errors will occur. No matter how good your system is, ECC RAM is absolutely necessary for stability. I use ECC RAM and fault-tolerant RAID in all of my servers and also on my workstation at home, and I never see "random" problems. When I do rarely come across a problem, it is the reproducible type, which is the easiest kind of problem to try to solve! At least you know you can look elsewhere for the source of a problem when you are able to rely on certain pieces of your hardware.
  • -4
    hustler539 , March 6, 2009 7:23 PM
    When will Graphics Charts be updated!!??
  • -2
    Shadow703793 , March 6, 2009 8:06 PM
    Not sure if this is possible with i7 yet, but one way to overcome the unavailability of OCing options on the board would be to do BSEL and volt mods. Risky? Yes; worth it? Maybe.

    In all seriousness, you wouldn't buy this kind of board for OCing anyway. These server-class boards are geared for maximum reliability.

    And +1 for enealDC. SuperMicro is well respected for their server boards.
  • 6
    TechDicky , March 6, 2009 8:44 PM
    jeffunit,
    You mention the power supply efficiency as not being all that great... I don't disagree with you, as I don't know what the other power supplies' efficiency is at various loads. But for those who may not know as much about power supplies, your comments may be misleading.

    First, it is important to distinguish a few things about power supply ratings. If the efficiency stated by a company is simply a single percentage, or says “up to xx%”, then it is really not telling you the whole story. Efficiency is not a singular figure; rather, it is more like a curve. It starts out low at very low loads; as the load increases, so does the efficiency; then toward peak loads it falls off again. So just as important (if not more important) than the peak is how flat the curve is.

    That said, it is more reasonable to say that this PSU is 80 PLUS and up to 85%, since the curve never dips below 80% and 85% is the peak, not 80%. Really, the 80 PLUS certifications tell you the most about the efficiency curve. The certifications are based on efficiency ratings at 20%, 50%, and 100% loads; in order to obtain the various certifications, the PSU must achieve these levels:

    | Certification | 20% Load | 50% Load | 100% Load |
    | 80 PLUS | 80% | 80% | 80% |
    | 80 PLUS Bronze | 82% | 85% | 82% |
    | 80 PLUS Silver | 85% | 88% | 85% |
    | 80 PLUS Gold | 87% | 90% | 87% |

    Seeing the reported rating for all three loads actually gives you even better information, since, as in this case, the PSU is considerably better than the bare minimum required for 80 PLUS and falls only 2% short at the 20% load point of being certified 80 PLUS Bronze.
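    For illustration, the thresholds in that table can be turned into a small checker. This is my own sketch, not an official 80 PLUS tool; the tier numbers are copied straight from the table above.

```python
# Hypothetical sketch: classify a PSU against the 80 PLUS tiers, given
# measured efficiencies (in percent) at 20%, 50%, and 100% load.
# Thresholds per tier: (20% load, 50% load, 100% load).

TIERS = [
    ("80 PLUS Gold",   (87, 90, 87)),
    ("80 PLUS Silver", (85, 88, 85)),
    ("80 PLUS Bronze", (82, 85, 82)),
    ("80 PLUS",        (80, 80, 80)),
]

def classify(eff_20, eff_50, eff_100):
    """Return the highest tier whose thresholds are met at all three loads."""
    for name, (t20, t50, t100) in TIERS:
        if eff_20 >= t20 and eff_50 >= t50 and eff_100 >= t100:
            return name
    return "uncertified"

# A PSU like the one discussed here: ~80% at 20% load holds it at plain
# 80 PLUS, even though its mid-load peak is higher.
print(classify(80, 85, 83))   # → 80 PLUS
print(classify(82, 85, 82))   # → 80 PLUS Bronze
```

    The point of the `classify` sketch is the same as the table: a single "up to" figure can't tell you which row a PSU lands in; you need all three load points.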

    The point is that this power supply is really quite nice, though there are certainly others that are better. But before you can honestly say a PSU is better than this one, you need more than a single figure.

    That said, I generally agree with the comments here, including your other comment on ECC. This rig is not very impressive overall, and I wouldn’t seriously consider this a workstation-class system that is worth the bucks it costs. About the only parts I think are worth having are the case and PSU. Again, not that either of these is the best, but they are pretty good. But that’s it. Considering all of the limitations of this board, I don’t care how well made and how stable it is; it’s just not worth it.

    WyomingKnott,
    To respond to some of your comments/questions: this is really not the rig for you. You will pay more than it is worth if you are just a “tinkerer” or “enthusiast”. Even if it were a better motherboard at a better price, true workstation-class hardware costs considerably more than other systems, and the added stability you are referring to might not be quite the same thing as the stability of a workstation system. Think of it like this: if you were going to leave your system running solid for a week or more while it rendered a video or an image, and even very small memory errors over that span could wreak havoc on your work, then you need a workstation. If, on the other hand, you plan to use your computer for various tasks (maybe home video editing and some personal photo editing), expect to have it running maybe 4 hours a day (or even 8 or 9 hours a day for work), and just don’t want to be hassled by an occasional hiccup, then even given that you would like to play with some RAID, you could buy or build a much more suitable system for considerably less money that you would be very happy with.

    With ECC, again, I personally think it is probably unnecessary for your case. There is certainly nothing wrong with ECC, but I doubt if you would notice a difference. I honestly believe that most stability issues for the average enthusiast are the result of several factors:

    1) Cheap parts: Just because you don’t need an Intel or other brand server motherboard doesn’t mean that you should settle for a PCChips or no-name motherboard. Buy good-quality parts.
    2) Components running out of spec: Just because you are not overclocking does not mean that your components are running within their specifications. For example, a cheap or faulty power supply might cause voltage to sag in places. Even if you have a great power supply and a great motherboard, your voltages may not be perfectly within spec. It is very possible to confirm voltages and adjust them via the BIOS to get them to run closer to spec, or even slightly over if necessary. Most components have an acceptable under- and over-voltage range. It is better to be slightly over voltage but within the acceptable range than to be under voltage outside of the acceptable range.
    3) Excessive heat: Most people do not give much consideration to the heat generated in their system. They are quick to notice fan noise, but slow to think about the impact of a slow-moving or stopped fan, a system clogged with dust and dirt, or inadequate cooling to begin with. It’s a delicate balancing game to avoid excessive noise while providing sufficient cooling. This doesn’t mean that you have to run water cooling, etc. (even though those can be very quiet and very efficient). But if you plan to run RAID, you may want to just give up on the idea of a quiet system anyway. Even if not, do not sacrifice cooling for a quiet system. Don’t be afraid to switch to aftermarket heatsink/fan combos, but read up on them, look for good reviews/ratings, and make sure they allow enough clearance for your motherboard, case, other components, etc. Use enough case fans and arrange them for equal flow or positive pressure, not negative (that means an equal volume of air moving in and out, or more air moving in than out). Fans have different speeds, sizes, and blade pitches, which all add up to different volumes of air, so balance by CFM, not by the number or size of fans. Also, do not place fans blowing in close to fans blowing out; chances are the hot air will be redrawn in.
    4) Software: Viruses, adware, bloatware, crappy applications that run all of the time and have memory leaks, etc. Software is just as likely to be your problem as hardware is. Perform a fresh install of a stable OS (64-bit if you can), install anti-virus and anti-adware, keep them running, and keep them up to date. Install only the applications you need. If you want a “scratch pad PC”, then use a separate computer or a virtual machine. Don’t download and install crap on a system you want to be stable. Don’t install something that a friend gave you. Don’t surf questionable sites from your system; in fact, don’t surf at all: use a separate computer or virtual machine to surf the web. Reboot regularly. Learn about processes and programs that automatically run. Download and learn to use “AutoRuns” from Microsoft (previously Sysinternals). When you first do a fresh install, record everything set to automatically run and get rid of what you can. And keep that list handy... once a month, compare the current state to the list and remove new crap that has set itself to autorun.
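    As a toy illustration of the out-of-spec check in point 2, here's a sketch that flags rails outside a tolerance band. The ±5% band is my assumption, loosely based on typical ATX tolerances; consult your own board's documentation for the real limits.

```python
# Hypothetical sketch: check whether measured supply rails sit inside a
# tolerance band around their nominal voltage. The 5% default is an
# assumption, not a value from any particular specification.

def rail_status(nominal, measured, tolerance=0.05):
    """Return 'under voltage', 'over voltage', or 'in spec' for one rail."""
    low, high = nominal * (1 - tolerance), nominal * (1 + tolerance)
    if measured < low:
        return "under voltage"
    if measured > high:
        return "over voltage"
    return "in spec"

# Example readings from a hypothetical sensor chip.
readings = {12.0: 11.2, 5.0: 5.1, 3.3: 3.32}
for nominal, measured in readings.items():
    print(f"{nominal}V rail at {measured}V: {rail_status(nominal, measured)}")
```

    A sagging 12V rail (11.2V here) would be flagged as under voltage, which is exactly the kind of silent problem point 2 is warning about.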

    Think of it like this: let’s say you are not getting a new computer but a new car. First, if you only look at price and you buy a brand-new car for $9,000 (yes, it can be done), you are probably not going to expect much of it or be very happy with it. Now move up the scale and look at a nice car that is $35,000, or another one that is $60,000. Either one could be a very nice car. You really may like the $60,000 car: it may hug the road a little tighter, it may have a quicker takeoff. But say you drive it around for a few years, constantly over-rev the engine, never change the oil, never change the plugs or filters, and never perform any maintenance on it. After a short while it starts running hot, but you ignore it and keep driving it hard anyway; a warning bell starts to ding, so you cut the wire; you start to develop an oil leak that you also ignore. It’s not really going to matter whether you got the $35,000 car or the $60,000 car, because either one of them would be running like crap by now. It doesn’t matter what kind of system you buy if you don’t maintain it. Once you make sure you are going to maintain it, then don’t buy the cheapest piece of crap you can, but that doesn’t mean you need to buy a Rolls-Royce either.

    If you really want to play with RAID, then you are not going to be disappointed in a hardware RAID card. I don’t mean a card that just gives you enough connections to run software RAID on a bunch of drives, either; it needs to be a RAID controller that actually has a processor on it. Again, you don’t have to buy a $1,000 card, but don’t buy a $100 or $50 card and expect it to perform like a true hardware RAID controller. There are also a lot of factors in running RAID, like the drives: don’t set up a RAID and cripple it with 7,200 RPM SATA drives. Then again, if you can’t get 8 x 15,000 RPM SAS drives, that’s OK, but don’t buy standard $50 home consumer hard drives either. Finally, if you are going to set up a RAID 5 for storage, that is great, but don’t run a single drive for the OS. Run two drives in RAID 1 (mirrored); if you can’t do that (i.e., you are only going to use four drives), then run RAID 10 or 5 on all drives as one large virtual disk, and when partitioning, carve it up into system and data partitions.
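    To make those trade-offs concrete, here's a small illustrative sketch using the textbook numbers for each RAID level (usable capacity and guaranteed failure tolerance); real controllers can differ, and this isn't tied to any particular card.

```python
# Toy comparison of the drive layouts discussed above: usable capacity
# and how many drive failures each level is guaranteed to survive.
# These are the textbook values for RAID 0/1/5/10, nothing vendor-specific.

def raid_summary(level, drives, size_gb):
    """Return (usable_gb, guaranteed_failures_survived) for a layout."""
    if level == 0:
        return drives * size_gb, 0           # striping: no redundancy
    if level == 1:
        return size_gb, drives - 1           # every drive mirrors one copy
    if level == 5:
        return (drives - 1) * size_gb, 1     # one drive's worth of parity
    if level == 10:
        return (drives // 2) * size_gb, 1    # guaranteed minimum: one
    raise ValueError(f"unsupported level: {level}")

for level, n in [(1, 2), (5, 4), (10, 4), (0, 2)]:
    usable, survives = raid_summary(level, n, 500)
    print(f"RAID {level} on {n} x 500GB: {usable}GB usable, "
          f"survives {survives} failure(s)")
```

    It shows the core tension in the advice above: RAID 5 on four drives gives the most usable space, but the mirrored layouts trade capacity for simpler, safer recovery.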

    Well, I guess now that I have a book written, I should go… :D 

  • 0
    cangelini , March 6, 2009 8:51 PM
    Cheers for the response Tech!
  • 1
    ShadowFlash , March 6, 2009 9:59 PM
    I agree with most everything TechDicky said on RAID except for one thing: I would never use a RAID 5 partition for an OS. The overhead involved with the many small writes (think page file) is usually too great for most controller cards to handle effectively, resulting in a "sluggish"-feeling system. While this can be hard to quantify in benchmarking, it is easily noticed in practice as compared to a simple mirror. The other major drawback is in the rebuilding process. I've had arrays "hiccup" from time to time and require rebuilding. Usually this is due to a dying HDD, but it can be caused by numerous different things. When the RAID 5 array (with the OS installed on it) is rebuilding, the entire system becomes slow. If you reboot while rebuilding, there is a good chance the OS will not even boot until the rebuild is complete. Rarely are there comprehensive reviews made on the rebuild quality of controller cards.

    Of course, I despise RAID 5 in any form... I posted an explanation of proper RAID set-ups in the FAQ about a week ago:
    http://www.tomshardware.com/forum/43125-32-raid
  • 0
    TechDicky , March 6, 2009 11:14 PM
    ShadowFlash wrote: I agree with most everything TechDicky on RAID except for one thing. I would never use a RAID 5 partition for an OS. The overhead involved with the many small writes (think page file) is usually too great for most controller cards to handle effectively, resulting in a "sluggish"-feeling system. While this can be hard to quantify in benchmarking, it is easily noticed in practice as compared to a simple mirror.

    Point well taken... I rescind my statement about four drives in RAID 5 partitioned for OS and storage. Instead, if you can't do six or more drives (two in RAID 1 for the OS and four in RAID [insert your RAID flavor here]), then I would probably recommend two smaller, faster drives in RAID 1 for the OS and two or three larger (slightly slower) drives in RAID 0 for storage.

    ShadowFlash,
    You clearly have more experience with various RAID configurations. Does that sound reasonable, or what would your recommendation be for configurations that are limited to 4 or 5 drives?

    cangelini,
    Thx... I've still got plenty to learn, but I'm getting there... :D 

    Regards,
    Richard aka TechDicky
  • 0
    ShadowFlash , March 6, 2009 11:57 PM
    @Tech...

    True redundancy - Workstation : 2-drive RAID 1 for OS and Programs, and 2-drive RAID 1 for Files ( or RAID 1E if a 3rd drive is available ).

    I hear RAID 0 suggested all the time for workstation use, but a full re-install takes me approx. 10 hours bare metal to fully-functional on any of my CAD boxes when something goes wrong. SolidWorks and Inventor are reallllly slow installing and setting up even from mounted ISO's.

    There are performance advantages to RAID 1 also. Some controller cards can almost equal RAID 1 on reads (3ware comes to mind). The multi-tasking ability of reads on RAID 1 is nice too: on reads, each drive moves independently of the other for native throughput on each drive. Most controller cards support that one. I have yet to be blessed with a 1E card to play with, but it looks to be the best of all worlds: the performance of a 2-drive RAID 0 (using 3 drives) with the redundancy of RAID 1.

    Sorry for the long post...I know I tend to ramble....
  • 0
    ShadowFlash , March 7, 2009 12:17 AM
    oops...should read Some controller cards can almost equal RAID 0 on reads.

    And check out this paper on why I despise RAID 5. I know it's dated, but I've yet to hear any current solution to this problem. Just look through the forums and see how many posts are on strange problems and rebuild issues with RAID 5 vs. RAID 1 or 10.
    http://miracleas.com/BAARF/RAID5_versus_RAID10.txt
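    The core of that argument can be sketched with a toy probability model (my own back-of-envelope simplification, not from the linked paper): after one drive dies, a second failure during the rebuild window is fatal on RAID 5 if *any* surviving drive fails, but fatal on RAID 10 only if the dead drive's mirror partner fails.

```python
# Back-of-envelope sketch: probability of losing the array to a second
# failure during a rebuild. Assumes each surviving drive fails
# independently with probability p during the rebuild window
# (a simplification; real-world drive failures correlate).

def p_loss_raid5(n_drives, p):
    # RAID 5: any of the n-1 surviving drives failing loses the array.
    return 1 - (1 - p) ** (n_drives - 1)

def p_loss_raid10(n_drives, p):
    # RAID 10: only the failed drive's mirror partner is fatal.
    return p

p = 0.02  # assumed per-drive failure chance during one rebuild window
for n in (4, 8):
    print(f"{n} drives: RAID 5 loss {p_loss_raid5(n, p):.3f}, "
          f"RAID 10 loss {p_loss_raid10(n, p):.3f}")
```

    Under this model, RAID 5's exposure grows with every drive you add, while RAID 10's stays fixed, which matches the paper's complaint that RAID 5 rebuilds get riskier as arrays get larger.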
  • 0
    Anonymous , March 9, 2009 12:32 AM
    RAID 6, enough said. It's a solution: you can take two disk failures instead of just one. Rebuilds take hours, though; only good if you want to salvage an old RAID box somewhere. RAID 1 is the best solution for a workstation. Period. Of course, never rely solely on a RAID system as your backup. Image your system and save it on tape, or on a separate file server. If you want to go extreme with it, take your tapes off-site. There are just so many ways to gain redundancy. Hell, make two exact same machines and replicate them every minute of the day via Backup Exec. If your primary station goes down, swap that sucker out! I'll bet you won't lose much data either :) 
  • 0
    kittle , March 9, 2009 5:53 PM
    Some of my own experiences with SuperMicro stuff:

    I purchased a SM system from a company that normally builds servers. They had a 3-year parts warranty and lifetime free tech support from techs with a clue, so I went with them.
    The first thing I ran into is that the parts are NOT cheap. Dual-socket motherboard: $750. Full-tower case + PSU: $200.
    The system was $3,500 when everything was done.
    Spendy? Yes. But it ran and ran and ran.
    Since buying it, the board has outlasted the following:
    - 2 power supplies
    - 1 set of CPU fans
    - 1 full-tower case (the old case was replaced by a SM case)
    - 2 video cards
    - 1 system drive (failed 6 months after its 5-year warranty expired)
    I bought the system in 2001 and it's still running. And if not for a messed-up SBS 2003 install, it would be running as I type this post.

    Later on I got another system and set up the SM system as a file/web server. While the Tyan board I used worked, it had to be replaced twice after the power went out one time. The SM system just kept on running.

    In general: if you're looking for inexpensive stuff, go elsewhere; this is not the hardware you are looking for.
    But if you want something that will last "forever", SM fits the bill.
  • 0
    cdillon , March 9, 2009 8:11 PM
    kittle: If you want something that runs forever, you could do better than SuperMicro. I've had some SuperMicro servers, but the "tier 1" server manufacturers like HP, Dell, and IBM make even better stuff. I had several old Compaq ProLiant 3000 servers run for 10 years straight with *nothing* replaced on them: fans, power supplies, hard drives, all original. Some of the fans were getting rather noisy at that point, but they still worked. Some others weren't so lucky and needed a hard drive or fan replaced, but that was it. I still have a couple lying around, and if I plugged one in I would bet that it worked. This is why I still buy HP (née Compaq) ProLiant servers. They rarely have problems, and when they do, I have a replacement part from HP the next day, though we could get a problem fixed within hours if we were willing to pay for it.
  • 0
    mapesdhs , March 10, 2009 7:39 PM

    There are other options for RAID of course, especially if one cares about data reliability, i.e. SCSI and FC. I bought a cheap LSI 22320-R U320 PCI-X card and an LSI 23320IE U320 PCIe card. The PCIe card has my system disk (146GB 15K U320, faster than any SATA for access time), while the PCI-X card has a bunch of 15Ks as a RAID 0 stripe. I get 359MB/sec sequential read.

    2nd-hand SCSI is cheap! Unless one really needs lots of storage space, SCSI is an easy way to get high speeds.

    I'll be building a Core i7 system in May/June, for which I bought a 2nd-hand Dell PERC 4e/DC PCIe card, which also gives 359MB/sec sequential read but an impressive 558MB/sec buffered read.

    I've yet to beat the 700MB/sec sequential read I get with my SGI Tezro though.

    Btw, I know someone who has a 12-port Areca SATA card; he gets 800MB/sec. Where he works, building server systems, they get 1150MB/sec with the 24-port Areca 1280.

    Ian.

  • 0
    Anonymous , March 22, 2009 3:34 AM
    I have had good luck with SuperMicro. Dells are very reliable also, but expensive. SuperMicro provided us with the best reliability for the buck. As far as this Core i7 system goes, we configured one for about $2,000 to use as a development server. I am disappointed that SuperMicro advertises ECC for this system; today's Core i7 will not support ECC. We would never use this for a production server hosting customer applications without ECC. For that task we use Xeon-based SuperMicro. The motherboard leaves a little to be desired: the slots are close to one another, and the CPU HS/fan assembly steals a little room from the back of the first PCIe slot. The fat heatsinks on video cards will crowd add-on cards. Another disappointment is no parallel ATA; what a pain to get an add-on card just to interface an IDE DVD drive. We don't have much experience with our system yet. Seems fast. Only time will tell on the reliability. If it fails us in the office, it will end up at home. We use RAID 1 for the OS and RAID 10 for the rest.
  • 0
    WyomingKnott , March 24, 2009 12:29 PM
    Whoops - I've been out of contact for a while.
    My thanks to all for your opinions, especially TechDicky. Write that book; I'll read it. I agree with most of your points, in fact, I'm already implementing them. My approach to a clean set of software: My system disk effectively stays within a few months of a clean rebuild. I rebuild and save an image. When I install software or do updates, I make notes. Periodically, I restore the old image, re-apply all said updates and new software, and then re-image. Ideally, this is done with the network disconnected (unless activation requires it) so that nastyware can't sneak in before I burn the next image.
    Any broken stuff, or nastyware, or bloatware can be dealt with by restoring the last image.
    "Paranoid and proud of it!"