ASRock H97 Anniversary Motherboard Review

Today's LGA1150 board is the last mainstream-priced H97 board that will go on the bench for my scrutiny. It's a little different in a couple of interesting ways, as we'll find out.

An initial glance shows ASRock's H97 Anniversary board is a rather odd bird, yet companies don't make products without a market. Who might buy this, and what are they getting? Perhaps just as important, what might have been given up or reduced to keep the price so low? After all, this is a mere $60 board, so it is bound to have weaknesses.

The name of ASRock's H97 Anniversary clearly indicates it uses Intel's H97 chipset. As the features table below shows, this chipset offers a wide range of ports and a robust storage subsystem, including multiple RAID configurations. I said this one is unusual, though, so let us see how.

Specifications



MORE: Best Motherboards
MORE: How To Choose A Motherboard
MORE: All Motherboard Content

The codec is only an ALC662, which is somewhat bottom-of-the-barrel for motherboard sound, even in the mainstream. It offers 98dBA S/N on the outputs and 90dBA S/N on the inputs, which falls short of the 100dBA S/N generally considered the minimum for any kind of professional audio work. If its full ATX-width form factor didn't tell you, this is not an HTPC board.

The H97 Anniversary uses a Realtek 8111GR network controller for its RJ-45 port, offering gigabit speeds. There is no radio on the board, either for Wi-Fi or Bluetooth. There are as many USB ports as the average person is likely to need, including a front USB header for cases that support that connector. Capacitors are all solid, and the chokes are ferrite-core rather than the less efficient iron-core type.

Overview

As usual, accessories are limited to what you'd expect from a mainstream product. In addition to the ATX-width board, you get a typical 5 1/2 x 8 1/4-inch glue-bound manual, which will not want to lie flat. Besides English, it covers German, French, Italian, Spanish, Russian, Portuguese, Turkish and four pictographic Asian languages I do not recognize (Thai, Japanese, Chinese?), plus Bahasa Indonesia. Some of these sections are only five to six pages, but the English section doesn't seem to be missing anything. There is a driver CD, an I/O shield and the usual two SATA 6Gb/s cables. Particularly given this board's RAID support, I really believe there should be more.

Perhaps because of the board's width, the layout is quite open. No slots will be blocked if a double-width graphics card is installed in the PCIe x16 slot, but the CR-2032 battery will be overlapped. The board is not especially thick, but it is sturdy enough that it does not readily flex. The SATA ports are all on the forward edge of the board, beginning around an inch from the left edge. Their orientation alternates, so the clips will always face out.

Nothing else is obstructed on the H97 Anniversary. Most headers are around the edges, except for three fan headers: both CPU fan headers and the 3-pin CHA_FAN2 header sit in the middle of the board, just to the left of the CPU. There is another connector here as well, and it is the first clue to this motherboard's intended market segment: also to the left of the CPU, a little closer to the back of the board, is a 4-pin Molex connector for auxiliary PCIe power. The only other unusual placement is the 8-pin CPU power connector, which sits on the right edge just behind the four DIMM slots; more often this connector is much closer to the back of the board. On this H97 variant, the space between the CPU power connector and the rear panel connectors is taken up by the VRM phases and their heat sink. Although the board has the standard ATX width for seven slots, it is only 7.5 inches deep, so there is no room for the VRMs between the CPU and the rear panel connectors, where most motherboards seem to have them. On the rear panel, you will find DVI-D, HDMI and VGA connectors; however, the manual notes that each of these is only good for resolutions up to 1920 x 1200.

The CLR_CMOS header is easily reached on the left edge, and includes a jumper block. In addition to the front-panel header in the bottom left, there is a header just behind the midpoint for a speaker. If your case's power LED connector is 3 pins wide, there is a separate header for it on the left edge, behind the front-panel header (which has 2-pin spacing for the power LED). The audio header is behind the PCIe x1 slots, with plenty of finger space around it. There are no indicator LEDs on the board, such as a diagnostic display or a +5VSB pilot light. Be sure you've switched your PSU off or unplugged it before adding or removing expansion cards, to make sure the +5VSB rail is off.

What is a little unusual is that, aside from the single PCIe x16 slot, every expansion slot is PCIe 2.0 x1; there is no legacy PCI and no PCIe x4. Adding in the auxiliary PCIe power connector, it looks to me as though this particular motherboard is aimed at those who want potentially massive GPGPU performance. Given design lead times, it may have been intended as the heart of a mining rig. Mining on GPUs is no longer practical; however, it would still be a great board for folders, and perhaps for heavy rendering. If you're a graphics pro, please weigh in in the comments and let our readers know whether x1 links are a meaningful limitation for certain GPGPU tasks.
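To put rough numbers on that question, here is a minimal back-of-the-envelope sketch in Python, assuming PCIe 2.0 offers roughly 500MB/s of usable bandwidth per lane in each direction (real-world throughput will be somewhat lower); the 2GB working set is purely illustrative:

# Back-of-the-envelope PCIe 2.0 transfer-time comparison.
# Assumes ~500 MB/s of usable bandwidth per lane per direction after
# 8b/10b encoding overhead; real-world throughput will be a bit lower.

MB_PER_LANE = 500  # PCIe 2.0, one direction, MB/s

def link_bandwidth(lanes):
    """Approximate one-way bandwidth in MB/s for a PCIe 2.0 link."""
    return lanes * MB_PER_LANE

def transfer_time(megabytes, lanes):
    """Seconds needed to move a buffer of the given size over the link."""
    return megabytes / link_bandwidth(lanes)

buffer_mb = 2048  # hypothetical 2GB working set pushed to the GPU
for lanes in (1, 4, 16):
    print("x%-2d link: %5d MB/s, %5.1f s to move %d MB"
          % (lanes, link_bandwidth(lanes), transfer_time(buffer_mb, lanes), buffer_mb))

Compute-bound workloads that keep their data resident on the card (folding, most GPU rendering) tend to be relatively insensitive to the narrow link; anything that streams large data sets across the bus on every iteration will feel the x1 bottleneck.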

Comments
  • Snayperskaya
    "Cons: Full ATX width"

    Seriously? How is that a con?
    3
  • Rookie_MIB
    This would be a good board to put a bunch of PCIe -> SATA adapters for a large scale NAS... Four PCIe to quad-port SATA cards can be had for $20 each or so, so along with the on-board SATA ports you'd be able to put together a 22 drive setup.
    0
  • DouglasThurman
    Why does the link for the board show up on NewEgg as the smaller micro-atx anniversary board?
    0
  • InvalidError
    Anonymous said:
    This would be a good board to put a bunch of PCIe -> SATA adapters for a large scale NAS... Four PCIe to quad-port SATA cards can be had for $20 each or so, so along with the on-board SATA ports you'd be able to put together a 22 drive setup.

    That would be grossly inefficient and slow since each board would only have an x1 PCIe connection. You would be much better off with a motherboard with x4 slots and controller boards with 16+ ports each. Yes, those cost more than $20 but if you can afford to spend $3000 on HDDs + $100 in add-in boards, I think you can afford a $500 controller.
    -1
  • DonkeyOatie
    I realize that these things take a long time to get out and there are a lot of things in the pipeline. This would have been great information a year ago, but it's less useful today, especially with the G3258 being past its 'best by' date, although it's still a great chip.
    0
  • RazberyBandit
    Quote:
    "Cons: Full ATX width"

    Seriously? How is that a con?

    What's funny is that it's not 9.6" (244mm) wide, as per the ATX standard. It is built closer to the widths of the Flex-ATX or DTX standards, which measure 194mm/7.6" and 208mm/8.2", respectively. (If I owned the board, I'd measure it, like the author should have.)

    The dead-giveaway that this isn't a standard 12" x 9.6" ATX board is the fact that it only has 6 mounting holes - it would have 9 if it were in fact a full-size ATX board. My personal moniker for boards built to such dimensions is ATX-Thin.
    0
  • mac_angel
    ASRock seem to make decent products, but their customer service and warranty support are poor. It's next to impossible to reach anyone, and you get redirected all over. I have one of their gaming motherboards that I sent back weeks ago and haven't gotten anything back since. Any time I write them, they say it is 'out of stock'. Maybe because I'm in Canada? Either way, really bad support. Never again.
    1
  • Onus
    I agree that this socket is now "old," but I have no control over the length of the publishing queue. Soon I'll be submitting H170 reviews so as to be more current.
    I list the full ATX width as a con because there are many boards with similar features (all but the multiple PCIe X1 slots) that are considerably smaller. In many settings it doesn't matter, but where it does, you can get most of the same features on a mITX board now.
    -1
  • Onus
    Oh, and as to ASRock's customer service, for my part I've been satisfied. I bought their Z77E-ITX second-hand some years ago. It died (popped VRM; running stock, but in a cramped case with a hot GPU). Although I bought it without a warranty, ASRock replaced it for $50, and dealing with them was straightforward and easy.
    0
  • Crashman
    Anonymous said:
    Quote:
    "Cons: Full ATX width"

    Seriously? How is that a con?

    What's funny is that it's not 9.6" (244mm) wide, as per the ATX standard...The dead-giveaway that this isn't a standard 12" x 9.6" ATX board is the fact that it only has 6 mounting holes - it would have 9 if it were in fact a full-size ATX board.
    Actually, you're both right but Onus is more-right. The old standard for computers was horizontal desktops and racks, so top to bottom is "width" and front to back is "depth".

    I try to avoid this confusion by not using the word "width" when describing a motherboard.

    Joe's comments concern the availability of similar features in Micro ATX models. He assumes that you won't need five x1 slots and that 2 would do. He's probably right.
    2
  • InvalidError
    Anonymous said:
    Joe's comments concern the availability of similar features in Micro ATX models. He assumes that you won't need five x1 slots and that 2 would do. He's probably right.

    Personally, the only add-in board I have put in my last two PCs is a GPU, and even that might go away once CPUs get HBM/HMC and IGPs that leverage it. I wish there were more decent and reasonably priced mATX cases; full-ATX seems like such a waste of space for a regular single-GPU desktop system.
    0
  • Rookie_MIB
    Anonymous said:
    Anonymous said:
    This would be a good board to put a bunch of PCIe -> SATA adapters for a large scale NAS... Four PCIe to quad-port SATA cards can be had for $20 each or so, so along with the on-board SATA ports you'd be able to put together a 22 drive setup.

    That would be grossly inefficient and slow since each board would only have an x1 PCIe connection. You would be much better off with a motherboard with x4 slots and controller boards with 16+ ports each. Yes, those cost more than $20 but if you can afford to spend $3000 on HDDs + $100 in add-in boards, I think you can afford a $500 controller.


    It's entirely possible that it would be limited in some regards but your math and logic are both off.

    First, what would be the difference between a 4-port SATA x1 card and a 16-port SATA x4 card? You're still running 4 drives per PCIe lane, so your overall bandwidth is still equal. The difference is that a 16-port x4 card is far more expensive.

    Next, the overall bandwidth. A single PCIe x1 (v2.0) lane is 500MB/sec one way. With 4 drives per card (or per lane), that works out to 125MB/sec available per drive (assuming no card overhead; PCIe overhead is already accounted for). That may not seem like much, but the fastest HDDs out there hit around 190MB/sec - I would say the average is in the 150MB/sec range. Yes, you're leaving performance on the table in that regard.

    The thing you're NOT considering, though, is the inherent limitation of your network interface. You're maxed out on network transfers at 1Gb/sec - gigabit Ethernet is your absolute limiting factor, at 125MB/sec. So, comparing the overall PCIe bandwidth available (4 PCIe x1 slots @ 500MB/sec = 2GB/sec overall ideal peak bandwidth, depending on how the drives are RAIDed), you have way more drive bandwidth than network bandwidth. Even though you're leaving some drive bandwidth on the table, even if you dropped a 10Gbit Ethernet card in the x16 slot, the drive bandwidth would still exceed the network bandwidth by a fair margin (1.25GB/sec network bandwidth vs 2GB/sec drive bandwidth).

    The only way you'd really lose out in this situation is if you were running it as a JBOD with 10Gbit Ethernet (1.25GB/sec Ethernet into a 500MB/sec PCIe x1 lane). That's a highly unlikely situation for your average home NAS builder, considering the expense of 10Gbit hardware and the typical demands placed on a home NAS. Anyone who's running that many drives (8-16 drives) would be using a RAID10 setup for speed and redundancy.
    1
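The arithmetic above is easy to check. Here is a minimal sketch of the same back-of-the-envelope numbers in Python, assuming PCIe 2.0 x1 at roughly 500MB/sec per direction, gigabit Ethernet at 125MB/sec, four hypothetical quad-port SATA cards in the x1 slots and the chipset's six on-board SATA ports, with controller and protocol overhead ignored:

# Reproducing the NAS bandwidth figures discussed in the comment above.
PCIE2_X1 = 500          # MB/s per x1 slot, one direction
GBE = 125               # MB/s, gigabit Ethernet
TEN_GBE = 1250          # MB/s, 10-gigabit Ethernet

cards = 4               # hypothetical quad-port SATA cards, one per x1 slot
drives_per_card = 4
onboard_sata = 6        # the H97 chipset's six SATA 6Gb/s ports

total_drives = cards * drives_per_card + onboard_sata   # 22 drives
per_drive = PCIE2_X1 // drives_per_card                 # 125 MB/s ceiling per drive
aggregate = cards * PCIE2_X1                            # 2000 MB/s across the x1 cards

print("Total drives:", total_drives)
print("Per-drive ceiling behind an x1 card:", per_drive, "MB/s")
print("Aggregate bandwidth of the x1 cards:", aggregate, "MB/s")
print("Gigabit Ethernet ceiling:", GBE, "MB/s")
print("10GbE ceiling:", TEN_GBE, "MB/s")

The output matches the comment: the gigabit link, not the x1 slots, is the first bottleneck for a typical home NAS, and even 10GbE (1250MB/sec) still sits below the roughly 2000MB/sec aggregate of four x1 cards.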
  • Rookie_MIB
    Anonymous said:

    What's funny is that it's not 9.6" (244mm) wide, as per the ATX standard. It is built closer to the widths of the Flex-ATX or DTX standards, which measure 194mm/7.6" and 208mm/8.2", respectively. (If I owned the board, I'd measure it, like the author should have.)

    The dead-giveaway that this isn't a standard 12" x 9.6" ATX board is the fact that it only has 6 mounting holes - it would have 9 if it were in fact a full-size ATX board. My personal moniker for boards built to such dimensions is ATX-Thin.


    Actually, it looks very similar in dimensions to a board I picked up for my NAS a few months ago. The board I picked up was a Gigabyte B85M-HD3 which is a mATX board, BUT it only has the depth (front to back with the back being the rear IO panel) of an ITX board. It was quite handy as it went into a short depth rackmount chassis and left room for the ICYDOCK hotswap bay. The previous mATX board was a little deeper and things were interfering.

    Still - you're right, this is more of a Flex-ATX-style board than a true ATX. It has the ATX height and slots, but it is significantly narrower and doesn't use that third row of mounting screws that even a standard mATX board takes.
    0
  • Onus
    Anonymous said:
    Anonymous said:
    Joe's comments concern the availability of similar features in Micro ATX models. He assumes that you won't need five x1 slots and that 2 would do. He's probably right.

    Personally, the only add-in board I have put in my last two PCs is a GPU, and even that might go away once CPUs get HBM/HMC and IGPs that leverage it. I wish there were more decent and reasonably priced mATX cases; full-ATX seems like such a waste of space for a regular single-GPU desktop system.

    This, for the most part, although I've put in an occasional wireless card. I consider USB dongle antennas as tending toward useless, so if wireless is a need, I'm going to want a card.
    One possible exception is drive space limitation in small cases. Most meet the minimum for general use, which IMHO is a system SSD, a data HDD, and an optical drive (why I consistently grouse about only two SATA cables included with motherboards). A fully loaded primary system will have a system SSD, a pair of HDDs in RAID1 for data, another HDD for backups, plus the optical. That may require an ATX tower.
    0
  • akula2
    I reckon those comparison boards aren't good enough. Don't get me wrong. With the same CPU, I built a few dozen executive machines for whatever I am doing. I chose these boards:

    ASRock Z97 Anniversary
    MSI Z97 PCMate
    MSI Z97 Guard Pro (with DisplayPort)

    Later, with the same CPU, I also built a few gaming machines using this solid board:
    Asus Z97 Pro Gamer ATX board (with ROG features and quality).

    Perhaps you might consider adding the Asus H97 Pro Gamer as the last board in your series, because I know many folks who don't bother much about OCing.
    0
  • Crashman
    Anonymous said:
    Anonymous said:
    Anonymous said:
    Joe's comments concern the availability of similar features in Micro ATX models. He assumes that you won't need five x1 slots and that 2 would do. He's probably right.

    Personally, the only add-in board I have put in my last two PCs is a GPU, and even that might go away once CPUs get HBM/HMC and IGPs that leverage it. I wish there were more decent and reasonably priced mATX cases; full-ATX seems like such a waste of space for a regular single-GPU desktop system.

    This, for the most part, although I've put in an occasional wireless card. I consider USB dongle antennas as tending toward useless, so if wireless is a need, I'm going to want a card.
    One possible exception is drive space limitation in small cases. Most meet the minimum for general use, which IMHO is a system SSD, a data HDD, and an optical drive (why I consistently grouse about only two SATA cables included with motherboards). A fully loaded primary system will have a system SSD, a pair of HDDs in RAID1 for data, another HDD for backups, plus the optical. That may require an ATX tower.

    A reader would like us to do some PCH PCIe testing. We know the theoretical bandwidth limit, but there has to be more to test if you're a PCIe geek, right? And it looks like you have a great board for it!
    0
  • Onus
    Hmmm, what would you do, create a big RAID0 with controllers in each slot, and test throughput? I don't have half the equipment or space to test that. Any chance that ChrisR does, since he does drive testing? Or you in the "main" lab?
    0
  • Crashman
    Anonymous said:
    Hmmm, what would you do, create a big RAID0 with controllers in each slot, and test throughput? I don't have half the equipment or space to test that. Any chance that ChrisR does, since he does drive testing? Or you in the "main" lab?
    I really don't know. But it would probably need to include the impact of several high-bandwidth devices on things like latency on the same board's network controller. Know anyone geek enough?
    0
  • Onus
    Hmmm, my friend Frederick Brier would be geek enough, but probably does not have the time; he's always involved with multiple projects, and is also mostly a software guy.
    Does ChrisR have any thoughts on the subject?
    I'm also thinking you'd need to test with multiple CPUs to see if there's any bottleneck there with a mere Pentium.
    0
  • CRamseyer
    Personally I would try to get a protocol analyzer (http://teledynelecroy.com/protocolanalyzer/protocolstandard.aspx?standardid=3&capid=103&mid=511) so you can measure actual PCIe bandwidth and latency for each board. I've borrowed a system from LeCroy before for testing SATA SSDs.

    In the scenario above, all or most of the PCIe lanes are consumed. I would be worried about the OROM capacity of the motherboard. I haven't run into the issue with server boards and HBA/RAID cards, but I do have a problem with several PCIe NICs. Shifting all of those storage cards to a consumer-class motherboard could cause a problem. It's one of those try-it-and-see things.
    0