ASRock X299 Taichi Motherboard Review


Benchmarks & Conclusion

Now that manufacturers have had a little time to nail down their firmware, we retested the previously reviewed samples. Here’s how the new Prime X299-Deluxe and X299 Gaming Pro Carbon AC firmware settings compare to ASRock’s X299 Taichi:

Unfortunately, the new results were too close to the old ones to shed any real light on the boards’ disparate performance and thermal results. Overclocking and power readings also changed slightly, but not enough to invalidate our previous findings. A quick look at Intel’s Extreme Tuning Utility revealed a likely culprit:

Even with the updated firmware, the Prime X299-Deluxe maintains the Core i9-7900X’s rated 140W TDP under heavy loads. Meanwhile, the MSI motherboard maintains the CPU’s rated performance level by disregarding its 140W TDP. The X299 Taichi appears to take the middle path, but will it also have middling application performance and power consumption?

Synthetic Benchmarks

Synthetic benchmarks are excellent for locating performance problems, but 3DMark and PCMark probably aren’t the best options for viewing CPU performance stratification. For that, we need to jump down to SiSoftware Sandra.

Sandra Arithmetic shows the Prime X299-Deluxe hanging in with the X299 Taichi and X299 Gaming Pro Carbon AC despite its lower clock under the heavier stress of Prime95. Conversely, its Multimedia results are a good reflection of the clock differences we saw when running Prime95.

Even Sandra’s Cryptography test can’t match the stress level of Prime95; its largest differences merely reflect the X299 Gaming Pro Carbon AC’s better memory bandwidth. The only solid evidence of the power-versus-performance tradeoff thus far has been in Sandra Multimedia.

3D Games

The three X299 motherboards trade blows through our first three game tests, with the X299 Gaming Pro Carbon really only standing out for its large loss in Talos. That only occurs when the game is paired with the board’s included Nahimic Audio software, however, and the gains available by disabling the software are clearly visible when comparing the solid bars to the faded bars in its original review chart.

Timed Applications

Less encoding time means more performance in timed applications, yet the differences in the way the three boards manage the CPU have produced performance that’s anything but normalized. The best we can hope for is that each motherboard’s average for each chart falls within a few percent of the others.

Power, Heat & Efficiency

The X299 Taichi falls between the two previously-tested X299 motherboards in power consumption, as indicated within Intel’s Extreme Tuning Utility. It’s closer to the Asus sample, though the heat measured at its voltage regulator is noticeably higher than either competitor.

The most aggressively clocked sample and the worst consumer of energy, the X299 Gaming Pro Carbon is roughly 5-6% faster than the ASRock and Asus samples in our mixed applications.

A small overall performance lead for the MSI sample came with a huge increase in power consumption, placing it last in efficiency. ASRock again falls in the middle, landing closer to the Asus motherboard.
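Our efficiency metric is simply average performance divided by average power draw, normalized to the group so that 100% marks an exactly average board. Here is a minimal sketch of that math; the performance and power numbers below are illustrative placeholders, not this review’s actual readings:

```python
# Hypothetical relative performance (group average = 100) and full-load
# power draw (W) for three boards; placeholder values for illustration.
boards = {
    "X299 Taichi":            {"perf": 100.4, "power": 228.0},
    "Prime X299-Deluxe":      {"perf":  97.8, "power": 221.0},
    "X299 Gaming Pro Carbon": {"perf": 103.2, "power": 262.0},
}

# Efficiency = performance per watt, normalized to the group average
# so that 100% means "exactly average efficiency".
perf_per_watt = {name: b["perf"] / b["power"] for name, b in boards.items()}
avg = sum(perf_per_watt.values()) / len(perf_per_watt)
efficiency = {name: 100 * ppw / avg for name, ppw in perf_per_watt.items()}

for name, eff in sorted(efficiency.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {eff:.1f}% of average efficiency")
```

Normalizing to the group mean is why a board can post the highest raw performance yet still land last in efficiency, as the MSI sample does here.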

Overclocking

The X299 Taichi appears to be the most advanced overclocker in the mix, taking small wins in CPU, DRAM, and BCLK frequency. Unfortunately, the higher DRAM O/C appears to be due to more-conservative timings, as the bandwidth of its higher data rate still trails both Asus and MSI.

Final Analysis

Performance per dollar charts have little to do with features per dollar, and all three competitors have slightly different feature sets. Only the X299 Taichi and Prime X299-Deluxe have dual Gigabit Ethernet, for example.
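Performance-per-dollar charts divide each board’s average performance by its price, then normalize against the group average. A quick sketch of the calculation, using placeholder performance figures and prices rather than this review’s data:

```python
# Performance per dollar, normalized to the group average (100% = average).
# Prices and performance numbers are illustrative placeholders only.
samples = {
    "X299 Taichi":            {"perf": 100.4, "price": 220.0},
    "Prime X299-Deluxe":      {"perf":  97.8, "price": 350.0},
    "X299 Gaming Pro Carbon": {"perf": 103.2, "price": 280.0},
}

ratios = {name: s["perf"] / s["price"] for name, s in samples.items()}
avg = sum(ratios.values()) / len(ratios)
for name, r in ratios.items():
    print(f"{name}: {100 * r / avg:.1f}% of average value")
```

Because performance differences between boards are small while prices vary widely, the cheapest competent board usually dominates a chart like this, which is exactly why it says nothing about features per dollar.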

On the other hand, MSI’s X299 Gaming Pro Carbon AC hits the middle of the networking comparison with a fast 867 Mb/s 802.11ac controller. Maybe the controller upgrade is worth as much as a second Gigabit Ethernet controller, maybe it isn’t: Your personal needs should define your perspective. The Prime X299-Deluxe goes over the top with a high-priced 802.11ad solution offering up to five times the bandwidth of MSI’s, but the board costs over $100 more. The X299 Taichi, with its old-fashioned 433 Mb/s solution and dual Gigabit Ethernet, is actually cheaper than either of these rivals.

In spite of its lower cost, the X299 Taichi adds a third M.2 slot, and that slot is fed by the CPU. While that’s a superior connection with 28- and 44-lane CPUs, dedicating four CPU lanes to a storage interface knocks SLI capability out of 16-lane Kaby Lake-X configurations. That seems a little harsh, since the least-expensive X299 motherboards and processors will likely appeal to a certain segment of gamers, but the non-shared storage bandwidth should appeal to anyone who doesn’t choose a Core i7-7740X or lesser LGA 2066 CPU.

Despite its comparatively low price, the X299 Taichi appears to be value-optimized for mid-tier LGA 2066 processors. Even for our own high-end CPU, we found no overwhelming reason to choose the more expensive X299 Gaming Pro Carbon over it, so we’d simply recommend that anyone vacillating between the top Kaby Lake-X and the bottom Skylake-X buy the better processor with the money this motherboard saves them. That certainly sounds like a value win!


MORE: Best Motherboards

MORE: How To Choose A Motherboard


MORE: All Motherboard Content

Thomas Soderstrom
Thomas Soderstrom is a Senior Staff Editor at Tom's Hardware US. He tests and reviews cases, cooling, memory and motherboards.
  • You overclocked 10/20@4.4Ghz and then you complain about heat? You people are insane, seriously...
    Reply
  • Speaking of ASRock boards, in my experience they are the best. I am waiting for extreme series x299, like Extreme4, 6, 10. I still have AsRock x79 Extreme4 and AsRock x99 Extreme4 and they are rocking awesome since day 1 and both i paid just $170 (On Sale). Again speaking of SkyLake X, lower that BLCK down to 100Mhz and set multiplier to 20x and lower your voltage or in fact use default voltage and do not worry about heat. As I said in the previous article, pushing 10/20 this high goes beyond reasonable. If you guys snatch 6/12 Skylake - X you can set that one to 4.5/4.6Ghz and you will not see heat issues.
    Reply
  • Crashman
    19925236 said:
    You overclocked 10/20@4.4Ghz and then you complain about heat? You people are insane, seriously...
    Does this sound like a complaint or an observation?
    The X299 Taichi falls between the two previously-tested X299 motherboards in power consumption, as indicated within Intel’s Extreme Tuning Utility. It’s closer to the Asus sample, though the heat measured at its voltage regulator is noticeably higher than either competitor.
    Relax, it's a beautiful day somewhere
    19925267 said:
    Speaking of ASRock boards, in my experience they are the best. [...] If you guys snatch 6/12 Skylake - X you can set that one to 4.5/4.6Ghz and you will not see heat issues.
    I'd be happy with 4.6 GHz 8-cores, 4.4 GHz 10-cores, and 4.2 GHz 12-cores. It looks like we're on our way :)

    Reply
  • Thom457
    To someone not fully versed in what a I9 7900X is you'd never realize that the base clock speed of this $1000 CPU is 3.3 Ghz. One might surmise that the I9 7900X is defective because it won't run all the Cores at 4.3 Ghz without issues of heat. To someone that isn't obsessed with finding a way to run these CPUs at full load under unrealistic practical loads (lab rat only kind of loads) all the issues with "heat" and throttling sends kind of a false picture here... I'm not an Intel kind of Guy but this obsession tends to fall on both Camps and since Intel typically overclocks better the obsession is found there more than on the AMD side. Can anyone really see the difference between 120 and 160 FPS? I can see the dollar difference readily.

    I clearly remember all the overclocking issues with the Ivy stuff as the first generation die shrink from 32 NM to 22. Push the Cores beyond what they were rated for and heat and voltage spikes were the rule because the smaller die couldn't shed the heat that the 32 NM stuff could to the heat spreader. My Devil's Canyon was the result of optimizing that problem in rev two of the 22 NM stuff. My not over clocked DC running at stock 4.0 Ghz on water never needs to clock up all the cores on anything I can do in the practical world. On water it will naturally overclock better than air but most of the time it only overclocks up 1 to 2 Cores in normal use because outside of artificial means there is just no real world need for all four Cores to run at even 4.0 Ghz.

    Anyone that needs to overclock their equipment to these extremes I hope has deep pockets or a Sugar Daddy with deep pockets. To some it seems the base clock rate of these CPUs are treated like the speed limits the majority ignore most of the time. I thought and still think that the $336.00 I paid for my Quad Devils Canyon was a lot of money. When you add in all the supporting expenses that tend to be locked in generational too blowing the CPU a year or two down the road doesn't just mean paying an outrageous amount for a replacement CPU if you can find a NIB one but likely having to pay for a complete new MB, Memory and new model CPU because that turn out to be a better investment compared to buying rather rare and expensive older CPU models. To those where money is no restriction on their obsessions none of this matters I understand.

    I've been an enthusiast in this field since the Apple II days and not once have I abused my equipment in the vain pursuit of a meaningful increase in performance at the expense of the life span of the equipment. My time and money are valuable to me.

    If you "need" a ten Core CPU to run beyond what it is rated for I'd hope there's a commercial payback for doing that. Various commercial and government interests can afford to buy by the thousands and apply cooling mechanisms that dwarf anything available on the Consumer side of the equation. You do this in the multi-CPU Server world and you void the warranty and are on your own. No one does that because the downside of blowing out a CPU and having to explain that to the money men isn't a career advancing move. That's why server stuff isn't unlocked. The average CPU use was 3% in my Server Farm before VM came along and promised to solve that problem. That some CPUs do in fact hit 100% now and then for limited periods of time gets lost in that drive to raise CPU utilization use and lower equipment cost through buying less CPUs for the most part. When net application performance declines under load while that CPU utilization level rises explaining that to VM Warriors is about as effective as explaining to Consumers that pushing your equipment beyond its design specs isn't going to buy you anything in the real world outside of having to replace your system years before it needs to be.

    I saw the same madness when the 8 Core Intel Extreme came out. If you couldn't get all its Cores to run at 4.5 Ghz somehow you were being cheated it seemed. That it had a base frequency of 3.0 Ghz got lost in all the noise. That's its Xeon version ran at 3.2 Ghz was apparently lost on many.

    We all want something for nothing at times... With CPU performance some will never be satisfied with anything offered. That's human nature. Just as a matter of practical concern will all the CPUs on this $1000.00 CPU at over 4.0 Ghz provide a better gaming experience than my ancient $336.00 4.0 Devil's Canyon at 4.0 Ghz all else equal? Will the minute difference in FPS be detectable by the human eye?

    As another has said worrying about heat at 4.3 Ghz with this CPU model is insanity. It feeds a Beast that knows no way to be satisfied. The thermal limits at 14 NM with Silicon are there for everyone to see. I still remember the debates about which was faster the 6502 running at 2 Mhz or the Z80 at 5.0 Mhz? It didn't matter to me because my Z80A ran at 8 Mhz...on Static Ram no less. The S100 system with 16 KB of memory was a secondary heater for the house.

    Next year Intel and AMD will bring out something faster and the year after and the year after that but for some nothing will ever be fast enough. The human condition there. I value my time and money. I don't need to feed the Beast here. There's little practical value in these kinds of articles and testing. The 10 Core I9 7900X rushed to production has issues running at 4.3 Ghz vs. its stock 3.3 Ghz... Who would have thought?

    Just saying...
    Reply
  • the nerd 389
    Could you check the VRM temperatures as well? Specifically, if the caps on a 13 phase VRM with 105C/5k caps exceeds 60C, then there is likely a 1-year reliability issue. Above 50C, there's likely a 2-year reliability issue.
    Reply
  • Crashman
    19925877 said:
    To someone not fully versed in what a I9 7900X is you'd never realize that the base clock speed of this $1000 CPU is 3.3 Ghz. [...] The 10 Core I9 7900X rushed to production has issues running at 4.3 Ghz vs. its stock 3.3 Ghz... Who would have thought?
    It doesn't quite work out that way. To begin with, the BEST reason for desktop users to step up from Z270 to X299 is to get more PCIe. The fact that this doesn't jive with Kaby Lake-X just makes Kaby Lake-X a poor product choice.

    Then you're stuck looking only at Skylake-X: The 28-lanes of two mid-tier models are probably good enough for most enthusiasts. The extra cores? If you need the extra lanes, I hope you want the extra cores as well.

    But the maximum way to test THE BOARDS is with a 44-lane CPU. And then you're getting extra cores again, which are useful for testing the limits of the voltage regulator.

    LGA 2066 doesn't offer a 6C/12T CPU with 44 lanes and extra overclocking capability. Such a mythical beast might be the best fit for the majority of HEDT users, but since it doesn't exist we're just going to test boards as close to their limits as we can afford.

    Reply
  • Crashman
    19926006 said:
    Could you check the VRM temperatures as well? Specifically, if the caps on a 13 phase VRM with 105C/5k caps exceeds 60C, then there is likely a 1-year reliability issue. Above 50C, there's likely a 2-year reliability issue.
    I haven't plugged in a Voltage Resistor Module since Pentium Pro :D I'm just nitpicking over naming conventions at this point. The thermistor is wedged between the chokes and MOSFET sink in the charts shown.

    Reply
  • the nerd 389
    19926029 said:
    I haven't plugged in a Voltage Resistor Module since Pentium Pro :D I'm just nitpicking over naming conventions at this point. The thermistor is wedged between the chokes and MOSFET sink in the charts shown.
    The caps are much more likely to fail than the MOSFETs in my experience. They're more accessible than the chokes. The ones on that board appear to be 160 uF, 6.3V caps for the VRMs. Is there any way to check their temps and if they're 105C/5k models or 105C/10k?
    Reply
  • Crashman
    19926116 said:
    The caps are much more likely to fail than the MOSFETs in my experience. They're more accessible than the chokes. The ones on that board appear to be 160 uF, 6.3V caps for the VRMs. Is there any way to check their temps and if they're 105C/5k models or 105C/10k?
    Marked FP12K 73CJ 561 6.3. I should probably get an infrared thermometer :D

    Reply
  • drajitsh
    Thom457 raises some valid points, but Crashman gives a good answer. My take is that the temperatures matter in 2 situations without overclocks--
    1. Workstation use, especially when cost-constrained from purchasing Skylake-SP and various accelerators (if your workload cannot be GPU accelerated).
    2. High ambient temperatures, if you cannot or do not want to use below-ambient cooling.
    Reply
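On the capacitor-lifetime concern raised in the comments above: electrolytic capacitor endurance ratings (such as 105°C/5,000 hours) are commonly extrapolated to lower operating temperatures with the 10°C rule of thumb, a first-order simplification of the Arrhenius relation in which expected life roughly doubles for every 10°C below the rated temperature. A quick sketch of that estimate:

```python
def estimated_lifetime_hours(rated_hours: float, rated_temp_c: float,
                             operating_temp_c: float) -> float:
    """10C rule of thumb: endurance doubles per 10C below rated temperature.

    A first-order simplification of the Arrhenius relation; it ignores
    ripple-current self-heating, so treat results as optimistic estimates.
    """
    return rated_hours * 2 ** ((rated_temp_c - operating_temp_c) / 10)

# A 105C/5,000-hour capacitor held at 60C:
hours = estimated_lifetime_hours(5000, 105, 60)
print(f"{hours:,.0f} hours (~{hours / 8766:.1f} years of 24/7 operation)")
```

By this rule a 105°C/5,000-hour part at 60°C projects to roughly 113,000 hours; actual life also depends on ripple current and voltage derating, so the figure is a ceiling, not a guarantee.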