ASRock X299 Taichi Motherboard Review


Software & Firmware

The ASRock installation disc still includes a full set of drivers, but recent versions of the disc omit most of ASRock's custom software. To assure customers that they always have the latest versions of its various software packages, the company instead installs only a downloader application called "App Shop" that polls ASRock servers for the remaining suites. Chief among these are the ASRock RGB LED Utility, the App Charger modulation-control software for quick-charging Apple devices, the A-Tuning overclocking suite, a custom interface for cFos packet prioritization called XFast LAN, and the Restart to UEFI shortcut.

App Shop will also poll ASRock servers for driver and firmware updates, if you choose, and another window lets you disable its auto-run behavior.

ASRock A-Tuning offers Windows users a variety of overclocking profiles that access firmware values and repeat them in the Windows interface, as well as an elaborate menu of custom settings that covers the full range of firmware capability. We found that frequency settings worked, but we were unable to confirm core-voltage changes.

A-Tuning’s System Info tab appears to show proper initial readings, but likewise shows no change at various core voltage settings made through the OC Tweaker menu.

Fan-Tastic Tuning also appears to have been lifted directly from the motherboard firmware; both versions let users choose factory-programmed profiles, user-configured custom curves, or RPM-to-temperature slopes derived from the firmware's fan test.
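
To picture what such a slope amounts to, here's a minimal sketch in Python that linearly interpolates fan duty cycle between temperature breakpoints; the breakpoints themselves are hypothetical and not taken from ASRock's factory profiles.

    # Minimal fan-curve sketch: map a temperature to a PWM duty cycle by linear
    # interpolation between breakpoints. These breakpoints are hypothetical
    # examples, not ASRock's factory-programmed profiles.
    CURVE = [(30, 20), (50, 40), (70, 75), (80, 100)]  # (temperature C, duty %)

    def fan_duty(temp_c):
        """Return the PWM duty (%) for a temperature, clamped to the curve's ends."""
        if temp_c <= CURVE[0][0]:
            return CURVE[0][1]
        if temp_c >= CURVE[-1][0]:
            return CURVE[-1][1]
        for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
            if t0 <= temp_c <= t1:
                return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

    print(fan_duty(60))  # 57.5, i.e. 57.5% duty midway up the 50-70C segment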

The ASRock RGB LED software for Windows mirrors the firmware's settings, but it didn't yet function properly with this new X299 Taichi: it didn't address the board's upper LED header, and changing its settings caused it to stop working. Fortunately, the corresponding firmware menu works.

Firmware is where the Taichi shone, though not in its easy-OC section. The Turbo 4.2 GHz and Turbo 4.4 GHz settings appeared to have little impact on full-load performance, yet applied a danger-zone (according to the firmware) 1.90V input voltage with no change in core voltage. The 4.6 GHz and 4.8 GHz settings pushed input voltage even higher, to 2.0V and 2.1V, while raising core voltage to 1.26V and 1.32V, respectively. Both of the higher settings caused throttling, though it's not clear whether that was triggered by CPU temperature, voltage-regulator temperature, or default power limits.

(For more on this issue, read The Skylake-X Mess Explored: Thermal Paste And Runaway Power.)
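
A rough way to see why the two higher presets run into throttling is the common dynamic-power approximation P ∝ f·V². The sketch below scales our manual 4.4 GHz, 1.15V result by the preset frequencies and core voltages noted above; it ignores leakage, so treat the multipliers as ballpark figures only.

    # Relative package-power estimate using the dynamic-power approximation
    # P ~ f * V^2 (leakage ignored, so these are ballpark multipliers only).
    def relative_power(freq_ghz, vcore, ref_freq=4.4, ref_vcore=1.15):
        return (freq_ghz / ref_freq) * (vcore / ref_vcore) ** 2

    for freq, volts in [(4.6, 1.26), (4.8, 1.32)]:
        print(f"{freq} GHz at {volts} V: ~{relative_power(freq, volts):.2f}x the 4.4 GHz / 1.15 V load")
    # Prints roughly 1.26x and 1.44x, which is consistent with the observed throttling.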

Using manual settings, we got our Core i9-7900X 100% stable at 4.40 GHz and 1.15V; any higher setting caused CPU thermal throttling under a 10-core (20-thread) Prime95 AVX load. When we say "100% stable," we mean under all loads.

Our DDR4-3866 reached a completely stable 3914 MHz data rate at the X299 Taichi's 1.300V setting. That setting matters, because our voltmeter showed it actually produced 1.352 to 1.354V at the DIMM slot.
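
A quick calculation, using only the figures above, shows how large that overshoot is:

    # DIMM voltage overshoot: firmware setting vs. measured voltage at the slot.
    set_voltage = 1.300
    for measured in (1.352, 1.354):
        overshoot = (measured - set_voltage) / set_voltage * 100
        print(f"{measured:.3f} V measured = +{overshoot:.1f}% over the {set_voltage:.3f} V setting")
    # Roughly a 4% overshoot, worth remembering when comparing memory-voltage results across boards.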

Two different menus address motherboard (external) and CPU (internal) voltage settings. The default Level 2 CPU load-line calibration got us to the relatively high 4.40 GHz core frequency while the fully-loaded CPU stayed just under its thermal-throttle threshold temperature.
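
Load-line calibration counteracts Vdroop, the fall in delivered voltage as load current rises. The sketch below illustrates the basic relationship; the per-level resistances and the 150A load are hypothetical values chosen for illustration, since ASRock doesn't publish the figures behind its LLC levels.

    # Vdroop sketch: delivered voltage = set voltage - load current * load-line resistance.
    # The resistances per LLC level and the 150 A load are hypothetical illustrations,
    # not ASRock's actual calibration values.
    LLC_MILLIOHMS = {"Level 1": 0.2, "Level 2": 0.5, "Level 5": 1.0}

    def voltage_under_load(v_set, load_amps, level):
        return v_set - load_amps * LLC_MILLIOHMS[level] / 1000.0

    for level in LLC_MILLIOHMS:
        print(f"{level}: {voltage_under_load(1.15, 150, level):.3f} V delivered at 150 A")
    # Flatter load lines (lower resistance) hold the CPU closer to its 1.150 V setpoint under load.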

This version of X299 Taichi firmware defaults to “Advanced” configuration menus, but the Advanced tab also allows users to set it to use “EZ Mode” upon entry.

The X299 Taichi "Tool" menu includes a functioning RGB lighting control, an outgoing mail client for sending support messages, an installer that pulls RAID drivers off the installation DVD and places them onto a thumb drive to assist during Windows installation, a flash utility that updates firmware from files on a thumb drive, and an internet utility that transfers firmware images from ASRock servers to a thumb drive.

Only two of the five fan headers can be switched from PWM to voltage-based speed control, and both of those are rated for up to 1.5A of output. The other headers are PWM-only and rated for a 1A maximum.
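
Translated into wattage, assuming the usual 12V fan supply:

    # Fan-header capacity at the standard 12 V supply: P = V * I.
    for header, amps in (("Switchable PWM/voltage headers", 1.5), ("PWM-only headers", 1.0)):
        print(f"{header}: 12 V x {amps} A = {12 * amps:.0f} W")
    # 18 W versus 12 W per header.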

As with ASRock's A-Tuning software, the X299 Taichi firmware includes options to use several factory-programmed thermal fan-speed profiles, manual settings, or settings derived by the firmware during a fan test.

Finally, if you'd like to avoid the complicated settings, an EZ Mode menu is available by pressing F6. And if you never want to see the advanced interface again, the option to enter EZ Mode on every boot is found, as noted above, under the Advanced tab.


MORE: Best Motherboards

MORE: How To Choose A Motherboard


MORE: All Motherboard Content

Thomas Soderstrom
Thomas Soderstrom is a Senior Staff Editor at Tom's Hardware US. He tests and reviews cases, cooling, memory and motherboards.
  • You overclocked 10/20@4.4Ghz and then you complain about heat? You people are insane, seriously...
  • Speaking of ASRock boards, in my experience they are the best. I am waiting for extreme series x299, like Extreme4, 6, 10. I still have AsRock x79 Extreme4 and AsRock x99 Extreme4 and they are rocking awesome since day 1 and both i paid just $170 (On Sale). Again speaking of SkyLake X, lower that BLCK down to 100Mhz and set multiplier to 20x and lower your voltage or in fact use default voltage and do not worry about heat. As I said in the previous article, pushing 10/20 this high goes beyond reasonable. If you guys snatch 6/12 Skylake - X you can set that one to 4.5/4.6Ghz and you will not see heat issues.
  • Crashman
    19925236 said:
    You overclocked 10/20@4.4Ghz and then you complain about heat? You people are insane, seriously...
    Does this sound like a complaint or an observation?
    The X299 Taichi falls between the two previously-tested X299 motherboards in power consumption, as indicated within Intel’s Extreme Tuning Utility. It’s closer to the Asus sample, though the heat measured at its voltage regulator is noticeably higher than either competitor.
    Relax, it's a beautiful day somewhere
    19925267 said:
    Speaking of ASRock boards, in my experience they are the best. I am waiting for extreme series x299, like Extreme4, 6, 10. I still have AsRock x79 Extreme4 and AsRock x99 Extreme4 and they are rocking awesome since day 1 and both i paid just $170 (On Sale). Again speaking of SkyLake X, lower that BLCK down to 100Mhz and set multiplier to 20x and lower your voltage or in fact use default voltage and do not worry about heat. As I said in the previous article, pushing 10/20 this high goes beyond reasonable. If you guys snatch 6/12 Skylake - X you can set that one to 4.5/4.6Ghz and you will not see heat issues.
    I'd be happy with 4.6 GHz 8-cores, 4.4 GHz 10-cores, and 4.2 GHz 12-cores. It looks like we're on our way :)

  • Thom457
    To someone not fully versed in what a I9 7900X is you'd never realize that the base clock speed of this $1000 CPU is 3.3 Ghz. One might surmise that the I9 7900X is defective because it won't run all the Cores at 4.3 Ghz without issues of heat. To someone that isn't obsessed with finding a way to run these CPUs at full load under unrealistic practical loads (lab rat only kind of loads) all the issues with "heat" and throttling sends kind of a false picture here... I'm not an Intel kind of Guy but this obsession tends to fall on both Camps and since Intel typically overclocks better the obsession is found there more than on the AMD side. Can anyone really see the difference between 120 and 160 FPS? I can see the dollar difference readily.

    I clearly remember all the overclocking issues with the Ivy stuff as the first generation die shrink from 32 NM to 22. Push the Cores beyond what they were rated for and heat and voltage spikes were the rule because the smaller die couldn't shed the heat that the 32 NM stuff could to the heat spreader. My Devil's Canyon was the result of optimizing that problem in rev two of the 22 NM stuff. My not over clocked DC running at stock 4.0 Ghz on water never needs to clock up all the cores on anything I can do in the practical world. On water it will naturally overclock better than air but most of the time it only overclocks up 1 to 2 Cores in normal use because outside of artificial means there is just no real world need for all four Cores to run at even 4.0 Ghz.

    Anyone that needs to overclock their equipment to these extremes I hope has deep pockets or a Sugar Daddy with deep pockets. To some it seems the base clock rate of these CPUs are treated like the speed limits the majority ignore most of the time. I thought and still think that the $336.00 I paid for my Quad Devils Canyon was a lot of money. When you add in all the supporting expenses that tend to be locked in generational too blowing the CPU a year or two down the road doesn't just mean paying an outrageous amount for a replacement CPU if you can find a NIB one but likely having to pay for a complete new MB, Memory and new model CPU because that turn out to be a better investment compared to buying rather rare and expensive older CPU models. To those where money is no restriction on their obsessions none of this matters I understand.

    I've been an enthusiast in this field since the Apple II days and not once have I abused my equipment in the vain pursuit of a meaningful increase in performance at the expense of the life span of the equipment. My time and money are valuable to me.

    If you "need" a ten Core CPU to run beyond what it is rated for I'd hope there's a commercial payback for doing that. Various commercial and government interests can afford to buy by the thousands and apply cooling mechanisms that dwarf anything available on the Consumer side of the equation. You do this in the multi-CPU Server world and you void the warranty and are on your own. No one does that because the downside of blowing out a CPU and having to explain that to the money men isn't a career advancing move. That's why server stuff isn't unlocked. The average CPU use was 3% in my Server Farm before VM came along and promised to solve that problem. That some CPUs do in fact hit 100% now and then for limited periods of time gets lost in that drive to raise CPU utilization use and lower equipment cost through buying less CPUs for the most part. When net application performance declines under load while that CPU utilization level rises explaining that to VM Warriors is about as effective as explaining to Consumers that pushing your equipment beyond its design specs isn't going to buy you anything in the real world outside of having to replace your system years before it needs to be.

    I saw the same madness when the 8 Core Intel Extreme came out. If you couldn't get all its Cores to run at 4.5 Ghz somehow you were being cheated it seemed. That it had a base frequency of 3.0 Ghz got lost in all the noise. That's its Xeon version ran at 3.2 Ghz was apparently lost on many.

    We all want something for nothing at times... With CPU performance some will never be satisfied with anything offered. That's human nature. Just as a matter of practical concern will all the CPUs on this $1000.00 CPU at over 4.0 Ghz provide a better gaming experience than my ancient $336.00 4.0 Devil's Canyon at 4.0 Ghz all else equal? Will the minute difference in FPS be detectable by the human eye?

    As another has said worrying about heat at 4.3 Ghz with this CPU model is insanity. It feeds a Beast that knows no way to be satisfied. The thermal limits at 14 NM with Silicon are there for everyone to see. I still remember the debates about which was faster the 6502 running at 2 Mhz or the Z80 at 5.0 Mhz? It didn't matter to me because my Z80A ran at 8 Mhz...on Static Ram no less. The S100 system with 16 KB of memory was a secondary heater for the house.

    Next year Intel and AMD will bring out something faster and the year after and the year after that but for some nothing will ever be fast enough. The human condition there. I value my time and money. I don't need to feed the Beast here. There's little practical value in these kinds of articles and testing. The 10 Core I9 7900X rushed to production has issues running at 4.3 Ghz vs. its stock 3.3 Ghz... Who would have thought?

    Just saying...
  • the nerd 389
    Could you check the VRM temperatures as well? Specifically, if the caps on a 13 phase VRM with 105C/5k caps exceeds 60C, then there is likely a 1-year reliability issue. Above 50C, there's likely a 2-year reliability issue.
  • Crashman
    19925877 said:
    To someone not fully versed in what a I9 7900X is you'd never realize that the base clock speed of this $1000 CPU is 3.3 Ghz. One might surmise that the I9 7900X is defective because it won't run all the Cores at 4.3 Ghz without issues of heat. [...]
    It doesn't quite work out that way. To begin with, the BEST reason for desktop users to step up from Z270 to X299 is to get more PCIe. The fact that this doesn't jibe with Kaby Lake-X just makes Kaby Lake-X a poor product choice.

    Then you're stuck looking only at Skylake-X: The 28-lanes of two mid-tier models are probably good enough for most enthusiasts. The extra cores? If you need the extra lanes, I hope you want the extra cores as well.

    But the maximum way to test THE BOARDS is with a 44-lane CPU. And then you're getting extra cores again, which are useful for testing the limits of the voltage regulator.

    LGA 2066 doesn't offer a 6C/12T CPU with 44 lanes and extra overclocking capability. Such a mythical beast might be the best fit for the majority of HEDT users, but since it doesn't exist we're just going to test boards as close to their limits as we can afford.

  • Crashman
    19926006 said:
    Could you check the VRM temperatures as well? Specifically, if the caps on a 13 phase VRM with 105C/5k caps exceeds 60C, then there is likely a 1-year reliability issue. Above 50C, there's likely a 2-year reliability issue.
    I haven't plugged in a Voltage Resistor Module since Pentium Pro :D I'm just nitpicking over naming conventions at this point. The thermistor is wedged between the chokes and MOSFET sink in the charts shown.

  • the nerd 389
    19926029 said:
    19926006 said:
    Could you check the VRM temperatures as well? Specifically, if the caps on a 13 phase VRM with 105C/5k caps exceeds 60C, then there is likely a 1-year reliability issue. Above 50C, there's likely a 2-year reliability issue.
    I haven't plugged in a Voltage Resistor Module since Pentium Pro :D I'm just nitpicking over naming conventions at this point. The thermistor is wedged between the chokes and MOSFET sink in the charts shown.
    The caps are much more likely to fail than the MOSFETs in my experience. They're more accessible than the chokes. The ones on that board appear to be 160 uF, 6.3V caps for the VRMs. Is there any way to check their temps and if they're 105C/5k models or 105C/10k?
  • Crashman
    19926116 said:
    19926029 said:
    19926006 said:
    Could you check the VRM temperatures as well? Specifically, if the caps on a 13 phase VRM with 105C/5k caps exceeds 60C, then there is likely a 1-year reliability issue. Above 50C, there's likely a 2-year reliability issue.
    I haven't plugged in a Voltage Resistor Module since Pentium Pro :D I'm just nitpicking over naming conventions at this point. The thermistor is wedged between the chokes and MOSFET sink in the charts shown.
    The caps are much more likely to fail than the MOSFETs in my experience. They're more accessible than the chokes. The ones on that board appear to be 160 uF, 6.3V caps for the VRMs. Is there any way to check their temps and if they're 105C/5k models or 105C/10k?
    Marked FP12K 73CJ 561 6.3. I should probably get an infrared thermometer :D

  • drajitsh
    Tom raises some valid points, but Crashman gives a good answer. My take is that the temperatures matter in two situations even without overclocks:
    1. Workstation use, especially when cost-constrained from purchasing Skylake-SP and various accelerators (if your workload cannot be GPU-accelerated).
    2. High ambient temperatures, if you cannot or do not want to use below-ambient cooling.