Bitmain is a small company from China that makes ASIC miners for cryptocurrencies, and it’s about to be every gamer’s best friend. The company today announced the Antminer E3, which is an ASIC miner engineered for mining Ethereum. In theory, these new ASIC miners should alleviate some of the pressure that the GPU market currently faces.
Last year, around this time, an as-yet little-known GPU-mineable cryptocurrency called Ethereum started to gain traction in the market, and its value began to skyrocket, which kicked off the great cryptocurrency craze of 2017 in earnest. Within weeks of Ethereum’s breakout, the GPU market started to feel the brunt of the excitement, and by June, graphics card prices had begun to spiral out of control. To date, we’re still feeling the “Ethereum Effect,” with graphics cards selling for far more than MSRP (if you can find them).
Recently, GPU mining profits have started to dwindle, but a cursory check online for cryptocurrency mining discussions indicates that the lower profit margins aren’t scaring off the large-scale mining operations as fast as you might hope. Though, if history is any indication of the future, as it often is, Bitmain’s Antminer E3 Ethereum ASIC miner could disrupt the Ethereum GPU-mining market.
You may recall that several years ago, you could mine Bitcoin with your home PC. These days, it’s not possible to mine Bitcoin at a profitable rate with a standard PC. The Bitcoin mining market is now dominated by ASIC miners, which are engineered to do one thing: mine Bitcoin. These specialized devices are more powerful, and significantly more efficient at mining than graphics cards and CPUs.
Ethereum is a lot like Bitcoin, in that it is built on a blockchain with an immutable ledger and secured by proof-of-work mining. However, Ethereum was supposed to be ASIC-resistant because its Ethash algorithm relies heavily on fast memory access, whereas typical ASIC miners feature powerful processors that can crunch the numbers, but not much in the way of memory.
Bitmain didn’t reveal the full details of the Antminer E3, so we’re not sure how the company solved the memory challenge. However, the specifications that Bitmain did release indicate that the Antminer E3 ASIC miners will soon render GPUs obsolete for Ethereum.
Bitmain said the upcoming Antminer E3 ASIC would offer 180MH/s of mining performance while consuming 800W of power, which is an unheard-of level of efficiency. To put that into perspective, our scraped-together “profitable” Ethereum miner pumps out 93MH/s from a combination of one R9 380, one R9 380X, one R9 390X, and one R9 Fury. That machine draws roughly 950W from the wall, which is profitable, but the gains are dropping, and it may soon be untenable to operate. Our system isn’t the most efficient miner, to be sure, but it doesn’t make sense to spend the money on an “efficient” GPU miner built with 10-series cards.
The biggest death knell for the GPU Ethereum mining market is the Antminer E3's price. Bitmain is asking $800 for each unit, which massively undercuts the current rate for a GPU-based system. To get 180MH/s out of a GPU miner, you would need six GTX 1080 Tis or nine GTX 1060s. In today’s GPU market, you’re looking at well over $6,000 (probably closer to $7,000 once you add power supplies and other equipment) to build a rig like that with GTX 1080 Tis, and roughly $4,000 for the GTX 1060 rig. And that's assuming you can get your hands on the cards in the first place.
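Running the figures quoted above through a quick back-of-envelope comparison makes the efficiency and cost gap concrete. This is only a sketch using the article's numbers; real-world pricing and power draw vary:

```python
# Back-of-envelope comparison using the figures quoted in the article.

def efficiency(mh_s, watts):
    """Mining efficiency in MH/s per watt of wall power."""
    return mh_s / watts

def cost_per_mh(price_usd, mh_s):
    """Hardware cost in dollars per MH/s of hashrate."""
    return price_usd / mh_s

# Antminer E3: 180 MH/s at 800 W, for $800
e3_eff = efficiency(180, 800)        # 0.225 MH/s per watt
e3_cost = cost_per_mh(800, 180)      # ~$4.44 per MH/s

# The article's four-GPU rig: 93 MH/s at roughly 950 W
gpu_eff = efficiency(93, 950)        # ~0.098 MH/s per watt

# Six-card GTX 1080 Ti build at the article's ~$7,000 all-in estimate
rig_cost = cost_per_mh(7000, 180)    # ~$38.89 per MH/s

print(f"E3:          {e3_eff:.3f} MH/s/W, ${e3_cost:.2f}/MH/s")
print(f"4-GPU rig:   {gpu_eff:.3f} MH/s/W")
print(f"1080 Ti rig: ${rig_cost:.2f}/MH/s")
```

On these numbers, the E3 is more than twice as efficient per watt as the article's rig and costs roughly a ninth as much per MH/s as the 1080 Ti build.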
With those numbers, it’s easy to see that Bitmain’s ASIC has the potential to kill GPU-based Ethereum mining rather quickly. However, it’s too early to get excited about the death of GPU-based mining. There are many cryptocurrencies out there, and the death of Ethereum mining could simply direct GPU miners to another coin.
Bitmain expects the first batch of Antminer E3 units to roll out between July 16 and July 31. The company was accepting pre-orders today, but the first batch sold out before we caught wind of the announcement.
The price of $800 is a sure giveaway that this will be useless when it arrives.
(a) If Ethereum changes the algorithm, you just switch to new software while the ASICs become useless.
(b) When Ethereum goes PoS, you sell the GPUs while you throw the ASICs in the garbage can.
...and btw, Ethereum WILL change the algorithm if many of these get sold.
That said, here's a summary of a 13-GPU miner built around two 1,000W PSUs:
miner3 10.0.0.13:3333 4 days, 06:38 364.61 MH/s 33110/1 (0.00%) 70C/72% 64C:72% 57C:72% 52C:72%
63C:72% 57C:72% 63C:72% 64C:72% 51C:72% 68C:72% 64C:72% 52C:72% 55C:72% us2.ethermine.org:4444
10.6 - ETH THIRTEEN
So it's 364 MH/s pulling about 1,900W at the wall. A bit under the efficiency of the Bitmain offering, but close. The difference is that getting a GPU build to this low a power draw, with a stable, high hashrate, requires an understanding of the Linux kernel and how to undervolt and underclock the GPUs via the kernel's amdgpu driver. And the GPUs need to have their BIOSes modified, which, if you don't understand what you're doing, can leave you with "bricked" GPUs that are a PITA to recover. So it's quite a bit of work to build.
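For reference, the undervolting described above is done through amdgpu's OverDrive sysfs interface on Linux (enabled via the `amdgpu.ppfeaturemask` boot parameter). Here's a minimal sketch; the card path, state index, clock, and voltage are placeholders you'd tune per card, not known-good settings:

```python
import os

CARD = "/sys/class/drm/card0/device"  # placeholder: adjust for your GPU

def od_sclk_line(state, mhz, mv):
    """Format an amdgpu OverDrive core-clock command: 's <state> <MHz> <mV>'."""
    return f"s {state} {mhz} {mv}"

def apply_overdrive(card, commands):
    """Write each OverDrive command to pp_od_clk_voltage, then 'c' to commit.

    Requires root; each write reprograms one clock/voltage state.
    """
    path = os.path.join(card, "pp_od_clk_voltage")
    for cmd in list(commands) + ["c"]:
        with open(path, "w") as f:
            f.write(cmd + "\n")

if __name__ == "__main__" and os.path.exists(CARD):
    # Example only: underclock/undervolt the top core-clock state.
    # These values are illustrative, NOT safe defaults for any card.
    apply_overdrive(CARD, [od_sclk_line(7, 1150, 900)])
```

BIOS modding is a separate step on top of this; the sysfs route alone covers the clock and voltage tuning the comment mentions.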
Bitmain's ASICs have a nice web interface. Pointy-clicky and done, you're mining.
And post #1 above isn't really accurate. The E3s won't be bricks if you know how to modify the software running them, which is kinda like building a custom ROM for Android, not really that hard: just update the bundled mining software with the new algo, build, flash, poof. It's already been done for Bitmain D3s; google "BlissZ D3 firmware".
So the E3s are, IMHO, a good thing. They're not drastically faster than six AMD GPUs of similar power, but they are available to hobbyists who just want to mine. And they draw less power, which is always a good thing...
And I doubt PiRL will fork, so you can always mine that, or ETC/ELLA/MVP/something Ethash-based.
A hard fork or a changed algo on ETH won't necessarily even break an ASIC, if they designed the ASIC to be re-programmable - but I imagine you would lose a lot of speed and efficiency if you did that.
FPGAs on the other hand can easily be reconfigured, but those devices are where you actually run into the memory problem. For the most part only FPGAs made for AI are designed for that kind of memory access, and you would have a hard time building a box around a single one of those for $800.
Even if they need to build a new chip for a new algorithm, the turnaround for that will be much faster this time. So ETH fans are going to find themselves in an arms race where there is a hard fork every 6 months, until investors get tired of it and the currency loses so much value that an updated ASIC is not worthwhile.
Sure, there are not many people in the world with the knowledge and motivation (and absurd bankroll) needed to design and build an ASIC, but they didn't exactly pull off some technical miracle here. Anybody who claims their currency is ASIC-resistant doesn't actually understand what an ASIC is, so you should not trust their opinion on such things. All the current ASIC-resistant currencies are simply FPGA-resistant (granted ASIC designs are often iterated and tested on FPGAs first, so that could slow ASIC development).
Interesting clarification, thanks!
My sum total of knowledge about GPU hardware: You can carefully cut the x16 slot down to x8 or x4 to fit in old slots on, say, 1U or 2U servers. A figurative hacksaw job to make an elegant AI solution for colocation. And, when replacing fans on non-blower models, the chip underneath says "ASIC" on it. Meaning a GPU is a less-specific "ASIC" than what carries the label of "ASIC"?
Further, I don't understand the difference between a programmable CPU feeding a GPU's ASIC vs. what's considered an FPGA. Haven't gate arrays fallen out of favor/use with the advent of CPU/GPU hardware and CUDA/OpenCL/OpenACC?
And you wimped out on proffering an opinion if the E3 is a good or bad thing ;-)
Why on earth would you cut down the card? It's far easier to cut the back off the PCIe slot, so that a full-length card will fit in an x4 slot. That way the cards are undamaged and can operate at full speed, and you only lose the warranty on one component (if you're using risers, it's cheap), assuming the MB or riser doesn't already have the slot open at the back.
As far as FPGAs go, yes and no. FPGAs are for development work, or for niche algorithms that need to be fast anyway. IIRC, Nvidia was simulating new architectures on FPGA hardware at one point. Once you have a good FPGA design, you could almost as easily send it off to be fabbed as an ASIC, and likely get better speeds at cheaper unit prices once you overcome the cost of taping it out for lithography.