BSOD 0x09c machine check exception

NaHoW

Commendable
Apr 16, 2016
Hello

For a project we have 50 servers, all equipped with (generally) the same hardware. The issue we have is very serious and happens on all machines. Despite a lot of effort and contacting the manufacturers and the software developers, everyone points at each other and even refuses to give me a clue about what is going on.

First let me describe the setup. This is 'server-grade' hardware. As a first experience with it, server grade has been the biggest disappointment of my life.

- SuperMicro X10SDV-8C+-LN2F
- Intel Xeon D-1540 (embedded on the motherboard)
- Custom designed 1U case or SuperMicro original case
- 480 watt server PSU or 200 watt SuperMicro original PSU
- Samsung 850 EVO 500 GB SSD
- 32 GB DDR4-2133 ECC or NON-ECC (but not mixed in the same server)
- Asus GT730 4GB DDR3 GPU
- GPU is mounted on a PCIe riser card (not a ribbon riser), either a no-name one from China or a SuperMicro original

Running on the system
- Windows Server 2012 R2 Enterprise
- VMWare Workstation 12
- VMs run GPU-intensive tasks
- The systems are stock; there is no over- or underclocking at all

Symptoms
- Random BSOD 0x9C (aka MACHINE_CHECK_EXCEPTION)
- Random as in: sometimes a system runs for a week with no problems, sometimes it crashes after just 10 minutes, but mostly it runs for a few hours (a note on checking the Windows hardware error log follows right after this list).
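
In case it is useful for whoever looks at this: a minimal sketch of how the System event log could be checked for hardware error records around the crashes, assuming the standard Microsoft-Windows-WHEA-Logger provider (present on Server 2012 R2). This is just the first thing I would run, not something we have already collected from all machines:

    Get-WinEvent -FilterHashtable @{ LogName='System'; ProviderName='Microsoft-Windows-WHEA-Logger' } -MaxEvents 50 |
        Format-List TimeCreated, Id, LevelDisplayName, Message

If corrected errors show up there before the fatal machine check, the component they name (cache, PCI Express, memory) would already narrow things down.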


Already tried/checked
- BIOS updated to the latest version (I think this improved how long a system stays stable, but that could have been coincidence)
- Windows updated to the latest version
- VMWare updated to the latest version
- Swapped all components and tried every different option, even tried a desktop ATX PSU and an M.2 SSD.
- Installed all systems from scratch with Ubuntu. I'm not familiar with Linux and have never seen a Linux equivalent of a BSOD, and I still haven't, since the servers run headless and I only watched this test in the DC. RESULT: the system would hang, and after a reboot Linux reported an Xorg crash (GPU related). A sketch of how the machine check side could be captured on Linux follows after this list.
- Changed the GPU setting in the BIOS to 'Above 4G'; the rest of the BIOS is at factory defaults.
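
(The Linux sketch mentioned above, assuming an Ubuntu install with the mcelog package available; exact package names and log locations can differ per release:)

    # kernel messages normally show machine checks directly
    dmesg | grep -i -E 'mce|machine check'
    # with mcelog installed and running as a daemon, decoded records are written to its log
    sudo apt-get install mcelog
    sudo cat /var/log/mcelog

If the Xorg crash is really just a symptom of a machine check on the PCIe side, these logs should contain error records pointing at a specific bank/component.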


Also informative:
- Systems are located in a datacenter. Temperature, airflow, power and network are optimal.
- Temperatures are well below the factory maximum
- We have the exact same *software* setup running on desktop computers (with desktop hardware). Those systems run fine, with roughly 1 out of 100 PCs crashing every month.
- I have contacted VMware; they say this is a hardware issue
- I have contacted SuperMicro; they say nothing really, except suggesting some things we have already tried, and also that this could still be a software issue.

We are desperate here. Luckily, the application we run is somewhat redundant: if a server and the VMs on it drop out, it's not a big problem, other servers take over the load within 5 minutes, but at this rate I am required to be online all day to restart servers.

I have a fair amount of hardware knowledge, but this goes beyond it; I've been searching on this all day, every day for over a month, trying all sorts of different things.
The fact that these motherboards are used by hosting providers on a large scale makes me suspect that the board itself is OK. This is definitely not an individual defective unit to RMA, as all 50 boards show the same symptoms. The only thing different in our setup is the GPU. That, combined with the Linux experiment, makes me suspect that this is definitely something on the PCIe lane. The GPU itself is stable on desktop motherboards. Despite its large memory capacity, this is a small GPU that does not draw much power. I would suspect the Chinese riser cards, but then again we also use SuperMicro-certified risers and they show no improvement at all.

I am very desperate to find a solution here. That will start with determining the exact cause.
We are willing to pay a nice bounty to an expert who can analyse some dumps and give us more details (or, better yet, a solution).
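
For whoever picks this up, a rough sketch of the first-pass triage I would expect on the dumps (assuming WinDbg, the minidumps from the default C:\Windows\Minidump location, and access to the Microsoft symbol server):

    $$ point symbols at the Microsoft symbol server
    .symfix; .reload
    $$ bugcheck code, parameters and the faulting processor context
    !analyze -v
    $$ only if the dump is really a WHEA record (bugcheck 0x124): decode the error record
    $$ whose address is in the second bugcheck parameter; this names the MCA bank and
    $$ says whether the error is cache, bus/interconnect or memory related
    !errrec <address from parameter 2>

The MCA bank and error type in that output are what should finally separate 'PCIe/GPU' from 'CPU/memory'.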

Kind regards,

Simon

popatim

Titan
Moderator
I would not have used an 850 EVO in these builds. They are definitely not server grade and definitely have compatibility issues. Take your worst offender, swap in a known-good drive, and retest. You also want to test the RAM for several days. I'm not sure how 50 servers went into production if they have problems running for just a few hours. Is the RAM certified for that motherboard? I have to ask, because all I see in your specs is generic parts thrown onto a server board, and that's not how you build a stable system.

NaHoW

Commendable
Apr 16, 2016
Hello

We also tested a server with an Intel SSD that had passed testing in the office, and it produced the same error.
The RAM test I'll take as a good suggestion, but these are 50 machines; I doubt they all have bad memory sticks. We have two kinds of memory and both are on the certified list.
The only generic part thrown in would be the SSD. The GPU is not a thrown-in part: it's one of the two GPUs in the world we are able to use.
Furthermore, since the board is sold as server grade and you pay the price for that, I wonder what server grade actually means. Fancy, expensive hardware that is so delicate it cannot deal with the world's best-selling SSD?

Kind regards,

Simon