Drives detected, but not one boots

jdcranke07

Honorable
So, I have two Samsung 840 EVOs in RAID as the boot drive for my media server, and four Seagate 2TB NAS drives that hold my media. All of this was working fine on Windows Server 2016 until I added four more Seagate 4TB IronWolf drives. Each set of four is on its own PCIE RAID controller, w/ the SSDs plugged into the mobo SATA ports. Now nothing boots from any drive, not even from USB. My question is: is there a limit to how many drives I can have on this mobo?

Mobo: Asrock EP2C602
CPU: Dual Xeon E5-2690 0
DRAM: 64GB G.Skill UDIMM
Add-in cards:
HighPoint RocketRAID 640L (x2)
Intel 4-port Gigabit NIC
 
Was the RAID pair a RAID 0 or a RAID 1? (Ensure that the array is still the top selection in boot device priority.)

You might also want to check your MB's documentation on PCI-e lane availability, as some SATA ports can be disabled/sacrificed as various PCI-e slots are populated, especially if you are already running a pair of GPUs, etc.
 

jdcranke07

Honorable


So, unfortunately, my manual doesn't really say anything, at least not directly. I'm supplying links to pics I've taken of the manual, which is very bare-bones since it is "supposed" to cover six different variations of this same mobo.

https://imgur.com/q4XQDDT

https://imgur.com/2UI3bGa

To give a mental picture of my setup: my 840 EVOs are in RAID0 & they are the only drives connected to the SATA3_0 & SATA3_1 ports (not the SATA3 M0-M3 ports). None of the SATA2 or SCU ports are filled.

My PCIE add-in cards are as follows:
- Marvell RAID controller w/ four Seagate 2TB in RAID0 (PCIE slot 3)
- Intel 4port NIC (PCIE slot 4)
- Highpoint RAID controller w/ four Seagate 4TB in RAID5 (PCIE slot 7)

Now, the only reason the 2TB drives are in RAID0 was to increase capacity while transferring data to this machine from another. I intend to put this data on the RAID5 array, then swap that Marvell controller for another Highpoint so that both Seagate arrays ultimately end up in RAID5. However, I need to solve this current issue first.
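
For anyone checking the math on that migration, here's a minimal sketch of the usable-capacity arithmetic (nominal drive sizes from this thread; formatting overhead & the TB-vs-TiB difference are ignored):

```python
# Usable capacity for the two Seagate arrays (nominal TB).

def raid0_capacity(n_drives: int, size_tb: float) -> float:
    # RAID0 stripes data across every drive: full capacity, no redundancy.
    return n_drives * size_tb

def raid5_capacity(n_drives: int, size_tb: float) -> float:
    # RAID5 gives up one drive's worth of capacity to parity.
    return (n_drives - 1) * size_tb

print(raid0_capacity(4, 2.0))   # 8.0  -> four 2TB drives striped
print(raid5_capacity(4, 4.0))   # 12.0 -> four 4TB drives w/ parity
```

So the 8TB of striped data fits on the 12TB RAID5 array w/ room to spare.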

As far as I can tell, the PCIE slots connect directly to the CPUs & don't go through the chipset, unlike the SATA ports, but I don't see any notation in the manual stating that PCIe lanes get shared or reassigned based on which slots are occupied.

In the BIOS, I can see that all of my drives have been detected, & I have been disabling the M0-M3 & SCU ports on the mobo since I don't use them. Leaving the on-board Marvell controller enabled conflicts w/ my current Marvell PCIE controller, which is why I also disable the M0-M3 ports.

When I let the machine try to boot by itself, it reverts to the setup page & gives the option to go into the BIOS for setup.
After I configure the settings, it goes through all the PCI slot detection at POST code b2, but then it either goes to a black screen w/ a single blinking text cursor (CMD prompt style) or it freezes on the mobo logo page.

I have to hit the Reset button or manually shut down for it to come up again. However, it won't go into the BIOS unless I clear the CMOS; otherwise, it will freeze or hang on the black screen. There are no POST codes or beeps at all in either case.
 

jdcranke07

Honorable
Since there haven't been any more answers, I'm assuming I've run into a problem that not many people do. I've sent an email to Asrock Rack Support to see if there is any info I can gather from them. Since it's a holiday weekend, I don't expect an answer from them for a couple more days. Any further assistance would be greatly appreciated, since my Plex server & game servers will hopefully be running on this machine. Thanks in advance.
 

jdcranke07

Honorable
Okay, so far the Asrock guy just gave me an "Are you sure you got things plugged into the correct ports" email response & hasn't replied in two days to my follow-up explaining what I've said here.

Again, I'm guessing that either no one here knows or no one is taking the time to read & answer this question. Thanks for the assist.

Edit: Adding more tags to see if anyone else can assist.
 
Make sure the MB has the newest BIOS on it. Look at the MB guide to check whether the slots you're using share IRQs or memory ranges; on most new MBs, the first video slot and the first PCI slot will share. In the BIOS, turn off the serial and parallel ports to try freeing up an IRQ. On your add-in cards, see if they have a boot BIOS; if they do, it may be locking up the onboard SATA ports. See if the new cards have a "boot from this card" option you can turn off, to see if the MB SATA ports will then work.
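
(Rough context on why stacked boot ROMs cause trouble: a legacy BIOS copies each card's option ROM into a small, fixed shadow-memory window, and two RAID controllers plus a multi-port PXE NIC can overflow it. The sizes below are assumptions for illustration, not values measured from this board.)

```python
# Back-of-the-envelope option-ROM budget. All sizes are ASSUMED for
# illustration; real ROM sizes and the shadow window vary by firmware.

SHADOW_SPACE_KB = 128  # assumed free legacy expansion-ROM window

oprom_kb = {
    "HighPoint RAID controller": 64,    # assumed
    "Marvell RAID controller": 64,      # assumed
    "Intel quad-port NIC PXE": 4 * 16,  # assumed 16KB per port
}

used = sum(oprom_kb.values())
print(f"option ROMs request {used}KB of ~{SHADOW_SPACE_KB}KB")
if used > SHADOW_SPACE_KB:
    print("ROMs don't all fit -> BIOS can hang or drop boot devices")
```

Disabling PXE or the per-card boot ROMs frees that space, which is what the advice above is getting at.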
 

jdcranke07

Honorable


The BIOS is the latest version available. I turned off the serial ports, the onboard Marvell controller, the SCU ports, & PXE boot.

I will try putting the Highpoint controller in a different slot since it is in slot 7 (closest to CPU socket).

The manual doesn't say anything about IRQs, PCI lanes, sharing bandwidth between ports/slots, or anything of that sort. It is literally just a multi-motherboard bare-bones manual that tells you "this is this port/slot", "this is socket 'x'", "this is fan header 'y'", & then covers how to install the CPU & DRAM.

The add-in cards each have their own BIOS for configuring the RAIDs, but that's all. The only bootable disks I've had in the system are the two 840 EVOs in RAID0 & a USB drive.

I will get back w/ results on moving the Highpoint controller to a different slot in the morning. Thank you.
 

jdcranke07

Honorable


So, moving the Highpoint controller to slot 5 (3rd from top) has given me some improvement.

I'm no longer getting freezes on the Asrock logo screen, & I can successfully restart & load the BIOS w/ the RAID setting saved. It now recognizes that the SSDs are in a RAID.

The problem now is that even w/ the SSD RAID set as boot #1 (since it holds the OS), I still land on the CMD-prompt-looking screen w/ no text & a blinking cursor. Basically, it's the spot right before Windows would start loading.

I'm wondering if the number of drives is the problem, or if I'm changing a setting thinking it does one thing when it actually does another.

 

jdcranke07

Honorable
Was able to get a hold of Asrock Customer Support. They had me stop the CSM OpROMs from loading, & that revealed that my NIC was attempting to boot via PXE four times (once per port on the quad-port card). William found that the I340-T4 has been known to prevent disks from booting into Windows for some reason, possibly due to an outdated driver, but he wasn't sure.

I also found that my OS was corrupted to the point that I needed to just wipe & reinstall. Will provide updates as I go.
 
