Solved

How many drives can be connected to a SATA port?

April 3, 2011 5:53:05 PM

Hello, I'm new on here and not really up-to-speed with the SATA standard, so I wondered if I could ask the community some questions:

1. Does a "port" on a SATA controller/card mean the same as a "channel"?
2. How many drives can I connect per port?
3. How many ports does a typical system board have?
4. How many ports do SATA cards go up to these days? (I'm looking to build a NAS machine with 10 x 1TB drives).
5. Should I be worried about all the different versions of SATA standards or is there complete backwards/forwards compatibility?
6. Are there any SATA cards that provide RAID 5 capability? (Looking for recommendations)

Any answers, thoughts and comments on this would be appreciated.

Best solution

April 3, 2011 8:52:42 PM

1. Basically, yes.

2. Normally 1 drive per port. You can get "SATA multipliers" to attach multiple drives per port, but you do so at the cost of channel throughput. It's much more common to add SATA capacity via an adapter card that has ports on it rather than use SATA multipliers.
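To put a rough number on that throughput cost, here's a small sketch. The link and drive speeds are assumed round numbers for illustration, not figures from this thread:

```python
# Rough model of drives sharing one SATA link through a port
# multiplier. Speeds below are assumed round numbers, not specs.
LINK_MB_S = 300    # approx. usable bandwidth of one SATA II port
DRIVE_MB_S = 130   # assumed sequential speed of one drive

def per_drive_mb_s(n_drives, link=LINK_MB_S, drive=DRIVE_MB_S):
    """When all drives stream at once, each gets at most its own
    speed, capped by an equal share of the shared link."""
    return min(drive, link / n_drives)

for n in (1, 2, 4, 8):
    print(f"{n} drive(s): {per_drive_mb_s(n):.1f} MB/s each")
```

With these assumed numbers, two drives still run at full speed, but by four drives the shared link, not the drives, is the bottleneck.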

3. Depends on the board, but 6 or more is a fairly standard number. One of those is normally used for the optical drive.

4. The cheap adapters often have two ports; for more money you can buy adapters with up to 8 or even 16 ports.

5. SATA is designed to be backward compatible, and the port and drive will autonegotiate the fastest speed supported by both. Occasionally you get a combination of chipsets that don't play nice, so the drives usually have a jumper that forces them to a slower mode to bypass the negotiation phase.

6. Yes - but I don't have experience with them so I can't give you a recommendation. I will warn you, though, that a 10-drive RAID-5 array is generally not a good idea - the unrecoverable read error rate of a lot of drives means you're fairly likely to get into a situation where you lose data even with the redundancy. If you need volumes of that size you're a lot safer using RAID-6 (or even better - RAID 1+0 if you can afford it).
April 4, 2011 4:43:38 AM

Many modern SATA chips support multiple drives per port. My JMicron and Intel ICH10R controllers are just two that allow multiple drives to be connected when using a backplane or a multi-drive docking station. I can't remember how many they will detect off the same port, but as sminlal already mentioned, throughput is limited to the port's native speed.

You can, however, run a hardware-RAIDed pair of fast HDDs through a single port without hitting the SATA II limit.

April 4, 2011 9:35:29 AM

sminlal said:
I will warn you, though, that a 10-drive RAID-5 array is generally not a good idea


Couldn't agree more! 10 x nTB RAID 5 is looking for trouble.

1) You will literally need 10 drive bays to support this. You're going to need a big case!
2) Expanding that array is very difficult. You'd only get 1TB of extra capacity for every drive you added, regardless of its size!
3) If these are new drives: Why go for such a small size?
4) If these are old drives: 10 old drives in RAID 5 is even worse!
April 4, 2011 3:00:58 PM

How about this? Instead of asking if you can connect a bunch of wires together (and no offense, but your tech skills are seriously lacking), why not explain what you are trying to do?

There are big issues involved with lots of data - backup, power supply redundancy to name a couple.

You may be trying to go a certain direction because it's cheap - are you OK with losing the data on these drives?
April 5, 2011 12:33:48 AM

@ sminlal: Thanks for providing the most comprehensive, non-sarcastic, informative, unassuming and non-condescending answer to my query.
April 5, 2011 2:32:05 AM

Some of the other guys could perhaps have been more diplomatic, but they're not really wrong. If you don't have experience with RAID then creating a 10-drive RAID-5 array as your first project is a recipe for some real problems. Heck, it's a dangerous thing to do even if you're a seasoned veteran.
April 5, 2011 2:57:59 AM

Also, dedicated RAID-capable cards are expensive. Software RAID is possible using just the SATA ports from the motherboard chipset, but software RAID-5 is usually slow on writes. If you're planning to RAID more than 5 drives, I suggest you go with RAID 6 instead (2 drives' worth of parity instead of 1 for RAID 5). Or just get a NAS.
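To make the parity trade-off concrete, here's a quick sketch of usable space for a few common RAID levels (whole-drive accounting with identical drives; real arrays also lose a little space to metadata and filesystem overhead):

```python
def usable_tb(n_drives, drive_tb, level):
    """Usable space for a few common RAID levels, assuming
    identical drives; drive-level accounting only."""
    if level == "raid5":
        return (n_drives - 1) * drive_tb    # one drive's worth of parity
    if level == "raid6":
        return (n_drives - 2) * drive_tb    # two drives' worth of parity
    if level == "raid10":
        return (n_drives // 2) * drive_tb   # everything is mirrored
    raise ValueError(f"unknown level: {level}")

for level in ("raid5", "raid6", "raid10"):
    print(level, "->", usable_tb(10, 1, level), "TB usable")
```

For the 10 x 1TB build in question: RAID 5 gives 9TB, RAID 6 gives 8TB, and RAID 1+0 gives 5TB, so the extra safety of RAID 6 only costs one drive's worth of space.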
April 5, 2011 5:39:23 AM

zedzeecorp said:
3. How many ports does a typical system board have?
4. How many ports do SATA cards go up to these days? (I'm looking to build a NAS machine with 10 x 1TB drives).

It depends on the motherboard, and whether it supports disk arrays. Server motherboards can support 8 or more SATA hard disks, but desktop boards vary: some support 6, some 5, and some only 2.
April 6, 2011 3:26:54 PM

@ sminlal: There's just too much assumption, arrogance and an unbelievable amount of skip-reading on this post - it reminds me of school children that fail an exam because they didn't read the rules for answering the questions. This is rather disappointing - it's not exactly a welcoming post. Shame.

I usually deal with other storage technologies that are used in the corporate world, so my only 'crime' here is that I'm not up-to-speed (as I stated in the beginning) with the latest SOHO technologies and their (usual) pitfalls, despite again (clearly) stating what I'm trying to do.

From the few, non-damning, comments offered, I'm deducing that I should consider some of the following:

1. Keep the number of drives low but increase their capacity to achieve the same space requirements.
2. Reconsider the RAID level and think of hardware RAID.
3. SATA ports will control one drive only and multipliers are not a good idea, due to performance bottlenecks.
4. This project cannot be achieved very well unless I go for a professional solution or a pre-built system, i.e. pay more.


April 6, 2011 4:22:47 PM

ZZC - the fundamental problem with RAID-5 is that when a drive dies it depends on being able to successfully read every bit on every remaining drive in order to recover the data. Here's the rub: many hard drives are rated such that you can expect to get one unrecoverable read error for every 10^14 bits read. A storage system with 10TB of data has about 10^14 bits. Therefore, there's as much as even odds that you won't be able to read all the data successfully - if not, the RAID system won't be able to recover from a disk failure, and your data will be toast.

Changing the number of drives won't help, nor will using hardware RAID instead of software RAID. Choosing drives with a higher reliability spec will help, but even the highest-spec'd drives I've seen are rated at one error per 10^15 bits read, which still leaves you with as much as a 10% chance of losing all your data.

As I mentioned in my first post the solution is to use RAID-6, which employs two parity drives to eliminate any single unreadable sector as a fatal error. For an array of this size it's really the only way to go, short of RAID 1+0.
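sminlal's arithmetic can be checked in a few lines of Python, using an independent-error model and the 1e-14 / 1e-15 URE specs quoted above:

```python
import math

def rebuild_success_prob(data_bytes, ure_per_bit=1e-14):
    """Probability of reading every bit of the surviving drives
    without an unrecoverable read error (independent-error model)."""
    bits = data_bytes * 8
    # (1 - p)^n, computed in log space for numerical stability
    return math.exp(bits * math.log1p(-ure_per_bit))

# 10 TB of surviving data, consumer-grade 1-per-1e14 URE spec
p = rebuild_success_prob(10e12)
print(f"chance of a clean RAID-5 rebuild: {p:.0%}")   # roughly 45%

# enterprise-grade 1-per-1e15 spec
p_ent = rebuild_success_prob(10e12, ure_per_bit=1e-15)
print(f"with 1e-15-spec drives: {p_ent:.0%}")         # roughly 92%
```

So under these assumptions a 10TB RAID-5 rebuild on consumer drives really is close to a coin flip, which is exactly the "even odds" claim above.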
April 6, 2011 4:24:02 PM

It's unfortunate that you feel the thread has been unwelcoming.

I do think sminlal gave a good answer to your question, though.

Personally, I see little value in posting the same answer written slightly differently, so instead I felt I could contribute my knowledge and experience with your potential RAID configuration.

I think all the points you've deduced are spot on.
May 6, 2011 8:18:03 PM

Best answer selected by zedzeecorp.
June 22, 2013 11:16:50 AM

I am confused by this thread. On the first question: does the word "port" mean SATA connector?

I ask because I have 6 SATA connectors on my motherboard. If I connect 1 device per connector, Device Manager will not show 1 device per channel. It will show multiple devices on some channels and nothing at all on others. So evidently one connector does not mean one channel, as I thought it did. I have 6 SATA connectors, so I thought I had 6 channels, but that's not the case. Apparently other factors decide which devices are on which channels, regardless of which SATA connectors they are connected to. Perhaps someone with more knowledge than me on this subject can shed some light on it for me.
February 13, 2014 10:15:52 PM

On Question 1 (Does a "port" on a SATA controller/card mean the same as a "channel"?): based on the comments above, where some have discovered that a port might not be equivalent to a channel, these questions come to mind:

1a) If I have a NAS enclosure with more drive bays than I have SATA ports on my motherboard, what is the most effective way, especially considering throughput, to connect them all? Assume a 4-bay SATA enclosure and a motherboard with only 3 free SATA ports (probably 4 ports, with the 4th used by the CD/DVD writer/reader). Can I add a SATA card with, say, 4, 8 or 16 ports, and will it work alongside the ports on the motherboard? (My OS is Debian Linux, not Windows.)

1b) Throughput: since SATA III can run at 6 Gb/s (for others: SATA I, II and III are 1.5 Gb/s, 3 Gb/s and 6 Gb/s respectively, roughly 150, 300 and 600 MB/s of payload), is throughput a problem? How does it degrade? (E.g. Wi-Fi bandwidth is roughly halved with each additional user; what is the degradation with each additional SATA hard disk?) Perhaps with SATA III at 6 Gb/s, SATA multipliers are not an issue for some number of SATA disk drives.
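On the line-rate-versus-payload point: SATA uses 8b/10b encoding, so 10 bits on the wire carry 8 data bits, which is why the Gb/s figures don't divide by 8 straight into MB/s. A quick sketch of the conversion:

```python
def sata_payload_mb_s(line_rate_gb_s):
    """SATA uses 8b/10b encoding: 10 line bits carry 8 data bits,
    then divide by 8 again to go from data bits to bytes."""
    return line_rate_gb_s * 1000 * (8 / 10) / 8

for gen, rate in (("SATA I", 1.5), ("SATA II", 3.0), ("SATA III", 6.0)):
    print(f"{gen}: {rate} Gb/s line rate -> "
          f"{sata_payload_mb_s(rate):.0f} MB/s payload")
```

That gives 150, 300 and 600 MB/s respectively, which is the ceiling any multiplier-attached drives would have to share.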

2) There are homemade SANs with more hard disks than any home user will ever need. The link here uses custom Linux software to work with up to 45 SATA drives (67 TB) at about $117,000 per petabyte, which is dirt cheap for SANs (http://blog.backblaze.com/2009/09/01/petabytes-on-a-bud...).
2a) Here is the 5-bay barebones case for only $29.99 from Newegg that I plan to use; I purchased two of them: http://www.newegg.com/Product/Product.aspx?Item=N82E168...

I would love some suggestions for the following (brand + model and price if known):
A SATA III adapter card.

A good source/reference on the SATA and power cables needed to hook up SATA drives to either a motherboard and/or a SATA III adapter card.

A good source/reference on how others have configured multiple processor boards, multiple SATA cards and multiple SATA drives under Linux, turning it all into a NAS priced acceptably for home use. I noticed that the Petabytes-on-a-budget link
(http://blog.backblaze.com/2009/09/01/petabytes-on-a-bud...)
used 45 SATA drives with 9 motherboards and 4 SATA PCIe cards (1 adapter card with 4 SATA ports and 3 adapters with 2 SATA ports each). I know the company used their own custom Debian Linux setup to get it all to work; however, the placement of the SATA cards away from all the motherboards has me scratching my head as to how they got it all to work together (HTTPS + JFS)... so a site with a how-to to point me in the right direction would be excellent. Anyone?


For others, here is an excellent discussion of SATA and SANs. Full disclosure: the blog poster works for Oracle, so expect an Oracle bias; however, his discussion of SATA and RAID specifically is spot on and well worth the read. He echoes what others here say about being sure you want RAID and which RAID level you use, and also warns about hardware RAID in specific configurations. Here is the URL:
(http://www2.geeks.com/ata-sata-and-ssd-plus-tips-for-up...)

I sincerely hope the URLs provided will help others understand this much better, and I hope someone will be kind enough to point me in the right direction for my own custom home SAN on a budget. Hey, with a case for less than $30, I feel like I have a shot at doing it.