Once the array has been created (which in your case will appear to the OS as one big 1.5TB drive), you can partition it as normal.
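For reference, RAID5 usable capacity is (n-1) drives' worth, since one drive's worth of space goes to parity. A quick sketch, assuming your 1.5TB volume comes from six 300GB disks (the drive count is my assumption from the numbers):

```python
def raid5_usable_gb(num_drives: int, drive_gb: int) -> int:
    """Usable RAID5 capacity in GB: one drive's worth is lost to parity."""
    return (num_drives - 1) * drive_gb

# Six 300GB drives -> the 1.5TB (1500GB) volume mentioned above.
print(raid5_usable_gb(6, 300))  # 1500
```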
More importantly, why are you putting 10k/15k drives (assuming that's what those 300GB disks are) into RAID5?
Those higher-RPM drives are normally used for high-IOPS workloads involving small writes, and RAID5 or 6 is a complete performance disaster under that kind of usage, not to mention the long rebuild times after a failure. RAID10 is what's normally used with those disks for redundancy. If you just wanted higher sustained throughput (a video production server, perhaps), then striping a bunch of high-density 7200rpm drives would have made more sense at a much lower $/GB.
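To put numbers on why RAID5 hurts small writes: each small random write costs roughly 4 disk I/Os on RAID5 (read old data, read old parity, write new data, write new parity) versus 2 on RAID10 (one write per mirror half). A back-of-envelope sketch, with a hypothetical six-drive array and an assumed ~180 IOPS per 15k drive:

```python
def effective_write_iops(num_drives: int, drive_iops: int, write_penalty: int) -> float:
    """Aggregate random small-write IOPS, given a RAID write penalty."""
    return num_drives * drive_iops / write_penalty

drives, per_drive_iops = 6, 180  # hypothetical figures
raid5 = effective_write_iops(drives, per_drive_iops, 4)   # read-modify-write: 4 I/Os
raid10 = effective_write_iops(drives, per_drive_iops, 2)  # mirror: 2 I/Os
print(raid5, raid10)  # 270.0 540.0 -> RAID10 gets double the write IOPS
```

The exact numbers are illustrative, but the 2x gap from the write penalty alone is why RAID10 is the default for small-write, high-IOPS workloads.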
These days, with cheap $/GB 7200rpm disks, parity RAID is mostly associated with nearline data archiving instead, as there are many performance and integrity downsides to using it without a proper implementation.
Even the 512MB cache on my SAS RAID controller can be filled in an instant at around 400MB/s, and then it's back to RAID5 write-hole hell, assuming a small-write pattern such as a database (the very workload those 10k/15k drives suit). Even with today's controllers capable of calculating parity at over 500MB/s sequential write, there are still many performance and integrity caveats to traditional RAID5 or 6.
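The read-modify-write behind that write hole is easy to see in the parity math itself. A small write can't just overwrite one block; the controller must first read the old data and old parity, because the new parity is the old parity XORed with the old and new data. A minimal sketch with a hypothetical 3-data-disk stripe:

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together (RAID5 parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Hypothetical stripe: three 4-byte data blocks plus their parity block.
d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\x0a\x0b\x0c\x0d"
parity = xor_blocks(d0, d1, d2)

# Small write to d1: the controller reads old d1 and old parity first,
# then computes  new_parity = old_parity ^ old_data ^ new_data.
new_d1 = b"\xff\xee\xdd\xcc"
rmw_parity = xor_blocks(parity, d1, new_d1)

# The shortcut must match recomputing parity over the whole stripe.
assert rmw_parity == xor_blocks(d0, new_d1, d2)
```

If power fails between the data write and the parity write, the two are left inconsistent; that window is the write hole, and it's why battery-backed or flash-backed cache matters on parity RAID controllers.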
Unless the admin is a complete numbnut, you will not find any high-performance servers requiring high IOPS or relatively high disk throughput running parity RAID now.
The whole explanation is much longer, but in the end, RAID using parity for redundancy (including software implementations) is, in today's market, mostly suitable for nearline storage, and that's the domain of cheaper $/GB 7200rpm drives.