The next drive generation will offer capacities of 200 GB per drive for the first time ever - a size that would have seemed flabbergasting just two years ago.
Apart from this, there are two primary factors of interest in a hard drive's performance: the access time and the data transfer rate; both are fairly self-explanatory. Access time specifies how long it takes from the moment a request is issued to the drive until the desired data is actually read. Because a hard drive must continuously address new sectors on the disk surface in day-to-day operation, this factor becomes even more important, especially when small amounts of data are read or written. The shorter, the better. Seek time, by the way, is a term commonly used at computer stores - it covers only the movement of the read/write heads, not the complete access time, and is therefore usually noticeably shorter. So unless you want to end up with an apples-to-oranges comparison, you should take a close look at which figure is actually being quoted.
The second factor is the data transfer rate. It mostly depends on the data density (expressed as storage capacity per unit of surface area, or simply in GB per platter) and the rotational speed of the medium. The closer together the data are packed and the faster they pass the read/write heads, the more data can be read or written per unit of time.
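That relationship can be put into a rough formula: the sustained media transfer rate is approximately the amount of data on one track multiplied by how often that track passes under the head each second. The sketch below illustrates this with a hypothetical track capacity of 500 KB; real drives vary the sectors per track across the platter, so this is a simplification, not a datasheet figure.

```python
def media_transfer_rate_mb_s(track_capacity_kb: float, rpm: int) -> float:
    """Rough sustained transfer rate: data per track times revolutions per second.

    track_capacity_kb is an assumed, illustrative value - real drives use
    zoned recording, so track capacity differs between inner and outer tracks.
    """
    revolutions_per_second = rpm / 60
    return track_capacity_kb * revolutions_per_second / 1024  # KB/s -> MB/s

# Hypothetical example: a 500 KB track spinning at 7,200 RPM
rate = media_transfer_rate_mb_s(500, 7200)
print(f"{rate:.1f} MB/s")  # roughly 58.6 MB/s
```

The same arithmetic shows why both higher data density (a bigger track) and higher spindle speed (more revolutions per second) raise the transfer rate.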
In reality, the rotational speed also has a considerable impact on access times, because higher speeds slash the latency that precedes the actual access. Having as much buffer storage as possible (known as the "cache") is yet another performance booster. Cache today generally has a size of 2 MB, but you'll also find models in the IDE sector with up to 8 MB. Last but not least, let's talk a bit about the hard drive electronics. They are responsible for executing all accesses, and their strengths and weaknesses become particularly obvious when several accesses must be handled simultaneously.
Unlike many other system components, a hard drive's performance cannot be increased by tuning or "overclocking" it. You can alter the seek behavior with small software tools, but this is primarily a means of reducing operating noise; defragmenting the data, on the other hand, can genuinely increase efficiency.