This is purely out of curiosity. How do data centers and people with servers link their many machines so that they can operate as a single unit? I always see tons of Ethernet cables spewing out of the rear panels of servers. Does it require multiple gigabit LAN ports?
Latency and bandwidth are huge factors for the "Cloud".
I'm not an expert in the area but I've done my own research, so I may use some wrong terminology.
There are a few ways "Cloud" computing works or is used, and I may not cover them all.
Failover: One server dies/fails, so another picks up where it left off. There are several degrees to which this can be done, depending on the application and usage.
Many databases have some sort of failover, which will usually result in losing your current work, but any new work will be transparently started up on a different server. This can show up as a web page failing to load; a quick refresh shows the page correctly again because you were transparently switched to a different server.
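The client side of that kind of failover can be sketched in a few lines: try each replica in order and take the first one that answers. This is a minimal sketch, assuming a hypothetical connect() stand-in for a real database driver; the server names are illustrative.

```python
def connect(server, down_servers):
    """Hypothetical stand-in for a real database driver's connect()."""
    if server in down_servers:
        raise ConnectionError(f"{server} is unreachable")
    return f"connection to {server}"

def connect_with_failover(servers, down_servers=()):
    """Try each replica in order; the first healthy one wins."""
    for server in servers:
        try:
            return connect(server, down_servers)
        except ConnectionError:
            continue  # this server died: fall through to the next replica
    raise ConnectionError("all replicas are down")
```

From the user's point of view this is the "quick refresh" above: the retry simply lands on a different server.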
A more complicated failover is what you might see with high-end VMware virtual machines. They can literally keep more than one machine in sync, so if the current master dies, the slave can instantly take over and the end user would never know a server went down.
Both of these situations require lots of communication between the servers to remain in sync, so you can see why 1 Gbit or even 10 Gbit connections are important.
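Part of that constant chatter is simply deciding whether the other machine is still alive. A toy failure detector, assuming each node records the timestamp of its last heartbeat message (this is my own simplification, not how VMware specifically does it):

```python
import time

def is_alive(last_heartbeat, now=None, timeout=3.0):
    """A node is considered alive if it has sent a heartbeat recently.

    last_heartbeat: timestamp of the node's most recent heartbeat.
    timeout: seconds of silence before we declare the node dead.
    """
    now = time.time() if now is None else now
    return (now - last_heartbeat) <= timeout
```

The shorter the timeout, the faster the standby takes over, but the more heartbeat traffic (and the more risk of falsely declaring a busy node dead), which is one reason fast links matter.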
Virtual Machines: Virtual machines run inside a host system but aren't tied to it. One of the advantages is that you can "migrate" them from physical server to physical server.
An example: a server has 10 virtual web servers on it. Over a short period of time there's a large increase in web traffic and the server is getting overrun. You could manually move some of these virtual web servers to a different physical server, or script the migration, to help balance the load. The end user wouldn't see any interruption in their service, even while actively transferring a file.
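The "script the migration" part boils down to deciding which VMs to move. A rough sketch, under my own assumption that each VM's load can be expressed as a single number and we migrate the busiest VMs until the host drops below a target:

```python
def pick_migrations(vm_loads, target):
    """Pick VMs to migrate off an overloaded host, busiest first.

    vm_loads: dict mapping VM name -> load (e.g. percent CPU).
    target: keep migrating until the host's total load is <= target.
    Returns the list of VM names to move.
    """
    total = sum(vm_loads.values())
    to_move = []
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        if total <= target:
            break
        to_move.append(vm)
        total -= load
    return to_move
```

Real schedulers weigh memory, network, and the cost of the migration itself, but the shape of the decision is the same.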
Load balancing: In some cases, especially databases, you can load balance requests. In situations where most requests are reads, a load balancer can redirect requests to different servers depending on their individual loads.
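The routing rule itself can be very simple. A sketch of the read/write split described above, with illustrative server names: writes always go to the primary, reads go to whichever replica currently has the least load.

```python
def route(request_type, primary, replica_loads):
    """Route a request: writes to the primary, reads to the idlest replica.

    replica_loads: dict mapping replica name -> current load.
    """
    if request_type == "write":
        return primary
    return min(replica_loads, key=replica_loads.get)
```

Production balancers track load continuously and handle replicas dropping out, but this is the core idea.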
Cluster File Systems: A clustered file system is somewhat similar to a P2P system, except you trust every node and it's not a tug-of-war to see who can download the fastest. The nice thing about clustered file systems is that they usually scale well for performance, spread out the load, have no single point of failure, and are highly resistant to hardware dying and data loss.
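The "no single point of failure" property comes from replication: each chunk of a file lives on several nodes. A toy placement scheme (my own simplification; real systems use smarter, rack-aware placement), assuming replicas never exceeds the node count so copies land on distinct machines:

```python
def place_chunks(num_chunks, nodes, replicas=3):
    """Assign each chunk to `replicas` distinct nodes, round-robin style.

    Returns a dict: chunk index -> list of nodes holding a copy.
    """
    placement = {}
    for chunk in range(num_chunks):
        placement[chunk] = [nodes[(chunk + r) % len(nodes)]
                            for r in range(replicas)]
    return placement
```

With three copies of every chunk, any single node can die without losing data, and reads can be spread across whichever replicas are least busy.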
Distributed Computing: Some systems are designed to segment workloads into pieces and split these work units among many computers. I'm assuming you know the rest of this one.
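Splitting work into units and recombining the results can be sketched in a few lines. Here the "work" is just summing numbers and the "workers" are loop iterations, but the same fan-out/fan-in shape is what a real distributed system does across machines:

```python
def split_work(items, num_workers):
    """Deal items out to workers round-robin, one slice per worker."""
    return [items[i::num_workers] for i in range(num_workers)]

def run_distributed(items, num_workers, work=sum):
    """Run `work` on each unit and combine the partial results."""
    partials = [work(unit) for unit in split_work(items, num_workers)]
    return sum(partials)
```

In a real deployment each unit would be shipped over the network to a different machine, which is another place those gigabit links earn their keep.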
Hybrids: Many larger services, like Amazon's S3, are hybrids of several of these designs, which lets them scale well and be highly resistant to failure.