Internet is going to be more expensive than you think. Yes, in Kansas City (and for the next several years, ONLY in Kansas City) you can get a 1 Gb/s connection from Google Fiber, but it's not really considered a dedicated line, so there won't be an SLA on it like a true business line would have, which is what the kind of demands you're describing call for.
Our local internet provider here in my town doesn't even have a full 180 Mb/s dedicated line from their Tier 1 provider, and they provide internet to the entire town. The library I work at just started on a new 10 Mb/s dedicated fiber connection, and the cost for that averages $1,500 per month. The price of consumer and small-business options like U-verse (which has a much higher download speed than upload speed) and the price of a true dedicated connection are quite different, and a lot of that gap comes down to service level and the SLA.
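Just to put that in perspective, here's a rough cost-per-Mbps comparison. The dedicated numbers are the library figures above; the consumer tier and price are only assumed example values, not a quote.

```python
# Rough cost-per-Mbps comparison. The dedicated figures come from the
# library line mentioned above; the consumer figures are assumed examples.
dedicated_mbps, dedicated_cost = 10, 1500   # 10 Mb/s dedicated fiber, $/month
consumer_mbps, consumer_cost = 24, 65       # assumed U-verse-class tier, $/month

print(f"Dedicated: ${dedicated_cost / dedicated_mbps:.2f} per Mb/s per month")
print(f"Consumer:  ${consumer_cost / consumer_mbps:.2f} per Mb/s per month")
```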
As bdubs85 pointed out, just because you can put a whole bunch of GPUs in a computer doesn't mean your software can actually utilize them. Most of the time the only software capable of doing that is highly customized, and it's generally CAD, imaging, or high-precision calculation work like protein folding and mapping. I would bet the server software for handling game connections hasn't really been optimized to use the GPU. It's a different type of server load, and I don't think it would be well suited to the continuous, highly parallel calculations that GPUs do best.
The major problem lies in the scale you're talking about here. There's a big difference between 1,000 and 5,000 connections. Obviously it's more feasible to handle 1,000 connections from a smaller group of servers, which will require less networking hardware, less power, less cooling, and less internet bandwidth.
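As a very rough illustration of how the bandwidth side scales, here's a back-of-envelope estimate. The per-connection figure is purely an assumption (it varies a lot by game); you'd want to measure actual traffic before sizing a line.

```python
# Back-of-envelope aggregate bandwidth estimate. The per-player figure is
# an assumption for illustration only; real usage varies by game.
PER_CONNECTION_KBPS = 50  # assumed average upstream per connected player

for connections in (1000, 5000):
    total_mbps = connections * PER_CONNECTION_KBPS / 1000
    print(f"{connections:>5} connections ≈ {total_mbps:.0f} Mb/s sustained upstream")
```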
If this were just a few servers handling game connections for some friends, then I'd say the network configuration and security probably wouldn't be too big of a task. However, with the load you're talking about, security is going to be pretty important. I definitely don't have the knowledge and skill to properly secure that type of hosted service; it would take a professional to configure it and then check in on and maintain it as well.
Servers can get quite hot, especially when you are running multiple rackmount systems in a closed environment. No matter what brand or type of server you go with, heat is going to be a major factor to consider once you start working with multiple servers.
Let's assume for now that you're going to have a single 42U rack holding your servers, battery backups, and networking gear. First off, it's going to be pretty full depending on the configurations and types of servers you go with. For simplicity's sake, let's say you have ten servers (each handling 100 connections, for a maximum of 1,000 user connections). Each server is going to pull anywhere between 400 and 1,000 watts, again depending on configuration, and is going to put out a lot of heat. You'd definitely want the rack standing away from the corner of the room so you can get access all around it and allow airflow through it. But you're going to need very good air conditioning directing air straight at the rack, and it will have to run all year round. What's more, if the air conditioning fails, you'll have major heat issues to contend with, and your only option will be shutting everything down until the air conditioning is back up.
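To give a feel for the numbers, here's a rough power and heat sketch for that ten-server example. The wattage range is from above; the 3.412 BTU/hr-per-watt conversion is standard, but the real draw depends on the actual hardware and load.

```python
# Rough rack power and cooling estimate for the ten-server example above.
# Actual draw depends on the real hardware; these are illustrative bounds.
SERVERS = 10
WATTS_TO_BTU_PER_HR = 3.412  # 1 watt of draw ends up as ~3.412 BTU/hr of heat

for watts_each in (400, 1000):
    total_watts = SERVERS * watts_each
    btu_per_hour = total_watts * WATTS_TO_BTU_PER_HR   # heat the AC must remove
    kwh_per_month = total_watts * 24 * 30 / 1000       # rough monthly energy use
    print(f"{watts_each:>4} W/server: {total_watts} W total, "
          f"~{btu_per_hour:,.0f} BTU/hr of heat, ~{kwh_per_month:,.0f} kWh/month")
```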