I have a range of machines running with SSDs. Every single one has failed when the swapfile has been on the drive, and I've had to repair it with an offline chkdsk to mark the locked sectors as bad.

Let's be clear about the situation: SSDs have very intelligent caching and wear-management algorithms, but the swapfile is the one workload that can overwhelm them.
MLC NAND (used in almost all consumer SSDs) has a maximum program/erase (P/E) cycle count of around 4,000. In the context of a swapfile, that is nothing.

SLC NAND, on the other hand, is rated for around 200,000 cycles. However, the only place you will find SLC NAND, outside of hugely expensive enterprise drives, is in SSHD hybrid drives.
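To see why 4,000 cycles "is nothing" for a swapfile, here is a back-of-envelope calculation. The region size and daily write volume are my own illustrative assumptions, not figures from any spec sheet, and it deliberately ignores wear leveling to show the worst case of writes confined to one area:

```python
# Worst-case lifetime of a fixed NAND region under swapfile traffic.
# All figures below are assumptions for illustration only.
MLC_PE_CYCLES = 4_000      # typical MLC program/erase endurance
SLC_PE_CYCLES = 200_000    # typical SLC program/erase endurance
REGION_GB = 4              # assumed swapfile footprint on the drive
DAILY_WRITES_GB = 20       # assumed daily swap traffic

overwrites_per_day = DAILY_WRITES_GB / REGION_GB      # full rewrites/day
mlc_days = MLC_PE_CYCLES / overwrites_per_day
slc_days = SLC_PE_CYCLES / overwrites_per_day

print(f"MLC region exhausted in ~{mlc_days:.0f} days (~{mlc_days/365:.1f} years)")
print(f"SLC region exhausted in ~{slc_days:.0f} days (~{slc_days/365:.1f} years)")
```

With these assumed numbers the MLC region is worn out in a couple of years, which is roughly the failure window described above, while SLC would outlast the machine.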
I know for a fact that I'm unusual. Most of my machines, including laptops, run either 24 hours a day or extended hours; they are not 3/4/5/8-hours-a-day machines. As such, the extensive swapfile writes Windows performs kill my SSDs in around 18 months.

Since I moved the swapfiles to other drives, the issue has mostly gone away. The exception is the OCZ 240 GB boot drive in my Alienware laptop, which I have just replaced at 26 months.
Drive manufacturers now clearly state the rated write endurance of their drives. My latest OCZ ARC drive is rated for 20 GB of unique writes per day over a 5-year lifecycle.
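That "20 GB/day for 5 years" rating can be converted into a total-bytes-written figure, which is how endurance is often compared between drives. A quick sketch of the arithmetic:

```python
# Endurance implied by a "20 GB/day for 5 years" rating.
GB_PER_DAY = 20
YEARS = 5

total_gb_written = GB_PER_DAY * 365 * YEARS
total_tb_written = total_gb_written / 1000   # decimal TB, as vendors quote it

print(f"~{total_gb_written} GB, i.e. ~{total_tb_written} TBW")
```

So the rating works out to roughly 36.5 TB written over the drive's life; a heavy swapfile alone can eat a meaningful slice of that daily budget.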
My alternative option for my netbook, which only has one drive slot, is this:
http://www.ebuyer.com/602205-wd-black-dual-drive-1tb-hdd-120gb-ssd-wd1001x06xdtl
But it's still a bit pricey for what I want to pay; it's more than half the price of the original netbook. As my netbook takes 8 GB of RAM, I'm flirting with upgrading it to 8 GB, putting in a smaller SSD, and placing the swapfile on a tiny RAM drive (circa 512 MB). The RAM drive won't be for swapping when memory is exhausted, but to stop Windows creating a temporary swapfile when you tell it not to use one. Which it does.
As I understand it, the reality of an SSD is this: it writes to new areas until it has written to every location on the drive, then it starts re-using erased cells on a least-used basis. Simply put, if you use a small SSD for the C: drive, dump a lot of non-changing data on it (apps, etc.) and keep highly volatile files (the swapfile) on it, you will hammer certain segments of the drive whilst leaving others almost pristine. The result is a failing drive where the majority of the drive is good, but the segments actually in use are becoming blocked.
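The effect described above can be shown with a toy model. This is a deliberate simplification, not how any real controller works: it assumes static data pins its blocks forever, so only the remaining free pool rotates under swapfile-style rewrites (so-called dynamic wear leveling without static wear leveling):

```python
# Toy model of dynamic-only wear leveling.
# Hypothetical sizes; real SSD controllers are far more sophisticated.
TOTAL_BLOCKS = 1_000
STATIC_BLOCKS = 800        # apps/OS data: written once, then never erased
DYNAMIC_WRITES = 100_000   # swapfile-style rewrites over the drive's life

erase_counts = [0] * TOTAL_BLOCKS
free_pool = list(range(STATIC_BLOCKS, TOTAL_BLOCKS))  # only 200 blocks rotate

for _ in range(DYNAMIC_WRITES):
    # Each rewrite goes to the least-erased block in the free pool;
    # blocks holding static data never re-enter the pool.
    block = min(free_pool, key=lambda b: erase_counts[b])
    erase_counts[block] += 1

print("max erases on static blocks:", max(erase_counts[:STATIC_BLOCKS]))
print("erases on each free-pool block:", min(erase_counts[STATIC_BLOCKS:]))
```

With 80% of the drive pinned by static data, the 100,000 rewrites pile 500 erase cycles onto each of the 200 rotating blocks while the static blocks sit at zero, which is exactly the "most of the drive pristine, the working segments dying" pattern described above.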
Some day, I guess, drives are going to have to become even more intelligent and move non-changing data into the most-worn locations to stop those locations from failing.

In the interim, people like me, who have high write volumes on drives with large segments of non-changing data, will need to approach it in a more intelligent way.