They are not necessarily faster in real-world situations. There is a big problem with synthetic benchmarks: they are called synthetic for a reason. They are not real. They were specifically designed to grossly exaggerate minor differences between SSDs, and they are very misleading. Companies like to manipulate the benchmark settings so that their SSDs are presented in the most favorable light. As a result, over 90% of synthetic benchmarks do not represent real-world use. Consider them an advertising gimmick.
A common benchmark used in SSD advertising is Iometer, which was originally developed by Intel in 1998 and has since become an open source project. The benchmark measures input/output operations per second, or "IOPS". It is used two ways on the enterprise side of the market. First, it is used to determine data storage requirements for network servers: typically Iometer is installed, settings are adjusted, and the benchmark is set to run for several days. Second, it is used to generate a series of workloads that measure the maximum IOPS an SSD is capable of. The combined results serve as a guide for data storage planning. For example, a company with a server and 1,000 desktop PCs would set up a data drive array for 100 of the PCs and measure the IOPS during actual real-world use for an entire week. The results would be used to determine the data drive array and IOPS requirements for the server that fed all 1,000 PCs.
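The extrapolation described above is simple arithmetic: measure a sample of clients, scale to the full fleet, and add a safety margin. A minimal sketch, where the sample size, measured IOPS total, and headroom factor are all hypothetical example values:

```python
# Hypothetical extrapolation of server IOPS requirements from a sampled
# subset of PCs, as in the planning scenario described above.

def required_server_iops(measured_iops: float, sample_pcs: int,
                         total_pcs: int, headroom: float = 1.25) -> float:
    """Scale aggregate IOPS measured on a sample of PCs to the full fleet.

    headroom adds a safety margin (25% here, an arbitrary example value).
    """
    per_pc = measured_iops / sample_pcs
    return per_pc * total_pcs * headroom

# Example: suppose the 100 sampled PCs generated a combined 8,000 IOPS
# during the test week (a made-up figure). Scaled to 1,000 PCs:
print(required_server_iops(8000, 100, 1000))  # 100000.0
```

The headroom factor is the planner's judgment call; real capacity planning would also account for peak-versus-average load, which is exactly why the sample runs for a full week rather than a few minutes.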
Early on, Intel actually conducted research to determine IOPS requirements for consumer SSDs. Intel determined that an SSD capable of 20,000 IOPS was just about right for consumers. AnandTech, a well respected website that publishes technical reviews of SSDs, also suggested 20,000 IOPS as a practical upper limit. Results posted by actual consumers confirmed Intel's and AnandTech's estimates. The big surprise was AnandTech's gaming workload tests. Two thirds of the SSDs that were benchmarked fell into a very tight performance cluster, with measurements between 309 IOPS and 325 IOPS. That was the average IOPS during use, not the burst or peak IOPS. Other reports indicate the peak IOPS, which lasts for only a very brief period of time, is about 4,000.
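To put those figures in perspective, a quick back-of-the-envelope comparison using only the numbers cited above:

```python
# Compare the measured gaming workload figures against the ~20,000 IOPS
# suggested by Intel and AnandTech as a practical consumer upper limit.

CONSUMER_LIMIT = 20_000            # suggested consumer ceiling
AVG_GAMING = (309 + 325) / 2       # midpoint of the measured cluster
PEAK_GAMING = 4_000                # reported brief peak

print(f"average gaming load: {AVG_GAMING / CONSUMER_LIMIT:.1%} of the limit")
print(f"peak gaming load:    {PEAK_GAMING / CONSUMER_LIMIT:.1%} of the limit")
```

Even the brief peak is only a fifth of the suggested ceiling, and the sustained average is under 2% of it, which is the gap the advertised maximum-IOPS numbers gloss over.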
Beginning in 2009, articles and white papers were published indicating the IOPS benchmarks were being abused. Company advertising aimed at the emerging consumer market focused on maximum IOPS instead of the actual requirements for optimal performance. As with other benchmarks, there are numerous settings. SSD manufacturers adjusted the settings and quoted a best-case IOPS figure. None of them ever quoted a worst-case or well-balanced scenario. In addition, the SSD manufacturers failed to disclose enough information to explain how the benchmarks were conducted. In that respect the IOPS figures are useless, and the advertising does not always reflect real-world performance.
With the emergence of modern third-generation solid state drives, all of the current top-rated SSDs are capable of IOPS far beyond the needs of multi-tasking power users and hardcore gamers. In fact, modern SSDs would be ideal for systems running several virtual PCs.