Love it or hate it, JavaScript is rapidly being adopted by developers to improve the usability of the websites they build. Many high-profile websites use it to perform asynchronous requests after the page has loaded, and Google has realised that simply reading static content is no longer adequate for providing the most relevant search results to users.
Google has published a post on the Webmaster Central Blog explaining the new functionality and how to take advantage of it. In order to generate accurate instant previews in search results and to correctly index all content on a page, Googlebot will now execute JavaScript code blocks that send an XMLHttpRequest. While previous workarounds (such as the hashbang hack used by Twitter and Facebook) relied on carefully crafted GET requests to provide a non-AJAX snapshot of a page to search engines, Googlebot can now perform both GET and POST AJAX requests itself. According to the blog post, only automatic requests are currently executed, not those triggered by user action.
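To make the distinction concrete, here is a minimal sketch (the URLs and element IDs are hypothetical, not from Google's post): the first request fires automatically on page load and is the kind Googlebot will now execute, while the second is only sent in response to a user click and currently would not be.

```javascript
// Hypothetical example: an "automatic" AJAX request fired on page load.
// Requests like this are now executed by Googlebot when it indexes the page.
window.onload = function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/comments.json', true); // could equally be a POST request
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById('comments').innerHTML = xhr.responseText;
    }
  };
  xhr.send();
};

// By contrast, a request triggered by user action is not currently executed:
document.getElementById('load-more').onclick = function () {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/comments/more', true);
  xhr.send('page=2');
};
```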
Just before the announcement, Digital Inspiration noticed that Facebook comments on TechCrunch were being indexed. Prior to Googlebot's new functionality, searching Google for the content of those comments wouldn't have returned the article they were attached to. These changes should also allow content from other third-party commenting platforms, such as Disqus, to be indexed properly.
Note that restricting search engines' access to any of the files needed to perform the request, such as an external JavaScript file or the resource that the request actually fetches, will break Googlebot's ability to index the content. Developers and website owners will need to ensure that their robots.txt is configured correctly and that they haven't left a stray robots meta tag blocking search engines from any required file.
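As a rough sketch of the kind of configuration that causes this problem (the paths here are made up for illustration), a robots.txt that disallows either the script that makes the request or the URL it fetches will stop Googlebot from ever seeing the dynamically loaded content:

```
# Hypothetical robots.txt rules that would break indexing of AJAX-loaded content
User-agent: *
Disallow: /js/            # blocks the external JavaScript that sends the request
Disallow: /comments.json  # blocks the resource fetched by the XMLHttpRequest
```

Removing rules like these (or scoping them so they don't cover the required files) lets Googlebot fetch everything it needs to render and index the page.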