If you’ve seen the movie Contact, you know the gist of the SETI program. The Search for Extraterrestrial Intelligence uses radio astronomy to scan the skies for radio signals that, by their nature, must have come from intelligent life beyond the Earth. Raw data is gathered in a 2.5 MHz-wide band and streamed back to SETI@home’s home base at UC Berkeley. As the movie suggests, most if not all such radio data is simply random noise, like static against the cosmic background. The SETI@home software performs signal analysis on this data, scouring the bits for non-random patterns, such as pulsing signals and power spikes. The more floating-point computing power available to process the data, the wider the spectrum and the more sensitive the analysis can be. This is where the parallelism of multi-threading and CUDA pays off.
Berkeley workers divide the raw data into single-frequency work units of about 0.35 MB each, representing 107 seconds of observation. The SETI@home server then doles out work units to home computers, which typically run the SETI@home client as a screen saver application. When SETI@home went live in May 1999, the goal was to combine the collective power of 100,000 PCs. Today, the project boasts over 300,000 active computers across 210 countries.
In benchmarking SETI@home, one needs a consistent work unit in order to get reliable results. We only found this out after hours of receiving nonsensical results. It turns out that the Nvidia performance lab had been preparing special scripts and batch files for testing SETI@home. These run from a command line, not the usual graphics-rich, eye-candy interface. Nvidia sent us the needed files, and a much clearer performance picture emerged.