How We Tested: Benchmarking
We used a 5400 RPM, 2.5” conventional hard drive (rather than a more enthusiast-oriented SSD) specifically to slow down our test times and magnify any differences the AV products might be exerting on storage operations. In the same vein, this is also why we needed a higher caliber of timing tools. A simple stopwatch is too imprecise for several of these tests. Instead, we turned to Microsoft’s Windows Performance Toolkit. The need for this should be clear from the following Microsoft data (found at http://bit.ly/oOg71J):
| System Configuration | Manual Testing Variance | Automated Testing Variance |
|---|---|---|
| High-end desktop | 453,000 ms | 13,977 ms |
| Mid-range laptop | 710,246 ms | 20,401 ms |
| Low-end netbook | 415,250 ms | 242,303 ms |
We combined methodology suggestions from AVG, GFI, McAfee, and Symantec to arrive at our final test set as described below.
1. Install time. Using Windows PowerShell running with admin rights, we measured the installation time of LibreOffice 3.4.3 with this command:
$libreoffice_time = Measure-Command { Start-Process "msiexec.exe" -Wait -NoNewWindow -ArgumentList "/i .\libreoffice34.msi /qn" }
$libreoffice_time.TotalSeconds
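The /qn switch runs the MSI installer silently, so no dialog interaction inflates the timing, and -Wait keeps Start-Process (and therefore Measure-Command) blocked until msiexec exits, so the elapsed time covers the entire installation.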
2. Boot time. We used the Windows Performance Toolkit’s xbootmgr and xperf tools to measure the time elapsed across five looping boot cycles. Our score is the mean of the five cycles. Our command was:
xbootmgr -trace boot -prepSystem -numRuns 5
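The -prepSystem switch has xbootmgr run its preparatory boot-optimization passes before the measured cycles, so Windows’ own boot tuning (prefetch/ReadyBoot training) has settled down by the time the five scored runs begin.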
3. Standby time. We used the Windows Performance Toolkit’s xbootmgr and xperf tools to measure the time elapsed across five looping standby cycles. Our score is the mean of the five cycles. Our command was:
xbootmgr -trace standby -numRuns 5
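For both the boot and standby tests, the per-cycle times still have to be rolled up into a single score. Here is a minimal PowerShell sketch of that averaging step, assuming the five per-cycle durations have been copied, in milliseconds, into a plain text file; the file name and format are hypothetical, not something xbootmgr produces on its own:
# cycle_times.txt (hypothetical): one per-cycle duration in ms per line
$times = Get-Content .\cycle_times.txt | ForEach-Object { [double]$_ }
($times | Measure-Object -Average).Average   # mean across the five cycles, in ms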
4. Synthetic performance. For our only conventional benchmark in this group, we used PCMark 7 to illustrate performance across a range of everyday computing tasks.
5. Page loads. We selected a set of element-dense pages and used HttpWatch to measure their load times in Internet Explorer 9.
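Timing each load by hand would bring back the stopwatch problem, so the runs can be scripted through HttpWatch’s COM automation interface. The sketch below is illustrative only: the ProgID, member names, and URL are assumptions based on our reading of the HttpWatch automation API, so verify them against the documentation for your version.
$controller = New-Object -ComObject "HttpWatch.Controller"   # assumed ProgID
$plugin = $controller.IE.New()                # open a new IE window with the plug-in attached (assumed API)
$plugin.Record()                              # start capturing
$plugin.GotoURL("http://www.example.com/")    # placeholder URL; substitute each test page
$controller.Wait($plugin, -1) | Out-Null      # block until the page has finished loading
$plugin.Stop()
$plugin.Log.Save(".\page_load.hwl")           # load times can then be read from the saved log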
6. Scan time. This time, a simple stopwatch would do, although most AV vendors display their scanning run times within the application. Given the time scale involved, we felt confident simply using these rougher tools. Because many vendors cache scanned files, we’ve broken out data for the first full scan and a mean value of three subsequent scans. The test system was rebooted between each scan.