Staying on the hardware topic, we moved on to talk about CPUs, GPUs, and APIs. Although today's CPU architectures push parallelism above all else, games have typically not taken advantage of multiple cores. I asked whether Arrowhead's upcoming titles will do more in this regard, and if so, where the biggest benefit is seen. If not, then why?
"We definitely think that parallel computing is the way to go," Stenmark says. "The PS3 goes beyond the current gaming PC with the Cell processor and pushes developers to use parallel computation models. What effects this will have on a game is hard to say; it really depends on the type of game. In an FPS, it might be used for more realistic physics, while an RTS might use it for dynamic vegetation. There is no limit, really; it all depends on what the game's focus is."
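Stenmark's point about parallel physics can be sketched in a few lines. The `Particle` struct, the slicing scheme, and the function names below are purely illustrative assumptions, not code from any Arrowhead title: the idea is simply that independent particles can be integrated on separate cores because each worker touches a disjoint slice of the array.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical sketch: one slice of the particle array per hardware thread.
struct Particle { float pos; float vel; };

void integrate_slice(std::vector<Particle>& ps, std::size_t begin,
                     std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        ps[i].pos += ps[i].vel * dt;   // each slice touches disjoint data: no locks needed
}

void integrate_parallel(std::vector<Particle>& ps, float dt) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    std::size_t chunk = (ps.size() + n - 1) / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = std::min(ps.size(), t * chunk);
        std::size_t end   = std::min(ps.size(), begin + chunk);
        if (begin < end)
            workers.emplace_back(integrate_slice, std::ref(ps), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}
```

The same pattern applies to Stenmark's other examples: any per-entity update with no cross-entity dependencies, such as vegetation animation in an RTS, splits across cores the same way.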
And what of GPU architectures?
"General-purpose programming on GPUs has gained tremendous ground over the past few years, and for good reason," Stenmark says. "There are great ways to compute complex particle systems, cloth, and so on, on the GPU, though there are problems with any computation that needs to send data back to the game code, due to the latency between the GPU and CPU. Another problem is the rapid development of GPUs; games are expected to run on as many as five different generations of shader models, so the newest generation's features can't be used for anything that is important to the gameplay."
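The readback latency Stenmark describes is commonly hidden by pipelining: the game consumes last frame's GPU result while the current frame's work is in flight, accepting one frame of lag instead of a stall. The sketch below is a CPU-only assumption of mine, with `simulate_gpu_pass` standing in for a real compute dispatch plus readback; no actual graphics API is involved.

```cpp
#include <future>
#include <numeric>
#include <vector>

// Stand-in for a GPU compute pass followed by a readback to system memory.
std::vector<float> simulate_gpu_pass(std::vector<float> in) {
    for (auto& v : in) v *= 2.0f;   // pretend particle update "on the GPU"
    return in;
}

// Double-buffered frame loop: game code always reads the *previous*
// frame's result, so it never blocks waiting on the in-flight pass.
float run_frames(int frames) {
    std::vector<float> data(4, 1.0f);
    auto pending = std::async(std::launch::async, simulate_gpu_pass, data);
    float consumed = 0.0f;
    for (int f = 0; f < frames; ++f) {
        std::vector<float> result = pending.get();  // one frame behind
        pending = std::async(std::launch::async, simulate_gpu_pass, result);
        consumed = std::accumulate(result.begin(), result.end(), 0.0f);
        // game logic uses `result` here while the next pass runs
    }
    return consumed;
}
```

This is also why GPU-computed effects tend to stay cosmetic, as Stenmark notes: a one-frame-stale result is fine for particles or cloth, but awkward for anything gameplay-critical.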
Looking at games and hardware from a bigger picture, which leads the way: the software or the GPU technology? To that question, Stenmark couldn't say for sure, as on the one hand there are games that push hardware to its limits, and on the other, popular titles are expected to run on a wide range of hardware. "With enough time and people, it is possible to achieve both, but smaller developers may never be able to use the newest and fastest hardware to its full potential," he notes, backing up his earlier comments about larger studios having the time and money to develop for a wider range of hardware.