Havok Announces Havok FX Software For Better Particle Effects In Games

The level of realism in today's games is something to marvel at. By combining real-world physics with improved graphics, scenes like a crumbling city or an exploding vehicle not only add to the scenery but capture players' attention, making them feel as if they're actually in the game. As companies continue to push the visual limits of their games, Havok has come out with its latest software, Havok FX, to meet those needs.

The new CPU-based software works with PC games as well as all of the major gaming consoles. It lets developers manipulate particle effects such as debris, shrapnel, smoke, and dust so that they react to changes in the environment. Developers can also control how these particle effects behave when they come into contact with an in-game character. The hope is that Havok FX can be used to create a more detailed and realistic environment for the player.
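Havok hasn't published details of the Havok FX API, so the following is only a rough sketch of the kind of CPU-side particle update described above: debris particles integrating gravity and wind and bouncing off a ground plane. Every name here is hypothetical, not Havok's.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    Vec3  pos;
    Vec3  vel;
    float life;  // seconds remaining before the particle is culled
};

// Hypothetical per-frame update: integrate gravity plus a wind force, then
// bounce debris off a ground plane at y = 0. Real middleware would also
// handle character collision shapes, sleeping particles, and batching.
void updateParticles(std::vector<Particle>& particles, Vec3 wind, float dt) {
    const float gravity     = -9.81f;
    const float restitution = 0.3f;  // fraction of energy kept per bounce
    for (Particle& p : particles) {
        p.vel.x += wind.x * dt;
        p.vel.y += (gravity + wind.y) * dt;
        p.vel.z += wind.z * dt;
        p.pos.x += p.vel.x * dt;
        p.pos.y += p.vel.y * dt;
        p.pos.z += p.vel.z * dt;
        if (p.pos.y < 0.0f) {  // hit the ground plane
            p.pos.y = 0.0f;
            p.vel.y = -p.vel.y * restitution;
        }
        p.life -= dt;
    }
}
```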

Havok also announced that the new software is already being used by Ubisoft Montreal, the developer behind Rainbow Six: Siege, the latest entry in the tactical shooter series. The team will pair Havok FX with Ubisoft's RealBlast procedural destruction engine so that when you breach a room, the smoke from a flashbang or the way a window shatters looks just a little bit more realistic.

With the game's release date set for October 13, players won't have to wait long to see the new software in action. Before then, Ubisoft should be showing more of the game during E3 in a few weeks, and we'll be there to see it firsthand.

Follow Rexly Peñaflorida II @Heirdeux. Follow us @tomshardware, on Facebook and on Google+.

  • utroz
    Sounds good as long as it doesn't have the "cripple non-Intel CPUs" function that only allows Intel CPUs to use the newer instruction sets (AVX, AVX2, SSE4, etc.).
    Reply
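The "crippling" utroz describes is a dispatcher that selects code paths by vendor string rather than by actual feature support. As a hedged sketch of the feature-based alternative (nothing here is known Havok FX behavior; the path functions are made up), using GCC/Clang's `__builtin_cpu_supports`:

```cpp
#include <cstdio>

// Hypothetical optimized code paths for a physics step.
void simulateAVX2() { std::puts("AVX2 path"); }
void simulateSSE2() { std::puts("SSE2 fallback"); }

// Feature-based dispatch: ask the CPU what it supports instead of checking
// the vendor string, so an AMD chip with AVX2 gets the fast path too.
void simulate() {
    if (__builtin_cpu_supports("avx2"))
        simulateAVX2();
    else
        simulateSSE2();
}
```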
  • alextheblue
    utroz said:
    Sounds good as long as it doesn't have the "cripple non-Intel CPUs" function that only allows Intel CPUs to use the newer instruction sets (AVX, AVX2, SSE4, etc.).

    Or like PhysX, where you intentionally cripple it in software mode for ALL CPUs by using deprecated x87 code! But no, Havok seems to be fairly platform agnostic. Their middleware has been in use on both PCs and consoles for ages, which these days means AMD, Intel, and PowerPC. I'd still like to see some benching just to be sure. :)
    Reply
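For readers wondering what the x87-versus-SSE complaint above amounts to in practice: x87 is scalar, stack-based floating point, while SSE processes four packed floats per instruction. A purely illustrative contrast (not PhysX source code):

```cpp
#include <immintrin.h>  // SSE intrinsics

// Scalar path: one float per iteration. Built for x87, each add also pays
// the FPU's register-stack shuffling; this is the "deprecated" mode.
void addScalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// SSE path: four floats per packed instruction. For brevity this sketch
// assumes n is a multiple of 4 and the pointers are 16-byte aligned.
void addSSE(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(va, vb));
    }
}
```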
  • alidan
    Can't wait to see benchmarks done on systems with AMD FX CPUs and AMD GPUs. I don't want to see what this physics does on Intel/Nvidia; they're already powerful and have their own physics stack. I want to see what it does with less power.
    Reply
  • Reepca
    "The new CPU-based software"

    Aren't particle effects kind of... capable of being done in parallel? Why wouldn't you seek GPU acceleration for these?
    Reply
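Reepca's point above is sound: each particle's update is independent, which makes the workload embarrassingly parallel. One plausible (purely illustrative) way a CPU-based Havok FX could still scale is with C++17 parallel algorithms, which fan the same loop across all cores; a GPU would exploit the identical property with far more lanes.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// Each element's update touches only that element, so the loop parallelizes
// trivially. Requires C++17 parallel algorithms (with GCC, link TBB: -ltbb).
void updateAll(std::vector<Particle>& particles, float dt) {
    std::for_each(std::execution::par_unseq,
                  particles.begin(), particles.end(),
                  [dt](Particle& p) {
                      p.vy += -9.81f * dt;  // gravity
                      p.px += p.vx * dt;
                      p.py += p.vy * dt;
                      p.pz += p.vz * dt;
                  });
}
```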
  • redgarl
    I don't understand how developers can still use PhysX when other options are available. Let me remind you that 80% of game development is done on AMD GPUs, so PhysX doesn't make any sense without a big money check from Nvidia.
    Reply
  • alidan
    Reepca said:
    "The new CPU-based software"

    Aren't particle effects kind of... capable of being done in parallel? Why wouldn't you seek GPU acceleration for these?
    I wouldn't doubt that an OpenCL version could happen, but Havok is owned by Intel, and Intel doesn't have a GPU division, so why would they make it run well on competitors' hardware? Hell, seeing how often Intel lands in antitrust suits, I'd almost sooner expect them to make a CUDA version of Havok than do anything that might help AMD.

    As I wrote this, I realized that Intel owns Havok... now I have to see benchmarks on AMD CPUs, because I wouldn't put it past Intel to screw them over and make it run like hell on their chips. And here I wanted to play Siege, but now I think I may not get the chance to.

    redgarl said:
    I don't understand how developers can still use PhysX when other options are available. Let me remind you that 80% of game development is done on AMD GPUs, so PhysX doesn't make any sense without a big money check from Nvidia.
    As much as I despise Nvidia, don't they have something around a 75-85% market share in PC GPUs?
    And until we see benchmarks or tech demos of this, we have no idea how much it encompasses or what it's capable of... whereas PhysX is a pretty solid engine, gimped by Nvidia not doing proper x86 coding and not opening up the source so AMD can use it.
    As far as I'm aware, and as crappy as it is, PhysX is the most complete of the real-time physics engines.
    Reply
  • somebodyspecial
    alextheblue said:
    utroz said:
    Sounds good as long as it doesn't have the "cripple non-Intel CPUs" function that only allows Intel CPUs to use the newer instruction sets (AVX, AVX2, SSE4, etc.).

    Or like PhysX, where you intentionally cripple it in software mode for ALL CPUs by using deprecated x87 code! But no, Havok seems to be fairly platform agnostic. Their middleware has been in use on both PCs and consoles for ages, which these days means AMD, Intel, and PowerPC. I'd still like to see some benching just to be sure. :)

    PhysX has been used for ages on consoles with no such effect either ;) It's in many console games.
    https://en.wikipedia.org/wiki/PhysX
    "At GDC 2015 Nvidia made PhysX free with source code available on GitHub"
    Should get easier to optimize for now that the source code is out (maybe...). It's been used in over 500 games, so it's a lot more popular than most think. I don't think it's as crippling as you claim either, or nobody would be using it on a console. Of course I'm not talking PC here - it would be silly for Nvidia to compete with their own GPUs, just as it would have been silly for AMD to give away access to Mantle. Mantle's only problem was that the company trying to put it out was too broke to push a new API and had far too little GPU share. It would have been a great idea in 2005, not in 2015 in AMD's condition. If Nvidia had done this, AMD would have been sunk. In fact, you could almost say GameWorks (which came in 2014) was a response to MANTLE...ROFL. I mean, if you do something to speed your stuff up, I guess the response might be that the other guy does something to PRETTY his side up, thereby creating differentiation ;)

    So TrueAudio and Mantle probably cripple NV/Intel hardware too, correct? Oh wait, both of those are proprietary and won't run on anybody else's stuff, nor even on a large portion of AMD's own stuff...LOL. Both sides do this; get over it. I, for one, have ZERO problems with them adding stuff to games that makes whatever hardware I bought a better deal than it was before they added it. It is not company X's job to help company Y look special, no matter who we're talking about. If you pay the expense of R&D, keep it to yourself if it makes you feel good. IT IS YOURS and YOURS alone to do with as you please. Making your products SPECIAL, making them stick out more than the guy's next to you, is how you make...wait for it...

    PROFITS.

    Which AMD hasn't really made since ~2000, when they made a billion in net profits and had a leading CPU. Hopefully ZEN will finally get them back there. They've lost almost $7B in the last 15 years ($6B+ in the last 12). Maybe AMD should start shipping more "special sauce" stuff ;)

    Note Havok doesn't give out ALL of the source code, just parts, and it has also been in 500+ games. My guess is Intel has had this all along and was waiting for an AMD CPU move...LOL. Note that the new version here is CPU-based only, and as such will likely give INTEL an advantage on PCs in some way, shape, or form, but be unhindered on consoles, since, like NV, they'll want to spread the API in places that don't DIRECTLY compete with their desktop stuff. I also think Intel is prepping the software for a major jump in perf (if ZEN does its job, think Core/Core2-sized jumps), and you have to find a way to EAT that extra perf up, or why would we buy faster CPUs? Intel already has that problem right now, so they need something to suck up the power in games. Here it is. DX12/Vulkan will ALSO be making crap CPUs better, so again they need to figure out how to (artificially?) slow down your CPUs so you want faster ones...LOL. They weren't sitting there doing nothing while AMD dropped out of the race for the last ~3-4 years (AMD had been behind longer than that, but gave up ~May 2012). Intel continued to develop stuff; they just don't give it to us (much like NV). Business 101 - give only what you need to until you have to. Andy Grove didn't write "Only the Paranoid Survive" for nothing ;) SIPS :)

    I'm not saying I won't like the new effects; I'm just stating how and why I think this is happening all of a sudden, after Zen, and probably with Vulkan/DX12 pushing more work onto the GPU. Gotta use up those CPU cycles somehow. ;)

    One more note: are you really crippling others if you're just using your current-gen stuff to max capability? E.g., HairWorks taxes tessellation greatly, which as it turns out doesn't harm NV's top Maxwell cards much but tanks the rest of their own cards & AMD's. Granted, the developer should have provided an easy way in affected games (Witcher 3) to turn that down (though you can do it in config files, or via AMD's drivers overriding the app, as AMD has shown), but is there anything wrong with tapping out your best stuff in any way possible? Should you NOT turn on ALL of your latest card's features just because some other guy (or even your own older gen) can't hack it? You build that power in to USE IT, correct? That's a differentiating feature that, compounded with others, might make me want your card more. I don't quite get why people don't see it for what it is: Nvidia making the most of their current card. I do think Witcher 3's patch 1.05 (which went live today), or a later one, will eventually allow this modification in the game itself instead of via AMD's drivers or config-file editing (which works for both sides). But whatever... you should get the point. Maybe it's AMD's job, as I don't see a perf fix in the 1.05 notes.

    I want EVERY feature that can be turned on turned on, especially if I've paid $500+ for a card! I don't usually even play a game until 1. a few months of patches are out (or all the DLC is done & there's a GOTY edition or something) & 2. I have a card that can MAX the crap out of everything in the game I want to play (usually, that is; some you just have to fire up anyway...LOL). Is it Nvidia's fault the dev didn't make it EASY to modify tessellation in Witcher 3 to 32x/16x/8x etc.? Who knows, but it's their JOB to show you ways to MAXIMIZE every feature of the card you plunked down your cash for... no doubt about that! AMD can just as easily put out a driver with a profile (Raptr fix, whatever) that auto-overrides the game with a usable setting. From the side-by-side pics I've seen, there isn't much difference between 64x & 16x (and 16x works fine for AMD), and not a ton of difference at 8x, though I can see it then. I don't see anything wrong with Intel maximizing their CPUs' usage, as long as I can turn off feature X to avoid being affected if I don't own Intel (i.e., the way you can toggle HairWorks easily).
    Reply
  • alidan
    somebodyspecial said:
    PhysX has been used for ages on consoles with no such effect either ;) [...]

    From my understanding, AMD was developing Mantle and testing it, intending to release it for everyone to use once they got it up and working, but they passed everything off to OpenGL and DX12.

    For PhysX, they open-sourced the CPU version... so yeah, the horribly inefficient version is open source now, and apparently you can only see it if you sign up for GameWorks...

    And I believe the issue is companies screwing over the competition. Look up Intel's blatantly illegal ways of doing this, which are more recent than I thought, or Nvidia's history of working with devs, making something in a game work and then locking the code to their platform only, forcing others to use less efficient workarounds.

    Let's not forget Nvidia blatantly crippling their older GPUs through drivers, which was prominently displayed with Witcher 3.

    Now, I'm OK with turning eye candy off; the issue comes in when you use inefficient-as-crap tech and offer no way to turn it off at all.
    Reply
  • somebodyspecial
    alidan said:
    From my understanding, AMD was developing Mantle and testing it, intending to release it for everyone to use once they got it up and working [...]

    NV didn't cripple their old cards in drivers. The GAME dev had tessellation set to 64x, which causes everything but Maxwell to get pounded. That has nothing to do with drivers. That's a tessellation setting in the game, easily changeable in config files (though again, the dev needs to put this option in the in-game menu).

    AMD should have approached the dev 2.5 years ago and asked to work with them on TressFX. THEY DID NOT. Not Nvidia's fault. AMD never intended to release Mantle; I do not believe that for a second. They intended to use it to gain share while the rest were held in an indefinite BETA with no release date ever mentioned. Intel was told to go away multiple times. 'Nuff said. They only gave up when they realized they didn't have the financial/dev muscle to push it. Their mistake was thinking they could EVER get that done; they should have put ALL of those resources into DX12/DX11/Vulkan drivers. You can see at AnandTech etc. how weak they are in DX12 so far, and it showed in DX11 as well.
    Reply
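On the tessellation point above: the factor a game requests can, in principle, be clamped before it hits the hardware, which is exactly what a config tweak or a driver override does. A hypothetical sketch of such a clamp (not actual Witcher 3, GameWorks, or driver code):

```cpp
#include <algorithm>

// Hypothetical: derive a hull-shader tessellation factor from how long an
// edge is on screen, then clamp it to a configurable cap. Dropping the cap
// from 64 to 16 barely changes the image but slashes the triangle load,
// which is why the 16x override works fine on AMD and older NV cards.
float tessellationFactor(float edgePixels, float pixelsPerTriangle,
                         float maxFactor /* e.g. 16.0f instead of 64.0f */) {
    float desired = edgePixels / pixelsPerTriangle;  // finer edges -> more tris
    return std::clamp(desired, 1.0f, maxFactor);
}
```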
  • rokit
    "The new CPU-based software"
    Thanks Intel =/
    I remember the time when Havok showed OpenCL-accelerated physics on AMD cards, and shortly after, Intel bought them. It was one of AMD's worst decisions; they needed to buy Havok back then, and it would have been a game changer. Now we'll have PhysX, which runs only on Nvidia and is crippled on the CPU for everyone else, and Havok, which will probably only run well on Intel CPUs. And while I do have an Intel CPU, I'd like more choices in the future; maybe AMD could be competitive in the CPU sector again, but it wouldn't matter because of another physics lock-in.
    Reply