Nvidia RTX Voice Works Fine On Non-RTX GPUs

Nvidia RTX Voice (Image credit: Nvidia)

Last week, Nvidia released RTX Voice, a new noise-canceling feature exclusive to the company's GeForce RTX 20-series products. However, a Guru3D forum user discovered a workaround that gets the feature working on non-RTX graphics cards, even on Windows 7.

The simple workaround consists of running the RTX Voice installer normally until an error message pops up telling you the application isn't supported on your graphics card. Although the installation doesn't go through, the installer unpacks all the necessary files to a temporary folder called "NVRTXVoice" on your C: drive (C:\temp\NVRTXVoice).

Here's where the magic starts. As explained by the forum poster, you need to open the RTXVoice.nvi file in a text editor running with administrator privileges. The file is located inside the NvAFX folder (C:\temp\NVRTXVoice\NvAFX\RTXVoice.nvi). Once the file is open, you just need to delete the "constraints" section:

<constraints>
<property name="Feature.RTXVoice" level="silent" text="${{InstallBlockedMessage}}"/>
</constraints>

After removing the entire "constraints" section, save the file and run the setup.exe application. RTX Voice should install without any hiccups.
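For those who would rather not hand-edit the manifest, the same edit can be scripted. This is a hypothetical helper, not part of Nvidia's tooling: it simply strips the <constraints> block from the .nvi file so setup.exe skips the GPU check. The path matches the temporary folder described above.

```python
import re
from pathlib import Path

# Location of the manifest the installer unpacks (from the steps above).
NVI_PATH = Path(r"C:\temp\NVRTXVoice\NvAFX\RTXVoice.nvi")

def strip_constraints(nvi_text: str) -> str:
    """Remove the entire <constraints>...</constraints> block,
    leaving the rest of the manifest untouched."""
    return re.sub(r"\s*<constraints>.*?</constraints>", "", nvi_text,
                  flags=re.DOTALL)

# Example of the transformation on a snippet like the one shown above:
sample = (
    "<nvi>\n"
    "<constraints>\n"
    '<property name="Feature.RTXVoice" level="silent" '
    'text="${{InstallBlockedMessage}}"/>\n'
    "</constraints>\n"
    "</nvi>"
)
cleaned = strip_constraints(sample)  # the <constraints> block is gone

# To apply it for real (writing the file requires administrator rights):
#   NVI_PATH.write_text(strip_constraints(NVI_PATH.read_text(encoding="utf-8")),
#                       encoding="utf-8")
```

After running this (or making the edit by hand), launch setup.exe from the same folder as usual.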

So far, the Guru3D user has been able to get RTX Voice working on a GeForce GTX 1080 and a Titan V. However, multiple user reports indicate that the workaround works on all GeForce GTX 10- and 16-series graphics cards. Unfortunately, GeForce 900-series cards see mixed results.

RTX Voice uses AI to perform noise cancellation and, according to Nvidia, depends heavily on the Tensor cores built into the company's Turing-based GeForce RTX 20-series graphics cards. The latest discovery raises the question of whether RTX Voice really requires Tensor cores. As things currently stand, the feature seems to run smoothly on CUDA cores alone. Nevertheless, it remains to be seen whether RTX Voice incurs a performance penalty on GeForce graphics cards that lack Tensor cores.

Nvidia will likely patch out the workaround now that the secret is out. If you're interested in enabling the feature on non-RTX cards, you might want to keep a backup copy of the beta installer (version 0.5.12.6), just in case.

  • digitalgriffin
    You know I would have respected NVIDIA more if they said, "This is a premium feature you need to pay for with a higher end RTX Card to receive"

    Instead they just lie through their G-D teeth saying it requires "Tensor cores" to promote a marketing objective which is a blatant lie.

    This is why I refuse to buy NVIDIA. They are just a bunch of a-clowns when it comes to being straightforward and honest. They are anti-competitive as well.
    Reply
  • bit_user
    The latest discovery raises the question of whether RTX Voice really requires Tensor cores.
    Clearly not.

    As things currently stand, the feature seems to run smoothly on CUDA cores alone. Nevertheless, it remains to be seen whether RTX Voice incurs a performance penalty on GeForce graphics cards that lack Tensor cores.
    No. It's just audio processing. As I said in the comments on the previous article about this, it might even be usable on CPUs.

    Anyway, it can either keep up with realtime or not. That's really the only question. Without tensor cores, you're almost certainly using fp32, instead of fp16. So, precision shouldn't be an issue. It just comes down to performance.

    Now, if someone can try it (on either an RTX or a GTX card) and post their GPU utilization, that might shed some light on how much compute power it really requires.

    Sadly, my Nvidia GPU is a 900-series...
    Reply
  • muser99
    I tried RTX Voice on a GTX 750 Ti (Maxwell) card. After the modification, it installed ok but Skype callers said I sounded muffled, fuzzy and too quiet so I uninstalled it. Task Manager showed little load on the GPU (3D) while it was in use, about 2-5%. I used Windows 10 Version 2004 Build 19041.207 (Release Preview of May 2020 Update).
    Reply
  • ZippydsmLEE
    digitalgriffin said:
    You know I would have respected NVIDIA more if they said, "This is a premium feature you need to pay for with a higher end RTX Card to receive"

    Instead they just lie through their G-D teeth saying it requires "Tensor cores" to promote a marketing objective which is a blatant lie.

    This is why I refuse to buy NVIDIA. They are just a bunch of a-clowns when it comes to being straightforward and honest. They are anti-competitive as well.
    I hate to say it, I really do... but that doesn't help; you'd also have to stop buying the games that are optimized to run on Nvidia cards. The whole game market is pretty much focused on Nvidia more than on AMD, and it helps that Nvidia does more to optimize their drivers more frequently.

    I'd say they could have offered it free with RTX cards and then asked $10-20 for non-RTX cards.

    back to rant

    By not buying Nvidia you are shooting yourself in the foot, because you are getting much less performance for the money you are putting in, unless you are not gaming.

    Not everything sold has a worthwhile alternative; with video cards you don't have much of a choice unless you are a minimalist. So you buy used, wholesale, or at a non-authorized-vendor discount. If a thing is two or three times removed from authorized vendors, it can't make the producer any money, because it has already been sold and resold on the market; it doesn't really matter how much it's bought, what matters is whether the primary vendors sell direct to end users. Even AMD cards get treated the same, and frankly they maintain higher prices than Nvidia because people are price gouging... but they do that with 990FX mobos too...

    I picked up a used 1060 for $170 eight months ago and found a new 1660 for $200 a month ago for a friend's computer. The 1660 is getting 5-10 more FPS than the 1070s I have tested; a borrowed RX 5700 got 5-8 more FPS than the 1660 at double the 1660's price.

    AMD lied about the memory bandwidth on the 8-core CPUs (I got my class-action check), so it's not like they are much better; they are just a lesser evil to try and keep Intel/Nvidia from slacking more.

    Yes, yes, I know it's a dumb, long-winded rant, but not everyone is made of money, and a lot of people don't bother to keep up with hardware/software compatibility nuances. Don't get me wrong, I jumped over to an RX 480 to move up from my 760 (got 10-15 FPS more) and got kind of pissed at Nvidia's forced log-in just to check for updates (they no longer force you to log in), but frankly, every chance I had to test, I got 10-20 more FPS out of the Nvidia equivalent, and that translates to a few more years of medium-end gaming.
    Reply
  • ZippydsmLEE
    Ya know... does everyone recall when they forced you to log in to check for and download updates through the drivers/GFexp? Here's a reason to make GFexp online-only... this would be a great feature to get you to put up with their adware. Outside of that, it should be free with RTX cards but either cost money or require online GFexp on non-RTX cards. My next card will be a 1660 unless 2060s start going for under $160. I have a 1660, so I don't need to upgrade for a while.
    Reply
  • bit_user
    muser99 said:
    I tried RTX Voice on a GTX 750 Ti (Maxwell) card.
    Cool. Thanks for the report!
    Reply
  • watzupken
    The more I read about the features that "utilize" the tensor cores, the more I feel the tensor cores are a marketing gimmick in the retail market.

    If you look at DLSS: when I first heard of it, I thought it was a great feature whereby the tensor cores could optimize frames in real time. It turns out that games first need to be optimized, or rather the game developer needs to work with Nvidia to "teach" the AI how it should optimize; it's nothing like real-time machine learning on our GPUs to optimize performance. Even though AMD did not include any machine-learning cores in their chips, their answer to DLSS is simple: just lower the resolution based on how much performance you want back, and sharpen the details. When this solution first came about, most reviews found that the supposedly less elegant AMD solution seemed to produce better results, in terms of both performance and image quality, compared with first-gen DLSS.

    Now if you look at this supposedly tensor-optimized RTX Voice, again people can work around the requirement for the tensor cores and make it work fine, or at least well enough. Perhaps there is some help from the tensor cores, but I don't think it's tangible enough.
    Reply
  • firefyte
    I tried this on my 1660, and the software just crashes when I check any boxes; it will install, however.
    Reply
  • vincero
    While the idea is impressive (and others are working on it using normal CPUs as well), I'd love to know what the trade-off really is in terms of power usage and battery runtime on a mobile device, for what is essentially a non-essential feature (really, if it had never been developed, I don't think anyone apart from a select few would care), in, say, meetings lasting a few hours. GPGPU assistance in apps tends not to be low-power compared with a dedicated IP/DSP block, e.g. NVENC vs. CUDA.
    Yes, the reported usage is <10% (from user accounts; it may not really be that much, but until we know better I'll take that as the number), but that could still be around 10 watts of extra power draw. On a laptop with a mobile RTX GPU and, say, a 50 Wh battery, that's a fifth to a quarter of your capacity used up just for that feature.
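    As a quick back-of-the-envelope check of those numbers (the 10 W draw and the 50 Wh battery are the comment's assumptions, not measured figures):

```python
# Back-of-the-envelope battery impact of a sustained extra GPU load.
# The 10 W draw and 50 Wh capacity are assumed figures, not measurements.
def fraction_of_battery(draw_watts: float, hours: float, capacity_wh: float) -> float:
    """Fraction of battery capacity consumed by the extra load."""
    return draw_watts * hours / capacity_wh

one_hour = fraction_of_battery(10, 1.0, 50)   # 0.2 -> a fifth of the battery
two_hours = fraction_of_battery(10, 2.0, 50)  # 0.4 -> two fifths for a longer call
```

    So the "fifth to a quarter" figure corresponds to roughly an hour of sustained use; a multi-hour meeting would cost proportionally more.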
    I suspect Realtek, etc. will eventually integrate such features into their audio-codec companion chips relatively easily, and probably more efficiently if they are less general-purpose and more tweaked for audio use. That would definitely suit the mobile device market better, where the laptop keyboard is physically located in the same device as the audio pickup, and other cases such as in-car communications, where external noise needs to be removed better than the differential/auxiliary microphone systems currently in use manage.
    It would also be interesting (seeing as Discord and MS Teams are already looking into doing this, not just Nvidia) to know whether the overhead of a normal CPU performing the calculation is low enough that it doesn't impact power usage any more than a GPU-based approach does.
    It seems like another case of a technology looking for a widespread use, and one that doesn't even really need 'AI' cores. Let me know when real 'AI' appears that doesn't need thousands of iterations of learning and can adapt to changes without additional 're-training'; until then, this is just another 'programmed intelligence' example.
    Not specifically related to Nvidia, but it's just more of this 'AI'-tagged tech bandwagon stuff, which in some cases only exists to distort reality. It pains me that there is a large area of silicon reserved in my phone essentially to make a picture you've taken (where the imaging sensor and processor are themselves optimized to be as accurate and clear as possible despite their sizes) less accurate, clear, or realistic, so that self-obsessed people can focus on themselves even more.

    On a side note: it will be funny if someone uses it while doing a YouTube or other video review of a keyboard and tries to capture the sound difference between one mechanical switch and another, or a membrane... doh.
    Reply
  • vincero
    bit_user said:
    Clearly not.


    No. It's just audio processing. As I said in the comments on the previous article about this, it might even be usable on CPUs.

    Anyway, it can either keep up with realtime or not. That's really the only question. Without tensor cores, you're almost certainly using fp32, instead of fp16. So, precision shouldn't be an issue. It just comes down to performance.

    Now, if someone can try it (on either an RTX or a GTX card) and post their GPU utilization, that might shed some light on how much compute power it really requires.

    Sadly, my Nvidia GPU is a 900-series...
    We don't actually know for sure how, or for what, it uses RTX/tensor-core features. Part of me wonders whether they are doing full sample analysis on the cores (which I think would be inefficient, as technically you're working with a larger dataset) or using other parts of the GPU to assist as well, e.g. using the NVENC block to perform a real-time DCT of the audio stream, then using the GPU matrix/tensor cores to calculate and remove audio patterns that are programmed to be ignored, and shunting the end result back to the user app (although that may also add working overhead).
    To a large extent, the chosen method probably comes down to how easy it is to distribute and implement in software/drivers.
    Which devices support it may also have nothing to do with compute power; it may instead rely on specific chip-architecture design, such as certain IP blocks being able to communicate with each other directly rather than through GPU RAM or driver calls.
    I suspect the beta application is just a wrapper around a specific set of API calls to various parts of the card's hardware, moving data between them, and maybe future drivers will implement a specific API or 'audio interface' alongside the native HDMI audio devices... although that would depend on uptake, I guess, or how easy it is to implement.
    Reply