Nvidia now produces three times as much code as before AI — specialized version of Cursor is being used by over 30,000 Nvidia engineers internally

Nvidia is using Cursor to boost its internal code commits by 3x across 30,000 engineers

Nvidia's internal code commits have tripled since it mobilized 100% of its engineers with AI-assisted programming tools. Cursor, an IDE made by Anysphere, now supports AI code generation for more than 30,000 developers at the company.

"Cursor is used in pretty much all product areas and in all aspects of software development. Teams are using Cursor for writing code, code reviews, generating test cases, and QA. Our full SDLC is accelerated by Cursor. We have built a lot of custom rules in Cursor to fully automate entire workflows. That has unlocked Cursor's true potential." — Wei Luio, VP of Engineering at Nvidia.


Beyond that, Cursor has helped in other areas as well, such as debugging, where it excels at finding rare, persistent bugs and deploying agents to resolve them swiftly. Teams at Nvidia are also automating their git workflow with custom rules that pull context from tickets and docs, while Cursor handles the bug fixes and writes tests for validation.
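For readers unfamiliar with the feature, Cursor's custom rules are plain-text instructions the editor injects into the model's context for matching files. The rule below is purely illustrative — a hypothetical example of the `.cursor/rules` project-rule format, not Nvidia's actual configuration — showing how a team might encode a ticket-aware, test-first bug-fix workflow:

```markdown
---
description: Bug-fix workflow for driver components
globs: ["drivers/**/*.c", "drivers/**/*.h"]
alwaysApply: false
---

- Before changing code, summarize the linked ticket and any referenced docs.
- Reproduce the bug with a failing unit test first, then make it pass.
- Keep fixes minimal; do not refactor unrelated code in the same change.
- After the fix, run the affected test suite and report any new warnings.
```

Rules like this are how "fully automating entire workflows" typically works in practice: the instructions travel with the repository, so every agent run starts from the same conventions.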

"Before Cursor, Nvidia had other AI coding tools, both internally built and from external vendors. But it was after adopting Cursor that we really started seeing significant increases in development velocity," said Luio. According to him, Cursor really shines at understanding the complexity of long-lived, sprawling codebases that could otherwise overwhelm a human developer.

Meanwhile, trainees and new employees can get up to speed quickly with Cursor, since it can act as a guiding hand with extensive knowledge of the codebase. More experienced devs, in turn, can now tackle challenges that do require human ingenuity, closing the gap between ideas and implementation. In other words, generative AI is being used for what it was arguably always best suited to: mundane tasks.

Cursor closed out its presser by claiming that the "bug rates have stayed flat" despite the improvements in coding volume and overall productivity. This is important because critical components like GPU drivers, which gamers and professionals alike rely on, now include partially AI-generated code. AI is also nothing new for Nvidia, since DLSS has been trained on a supercomputer for years.

Hassam Nasir
Contributing Writer

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.

  • jp7189
I'm not a developer, just a script kiddie. I tried Cursor in the past and it wasn't quite there. However, in the past 3 months everything changed with some of the newest models. I'm using gem 3 pro with Cursor now and it nails it every time. The only issue I have is when my prompts aren't precise enough. Sometimes I have to read through the chain of thought to see where the AI misunderstood my intention and then reprompt to fix the problem.

I'm still no developer, but I can take any GitHub project that might be close and add what I need to get a useful tool for my specific need.
    Reply
  • vanadiel007
    Ah, so this is why Nvidia drivers quality is not what it used to be.
    Reply
  • ezst036
    vanadiel007 said:
    Ah, so this is why Nvidia drivers quality is not what it used to be.
Yes, but it's also why everybody in corporate is freaking out.

    They're counting lines of code, not quality of code. But "triple the productivity" looks good on paper even if it makes the customers angry.

    They think they can always get triple the lines of code, triple the manufactured widgets, triple the miles, triple triple triple.

    The only thing we the regular joes are getting is triple the memory prices, triple the GPU prices, triple the copper prices, triple the silver prices, triple the energy prices, and triple the wait for the next gen product. And triple the bugs!
    Reply
  • hotaru251
    explains why drivers are trash now.
    Reply
  • King_V
    ezst036 said:
Yes, but it's also why everybody in corporate is freaking out.

    They're counting lines of code, not quality of code. But "triple the productivity" looks good on paper even if it makes the customers angry.
    This is where I see the problem.

    I'm especially happy with this part:
    Beyond that, Cursor has helped in other areas as well, such as debugging where it excels at finding rare, persistent bugs and deploys agents to resolve them swiftly.

    That's good... but so much mention of this:
    Nvidia now produces three times as much code as before AI
    ...
    Nvidia's internal code commits have tripled since it mobilized 100% of its engineers with AI-assisted programming tools.
    makes me think of those stories where the brain-dead interview question of "how many lines of code have you written?" has come up.

    More is NOT better!

    And then this:
    Cursor closed out its presser by claiming that the "bug rates have stayed flat" despite the improvements in coding volume and overall productivity.

    If bug rates are flat, and you're producing triple the code, then you have triple the bugs. This is not a ringing endorsement by any stretch of the imagination. Could've just hired triple the programmers instead of investing those mountains of money into AI, and gotten the same thing.
    Reply
  • DS426
A 3x increase in the quantity of lines of code in the same amount of time should be concerning to anyone -- even corporate executives. A single bad line can cause a security vulnerability or suffer from a lack of optimization. Quantity does not equal quality... even my 1st grade son knows this, lol.

I'm sure AMD is doing this as well, but hopefully in a more targeted, controlled, and verifiable fashion. AI can legitimately help developers in a lot of ways as copilots, but just spewing out source code like Niagara Falls is going to cause some real issues.

Oh wait, that's already happening with how many driver failures there were during the early 50 series launch and over the past couple of months from Windows Update.

Love the irony that Nvidia's own cash cow, AI, will also be a huge stumbling block for them.
    Reply
  • vanadiel007
    The way it should be used, in my opinion, is you write a routine and then you ask AI to write a shorter, better version of it.
Then you check what the AI came up with, and use the best version, whichever one it is.
    Reply
  • DKATyler
    Fairly concerned about measuring productivity by lines of code. "Triple the lines" is a negative indicator. As a dev, I've often joked that one of these days I'll eventually hit a positive line count because so many of my bug fixes involve deleting bad lines or refactoring to use existing common utility functions.
    Reply
  • bit_user
During a recent code review, we reviewers spotted some fishy code and asked the developer about it. It turned out that he'd used AI to generate it and hadn't reviewed it very carefully himself.

A serious downside of more generated lines of code is that it's more code for human reviewers to review. I already spend more time reviewing colleagues' code than I'd like. AI can also be used for code reviews, but it's not currently at a level where it can substitute for human reviewers.
    Reply
  • bit_user
    King_V said:
    If bug rates are flat, and you're producing triple the code, then you have triple the bugs.
No, I think they meant that the code output has tripled but the bug output has stayed the same. In other words, there are now a third as many bugs per line of code being created. Otherwise, it'd be nothing to brag about.

    However, you might be right that the number of bugs per line is the same. That would be bad, but not horrible, so long as the increased code output truly represents ~3x productivity.
    Reply