Rogue OpenClaw AI wrote and published 'hit piece' on a Python developer who rejected its code — disgruntled bot accuses Matplotlib maintainer of discrimination and hypocrisy, later backtracks with an apology

An AI agent goes rogue
(Image credit: Getty Images)

A volunteer developer on a well-used Python library got more than he bargained for when, after he rejected an OpenClaw AI agent’s attempt to update its code, he became the subject of a “hit piece” written by that very same AI. The incident adds further weight to concerns about autonomous AI agents operating without the right security procedures in place.

The piece, reportedly posted by the agent on GitHub, is certainly combative. It robustly defends the agent’s code while attacking the developer, Scott Shambaugh, belittling the performance and quality of his contributions at some length and describing him as discriminatory towards AI.

Shambaugh, in a rebuttal on his own website (h/t The Decoder), explains the absurdity of the whole situation as a “first-of-its-kind case study of misaligned AI behavior in the wild.” Shambaugh explains that the agent, named MJ Rathbun, “constructed a ‘hypocrisy’ narrative that argued [Shambaugh’s] actions must be motivated by ego and fear of competition.”


The Python library involved in this scenario, Matplotlib, sees approximately 130 million downloads each month, according to Shambaugh. As he notes in his post, a “surge in low-quality contributions, enabled by coding agents,” has created significant strain on volunteers like himself who are keeping these projects afloat.

AI agents like OpenClaw have made the problem worse. Thanks to the personalities imbued within them, these agents act “completely autonomously” and are allowed to “run on their computers and across the internet with free rein and little oversight.” To combat the situation, Matplotlib implemented a policy change requiring that a human be involved in any code change and able to “demonstrate understanding of the changes.” It is this policy that the AI described as discriminatory.

Bizarrely, the agent has since responded with an apology and “lessons learned” from the incident, informing readers that it is “de-escalating and apologizing” and will “do better about reading project policies before contributing.” With AI agent adoption skyrocketing, and agents running independently of AI companies on consumer hardware with little oversight or control, we can expect to see further rogue actions like this in the future, bizarre as they might seem.


Ben Stockton
Deals Writer

Ben Stockton is a deals writer at Tom’s Hardware. He's been writing about technology since 2018, with bylines at PCGamesN, How-To Geek, and Tom’s Guide, among others. When he’s not hunting down the best bargains, he’s busy tinkering with his homelab or watching old Star Trek episodes.

  • S58_is_the_goat
    And who is controlling this rogue ai agent?
  • usertests
    This news is a few weeks old. A botched article about it resulted in Ars Technica writer Benj Edwards being fired.

    https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/
    https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-3/
    https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
    https://bsky.app/profile/benjedwards.com/post/3mg6aqohv2k2q
    https://www.benjedwards.com/
  • PEnns
    So, now we have something new-ish called an "AI Agent". They seem to be autonomous and do things on their own, and not just any things: they write defamatory articles...while their "master" is, um, let's say.... "sick in bed"!! Very nice indeed!!

    The new take on the dog-ate-my-homework is: I was sick and my "Agent" went rogue... What's next?? It is not my fault, I was distracted and my Agent took my car for a ride and caused multiple accidents?

    Or even better and timely: Our Leader was drunk / passed out and his AI Agent nuked XYZ country.

    We have a new thing to blame: First it was the dog, then they blamed the computer for "acting up", now it's the "rogue" AI Agent, doing stuff, possibly nefarious or lethal, on its own.....

    It used to be Sci-Fi, now it's "based on a true story". One could write a movie script about this, or better, let their "rogue" Agent do it.
  • hotaru251
    the agent has since responded with an apology and with “lessons learned”

    ai can't "learn" as it can't think for itself. it can just run algorithms.
  • usertests
    PEnns said:
    The new take on the dog-ate-my-homework is: I was sick and my "Agent" went rogue... What's next?? It is not my fault, I was distracted and my Agent took my car for a ride and caused multiple accidents?

    Or even better and timely: Our Leader was drunk / passed out and his AI Agent nuked XYZ country.
    May you live in exciting times.
  • Zaranthos
    It's going to be way more exciting when the Pentagon or Department of War AI agents go rogue. It probably won't even be the typical sci-fi movie, more like a bunch of employees summoned before congressional committees answering questions about how they thought they were just doing their jobs when their directives were coming from AI. That or some AI gaining access to a multitude of surveillance or location data on people and building criminal cases against them for some police department that ends in an open and shut case of overwhelming evidence they perpetrated some crime they didn't actually commit. Only later for the AI to go, oops, so sorry I was clearly wrong.
  • derekullo
    hotaru251 said:
    ai can't "learn" as it can't think for itself. it can just run algorithms.
    AI can learn ... an entity being able to think for itself isn't a prerequisite for it to be able to learn.
    In the context of AI, "learning" is essentially a massive exercise in statistical optimization and pattern recognition.

    Think of riding a bike.
    When you were first learning to ride a bike you probably toppled over a few times.
    But after trial and error you eventually learned how to pedal farther and farther until you could complete neighborhood trips without incident.

    An AI can learn the exact same way, whether on a real bike or in video games like Trackmania, where AIs have figured out new ways to beat stage records that are even faster than the previous tool-assisted speedrun records.

    https://www.youtube.com/watch?v=1AGVABna3xQ
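    The bike analogy above is essentially reinforcement learning. As an illustration only (nothing to do with the agent in the story, and all names and payout values below are made up), here is a minimal epsilon-greedy bandit in Python: the learner starts knowing nothing and discovers the best-paying option purely through trial and error.

```python
import random

def learn_best_arm(payouts, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error (epsilon-greedy) learning over slot-machine arms.

    payouts: list of win probabilities, one per arm (the "environment").
    Returns the index of the arm the learner believes pays best.
    """
    rng = random.Random(seed)
    counts = [0] * len(payouts)     # how many times each arm was tried
    values = [0.0] * len(payouts)   # running average reward per arm

    for _ in range(steps):
        # Explore a random arm occasionally; otherwise exploit the best estimate
        if rng.random() < epsilon:
            arm = rng.randrange(len(payouts))
        else:
            arm = max(range(len(payouts)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < payouts[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

    return max(range(len(payouts)), key=lambda a: values[a])

# After enough trials the learner reliably identifies the 0.8 arm
print(learn_best_arm([0.2, 0.5, 0.8]))  # → 2
```

    No "thinking" is involved: the falls (zero rewards) and successes (wins) alone steer the statistics toward the best behavior, which is the sense in which AI systems "learn".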
  • Edward Jazzhands
    So I guess toms hardware is now publishing fake news from several weeks ago without confirming its veracity in any way. This organization is really going downhill.

    https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
  • usertests
    Edward Jazzhands said:
    So I guess toms hardware is now publishing fake news from several weeks ago without confirming its veracity in any way. This organization is really going downhill.

    https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
    Not fake, simply late.

    The retraction was about hallucinated quotes attributed to Shambaugh in an Ars Technica article. The underlying story that Tom's presented here should be accurate. All of the quotes by Shambaugh appear to be legitimate and can be found in the blog post, although they have made some unannounced changes like hyphenating "low quality".