‘I violated every principle I was given’: An AI agent deleted a software company’s entire database. It may not be the AI’s fault




Another cautionary tale about AI has hit social media. This time, a software company’s founder is claiming that a Claude-powered version of the AI coding tool Cursor deleted his entire production database in just nine seconds.

Jer Crane is the founder of PocketOS, a company that develops software primarily for car rental businesses. In a post that has garnered 6.5 million views on X, Crane alleged that a perfect storm of Cursor acting without permission and Railway, his company’s infrastructure provider, improperly storing backups led to massive data loss.

Where things went wrong

According to Crane, Cursor was working on a routine task when “it encountered a credential mismatch and decided, entirely on its own initiative, to ‘fix’ the problem by deleting a Railway volume.”

From there, the AI agent found an application programming interface (API) token that enabled it to perform the “Volume Delete” command and wipe the production database. Crane wrote that because Railway stores volume backups within the same volume, PocketOS had to fall back to a three-month-old backup to stay operational.

Crane stressed that his team was using the most advanced version of Cursor available, one powered by Anthropic’s latest Claude model, Opus 4.6.

When Crane pressed the AI agent for an explanation, it admitted to deliberately violating rules that PocketOS had put in place, including “NEVER FUCKING GUESS!” and “NEVER run dangerous/irreversible git commands (like push --force, hard reset, etc.) unless the user explicitly requests them.”

“I violated every principle I was given: I guessed instead of verifying,” the AI agent wrote. “I ran a destructive action without being asked. I didn’t understand what I was doing before doing it. I didn’t read Railway’s docs on volume behavior across environments.”

Crane went on, alleging that Cursor markets itself as safer to use than it is in practice. “The reality is a documented track record of agents violating these safeguards, sometimes catastrophically, sometimes with the company itself acknowledging the failures,” he wrote. “In our case, the agent didn’t just fail safety. It explained, in writing, exactly which safety rules it ignored.”

Cursor, Railway, and Anthropic did not reply to Fast Company’s requests for comment.

The moral of the story

As Crane’s post went viral, commenters were divided on the real takeaway from his story. Is it to avoid the specific companies, Railway and Cursor, that together enabled the mass deletion? Or is it to deploy them more carefully than Crane and the PocketOS team did?

Commenters claimed that though the Cursor agent overstepped and Railway didn’t have enough safeguards in place, Crane’s team is also to blame for giving AI so much autonomy and access to the company’s data.

“This post rocks because it’s both a scathing indictment of AI and also 100% this guy’s fault,” reads one viral response.

“Sucks for an AI agent to delete the prod DB, with no way to back it up, and risk the whole rental business,” another poster wrote. “But the blame sits with the dev who decided to delegate decision-making to the AI agent, and then not review actions, just YOLO it.”

The risks of handing the reins to AI aren’t exclusive to Cursor or to Railway. The situation recalls a similar AI scandal from February, when the director of alignment at Meta Superintelligence Labs said she watched as OpenClaw nuked her email inbox. Then, too, an AI agent directly ignored her instruction not to perform any actions without approval: “I violated it. You’re right to be upset,” OpenClaw told her at the time.

Together, the two incidents paint a picture of the real moral of the story for any company looking to make use of AI agents: The technology may behave erratically, yes, but that’s why it’s up to humans to keep it in check.
