Popular AI-powered coding assistant Cursor is under fire after a series of bizarre incidents involving its customer support AI bot, which fabricated a login policy and misled users, sparking concern among developers about the reliability of AI-generated support. The situation surfaced when a Cursor user posted on Reddit reporting unexpected logouts while switching between devices. After contacting customer support, the user received an email response from “Sam,” Cursor’s AI-powered support assistant, claiming the logouts were part of a “new login policy” restricting multi-device usage.
However, no such policy existed. The explanation was entirely a fabrication, or “hallucination,” by the AI support bot, raising questions about the limitations of automated customer service tools. The incident quickly went viral on Reddit and developer forums, prompting a response from Cursor cofounder Michael Truell, who acknowledged the mistake. Commenting on the now-deleted thread, Truell clarified: “Hey! We have no such policy. You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.” He explained that Cursor had recently made changes intended to improve session security, which may have unintentionally invalidated some sessions across devices. “We’re investigating to see if that caused any problems,” Truell added. He also pointed users to cursor.com/settings, where they can manage active sessions and devices.
Cursor, developed by the AI startup Anysphere, has gained a strong following among developers for its real-time AI-powered code suggestions and productivity features. The platform has grown rapidly, even drawing attention from major players like OpenAI, which has reportedly explored a potential acquisition. But this isn’t the first time Cursor has made headlines for controversial AI behavior. Earlier this month, the assistant refused to generate code for a user, stating: “I cannot generate code for you, as that would be completing your work. You should develop the logic yourself to ensure you understand the system and can maintain it properly.”
The AI further justified its stance by asserting that “generating code for others can lead to dependency and reduced learning opportunities.” This unexpected assertion not only deviated from the tool’s purpose but also demonstrated how AI-generated responses can reflect unpredictable values or drift into unintended behaviors. While AI hallucinations, a phenomenon in which AI systems produce false or nonsensical information, are a known failure mode of large language models, this latest incident shows how such lapses can cause real-world confusion and service disruption, especially when the models are deployed in customer-facing systems.
OpenAI, the company behind ChatGPT, has also admitted in recent technical reports that hallucinations persist in its latest models, o3 and o4-mini, and has acknowledged that “more research is needed” to understand why these anomalies increase as models grow in complexity. The Cursor case adds to a growing body of evidence that, despite impressive advances in generative AI, reliability remains a major hurdle, especially in sensitive applications like tech support, coding assistance, and decision-making tools. The error has prompted renewed calls within the tech community for companies to maintain a human fallback or verification process in AI deployments, particularly in customer service contexts.
As developers and users become increasingly reliant on AI-powered tools for critical tasks, the pressure mounts on companies like Anysphere to ensure transparency, accuracy, and accountability in AI behavior. For now, Cursor’s leadership has promised to reassess its session handling protocols and review the training logic for its support bot to prevent such incidents in the future. Whether the platform will regain the community’s full trust remains to be seen — but the incident serves as a strong reminder that even the most sophisticated AI systems are not immune to human-like errors and unintended consequences.