Anthropic users across online forums are complaining that the Claude AI model has abruptly declined in performance. Over recent weeks, users on X, GitHub, and Reddit have been sharing anecdotes, benchmarks, and prompts in an effort to pinpoint what changed. One AMD senior director wrote that Claude "has regressed to the point it cannot be trusted to perform complex engineering."

Speculation centers on whether the model was deliberately scaled back—what users call "nerfed"—to control costs or redirect scarce computing power toward Anthropic's more advanced Mythos model. The backlash arrives as the company tests this more powerful system, raising questions about whether cutting-edge AI is becoming less accessible even as it grows more capable.

Users have posted side-by-side outputs and benchmarks that they say show Claude producing less accurate or less nuanced answers, with the complaints concentrated on complex engineering tasks where the assistant's reasoning appears diminished. The perceived regression has sparked widespread discussion among technical professionals who rely on the tool.

Anthropic says it adjusted the default reasoning level in Claude Code but denies the changes were tied to compute constraints or the Mythos project. The firm's response suggests the modifications were intentional calibration rather than resource-driven degradation. How the company addresses user concerns could affect its reputation among developers and enterprise clients.

If performance issues persist, some power users may migrate to competing AI platforms. The situation highlights the tension between advancing frontier models and maintaining reliable service for existing customers.