Anthropic has announced the general availability of its Claude Platform on AWS, marking a strategic shift in how hyperscaler AI services are structured. Unlike Microsoft’s Azure OpenAI, where Microsoft tightly controls the stack, Anthropic operates the platform itself while AWS handles billing and identity and access management (IAM). AWS’s Bedrock service remains available as a separate offering.

The move rewrites the traditional hyperscaler AI bargain by giving Anthropic direct control over the user experience and model deployment. This structure allows the AI startup to maintain its own pricing and feature roadmap without being subsumed under a single cloud giant’s infrastructure. For enterprises, it promises more flexibility in choosing and managing AI models across different clouds.

Financial specifics were not disclosed, but the platform’s general availability signals a maturation of Anthropic’s enterprise strategy. The company has been positioning Claude as a safety-focused alternative to OpenAI’s GPT models, and the AWS integration gives it immediate access to a vast customer base already accustomed to cloud AI services.

Analysts suggest this could pressure Microsoft and Google to renegotiate their own AI partnership terms. Enterprises currently locked into Azure OpenAI may evaluate Claude on AWS for cost or safety advantages, potentially fragmenting the cloud AI market further. However, migration costs and existing workflow integrations remain barriers to switching.

“This is a shot across the bow for Microsoft’s OpenAI exclusivity,” said one cloud strategist quoted in the report. “But the real winner is the enterprise customer, who now gets more choice.”