Despite OpenAI partnership, Microsoft is one of Anthropic's biggest customers

In a striking revelation about the competitive dynamics of the AI industry, Microsoft has emerged as one of Anthropic’s largest customers, even as it maintains a flagship partnership with OpenAI. Independent analysis of Anthropic’s API traffic underscores this unexpected relationship, highlighting how hyperscale cloud providers are diversifying their AI model offerings to serve enterprise needs.

The insight stems from detailed tracking of Anthropic’s Claude model inference requests. Researchers at Epoch, an organization focused on AI trends, monitored public API endpoints and identified top clients by IP address geolocation and domain attribution. The data paints a clear picture: Microsoft-related traffic constitutes 12.6% of all inference requests to Anthropic’s API. This positions Microsoft as the second-largest external customer, trailing only Anthropic’s own internal usage at 18.4%.

Microsoft’s footprint is unmistakable across multiple domains. IPs associated with azurewebsites.net, a key Azure service, account for 5.6% of requests. Other significant contributors include azure-api.net (2.8%), microsoft.com (1.5%), and live.com (1.2%). Additional Azure-related domains push the total Microsoft share even higher. This volume rivals or exceeds that of other major players; for instance, Vercel, a developer platform, logs 4.5%, while Amazon Web Services (AWS) domains contribute around 2%.
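The share arithmetic above is straightforward to reproduce. The sketch below is purely illustrative: it builds a toy request log whose proportions mirror the four Microsoft-attributed domains Epoch reports (the log itself, and the 1,000-request scale, are invented for the example), then aggregates per-domain counts into percentage shares.

```python
from collections import Counter

# Hypothetical per-request attribution log: each entry is the domain
# attributed to a request's source IP. Counts are chosen so the shares
# match the article's reported figures out of 1,000 total requests.
requests = (
    ["azurewebsites.net"] * 56   # 5.6%
    + ["azure-api.net"] * 28     # 2.8%
    + ["microsoft.com"] * 15     # 1.5%
    + ["live.com"] * 12          # 1.2%
    + ["other"] * 889            # everything else
)

counts = Counter(requests)
total = sum(counts.values())
shares = {domain: round(100 * n / total, 1) for domain, n in counts.items()}

# Sum the four named Microsoft-related domains.
microsoft_domains = {"azurewebsites.net", "azure-api.net", "microsoft.com", "live.com"}
microsoft_share = round(sum(shares[d] for d in microsoft_domains), 1)

print(shares["azurewebsites.net"])  # 5.6
print(microsoft_share)              # 11.1
```

Note that the four named domains alone sum to 11.1%; the additional Azure-related domains the article mentions are what lift Microsoft's total to the reported 12.6%.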

This usage aligns with Microsoft’s Azure AI Studio, which integrates third-party foundation models alongside its OpenAI partnership. Launched to provide developers with a model-agnostic environment, Azure AI Studio enables seamless access to Anthropic’s Claude models, including Claude 2.1. Microsoft has publicly emphasized this multi-model approach, stating it offers “the best model for every use case.” Customers can thus deploy Claude via Azure endpoints without direct API calls to Anthropic, which explains the heavy Azure IP traffic.

Microsoft’s relationship with Anthropic is not new. In late 2023, reports surfaced of a multiyear deal allowing Microsoft to host and resell Anthropic models on Azure. While financial details remain undisclosed, the scale of API activity suggests substantial commitments. This arrangement mirrors broader industry strategies where cloud giants avoid over-reliance on single providers. Anthropic, backed primarily by Amazon with a $4 billion investment, also partners with Google Cloud, which captures about 3.2% of its API traffic per the Epoch data.

The findings challenge assumptions about exclusive alliances in AI. Microsoft poured over $13 billion into OpenAI, securing preferred access to GPT models and fueling innovations like Copilot. Yet, enterprise demands for alternatives—driven by factors like cost, performance, safety features, and vendor lock-in concerns—prompt diversification. Claude’s strengths in reasoning and coding tasks, for example, make it a compelling complement to GPT-4.

Epoch’s methodology provides robust evidence. By aggregating over 100 million inference requests from December 2023 to January 2024, the analysis correlates IP blocks with known cloud provider ranges using MaxMind GeoIP databases. While IP-based attribution has limitations—such as shared infrastructure or VPN obfuscation—the patterns are consistent and corroborated by Microsoft’s documented integrations.
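The core attribution step, matching a request's source IP against known cloud provider address blocks, can be sketched with Python's standard-library `ipaddress` module. The CIDR ranges below are placeholders chosen for the example, not Epoch's actual dataset; a real pipeline would load published cloud IP ranges and a GeoIP database such as MaxMind's.

```python
import ipaddress

# Illustrative CIDR blocks only. Real attribution would use the full
# published address ranges for each provider plus a GeoIP database.
PROVIDER_RANGES = {
    "azure": [ipaddress.ip_network("20.0.0.0/8")],
    "aws": [ipaddress.ip_network("52.94.0.0/16")],
}

def attribute_ip(ip: str) -> str:
    """Return the provider whose CIDR blocks contain this IP, else 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for provider, networks in PROVIDER_RANGES.items():
        if any(addr in net for net in networks):
            return provider
    return "unknown"

print(attribute_ip("20.1.2.3"))   # azure
print(attribute_ip("52.94.5.6"))  # aws
print(attribute_ip("8.8.8.8"))    # unknown
```

This also makes the stated limitations concrete: a request routed through shared infrastructure or a VPN exit node would be attributed to whoever owns that address block, not the true originator.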

Anthropic’s customer base reflects maturing enterprise adoption. Beyond hyperscalers, traffic from platforms like Replicate (3.1%) and Perplexity AI (1.8%) indicates a thriving ecosystem. Internal Anthropic requests dominate, likely driven by development, fine-tuning, and safety evaluations. The remainder, roughly 70% of traffic once internal and Microsoft usage is excluded, is spread across thousands of smaller clients, signaling widespread API uptake.

For Microsoft, this dual strategy hedges risks amid regulatory scrutiny and competitive pressures. The U.S. Federal Trade Commission is probing Microsoft-OpenAI ties for potential antitrust issues, prompting a broader portfolio. Similarly, Amazon’s Anthropic investment positions AWS as a counterweight to Azure, while Google’s $2 billion stake bolsters its cloud AI offerings.

Anthropic benefits immensely from such hyperscaler validation. High-volume customers like Microsoft accelerate revenue—estimated in the hundreds of millions annually based on API pricing—and validate Claude’s enterprise readiness. Anthropic’s focus on constitutional AI, emphasizing alignment and safety, resonates with regulated sectors wary of OpenAI’s rapid scaling.

Looking ahead, expect intensified model pluralism. As costs drop and capabilities converge, enterprises will mix and match via cloud marketplaces. Microsoft’s Azure AI Model Catalog already lists over 1,500 models from 40 providers, with Anthropic prominent. This trend democratizes AI while fostering innovation through competition.

The Epoch data, visualized in interactive dashboards, offers a rare window into opaque API economies. It confirms Microsoft’s pragmatic approach: leverage OpenAI’s cutting-edge tech while tapping rivals for specialized strengths. In the race for AI supremacy, no single partnership suffices.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.