Microsoft 365 Copilot Adds GPT-5.2 in Major AI Strategy Update

AI-generated Image (Credit: Jacky Lee)

On 11 December 2025, Microsoft published an update announcing that OpenAI's GPT-5.2 is coming to Microsoft 365 Copilot and Microsoft Copilot Studio, appearing in a model selector (including in Copilot Chat and Copilot Studio) and rolling out to Copilot-licensed users. Microsoft described two variants: "GPT-5.2 Thinking" for complex problems and "GPT-5.2 Instant" for everyday work.

Separately, Microsoft has also been expanding "model choice" in defined Copilot experiences. In a 24 September 2025 Microsoft 365 Blog post, Microsoft said Anthropic models can be enabled for the Researcher agent and for building agents in Copilot Studio, with the key caveat that Anthropic models are hosted outside Microsoft-managed environments and are subject to Anthropic's Terms of Service.

Copilot is Not Simply “ChatGPT Inside Microsoft 365”

Microsoft’s own technical documentation describes Microsoft 365 Copilot as a system that grounds a user’s prompt with relevant workplace context and then sends that grounded prompt to a large language model, returning the result to the user’s app.
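The grounding pattern Microsoft describes can be sketched in a few lines. This is an illustrative-only sketch: the function names, the `GroundedPrompt` class, and the stubbed retriever are assumptions standing in for Copilot's actual orchestration and Microsoft Graph search, not Microsoft's APIs.

```python
# Hypothetical sketch of the grounding pattern: enrich the user's prompt
# with workplace context before sending it to a large language model.
# Names here are illustrative, not Microsoft's real interfaces.

from dataclasses import dataclass


@dataclass
class GroundedPrompt:
    user_prompt: str
    context_snippets: list[str]

    def render(self) -> str:
        # Combine retrieved context and the original request into one prompt.
        context = "\n".join(f"- {s}" for s in self.context_snippets)
        return f"Workplace context:\n{context}\n\nUser request: {self.user_prompt}"


def ground(prompt: str, retrieve) -> GroundedPrompt:
    """Fetch relevant workplace snippets for this prompt via a retriever."""
    return GroundedPrompt(prompt, retrieve(prompt))


# A stubbed retriever standing in for a Microsoft Graph-style search.
def fake_retriever(prompt: str) -> list[str]:
    return ["Q3 budget spreadsheet summary", "Team meeting notes from Monday"]


grounded = ground("Summarise our Q3 spending", fake_retriever)
print(grounded.render())
```

The grounded prompt, not the raw user text, is what reaches the model; the result is then returned to the user's app.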

A related Microsoft post from 2024 also clarified that Copilot for Microsoft 365 uses a combination of foundation models, chosen to fit different features (for example speed versus creativity), and that Microsoft may change models when it can demonstrate improvements.

This matters because it positions Copilot as an orchestration layer that can route work to different models, rather than a product locked to a single provider.

What Microsoft Commits to

Microsoft Learn states that prompts, responses, and data accessed through Microsoft Graph in Microsoft 365 Copilot are not used to train foundation models.

Microsoft also says model updates are designed to improve performance and capabilities but do not change its enterprise security, privacy, and compliance commitments or existing controls.

The nuance is the new model choice option: Microsoft explicitly states Anthropic models in the relevant Copilot experiences are hosted outside Microsoft-managed environments. That does not automatically mean customer data is unsafe, but it does mean organisations must treat that pathway as a different risk and compliance posture than Azure-hosted model execution.

Will Microsoft Spend Less on Its Own AI?

The most defensible answer is no. Microsoft’s publicly reported spending and product roadmap suggest it is increasing investment, not trimming it.

  • Infrastructure spend: Reuters reported Microsoft planned about US$80 billion in fiscal 2025 for AI-enabled data centres, with Reuters attributing the figure to a CNBC report and citing Microsoft vice chair Brad Smith on the investment scale.

  • Custom silicon and systems work: Microsoft has been developing its own chips and infrastructure stack, including the Azure Maia line, as part of an end-to-end approach to AI infrastructure.

  • In-house model development: Microsoft AI has publicly discussed releasing and testing its own models, including MAI-Voice-1 and MAI-1-preview, and positioned them as future offerings inside Copilot.

Put simply: shipping OpenAI models inside Copilot is consistent with a "best available model now" product cadence, while the infrastructure and in-house model moves indicate Microsoft still wants leverage and optionality over the long run.

What This Implies for Microsoft’s Strategy

Copilot as the control layer for work AI

Microsoft’s strategic advantage is not only the model. It is the distribution and workflow surface: Word, Excel, Outlook, Teams, and admin tooling. The Copilot architecture that grounds prompts and returns results in app lets Microsoft standardise how enterprise users interact with AI across their daily work.

Model supply diversification, including a clear trade-off

Microsoft’s “combination of foundation models” stance plus the addition of Anthropic options points to a future where Copilot can swap models based on capability, cost, latency, or policy needs.
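Swapping models on capability, cost, or policy grounds implies a routing layer in front of the model catalogue. The sketch below is a hypothetical illustration of that kind of selection logic; the model names, the hosting flags, and the routing rules are assumptions, not Microsoft's design.

```python
# Illustrative-only sketch of policy-based model routing: pick a model
# that matches the task's complexity while honouring a hosting policy.
# Catalogue entries and rules are invented for the example.

from dataclasses import dataclass


@dataclass(frozen=True)
class Model:
    name: str
    reasoning: bool         # suited to complex, multi-step tasks
    microsoft_hosted: bool  # runs inside Microsoft-managed environments


CATALOG = [
    Model("gpt-5.2-thinking", reasoning=True, microsoft_hosted=True),
    Model("gpt-5.2-instant", reasoning=False, microsoft_hosted=True),
    Model("third-party-model", reasoning=True, microsoft_hosted=False),
]


def route(complex_task: bool, require_microsoft_hosting: bool) -> Model:
    """Return the first catalogue model satisfying capability and policy."""
    for model in CATALOG:
        if require_microsoft_hosting and not model.microsoft_hosted:
            continue  # policy excludes externally hosted models
        if model.reasoning == complex_task:
            return model
    raise LookupError("no model satisfies the current policy")


# Everyday work under a strict hosting policy gets the lighter model.
print(route(complex_task=False, require_microsoft_hosting=True).name)
```

A hosting-policy flag like the one above is exactly the kind of governance control the externally hosted Anthropic pathway would require.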

But Microsoft has been unusually explicit that some third-party models are hosted outside Microsoft-managed environments. Strategically, that expands capability and negotiating leverage, but it also increases governance complexity for customers and for Microsoft's own compliance messaging.

Reduced dependency as the OpenAI relationship evolves

Reuters reported that Microsoft and OpenAI’s restructuring deal removed Microsoft’s right of first refusal to be OpenAI’s compute provider.

Reuters also reported OpenAI signed a seven-year US$38 billion cloud services deal with Amazon Web Services, and noted this followed the restructuring that removed Microsoft's right of first refusal for compute services.

From a strategy perspective, those developments make Microsoft's multi-model posture easier to understand: it lowers single-supplier risk in a market where even major partnerships are becoming less exclusive.

What to Watch Next

  • Whether Microsoft expands third-party model choice beyond limited Copilot experiences, and what additional admin controls and auditability it provides for those pathways.

  • Whether Microsoft's unit economics improve through continued data centre build-out and custom silicon, since Copilot margins will increasingly depend on compute cost and throughput.

  • How Microsoft positions governance and privacy messaging as more model options are added, especially where hosting is outside Microsoft-managed environments.

TheDayAfterAI News

