The core news is this: Apple will no longer ask you to simply accept whichever AI sits inside your device. According to Bloomberg's Mark Gurman, Apple is building a system that lets users choose from a range of outside AI services to power features across their software, with the goal of turning Apple devices into a comprehensive AI platform.
That is a major strategic pivot for a company famously protective of its own ecosystem. Rather than trying to out-build Google, Anthropic, and OpenAI on its own, Apple is opening the door to all of them at once.
The change is slated to arrive alongside iOS 27, iPadOS 27, and macOS 27 in the autumn of 2026, according to people with knowledge of the plans cited by Bloomberg.
What the "Extensions" System Actually Means for You
Here is where it gets really interesting. The mechanism Apple is using internally carries the name "Extensions," and it is more powerful than it first sounds.
TechCrunch reports that test versions of the software display a message reading: "Extensions allow you to access generative AI capabilities from installed apps on demand, through Apple Intelligence features such as Siri, Writing Tools, Image Playground and more."
So instead of a single monolithic AI brain running everything, your iPhone will call on whichever model you prefer — task by task, feature by feature.
Engadget notes that AI companies will need to opt in and add support through their App Store apps to become available. Once they do, their models can power Apple's own native AI tools from within those apps.
Think of it as a plug-in architecture for intelligence. Your device stays the platform, and the AI becomes a choice.
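None of the real APIs are public yet, but the opt-in flow described above is easy to picture as a small registry: installed apps declare which generative capabilities they can provide, and system features only see providers that have registered. Every name in this sketch is invented for illustration; Apple has not published the actual Extensions interface.

```python
# Hypothetical sketch of an opt-in extension registry.
# All class and capability names are invented for illustration;
# this is not Apple's real Extensions API.

class ExtensionRegistry:
    def __init__(self):
        # Maps a capability (e.g. "text_editing") to the list of
        # provider apps that have opted in to power it.
        self._providers = {}

    def register(self, provider, capabilities):
        """An installed app opts in, declaring what it can power."""
        for cap in capabilities:
            self._providers.setdefault(cap, []).append(provider)

    def available(self, capability):
        """Only opted-in providers are visible to system features."""
        return self._providers.get(capability, [])


registry = ExtensionRegistry()
registry.register("Gemini", ["image_generation", "text_editing"])
registry.register("Claude", ["text_editing"])

print(registry.available("text_editing"))      # ['Gemini', 'Claude']
print(registry.available("image_generation"))  # ['Gemini']
```

The point of the sketch is the direction of control: apps announce themselves to the platform, and the platform decides when to call them, rather than the other way around.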
Which Models Are Coming to Apple Intelligence
Right now, Apple Intelligence relies on Apple's own on-device foundation models for most tasks, with ChatGPT available as an optional upgrade for more demanding queries. That limited lineup drew real criticism from the start — many users and developers wanted access to more capable or specialized models.
AndroidHeadlines reports that popular options like Anthropic's Claude and Google's Gemini are strong candidates for the expanded lineup. Models from Google and Anthropic are already being tested in pre-release builds, according to TechCrunch's sourcing. For China specifically, Apple is also likely to explore partnerships with local AI providers such as DeepSeek.
Beyond those names, MacDailyNews notes that an "Extensions" menu in Settings will let users assign their preferred model to specific Apple Intelligence tasks — so you could, for example, use Gemini for image generation while relying on Claude for text editing. That granularity is new, and it matters.
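That per-task assignment amounts to a preference map consulted at dispatch time: each Apple Intelligence feature looks up the user's chosen model and falls back to the on-device default when nothing is set. The names below are guesses for illustration, not Apple's actual settings schema.

```python
# Hypothetical per-task model preferences, as an Extensions-style
# Settings menu might store them. All names are illustrative only.

DEFAULT_MODEL = "apple-on-device"

preferences = {
    "image_generation": "Gemini",
    "text_editing": "Claude",
}

def model_for(task):
    """Resolve which model powers a given Apple Intelligence task,
    falling back to Apple's own on-device model."""
    return preferences.get(task, DEFAULT_MODEL)


print(model_for("image_generation"))  # Gemini
print(model_for("text_editing"))      # Claude
print(model_for("summarization"))     # apple-on-device (fallback)
```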
Why Apple Is Making This Move Now
Apple's AI journey has not been smooth. The company launched Apple Intelligence with significant fanfare, but the promised Siri upgrades were slow to arrive and the single-model approach frustrated users who had grown accustomed to the richness of standalone AI assistants.
Apple now appears to be betting on flexibility rather than trying to build one all-conquering model in-house. As Engadget puts it, Apple is settling into its AI strategy at last, and the key is options.
This also reflects a broader market reality. The AI landscape is moving too quickly for any single company to dominate every use case alone. By turning Apple Intelligence into an open platform, Apple can deliver best-of-breed capabilities without having to win the model race outright.
There is also a competitive angle worth watching. Bloomberg's March 2026 reporting had already hinted that Apple's AI chatbot interface would support multiple model selections. The iOS 27 Extensions system is the concrete execution of that broader vision.
The Technical Foundation Already in Place
Apple has not been standing still on the model-building side, either. The company published its Foundation Models Tech Report 2025, authored by nearly 400 researchers, detailing two complementary AI systems that currently power Apple Intelligence.
Apple's Machine Learning Research page describes a compact on-device model of roughly 3 billion parameters, optimized to run directly on Apple silicon, alongside a larger server-based model built on a novel Parallel-Track Mixture-of-Experts architecture designed for Apple's Private Cloud Compute platform.
Both models support multiple languages and can process images as well as text. Crucially, Apple's research report also introduced a Swift-centric Foundation Models framework that exposes guided generation, constrained tool calling, and fine-tuning, letting developers integrate these models with just a few lines of code.
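"Guided generation" broadly means constraining a model's free-form output to a typed schema the calling code can rely on. Apple's framework does this in Swift at the decoding level; the Python sketch below only illustrates the underlying idea, validating a model's raw reply against a declared structure. It is not Apple's API, and the schema and example data are invented.

```python
# Conceptual sketch of guided generation: force a model's output
# into a declared schema instead of accepting free-form text.
# This mimics the idea behind Apple's Swift framework only.

import json
from dataclasses import dataclass

@dataclass
class EventSummary:
    title: str
    date: str

def guided_generate(raw_model_output: str) -> EventSummary:
    """Parse model output and validate it against the schema;
    anything that doesn't fit raises instead of leaking through."""
    data = json.loads(raw_model_output)
    return EventSummary(title=str(data["title"]), date=str(data["date"]))


# A model reply constrained (in a real system, during decoding)
# to the schema above:
reply = '{"title": "Team sync", "date": "2026-06-10"}'
event = guided_generate(reply)
print(event.title)  # Team sync
```

The benefit for developers is that downstream code works with a typed value, never a blob of prose it has to re-parse.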
That developer-facing infrastructure is exactly the kind of plumbing you need before you can invite third-party models to slot into your system.
Market Reaction: Investors Are Paying Attention
Wall Street reacted instantly. Following Bloomberg's report, MacDailyNews noted that Apple shares climbed more than 2% in trading. Shares of Alphabet, Google's parent company, also gained on expectations of a deeper partnership with Apple through the Gemini integration.
That dual stock movement tells you something important: investors see this not just as an Apple win, but as a rising-tide moment for the AI industry. When the world's most-used consumer device ecosystem opens up to third-party AI, the ripple effects touch every major AI company.
What This Means for Privacy and User Control
Apple's signature concern has always been privacy, and the company is expected to maintain that emphasis even as the platform opens up.
The on-device Apple foundation models will continue to handle tasks locally wherever possible, with no data leaving the device. For cloud-based third-party models, users will presumably need to consent explicitly before their queries travel to external servers — consistent with Apple's existing approach to ChatGPT integration, which requires an opt-in before any data is shared.
The Extensions architecture also gives Apple a clear layer of control. Because third-party models integrate through Apple's own framework rather than operating independently, Apple retains the ability to govern how data flows between your requests and external AI providers.
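Apple's existing ChatGPT integration already works this way: nothing leaves the device until the user explicitly opts in. A minimal sketch of that kind of consent gate, with invented names throughout:

```python
# Hypothetical consent gate for cloud-based third-party models.
# Provider names and the routing logic are invented for illustration.

consented_providers = set()

def grant_consent(provider):
    """User explicitly opts in to sharing queries with a provider."""
    consented_providers.add(provider)

def route_query(provider, query, on_device_fallback):
    """Send to an external provider only after explicit consent;
    otherwise handle the request locally."""
    if provider in consented_providers:
        return f"sent to {provider}: {query}"
    return on_device_fallback(query)


local = lambda q: f"handled on-device: {q}"

print(route_query("Gemini", "summarize this", local))  # handled on-device: summarize this
grant_consent("Gemini")
print(route_query("Gemini", "summarize this", local))  # sent to Gemini: summarize this
```

Because the gate sits in the platform layer rather than inside each provider's app, Apple keeps a single chokepoint for enforcing its privacy rules.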
What Happens Next: WWDC 2026 Is the Moment to Watch
Official confirmation of all these plans is expected at Apple's Worldwide Developers Conference in June 2026. That event will likely reveal which AI companies have signed on, how the Extensions settings interface works, and exactly how deeply third-party models can integrate with features like Siri and Writing Tools.
Developer betas are expected shortly after the WWDC announcement, with public releases landing in September — the same cadence Apple follows every year for major iOS updates.
If the plans hold, this autumn could mark the moment Apple Intelligence stops being a single-vendor showcase and becomes something genuinely new: a universal AI layer for one of the most powerful device ecosystems on the planet. For users, that means real choice. For the AI industry, it means Apple's billion-plus active devices just became the most important distribution channel in the game.