

Last Monday, towards the end of the WWDC25 keynote, Craig Federighi said that Apple had “expanded” its vision for Swift Assist, and it would now allow developers to bring their Large Language Model of choice directly to Xcode.

Now, we know more about how this is going to work.
What is Swift Assist?

Swift Assist is a feature Apple announced during last year’s WWDC but hadn’t released until now, with the Xcode 26 beta. Initially, as Apple explained it, the feature was going to serve as:
“a companion for all of a developer’s coding tasks, so they can focus on higher-level problems and solutions. It’s seamlessly integrated into Xcode, and knows the latest software development kits (SDKs) and Swift language features, so developers will always get the latest code features that blend perfectly into their projects. With Swift Assist, tasks like exploring new frameworks and experimenting with new ideas are just one request away. Swift Assist uses a powerful model that runs in the cloud — and like all Apple developer services, it is built with privacy and security in mind. Developers’ code is only used to process requests and never stored on servers, and Apple will not use it to train machine learning models.”
What is the new Swift Assist?
Apple has now detailed its new vision for Swift Assist on Xcode 26: it includes a built-in model for predictive code completion, a new, native ChatGPT integration with limited free usage, and a way for developers to plug in third-party providers (including local models running on their own machines).
Out of the box, developers can enable ChatGPT in just a few clicks, subject to a daily request cap. If they already have a ChatGPT Plus subscription, they can log in or plug in their own API key to bypass those limits.

But the real power comes from Xcode’s new openness: developers can add other providers like Anthropic by simply entering an API key, which gives them access to the latest generation of the industry’s leading frontier models for coding. Apple’s implementation also lets developers choose which models to display from each provider and mark favorites for quicker access.
Even local models are fair game. If a developer uses tools like Ollama or LM Studio, Xcode lets them tap into models running directly on their Mac. They can add as many providers as they want and toggle between them in Xcode’s coding assistant when starting a new conversation.
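For developers who want to try the local route, a quick sanity check before pointing Xcode at a local provider is to confirm the server is actually running and see which models it has downloaded. The sketch below is a minimal command-line Swift script that assumes Ollama’s default endpoint (http://localhost:11434) and its /api/tags route, which lists locally available models; the exact way Xcode discovers local providers may differ.

```swift
import Foundation

// Minimal sketch: check that a local Ollama server is reachable before
// adding it as a provider in Xcode. Assumes Ollama's default endpoint
// (http://localhost:11434) and its /api/tags route, which returns the
// models that have been pulled locally.
let url = URL(string: "http://localhost:11434/api/tags")!

let task = URLSession.shared.dataTask(with: url) { data, _, error in
    if let error = error {
        print("Ollama does not appear to be running: \(error.localizedDescription)")
    } else if let data = data, let body = String(data: data, encoding: .utf8) {
        print("Models available locally:\n\(body)")
    }
    exit(0)
}
task.resume()

// Keep the command-line process alive until the request finishes.
RunLoop.main.run()
```

Run it with `swift check_ollama.swift` (a hypothetical file name); if it prints a list of models, the local server is ready for Xcode to talk to.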

This move brings Xcode in line with how modern AI tooling is shaping up: model-agnostic, customizable, and modular enough that developers aren’t left waiting for Apple to adopt newer model versions. It’s the right direction, and the kind of flexibility that could make Xcode a lot more appealing in a multi-model world.
Are you excited about using LLMs on Xcode? Have you been using LLMs to code? Let us know in the comments.
