In a statement on Friday to John Gruber of Daring Fireball, Apple acknowledged a delay in the release of Apple Intelligence-powered Siri:
Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like Type to Siri and product knowledge, and added an integration with ChatGPT. We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.
(As far as I can tell, Apple provided this statement only to Gruber; no other outlet appears to be reporting it independently.)
I’m among the many people disappointed, but not surprised, by the delay. In my first piece on this site, I expressed my excitement for the just-announced Apple Intelligence. In it, I highlighted three demos which delighted me, all tied to Siri’s deeper integration into and across the system.
Today, none of those examples work yet, and seemingly won’t for quite some time.
I’ve previously expressed my sympathy for the Siri team. In that same piece, I referenced a Bloomberg story suggesting longtime Apple exec Kim Vorrath is moving to Apple Intelligence, commenting:
I’ve watched Vorrath and her Program Office teams operate from the inside for many years. The biggest impact she and her team had across engineering was instilling discipline: every feature or bug fix had to be approved; tied to a specific release; and built, tested, and submitted on time. It was (is!) a time-intensive process—and engineering often complained about it, sometimes vocally—but the end result was a more defined, less kitchen-sink release each year. To a significant extent, her team is the reason why a feature may get announced at WWDC but not get released until the following spring. She provided engineering risk management.
It seems like Vorrath is already making an impact.
Most of those commenting on this delay have focused on internal technical issues as the cause. That makes sense and is most likely the case: all of the demos at last year’s WWDC for Personal Context were based on Apple apps and features—Photos, Calendar events, Files, Messages, Mail, and Maps (plus real-time flight details). Most of what they’re dealing with is likely tied to Apple Intelligence- and Siri-specific issues.
But another thought occurred to me, about an aspect of Apple Intelligence that may be overlooked: what is the impact of third-party developers on this delay? Not the impact on them, but of them.
Apple’s statement says that “a more personalized Siri” has “more awareness of your personal context” and “the ability to take action for you within and across your apps.” Much of that functionality would rely on third-party apps and the knowledge those apps have about us.
I can’t help but wonder: Have enough developers adopted the necessary technologies (App Intents, etc.) to make Apple Intelligence truly compelling?
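For context, “adopting the necessary technologies” means a developer exposing their app’s data and actions to the system via the App Intents framework. Here’s a minimal sketch of what that looks like; the intent, parameter, and dialog are illustrative placeholders, not taken from any real app:

```swift
import AppIntents

// A hypothetical intent a flight-tracking app might expose so Siri
// (and Apple Intelligence) can answer "when is my mom's flight landing?"
// without the user opening the app.
struct CheckFlightStatusIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Flight Status"

    @Parameter(title: "Flight Number")
    var flightNumber: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // In a real app, this would query the app's own data store or API.
        let status = "on time" // placeholder result
        return .result(dialog: "Flight \(flightNumber) is \(status).")
    }
}
```

Each intent a developer ships is one more piece of “personal context” Siri can draw on; without that work, Siri simply can’t see inside the app.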
Of the three WWDC demos I noted, it’s the last one described by Kelsey Peterson (Director, Machine Learning and AI) that’s the most extensive example of what “a more personalized Siri” would be capable of. Here’s how I summarized it:
You’re picking your mom up from the airport. You ask Siri “when is my mom’s flight landing?” Siri knows who “my mom” is, what flight she’s on (because of an email she sent earlier), and when it will land (because it can access real-time flight tracking). You follow up with “what’s our lunch plan?” Siri knows “our” means you and your mom, when “lunch” is, that it was discussed in a Messages thread, and that it’s today. Finally, you ask “how long will it take us to get there from the airport?” Siri knows who “us” is, where “there” is, which airport is being referenced, and real-time traffic conditions.
(Watch the video, starting at 1:22:01.)
Imagine if, instead of Apple Mail, Messages, and Maps, Peterson was using Google Gmail, Messages, and Maps. Or Proton Mail, Signal, and MapQuest. If any of these apps don’t integrate with Apple Intelligence, the whole experience she described falls apart.
The key takeaway from the demo is that users won’t have to jump into individual apps to get the answers they need. This positions apps as subordinate to Apple Intelligence.
Considering Apple’s deteriorating relationship with the developer community, will third-party developers want their app to be one more piece of Apple’s AI puzzle? How many developers are willing to spend time making their apps ready for Apple Intelligence, just so Apple can disintermediate them further? Unless customers are clamoring for the functionality, or it’s seen as a competitive advantage, it’s work that few developers will consider a priority—witness the reportedly low native app numbers for Apple Vision Pro as an example of the impact developers can have on the perceived success of a platform.
Much of the long-term success of Apple Intelligence depends on widespread adoption of App Intents by third-party developers—many of whom, at least initially, may see little reason to participate. While Apple is unlikely to delay Apple Intelligence just because of third-party developers, it could seriously hamstring the feature if there isn’t ample adoption of App Intents. Perhaps Apple, in addition to addressing technical issues, will use the extra time to drive that adoption. Apple Intelligence cannot succeed on first-party apps alone.