Two press releases and five new OS releases top today’s Apple headlines.
Starting today, with the availability of iOS 18.4, iPadOS 18.4, and macOS Sequoia 15.4, Apple Intelligence features are now available in many new languages, including French, German, Italian, Portuguese (Brazil), Spanish, Japanese, Korean, and Chinese (simplified) — as well as localized English for Singapore and India — and are accessible in nearly all regions around the world.
In addition, iPhone and iPad users in the EU have access to Apple Intelligence features for the first time, and Apple Intelligence expands to a new platform with an initial set of features available in U.S. English with Apple Vision Pro […]
The second press release offers more details on “the first set of powerful Apple Intelligence features” for Vision Pro:
visionOS 2.4 is available today, bringing the first set of powerful Apple Intelligence features that help users communicate, write, and express themselves on Apple Vision Pro […] visionOS 2.4 also introduces the Apple Vision Pro app for iPhone to help users easily find new content and apps, and enhancements to Guest User make sharing Vision Pro experiences even easier.
Added are Writing Tools, Image Playground, Genmoji, Smart Reply, natural language search, and Create a Memory Movie, plus Priority Messages in Mail, Mail Summaries, Image Wand in Notes, Priority Notifications in Notification Center, and Notification Summaries.
Apple Intelligence on Vision Pro is available only in U.S. English.
Also available is the Apple Vision Pro app for iPhone, which:
offers a new way for users to discover new spatial experiences, queue apps and games to download, easily find tips, and quickly access information about their Vision Pro […]
Most useful, at least for me, are improvements to Guest Mode:
visionOS 2.4 lets users start a Guest User session on Apple Vision Pro with their nearby iPhone or iPad. To make it easier to guide a guest through the Vision Pro experience, users can now choose which apps are accessible to their guests and start View Mirroring with AirPlay from their iPhone.
This is a good start, and it’ll make it easier for me to share my Vision Pro with my wife, but Vision Pro desperately needs a real “multi-user” experience, like Mac has had for decades—but which iPhone and iPad have never gotten. A $3,500 device needs to be shareable within a household.
Enabling all of these new features are five new OS updates: iOS 18.4, iPadOS 18.4, macOS 15.4, tvOS 18.4, and visionOS 2.4.
In addition to the above-noted features, these releases also add Apple News+ Food, eight new emoji (the Face with Bags Under Eyes may become my personal avatar), and a whole host of “bug fixes and enhancements.”
Apple developers can download the releases and read detailed Release Notes.
In a statement on Friday to John Gruber of Daring Fireball, Apple acknowledged a delay in the release of Apple Intelligence-powered Siri:
Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like Type to Siri and product knowledge, and added an integration with ChatGPT. We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.
(As far as I can tell, Apple provided this statement only to Gruber; no other outlet appears to be reporting it independently.)
I’m among the many people disappointed, but not surprised, by the delay. In my first piece on this site, I expressed my excitement for the just-announced Apple Intelligence. In it, I highlighted three demos which delighted me, all tied to Siri’s deeper integration into and across the system.
Today, none of those examples work yet, and seemingly won’t for quite some time.
I’ve previously expressed my sympathy for the Siri team. In that same piece, I referenced a Bloomberg story suggesting longtime Apple exec Kim Vorrath is moving to Apple Intelligence, commenting:
I’ve watched Vorrath and her Program Office teams operate from the inside for many years. The biggest impact she and her team had across engineering was instilling discipline: every feature or bug fix had to be approved; tied to a specific release; and built, tested, and submitted on time. It was (is!) a time-intensive process—and engineering often complained about it, sometimes vocally—but the end result was a more defined, less kitchen-sink release each year. To a significant extent, her team is the reason why a feature may get announced at WWDC but not get released until the following spring. She provided engineering risk management.
It seems like Vorrath is already making an impact.
Most of those commenting on this delay have focused on internal technical issues as the cause. That makes sense and is most likely the case: all of the demos at last year’s WWDC for Personal Context were based on Apple apps and features—Photos, Calendar events, Files, Messages, Mail, and Maps (plus real-time flight details). Most of what they’re dealing with is likely tied to Apple Intelligence- and Siri-specific issues.
But another thought occurred to me, an important aspect of Apple Intelligence that may be overlooked: What is the impact of third-party developers on this delay? Not the impact on them—of.
Apple’s statement says that “a more personalized Siri” has “more awareness of your personal context” and “the ability to take action for you within and across your apps.” Much of that functionality would rely on third-party apps and the knowledge those apps have about us.
I can’t help but wonder: Have enough developers adopted the necessary technologies (App Intents, etc.) to make Apple Intelligence truly compelling?
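For the unfamiliar, App Intents is the framework a third-party app uses to describe its data and actions to the system, so Siri can see into the app and act on its behalf. A minimal intent looks roughly like the sketch below. (The flight-status intent and its canned lookup are my own hypothetical example, not anything from Apple’s demos.)

```swift
import AppIntents

// A minimal App Intent: the mechanism by which Siri and Apple Intelligence
// can see into, and take action within, a third-party app.
// "Flight status" is a hypothetical example, not from Apple's demos.
struct CheckFlightStatusIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Flight Status"

    @Parameter(title: "Flight Number")
    var flightNumber: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would query its own data source here.
        let status = "on time"
        return .result(dialog: "Flight \(flightNumber) is \(status).")
    }
}
```

The code itself is the easy part; the hard part is that this adoption decision gets made app by app, developer by developer.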
Of the three WWDC demos I noted, it’s the last one described by Kelsey Peterson (Director, Machine Learning and AI) that’s the most extensive example of what “a more personalized Siri” would be capable of. Here’s how I summarized it:
You’re picking your mom up from the airport. You ask Siri “when is my mom’s flight landing?” Siri knows who “my mom” is, what flight she’s on (because of an email she sent earlier), and when it will land (because it can access real-time flight tracking). You follow up with “what’s our lunch plan?” Siri knows “our” means you and your mom, when “lunch” is, that it was discussed in a Messages thread, and that it’s today. Finally, you ask “how long will it take us to get there from the airport?” Siri knows who “us” is, where “there” is, which airport is being referenced, and real-time traffic conditions.
(Watch the video, starting at 1:22:01.)
Imagine if, instead of Apple Mail, Messages, and Maps, Peterson had been using Google Gmail, Messages, and Maps. Or Proton Mail, Signal, and MapQuest. If any of these apps don’t integrate with Apple Intelligence, the whole experience she described falls apart.
The key takeaway from the demo is that users won’t have to jump into individual apps to get the answers they need. This positions apps as subordinate to Apple Intelligence.
Considering Apple’s deteriorating relationship with the developer community, will third-party developers want their apps to be one more piece of Apple’s AI puzzle? How many developers are willing to spend time making their apps ready for Apple Intelligence, just so Apple can disintermediate them further? Unless customers are clamoring for the functionality, or it’s seen as a competitive advantage, it’s work that few developers will consider a priority. Witness the reportedly low native-app numbers for Apple Vision Pro as an example of the impact developers can have on the perceived success of a platform.
Much of the long-term success of Apple Intelligence depends on widespread adoption of App Intents by third-party developers—many of whom, at least initially, may see little reason to participate. While Apple is unlikely to delay Apple Intelligence just because of third-party developers, it could seriously hamstring the feature if there isn’t ample adoption of App Intents. Perhaps Apple, in addition to addressing technical issues, will use the extra time to drive that adoption. Apple Intelligence cannot succeed on first-party apps alone.
Apple will spend more than $500 billion in the U.S. over the next four years
Apple today announced its largest-ever spend commitment, with plans to spend and invest more than $500 billion in the U.S. over the next four years. This new pledge builds on Apple’s long history of investing in American innovation and advanced high-skilled manufacturing, and will support a wide range of initiatives that focus on artificial intelligence, silicon engineering, and skills development for students and workers across the country.
My immediate thought upon seeing Apple’s headline: How much of this is actually new, rather than a repackaging of existing plans?
Dan Gallagher for The Wall Street Journal (News+):
Apple’s $500 Billion U.S. Investment Is Mostly Already in the Books
Unclear, though, is how much of the planned spending is actually new. Apple has spent about $1.1 trillion over the past four fiscal years on total operating expenses and capital expenditures—and Wall Street expects nearly $1.3 trillion in total spending over the next four years, according to consensus estimates by Visible Alpha. While Apple doesn’t break out its expenses per geography, about 43% of its revenue comes from the Americas region, which it defines as North and South America. Assuming the U.S. constitutes the large bulk of that number, and if spending is about in line with revenue, then a rough figure of 40% of projected global spending through the 2028 fiscal year equates to about $505 billion.
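Spelling out Gallagher’s back-of-the-envelope math (all figures are his, and rough by design):

```
Consensus four-year total spend:  just under $1.3 trillion
Assumed U.S. share of spending:   ~40% (in line with Americas revenue)
0.40 × ~$1.26 trillion            ≈ $505 billion
```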
In short, Apple’s announced figure is in line with what one might expect the company to be spending anyway, given its financials.
I don’t know that Apple announced this only for the benefit of Trump, but Trump, of course, claimed credit:
There is also domestic politics to consider—no small matter for a U.S. consumer-electronics company that still builds the bulk of its products overseas. Indeed the announcement seems to have already paid off: “Thank you Tim Cook and Apple!!!” President Trump exclaimed on his Truth Social platform Monday morning.
The full post reads (in all caps, naturally, complete with a typo and three exclamation marks):
APPLE HAS JUST ANNOUNCED A RECORD 500 BILLION DOLLAR INVESTMENT IN THE UNITED STATES OF AMERICA. THE REASON, FAITH IN WHAT WE ARE DOING, WITHOUT WHICH, THEY WOULD’NT BE INVESTING TEN CENTS. THANK YOU TIM COOK AND APPLE!!!
Also on Truth Social, Trump released a graphic touting Apple’s $500 Billion commitment as part of “Investments in the U.S. Under President Trump”.
To quote myself on Mastodon in early February:
Everyone: We’re Doing The Thing.
Trump: I will SEEK VENGENCE upon anyone not Doing The Thing!
Everyone: After speaking with Trump, we’ve agreed to Do The Thing.
Trump: Thanks to me and me alone, everyone is now Doing The Thing. You’re welcome.
Everyone: 😶
There’s no reason to believe Apple’s announcement today had anything to do with Trump or that it would have been any different under another administration—except that this administration is deeply, corruptly transactional and rewards behavior that demonstrates fealty.
Back to Apple’s announcement:
As part of its new U.S. investments, Apple will work with manufacturing partners to begin production of servers in Houston later this year. A 250,000-square-foot server manufacturing facility, slated to open in 2026, will create thousands of jobs.
Previously manufactured outside the U.S., the servers that will soon be assembled in Houston play a key role in powering Apple Intelligence, and are the foundation of Private Cloud Compute, which combines powerful AI processing with the most advanced security architecture ever deployed at scale for AI cloud computing. The servers bring together years of R&D by Apple engineers, and deliver the industry-leading security and performance of Apple silicon to the data center.
It’s a tantalizing tidbit that the servers being built in this new facility are for Private Cloud Compute, hardware Apple doesn’t even sell. Obviously very important to Apple’s AI plans, but I’m surprised they’re important (unique?) enough to require a dedicated facility. I’m very curious to see what these servers might look like. (Honestly, a rack-mountable M4 Max relaunch of the Xserve—Apple Server?—would be dope.)
One final thought, on Apple’s hiring plans:
In the next four years, Apple plans to hire around 20,000 people, of which the vast majority will be focused on R&D, silicon engineering, software development, and AI and machine learning. The expanded commitment includes significant investment in Apple’s R&D hubs across the country. This includes growing teams across the U.S. focused on areas including custom silicon, hardware engineering, software development, artificial intelligence, and machine learning.
This is deftly worded. Apple specifies how many people it plans to hire, but doesn’t state whether that’s additional or replacement headcount. If 20,000 people leave, and 20,000 people are hired to replace them, will Apple claim success on its hiring plan? It would meet the letter, if not the spirit, of the statement. Likewise, Apple plans on “growing” specific teams, but says nothing about “shrinking” others to balance things out.
Quite the facility with language, Apple has.
Paul Kafasis engages in some excellent, self-inflicted nerd-sniping on One Foot Tsunami:
I asked my iPhone who won Super Bowls 1 through 60 (that’s “I” through “LX” in Super Bowl styling) and captured a screenshot of each result.
The results are utterly appalling:
So, how did Siri do? With the absolute most charitable interpretation, Siri correctly provided the winner of just 20 of the 58 Super Bowls that have been played. That’s an absolutely abysmal 34% completion percentage. If Siri were a quarterback, it would be drummed out of the NFL.
Some of the results are especially awful. For example, to the question “Who won Super Bowl XXIII?”, Siri responds with the number of times Bill Belichick has won or appeared in the Super Bowl—completely irrelevant.
John Gruber at Daring Fireball wrote a brutally (but fairly) titled follow-up, Siri Is Super Dumb and Getting Dumber, sharing the appalling results of his own query, “Who won the 2004 North Dakota high school boys’ state basketball championship?”
New Siri — powered by Apple Intelligence™ with ChatGPT integration enabled — gets the answer completely but plausibly wrong, which is the worst way to get it wrong. It’s also inconsistently wrong — I tried the same question four times, and got a different answer, all of them wrong, each time. It’s a complete failure.
We’ve all had the Siri experience of getting a clearly wrong or patently useless answer to our query. It’s gotten to the point where I merely roll my eyes and move on—I rarely even screenshot mistakes anymore.
But I do feel sorry for the Siri team. I have some good friends who work there, and I had occasion to work with the team on Siri responses a few years back. I know they cringe every time these failures hit the blogs. They know more than anyone just how much Siri needs to improve.
The latest scuttlebutt (from Mark Gurman at Bloomberg) is that longtime Apple exec Kim Vorrath is moving to Apple Intelligence in an effort to whip it into shape. I’ve watched Vorrath and her Program Office teams operate from the inside for many years. The biggest impact she and her team had across engineering was instilling discipline: every feature or bug fix had to be approved; tied to a specific release; and built, tested, and submitted on time. It was (is!) a time-intensive process—and engineering often complained about it, sometimes vocally—but the end result was a more defined, less kitchen-sink release each year. To a significant extent, her team is the reason why a feature may get announced at WWDC but not get released until the following spring. She provided engineering risk management.
I hope Vorrath and the Siri team can make this work. I need them to make this work. The future promised by Apple Intelligence is too compelling for it to fail.
Graham Fraser, writing about the BBC, for BBC News:
The BBC has complained to Apple after the tech giant's new iPhone feature generated a false headline about a high-profile murder in the United States.
Apple Intelligence, launched in the UK earlier this week, uses artificial intelligence (AI) to summarize and group together notifications.
Apple Intelligence is new to the U.K., but those of us in the U.S. have been ridiculing it for a month now. As John McClane said, “Welcome to the party, pal!”
This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.
Headlines are an editorial decision, and represent the voice of the publication. A poor summary can be embarrassing. A misleading one—as this was—can sully the publication.
"BBC News is the most trusted news media in the world," the BBC spokesperson added.
"It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications."
Apple can’t afford this bad press if Apple Intelligence is going to be taken seriously and drive hardware sales.
If Apple can’t address this quickly, they may have another egg freckles situation on their hands.[1]
To summarize: The handwriting recognition on the Apple Newton would fail, often in spectacular ways. Garry Trudeau “mocked the Newton in a weeklong arc of his comic strip Doonesbury, portraying it as a costly toy that served the same function as a cheap notepad, and using its accuracy problems to humorous effect. In one panel, Michael Doonesbury's Newton misreads the words "Catching on?" as "Egg Freckles", a phrase that became widely repeated as symbolic of the Newton's problems.” ↩︎
As an Apple nerd, the week of WWDC is both a great and a terrible time to launch something new. Almost no one will pay attention to this new website, yet it gives me much to write about.
If you’re also an Apple nerd (and if you’re reading this, there’s an above-average chance you are), Apple’s annual Worldwide Developer Conference offers a ton to explore, learn, and generally obsess over. For many, it sets the direction for the rest of their year, acting as something of a launching pad, a new beginning.
For over two decades, WWDC was a major focal point of my year. I worked in Apple’s Worldwide Developer Relations organization (WWDR), which puts on the show, and the team I was on, Developer Technical Support (DTS), was an integral part of it.
Months of long days, sleepless nights, and endless planning meetings culminated with WWDC Monday. It was immensely gratifying to see the results of many months of hard work from the teams make it to the stage or screen. And while much of what was announced at any given show was a surprise to me, my excitement was generally tempered by having already spent a lot of time living on the new software.
Not so this year. It’s my first WWDC “on the outside” since 2001[1], and my anticipation for what’s new was sky-high.
Leading up to WWDC, there was much speculation about what Apple would do with AI, and its impact on the company’s fortunes.
Apple answered, but they sure took their time.
Monday’s Keynote (anchored by Craig Federighi) was effectively two events. The first sixty minutes covered the normal annual updates to Apple’s software product lineup: all the new features coming in iOS 18, iPadOS 18, tvOS 18, watchOS 11, macOS Sequoia, and visionOS 2, with a surprisingly long Apple TV+ segment. Despite the hype, there wasn’t a single mention of “AI” or “artificial intelligence” in this first hour.
I must applaud Apple’s restraint here. I speculated last week,
What if Apple announces “Siri AI” and says it’s “Advanced Interactions” or “Apple Intelligence”?
“Edit your photos using your voice. Powered by Siri AI….”
“Xcode 16 helps you write code twice as fast using Siri AI.…”
“In the newly improved Developers Forums, you can quickly find answers to your code-level questions thanks to Siri AI….”
They get to utter “AI” a bunch of times, but make it distinct from “artificial intelligence”.
It would be very Apple to try to redefine what AI means.
🤔
What I expected was a presentation littered with “AI” droppings to satisfy those carping about Apple falling behind in AI. Instead, Apple stuck to their usual language as they highlighted new features enabled by “intelligent capabilities”, “machine learning”, and “powerful new algorithms”, just as they have for years.
It was in the back forty minutes that we finally got what for many people was the main attraction: The introduction of “Apple Intelligence”, Apple’s name[2] for their on-device, privacy-focused, and deeply integrated take on artificial intelligence.
Tim Cook introduced Apple Intelligence this way:
At Apple, it’s always been our goal to design powerful personal products that enrich people’s lives, by enabling them to do the things that matter most, as simply and easily as possible.
We’ve been using artificial intelligence and machine learning for years to help us further that goal. Recent developments in generative intelligence and large language models offer powerful capabilities that provide the opportunity to take the experience of using Apple products to new heights.
So as we look to build in these incredible new capabilities, we want to ensure that the outcome reflects the principles at the core of our products. It has to be powerful enough to help with the things that matter most to you. It has to be intuitive and easy to use. It has to be deeply integrated into your product experiences. Most importantly, it has to understand you, and be grounded in your personal context, like your routine, your relationships, your communications and more. And of course, it has to be built with privacy from the ground up. Together, all of this goes beyond artificial intelligence. It’s personal intelligence, and it’s the next big step for Apple.
I include the entire quote[3] because I see this as Apple’s AI thesis. Their privacy-first approach to AI is all about experiences and functionality, not data collection. Technology as Infrastructure, not as a business model. Apple Intelligence gives Apple (and developers) the ability to craft experiences that are relevant to you, using the extraordinarily personal information available on your device, and without compromising your privacy.
This is the right approach. People care about what technology lets them do—or can do for them—not the technology itself. They buy a new iPhone because it “takes better photos,” not because it has an “ƒ/1.78 aperture.” The technology enables the feature, but it’s not the feature.
For Apple, it’s not about AI, it’s about what AI enables.
What sets Apple Intelligence apart from other offerings are Siri’s deep integration with the system, on-device processing, and new cloud server infrastructure.
With Siri’s improved integrations, better natural language understanding, and awareness of my personal context, my iPhone, iPad, and Mac—which already know more about me than my wife or mom—will be able to use that knowledge even more directly.
It’ll do this without needing to go to the cloud. Much of Apple Intelligence will be processed locally, which is a massive win for both speed and privacy. (It does require devices with the latest Apple silicon: iPhone 15 Pro, or any M-family iPad or Mac. One can presume any new phones announced this year will work.)
For requests too complex to process locally, Apple’s new cloud server infrastructure adds scalability. Private Cloud Compute uses Apple silicon-powered servers created specifically for this task. Limited data is sent to those servers, the data is used only for your requests (not to train models for others), and then the data is deleted once the task is complete.
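Apple hasn’t published an API for this local-versus-cloud routing, but the flow they describe is easy to picture. Here’s a purely hypothetical sketch; every type and name below is invented for illustration and is not a real Apple API:

```swift
// Hypothetical illustration of the on-device-first flow Apple describes.
// None of these types are real Apple APIs; they're invented for this sketch.

struct AssistantRequest {
    let prompt: String
    let complexity: Int                         // stand-in for a real capability check
    func minimalContext() -> String { prompt }  // send only what the request needs
}

struct AssistantResponse { let text: String }

enum OnDeviceModel {
    static func canHandle(_ request: AssistantRequest) -> Bool {
        request.complexity < 5                  // simple requests stay local
    }
    static func run(_ request: AssistantRequest) async -> AssistantResponse {
        AssistantResponse(text: "handled on device: \(request.prompt)")
    }
}

enum PrivateCloudComputeStub {
    static func run(_ payload: String) async -> AssistantResponse {
        // Per Apple: data is used only for this request, never retained
        // or used to train models, and is deleted when the task completes.
        AssistantResponse(text: "handled in Private Cloud Compute: \(payload)")
    }
}

func route(_ request: AssistantRequest) async -> AssistantResponse {
    if OnDeviceModel.canHandle(request) {
        return await OnDeviceModel.run(request)  // fast and private: nothing leaves the device
    }
    return await PrivateCloudComputeStub.run(request.minimalContext())
}
```

The privacy win falls out of the structure: most requests never leave the device, and the ones that do carry only what they need.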
As Craig said,
You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud.
This is what differentiates Apple from everyone else doing AI, and why Apple remains one of the few companies I trust with vast amounts of my personal data. Apple Intelligence is built, as Tim noted, “in a uniquely Apple way.” They aren’t trying to monetize your data, so there’s no need to hold onto it. It’s a solution “only Apple” could make.
There were several demos showcasing the capabilities of Apple Intelligence. Many were tied to generative writing and images, done locally, and constrained to specific contexts. I found them interesting, but not exciting.
There were three examples which delighted me, all tied to Siri’s deeper integration into and across the system.
I’ve been dreaming about these types of interactions since first seeing Apple’s Knowledge Navigator concept video, where—among other futuristic things—an “intelligent agent” has such deep contextual knowledge about a professor, it jumps in—unprompted—with the answer to a forgotten appointment time.
The WWDC video opens with Phil Schiller flying an airplane filled with Apple executives[4]. Craig is pumping them up ahead of the show, then they all parachute out above Apple Park. The airplane, the jumpsuits, and the parachutes themselves are all liveried in the six colors of the classic Apple logo. It’s a lovely callback to Apple’s history.
Ninety minutes later, toward the end of the event, Craig gives us the tag line for Apple Intelligence: AI for the rest of us.
This is another callback to the earliest ads for Macintosh, the computer for the rest of us, and I think it encapsulates everything about how Apple envisions their place within the broader AI ecosystem.
The original series of ads compared the graphical user interface and one-button mouse of Macintosh to DOS-based PCs where you typed in cryptic, text-based commands to get things done. In one ad, a Macintosh is removed from a zippered bag:
It’s more sophisticated, yet less complicated.
It’s more powerful, yet less cumbersome.
It can store vast amounts of yesterday, or tell you what’s in store for tomorrow.
It can draw pictures, or it can draw conclusions.
It’s a personal computer from Apple, and it’s as easy to use as this.
The ad ends with a finger pressing a mouse button, highlighting the simplicity of using a Macintosh.
With new visuals, and some minor changes to the narration, this could be an ad for Apple Intelligence.
It can’t be coincidence that a Keynote that opens by evoking the early days of Apple computing ends with one too. Apple is saying there’s Artificial Intelligence, which is all about LLMs and models and prompt engineering, and which requires specialized knowledge and lots of typing to accomplish anything, and there’s Apple Intelligence, which uses context and relevance and personal knowledge to make it easy to be creative and productive.
The pundits worry Apple has fallen behind.
Apple is telling us this is just the beginning.
In October 2023, I retired from Apple after 22 years. Being on the outside means I can, for the first time in two decades, write about Apple. So, here we are. ↩︎
I’m hardly the first person to come up with “Apple Intelligence” as a likely marketing name. After all, Apple has been known to use the occasional pun in their brand marketing. I would have been disappointed if they hadn’t used it. ↩︎
Beth Dakin, Craig Federighi, Cyrus Irani, Dr. Sumbul Desai, Kelsey Peterson, Mike Rockwell, Phil Schiller, Ron Huang, Ronak Shah, and Susan Prescott. Also: Nine people jumped, but only eight parachutes were shown to open. ↩︎