In a statement on Friday to John Gruber of Daring Fireball, Apple acknowledged a delay in the release of Apple Intelligence-powered Siri:
Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like Type to Siri and product knowledge, and added an integration with ChatGPT. We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.
(As far as I can tell, Apple provided this statement only to Gruber; no other outlet appears to be reporting it independently.)
I’m among the many people disappointed, but not surprised, by the delay. In my first piece on this site, I expressed my excitement for the just-announced Apple Intelligence. In it, I highlighted three demos which delighted me, all tied to Siri’s deeper integration into and across the system.
Today, none of those examples work yet, and seemingly won’t for quite some time.
I’ve previously expressed my sympathy for the Siri team. In that same piece, I referenced a Bloomberg story suggesting longtime Apple exec Kim Vorrath is moving to Apple Intelligence, commenting:
I’ve watched Vorrath and her Program Office teams operate from the inside for many years. The biggest impact she and her team had across engineering was instilling discipline: every feature or bug fix had to be approved; tied to a specific release; and built, tested, and submitted on time. It was (is!) a time-intensive process—and engineering often complained about it, sometimes vocally—but the end result was a more defined, less kitchen-sink release each year. To a significant extent, her team is the reason why a feature may get announced at WWDC but not get released until the following spring. She provided engineering risk management.
It seems like Vorrath is already making an impact.
Most of those commenting on this delay have focused on internal technical issues as the cause. That makes sense and is most likely the case: all of the demos at last year’s WWDC for Personal Context were based on Apple apps and features—Photos, Calendar events, Files, Messages, Mail, and Maps (plus real-time flight details). Most of what they’re dealing with is likely tied to Apple Intelligence- and Siri-specific issues.
But another thought occurred to me, an important aspect of Apple Intelligence that may be overlooked: What is the impact of third-party developers on this delay? Not the impact on them, but of them.
Apple’s statement says that “a more personalized Siri” has “more awareness of your personal context” and “the ability to take action for you within and across your apps.” Much of that functionality would rely on third-party apps and the knowledge those apps have about us.
I can’t help but wonder: Have enough developers adopted the necessary technologies (App Intents, etc.) to make Apple Intelligence truly compelling?
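To make that question concrete: adopting App Intents essentially means describing an app’s data and actions to the system in Swift so Siri can find and act on them. The sketch below is a hypothetical, minimal example of a mail-style app exposing a flight reservation; the type names and stubbed lookups are mine, not Apple’s sample code, and real adoption involves considerably more.

```swift
import AppIntents

// Hypothetical entity a mail app might expose so the system can reason about a reservation.
struct FlightReservation: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Flight Reservation"
    static var defaultQuery = FlightReservationQuery()

    var id: UUID
    var flightNumber: String
    var arrivalTime: Date

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(flightNumber)")
    }
}

// The query the system uses to resolve reservation identifiers back into entities.
struct FlightReservationQuery: EntityQuery {
    func entities(for identifiers: [UUID]) async throws -> [FlightReservation] {
        // Look the reservations up in the app's own data store (stubbed out here).
        []
    }
}

// Hypothetical intent Siri could invoke to pull reservation details out of the app.
struct GetFlightReservationIntent: AppIntent {
    static var title: LocalizedStringResource = "Get Flight Reservation"

    @Parameter(title: "Flight Number")
    var flightNumber: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // Search the app's data for a matching reservation (stubbed out here).
        .result(value: "No reservation found for \(flightNumber)")
    }
}
```

Multiply that across every data type and action an app offers, and it’s easy to see why a developer might ask whether the effort is worth it.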
Of the three WWDC demos I noted, it’s the last one described by Kelsey Peterson (Director, Machine Learning and AI) that’s the most extensive example of what “a more personalized Siri” would be capable of. Here’s how I summarized it:
You’re picking your mom up from the airport. You ask Siri “when is my mom’s flight landing?” Siri knows who “my mom” is, what flight she’s on (because of an email she sent earlier), and when it will land (because it can access real-time flight tracking). You follow up with “what’s our lunch plan?” Siri knows “our” means you and your mom, when “lunch” is, that it was discussed in a Message thread, and that it’s today. Finally, you ask “how long will it take us to get there from the airport?”. Siri knows who “us” is, where “there” is, which airport is being referenced, and real-time traffic conditions.
(Watch the video, starting at 1:22:01.)
Imagine if, instead of Apple Mail, Messages, and Maps, Peterson was using Google Gmail, Messages, and Maps. Or Proton Mail, Signal, and Mapquest. If any of these apps don’t integrate with Apple Intelligence, the whole experience she described falls apart.
The key takeaway from the demo is that users won’t have to jump into individual apps to get the answers they need. This positions apps as subordinate to Apple Intelligence.
Considering Apple’s deteriorating relationship with the developer community, will third-party developers want their app to be one more piece of Apple’s AI puzzle? How many developers are willing to spend time making their apps ready for Apple Intelligence, just so Apple can disintermediate them further? Unless customers are clamoring for the functionality, or it’s seen as a competitive advantage, it’s work that few developers will consider a priority—witness the reportedly low number of native Apple Vision Pro apps as an example of the impact developers can have on the perceived success of a platform.
Much of the long-term success of Apple Intelligence depends on widespread adoption of App Intents by third-party developers—many of whom, at least initially, may see little reason to participate. While Apple is unlikely to delay Apple Intelligence because of third-party developers alone, weak App Intents adoption could seriously hamstring the feature. Perhaps Apple, in addition to addressing technical issues, will use the extra time to drive that adoption. Apple Intelligence cannot succeed on first-party apps alone.
John Gruber has a long, thoughtful piece at Daring Fireball about the complications (and relative importance) of creating bootable backups in the modern Mac era (triggered by a now-fixed Apple bug):
I don’t think anyone would dispute that “creating a bootable startup drive clone” has gotten complicated in the Apple Silicon era, which began with MacOS 11 Big Sur in late 2020. Not to mention the complications that were introduced with the switch from HFS+ to APFS with MacOS 10.13 High Sierra in 2017, and the read-only boot volume and SIP with MacOS 10.15 Catalina in 2019. M-series Macs boot weirder than Intel-based Macs. Not bad weird. I think it’s all justified in the pursuit of security (SIP stands for System Integrity Protection, and is aptly named) and elegant system architecture. But booting is now makes-things-much-more-difficult-than-before weird for tools like SuperDuper and Carbon Copy Cloner.
He goes into deep detail about how bootable backups work with SuperDuper, his backup tool of choice. (For what it’s worth, my preferred app has long been its “archrival”, Carbon Copy Cloner, which I’ve been using—and recommending—for at least two decades, though the earliest reference I can find to it is a 2008 post on my now-defunct personal blog. I also worked with the author, Mike Bombich, when we were both at Apple.)
Gruber concludes:
Having my SuperDuper-cloned backup drive be bootable is nice to have, but I really can’t say I need it any more. 20, 15, even just 10 years ago, that wasn’t true — I really did want the ability to boot from my backup drive at a moment’s notice. But that’s really not true any more for me. It probably isn’t for you, either. It definitely isn’t true for most Mac users.
But it remains true for some people, who are using (or responsible for) Macs in high-pressure tight-deadline production environments. Live broadcast studios. Magazines or newspapers with a deadline for the printer that’s just hours (or minutes) away. Places with strict security/privacy rules that forbid cloud storage of certain critical files. If the startup drive on a production machine fails, they need to get up and running now. Plug in a backup drive, restart, and go. Anything longer than that is unacceptable.
I agree with Gruber broadly: I was also once a “bootable backups” guy, and I too haven’t used one in at least a decade. And certainly production environments need fast recovery options to handle time-critical failures.
But booting from a backup drive “at a moment’s notice”? Well, that’s just straight-up bananas!
OK, let me be clear: Gruber is a smart and technically savvy fellow, and I’m confident he doesn’t mean it the way I’m (overly dramatically) interpreting it here. But let me state for the record:
A backup you boot from is no longer a backup. It is now a production device.
(I’d originally added “and the sole copy of your data” to the end of that, but that’s not necessarily true (and certainly not what Gruber meant). In an environment like what Gruber describes, there would (should!) never be a “single backup” of critical data. The backup drive that you plug in, restart, and go would likely be one of multiple such drives, kept up to date and designated as, effectively, a “hot spare.” In fact, I’d wager most such environments go beyond mere data redundancy, to device redundancy: Backup computers, not just backup data.)
I spent a significant part of my early career working in technical support and as a sysadmin at various magazine publishers, and later, in early web publishing at marketing companies and advertising agencies. Among other things, I was the person responsible for creating and implementing backup policies. Part of that was having options for handling critical path failures—recovering quickly when computers or drives failed.
Bootable backups were part of that process, but not in the way Gruber appears to imply. We’d never use a backup as a boot drive without having another copy of that drive.
Whether we were continuously making that second backup, or made it at the time we needed it, we always ensured that a second backup existed before we attempted any recovery. The last thing we wanted was to screw up the backup too.
The only time I would use a bootable backup drive directly—without making another copy—was if I specifically made it to boot from it. This wasn’t a backup in the traditional sense, but a clone, a snapshot from which to work. It wasn’t a hedge against the future, but a way of replicating a system to work on now. In this scenario, it didn’t matter if I screwed up the bootable backup, because the data still existed and could be re-cloned.
To be very clear: In the production environments I worked in, we would never use the current and only backup to recover and keep working (or as Gruber put it, to “get up and running now”). Our data was as important as our deadlines, and we invested the necessary time and money into systems that allowed fast recovery without sacrificing either.
One standard process we implemented was having separate boot partitions and data partitions. We created bootable recovery drives—so computers could be used if the boot partition failed—and separate data drives, with backups running to them as often as the amount of work we were willing to lose dictated. Those drives were themselves also backed up near- or offline.
For any “critical path” data or systems, we also kept “hot spares”—devices we could press into service at a moment’s notice. These were maintained as if they were in active use, because at any moment, they could be.
Gruber mentions that he’s “suffered very few disk calamities.” He’s fortunate. I’ve had seemingly more than my fair share of catastrophic disk failures—some caused by my own poor backup hygiene. Over the years, my backup process has oscillated between very disciplined and totally laissez-faire.
Today, it leans toward the latter, in part because a lot of my data is in The Cloud™ and I can get to it from multiple devices—it almost feels like a backup. It’s not, but I might be excused for acting otherwise.
Many of us keep irreplaceable information in the cloud: Photos of your kids. Early email flirtations with your now-spouse. Tax records. Software serial numbers. The list is endless. We trust Apple and Dropbox and Google Drive to keep our stuff safe. But they’re not backups. Delete it here, and it’s deleted there and everywhere. Or, worse still, they delete it without having adequate backups of their own.
This is why I set iCloud Photos to “Download Originals to this Mac” and disable iCloud’s “Optimize Mac Storage” in System Settings. If the files are local, I take responsibility for them.
It’s also why I have dozens of old hard drives with copies of copies of backups of data I’ll probably never look at again, but which makes me happy knowing I have them, just in case.
My backup process works fine but is not as regimented as it could be. It currently relies on a combination of Carbon Copy Cloner for local backups (semi-automated on drive connection, not done as regularly as I should, sadly) and Backblaze, which I’ve used since at least 2012, for automatic, remote backups. Both have saved my bacon more than once, but it’s unfocused, made worse by having multiple computers in various states of sync.
I plan to revamp my backup process soon. I’m thinking of reintroducing Time Machine for more granular, local backups; with Carbon Copy Cloner handling replication of those (and other) backups; and Backblaze acting as my online-and-offsite copy. I’ve been eyeing a Synology since my Drobo died several years back (and the company went out of business), but I’m considering a Mac mini with a JBOD (Just a Bunch of Disks). And I’m looking into an offsite backup stored locally with friends, and potentially cold storage with family in another state.
My biggest challenge is the sheer volume of data I have—approaching 80TB across several drives. How much of that is duplicate data? Who knows! (Prolific podcaster John Siracusa has a brilliant new app (currently available in TestFlight for ATP members) that would help here. I’m excited by its potential.)
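For illustration only (and not a description of Siracusa’s app), the brute-force way to answer the duplicate question is to hash every file and group paths by digest. Here’s a minimal Swift sketch; the example path and the read-the-whole-file hashing are simplifications of mine:

```swift
import Foundation
import CryptoKit

// Naive duplicate finder: hash every regular file under a root and group paths by digest.
// (A real tool would be smarter: compare sizes first, stream large files, skip bundles, etc.)
func findDuplicates(under root: URL) -> [String: [URL]] {
    var groups: [String: [URL]] = [:]
    guard let enumerator = FileManager.default.enumerator(
        at: root,
        includingPropertiesForKeys: [.isRegularFileKey]
    ) else { return groups }

    for case let fileURL as URL in enumerator {
        guard (try? fileURL.resourceValues(forKeys: [.isRegularFileKey]))?.isRegularFile == true,
              let data = try? Data(contentsOf: fileURL) else { continue }
        let digest = SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
        groups[digest, default: []].append(fileURL)
    }
    return groups.filter { $0.value.count > 1 }
}

// Example: report groups of identical files under ~/Pictures.
let duplicates = findDuplicates(under: URL(fileURLWithPath: NSHomeDirectory() + "/Pictures"))
for (digest, files) in duplicates {
    print("\(files.count) identical copies (\(digest.prefix(12))…):")
    files.forEach { print("  \($0.path)") }
}
```

Even a naive pass like that would give me a rough sense of how much of those tens of terabytes is redundant.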
I’m open to hearing about experiences with and alternative strategies for backup solutions.
Over the weekend and into this morning, seemingly every entry in my newsfeed was about DeepSeek, a China-based AI lab which rolled out a highly capable AI model called R1. By far the best of the summaries I saw was from Ben Thompson at Stratechery.
The long and short of it: DeepSeek’s newly announced R1 model reportedly equaled the capabilities of OpenAI’s o1 model, which is considered the leader in the space, while using vastly less powerful—and vastly less expensive—hardware to do so. This led to a meltdown of sorts in both the AI community at large and the tech stock market. Nvidia, the world’s most valuable “AI” company, cratered nearly 17% on the news, and other AI-adjacent companies were also affected, both positively and negatively.
(Thompson’s podcast partner, John Gruber, helpfully distills the market impact over at Daring Fireball.)
Thompson delves into the backstory of DeepSeek, explains some of the technical underpinnings, and assesses the ramifications (real and imagined) for the future of AI computing.
He also highlights a tweet from Microsoft CEO Satya Nadella suggesting one future we can certainly anticipate (and introducing me to Jevons paradox):
Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of.
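As a back-of-the-envelope illustration of Jevons paradox (with entirely made-up numbers, not DeepSeek’s or anyone’s real figures): if each unit of AI work gets ten times cheaper but demand grows thirty-fold in response, total spending on compute goes up, not down.

```swift
// Toy Jevons paradox arithmetic with hypothetical numbers.
let oldCostPerQuery = 1.00           // imagined cost per query before the efficiency gain
let newCostPerQuery = 0.10           // cost after a hypothetical 10x efficiency improvement

let oldQueriesPerDay = 1_000_000.0   // imagined demand at the old price
let newQueriesPerDay = 30_000_000.0  // imagined demand once each query is 10x cheaper

let oldSpend = oldCostPerQuery * oldQueriesPerDay   // 1,000,000
let newSpend = newCostPerQuery * newQueriesPerDay   // 3,000,000

print("Daily spend: \(oldSpend) before, \(newSpend) after, despite each query costing 10x less")
```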
Cheaper and ubiquitous AI is coming. We’re edging ever closer to an intelligent agent future.