Apple’s annual Worldwide Developer Conference (WWDC) kicked off Monday with a 90-minute Keynote (for the masses), an hour-long Platform State of the Union (for the geeks), and, for what I think is the first time, the complete catalog (or nearly so) of session videos dropped on Day One.[1]
Last year’s WWDC was a novel experience for me: it was my first as an outsider after 23 shows on the inside, and still close enough to my departure that I felt a frisson of excitement—tinged with the slight sting of missing out.
For WWDC25, I watched as a mere enthusiast, my excitement more muted—still anticipatory, but subdued. No fluttering butterflies leading up to this one.
A ton was announced on Monday, but let me briefly touch on just these four:
Liquid Glass is Apple’s “new material” used in the refreshed “universal design” across all of its platforms, with “the optical quality of glass, and a fluidity only Apple can achieve.”
From Apple’s press release:
This translucent material reflects and refracts its surroundings, while dynamically transforming to help bring greater focus to content, delivering a new level of vitality across controls, navigation, app icons, widgets, and more.
Along with the new glassy look, Apple also redesigned the various controls, toolbars, and navigation. The new UI feels elegant, clean, and dynamic. You might even call it playful. The “fluidity” and dynamism are very much like those of the Dynamic Island, which squishes and stretches like blobs of black goo in a lava lamp—but here, it’s lively beads of translucent glass. I appreciate the whimsy.
However, the translucency makes a lot of text difficult to read, and the dynamism can be distracting. Also, in just the first few hours of using the various OS 26 betas, I found several visual bugs and glitches (as you would expect in a Seed 1 release). I expect later releases will address these issues as they tune things based on developer and customer feedback. The core ideas behind the design are intriguing, and I’m cautiously optimistic.
I love using my iPad, but I’ve rarely been productive with it, because I tend to jump between apps a lot while working on something—notes, calendar, mail, web, terminal, what have you—and the low data density in most iPad apps, coupled with the limited ability to see multiple apps at once, makes for a much slower computing experience.
This is true even when using an iPad with a Magic Keyboard—perhaps more so, as it feels like a laptop… but most definitely isn’t.
No doubt 35+ years of using a Mac has ingrained certain, shall we say, expectations of how a “computer” should work.
iPadOS 26 may finally change that.
I installed it on both an iPad mini 6th-generation and iPad Pro 12.9” 6th-generation. Having multiple, resizable windows on my iPad is delightful. It immediately improved my multitasking.
On the iPad Pro it feels somewhat akin to using a MacBook Air in “Larger Text” (lower resolution) mode. Not quite as many windows on screen, and still low density, but way more productive than two apps side-by-side (plus Slide Over).
Multi-window mode on the iPad mini is less helpful on-device, but the 6th gen has a USB-C port, which I used to connect it to my Apple Studio Display, keyboard, and trackpad and work on a big (mirrored) screen. I’m excited about the potential to carry just an iPad mini and jack into a destination setup (imagine hotels with a 4K TV, keyboard, and mouse/trackpad in every room!). I’m even more jazzed about a future when I can do this with an iPhone.
Liquid Glass and iPad multi-windows were irresistible enough that for the first time in years—possibly ever!—I felt compelled to install Beta 1 on personal devices. Test devices, to be sure, but still something of a milestone for me. The software looked too interesting to wait for more stable betas to arrive.
This one is for you developers, but the impact on customers could be massive. From Apple’s Apple Intelligence press release:
With the Foundation Models framework, app developers will be able to build on Apple Intelligence to bring users new experiences that are intelligent, available when they’re offline, and that protect their privacy, using AI inference that is free of cost. For example, an education app can use the on-device model to generate a personalized quiz from a user’s notes, without any cloud API costs, or an outdoors app can add natural language search capabilities that work even when the user is offline.
The framework has native support for Swift, so app developers can easily access the Apple Intelligence model with as few as three lines of code. Guided generation, tool calling, and more are all built into the framework, making it easier than ever to implement generative capabilities right into a developer’s existing app.
Apple is effectively making available to developers the same AI tooling it uses under the covers, at no cost. It’s on-device, so it’ll work without a network connection. It offers over a dozen highly optimized capabilities that are included with the OS, so no duplicate models bloating your apps and taking up precious space.
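To give a feel for the “three lines of code” claim, here’s a minimal sketch based on the API shown in Apple’s announcement and sessions (`LanguageModelSession` and its async `respond(to:)` call); exact signatures may shift in later betas:

```swift
import FoundationModels

// Create a session backed by the on-device system language model.
// No API key, no cloud round trip — inference runs locally and offline.
let session = LanguageModelSession()

// Prompt the model; the call is async and can throw (for example,
// if Apple Intelligence isn't available on the current device).
let response = try await session.respond(
    to: "Write a one-question quiz about the water cycle."
)
print(response.content)
```

Guided generation builds on this same session: you annotate a Swift type as generable and ask the model to fill it in, getting typed output instead of raw text to parse.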
I think this could be huge. Developers don’t need to pay for access to a cloud-based model—a financial and privacy win. They don’t need to include their own model—a support and storage space win. And when Apple makes improvements to the Foundation model, all apps immediately benefit—a developer and customer win.
I’ll go out on a limb and say the Foundation Models framework will be the most consequential API to come out of WWDC25, enabling more innovation than any other framework introduced this year.
I’m itching to see what developers do with it. If you’re a developer curious about the Foundation Models framework (or you’re just plain curious), here are a few videos to get you started:
Apple’s OS naming scheme has gotten confusingly out of sync: iOS 18, macOS 15, watchOS 11, visionOS 2—these all shipped in the same 2024–2025 release cycle, but you’d never know that by the numbering.
(You might be fooled into thinking that iOS 18 is three versions ahead of macOS 15, when in fact macOS 15 is the twenty-first version of macOS.)
Apple releases new major OS versions annually, so why not name them that way? Thus we now have iOS 26, macOS Tahoe 26, iPadOS 26, watchOS 26, visionOS 26, and tvOS 26. This renaming makes practical sense, even if it’s weird to go from “iOS 18” to “iOS 26” in one year (and even weirder to go from “visionOS 2” to “visionOS 26”).
The flip side: it’s going to make your system feel older than ever when you’re still running iOS 26 in 2029.
One year ago, Apple spent forty minutes introducing Apple Intelligence. They no doubt had high hopes for its success. Instead, it was a slow trickle of mostly missable features, culminating in a hushed statement that their biggest features required more time to bake.
If Apple was disappointed by the reception, you wouldn’t know it by Monday’s event. Software chief Craig Federighi spent all of three minutes talking about Apple Intelligence as a product before moving on to the redesign and new OS features. Sure, Apple Intelligence was enthusiastically mentioned as part of several features (I’m excited for more-intelligent Shortcuts), but it wasn’t the victory lap Apple likely anticipated. It was a tacit admission that they’d pre-announced features that weren’t ready—and I suspect that won’t happen this year. I believe everything we saw in the Keynote will land in *OS 26 (though perhaps as late as April of next year!). I don’t think Apple ever wants to go through the embarrassment of missing their stated deadlines again.
This has long been a goal, but the work involved to rehearse, record, edit, and review over 100 sessions is, shall we say, considerable. My congrats to the teams for pulling it off this year. ↩︎