Over the weekend and into this morning, seemingly every entry in my newsfeed was about DeepSeek, a China-based AI lab which rolled out a highly capable AI model called R1. By far the best of the summaries I saw was from Ben Thompson at Stratechery.
The long and short of it: DeepSeek’s newly announced R1 model reportedly matches the capabilities of OpenAI’s o1 model, which is considered the leader in the space, but was trained on vastly less powerful—and vastly less expensive—hardware. This led to a meltdown of sorts in both the AI community at large and the tech stock market. Nvidia, the world’s most valuable “AI” company, cratered nearly 17% on the news, and other AI-adjacent companies were also affected, both positively and negatively.
(Thompson’s podcast partner, John Gruber, helpfully distills the market impact over at Daring Fireball.)
Thompson delves into the backstory of DeepSeek, explains some of the technical underpinnings, and assesses the ramifications (real and imagined) on the future of AI computing.
He also highlights a tweet from Microsoft CEO Satya Nadella suggesting one future we can certainly anticipate (and introducing me to Jevons paradox):
Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of.
Cheaper and ubiquitous AI is coming. We’re edging ever closer to an intelligent agent future.