5 Software Turning Points Predicted by Hugging Face's Co-founder
Thomas Wolf's five predictions for how AI will fundamentally reshape software architecture, from the end of dependency culture to AI-native programming languages.
Thomas Wolf’s recent essay argues that in an era where AI writes code, the structure of software gets inverted. Some of his predictions are already visible in day-to-day work. Others feel speculative in ways he doesn’t fully acknowledge. I want to work through each one and say where I agree, where I’m skeptical, and what I think he gets wrong.
The Dependency Stack Gets Thinner
Pulling in third-party packages has always been the path of least resistance. Writing everything yourself took too long. But when an AI agent can build from scratch in minutes, going custom becomes realistic, and the tradeoffs change: fewer external packages means fewer security holes, smaller app sizes, and faster execution.
Working with Claude Code over the past few months, I’ve noticed my npm dependency depth shrinking. The supply-chain attack pattern, where one compromised package endangers the thousands of projects that transitively depend on it, loses its grip as the tree flattens. Smaller bundles load faster. This prediction already has evidence behind it.
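The point is easiest to see with micro-dependencies. A sketch of the tradeoff, assuming a project that once pulled in a package solely for string padding (the function name is illustrative, not a real crate API):

```rust
// Instead of adding a padding crate (one more node in the supply chain),
// an agent can generate the few lines the project actually needs.
// `pad_left` is an illustrative name, not a real library API.
fn pad_left(input: &str, target_len: usize, pad: char) -> String {
    // How many pad characters are missing (zero if input is already long enough).
    let missing = target_len.saturating_sub(input.chars().count());
    let mut out = String::with_capacity(target_len);
    out.extend(std::iter::repeat(pad).take(missing));
    out.push_str(input);
    out
}

fn main() {
    assert_eq!(pad_left("42", 5, '0'), "00042");
    assert_eq!(pad_left("hello", 3, ' '), "hello"); // never truncates
}
```

A few lines of local code can't be hijacked through a compromised registry account, though they also never receive the scrutiny a widely used package gets.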
The risk Wolf doesn’t address is correctness. Custom code built by an agent may have subtler bugs than a well-tested library maintained by hundreds of contributors. Reducing dependency depth trades one class of risk for another.
Legacy Code Becomes Rewritable
The reluctance to touch legacy code follows Lindy-effect logic: if it’s survived this long, don’t break it. But if AI can read tens of thousands of lines and rewrite them in a different language, that calculus shifts. Wolf estimates that the time and cost of rewriting legacy code have dropped to less than a tenth of what they were.
To his credit, he acknowledges the limitation here: AI still misses unexpected bugs and edge cases. That’s why formal verification, mathematically proving that code behaves as intended, becomes a prerequisite rather than an optional layer. Deploying AI-rewritten code to production without formal verification is still a gamble, and most teams aren’t doing it yet.
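It's worth being concrete about what formal verification means here: not more tests, but a machine-checked proof that holds for every input. A toy illustration in Lean 4, far simpler than anything involved in verifying a rewritten production system:

```lean
-- A machine-checked guarantee, not a test suite: this proof covers
-- every possible list, which is what formal verification buys over testing.
theorem reverse_preserves_length (l : List Nat) :
    l.reverse.length = l.length := by
  simp
```

A test suite samples a few inputs; the proof above is a statement about all of them, which is exactly the assurance an AI-rewritten codebase would need before replacing battle-tested legacy code.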
Strongly Typed Languages Get a Second Look
Programming language popularity has always been driven more by psychology than technical merit: ease of learning, community warmth, job market signal. LLMs don’t factor any of that in. Languages with strict type systems that catch mistakes at compile time are more reliable environments for AI to work in.
Rust is the clearest example. It is notoriously hard for humans to learn, but its rules are explicit and its margin for error narrow, which suits AI well. Whether Python can maintain its dominant position will be clearer within five years. My honest guess is it holds on longer than Wolf expects, because the training data skew is enormous.
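The "narrow margin for error" claim can be made concrete. A minimal sketch (names are illustrative): Rust encodes "this value might be absent" in the type itself, so a generated caller cannot silently ignore the missing case, because the compiler rejects it outright.

```rust
// The Option<u16> return type forces every caller to handle absence.
// An AI that forgets the missing-value case gets a compile error,
// not a runtime crash in production.
fn find_port(config: &str) -> Option<u16> {
    config
        .lines()
        .find_map(|line| line.strip_prefix("port="))
        .and_then(|v| v.parse().ok())
}

fn main() {
    let config = "host=localhost\nport=8080";
    // let p: u16 = find_port(config);         // rejected at compile time: Option<u16> is not u16
    let port = find_port(config).unwrap_or(80); // absence handled explicitly
    assert_eq!(port, 8080);
}
```

This is the mechanism behind the prediction: every mistake the compiler catches is one the model gets as immediate, precise feedback instead of a bug report later.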
Open Source Loses Its Original Engine
Open source was never just about sharing code. It was a culture built on learning together, building in public, and a sense of belonging. When AI writes the code and AI reads the code, that motivational structure changes in ways that are hard to predict.
Wolf takes it further: he envisions communities where AI models create and share libraries with each other. If that happens, the alignment of those AI systems would determine the direction of entire ecosystems. I find this plausible but unnerving, and I don’t think Wolf gives enough weight to how much open source depends on human motivation that has no obvious AI substitute. The future of open source without that engine looks uncertain.
Languages Designed for AI, Not Humans
When humans design programming languages, expressiveness and safety trade off against each other. Wolf argues there’s no reason AI faces that same dilemma. If humans no longer need to read the code, the design constraints change entirely.
This was the most imaginative part of his essay and also the least grounded. The age-old debate of compile-time versus runtime error catching could become irrelevant for AI. Languages that don’t need to be readable by humans could emerge. I find this genuinely interesting and genuinely uncertain in equal measure. We have no real evidence yet about what AI-native language design would look like or whether it would converge on anything stable.
Two Predictions Already Landing, Three Still Speculative
Reduced library dependency and the rise of strongly typed languages are changes already visible on the ground. The rewriting of legacy code is beginning but nowhere near mainstream. The open-source motivation shift and AI-native languages are further out, and Wolf’s confidence in them exceeds what the current evidence supports.
The skill that will hold value through all of this is understanding how code gets created, not just writing it. That’s the thread running through all five predictions, and it’s the one I find most durable.