Torvalds Speaks: Impact of Artificial Intelligence on Programming

Written by Massa Medi
Artificial intelligence (AI) is quickly becoming an inescapable part of the world of software development. But how intelligent is “artificial intelligence,” really? As the conversation with Torvalds illustrates, the most influential advancements are being made through large language models (LLMs), the powerhouse technology behind many of today’s headline-grabbing AI tools.
What Are Large Language Models, Really?
When the hype around AI reaches a fever pitch, it’s easy to imagine these systems are thinking, self-aware machines. But let’s clear the air: LLMs are more like autocorrect on steroids. Their secret sauce? Predicting the *next most likely word* in a sentence at superhuman speed, extrapolating from patterns in enormous datasets. The result isn’t true intelligence, but as Torvalds points out, the impact on our daily lives is enormous nonetheless.
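To make “predicting the next most likely word” concrete, here is a toy sketch in Python. The probability table and words are invented for illustration; a real LLM learns a distribution over tens of thousands of tokens from training data, rather than using a tiny hand-written lookup.

```python
# Toy illustration of next-word prediction (NOT a real LLM: a real model
# learns probabilities over huge vocabularies from training data; this
# hand-written table just shows the core idea of picking the most likely
# next word given the recent context).
NEXT_WORD_PROBS = {
    ("the", "quick"): {"brown": 0.7, "red": 0.2, "lazy": 0.1},
    ("quick", "brown"): {"fox": 0.9, "dog": 0.1},
}

def predict_next(context):
    """Return the single most likely next word for a two-word context."""
    probs = NEXT_WORD_PROBS[context]
    return max(probs, key=probs.get)

# Generate two words, feeding each prediction back in as new context.
sentence = ["the", "quick"]
for _ in range(2):
    sentence.append(predict_next(tuple(sentence[-2:])))
print(" ".join(sentence))  # the quick brown fox
```

The loop is the whole trick: each predicted word becomes part of the context for the next prediction, which is how an LLM strings together fluent text without any understanding of it.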
Is AI Generated Code Already Among Us?
Here’s the million-dollar question: will we soon see code written entirely by LLMs landing in your repository as a pull request? Torvalds is convinced it’s not only possible but likely already happening, albeit on a smaller scale for now. Developers are getting a “helping hand” from AI for routine coding tasks, and automation in coding isn’t exactly new. We’ve moved from machine code to assembler, and from C to Rust, adopting tools that increase our productivity at every step. The difference now is the sophistication and potential reach of these new tools.
So is this just another technical evolution, or the revolutionary change the media proclaims? Torvalds remains pragmatic: while it’s transformative, it isn’t necessarily the paradigm shift proclaimed in every headline. For him, the true allure has always been the thrill of working close to the hardware: digging into kernels and the lowest levels of computation, far removed from high-level, AI-augmented abstraction.
Can AI Help Us Write and Review Code?
There’s plenty of excitement about using AI for coding, but what about one of the most challenging aspects of software engineering: code review and maintenance? The hope is real. Imagine an LLM as an eagle-eyed assistant, tirelessly catching “stupid bugs”: the kind so obvious that they often slip past human reviewers unnoticed. If you’re a developer, you probably recognize that a significant chunk of the bugs in any codebase aren’t subtle at all. They’re simple oversights.
Compilers already do a great job of flagging the most blatant errors, but what about slightly subtler mistakes: a variable used incorrectly, a pattern that doesn’t match expectations? An LLM could flag such inconsistencies, prompting developers with, “Are you sure this is what you meant?” Sometimes it isn’t, and that simple prompt could prevent major headaches down the road.
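The kind of check described here can be sketched mechanically. The snippet below is a minimal, rule-based illustration (not an LLM, which works statistically rather than by fixed rules): it uses Python’s `ast` module to flag names that are read but never assigned, the classic typo-style “stupid bug.” The example source and its variable names are invented.

```python
import ast
import builtins

# Example source with a typo-style bug: `total_cuont` is read, but only
# `total_count` was ever assigned. (Both the snippet and its variable
# names are invented for illustration.)
BUGGY_SOURCE = """
items = [1, 2, 3]
total_count = 0
for item in items:
    total_count += 1
print(total_cuont)
"""

def undefined_names(source):
    """Return names that are read but never assigned in the snippet."""
    assigned, used = set(), set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            else:
                used.add(node.id)
    # Ignore built-ins like `print`; report only truly unknown names.
    return used - assigned - set(dir(builtins))

print(undefined_names(BUGGY_SOURCE))  # {'total_cuont'}
```

A rule like this catches one narrow bug class; the appeal of an LLM reviewer is that it could ask the same “are you sure?” question about patterns no one thought to write a rule for.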
“You call them disparagingly like autocorrects on steroids. And I actually think that they're way more than that… We all are autocorrects on steroids to some degree. And I see that as a tool that can help us be better at what we do. But I've always been optimistic.”
AI as a Tool: Hopeful, Helpful, and Humble
The sentiment here is one of guarded optimism. Sure, it takes a measure of bold hopefulness to think you can revolutionize something as fundamental as kernel development; Torvalds jokes that, 32 years ago, he was “stupid enough” to believe he could build a better kernel than anyone else. That hopeful optimism, he suggests, is the engine behind progress.
In that spirit, the outlook on tools like LLMs is positive: they’re “wonderful”, and they’re going to make a difference. Even if not everyone shares the same level of optimism, there’s little denying their potential.
But What About AI’s Hallucinations And Our Reliance On Them?
Here’s the plot twist: as much as LLMs can help, they’re not infallible. In fact, LLMs are known to “hallucinate,” the technical term for when an AI confidently makes things up. Whenever AI tools automatically write or modify code without a vigilant human in the loop, there’s a risk of introducing errors. The more we let go of oversight, the greater the odds of hallucinations slipping in and causing real-world bugs.
But as Torvalds notes, bugs are an everyday occurrence even without the help of advanced AIs. Software development has always been a dance with imperfection, and, in his view, we might just be adding new missteps to an old routine.
The Verdict: AI in Programming Is Here, But It’s Not Magic
As artificial intelligence embeds itself into development workflows, it brings new capabilities and new risks. LLMs won’t end human programming or the need for expertise and careful code review. Instead, they’ll serve as powerful, ever-improving tools that help us catch dumb mistakes, speed up everyday development, and perhaps give us the optimism needed to keep building a better digital world.
So whether you call LLMs “autocorrect on steroids,” the next frontier in automation, or simply another tool in the ever-growing programmer’s toolkit, one thing is certain: they’re not going away, and their impact will only grow.