Ask HN: How will AI affect learning programming?
Beyond just programming language fundamentals, how will people learn to code now that AI tools are available? What will the shift be, in your opinion, if any?
As it's currently used, AI is replacing Google searches, but without the multiple sources. So in learning to code, AI can explain things, but it won't surface multiple approaches to solving a problem unless asked (which few people do).
IF (strong if) AI begins replacing coders, AI will code in its own optimized language; I speculate something like minified JS or WASM. It doesn't have to be human-readable, it just needs to pass unit tests. So humans will probably only write unit tests, and AI will write code. Knowing programming will no longer be necessary, only prompt-derived unit tests.
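A minimal sketch of that division of labor, in Python for readability (the `slugify` function and its tests are hypothetical, purely for illustration):

```python
import unittest

def slugify(title):
    # Pretend this body was machine-generated and never reviewed;
    # it only has to make the tests below pass.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # The human-authored contract: prompt-derived unit tests.
    def test_lowercases_and_joins(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  a   b  "), "a-b")

if __name__ == "__main__":
    unittest.main()
```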
The DeepSearch on Grok 3 provides links to references, but most people are probably going to want a model with a faster response.
As someone who works in cyber security but dabbled in coding at the beginning of my career, I think it's just like having an improved search engine.
I don't need to Google a particular issue in the hope that someone else had the same problem and posted a solution. I don't need to wade through unprofessional comments on forums.
Having an LLM is almost like having a tutor I can ask questions, even silly questions. I can ask it to draw inferences between particular mistakes, and I can dive deeper. I can also supplement the LLM with Google searches. I can ask it to reframe code, help me fix certain mistakes, or explain particular concepts better.
I'd be excited if I were going to university and had coding subjects, knowing I had an LLM I could ask for help.
LLMs by their nature are good at word and concept association. Cohesiveness over length is where they begin to break down.
I tend to liken them to very drunken scholars. They know things, usually at about a Wikipedia level, and they’re cheery too. But they lack a capacity to doubt themselves; they often are confidently wrong.
Often the greatest help is their grasp of natural language, but given the hiccups… time with them is probably best spent treating them as a drunken librarian – learning how to phrase requests and fetch information.
As one telling recent example, I tried using an LLM to help with some jq (with which I’m rusty); it got a few basics right and then repeatedly tripped over the same syntax hiccup, “correcting” itself to the same answer each time. A StackOverflow search or two, for comparison, answered my questions and taught me some new syntax too. It probably took less time, but more critical thought.
That, coupled with the fact that LLMs tend to give an answer and then an unnecessarily verbose step-by-step, means I tend to dislike them.
I also have a huge bugbear about “AI” as a term because it tells you very little. Plenty of applied statistics (Markov chains, clustering algos, deep learning, e.g. computer vision; even PageRank) is used heavily in research and elsewhere to do a lot of good. Even for the layman: the Seek app by iNaturalist is awesome for identifying common plant species; Stockfish is (now) a NN that dominates in chess.
But these are classifiers, not generators. Evaluating a classifier on a test dataset is, by its very nature, just statistics. Generators, however, are far, far thornier to test, and seem a lot more prone to overfitting.
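To make the contrast concrete, here’s a toy sketch (the labels and predictions are invented) of why scoring a classifier really is just statistics:

```python
# Toy example: scoring a classifier is a simple count over a test set.
test_labels = ["cat", "dog", "dog", "cat", "dog"]   # ground truth (invented)
predictions = ["cat", "dog", "cat", "cat", "dog"]   # model output (invented)

correct = sum(y == p for y, p in zip(test_labels, predictions))
print(f"accuracy: {correct / len(test_labels):.0%}")  # 80% -- one miss in five

# A generator has no single right answer per input to count against,
# which is part of what makes it so much thornier to test.
```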
While I’m not familiar with the weights of a typical trained generator, I imagine the optimal ones will be surprisingly sparse, though not in a structured way – corresponding to a more clustered “small world” network, which IRL seems the most productive.
> they often are confidently wrong.
Only when they can also dismissively sneer at a marketing person will they be truly ready to replace programmers.
Are we talking about "coding" or software engineering? It's somewhat good at "coding", which makes up maybe 25% of my workload, and not so good at engineering, imho.
"coding" is like learning cursive and how to spell words, no matter how good you are at that it won't make you a NYT best seller.
My wife is currently enrolled in a computer science program, and AI was absolutely vital for getting her over the 0-to-1 known programming languages chasm (Python, in her case). It has been less helpful (still very helpful!) for C++ after that, but C++ after Python is much easier than C++ with no background at all. So I don't think ignoring the "just programming fundamentals" phase is the right call. I think that's where the greatest difference is going to be, by far.
I think a great many more experienced people forget just how hard it is to learn your first language, just how many new, unintuitive concepts appear no matter which language you choose. Many, many more people are now able to learn without getting frustrated to the point of giving up, because they all have a digital Aristotle in their pocket to guide them.
After getting past the fundamentals, of course, I think the runaway feedback loop of AI will continue to consolidate real-world programming work into an even smaller pool of currently popular languages. JavaScript, Python, Go, Rust, Java, C, and C++ will all be in this canon. Bash, too, if you count that – current AI tools are much, much better at Bash than e.g. Fish, much to my dismay. So we'll see educational resources follow suit.
We will probably also see renewed interest in the "out of the box" options available on common platforms like Debian. By my count those would be awk, sed, and perl. [1] There's just something very satisfying about being able to spin up a bare server and get useful work done with just the essentials!
[1]: https://hiandrewquinn.github.io/til-site/posts/what-programm...
Very broadly speaking, I think it's going to have the same impact as search engines: productivity increases but understanding decreases.
I very much doubt we're going to see a massive shift where everyone becomes a systems analyst or service designer and we just punch in business requirements and out comes a ready-to-release system.
I can see automated UI testing tools becoming truly amazing if AI agents are even half of what they're hyped up to be. At the moment they kind of work, but they're also a bit of a headache.
I learned Go recently, through a combination of Google, Copilot, ChatGPT, and some books.
Even when the AI was technically right, it often didn’t solve my problem because I didn’t understand how to ask the right question. I didn’t know exactly how to step back and ask at the right level of specificity. “Solve this tiny narrow problem” doesn’t help me, and I often omit broader context that an expert would have added for the situation. Going too broad doesn’t actually get to the specifics of what I need and misses important details.
The trial and error of formulating a meaningful question, rethinking questions, working through hallucinations, and digging into fundamentals from authoritative sources all worked well together.
Similar to how calculators changed learning `sin(pi/2)`.
For some, they never have to learn. For others, it makes everything click. Maybe they didn't have a good teacher or book. But now they can experiment and play around with different variables and wonder. Then other things start to click - angles, friction, and so on.
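In that spirit, a minimal sketch of that kind of play, in Python:

```python
import math

# Try a few inputs and watch the curve take shape.
for label, x in [("0", 0.0), ("pi/6", math.pi / 6), ("pi/4", math.pi / 4),
                 ("pi/2", math.pi / 2), ("pi", math.pi)]:
    print(f"sin({label}) = {math.sin(x):.3f}")
# sin(pi/2) = 1.000; the other values hint at why
```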
I think it should accelerate things to either the point where people say good riddance or where they can build more things on top of it.
AI helps me consider a wider range of approaches to lots of different problems. Especially when working with unfamiliar domains or esoteric technologies it helps me iterate so much faster.
It’s the world’s greatest rubber duck and even if that’s all it ever is, it’s a game changer for me.