925 points dmitrybrant | 5 comments

theptip No.45163517
A good case study. I have found these two to be good categories of win:

> Use these tools as a massive force multiplier of your own skills.

Claude definitely makes me more productive in frameworks I know well, where I can scan and pattern-match quickly on the boilerplate parts.

> Use these tools for rapid onboarding onto new frameworks.

I’m also more productive here; it’s an enabler for exploring new areas, and a boon at big tech companies, where there are simply lots of tech stacks and frameworks in use.

I feel there is an interesting split forming in the ability to gauge AI capabilities: keeping up requires staying on top of a rapidly changing firehose of techniques and frameworks. If you haven’t spent 100 hours with Claude Code / Claude 4.0, you likely don’t have an accurate picture of its capabilities.

“Enables non-coders to vibe code their way into trouble” might be the median scenario on X, but it’s not so relevant to what expert coders will experience if they put the time in.

nine_k No.45163954
Yes. The author essentially asked Claude to port a driver from Linux 2.4 to Linux 6.8. Almost certainly there are sufficient amounts of training material, and web-searchable material, describing such tasks. The author supplied his own expertise where Claude could not find a good analogue in the training corpus, i.e. for the few genuinely non-trivial bits of the port.
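
(To make "the non-trivial bits" concrete: a minimal, hypothetical sketch of one well-known piece of API churn such a port runs into, namely the move from the 2.4-era .ioctl hook to .unlocked_ioctl. The device name and stub handler below are invented for illustration.)

    /* Hypothetical char-device glue; "mydev" is made up. */
    #include <linux/fs.h>
    #include <linux/module.h>

    /*
     * Linux 2.4 era: the handler hung off .ioctl, took an inode
     * argument, and ran with the Big Kernel Lock held:
     *
     *     static int mydev_ioctl(struct inode *inode, struct file *file,
     *                            unsigned int cmd, unsigned long arg);
     *
     * Modern kernels removed .ioctl entirely (in 2.6.36); the port has
     * to switch to .unlocked_ioctl, return long, drop the inode
     * argument, and handle its own locking.
     */
    static long mydev_ioctl(struct file *file, unsigned int cmd,
                            unsigned long arg)
    {
        return -ENOTTY;    /* stub: no commands implemented */
    }

    static const struct file_operations mydev_fops = {
        .owner          = THIS_MODULE,
        .unlocked_ioctl = mydev_ioctl,
    };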

"Use these tools as a massive force multiplier of your own skills" is a great way to formulate it. If your own skills in the area are near-zero, multiplying them by a large factor may still yield a near-zero result. (And negative productivity.)

1. rmoriz No.45164956
You can still ask questions, generate a list of things to learn, and so on; basically, you can generate a streamlined course from all the tutorials, READMEs, and source code that were available when the model was trained. And you can call your tutor 24/7, as long as you have tokens.
2. theshrike79 No.45165125
ChatGPT even has a dedicated "Study mode" in which it refrains from telling you the answer directly and instead guides you toward figuring it out yourself.
3. seba_dos1 No.45169350
You have to stay on guard at every step and call out your tutor's mistakes, though, or you'll inevitably learn some garbage. This is a use case that certainly "feels" like it boosts your learning (it sure does to me), but I'd like to see an actual study on whether it really does before drawing any conclusions.

It seems to me that LLMs help the most at the initial step of getting into some rabbit hole, when you're getting familiar with the jargon, so that you can start reading proper resources without being too confused. The sooner you manage to move on to those, the better.

4. rmoriz No.45178329
You overestimate hallucinations in known settings. If you ask it to show the source code, it's easy to check against the actual sources (of a framework, a language, or local code).
5. seba_dos1 No.45192401
No, I don't. Over the last few weeks I have used Claude, ChatGPT, and Gemini in many "known settings" while working, to test whether their output would be helpful. Topics ranged widely: Bayer image processing, color science, QML and Plasma plugins, GPS, GTK3->4 porting, USB PD, PDF data structures, ALSA configs... All of them hallucinated (which is hardly surprising; that's just what they do). Sometimes it was enough to ask the model to verify its claims on the Web, but Gemini Pro once refused to be corrected, stubbornly claiming that the correct answer was "a common misconception" even when confronted with sources saying otherwise :)

I was already knowledgeable enough in these topics to catch the errors, but some were dangerously subtle. Really, the only way to use LLMs to actually learn anything beyond the trivial is to actively question everything they print out and never move forward until you actually grasp the thing and can verify it. It still feels helpful to me to use them this way, but it's hard to tell how that compares, in terms of efficiency, to learning from a good and trustworthy resource. It's hard to unlearn something and try to learn it again another way to compare ;P
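
(To make "dangerously subtle" concrete with one of the topics above: in a GTK3->4 port, a model trained on years of GTK3 code will happily emit container calls that no longer exist in GTK4. A minimal, hypothetical sketch; the application id and widgets are invented.)

    #include <gtk/gtk.h>

    static void on_activate(GtkApplication *app, gpointer user_data)
    {
        GtkWidget *window = gtk_application_window_new(app);
        GtkWidget *button = gtk_button_new_with_label("Hello");

        /* GTK3 habits an LLM happily reproduces:
         *
         *     gtk_container_add(GTK_CONTAINER(window), button);
         *     gtk_widget_show_all(window);
         *
         * Neither exists in GTK4: GtkContainer is gone and widgets are
         * visible by default. The GTK4 equivalents are: */
        gtk_window_set_child(GTK_WINDOW(window), button);
        gtk_window_present(GTK_WINDOW(window));
    }

    int main(int argc, char **argv)
    {
        GtkApplication *app = gtk_application_new("org.example.hello",
                                                  G_APPLICATION_DEFAULT_FLAGS);
        g_signal_connect(app, "activate", G_CALLBACK(on_activate), NULL);
        int status = g_application_run(G_APPLICATION(app), argc, argv);
        g_object_unref(app);
        return status;
    }

(This builds against gtk4 via pkg-config; the generated GTK3-style calls look plausible enough that the mix-up slips past you unless you already know both APIs.)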