
LLM Inevitabilism

(tomrenner.com)
1611 points | SwoopsFromAbove | 1 comment
mg No.44568158
In the 90s a friend told me about the internet, and said he knew someone at a university who had access to it and could show us. An hour later, we were sitting in front of a computer in that university, watching his friend surf the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required me to dive deep into its documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT-4.1 and told it to rewrite my program using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all the code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to the responders' experiences with agentic coding, where the developer iteratively changes the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
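The "one prompt, one file" workflow described above can be sketched roughly as follows. The `build_prompt` helper, the section markers, and the example strings are my own illustration, not from the post; actually sending the prompt to a model API is omitted.

```python
# Rough sketch of a "one prompt, one file" workflow: paste the entire
# library and the entire program into a single prompt with one task,
# then review the single file the model returns. No chat, no edits.
# The helper name and markers here are illustrative assumptions.

def build_prompt(library_src: str, program_src: str, instruction: str) -> str:
    """Assemble one self-contained prompt for a one-shot rewrite."""
    return (
        "Below are a library and a program.\n\n"
        "=== LIBRARY ===\n" + library_src + "\n\n"
        "=== PROGRAM ===\n" + program_src + "\n\n"
        "=== TASK ===\n" + instruction + "\n"
        "Reply with the complete rewritten program as a single file.\n"
    )

# Tiny inline example; in practice the sources would be read from disk.
prompt = build_prompt(
    "def parse(s): ...",
    "print('hand-rolled parsing')",
    "Rewrite the program to use the library for all parsing.",
)
print(prompt)
```

The point of the format is that the whole interaction is one prompt in and one file out, so the entire diff can be reviewed at once.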

1. kazinator No.44570024
You're discounting the times when it doesn't work. I recently experienced a weird 4x slowdown across multiple VirtualBox VMs on a Windows 10 host. AI led me down rabbit holes that didn't solve the problem.

I finally noticed a configuration problem. For some weird reason, in the Windows Features control panel, the "Virtual Machine Platform" checkbox had become unchecked (spontaneously; I did not touch this).

I mentioned this to the AI, which insisted that I should not flip that option and that it was not the cause:

> "Virtual Machine Platform" sounds exactly like something that should be checked for virtualization to work, and it's a common area of conflict. However, this is actually a critical clarification that CONFIRMS we were on the right track earlier! "Virtual Machine Platform" being UNCHECKED in Windows Features is actually the desired state for VirtualBox to run optimally.'

In fact, it was that problem. I checked the option, rebooted the host OS, and the VMs ran at proper speed.
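For reference, the fix described above corresponds to toggling the Windows optional feature named `VirtualMachinePlatform`, which can also be queried and enabled with DISM from an elevated shell. This sketch only builds and prints the commands (and runs the harmless query only on an actual Windows host); the feature name and flags are the standard DISM ones, and a reboot is still required afterwards, as in the post.

```python
# Sketch: inspect/enable the "Virtual Machine Platform" optional
# feature via DISM. Commands are constructed here and only the
# read-only query is ever executed, and only on Windows.
import platform
import subprocess

QUERY = ["dism", "/online", "/get-featureinfo",
         "/featurename:VirtualMachinePlatform"]
ENABLE = ["dism", "/online", "/enable-feature",
          "/featurename:VirtualMachinePlatform", "/all", "/norestart"]

def feature_commands() -> tuple[str, str]:
    """Return the query and enable command lines as strings."""
    return " ".join(QUERY), " ".join(ENABLE)

query_cmd, enable_cmd = feature_commands()
if platform.system() == "Windows":
    subprocess.run(QUERY, check=False)  # read-only status query
else:
    print("Run on the Windows host:", query_cmd)
    print("Then, if disabled:      ", enable_cmd, "(reboot afterwards)")
```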

Not only can AI not be trusted to make deep inferences correctly; it also falters on basic associative recall of facts. If you use it as a substitute for web searches, you have to fact-check everything.

An LLM has no concept of facts. Token prediction is not fact retrieval; it is merely a process that is likely to produce facts, given the right query in relation to the right training data.
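The point can be made concrete with a deliberately toy "model" (entirely my own illustration, not how a real LLM works internally): a lookup table that returns the most frequent continuation seen in its training data. It optimizes likelihood, not truth, so if the corpus repeats a falsehood, the falsehood is exactly what gets predicted.

```python
# Toy next-token predictor: returns the continuation most often seen
# after a prompt in the "training corpus". There is no notion of
# truth anywhere -- only frequency.
from collections import Counter

def train(corpus: list[tuple[str, str]]) -> dict[str, Counter]:
    """Count continuations per prompt (a stand-in for pretraining)."""
    table: dict[str, Counter] = {}
    for prompt, continuation in corpus:
        table.setdefault(prompt, Counter())[continuation] += 1
    return table

def predict(table: dict[str, Counter], prompt: str) -> str:
    """Return the most likely continuation -- likelihood, not fact-checking."""
    return table[prompt].most_common(1)[0][0]

# A corpus where the wrong answer happens to be the common one.
corpus = [
    ("should VM Platform be unchecked for VirtualBox?", "yes"),  # wrong, frequent
    ("should VM Platform be unchecked for VirtualBox?", "yes"),  # wrong, frequent
    ("should VM Platform be unchecked for VirtualBox?", "no"),   # right, rare
]
model = train(corpus)
print(predict(model, "should VM Platform be unchecked for VirtualBox?"))  # -> yes
```

A real model is vastly more sophisticated, but the failure mode above is the same one the comment describes: the confidently wrong answer is the statistically likely one.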