
141 points by luu | 1 comment
Legend2440 ◴[] No.43797047[source]
Seems premature, like measuring the economic impact of the internet in 1985.

LLMs are more tech demo than product right now, and it could take many years for their full impact to become apparent.

replies(2): >>43797341 #>>43798428 #
amarcheschi ◴[] No.43797341[source]
I wouldn't call it "premature" when the CEOs of LLM companies have been proposing AI agents to replace workers - and similar things that I find debatable - by roughly the second half of the 2020s. I mean, a cold shower might eventually hit a lot of AI-based companies.
replies(2): >>43797890 #>>43798723 #
dehrmann ◴[] No.43798723[source]
The most recent example is the Anthropic CEO:

> I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code

https://www.businessinsider.com/anthropic-ceo-ai-90-percent-...

This seems either wildly optimistic, or it comes with a giant asterisk: the AI writes the code by predicting tokens, and then a human has to double-check and refine it.

replies(6): >>43798817 #>>43798885 #>>43798940 #>>43799360 #>>43800080 #>>43801115 #
amarcheschi ◴[] No.43798885[source]
I'm honestly slightly appalled by what we might miss by not reading the docs and just letting AI write the code. I'm attending a course where we have to analyze medical datasets using up to ~200 GB of RAM, and the calculations can take some time. A quick skim through the library docs (or even asking the chatbot) tells you that one of the longest calls can be approximated, and with a different solver it takes about a third of the time. And yet none of my colleagues thought to look at the docs or ask the chatbot, because the code was working. And of course the chatbot had picked the "standard" solver, the one you probably don't need for prototyping.
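
To make the solver point concrete, here's a minimal sketch. The comment doesn't name the library or the call, so this stands in scikit-learn's randomized TruncatedSVD for "an approximate solver that's much faster than the exact one and plenty good for prototyping"; the array shapes and names are made up:

    # Hypothetical stand-in: the course library/solver isn't named above,
    # so this contrasts an exact SVD with scikit-learn's randomized solver.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20_000, 2_000))  # placeholder for a large dataset

    # Exact decomposition (the "standard" route a chatbot tends to emit):
    # U, s, Vt = np.linalg.svd(X, full_matrices=False)   # slow, memory-hungry

    # Approximate, randomized solver: usually fine for prototyping, much faster.
    svd = TruncatedSVD(n_components=50, algorithm="randomized", random_state=0)
    X_reduced = svd.fit_transform(X)
    print(X_reduced.shape, round(svd.explained_variance_ratio_.sum(), 3))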

Another example: part of one of our three datasets was split across ~40 files, and we had to manipulate and save them before doing anything else. A colleague asked ChatGPT to write the conversion code, and it was single-threaded and not feasible. I hopped onto htop, and on seeing it was using only one core I suggested she ask ChatGPT to process different files in different threads, and we went from painfully slow to quite fast. But that presupposes that the person using the code knows what's going on, why, what isn't happening, and when it's possible to do something different. Using it without asking yourself about the context is a terrible use imho, but it's absolutely the direction I see us heading towards, and I'm not a fan of it.
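
A rough sketch of the fix described above, assuming the usual Python tooling (the paths, file format, and convert() body are placeholders, not the actual course code): hand each of the ~40 files to its own worker instead of looping over them on one core. A process pool is shown because per-file transforms tend to be CPU-bound; a ThreadPoolExecutor has the same shape if the work is mostly I/O.

    # Hypothetical sketch: parallelize the per-file conversion instead of
    # running it single-threaded. File paths and the transform are made up.
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    import pandas as pd

    def convert(path: Path) -> Path:
        """Load one raw chunk, transform it, and save it in a faster format."""
        df = pd.read_csv(path)
        out = path.with_suffix(".parquet")
        df.to_parquet(out)  # placeholder for the real manipulation
        return out

    if __name__ == "__main__":
        files = sorted(Path("data/raw").glob("*.csv"))  # the ~40 input chunks
        # One worker per core instead of everything on a single core.
        with ProcessPoolExecutor() as pool:
            for out in pool.map(convert, files):
                print("wrote", out)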