
577 points | simonw | 1 comment
AlexeyBrin No.44723521
Most likely its training data included countless Space Invaders implementations in various programming languages.
NitpickLawyer No.44723707
This comment is ~3 years late. Every model since GPT-3 has had the entirety of publicly available code in its training data. That's not a gotcha anymore.

We went from ChatGPT's "oh, look, it looks like Python code but everything is wrong" to "here's a full-stack boilerplate app that does what you asked and works zero-shot" in under two years. That's the kicker. And the sauce isn't just in the training set: models now go through post-training, RL, and a bunch of other stuff to get to where we are. Not to mention the big gains in extended context (the first models maxed out at 2/4k tokens), agentic workflows, and so on.

These kinds of comments are really missing the point.

haar No.44723808
I've had little success with agentic coding, and what success I have had has come paired with hours of frustration; for anything but the most basic tasks I'd have been better off doing it myself.

Even then, as complexity builds up within a codebase, the results have often been worse than "I'll regenerate it all from scratch and fold this change into the initial long-tail specification prompt", and even then it's been a crapshoot.

I _want_ to like it. The times where it initially "just worked" felt magical and inspired me with the possibilities. That's what prompted me to get more engaged and use it more. The reality of doing so is just frustrating, and I keep wishing things _actually worked_ anywhere close to expectations.

aschobel No.44724064
Bingo, it's magical but the learning curve is very very steep. The METR study on open-source productivity alluded to this a bit.

I am definitely at a point where I am more productive with it, but it took a bunch of effort.

devmor No.44724470
The subjects in the study you are referencing also believed that they were more productive with it. What metrics do you have to convince yourself you aren't under the same illusory bias they were?
simonw No.44724497
Yesterday I used ffmpeg to extract the frame at the 13-second mark of a video as a JPEG.

If I didn't have an LLM to figure that out for me I wouldn't have done it at all.

dfedbeef No.44733418
Was the answer:

ffmpeg -ss 00:00:13 -i myvideo.avi -frames:v 1 myimage.jpeg

Because this is on Stack Overflow and it took maybe one second to find.

I've found that reading a tool's man page is usually a better way to learn what it can do for you, both now and in the future.
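For reference, a sketch of the two common forms of this command (filenames are placeholders, and exact seek behavior can vary by ffmpeg version):

```shell
# Input seeking: -ss before -i jumps close to the 13 s mark before decoding
# starts (fast; frame-accurate on modern ffmpeg builds)
ffmpeg -ss 00:00:13 -i myvideo.avi -frames:v 1 myimage.jpeg

# Output seeking: -ss after -i decodes from the beginning and discards frames
# until 13 s (slower on long videos, but always frame-accurate)
ffmpeg -i myvideo.avi -ss 00:00:13 -frames:v 1 myimage.jpeg
```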

kamranjon No.44733619
This is the rub for me… people are so quick to forget the original source for a lot of the data these models were trained on, and how easy and useful these platforms were. Now Google will summarize this question for you in an AI overview before you even land on Stack Overflow. It’s killing the network effect of the open web and destroying our crowd sourced platforms in favor of a lossy compression algorithm that will eventually be regurgitating its own entrails.
dfedbeef No.44733819
Well, maybe. People will just stop using them and will make fun of people who do. You can only bullshit people for so long.