466 points | 0x63_Problems | 7 comments
1. leptons No.42138221
I asked the AI to write me some code to get a list of all the objects in an S3 bucket. It returned code that worked and would no doubt be approved by most developers. But on further inspection I noticed it would fail if the bucket had more than 1000 objects: S3 returns at most 1000 objects per request and the API is paged, which the AI had no ability to account for. So the AI's code would be buggy for any bucket with more than 1000 objects, which is really, really easy to hit with an S3 bucket.
replies(4): >>42138486 #>>42139105 #>>42139285 #>>42139303 #
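For illustration, here is a minimal sketch (in Python with boto3; the function name is hypothetical) of the kind of code being described: a single call to list_objects_v2 that silently stops at the first page, so any bucket with more than 1000 objects gets truncated.

    import boto3

    def list_bucket_objects(bucket_name):
        s3 = boto3.client("s3")
        response = s3.list_objects_v2(Bucket=bucket_name)
        # Bug: ignores IsTruncated / NextContinuationToken, so anything
        # past the first 1000 keys is never returned.
        return [obj["Key"] for obj in response.get("Contents", [])]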
2. asabla No.42138486
To some extent I do agree with the point you're trying to make.

But unless you state that pagination needs to be handled as well, the LLM will naively implement just the bare minimum.

Context matters. And supplying enough context is what makes all the difference when interacting with these kinds of solutions.

replies(1): >>42139424 #
3. yawnxyz No.42139105
Yeah, AI isn't good at uncovering all the footguns and corner cases, but I think this reflects most of StackOverflow, which (not coincidentally) also misses them.
4. justincormack No.42139285
Claude did the simple version by default, but when I asked it to support more than 1000 objects it handled it fine.
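As a rough sketch of what that paginated version looks like (boto3 assumed; the function name is illustrative), the built-in paginator follows continuation tokens until the full listing is exhausted:

    import boto3

    def list_all_bucket_objects(bucket_name):
        s3 = boto3.client("s3")
        paginator = s3.get_paginator("list_objects_v2")
        keys = []
        # Each page holds at most 1000 keys; the paginator keeps
        # requesting pages until S3 reports there are no more.
        for page in paginator.paginate(Bucket=bucket_name):
            keys.extend(obj["Key"] for obj in page.get("Contents", []))
        return keys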
5. awkward No.42139303
Most AI code is kind of like that. It's sourced from demo-quality examples and piecemeal paid work. The resulting code is focused on succinctly solving the problem in the prompt. Factoring and any concerns external to making the demo work disappear first. Then any edge cases that might complicate the result get tossed.
6. dijksterhuis No.42139424
not parent, but

> I asked the AI to write me some code to get a list of all the objects in an S3 bucket

they didn’t ask for all the objects in the first returned page of the query

they asked for all the objects.

the necessary context is there.

LLMs are just on par with devs who don't read tickets properly / don't pay attention to the API they're calling (I've had this exact case happen with someone on a previous team, and it was a combination of both).

replies(1): >>42139792 #
7. danielbln No.42139792
LLMs differ, though. The newest Claude just gave me a paginated solution without further prodding.

In other, more obscure cases I just add the documentation to its context and let it work based on that.