
466 points 0x63_Problems | 3 comments
leptons ◴[] No.42138221[source]
I asked the AI to write me some code to get a list of all the objects in an S3 bucket. It returned code that worked and would no doubt be approved by most developers. But on further inspection I noticed it would be buggy if the bucket had more than 1000 objects: S3 returns at most 1000 objects per request, the API is paginated, and the AI had no ability to understand this. So the AI's code would silently miss objects should the bucket contain more than 1000 of them, which is really, really easy to do with an S3 bucket.
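(For reference, a minimal sketch of what a correct version has to do: follow the ListObjectsV2 continuation tokens until the response is no longer truncated, rather than taking the first page of at most 1000 keys. The helper name and bucket name here are hypothetical; the client is assumed to be a boto3-style S3 client.)

```python
def list_all_keys(client, bucket):
    """Return every key in `bucket`, following ListObjectsV2 pagination.

    Each response carries at most 1000 keys; when IsTruncated is set,
    the next page is fetched via NextContinuationToken.
    """
    keys = []
    token = None
    while True:
        kwargs = {"Bucket": bucket}
        if token:
            kwargs["ContinuationToken"] = token
        resp = client.list_objects_v2(**kwargs)
        # "Contents" is absent entirely for an empty bucket/page.
        keys.extend(obj["Key"] for obj in resp.get("Contents", []))
        if not resp.get("IsTruncated"):
            return keys
        token = resp["NextContinuationToken"]
```

Usage would be `list_all_keys(boto3.client("s3"), "my-bucket")`; the naive one-shot `client.list_objects_v2(Bucket=...)` is exactly the version that stops at 1000.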
replies(4): >>42138486 #>>42139105 #>>42139285 #>>42139303 #
1. asabla ◴[] No.42138486[source]
To some extent I do agree with the point you're trying to make.

But unless you specify that pagination needs to be handled as well, the LLM will naively implement just the bare minimum.

Context matters. And supplying enough context is what makes all the difference when interacting with these kinds of solutions.

replies(1): >>42139424 #
2. dijksterhuis ◴[] No.42139424[source]
not parent, but

> I asked the AI to write me some code to get a list of all the objects in an S3 bucket

they didn’t ask for all the objects in the first returned page of the query

they asked for all the objects.

the necessary context is there.

LLMs are just on par with devs who don't read tickets properly / don't pay attention to the API they're calling (i've had this exact case happen with someone on a previous team, and it was a combination of both).

replies(1): >>42139792 #
3. danielbln ◴[] No.42139792[source]
LLMs differ though. Newest Claude just gave me a paginated solution without further prodding.

In other more obscure cases I just add the documentation to its context and let it work based on that.