114 points cmcconomy | 4 comments
anon291 [No.42174879]
Can we all agree that these models far surpass human intelligence now? I mean, they process hours' worth of audio in less time than it would take a human to even listen. I think the singularity passed and we didn't even notice (which would be expected).
giantrobot [No.42175002]
My old TI-86 can calculate stuff faster than me. You wouldn't ever ask if it was smarter than me. An audio filter can process audio faster than I can listen to it but you'd never suggest it was intelligent.

AI models are algorithms running on processors running at billions of calculations a second often scaled to hundreds of such processors. They're not intelligent. They're fast.

anon291 [No.42175045]
Except the LLM can solve a general problem (or tell you why it cannot), while your calculator can only do what it's been programmed to do.
vlovich123 [No.42175194]
Go ask your favorite LLM to write you some code to implement the backend of the S3 API and see how well it does. Heck, just ask it to implement list iteration against some KV object store API and be amazed at the complete garbage that gets emitted.
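For context on why list iteration is harder than it looks: S3-style listing has to return keys in sorted order, filter by prefix, and page with a continuation token. A minimal sketch of those semantics over a plain in-memory mapping (all names here are illustrative, not from any real store):

```python
def list_objects(store, prefix="", start_after="", max_keys=1000):
    """List keys from a KV mapping, S3 ListObjectsV2 style.

    Returns (keys, next_token): next_token is None when the listing is
    complete; otherwise it is the last key returned, which the caller
    passes back as start_after to fetch the next page.
    """
    keys = []
    for key in sorted(store):  # a real store iterates a sorted index instead
        if key <= start_after or not key.startswith(prefix):
            continue
        keys.append(key)
        if len(keys) == max_keys:
            return keys, key  # truncated: caller must page again
    return keys, None
```

A real implementation can't sort the whole keyspace per request; it needs a range scan over a sorted index, plus delimiter/common-prefix handling, which is where naive LLM output tends to fall apart.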
anon291 [No.42175596]
So I told it what I wanted, and it generated an initial solution and then modified it to do some file distribution. Without the ability to actually execute the code, this is an excellent first pass.

https://chatgpt.com/share/673b8c33-2ec8-8010-9f70-b0ed12a524...

ChatGPT can't directly execute code on my machine due to architectural limitations, but I imagine that if I followed its instructions and told it what went wrong, it would correct it.

And that's just it, right? If I were to program this, I would be iterating. ChatGPT cannot do that because of how it's architected (I don't think it would be hard to add if you used the API and allowed some kind of tool use). However, if I told someone to go write me an S3 backend without ever executing it, and they came back with this... that would be great.

EDIT: with chunking: https://chatgpt.com/share/673b8c33-2ec8-8010-9f70-b0ed12a524...

IIRC, from another thread on this site, this is essentially how S3 is implemented (centralized metadata database that hashes out to nodes which implement a local storage mechanism -- MySQL I think).
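The "metadata database that hashes out to nodes" idea can be sketched with rendezvous (highest-random-weight) hashing. This illustrates the general placement technique only; it is not a claim about how S3 or R2 actually assigns objects:

```python
import hashlib

def nodes_for_key(bucket, key, nodes, replicas=3):
    """Pick storage nodes for an object deterministically from its name.

    Toy rendezvous hashing: every node scores the object name, and the
    top `replicas` scores win. Adding or removing one node only remaps
    the objects that scored highest on that particular node.
    """
    def score(node):
        digest = hashlib.sha256(f"{node}:{bucket}/{key}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(nodes, key=score, reverse=True)[:replicas]
```

The metadata row for an object then just records which nodes hold its bytes, and reads can fall back to the next replica on failure.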

vlovich123 [No.42175722]
And that's why it's dangerous to evaluate something when you don't understand what's going on. The generated implementation not only saves things directly to disk [1] [2], but it doesn't implement file uploading correctly, nor does it implement listing of objects (which I guarantee would be incorrect). Additionally, it makes a key mistake: an upload isn't a form, it's the body of the request, so a real S3 client already can't connect. But of course, at first glance it has the appearance of being something passable.
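The body-vs-form distinction is concrete: S3 clients PUT the object bytes as the raw request body, with metadata in headers, not as a multipart form field. A toy stdlib handler showing that shape (class and storage here are hypothetical illustrations, not a real server):

```python
import hashlib
from http.server import BaseHTTPRequestHandler

class PutObjectHandler(BaseHTTPRequestHandler):
    """Minimal PUT handler with S3-like framing: raw body, ETag header."""

    objects = {}  # toy in-memory store keyed by request path

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        data = self.rfile.read(length)  # the object IS the body, not a form
        self.objects[self.path] = data
        self.send_response(200)
        # S3 returns the MD5 of the body as the ETag for simple uploads
        self.send_header("ETag", '"%s"' % hashlib.md5(data).hexdigest())
        self.end_headers()
```

A handler that instead parses `multipart/form-data` would reject every real S3 SDK request at this first step.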

Source: I had to implement R2 from scratch, and nothing generated here would have helped me even as a starting point. And this isn't even getting into complex things like supporting arbitrarily large uploads and encrypting objects while also supporting seeked downloads or multipart uploads.

[1] No one would ever do this, for all sorts of reasons, including that attackers can send you /../ to escape bucket and account isolation.

[2] No one would ever do this because you've got nothing more than a toy S3 server. A real S3 implementation needs to distribute the data to multiple locations so that availability is maintained in the face of isolated hardware and software failures.
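The traversal problem in footnote [1] is easy to demonstrate and to guard against. A sketch of the usual defense: normalize the candidate path and verify it stays inside the bucket directory (function name is illustrative; `posixpath` keeps the check platform-independent):

```python
import posixpath

def safe_object_path(root, bucket, key):
    """Resolve an object key under root/bucket, rejecting traversal.

    Raises ValueError for keys like 'a/../../etc/passwd' that would
    normalize to a path outside the bucket's directory.
    """
    base = posixpath.join(root, bucket)
    candidate = posixpath.normpath(posixpath.join(base, key))
    if candidate != base and not candidate.startswith(base + "/"):
        raise ValueError("key escapes bucket: %r" % key)
    return candidate
```

Production systems usually avoid the issue entirely by never mapping keys to filesystem paths, instead treating the key as an opaque string in a metadata index.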

anon291 [No.42175888]
> I had to implement R2 from scratch and nothing generated here would have helped me as even a starting point.

Of course it wouldn't. You're a computer programmer. There's no point for you to use ChatGPT to do what you already know how to do.

> The implementation generated not only saves things directly to disk

There is nothing 'incorrect' about that, given my initial problem statement.

> Additionally, it makes a key mistake which is that uploading isn't a form but is the body of the request so it's already unable to have a real S3 client connect.

Again.. look at the prompt. I asked it to generate an object storage system, not an S3-compatible one.

It seems you're the one hallucinating.

EDIT: ChatGPT says: In short, the feedback likely stems from the implicit expectation of S3 API standards, and the discrepancy between that and the multipart form approach used in the code.

and

In summary, the expectation of S3 compatibility was a bias, and he should have recognized that the implementation was based on our explicitly discussed requirements, not the implicit ones he might have expected.

vlovich123 [No.42176029]
> There's no point for you to use ChatGPT to do what you already know how to do.

If it were more intelligent of course there would be. It would catch mistakes I wouldn't have thought about, it would output the work more quickly, etc. It's literally worse than if I'd assigned a junior engineer to do some of the legwork.

> ChatGPT says: In short, the feedback likely stems from the implicit expectation of S3 API standards, and the discrepancy between that and the multipart form approach used in the code.

> In summary, the expectation of S3 compatibility was a bias, and he should have recognized that the implementation was based on our explicitly discussed requirements, not the implicit ones he might have expected.

Now who's rationalizing? I was pretty clear in saying to implement S3.

anon291 [No.42176103]
> Now who's rationalizing. I was pretty clear in saying implement S3.

In general, I don't deny that humans fall into common pitfalls, such as not reading the question. As I pointed out, this is a common human failing, a 'hallucination' if you will. Nevertheless, my failing to deliver that requirement to ChatGPT should count against me, not ChatGPT; I'm a humble human who recognizes my failings. And again, this furthers my point that people hallucinate regularly; we just have a social way to get around it -- what we're doing right now... discussion!

vlovich123 [No.42176691]
My reply was purely about ChatGPT's response, which I characterized as a rationalization. It was clearly following the S3 template, since it copied many parts of the API, but it failed to call out where it was deviating or why it made those decisions.