
579 points | paulpauper | 1 comment
lukev No.43604244
This is a bit of a meta-comment, but reading through the responses to a post like this is really interesting because it demonstrates how our collective response to this stuff is (a) wildly divergent and (b) entirely anecdote-driven.

I have my own opinions, but I can't really say that they're not also based on anecdotes and personal decision-making heuristics.

But some of us are going to end up right and some of us are going to end up wrong and I'm really curious what features signal an ability to make "better choices" w/r/t AI, even if we don't know (or can't prove) what "better" is yet.

replies(10): >>43604396 #>>43604472 #>>43604738 #>>43604923 #>>43605009 #>>43605865 #>>43606458 #>>43608665 #>>43609144 #>>43612137 #
ramesh31 No.43612137
>"This is a bit of a meta-comment, but reading through the responses to a post like this is really interesting because it demonstrates how our collective response to this stuff is (a) wildly divergent and (b) entirely anecdote-driven."

People's vastly different opinions on AI simply come down to token usage. If you are using millions of tokens on a regular basis, you completely understand the revolutionary point we are at. If you are just chatting back and forth a bit with something here and there, you'll never see it.

replies(2): >>43612388 #>>43616322 #
1. antonvs No.43612388
It's a tool, and like all tools it's sensitive to how you use it; it's better for some purposes than others.

Someone who lacks experience, skill, training, or even the ability to evaluate results may try to use a tool and blame the tool when it doesn't give good results.

That said, the hype around LLMs certainly overstates their capabilities.