
32 points | rntn | 2 comments
beacon473 No.45077272
LLMs are decent at providing feedback and validation. I often have few sources of that from humans, and long stretches without it are bad for my motivation and sense of well-being.

LLMs filling that hole is great if it's done in discrete and intermittent bumps. TFA shows the psychological risks of binging on artificial validation.

All things in moderation, especially LLMs.

replies(1): >>45078334 #
exe34 No.45078334
How do you convince yourself that it is real? I think if you know some linear algebra and have read Vaswani et al. 2017, it can be very difficult to maintain the suspension of disbelief. I had great hopes for a future AI companion, but knowing how the trick is done seems to have ruined the magic for me.
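[Editor's note: "the trick" here is the scaled dot-product attention at the core of Vaswani et al. 2017. A minimal numpy sketch of that one equation, softmax(QK^T / sqrt(d_k))V, for readers who haven't seen it; the random matrices are purely illustrative.]

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Eq. 1 of Vaswani et al. 2017: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each query matches each key
    # Numerically stable row-wise softmax over the scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted average of the value rows

# Three illustrative "tokens" with 4-dimensional representations.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one mixed representation per token
```

The whole mechanism is matrix multiplies plus a softmax, which is the point being made: once you see it, the output reads less like a mind and more like a weighted average.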
replies(1): >>45087851 #
beacon473 No.45087851
Think of it as talking to yourself. LLMs can be a 10x multiplier on giving yourself a pep talk. They can also 10x negative thoughts, so it's a sharp tool.
replies(1): >>45093114 #
exe34 No.45093114
I guess we could look at it as the ghost of humanity talking to us, the same way long-dead authors can whisper in our ears.