
178 points | themgt | 2 comments
nis0s No.45780477
If there were truly any introspection in these models, they wouldn’t hallucinate. These cognitive processes are not just philosophical artifacts; they serve distinct biological purposes. If you can’t find them serving any purpose in your model, then you’re just looking at noise, and your observations may not rise above a statistically significant threshold for drawing a conclusion (because they’re noise).
replies(1): >>45782092 #
1. DangitBobby No.45782092
That doesn't follow. We have introspection and we still hallucinate (confabulate, bullshit, lie, etc.). You're just assuming they would never intentionally say something untrue, or say something they don't know to be true.
replies(1): >>45792075 #
2. nis0s No.45792075
You’re conflating too many things for me to go into here.