
322 points atomroflbomber | 2 comments
lelag ◴[] No.36983601[source]
If 2023 ends up giving us AGI, room-temperature superconductors, Starships and a cure for cancer, I think we will be able to call it a good year...
replies(10): >>36983623 #>>36984116 #>>36984118 #>>36984549 #>>36986942 #>>36987008 #>>36987250 #>>36987546 #>>36987577 #>>36992261 #
azinman2 ◴[] No.36986942[source]
We’re not getting AGI anytime soon…
replies(6): >>36987177 #>>36987360 #>>36987472 #>>36987477 #>>36987541 #>>36987759 #
ericmcer ◴[] No.36987541[source]
Seriously, Google's generative AI actively suggests completely inaccurate things. It has no ability to say "I don't know", which seems like a huge failing.

I just asked "what does the JS ** operator do" and it made up an answer about it being a bitwise XOR, which would mean 1 ** 2 === 3 (in reality ** is exponentiation, so 1 ** 2 === 1). The fact that all these LLMs will confidently assert wrong information makes me feel like LLMs are going to be a difficult path to AGI. It will be a big problem if an AI receptionist confidently spews misinformation and is unable to admit when it might be wrong.
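For reference, ** has been JavaScript's exponentiation operator since ES2016, while ^ is the bitwise XOR; a quick check in any JS console shows the difference:

    // ** is exponentiation (ES2016), not bitwise XOR
    1 ** 2   // 1  (1 squared)
    2 ** 3   // 8  (2 cubed)
    // ^ is the actual bitwise XOR, which is what the AI seems to have described
    1 ^ 2    // 3  (0b01 XOR 0b10 === 0b11)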

replies(1): >>36987776 #
1. incrudible ◴[] No.36987776[source]
> It has no ability to say: "I don't know"

Neither can many humans. Expressions of ignorance and self-doubt must certainly be woefully underrepresented in the training data.

replies(1): >>36988193 #
2. galangalalgol ◴[] No.36988193[source]
Yeah, no one posts to say they don't know the answer. And that's the smallest of the problems that come from using the internet as training data. I realize these are just statistical text generators, but if we do end up training a real AGI on the internet, I find that prospect both appalling and terrifying. If I said my parenting strategy was to lock my genius newborn in a room with food, water, and a web browser, you'd call me insane and expect my child to be a sociopath...