←back to thread

359 points FromTheArchives | 1 comments | | HN request time: 0.26s | source
Show context
oceanhaiyang ◴[] No.45293700[source]
No one who understands ai can rely on it to help us learn. I provided one with 100 citations I wanted to standardize and it deleted 10 and made up 10 to replace them. Can’t imagine this being used to replace a textbook or even explain a textbook.
replies(3): >>45293802 #>>45294049 #>>45296961 #
criddell ◴[] No.45293802[source]
> explain a textbook

I've had very good luck using LLMs to do this. I paste the part of the book that I don't understand and ask questions about it.

replies(3): >>45293911 #>>45293968 #>>45294577 #
bigfishrunning ◴[] No.45293911[source]
But the problem is, you don't understand the passage, so therefore how will you vet the answers? Seems like hallucinations would be very very damaging in this use-case
replies(4): >>45294006 #>>45294052 #>>45294616 #>>45295154 #
0xEF ◴[] No.45294006[source]
I was in the middle of typing the same question. This is the part that worries me about Generative AI; far too many people seem to have forgotten that its prone to confabulation and telling the user what they want to hear.
replies(1): >>45294310 #
1. criddell ◴[] No.45294310[source]
Sure, but if the LLM tells you the jump from step 2 to 3 in a calculus problem is the use of l'hopital's rule, you should be able to figure out pretty quickly if it's a red herring or not.