
645 points by ReadCarlBarks | 1 comment
icapybara (No.44333232)
Why wouldn't you want an LLM for a language learning tool? Language is one of the things I would trust an LLM on completely. Have you ever seen ChatGPT make an English mistake?
Groxx (No.44333272)
uh. yes? it's far from uncommon, and sometimes it's ludicrously wrong. Grammarly has been getting quite a lot of meme-content lately showing stuff like that.

it is of course mostly very good at it, but it's very far from "trustworthy", and it tends to mirror mistakes you make.

perching_aix (No.44334550)
Do you have any examples? The only time I noticed an LLM make a language mistake was when using a quantized model (Gemma) with my native language (so, a much smaller training data pool).
Breza (No.44356339)
Not GP, but I've definitely seen cutting-edge LLMs make language mistakes. The most head-scratching one I've seen in the past few weeks was when Gemini Pro decided to use <em> and </em> tags to emphasize something that was not code.