
645 points ReadCarlBarks | 5 comments
tolerance No.44334333
I would much rather check my writing against grammatical rules that are hard-coded in an open-source program (meaning that I can change them) than ones that I imagine would be subject to prompt fiddling or, worse, implicitly hard-coded in a tangle of training data that the LLM would draw from.

The Neovim configuration for the LSP looks neat: https://writewithharper.com/docs/integrations/neovim

The whole thing seems cool. Automattic should mention this on their homepage. Tools like this are the future of something.

replies(2): >>44335438 #>>44336086 #
triknomeister No.44335438
You would lose out on evolution of language.
replies(3): >>44335826 #>>44337273 #>>44337956 #
phoe-krk No.44335826
Natural languages evolve so slowly that writing and editing rules for them is easily achievable even this way. Think years versus minutes.
replies(2): >>44336057 #>>44336245 #
fakedang No.44336245
Aight you win fam, I was trippin fr. You're absolutely bussin, no cap. Harvard should be taking notes.

(^^ alien language that was developed in less than a decade)

replies(4): >>44336259 #>>44336266 #>>44337030 #>>44339957 #
1. phoe-krk No.44336259
Yes, precisely. This "less than a decade" is orders of magnitude more than the hours or days it would take to manually add those words and idioms to proper dictionaries and/or write new grammar rules to accommodate aspects like dropping the "g" in continuous verbs to get "bussin" or "bussin'" instead of "bussing". Thank you for illustrating my point.

Also, it takes at most a few developers to write those rules into a grammar-checking system, compared to the millions and more who need to learn a given piece of "evolved" language once it becomes impossible to avoid. Not only is doing this manually fast enough, it is also far less work-intensive and more scalable.
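
As a minimal sketch of how cheap such a rule is to write (hypothetical Python, not Harper's actual rule API):

    # Hypothetical g-dropping rule: flag present participles written
    # without the final "g" ("bussin", "bussin'") and suggest the
    # standard "-ing" spelling.
    import re

    G_DROP = re.compile(r"\b([a-z]+)in'?\b")

    def check_g_drop(text):
        # A real rule would also consult a dictionary to skip
        # false positives like "thin" or "basin".
        return [(m.group(0), m.group(1) + "ing")
                for m in G_DROP.finditer(text)]

    print(check_g_drop("you're absolutely bussin, no cap"))
    # -> [('bussin', 'bussing')]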

replies(2): >>44336870 #>>44339014 #
2. fakedang No.44336870
Not exactly. It takes time for those words to become mainstream for a generation. While you'd have to add those words to dictionaries manually, LLMs can learn them on the fly, based on frequency of usage.
replies(1): >>44337153 #
3. phoe-krk No.44337153
At this point we're already using different definitions of grammar and vocabulary: are they discrete (as in a rule system, vide Harper) or continuous (as in a probability distribution, vide LLMs)? LLMs, like humans, can learn them on the fly, and, like humans, they'll have problems and disagreements judging whether something should be highlighted as an error or not.

Or, in other words: if you "just" want a utility that can learn speech on the fly, you don't need a rigid grammar checker, just a good-enough approximator. If you want to check whether a document contains errors, you need to define what an error is; and if you want to define it strictly, you need a rule engine of some sort rather than something probabilistic.
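
A toy contrast, with all names hypothetical (a sketch of the distinction, not any real checker's API):

    # Rule engine: an error is exactly what an explicit rule says it is.
    RULES = {"bussin": "bussing", "alot": "a lot"}

    def rule_check(word):
        # Deterministic: same input, same verdict, auditable rule list.
        return RULES.get(word)

    # Probabilistic checker: an "error" is anything a model scores
    # below some cutoff; the verdict shifts with the model, its
    # training data, and the threshold you choose.
    def probabilistic_check(word, score_fn, threshold=0.01):
        return score_fn(word) < threshold

    print(rule_check("bussin"))                          # 'bussing'
    print(probabilistic_check("bussin", lambda w: 0.2))  # False with this stand-in score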

4. efitz No.44339014
I’m glad we have people on HN who could have eliminated decades of effort by tens of thousands of people, had they only been consulted first on the problem.
replies(1): >>44339261 #
5. phoe-krk No.44339261
Which effort? Learning a language is something that can't be eliminated; everyone needs to do it on their own. Writing grammar-checking software, though, can be done a few times and then copied.