
645 points by ReadCarlBarks | 1 comment
oersted No.44337270
Fantastic work. I was so fed up with Grammarly and instantly installed this.

I'm just a bit skeptical about this quote:

> Harper takes advantage of decades of natural language research to analyze exactly how your words come together.

But it's just a rather small collection of hard-coded rules:

https://docs.rs/harper-core/latest/harper_core/linting/trait...
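
To make that concrete, here is a minimal Rust sketch of what a rule engine of this shape looks like. The `Rule` and `Lint` types below are hypothetical stand-ins for illustration, not Harper's actual linting trait:

    // A minimal sketch of the rule-engine idea. The names here are
    // hypothetical, not Harper's actual trait from harper-core.

    struct Lint {
        word_index: usize,
        message: String,
    }

    trait Rule {
        fn check(&self, words: &[&str]) -> Vec<Lint>;
    }

    /// Hard-coded rule: flag an immediately repeated word ("the the").
    struct RepeatedWord;

    impl Rule for RepeatedWord {
        fn check(&self, words: &[&str]) -> Vec<Lint> {
            words
                .windows(2)
                .enumerate()
                .filter(|(_, pair)| pair[0].eq_ignore_ascii_case(pair[1]))
                .map(|(i, pair)| Lint {
                    word_index: i + 1,
                    message: format!("repeated word: `{}`", pair[1]),
                })
                .collect()
        }
    }

    fn main() {
        let words: Vec<&str> = "this sentence has has a bug".split_whitespace().collect();
        let rules: Vec<Box<dyn Rule>> = vec![Box::new(RepeatedWord)];
        for rule in &rules {
            for lint in rule.check(&words) {
                println!("word {}: {}", lint.word_index, lint.message);
            }
        }
    }

Each new check is another hard-coded `Rule` implementation; the engine just runs them all over the token stream.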

Where did the decades of classical NLP go? No gold-standard resources like WordNet? No statistical methods?
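
For reference, the statistical toolbox being alluded to can be as simple as counting bigrams over a reference corpus and flagging word pairs the corpus never produced. A toy Rust sketch, with a made-up corpus standing in for a real gold-standard one:

    // Toy illustration of statistical NLP: score bigrams against
    // corpus counts and flag unseen ones. The corpus is invented;
    // a real system would use a large annotated reference corpus.

    use std::collections::HashMap;

    fn bigram_counts(corpus: &str) -> HashMap<(String, String), u32> {
        let words: Vec<String> = corpus
            .split_whitespace()
            .map(|w| w.to_lowercase())
            .collect();
        let mut counts = HashMap::new();
        for pair in words.windows(2) {
            *counts
                .entry((pair[0].clone(), pair[1].clone()))
                .or_insert(0) += 1;
        }
        counts
    }

    fn main() {
        let corpus = "the cat sat on the mat the cat ate the fish";
        let counts = bigram_counts(corpus);

        let sentence = "the cat sat on on the mat";
        let words: Vec<String> = sentence
            .split_whitespace()
            .map(|w| w.to_lowercase())
            .collect();
        for pair in words.windows(2) {
            let seen = counts
                .get(&(pair[0].clone(), pair[1].clone()))
                .copied()
                .unwrap_or(0);
            if seen == 0 {
                println!("unlikely bigram: `{} {}`", pair[0], pair[1]);
            }
        }
    }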

There's nothing wrong with this; the solution is a good pragmatic choice. It's just interesting how our collective consciousness of expansive scientific fields can be so thoroughly purged when a new paradigm arises.

LLMs have completely overshadowed the ML NLP methods of 10 years ago, which themselves replaced decades of statistical NLP work, which in turn replaced a few more decades of symbolic grammar-based NLP work.

Progress is good, but it's important not to forget all those hard-earned lessons; it can sometimes be a real superpower to be able to leverage that old toolbox in modern contexts. In many ways, the methods we had in the 60s for solving this problem were more advanced than what Harper is doing here by naively reinventing the wheel.

replies(2): >>44338799 >>44338850
chilipepperhott No.44338850
I'll admit it's something of a bold label, but there is truth in it.

Before our rule engine has a chance to touch the document, we run several pre-processing steps that imbue the words it reads with semantic meaning.
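
For illustration, such a pre-processing pass might resemble the Rust sketch below, which annotates tokens with parts of speech so that downstream rules can match on grammar rather than raw strings. The lexicon and tag set here are invented; this is not Harper's actual pipeline:

    // Sketch of a tagging pre-processing pass. The lexicon and tags
    // are made up for illustration; Harper's real pipeline differs.

    use std::collections::HashMap;

    #[derive(Debug, Clone, Copy)]
    enum Pos { Noun, Verb, Det, Unknown }

    struct Token<'a> {
        text: &'a str,
        pos: Pos,
    }

    fn tag<'a>(text: &'a str, lexicon: &HashMap<&str, Pos>) -> Vec<Token<'a>> {
        text.split_whitespace()
            .map(|w| Token {
                text: w,
                pos: *lexicon
                    .get(w.to_lowercase().as_str())
                    .unwrap_or(&Pos::Unknown),
            })
            .collect()
    }

    fn main() {
        let lexicon: HashMap<&str, Pos> = [
            ("the", Pos::Det),
            ("dog", Pos::Noun),
            ("barks", Pos::Verb),
        ]
        .into_iter()
        .collect();

        // Downstream rules can now match on `pos` rather than literal
        // strings, e.g. "determiner followed directly by a verb".
        for tok in tag("The dog barks", &lexicon) {
            println!("{:?}\t{}", tok.pos, tok.text);
        }
    }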

> LLMs have completely overshadowed ML NLP methods from 10 years ago, and they themselves replaced decades statistical NLP work, which also replaced another few decades of symbolic grammar-based NLP work.

This is a drastic oversimplification. I'll admit that transformer-based approaches are indeed quite prevalent, but I do not believe that "LLMs" in the conventional sense are "replacing" a significant fraction of NLP research.

I appreciate your skepticism and attention to detail.

replies(1): >>44348446
s1291 No.44348446
Here's an article you might find interesting: https://www.quantamagazine.org/when-chatgpt-broke-an-entire-...