684 points prettyblocks | 12 comments

I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your workflow?
1. cwmoore No.42786641
I'm playing with the idea of identifying logical fallacies stated by live broadcasters.
replies(8): >>42787010 >>42787653 >>42788090 >>42788889 >>42791080 >>42793882 >>42798043 >>42798458
2. spiritplumber No.42787010
That's fantastic, and I'd love to help.
replies(1): >>42787121
3. cwmoore No.42787121
So far, not much beyond this list of targets to identify: https://en.wikipedia.org/wiki/List_of_fallacies
4. genewitch No.42787653
I have several rhetoric and logic books of the sort you might use for training or whatever, and one of my best friends got a doctorate in a tangential field, and may have materials and insights.

We actually just threw a relationship-curative app online in 17 hours around Thanksgiving, so they "owe" me, as it were.

I'm one of those people who can do anything practical with tech and the like, but I have no imagination for it, so when someone mentions something that I think would be beneficial for my fellow humans, I get this immense desire to at least cheer on, if not ask to help.

5. petesergeant No.42788090
I'll be seriously impressed if you make this work; I spend all day, every day, for work trying to make more capable models perform basic reasoning, and often failing :-P
6. JayStavis No.42788889
Automation to identify logical/rhetorical fallacies is a long-held dream of mine; I'd love to follow along with this project if it picks up somehow.
7. vaylian No.42791080
LLMs are notoriously unreliable with mathematics and logic. I wish you the best of luck, because this would nevertheless be an awesome tool to have.
8. grisaitis No.42793882
Even better, podcasters: the data is probably easier to fetch as well.
9. thesz No.42798043
I think this is the best idea thus far!

Keep up the good work, good fellow. ;)

10. halJordan No.42798458
Logical fallacies are oftentimes perfectly legitimate in anything that is not predicate logic. I'm not wrong for saying "The Surgeon General says smoking is bad; you shouldn't smoke." That's a perfectly reasonable appeal to authority.
replies(1): >>42799109
11. genewitch No.42799109
It's still a fallacy, though; I hope we can agree on that part. If you have something map-reducing audio into timestamps of fallacies, tagged by who said them, it becomes gamified, and you can use the information shown to decide how much weight to give their words (a rough sketch of that map step follows below).
replies(1): >>42838702
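A minimal sketch of that map step, assuming speaker-labeled segments from a diarization pass and a small model served locally by Ollama. The model name, segment schema, prompt, and label list below are illustrative assumptions, not anything specified in the thread:

    import requests

    # Labels drawn loosely from the Wikipedia list linked upthread; "none"
    # stays last so the substring match prefers a real fallacy name.
    FALLACIES = ["appeal to emotion", "appeal to authority",
                 "argumentum ad baculum", "ad hominem", "straw man", "none"]

    PROMPT = ("Answer with exactly one label from this list: {labels}.\n"
              "Utterance: {text}\nLabel:")

    def classify(text, model="llama3.2:3b"):
        # Map step: ask a small local model, via Ollama's HTTP API, for a label.
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": model,
                                "prompt": PROMPT.format(
                                    labels=", ".join(FALLACIES), text=text),
                                "stream": False},
                          timeout=120)
        r.raise_for_status()
        answer = r.json()["response"].strip().lower()
        # Fall back to "none" if the model rambles instead of picking a label.
        return next((f for f in FALLACIES if f in answer), "none")

    def fallacy_timestamps(segments):
        # segments: e.g. [{"speaker": "A", "start": 12.3, "text": "..."}, ...]
        hits = []
        for seg in segments:
            label = classify(seg["text"])
            if label != "none":
                hits.append({"speaker": seg["speaker"],
                             "start": seg["start"], "fallacy": label})
        return hits

Given vaylian's point above about LLMs and logic, a 0.5B-3B model will mislabel plenty of utterances, so output like this is probably best treated as candidates for human review rather than verdicts.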
12. genewitch No.42838702
BTW, I have verified that whisper-diarization works, at least on my machine, so all this needs is an LLM finetuned on rhetoric and the type of logic used when discussing fallacies. I know a lot of people like to call it "formal logic" or whatever, but the way I understood it, both in college and from my own reading of the books, is that the only true formal logic is tautological; everything else is varying shades thereof. Examples: blatant appeals to emotion, uninformed native advertising (appeal to authority, among others), argumentum ad baculum (aside: if you typo that, it names a specific bone in the male canine's anatomy, and I think that's hardly an accident).

I've got no idea how to finetune or train an LLM. I know how to run inference, lots of it. I also know how to scan and OCR texts and feed a data ingestion pipeline. I know how to finetune a Stable Diffusion model, but I doubt that software works with language models...
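A possible reduce step for the gamified per-speaker weighting described upthread, again only a sketch: the 1 / (1 + hits-per-minute) rule is an arbitrary illustration, not an established metric.

    from collections import Counter

    def scoreboard(hits, total_minutes):
        # hits: the output of fallacy_timestamps() from the earlier sketch
        counts = Counter(h["speaker"] for h in hits)
        return {speaker: {"fallacies": n,
                          "per_minute": n / total_minutes,
                          # Arbitrary illustrative weight: more fallacies
                          # per minute pushes the weight toward zero.
                          "weight": 1 / (1 + n / total_minutes)}
                for speaker, n in counts.items()}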