306 points slyall | 2 comments

DeathArrow ◴[] No.42058383[source]
I think neural nets are just a subset of machine learning techniques.

I wonder what would have happened if we poured the same amount of money, talent and hardware into SVMs, random forests, KNN, etc.

I'm not saying that transformers, LLMs, deep learning and the other great things that have happened in the neural network space aren't valuable, because they are.

But I think in the future we should also study other options which might be better suited than neural networks for some classes of problems.

Can a very large and expensive LLM do sentiment analysis or classification? Yes, it can. But so can simple SVMs and KNN, and sometimes they do it even better.

I saw some YouTube coders making calls to OpenAI's o1 model for some very simple classification tasks. That isn't the best tool for the job.
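
For basic sentiment/classification work, something like this scikit-learn sketch is usually plenty (the tiny example data and the choice of a linear SVM are just placeholders for illustration):

    # Minimal sentiment classifier: TF-IDF features + a linear SVM.
    # Toy data for illustration only; swap in a real labelled dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = ["loved it", "great product", "terrible service", "waste of money"]
    labels = ["pos", "pos", "neg", "neg"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(texts, labels)

    print(clf.predict(["really great", "awful experience"]))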

replies(11): >>42058980 #>>42059047 #>>42059100 #>>42059544 #>>42059813 #>>42060244 #>>42060447 #>>42060561 #>>42060833 #>>42062658 #>>42088131 #
mentalgear ◴[] No.42059047[source]
KANs (Kolmogorov-Arnold Networks) are one example of a promising exploration pathway to real AGI, with the advantage of full explainability.
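
For anyone who hasn't looked at them: the core idea is that every edge carries its own learnable univariate function instead of a scalar weight feeding a fixed activation, and those per-edge curves are what you can plot and inspect. A toy sketch of just that structure (my own illustration, not the pykan implementation):

    # Toy Kolmogorov-Arnold-style layer: each edge j -> i has its own
    # learnable 1-D function, here a weighted sum of Gaussian bumps.
    import numpy as np

    class ToyKANLayer:
        def __init__(self, n_in, n_out, n_basis=8, rng=None):
            rng = rng or np.random.default_rng(0)
            self.centers = np.linspace(-1, 1, n_basis)   # fixed basis grid
            self.width = 2.0 / n_basis
            # one coefficient vector per edge: (n_out, n_in, n_basis)
            self.coef = rng.normal(0, 0.1, (n_out, n_in, n_basis))

        def edge_fn(self, i, j, x):
            # the learned univariate function on edge j -> i; this curve is
            # what you can plot and inspect, which is the KAN selling point
            basis = np.exp(-((x[..., None] - self.centers) / self.width) ** 2)
            return basis @ self.coef[i, j]

        def forward(self, x):                            # x: (batch, n_in)
            out = np.zeros((x.shape[0], self.coef.shape[0]))
            for i in range(self.coef.shape[0]):
                for j in range(self.coef.shape[1]):
                    out[:, i] += self.edge_fn(i, j, x[:, j])
            return out

    layer = ToyKANLayer(n_in=2, n_out=3)
    x = np.random.default_rng(1).uniform(-1, 1, (4, 2))
    print(layer.forward(x).shape)  # (4, 3)
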
replies(2): >>42059624 #>>42073900 #
astrange ◴[] No.42059624[source]
"Explainable" is a strong word.

As a simple example, if you ask a question and part of the answer is quoted from memory out of a book, that text is not computed/reasoned over by the AI and so doesn't have an "explanation".

But I also suspect that any AGI would necessarily produce answers it can't explain. That's called intuition.

replies(1): >>42059743 #
diffeomorphism ◴[] No.42059743[source]
Why? If I ask you what the height of the Empire State Building is, then a reference is a great, explainable answer.
replies(1): >>42061157 #
astrange ◴[] No.42061157[source]
It wouldn't be a reference; "explanation" for an LLM means it tells you which of its neurons were used to create the answer, i.e. what internal computations it did and which parts of the input it read. Their architecture isn't capable of referencing things.

What you'd get is an explanation saying "it quoted this verbatim", or possibly "the top neuron is used to output the word 'State' after the word 'Empire'".

You can try out a system here: https://monitor.transluce.org/dashboard/chat
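
As a rough illustration of what "which neurons were used" means mechanically, here's a generic PyTorch-hook sketch on GPT-2 (not how the Transluce monitor is actually built):

    # Record MLP activations for one forward pass of GPT-2 and list the
    # most active neurons for the final token. Generic sketch only.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    acts = {}
    for idx, block in enumerate(model.transformer.h):
        block.mlp.act.register_forward_hook(
            lambda m, inp, out, idx=idx: acts.update({idx: out.detach()}))

    ids = tok("The Empire State Building is", return_tensors="pt")
    with torch.no_grad():
        model(**ids)

    # top activations at the last token position, per layer
    for layer, a in acts.items():
        vals, neurons = a[0, -1].topk(3)
        print(f"layer {layer:2d}: neurons {neurons.tolist()} -> {vals.tolist()}")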

Of course the AI could incorporate web search, but then what if the explanation is just "it did a web search and that was the first result"? It seems pretty difficult to recursively make every external tool also explainable…

replies(2): >>42061585 #>>42061651 #
diffeomorphism ◴[] No.42061651[source]
Then you should have a stronger notion of "explanation". Why were these specific neurons activated?

Simplest example: OCR. A network identifying digits can often be explained as recognizing lines, curves, number of segments, etc. That is an explanation, not "computer says it looks like an 8".
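
For digits you can get surprisingly far with a linear model, where the per-class weight maps literally show which strokes and segments argue for or against an "8" (a quick sklearn sketch of my own, just to illustrate):

    # Train a linear classifier on 8x8 digit images and look at the
    # per-class weight maps: bright/dark pixels mark the strokes that
    # push towards or against each digit.
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression

    X, y = load_digits(return_X_y=True)
    clf = LogisticRegression(max_iter=2000).fit(X, y)

    fig, axes = plt.subplots(2, 5, figsize=(8, 4))
    for digit, ax in enumerate(axes.ravel()):
        ax.imshow(clf.coef_[digit].reshape(8, 8), cmap="coolwarm")
        ax.set_title(digit)
        ax.axis("off")
    plt.show()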

replies(1): >>42065185 #
krisoft ◴[] No.42065185[source]
But can humans do that? If you show someone a picture of a cat, can they "explain" why it is a cat and not a dog or a pumpkin?

And is that explanation how they actually obtained the "cat-ness" of the picture, or do they just see that it is a cat, immediately and obviously, and when you ask them for an explanation they come up with some explaining noises until you are satisfied?

replies(2): >>42067149 #>>42067384 #
1. diffeomorphism ◴[] No.42067149{7}[source]
Wild cat, house cat, lynx, ...? Sure, they can. They will tell you about proportions, the shape of the ears, size compared to other objects in the picture, etc.

For cat vs pumpkin they will think you are making fun of them, but it very much is explainable. Though now I am picturing a puzzle about finding orange cats in a picture of a pumpkin field.

replies(1): >>42075270 #
2. krisoft ◴[] No.42075270[source]
> They will tell you about proportions, shape of the ears, size as compared to other objects in the picture etc.

But is that how they know the image is a cat, or is that some after-the-fact, tacked-on explaining?

Let me give you an example to better explain what I mean. There are these "botanical identification" books. You take a specimen unknown to you, and the book asks questions like "What shape are the leaves?", "Is the stem woody or not?", "How many petals does the flower have?" It leads you through a process and at the end ideally gives you the specific Latin name of the species. (Or at least narrows it down.)

Vs the act of looking at a rose and knowing, without having to expend any further energy, that it is a rose. And then, if someone questions you, you can spend some energy counting petals, describing leaf shapes, finding the thorns and pointing them out, etc.

It sounds like most people who want "explainable AI" want the first kind of thing: the blind and amnesiac botanist with the plant identification book. Vs what humans are actually doing, which is more like a classification model with a tacked-on bullshit generator that reasons about the classification model's outputs without actually having any in-depth insight into them.
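
In code, the "botanist with the key" flavour of explanation is basically a decision tree, where the rule path is the explanation. A minimal sklearn sketch on the classic iris measurements (my example, picked because it is literally petals and sepals):

    # An interpretable "identification key": a small decision tree whose
    # printed rules read like the botanist's question-and-answer book.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(iris.data, iris.target)

    # prints the tree as nested "petal width (cm) <= ..." rules,
    # i.e. the full chain of questions that led to each species
    print(export_text(tree, feature_names=list(iris.feature_names)))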

And it gets worse the deeper you dig. How do you know that is an ear? How do you know its shape? How do you know the animal is furry?