DeathArrow ◴[] No.42058383[source]
I think neural nets are just a subset of machine learning techniques.

I wonder what would have happened if we poured the same amount of money, talent and hardware into SVMs, random forests, KNN, etc.

I'm not saying that transformers, LLMs, deep learning, and the other great things that have happened in the neural network space aren't very valuable, because they are.

But I think in the future we should also study other options which might be better suited than neural networks for some classes of problems.

Can a very large and expensive LLM do sentiment analysis or classification? Yes, it can. But so can a simple SVM or KNN, sometimes even better.

I saw some YouTube coders making calls to OpenAI's o1 model for some very simple classification tasks. That isn't the best tool for the job.
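
For instance, here is a minimal sketch (toy, made-up data) of that same kind of classification with a linear SVM and KNN in scikit-learn, assuming sklearn is installed:

    # Compare a linear SVM and KNN on a tiny, hypothetical sentiment dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    texts = ["great product, loved it", "terrible, waste of money",
             "works as expected", "broke after one day"]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    for clf in (LinearSVC(), KNeighborsClassifier(n_neighbors=3)):
        model = make_pipeline(TfidfVectorizer(), clf)
        model.fit(texts, labels)
        print(type(clf).__name__, model.predict(["loved it, works great"]))

No GPU, no API call, and it runs in well under a second.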

jasode ◴[] No.42059813[source]
>I wonder what would have happened if we poured the same amount of money, talent and hardware into SVMs, random forests, KNN, etc.

But that's backwards from how new techniques and progress are made. What actually happens is that somebody (maybe a student at a university) has an insight or a new idea for an algorithm that's near $0 cost to implement as a proof of concept. Then everybody else notices the improvement, and the extra millions/billions get directed toward it.

New ideas -- ones that didn't cost much at the start -- ATTRACT the follow-on billions in investment.

This timeline of tech progress in computer science is the opposite of other disciplines such as materials science or the bio-medical fields. Trying to discover the next super-alloy or cancer drug requires expensive experiments. Manipulating atoms & molecules requires very expensive specialized equipment. In contrast, computer science experiments can be cheap. You just need a clever insight.

An example of that was the 2012 AlexNet image recognition algorithm that blew all the other approaches out of the water. Alex Krizhevsky had a new insight: run a convolutional neural network on CUDA. He bought 2 NVIDIA cards (GTX 580 3GB GPUs) from Amazon. It didn't require NASA levels of investment at the start to implement his idea. Once everybody else noticed his superior results, the billions began pouring in to iterate/refine on CNNs.
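
(For a sense of scale today, a rough sketch -- not AlexNet itself -- of how little code a small CNN on a single consumer GPU takes with PyTorch; assumes torch and a CUDA device are available, and uses fake data.)

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(                       # tiny CNN, nothing like AlexNet's size
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 8 * 8, 10),
    ).to(device)

    x = torch.randn(8, 3, 32, 32, device=device)   # fake batch of 32x32 RGB images
    y = torch.randint(0, 10, (8,), device=device)  # fake labels
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()                                # gradients for one training step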

Both the "attention mechanism" and the refinement of the "transformer architecture" were also cheap to prove out at a very small scale. In 2014, Jakob Uszkoreit thought about using an "attention mechanism" instead of RNNs and LSTMs for machine translation. It didn't cost billions to come up with that idea. Yes, ChatGPT-the-product cost billions, but the "attention mechanism" algorithm did not.
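
The core of that idea really is small. A minimal sketch of scaled dot-product attention in plain NumPy (toy shapes, illustration only, not the full transformer):

    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])          # query/key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
        return weights @ V                               # weighted sum of values

    Q = np.random.randn(4, 8)   # 4 query positions, dimension 8
    K = np.random.randn(6, 8)   # 6 key positions
    V = np.random.randn(6, 8)
    print(attention(Q, K, V).shape)   # (4, 8)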

>into SVMs, random forests, KNN, etc.

If anyone has found an insight into SVMs, KNN, etc. that everybody else in the industry has overlooked, they can run cheap experiments to prove it. E.g., the entire Wikipedia text download is currently only ~25GB. Run the new SVM classification idea on that corpus. Very low-cost experiments on computer science algorithms can still be done in the proverbial "home garage".
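
A rough sketch of what that "home garage" run could look like with scikit-learn: stream the corpus through a hashing vectorizer and train a linear (SVM-style) classifier incrementally, so it fits in ordinary RAM. The corpus loader and labels here are hypothetical placeholders.

    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    vectorizer = HashingVectorizer(n_features=2**20)   # no vocabulary held in memory
    clf = SGDClassifier(loss="hinge")                  # hinge loss = linear SVM

    for docs, labels in stream_batches("enwiki.txt"):  # hypothetical batch iterator
        clf.partial_fit(vectorizer.transform(docs), labels, classes=[0, 1])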

1. FrustratedMonky ◴[] No.42061648[source]
"$0 cost to implement a proof-of concept"

This falls apart for breakthroughs where the proof of concept is not zero-cost.

I think that is what the parent is referring to: other technologies might have more potential, but would take money to build out.