
1311 points msoad | 5 comments
abujazar No.35394638
I love how LLMs have got the attention of proper programmers such that the Python mess is getting cleaned up.
faitswulff No.35395088
How so?
1. seydor No.35399707
C has an almost infinite horizon for optimization. Python is good for prototypes, but we are beyond that stage now.
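To make that "horizon for optimization" concrete, here is a minimal sketch (hypothetical code, not from any particular project) of the kind of headroom C-level control gives you: the same dot product written naively, then with four independent accumulators so the compiler and CPU can overlap multiply-adds.

    // Hypothetical sketch: two versions of a dot product, illustrating
    // the low-level control C/C++ gives you over hot loops.
    #include <cstddef>
    #include <cstdio>

    float dot_naive(const float* a, const float* b, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; ++i) s += a[i] * b[i];
        return s;
    }

    // Four independent accumulators break the serial dependency chain,
    // letting several fused multiply-adds be in flight at once.
    float dot_unrolled(const float* a, const float* b, size_t n) {
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += a[i]     * b[i];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        for (; i < n; ++i) s0 += a[i] * b[i];  // leftover elements
        return (s0 + s1) + (s2 + s3);
    }

    int main() {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {1, 1, 1, 1, 1, 1, 1, 1};
        std::printf("%f %f\n", dot_naive(a, b, 8), dot_unrolled(a, b, 8));  // 36 36
    }

And that is before vector intrinsics, cache blocking, or quantized weights, which is roughly why the horizon feels infinite.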
2. lostmsu No.35400522
99% of LLM evaluation with PyTorch was already done in C++.

These .cpp projects don't improve performance; they just drop the dependencies that are only needed for training and experimentation.

3. seydor No.35400556
Optimization isn't just about speed. As you said, dropping dependencies makes it portable, embeddable, and more versatile.
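As a sketch of what "embeddable" means in practice (all names here, Model, load, generate, are hypothetical stand-ins, not any real library's API): the entire inference surface can be a couple of plain C++ functions an application links against, with no interpreter or container in sight.

    // Hypothetical sketch of an embeddable inference API.
    // Model/load/generate are illustrative stand-ins, not a real library.
    #include <iostream>
    #include <string>

    struct Model {
        std::string weights_path;  // a real project would hold mmap'd weights here
    };

    Model load(const std::string& path) {
        return Model{path};  // stand-in for actual weight loading
    }

    std::string generate(const Model& m, const std::string& prompt) {
        // stand-in for the actual decode loop
        return "(" + std::to_string(prompt.size()) + " prompt chars, weights: "
               + m.weights_path + ")";
    }

    int main() {
        Model m = load("weights.bin");  // hypothetical filename
        std::cout << generate(m, "Hello") << "\n";
    }

Two functions and a struct is the whole integration cost, which is what makes the dependency-free builds versatile even when they are no faster.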
4. jart No.35405181
It's also nice to not lose your mind over how crazy Python and Docker are, when all you want to do is run inference in a shell script as though it were the `cat` command. That sacred cow is going to have to come out of the temple sooner or later, and when that happens, people are going to think, wow, it's just a cow.
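A minimal sketch of that cat-like usage (a hypothetical program, not jart's actual tooling): a binary that reads the prompt from stdin and streams tokens to stdout, so it composes with pipes like any other Unix filter.

    // Hypothetical sketch: inference as a Unix filter.
    // generate_token is a stand-in for a real decode step.
    #include <iostream>
    #include <sstream>
    #include <string>

    std::string generate_token(const std::string& context) {
        // a real implementation would run one forward pass here
        return context.empty() ? "" : " <tok>";
    }

    int main() {
        std::ostringstream buf;
        buf << std::cin.rdbuf();            // read the whole prompt from stdin
        std::string context = buf.str();

        for (int i = 0; i < 16; ++i) {      // fixed token budget for the sketch
            std::string tok = generate_token(context);
            if (tok.empty()) break;
            std::cout << tok << std::flush; // stream output as it is produced
            context += tok;
        }
        std::cout << "\n";
    }

Used exactly like cat:

    echo "why is the sky blue?" | ./llm > answer.txt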
5. Max-Limelihood No.35442417
Have you tried Julia for this instead?