
224 points azhenley | 18 comments
1. mehulashah ◴[] No.45074995[source]
I do believe it’s time for systems folks to take a hard look at building systems abstractions on top of LLMs as they did 50 years ago on top of CPUs. LLMs are the new universal computing machines, but we build with them like we built computers in the 1950s - one computer at a time.
replies(6): >>45075043 #>>45075096 #>>45075104 #>>45075134 #>>45075169 #>>45075192 #
2. csmpltn ◴[] No.45075043[source]
This is the wrong take.

There's a point beyond which LLMs are overkill, where a simple script or a "classic" program can outdo the LLM in speed, accuracy, scalability, price, and more. LLMs aren't supposed to solve "universal computing". They are another tool in the toolbox, and it's all about using the right tool for the problem.

replies(1): >>45075057 #
3. mehulashah ◴[] No.45075057[source]
I shared your opinion for a while. But, that’s not what’s happening. People are using them for everything. When they do, expectations are set. Vendors will adjust and so will the rest of the industry. It’s happening.
replies(5): >>45075136 #>>45075186 #>>45075244 #>>45075250 #>>45075366 #
4. vjvjvjvjghv ◴[] No.45075096[source]
Using LLMs for universal computing would be crazy inefficient. On top of that, they aren't deterministic (at the moment), so building systems on top of them would be shaky.
5. lloydatkinson ◴[] No.45075104[source]
I can’t believe such hyperbolic fantasies are on HN now too.
6. eldenring ◴[] No.45075134[source]
Hmmmm, I don't think that would make sense. The closest analogy is working with humans. The easiest way to work with a human isn't a thin, limited API, but rather to give them context and work together (employment). I think the future of software will look more like Claude Code: it lets the model work in a similar space as us, where it can intelligently seek out information and use tools as a human would.
7. narrator ◴[] No.45075169[source]
LLMs are already hard to interpret as it is. Why not just have some minimal Linux so that at least someone with a Linux background can observe and easily make sense of its actions? If we make an unfamiliar, opaque virtual machine, it's likely to be even less user-friendly.
8. baby_souffle ◴[] No.45075186{3}[source]
> People are using them for everything

Let’s see if that continues to be the case after some time. On a long enough timeline, that deterministic 100-line Python script is going to beat the non-deterministic LLM.

They are a tool. They're not an omnitool.

replies(2): >>45075321 #>>45078056 #
9. moduspol ◴[] No.45075192[source]
CPUs are deterministic! I think it’s going to be inherently limiting how far you can abstract above something non-deterministic.
replies(1): >>45077015 #
10. kibwen ◴[] No.45075244{3}[source]
At the risk of being blunt, this comment reads like someone in the throes of religious euphoria. It makes no sense to call LLMs "the new universal computing machines". Please take a step back and reevaluate the media bubbles you're pickling your brain in.
replies(2): >>45077023 #>>45078030 #
11. greenavocado ◴[] No.45075250{3}[source]
We need fusion power plants as soon as possible, because pretty soon we will need to boil the oceans just to write a Hello World in a hypothetical LLM VM, never mind actually running the thing, which will require a Dyson sphere at minimum.
replies(1): >>45075304 #
12. dingnuts ◴[] No.45075304{4}[source]
Well, Sam A said that is the plan.
13. Imustaskforhelp ◴[] No.45075321{4}[source]
If all you have is a hammer, everything looks like a screw.

The problem is not the tool; the problem is the people selling the hype, the people accepting the hype as-is, and this "everyone" person I keep hearing about. I wonder why his takes are so often wrong, yet he never admits it...

People can be wrong, and people are wrong in a lot of contexts; I don't think the world is efficient in this sense. We are emotional beings: sell us hype and we will accept it and run with it.

Agreed that it is not an omnitool. Why are we moving toward an inferior product (AI for VMs??) when we already have a superior one (I think)?

I guess some things don't make sense, and if everyone jumps in a well, most people will follow (maybe including myself, if I didn't want to be contrarian). My knowledge of AI is really limited, so all of the above statements could be wrong. And hey, I'm wrong; I usually am.

14. Swizec ◴[] No.45075366{3}[source]
> People are using them [LLMs] for everything

Yeah, and when I was in college, StackOverflow was full of questions like “how do I add 2 numbers with jQuery”. This is normal. The newbies don’t know what they’re doing, and with time they will get enough hard knocks to learn. We’ve all gone through this, and still are in areas that are new to us.

LLMs aren’t gonna solve the fundamentals: Seniors still gotta senior and newbies still gotta learn.

15. mehulashah ◴[] No.45077015[source]
Agree. The non-determinism is a problem. Perhaps systems thinking can help here. After all, traditional computer systems often behave non-deterministically and we’ve learned to make them more reliable.
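That systems instinct can be sketched concretely: wrap the non-deterministic call in a deterministic validator and retry loop, the same pattern used for flaky networks and disks. A minimal sketch in Python, where `unreliable_extract` is a hypothetical stand-in for any LLM call:

```python
import random

def unreliable_extract(text):
    # Stand-in for an LLM call: usually produces valid output,
    # but occasionally returns garbage (non-determinism).
    if random.random() < 0.3:
        return "garbage"
    return text.upper()

def validated_call(fn, arg, validate, max_retries=5):
    # Classic reliability pattern: retry until the output passes a
    # deterministic validator, then fail loudly instead of silently.
    for _ in range(max_retries):
        result = fn(arg)
        if validate(result):
            return result
    raise RuntimeError("no valid output after retries")

random.seed(0)  # seeded only so this sketch is reproducible
result = validated_call(unreliable_extract, "hello",
                        validate=lambda r: r == "HELLO")
print(result)  # prints "HELLO"
```

The key move is that the check is deterministic even though the call isn't, which is exactly how unreliable hardware gets turned into reliable systems.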
16. mehulashah ◴[] No.45077023{4}[source]
Trust me. I’m usually the last to jump on a bandwagon. That said, this is not just my take, but the take of many others that I trust. Andrej Karpathy, Joseph Hellerstein, etc.
17. IX-103 ◴[] No.45078030{4}[source]
There was a time when everybody did arithmetic by hand and "calculator" was a profession. There was a time when spreadsheets were made with ink and paper. There was a time when all programs were written in assembly.

Now, most arithmetic is done by computers, spreadsheets are done by computers, and almost all assembly language is written by computers. People have moved on to higher level programming languages for communicating ideas to computers. Would it really be that surprising to learn that in the future people use natural language to speak with computers?

The company I work for went through a big AI push about a month ago. Before then, almost no code was written by AI. Now, it's the majority of code. I'm not saying that AI coders will be able to replace people, because the AI honestly is just not as capable; if it were interviewing at my company, I would not hire it. But the thing is that I can go from design doc to prototype in less than a day and with $10 of tokens. Sure, I have to make corrections and rewrite some of the more fiddly bits, but it saves me a ton of typing.

And LLMs are not limited to programming. Any computer-based task that you would allow an unpaid intern to complete for you would be a reasonable fit for AI.

18. IX-103 ◴[] No.45078056{4}[source]
They're pretty close to being an omnitool. But like any tool you need to know how to use it.

Asking it to do some common task directly is probably silly when you could instead ask it to write a program that does the task (and add that program to its list of tools if it works).
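That idea amounts to a small tool cache: pay the expensive, non-deterministic generation cost once, then reuse the resulting deterministic program. A minimal sketch in Python, where `generate_program` is a hypothetical stand-in for an LLM codegen call (stubbed with a hard-coded function so the example runs):

```python
TOOLS = {}  # registry: task name -> deterministic program

def generate_program(task):
    # Hypothetical LLM call that writes code for a common task.
    # Stubbed out here; a real system would prompt a model and
    # review/test the generated code before trusting it.
    if task == "word_count":
        return lambda text: len(text.split())
    raise NotImplementedError(task)

def run_task(task, *args):
    # First request pays the expensive, non-deterministic generation
    # cost; every later request runs the cached deterministic tool.
    if task not in TOOLS:
        TOOLS[task] = generate_program(task)
    return TOOLS[task](*args)

print(run_task("word_count", "LLMs are a tool, not an omnitool"))  # prints 7
```

Every call after the first is a plain function call: fast, cheap, and deterministic, which is the point of the "write a program, then add it to the toolbox" approach.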