
1479 points sandslash | 8 comments
practal No.44315505
Great talk, thanks for putting it online so quickly. I liked the idea of making the generation / verification loop go brrr, and one way to do this is to make verification not just a human task, but a machine task, where possible.

Yes, I am talking about formal verification, of course!

That also goes nicely together with "keeping the AI on a tight leash". It seems to clash though with "English is the new programming language". So the question is, can you hide the formal stuff under the hood, just like you can hide a calculator tool for arithmetic? Use informal English on the surface, while some of it is interpreted as a formal expression, put to work, and then reflected back in English? I think that is possible, if you have a formal language and logic that is flexible enough, and close enough to informal English.

Yes, I am talking about abstraction logic [1], of course :-)

So the goal would be to have English (German, ...) as the ONLY programming language, invisibly backed underneath by abstraction logic.

[1] http://abstractionlogic.com
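A minimal sketch of the "calculator under the hood" idea described above: accept informal English on the surface, extract the fragment that can be interpreted formally, evaluate it with a strict checker that rejects anything outside the formal core, and reflect the result back in English. Everything here (the regex, the tiny evaluator, the function names) is an illustrative assumption, not the actual machinery of abstraction logic:

```python
import ast
import operator
import re

# Only these operations are part of the "formal core" in this toy sketch.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_formal(node):
    """Evaluate a tiny arithmetic AST; reject anything outside the formal core."""
    if isinstance(node, ast.Expression):
        return eval_formal(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_formal(node.left), eval_formal(node.right))
    raise ValueError("outside the formal fragment")

def answer(english: str) -> str:
    """English in, English out; the formal evaluation is invisible in between."""
    # Naive extraction: grab the first arithmetic-looking span (hypothetical heuristic).
    m = re.search(r"[\d(][\d\s+*/().-]*", english)
    if not m:
        return "I couldn't find a formal fragment to verify."
    value = eval_formal(ast.parse(m.group().strip(), mode="eval"))
    return f"The answer is {value}."
```

For example, `answer("What is 2 + 3 * 4?")` returns `"The answer is 14."` — the user only ever sees English, but the arithmetic was done by a strict evaluator rather than by free-form generation, which is the "tight leash" part.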

replies(6): >>44315728 #>>44316008 #>>44319668 #>>44320353 #>>44322194 #>>44323749 #
kordlessagain No.44319668
This thread perfectly captures what Karpathy was getting at. We're witnessing a fundamental shift where the interface to computing is changing from formal syntax to natural language. But you can see people struggling to let go of the formal foundations they've built their careers on.
replies(8): >>44319838 #>>44319876 #>>44319932 #>>44320126 #>>44321046 #>>44321371 #>>44322384 #>>44326827 #
mkleczek No.44320126
This is why I call all this AI stuff BS.

Using a formal language is a feature, not a bug. It is a cornerstone of all human engineering and scientific activity and is the _reason_ why these disciplines are successful.

What you are describing (ie. ditching formal and using natural language) is moving humanity back towards magical thinking, shamanism and witchcraft.

replies(3): >>44320228 #>>44321653 #>>44321882 #
1. diggan No.44320228
> is the _reason_ why these disciplines

Would you say that ML isn't a successful discipline? ML is basically balancing between "formal language" (papers/algorithms) and "non-deterministic outcomes" (weights/inference) yet it seems useful in a wide range of applications, even if you don't think about LLMs at all.

> towards magical thinking, shamanism and witchcraft.

I kind of feel like if you want to argue that something is bullshit, you probably shouldn't call it "magical thinking, shamanism and witchcraft": no matter how good your point is, if you end up basically re-inventing the witch hunt, how is what you're saying not bullshit too, just in the other direction?

replies(3): >>44320267 #>>44320397 #>>44322142 #
2. mkleczek No.44320267
> Would you say that ML isn't a successful discipline? ML is basically balancing between "formal language" (papers/algorithms) and "non-deterministic outcomes" (weights/inference) yet it seems useful in a wide range of applications

The usefulness of LLMs has yet to be proven. So far there is more marketing in it than actual, real-world results. Especially compared to civil and mechanical engineering, maths, electrical engineering and the plethora of other disciplines and methods that deliver real-world results.

replies(1): >>44320321 #
3. diggan No.44320321
> Usefulness of LLMs has yet to be proven.

What about ML (Machine Learning) as a whole? I wrote ML instead of LLMs precisely to avoid this tangent. Are your feelings about that field the same?

replies(1): >>44320599 #
4. lelanthran No.44320397
> Would you say that ML isn't a successful discipline?

Not yet it isn't; all I am seeing are tools to replace programmers and artists :-/

Where are the tools to take in 400 recipes and spit them all out in a formal structure (a poster upthread literally gave up on trying to get an LLM to do this)? Where are the tools that can replace the 90% of office staff who aren't programmers?

Maybe it's a successful low-code industry right now, but it's not really a successful AI industry.

replies(1): >>44320434 #
5. diggan No.44320434
> Not yet it isn't; all I am seeing are tools to replace programmers and artists :-/

You're missing a huge part of the ecosystem, ML is so much more than just "generative AI", which seems to be the extent of your experience so far.

Weather predictions, computer vision, speech recognition, medicine research and more are already improved by various machine learning techniques, and already was before the current LLM/generative AI. Wikipedia has a list of ~50 topics where ML is already being used, in production, today ( https://en.wikipedia.org/wiki/Machine_learning#Applications ) if you're feeling curious about exploring the ecosystem more.

replies(1): >>44321169 #
6. mkleczek No.44320599
> What about ML (Machine Learning) as a whole? I kind of wrote ML instead of LLMs just to avoid this specific tangent. Are you feelings about that field the same?

No - I only expressed my thoughts about using natural language for computing.

7. lelanthran No.44321169
> You're missing a huge part of the ecosystem, ML is so much more than just "generative AI", which seems to be the extent of your experience so far.

I'm not missing anything; I'm saying the current boom is being fueled by claims of "replacing workers", but the only class of AI being funded to do that are LLMs, and the only class of worker that might get replaced are programmers and artists.

Karpathy's video, and this thread, are not about the un-hyped ML stuff that has been employed in various disciplines since 2010 and has not been proposed as a replacement for workers.

8. skydhash No.44322142
ML is basically greedy determinism. If we can't get the correct answer, we settle for one that is most likely wrong but gives us enough information to make a decision. So the answer itself is not useful, but its nature is.

If we take object detection in computer vision, the detection by itself is not accurate, but it helps with resource management. Instead of expensive continuous monitoring, we now have something cheaper that makes the expensive part discrete.
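The resource-management point above can be sketched as a two-stage pipeline: a cheap, imperfect detector decides *when* to run an expensive analysis, turning continuous expensive monitoring into discrete, on-demand work. The detector, frame format, and threshold here are made up for the illustration:

```python
def cheap_detector(frame) -> float:
    """Stand-in for a fast, inaccurate model: returns a detection score in [0, 1]."""
    return frame["motion"]

def expensive_analysis(frame) -> str:
    """Stand-in for the costly processing we only want to run when needed."""
    return f"full analysis of frame {frame['id']}"

def monitor(frames, threshold=0.5):
    """Run the expensive step only where the cheap detector fires."""
    results, expensive_calls = [], 0
    for frame in frames:
        if cheap_detector(frame) >= threshold:  # imperfect, but cheap to evaluate
            results.append(expensive_analysis(frame))
            expensive_calls += 1
    return results, expensive_calls
```

On a stream of mostly-quiet frames, `expensive_calls` stays small even though every frame was "monitored" — the detector's answers can be individually wrong, yet the pipeline as a whole is still cheaper than analyzing everything.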

But something deterministic would always be preferable, because you only need to do verification once.