
S1: A $6 R1 competitor?

(timkellogg.me)
851 points | tkellogg
mtrovo ◴[] No.42951263[source]
I found the discussion around inference scaling with the 'Wait' hack so surreal. The fact that such an ingeniously simple method can impact performance makes me wonder how much low-hanging fruit we're still missing. So weird to think that improvements in a branch of computer science are boiling down to conjuring the right incantation words. How do you even change your mindset to start thinking this way?
replies(16): >>42951704 #>>42951764 #>>42951829 #>>42953577 #>>42954518 #>>42956436 #>>42956535 #>>42956674 #>>42957820 #>>42957909 #>>42958693 #>>42960400 #>>42960464 #>>42961717 #>>42964057 #>>43000399 #
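The 'Wait' hack is the s1 paper's "budget forcing": when the model emits its end-of-thinking token before the thinking budget is spent, the decoder suppresses that token and appends "Wait" so the model keeps reasoning. A toy sketch of the control flow — `toy_generate`, the `</think>` marker, and the round count are illustrative assumptions, not the paper's actual code:

```python
# Toy sketch of "budget forcing" (the 'Wait' hack): if the model emits its
# end-of-thinking marker before the thinking budget is spent, strip the
# marker, append "Wait," and resume decoding so it keeps reasoning.
# toy_generate is a stand-in for a real LLM decode call (assumption).

END = "</think>"  # illustrative end-of-thinking token

def toy_generate(prompt: str) -> str:
    # Pretend the model reasons briefly, then signals it is done.
    return prompt + " ...some reasoning..." + END

def budget_force(prompt: str, extra_rounds: int = 2) -> str:
    text = toy_generate(prompt)
    for _ in range(extra_rounds):
        if text.endswith(END):
            # Suppress the stop token and nudge the model to continue.
            text = text[: -len(END)] + " Wait,"
            text = toy_generate(text)
    return text
```

In the real setting this happens at the token level inside the decode loop (e.g. via a stopping-criteria hook) rather than by string surgery on finished outputs; the string version above just makes the control flow visible.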
cubefox ◴[] No.42951764[source]
Now imagine where we'll be 12 months from now. This article from February 5, 2025 will feel quaint by then. The acceleration keeps increasing. It seems likely we will soon have recursively self-improving AI -- reasoning models which do AI research. This will accelerate the rate of acceleration itself. It sounds stupid to say it, but yes, the singularity is near. Vastly superhuman AI now seems set to arrive within the next few years. Terrifying.
replies(2): >>42952687 #>>42955196 #
zoogeny ◴[] No.42955196[source]
This is something I have been suppressing since I don't want to become Chicken Little. Anyone who isn't terrified by the last 3 months probably doesn't really understand what is happening.

I went from accepting I wouldn't see a true AI in my lifetime, to thinking it is possible before I die, to thinking it is possible in the next decade, to thinking it is probable in the next 3 years, to wondering if we might see it this year.

Just 6 months ago people were wondering if pre-training was stalling out and if we'd hit a wall. Then DeepSeek drops with RL'd inference-time compute, China jumps from being 2 years behind in the AI race to being neck-and-neck, and we're all wondering what will happen when we apply those techniques to the current full-sized behemoth models.

It seems the models that are going to come out around summer time may be jumps in capability beyond our expectations. And the updated costs mean that there may be several open source alternatives available. The intelligence that will be available to the average technically literate individual will be frightening.

replies(2): >>42956212 #>>42963164 #
pjc50 ◴[] No.42963164[source]
This frightens mostly people whose identity is built around "intelligence", but without grounding in the real world. I've yet to see really good articulations of what, precisely, we should be scared of.

Bedroom superweapons? Algorithmic propaganda? These things have humans in the loop building them. And the problem of "human alignment" is one unsolved since Cain and Abel.

AI alone is words on a screen.

The sibling thread details the "mass unemployment" scenario, which would be destabilizing, but understates how much of the current world of work is still physical. It's a threat to pure desk workers, but we're not the majority of the economy.

Perhaps there will be political instability, but... we're already there from good old humans.

replies(4): >>42963468 #>>42964183 #>>42965461 #>>43000641 #
ben_w ◴[] No.42964183[source]
> This frightens mostly people whose identity is built around "intelligence", but without grounding in the real world.

It has certainly had this impact on my identity; I am unclear how well-grounded I really am*.

> I've yet to see really good articulations of what, precisely, we should be scared of.

What would such an articulation look like, given you've not seen it?

> Bedroom superweapons? Algorithmic propaganda? These things have humans in the loop building them.

Even with current limited systems — which are not purely desk workers, they're already being connected to and controlling robots, even by amateurs — AI lowers the minimum human skill level needed to do those things.

The fear is: how far are we from an AI that doesn't need a human in the loop? Because ChatGPT was almost immediately followed by ChaosGPT, and I have every reason to expect people to continue to make clones of ChaosGPT continuously until one is capable of actually causing harm. (As with 3d-printed guns, high chance the first ones will explode in the face of the user rather than the target).

I hope we're years away, just as self-driving cars turned out to be over-promised and under-delivered for the last decade — even without a question of "safety", it's going to be hard to transition the world economy to one where humans need not apply.

> And the problem of "human alignment" is one unsolved since Cain and Abel.

Yes, it is unsolved since time immemorial.

This has required us to not only write laws, but also design our societies and institutions such that humans breaking laws doesn't make everything collapse.

While I dislike the meme "AI == crypto", one overlap is that both have nerds speed-running discovering how legislation works and why it's needed — for crypto, specifically financial legislation after it explodes in their face; for AI, to imbue the machine with a reason to approximate society's moral code, because they see the problem coming.

--

* The Dunning-Kruger effect applies; and now I have first-hand experience of what this feels like from the inside, as my self-perception of how competent I am at German has remained constant over 7 years of living in Germany, despite improving my grasp of the language the entire time.