614 points nickthegreek | 12 comments | | HN request time: 0.018s | source | bottom
mgreg ◴[] No.39121867[source]
Unsurprising but disappointing nonetheless. Let’s just try to learn from it.

It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic and xAI (the new Musk one) all have a funky governance structure because they want to be a public good. The challenge is that once any of these (or others) gains enough traction to be seen as having a good chance at reaping billions in profits, things change.

And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.

We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.

[edit - eliminated specific company mentions]

replies(17): >>39122377 #>>39122548 #>>39122564 #>>39122633 #>>39122672 #>>39122681 #>>39122683 #>>39122910 #>>39123084 #>>39123321 #>>39124167 #>>39124930 #>>39125603 #>>39126566 #>>39126621 #>>39127428 #>>39132151 #
1. digging ◴[] No.39123084[source]
It isn't just money, though. Every leading AI lab is also terrified that another lab will beat them to [impossible-to-specify threshold for AGI], which provides additional incentive to keep their research secret.
replies(1): >>39123246 #
2. JohnFen ◴[] No.39123246[source]
But isn't that fear of having someone else get there first just a fear that they won't be able to maximize their profit if that happens? Otherwise, why would they be so worried about it?
replies(2): >>39123392 #>>39131709 #
3. zer00eyz ◴[] No.39123392[source]
"Fusion is 25/10/5 years away"

"String theory breakthrough to unify relativity and quantum mechanics"

"The future will have flying cars and robots helping in the kitchen by 2000"

"AGI is going to happen 'soon'"

We got a rocket that landed like it was out of a 1950s black-and-white B movie... and this time without strings. We got Star Trek communicators. The rest of it is fantasy and wishful thinking that never quite manages to show up...

Lacking a fundamental understanding of what is holding you back from having the breakthrough means you're never going to have the breakthrough.

Credit to the AI folks, they have produced insights and breakthroughs and usable "stuff" unlike the string theory nerds.

replies(2): >>39124482 #>>39126465 #
4. JohnFen ◴[] No.39124482{3}[source]
I honestly don't understand how your comment here relates to what I said...
replies(1): >>39125394 #
5. zer00eyz ◴[] No.39125394{4}[source]
My point is that there is no "there there". I think all of them get that AGI isn't coming, but they can make a shitload of money.

Hope, progress... both of those left the building; it's just greed moving them forward now.

6. cbozeman ◴[] No.39126465{3}[source]
Fusion is well on the way; you just don't hear about it as much because the whole point of fusion isn't to make money, it's to permanently end the energy "crisis", which will end energy scarcity, which will have nearly unfathomable ripple effects on the global economy.

String theory is a waste of time and has been for a while now. The best and brightest couldn't make it map onto reality in any way, and now the next generation of best and brightest are working either on Wall Street or in Silicon Valley.

The robots are also coming sooner than we think. They won't be like Rosey from the Jetsons, but they'll get there.

AGI may or may not happen soon; it's too early to tell. True AGI is probably 100 years away or more. Lt. Cmdr. Data isn't coming any time soon. A half-assed approximation that "appears" mostly human in its reasoning and interaction is probably 3-10 years off.

replies(3): >>39129303 #>>39131766 #>>39133025 #
7. mlrtime ◴[] No.39129303{4}[source]
We don't hear about it [Fusion] because it doesn't work for energy production.

I don't believe there is a grand conspiracy to keep it down because of money.

replies(1): >>39138690 #
8. digging ◴[] No.39131709[source]
No, it's a fear that the other lab will take over the world. Profit is secondary to that. (Whether or not you or I think that's a reasonable fear is immaterial.)
9. digging ◴[] No.39131766{4}[source]
> AGI may or may not happen soon; it's too early to tell. True AGI is probably 100 years away or more. Lt. Cmdr. Data isn't coming any time soon. A half-assed approximation that "appears" mostly human in its reasoning and interaction is probably 3-10 years off.

The goal of AGI is not to emulate a human. AGI will be an alien intelligence and will almost immediately surpass human intelligence. Looking for an android is like asking how good a salsa verde a pizza restaurant can make.

replies(1): >>39138704 #
10. insane_dreamer ◴[] No.39133025{4}[source]
> Fusion is well on the way

I hope it succeeds, but after decades of research there is still no demonstrated breakthrough in fusion (one that outputs more energy than it takes as input).

11. cbozeman ◴[] No.39138690{5}[source]
I wouldn't call it a "grand conspiracy" so much as a "plain case of human greed".

Intel did everything in its power to stymie AMD in the late '80s, all of the '90s, and the early 2000s; that's an established fact on the record. It wasn't a "grand conspiracy", it was just the dominant power exerting its force.

12. cbozeman ◴[] No.39138704{5}[source]
> The goal of AGI is not to emulate a human.

I am not sure if that's accurate based on the researchers I read and listen to.

> AGI will be an alien intelligence

Possibly. Remains to be seen.

> and will almost immediately surpass human intelligence.

There's no proof that will be the case; we just assume it because of the advancement of technology over the past 50 years. It may well be an accurate assumption, then again it may not be. This is very much a case of "we won't know until it happens."