
360 points by danielmorozoff | 1 comment
ckemere No.45035120
I think the negativity here is unfortunate. The reality is that it's very hard to see a normal VC-level return on the $100M+ Elon and friends have invested here. And don't let anyone fool you: this is the fundamental reason the BCI field has moved slowly.

If Neuralink gets to a scenario where quadriplegic patients can get reliable (i.e. lifelong) control of their computers for less than $100k, that will be a huge win for them, at a cost no one else was willing to pay.

To be clear, at that order of magnitude they might make back their investment, but it won't be 10x or 100x, and the potential healthy-brain-connected-to-AI play is much less rooted in reality than all Teslas becoming taxis.

The worst-case scenario is that Elon loses interest and pulls the plug, and Mr. Arbaugh loses continued tech support, à la a discontinued Google product. I think that's the one question I wish the author had asked…

rc5150 No.45035214
The unfortunate part is that your first thought went to return on investment rather than the humanitarian angle, which I think is the common perspective: optics and money.

Then there are the pessimists, like me, wondering how long it'll take Neuralink to turn their army of computer-connected paraplegics into some Mechanical Turk-esque Grok cleanup.

h0h0h0h0111 No.45036672
I don't think it's unfortunate: in principle, return on investment today can achieve greater humanitarian impact tomorrow than spending on humanitarian impact today would.

Of course, this creates a perverse situation where choosing humanitarian impact today over investment is always irrational. But this is the fundamental tension between charity and investment, and aside from relying on governments and guilt, I'm not sure we have discovered a great model to resolve it.

moomin No.45036984
The problem is that when people start to analyse things like this, even setting aside the utilitarian traps, they don't apply regular business reasoning.

There’s a bunch of effects to consider 1) improving lives right now may well improve subsequent generations lives directly 2) your future project may have a higher failure rate than your current one 3) the problems you are trying to solve may no longer be relevant in the future 4) you could be very wrong about future population growth.

All of this boils down to: you should be risk-discounting future benefits the same way you discount future cash flows.
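
A minimal sketch of what that discounting looks like, with illustrative numbers (the discount rate, per-year survival probability, and promised multiple here are assumptions for the example, not estimates):

    # Risk-adjusted present value of a future benefit, mirroring a DCF:
    # time-discount the benefit and multiply by the probability that the
    # project survives (stays relevant and succeeds) each year.
    def risk_adjusted_pv(benefit, years, discount_rate, annual_survival):
        return benefit * (annual_survival ** years) / ((1 + discount_rate) ** years)

    # A promised 10x benefit in 10 years, 5% discount rate, 90% per-year survival:
    later = risk_adjusted_pv(10.0, 10, 0.05, 0.9)   # ~2.14
    now = 1.0  # one unit of humanitarian impact delivered today
    print(later > now)  # True, but only if the promised 10x actually holds

The point being that "invest now, help more later" only beats helping now if the future multiple survives both discounting and the compounding risk of failure.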

autoexec No.45037641
It's the people thinking about the bottom line who will push for the gradual enshittification of the product until it's beaming ads into people's brains, preventing them from saying anything bad about Elon, forcing them to sing his praises against their will, or charging them a monthly fee for "continued autonomous breathing as a service".

Taking a good thing and fucking people over with it in every way possible is "regular business reasoning".

At a certain point it's smart to say, "We have the technology to do something good; let's be extremely wary of chasing what's profitable and focus on doing what's right with it."

est31 No.45043227
I really loved the "Common People" episode of Black Mirror. IMO it's the best episode of the whole series. It shows the enshittification cycle really well, applied to technology connected to your brain.