
789 points by huseyinkeles | 3 comments
karimf
I've always thought about the best way to contribute to humanity: number of people you help x how much you help them. I think what Karpathy is doing is one of the highest leverage ways to achieve that.

Our current world is built on top of open source projects. This is possible because there are a lot of free resources for learning to code, so anyone from anywhere in the world can learn and make a great piece of software.

I just hope the same will happen with the AI/LLM wave.

martin-t
As noble as the goal sounds, I think it's wrong.

Software is just a tool. Much like a hammer, a knife, or ammonium nitrate, it can be used for good or for bad.

I say this as someone who has spent almost 15 years writing software in my free time and publishing it as open source: building software and allowing anyone to use it does not automatically make other people's lives better.

A lot of my work has been used for bad purposes or what some people would consider bad purposes - cheating on tests, cheating in games, accessing personal information without permission, and in one case my work contributed to someone's doxxing. That's because as soon as you publish it, you lose control over it.

But at least with open source software, every person can use it to the same extent, so if the majority of people are good, the result is likely to be more positive than negative.

With what is called AI today, only the largest corporations can afford to train the models, which means they are controlled by people who have entirely different incentives from the general working population, many of whom have quite obvious antisocial personality traits.

At least 2 billion people live in dictatorships. AI has the potential to become a tool of mass surveillance and total oppression from which those countries will never recover, because just as models can detect that a woman is pregnant before she knows it herself, they will detect a dissenter long before dissent turns into resistance.

I don't have high hopes for AI being a force for good, and teaching people how toy models work, fun as it is, is not going to change that.

simonw
"With what is called AI today, only the largest corporations can afford to train the models"

I take it you're very positive, then, about Andrej's new project, which lets anyone train a model for a few hundred dollars that is comparable to the state of the art from just five years ago.
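As a rough sanity check on "a few hundred dollars" (all figures below are illustrative assumptions, not quotes from the thread or any provider): a short run on a rented multi-GPU node lands in that range.

```python
# Back-of-envelope training cost. Every number here is an assumption
# chosen for illustration, not a claim about any specific project.
gpus = 8                 # assumed node size (one 8-GPU cloud instance)
usd_per_gpu_hour = 3.0   # assumed on-demand rental rate per GPU
hours = 12               # assumed wall-clock training time

total = gpus * usd_per_gpu_hour * hours
print(f"${total:.0f}")   # 8 * 3.0 * 12 = $288 -> "a few hundred dollars"
```

Halving the hours or the rate still keeps the total within an individual's budget, which is the point being made.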

hn_acc1
For a few hundred dollars, given heavily-VC-subsidized hardware that is probably partially funded by nvidia and various AI companies, etc.

Can I run it on my local hardware (nvidia consumer card, AMD CPU)? No. When could that corporation cut off my access to that hardware if I did anything it didn't like? Anytime.

Lots of things have started off cheap or subsidized to put competitors out of business, and then the prices go up, up, and up.

simonw
> Can I run it on my local hardware?

Yes. The training process requires big, expensive GPUs. The model it produces has 561M parameters, which should run on even a high-end mobile phone (I run 4B models on my iPhone).
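A quick way to see why a 561M-parameter model fits on a phone: weight memory is just parameter count times bytes per parameter. The 561M figure is from the thread; the precision choices below are common conventions used here as illustrative assumptions.

```python
# Memory footprint of a 561M-parameter model at common precisions.
# 561M is from the thread; the byte-per-parameter figures are standard
# sizes (fp32 = 4 bytes, fp16 = 2 bytes, 4-bit quantization = 0.5 bytes).
params = 561_000_000

for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int4", 0.5)]:
    gib = params * bytes_per_param / (1024 ** 3)
    print(f"{name}: {gib:.2f} GiB")
```

Even at full fp16 precision the weights are around 1 GiB, and a 4-bit quantized copy is roughly a quarter of that, so a phone with several GiB of RAM handles it comfortably (a 4B model at 4 bits is about 1.9 GiB, consistent with running on a recent iPhone).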