
600 points | antirez | 1 comment
bgwalter No.44625261
Translation: His company will launch "AI" products in order to get funding or better compete with Valkey.

I find it very sad that people who have been really productive without "AI" now go out of their way to find small anecdotal evidence for "AI".

replies(2): >>44625432 >>44625574
brokencode No.44625432
I find it even more sad when people come out of the woodwork on every LLM post to tell us that our positive experiences using LLMs are imagined and we just haven’t realized how bad they are yet.
replies(4): >>44625504 >>44625551 >>44625771 >>44625799
on_the_train No.44625504
If LLMs were actually useful, there would be no need to scream it everywhere. On the contrary: it would be a guarded secret.
replies(8): >>44625575 >>44625612 >>44625707 >>44625750 >>44625891 >>44626117 >>44626235 >>44629115
logsr No.44625707
posting a plain-text description of your experience on a personal blog isn't exactly screaming. in the noise of the modern internet this would be read by nobody if it weren't coming from one of the most well known open source software creators of all time.

people who believe in open source don't believe that knowledge should be secret. i have released a lot of open source myself, but i wouldn't consider myself a "true believer." even so, i strongly believe that all information about AI must be as open as possible, and i devote a fair amount of time to reverse engineering various proprietary AI implementations so that i can publish the details of how they work.

why? a couple of reasons:

1) software development is my profession, and i am not going to let anybody steal it from me, so preventing any entity from establishing a monopoly on IP in the space is important to me personally.

2) AI has some very serious geopolitical implications. this technology is more dangerous than the atomic bomb. allowing any one country to gain a monopoly on this technology would be extremely destabilizing to the existing global order, and must be prevented at all costs.

LLMs are very powerful, they will get more powerful, and we have not even scratched the surface yet in terms of fully utilizing them in applications. staying at the cutting edge of this technology, and making sure that the knowledge remains free, and is shared as widely as possible, is a natural evolution for people who share the open source ethos.

replies(1): >>44625974
bgwalter No.44625974
If consumer "AI", including programming tools, had real geopolitical implications, it would be classified.

The "race against China" is a marketing trick to convince senators to pour billions into "AI". Here is who is financing the whole bubble to a large extent:

https://time.com/7280058/data-centers-tax-breaks-ai/