
336 points mooreds | 3 comments
dathinab ◴[] No.44484445[source]
I _hope_ AGI is not right around the corner. For socio-political reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

But even just taking what we have now, with some major reductions in power usage and minor improvements here and there, already seems like something that can be very usable/useful in a lot of areas (and to some degree we aren't really ready for that either, but I guess that's normal with major technological change).

It's just that for the companies creating foundational models it's quite unclear how they can recoup their already-spent costs without either a major breakthrough or forcefully (or deceptively) pushing the technology into a lot more places than it fits into.

replies(6): >>44484506 #>>44484517 #>>44485067 #>>44485492 #>>44485764 #>>44486142 #
1. Davidzheng ◴[] No.44486142[source]
I think it's rather easy for them to recoup those costs: if you can disrupt some industry with a fully AI-run company with almost no employees and outcompete everyone else, that's free money for you.
replies(2): >>44486778 #>>44489235 #
2. energy123 ◴[] No.44486778[source]
Possibly but not necessarily. Competition can erode all economic rents, no matter how useful a product is.
3. dathinab ◴[] No.44489235[source]
I think they are trying to do something like this(1) by, long term, providing a "business suite", i.e. something comparable to G Suite or Microsoft 365.

For a lot of the things that work well with current AI technology, it's super convenient to have access to all your customers' private data (even if you don't train on it; e.g. RAG systems for information retrieval are one of the things that already work quite well with the current state of LLMs). This also lets you compensate for hallucinations and the LLM's lack of real understanding: you can provide (working) links to, or snippets of, the sources the information came from, and by having all relevant information in the LLM's context window instead of relying on its "learned" training data you generally get better results. RAG systems already worked well for some information retrieval products even without LLMs. A rough sketch of that retrieve-then-cite flow is below.
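
Minimal sketch of that flow, standard library only; the toy keyword score, the drive URLs, and the LLM call are hypothetical placeholders, not any particular vendor's API:

    # Retrieve the most relevant snippets from already-uploaded business
    # documents, then build a prompt that includes source links so the
    # answer can be checked against its sources.
    from dataclasses import dataclass

    @dataclass
    class Doc:
        title: str
        url: str   # link back into the file-sharing / drive product
        text: str

    def score(query: str, doc: Doc) -> int:
        # Toy relevance score: count query terms in the document.
        # A real system would use embeddings or BM25 instead.
        terms = query.lower().split()
        return sum(doc.text.lower().count(t) for t in terms)

    def build_prompt(query: str, docs: list[Doc], k: int = 3) -> str:
        top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
        context = "\n\n".join(f"[{d.title}]({d.url})\n{d.text[:500]}" for d in top)
        return ("Answer using only the sources below and cite them by link.\n\n"
                f"Sources:\n{context}\n\nQuestion: {query}")

    docs = [
        Doc("Q3 report", "https://drive.example/q3", "Revenue grew 12% in Q3 ..."),
        Doc("Onboarding", "https://drive.example/onboard", "New hires must ..."),
    ]
    prompt = build_prompt("How much did revenue grow in Q3?", docs)
    # answer = llm_client.complete(prompt)  # hypothetical LLM call
    print(prompt)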

And the thing is, if your users have to manually upload all potentially relevant business documents you can't really make this work well. But what if they upload all of them to you anyway, because they already use your company's file sharing/drive solution?
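
In that case the retrieval corpus can be built straight from whatever is already in the drive product. A sketch, reusing the Doc class from above, with a local folder standing in for the drive and a hypothetical URL scheme:

    # Build the Doc list from files the customer already keeps in the
    # company drive (local folder as a stand-in; drive.example is made up).
    from pathlib import Path

    def load_docs(root: str) -> list[Doc]:
        docs = []
        for path in Path(root).rglob("*.txt"):
            docs.append(Doc(
                title=path.stem,
                url=f"https://drive.example/{path.name}",
                text=path.read_text(errors="ignore"),
            ))
        return docs

    # docs = load_docs("/mnt/company-drive")  # then index and query as above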

And let's not even get into the benefits you could get from a cheaper plan where you are allowed to train on the company's data after anonymizing it (aimed at micro companies; too many people think "they have nothing to hide", and it's anonymized, so it's okay, right? (no)). Or from going rogue and just stealing trade secrets to break into other markets; it's not like some bigger SF companies have been caught doing exactly that (I think it was Amazon / Amazon Basics).

(1:) Though in that case you still have employees, until your AI becomes good enough to write all your code instead of "just" being a tool for developers to work faster ;)