
128 points ArmageddonIt | 1 comment
kylecazar ◴[] No.44500849[source]
I like the historical part of this article, but the current problem is the reverse.

Everyone is jumping on the AI train and forgetting the fundamentals.

replies(2): >>44500874 #>>44500915 #
nofriend ◴[] No.44500874[source]
AI will plausibly disrupt everything
replies(2): >>44501043 #>>44501294 #
jorl17 ◴[] No.44501043[source]
We have a system to which I can upload a generic video, and which captures eveeeeeerything in it, from audio, to subtitles onscreen, to skewed text on a mug, to what is going on in a scene. It can reproduce it, reason about it, and produce average-quality essays about it (and good-quality essays if prompted properly), and, still, there are so many people who seem to believe that this won't revolutionize most fields?

The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or too detrimental to the environment, something I haven't looked into deeply enough to judge. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes that could be augmented and improved through the use of these technologies, the surface of which we've only barely scratched!

Billions are being poured into using LLMs and GenAI to solve problems, trying to create the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doomed-to-fail products, but that does not mean the technology isn't revolutionary.

From producing music, to (in my mind) being absolutely instrumental in a new generation of education or mental health, or general support for the lonely (elderly and perhaps young?), to the service industry!...the list goes on and on and on. So much of my life is better just with what little we have available now, I can't fathom what it's going to be like in 5 years!

I'm sorry I hijacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!

replies(1): >>44501365 #
DeepSeaTortoise ◴[] No.44501365[source]
People dislike the unreliability and not being able to reason about potential failure scenarios.

Then there's the question of whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.

And lastly, you've gone to great lengths to completely air-gap the systems holding your customers' IP. Do you really want some junior dev vibing that data into the Alibaba cloud? How about aging your CFO by 20 years with a quote for an inference cluster?

replies(1): >>44501502 #
jorl17 ◴[] No.44501502[source]
I mostly agree with all your points being issues, I just don't see them as roadblocks to the future I mentioned, nor do I find them issues without solutions or workarounds.

Unreliability and the difficulty of reasoning about potential failure scenarios are tough. I've been going through the rather painful process of taming LLMs to do the things we want them to, and I feel that. However, for each feature, we have been finding what we consider rather robust ways of dealing with this issue. The product that exists today would not be possible without LLMs, and it is adding immense value. It would not be possible for two reasons: (i) a subset of the features themselves simply could not exist; (ii) time to market. Now that we have reached the market (which we have), we are offloading to plain code the parts the LLM handles that code could do.

> Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.

I don't see how this would necessarily happen? I mean, of course I can see problems with prompt injection, or with AIs being led to do things they shouldn't (I find this to be a huge problem we need to work on). From a coding perspective, I can see the problem of AI producing code that looks right but isn't quite. I see all of these, but I don't see them as roadblocks, any more than I see human error as a roadblock in many of the cases where these systems will be going.

With regard to customers' IP, this again seems to have more to do with the fact that some junior dev is being allowed to do this. Local LLMs exist, and are getting better. And I'm sure we will reach a point where data is at least "theoretically private". Junior devs were sharing code around via pastebin years ago. This is not an LLM problem (though it is certainly exacerbated by the perceived usefulness of LLMs and how tempting it may be to go around company policy and use them).

I'll put this another way: just the scenario I described, of a system to which I upload a video and ask it to comment on it from multiple angles, is unbelievable. On the back of that alone, nothing else, we can create rather amazing products and utilities. How is this not revolutionary?