
340 points agomez314 | 2 comments
jvanderbot ◴[] No.35245898[source]
Memorization is absolutely the most valuable part of GPT, for me. I can get natural language responses to documentation, basic scripting / sysadmin, and API questions much more easily than searching other ways.

While this is an academic interest point, and rightly tamps down on hype around replacing humans, it doesn't dissuade what I think are most peoples' basic use case: "I don't know or don't remember how to do X, can you show me?"

This is finally a good enough "knowledge reference engine" that I can see being useful to those very people it is over hyped to replace.

replies(6): >>35245958 #>>35245959 #>>35245985 #>>35246065 #>>35246167 #>>35252251 #
vidarh ◴[] No.35245958[source]
And asking higher level questions than what you'd otherwise look up. E.g. I've had ChatGPT write forms, write API calls, and put together skeletons for all kinds of things that I can easily verify and fix when it gets details wrong, but that are time consuming to do manually. I've held back and been sceptical but I'm at the point where I'm preparing to integrate models all over the place because there are plenty of places where you can add sufficient checks that doing mostly ok much of the time is sufficient to already provide substantial time savings.
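A minimal sketch of the "sufficient checks" idea described here: gate the model's output behind validation, and only accept it when it has the expected shape. The `generate_api_skeleton` function is a hypothetical stand-in for an actual model call, and the required-key check is just one illustrative kind of check.

```python
import json


def generate_api_skeleton(prompt: str) -> str:
    # Hypothetical stand-in for a model call; a real integration
    # would query an LLM API here and get text back.
    return '{"method": "POST", "path": "/users", "body": {"name": ""}}'


def checked_generate(prompt: str, required_keys=("method", "path")):
    """Accept model output only if it parses and has the expected shape."""
    raw = generate_api_skeleton(prompt)
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # reject: output is not even valid JSON
    if not all(key in data for key in required_keys):
        return None  # reject: missing a required field
    return data


skeleton = checked_generate("write a create-user endpoint call")
```

The point of the pattern is that a rejected generation costs nothing but a retry, so the model only needs to be "mostly ok" for the wrapper to save time overall.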
replies(1): >>35246018 #
zer00eyz ◴[] No.35246018[source]
> I've held back and been sceptical but I'm at the point where I'm preparing to integrate models all over the place because there are plenty of places where you can add sufficient checks that doing mostly ok much of the time is sufficient to already provide substantial time savings.

I'm an old engineer.

Simply put: NO.

If you don't understand it don't check it in. You are just getting code to cut and paste at a higher frequency and volume. At some point in time the fire will be burning around you and you won't have the tools to deal with it.

Nothing about "mostly", "much", and "sufficient" ever ends well when it is done in the name of saving time.

replies(7): >>35246026 #>>35246079 #>>35246149 #>>35246308 #>>35248566 #>>35249906 #>>35257939 #
vidarh ◴[] No.35246026[source]
Nobody suggested checking in anything you don't understand. On the contrary. So maybe try reading again.
replies(4): >>35246280 #>>35246737 #>>35246792 #>>35246983 #
1. anon7725 ◴[] No.35246737[source]
The parent said:

> I'm at the point where I'm preparing to integrate models all over the place

Nobody understands these models right now. We don’t even have the weights.

You may draw some artificial distinction between literally checking the source code of a model into your git repo and making a call to some black-box API that hosts it. And you may claim that doing so is no different than making a call to Twilio or whatever, but I think there is a major difference: nobody can make a claim about what an LLM will return or how it will return it, nor make guarantees about how it will fail, etc.

I agree with zer00eyz.

replies(1): >>35248652 #
2. vidarh ◴[] No.35248652[source]
I said that, and you're missing the point. We don't need to understand the models to be able to evaluate the output manually.