1244 points adrianh
kragen ◴[] No.44491713[source]
I've found this to be one of the most useful ways to use (at least) GPT-4 for programming. Instead of telling it how an API works, I make it guess, maybe starting with some example code to which a feature needs to be added. Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.
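
A minimal sketch of that loop in Python. Everything here is hypothetical and just for illustration: a toy HttpClient library and an invented rate-limiting feature, showing "let the model guess, then change the API to match the guess":

    # Step 1: show the model working example code for the current API
    # (no docs), and ask it to add a feature -- say, rate limiting.
    class HttpClient:
        def __init__(self, base_url):
            self.base_url = base_url

        def get(self, path):
            return f"GET {self.base_url}{path}"  # stub for illustration

    # Step 2: the model guesses a constructor argument that doesn't
    # exist yet:
    #
    #     client = HttpClient("https://api.example.com", rate_limit=10)
    #
    # Step 3: that guess is more discoverable than the setter method I
    # had planned, so I change the library to match the guess.
    class HttpClientV2:
        def __init__(self, base_url, rate_limit=None):
            self.base_url = base_url
            self.rate_limit = rate_limit  # requests per second, or None

        def get(self, path):
            return f"GET {self.base_url}{path}"

    client = HttpClientV2("https://api.example.com", rate_limit=10)
    print(client.get("/users/42"))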

Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that's a good sign my API is confusing, and how.
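
For example (a contrived Python sketch; the Store class and its delete(keep=...) method are invented here purely to show the kind of misreading that flags a confusing API):

    class Store:
        def __init__(self):
            self.items = {"a": 1, "b": 2, "c": 3}

        def delete(self, keep):
            # Surprise: deletes everything EXCEPT the given keys.
            self.items = {k: v for k, v in self.items.items() if k in keep}

    store = Store()
    store.delete(keep={"a"})
    print(store.items)  # {'a': 1}

    # Asked "what does store.delete(keep={'a'}) do?", the model guessed
    # it deletes key 'a'. That misreading is the signal: rename the
    # method to something like retain_only() rather than documenting
    # around the confusing name.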

These are ways to harness what neural networks are best at: not providing accurate information but making shit up that is highly plausible, "hallucination". Creativity, not logic.

(The best thing about this is that I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code, which often takes longer than just writing the code the usual way.)

There are multiple ways that an interface can be bad, and being unintuitive is the only one that this will fix. It could also be inherently inefficient or unreliable, for example, or lack composability. The AI won't help with those. But it can make sure your API is guessable and understandable, and that's very valuable.

Unfortunately, this only works with APIs that aren't already super popular.

replies(23): >>44491842 #>>44492001 #>>44492077 #>>44492120 #>>44492212 #>>44492216 #>>44492420 #>>44492435 #>>44493092 #>>44493354 #>>44493865 #>>44493965 #>>44494167 #>>44494305 #>>44494851 #>>44495199 #>>44495821 #>>44496361 #>>44496998 #>>44497042 #>>44497475 #>>44498144 #>>44498656 #
suzzer99 ◴[] No.44492212[source]
> Sometimes it comes up with a better approach than I had thought of.

IMO this has always been the killer use case for AI—from Google Maps to Grammarly.

I discovered Grammarly at the very last phase of writing my book. I accepted maybe 1/3 of its suggestions, which is pretty damn good considering my book had already been edited by me dozens of times AND professionally copy-edited.

But if I'd accepted all of Grammarly's changes, the book would have been much worse. Grammarly is great for sniffing out extra words and passive voice. But it doesn't get writing for humorous effect, context, deliberate repetition, etc.

The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results.

replies(8): >>44492777 #>>44493106 #>>44493413 #>>44493444 #>>44493773 #>>44493888 #>>44497484 #>>44498671 #
jll29 ◴[] No.44493888[source]
> The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results

Thanks for your words of wisdom, which touch on another very important point I want to raise: often, we (i.e., developers, researchers) construct a technology that would be helpful and "net benign" if deployed as a tool for humans to use, rather than as a replacement for humans. But then along comes a greedy business manager who recklessly reckons that deploying said technology not as a tool but in full automation mode will make results 5% worse yet save 15% in staff costs; and they decide that's a fantastic trade-off for the company - while employees may lose and customers may lose.

The big problem is that developers/researchers lose control of what they develop, usually once the project is completed, if they ever had control in the first place. What can we do? Perhaps write open source licenses that are less liberal?

replies(9): >>44493910 #>>44494335 #>>44494590 #>>44496019 #>>44496054 #>>44496324 #>>44497061 #>>44498650 #>>44504196 #
csinode ◴[] No.44496054[source]
The problem here is societal, not technological. An end state where people do less work than they do today but society is more productive is desirable, and we shouldn't be trying to force companies/governments/etc. to employ people to do an unnecessary job.

The problem is that people who are laid off often experience significant life disruption. And people who work in a field that is largely or entirely replaced by technology often experience permanent disruption.

However, there's no reason it has to be this way - the fact that people whose jobs are replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.

replies(4): >>44496249 #>>44496884 #>>44498165 #>>44498629 #
shafyy ◴[] No.44498629[source]
> The problem here is societal, not technological.

I disagree. I think it's both. Yes, we need good frameworks and incentives on an economic/political level. But also, saying that it's not a tech problem is the same as saying "guns don't kill people". The truth is, if no AI tech had been developed, we would not need to regulate it so that greed does not take over. Same with guns.

replies(2): >>44498679 #>>44499437 #
visarga ◴[] No.44498679[source]
Oh, the web was full of slop long before LLMs arrived. Nothing new. If anything, AI slop is higher quality than the old SEO crap was. And of course we can't uninvent AI, just as we can't un-birth a human.
replies(1): >>44498784 #
Sophira ◴[] No.44498784[source]
It depends on the metric you use.

Yes, AI text could be considered higher quality than traditional SEO spam, but at the same time it's also very much not: it always sounds like it might be authoritative, yet you could be reading something hallucinated.

In the end, the text was still only ever made to get visitors to websites, not to provide accurate information.

replies(2): >>44499452 #>>44501361 #
jsjohnst ◴[] No.44499452[source]
> it's also very much not, because it always sounds like it might be authoritative, but you could be reading something hallucinated

I keep hearing this repeated over and over as if it's a unique problem for AI. This is DEFINITELY true of human-generated content too.