1246 points adrianh | 5 comments
kragen ◴[] No.44491713[source]
I've found this to be one of the most useful ways to use (at least) GPT-4 for programming. Instead of telling it how an API works, I make it guess, maybe starting with some example code to which a feature needs to be added. Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.
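Concretely, the loop looks something like this (a minimal sketch using the OpenAI Python client; the "tinyplot" library and its methods are invented for illustration):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Give the model no documentation: just existing example code plus a
    # feature request, and ask it to guess the call it expects to exist.
    prompt = """Here is example code using my unreleased plotting library:

        import tinyplot as tp
        fig = tp.figure()
        fig.line(xs, ys, color="red")
        fig.save("out.png")

    Add a legend to the figure. Write the code you would expect to work.
    You have no documentation, so just guess the most natural API."""

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    # If it guesses fig.legend(...) but I spelled it fig.add_legend(...),
    # renaming my method to match its guess makes the API more guessable.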

Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that's a good sign my API is confusing, and how.
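The same sketch works in the other direction (again, the API in the snippet is invented):

    from openai import OpenAI

    client = OpenAI()

    # Show the model real code with no explanation and ask what it does.
    # A wrong summary points at exactly the confusing part of the API.
    snippet = """
        fig = tp.figure()
        fig.line(xs, ys, mode=2)   # mode=2 actually means "dashed"
        fig.save("out.png", 300)   # the bare 300 is the DPI
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "What does this code do?\n" + snippet}],
    )
    print(response.choices[0].message.content)
    # If the model can't tell that mode=2 means dashed lines, neither
    # will a new user; style="dashed" would be much harder to misread.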

These are ways to harness what neural networks are best at: not providing accurate information but making shit up that is highly plausible, "hallucination". Creativity, not logic.

(The best thing about this is that I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code, which often takes longer than just writing the code the usual way.)

There are multiple ways that an interface can be bad, and being unintuitive is the only one that this will fix. It could also be inherently inefficient or unreliable, for example, or lack composability. The AI won't help with those. But it can make sure your API is guessable and understandable, and that's very valuable.

Unfortunately, this only works with APIs that aren't already super popular; for a well-known API, the model just reproduces the interface it memorized from training data instead of guessing fresh.

replies(23): >>44491842 #>>44492001 #>>44492077 #>>44492120 #>>44492212 #>>44492216 #>>44492420 #>>44492435 #>>44493092 #>>44493354 #>>44493865 #>>44493965 #>>44494167 #>>44494305 #>>44494851 #>>44495199 #>>44495821 #>>44496361 #>>44496998 #>>44497042 #>>44497475 #>>44498144 #>>44498656 #
suzzer99 ◴[] No.44492212[source]
> Sometimes it comes up with a better approach than I had thought of.

IMO this has always been the killer use case for AI—from Google Maps to Grammarly.

I discovered Grammarly at the very last phase of writing my book. I accepted maybe 1/3 of its suggestions, which is pretty damn good considering my book had already been edited by me dozens of times AND professionally copy-edited.

But if I'd accepted all of Grammarly's changes, the book would have been much worse. Grammarly is great for sniffing out extra words and passive voice. But it doesn't get writing for humorous effect, context, deliberate repetition, etc.

The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results.

replies(8): >>44492777 #>>44493106 #>>44493413 #>>44493444 #>>44493773 #>>44493888 #>>44497484 #>>44498671 #
jll29 ◴[] No.44493888[source]
> The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results

Thanks for your words of wisdom, which touch on another very important point I want to raise: often, we (i.e., developers, researchers) construct a technology that would be helpful and "net benign" if deployed as a tool for humans to use, instead of deploying it in order to replace humans. But then along comes a greedy business manager who recklessly reckons that using said technology not as a tool but in full automation mode will make results 5% worse while saving 15% of staff costs; and they decide that that is a fantastic trade-off for the company, yet employees may lose and customers may lose.

The big problem is that developers/researchers lose control of what they develop, usually once the project is completed, if they ever had control in the first place. What can we do? Perhaps write open-source licenses that are less liberal?

replies(9): >>44493910 #>>44494335 #>>44494590 #>>44496019 #>>44496054 #>>44496324 #>>44497061 #>>44498650 #>>44504196 #
kragen ◴[] No.44494590[source]
You're trying to put out a forest fire with an eyedropper.

Stock your underground bunkers with enough food and water for the rest of your life and work hard to persuade the AI that you're not a threat. If possible, upload your consciousness to a starwisp and accelerate it out of the Solar System as close to lightspeed as you can possibly get it.

Those measures might work. (Or they might be impossible, or insufficient.) Changing your license won't.

replies(2): >>44495102 #>>44495808 #
1. antonvs ◴[] No.44495102[source]
Alternatively, persuade the AI that you are all-powerful and that it should fear and worship you. Probably a more achievable approach, and there’s precedent for it.
replies(3): >>44495144 #>>44498252 #>>44499109 #
2. kragen ◴[] No.44495144[source]
That only works on the AIs that aren't a real threat anyway, and I don't think it helps with the social harm done by greedy business managers with less powerful AIs. In fact, it might worsen it.
3. Bendy ◴[] No.44498252[source]
That didn’t work out for God; we still killed him.
4. mistersquid ◴[] No.44499109[source]
> Alternatively, persuade the AI that you are all-powerful and that it should fear and worship you.

I understand this is a bit deeper into one of the _joke_ threads, but maybe there’s something here?

There is a distinction to be made between artificial intelligence and artificial consciousness. While AI can be measured, we cannot yet measure consciousness, even though many humans could lay plausible claim to possessing consciousness (to being conscious).

If AI is trained to revere or value consciousness while simultaneously being unable to verify that it itself possesses consciousness (is conscious), would that AI be in a position to value consciousness in (human) beings who attest to being conscious?

replies(1): >>44505963 #
5. antonvs ◴[] No.44505963[source]
> being unable to verify it possesses consciousness

One of the strange properties of consciousness is that an entity with consciousness can generally feel pretty confident in believing they have it. (Whether they're justified in that belief is another question; see eliminativism.)

I'd expect a conscious machine to find itself in a similar position: it would "know" it was conscious because of its experiences, but it wouldn't be able to prove that to anyone else.

Descartes' "Cogito, ergo sum" refers to this. He used "cogito" (thought) to "include everything that is within us in such a way that we are immediately aware [conscii] of it." A translation into a more modern (philosophical) context might be something like "I have conscious awareness, therefore I am."

I'm not sure what implications this might have for a conscious machine. Its perspective on human value might come from something other than belief in human consciousness, for example, our negative impact on the environment. (There was that recent case where an LLM generated text describing a willingness to kill interfering humans.)

In a best-case scenario, it might conclude that all consciousness is valuable, including humans, but since humans haven't collectively reached that conclusion, it's not clear that a machine trained on human data would.