
1246 points adrianh | 36 comments
kragen ◴[] No.44491713[source]
I've found this to be one of the most useful ways to use (at least) GPT-4 for programming. Instead of telling it how an API works, I make it guess, maybe starting with some example code to which a feature needs to be added. Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.

Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that's a good sign my API is confusing, and how.

These are ways to harness what neural networks are best at: not providing accurate information but making shit up that is highly plausible, "hallucination". Creativity, not logic.

(The best thing about this is that I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code, which often takes longer than just writing the code the usual way.)

There are multiple ways that an interface can be bad, and being unintuitive is the only one that this will fix. It could also be inherently inefficient or unreliable, for example, or lack composability. The AI won't help with those. But it can make sure your API is guessable and understandable, and that's very valuable.

Unfortunately, this only works with APIs that aren't already super popular.
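To make the workflow concrete, here is a minimal sketch. The prompt-building helpers are hypothetical names of my own, and the commented-out `complete()` call stands in for whichever LLM client you actually use:

```python
def build_guess_prompt(example_code: str, feature: str) -> str:
    """Ask the model to guess the API rather than telling it how the
    API works; a plausible guess reveals the intuitive design."""
    return (
        "Here is some example code that uses my library:\n\n"
        f"{example_code}\n\n"
        f"Extend it to {feature}. Guess whatever API calls seem "
        "natural; do not ask for documentation."
    )


def build_explain_prompt(example_code: str) -> str:
    """The converse check: if the model misreads this code, that's a
    sign the API is confusing, and where."""
    return f"Explain, step by step, what this code does:\n\n{example_code}"


# Hypothetical usage with an LLM client of your choice:
#     guess = complete(build_guess_prompt(snippet, "retry failed uploads"))
# If the guessed code is cleaner than what the real API requires,
# change the API so the guessed code works as written.
```

The point of keeping documentation out of the prompt is that the model's guess approximates what a newcomer would find intuitive; feeding it your docs would defeat the measurement.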

replies(23): >>44491842 #>>44492001 #>>44492077 #>>44492120 #>>44492212 #>>44492216 #>>44492420 #>>44492435 #>>44493092 #>>44493354 #>>44493865 #>>44493965 #>>44494167 #>>44494305 #>>44494851 #>>44495199 #>>44495821 #>>44496361 #>>44496998 #>>44497042 #>>44497475 #>>44498144 #>>44498656 #
suzzer99 ◴[] No.44492212[source]
> Sometimes it comes up with a better approach than I had thought of.

IMO this has always been the killer use case for AI—from Google Maps to Grammarly.

I discovered Grammarly at the very last phase of writing my book. I accepted maybe 1/3 of its suggestions, which is pretty damn good considering my book had already been edited by me dozens of times AND professionally copy-edited.

But if I'd accepted all of Grammarly's changes, the book would have been much worse. Grammarly is great for sniffing out extra words and passive voice. But it doesn't get writing for humorous effect, context, deliberate repetition, etc.

The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results.

replies(8): >>44492777 #>>44493106 #>>44493413 #>>44493444 #>>44493773 #>>44493888 #>>44497484 #>>44498671 #
1. jll29 ◴[] No.44493888[source]
> The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results

Thanks for your words of wisdom, which touch on a very important further point I want to raise: often, we (i.e., developers, researchers) construct a technology that would be helpful and "net benign" if deployed as a tool for humans to use, rather than deployed to replace humans. But then along comes a greedy business manager who reckons recklessly that using said technology not as a tool but in full automation mode will make results 5% worse but save 15% in staff costs, and decides that that is a fantastic trade-off for the company - yet employees may lose and customers may lose.

The big problem is that developers/researchers lose control of what they develop, usually once the project is completed - if they ever had control in the first place. What can we do? Perhaps write open-source licenses that are less liberal?

replies(9): >>44493910 #>>44494335 #>>44494590 #>>44496019 #>>44496054 #>>44496324 #>>44497061 #>>44498650 #>>44504196 #
4. kragen ◴[] No.44494590[source]
You're trying to put out a forest fire with an eyedropper.

Stock your underground bunkers with enough food and water for the rest of your life and work hard to persuade the AI that you're not a threat. If possible, upload your consciousness to a starwisp and accelerate it out of the Solar System as close to lightspeed as you can possibly get it.

Those measures might work. (Or they might be impossible, or insufficient.) Changing your license won't.

replies(2): >>44495102 #>>44495808 #
5. antonvs ◴[] No.44495102[source]
Alternatively, persuade the AI that you are all-powerful and that it should fear and worship you. Probably a more achievable approach, and there’s precedent for it.
replies(3): >>44495144 #>>44498252 #>>44499109 #
6. kragen ◴[] No.44495144{3}[source]
That only works on the AIs that aren't a real threat anyway, and I don't think it helps with the social harm done by greedy business managers with less powerful AIs. In fact, it might worsen it.
7. posnet ◴[] No.44495808[source]
This one right here Mr. Basilisk
8. sjs382 ◴[] No.44496019[source]
> Grammarly is great for sniffing out extra words and passive voice. But it doesn't get writing for humorous effect, context, deliberate repetition, etc.

> But then along comes a greedy business manager who reckons recklessly

Thanks for this. :)

replies(1): >>44501633 #
9. csinode ◴[] No.44496054[source]
The problem here is societal, not technological. An end state where people do less work than they do today but society is more productive is desirable, and we shouldn't be trying to force companies/governments/etc to employ people to do an unnecessary job.

The problem is that people who are laid off often experience significant life disruption. And people who work in a field that is largely or entirely replaced by technology often experience permanent disruption.

However, there's no reason it has to be this way - the fact that people who have their jobs replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.

replies(4): >>44496249 #>>44496884 #>>44498165 #>>44498629 #
10. qotgalaxy ◴[] No.44496249[source]
So it's simple: don't do anything at all about the technology that is the impetus for these horrible disruptions, just completely rebuild our entire society instead.
11. montagg ◴[] No.44496324[source]
I think you’re describing the principal/agent problem that people have wrestled with forever. Oppenheimer comes to mind.

You make something, but because you don’t own it—others caused and directed the effort—you don’t control it. But the people who control things can’t make things.

Should only the people who can make things decide how they are used though? I think that’s also folly. What about the rest of society affected by those things?

It’s ultimately a societal decision-making problem: who has power, and why, and how does the use of power affect who has power (accountability).

replies(1): >>44506660 #
12. selcuka ◴[] No.44496884[source]
> However, there's no reason it has to be this way - the fact that people who have their jobs replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.

I agree. We need a radical change (some version of universal basic income comes to mind) that would allow people to safely change careers if their profession is no longer relevant.

replies(3): >>44499181 #>>44500992 #>>44527715 #
13. chii ◴[] No.44497061[source]
> The big problem is that developers/researchers lose control

If these developers/researchers are being paid by someone else, why should that someone else give up the control they paid for?

If these developers/researchers are funding the research themselves (e.g., a startup of their own founding), then why would they ever lose control, unless they sell it?

replies(1): >>44499374 #
14. dotancohen ◴[] No.44498165[source]

> the fact that people who have their jobs replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.

How did the handloom weavers and spinners handle the rise of the machines?
replies(2): >>44498583 #>>44498711 #
15. Bendy ◴[] No.44498252{3}[source]
That didn’t work out for God; we still killed him.
16. throwaway9153 ◴[] No.44498583{3}[source]
> How did the handloom weavers and spinners handle the rise of the machines?

In the past, new jobs appeared that the workers could migrate to.

Today, it seems that AI may replace jobs much quicker than before and it's not clear to me which new jobs will be "invented" to balance the loss.

Optimists will say that we have always managed to invent new types of work fast enough to limit the impact on society, but in my opinion that is unlikely to happen this time. Unless politicians figure out a way to keep the unemployed content (basic income, etc.), I fear we may end up in a dystopia within our lifetimes.

I may be wrong and we could end up in a post-scarcity (Star Trek) world, but if the current ambitions of the top 1% are an indicator, it won't happen unless politicians create a better tax system to compensate for the loss of jobs. I doubt they will give up wealth and influence voluntarily.

replies(2): >>44498675 #>>44499735 #
17. shafyy ◴[] No.44498629[source]
> The problem here is societal, not technological.

I disagree. I think it's both. Yes, we need good frameworks and incentives on an economic/political level. But saying it's not a tech problem is like saying "guns don't kill people". The truth is, if no AI tech had been developed, we would not need to regulate it so that greed does not take over. Same with guns.

replies(2): >>44498679 #>>44499437 #
18. Imustaskforhelp ◴[] No.44498650[source]
The problem with those greedy business managers you speak of is that they don't care how the company does 10 years down the line; almost everybody is doing what works short term, ignoring the long-term consequences.

As the comment above said, we need a human in the loop for better results. But it also depends on which human: a senior can be far more productive in the loop than a junior.

So everybody has stopped hiring juniors, because juniors cost money and someone else will deal with the AI almost-slop later.

The current seniors will one day retire, but we won't have a new generation of seniors, because nobody is giving juniors a chance. At least that's what I've heard about how brutal the job market is.

19. tree-hugger ◴[] No.44498675{4}[source]
> In the past, new jobs appeared that the workers could migrate to.

There was no happy and smooth transition that you seem to allude to. The Luddite movement was in direct response to this: people were dying over this. Factory owners fired or massively reduced wages of workers, replacing many with child workers in precarious and dangerous conditions. In response, the workers smashed the machines that were being used to eliminate their jobs and prevent them from feeding themselves and their families (_not_ the machines that were used to make their jobs easier).

20. visarga ◴[] No.44498679{3}[source]
Oh, the web was full of slop long before LLMs arrived. Nothing new. If anything, AI slop is higher quality than the SEO crap was. And of course we can't un-invent AI, just as we can't un-birth a human.
replies(1): >>44498784 #
21. Piskvorrr ◴[] No.44498711{3}[source]
Attempting to unionize. Then the factory owners hired thugs to decapitate the movement.

Oh wait, that's not the Disneyfied, techno-optimistic version of the Luddites? Sorry.

22. Sophira ◴[] No.44498784{4}[source]
It depends on the metric you use.

Yes, AI text could be considered higher quality than traditional SEO, but at the same time, it's also very much not, because it always sounds like it might be authoritative, but you could be reading something hallucinated.

In the end, the text was still only ever made to get visitors to websites, not to provide accurate information.

replies(2): >>44499452 #>>44501361 #
23. mistersquid ◴[] No.44499109{3}[source]
> Alternatively, persuade the AI that you are all-powerful and that it should fear and worship you.

I understand this is a bit deeper into one of the _joke_ threads, but maybe there’s something here?

There is a distinction to be made between artificial intelligence and artificial consciousness. We can measure AI, but we cannot yet measure consciousness, even though many humans could lay plausible claim to possessing consciousness (to being conscious).

If AI is trained to revere or value consciousness while simultaneously being unable to verify it possesses consciousness (is conscious), would AI be in a position to value consciousness in (human) beings who attest to being conscious?

replies(1): >>44505963 #
24. b3ing ◴[] No.44499181{3}[source]
No way that will ever happen when we have a party that thinks Medicare, Medicaid, and Social Security are unnecessary for the poor and middle class. But you better believe all our representatives have that covered for themselves while pretending to serve us (they only serve those who bribe/lobby them).
replies(1): >>44505112 #
25. wmeredith ◴[] No.44499374[source]
This is a good point. FAANG, or whatever you want to call it now, has spent billions hoovering up a couple of generations' best minds, who willingly sold their intellect to build endless engagement loops.
26. jsjohnst ◴[] No.44499437{3}[source]
> The truth is, if there was no AI tech developed, we would not need to regulate it so that greed does not take over.

The same could be said for the Internet as we know it. Literally replace "AI" with "Internet" above and it reads equally true. Some would argue (me included, some days) that we are worse off as a society ~30 years later. A legitimate case can also be made that it was a huge benefit to society. Will the same be said of AI in 2042?

27. jsjohnst ◴[] No.44499452{5}[source]
> it's also very much not, because it always sounds like it might be authoritative, but you could be reading something hallucinated

I keep hearing this repeated over and over as if it’s a unique problem for AI. This is DEFINITELY true of human generated content too.

28. avereveard ◴[] No.44499735{4}[source]
I think if we zoom out from the tech into the economics, the risk I see is that the incumbents hold a lot of advantages and also control the means of production, due to secondary factors like GPU scarcity.

If we want to draw a parallel, this may trigger a robber-baron kind of outcome more than an industrial revolution.

The existence of workable open-weight models tips me more toward the optimistic outcome.

But there's trillions at stake now, and that must not be discounted; it's the kind of wealth accumulation that can easily trigger a war. (And if you think it isn't, look at the oil wars in the '90s and the more recent resource wars being fought in Europe today.)

Expect "GPU gap" talk sooner rather than later, and notice there are a few global powers with no horse in the race.

replies(1): >>44503696 #
29. tpmoney ◴[] No.44501361{5}[source]
> it's also very much not, because it always sounds like it might be authoritative, but you could be reading something hallucinated.

People telling lies on the internet is an old enough and well known enough issue that it’s appeared in children’s TV shows. One need only dive down the rabbit hole of 9/11 “truthers” to see how much completely made up bullshit is published online as absolute fact with authoritative certainty. AI is the hot new thing and gets all the headlines, but Scottish Wikipedia was a problem long before AI and long after society largely settled its mind about how reliable Wikipedia is.

30. kragen ◴[] No.44501633[source]
I recklessly reckon I will go through the gateless gate to hear the sound of one hand clapping.
31. kragen ◴[] No.44503696{5}[source]
The CPU-gap and GPU-gap talks started in 02015 and never ended before the rise of strategic AI: https://www.pcworld.com/article/426879/us-blocks-intel-from-...
32. neochief ◴[] No.44504196[source]
What book did you write?
33. selcuka ◴[] No.44505112{4}[source]
> No way that will ever happen when we have a party that thinks Medicare, Medicaid and social security is unnecessary for the poor or middle class.

This is obviously because the current ruling class can't see what is coming. Historically speaking, the motivation for the elite to support social programs or reforms has been the instinct to preserve social stability, not altruism.

The New Deal did not happen because "the party thought that Social Security and unemployment insurance are necessary for the poor or middle class." It happened to prevent civil unrest and the rise of radical ideologies.

34. antonvs ◴[] No.44505963{4}[source]
> being unable to verify it possesses consciousness

One of the strange properties of consciousness is that an entity with consciousness can generally feel pretty confident in believing they have it. (Whether they're justified in that belief is another question - see eliminativism.)

I'd expect a conscious machine to find itself in a similar position: it would "know" it was conscious because of its experiences, but it wouldn't be able to prove that to anyone else.

Descartes' "Cogito, ergo sum" refers to this. He used "cogito" (thought) to "include everything that is within us in such a way that we are immediately aware [conscii] of it." A translation into a more modern (philosophical) context might say something more like "I have conscious awareness, therefore I am."

I'm not sure what implications this might have for a conscious machine. Its perspective on human value might come from something other than belief in human consciousness - for example, our negative impact on the environment. (There was that recent case where an LLM generated text describing a willingness to kill interfering humans.)

In a best case scenario, it might conclude that all consciousness is valuable, including humans, but since humans haven't collectively reached that conclusion, it's not clear that a machine trained on human data would.

35. bluefirebrand ◴[] No.44506660[source]
I think the people who can make things have a moral obligation not to turn them over to people who will use them irresponsibly

But unfortunately what is or isn't an irresponsible use is very easy to debate endlessly in circles. Meanwhile people are being harmed like crazy while we can't figure it out

36. departed ◴[] No.44527715{3}[source]
Reminds me of Mondragón, a corporation and federation of worker coops in Spain. It builds new coops to meet the needs of its community, and when a coop ends, workers are given financial support and trained for new jobs in its other coops.