
1246 points adrianh | 10 comments
kragen ◴[] No.44491713[source]
I've found this to be one of the most useful ways to use (at least) GPT-4 for programming. Instead of telling it how an API works, I make it guess, maybe starting with some example code to which a feature needs to be added. Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.

Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that's a good sign my API is confusing, and how.
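
If anyone wants to try this, the "make it guess" step is just a short script. Here's a minimal sketch, assuming the official OpenAI Python client; the library in the example code ("imglib") and the prompt wording are made-up placeholders for whatever API you're actually designing:

    # Ask the model to guess the API from example code, instead of telling it.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    existing_code = '''
    from imglib import load_image   # hypothetical library under design
    img = load_image("photo.png")
    img.save("photo.jpg")
    '''

    prompt = (
        "Here is some code that uses my image library:\n"
        + existing_code +
        "\nExtend it to resize the image to 200x200 before saving. "
        "I haven't shown you the docs on purpose; just guess what the "
        "resizing API most plausibly looks like."
    )

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )

    # Whatever it guesses is the "intuitive" API. If my real API differs,
    # I treat that as feedback on my design rather than an error in its code.
    print(resp.choices[0].message.content)

The reverse direction is the same script with the prompt flipped: paste in real code and ask it what the code does.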

These are ways to harness what neural networks are best at: not providing accurate information but making shit up that is highly plausible, "hallucination". Creativity, not logic.

(The best thing about this is that I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code, which often takes longer than just writing the code the usual way.)

There are multiple ways that an interface can be bad, and being unintuitive is the only one that this will fix. It could also be inherently inefficient or unreliable, for example, or lack composability. The AI won't help with those. But it can make sure your API is guessable and understandable, and that's very valuable.

Unfortunately, this only works with APIs that aren't already super popular.

replies(23): >>44491842 #>>44492001 #>>44492077 #>>44492120 #>>44492212 #>>44492216 #>>44492420 #>>44492435 #>>44493092 #>>44493354 #>>44493865 #>>44493965 #>>44494167 #>>44494305 #>>44494851 #>>44495199 #>>44495821 #>>44496361 #>>44496998 #>>44497042 #>>44497475 #>>44498144 #>>44498656 #
afavour ◴[] No.44492216[source]
From my perspective that's fascinatingly upside-down thinking that ends with you asking to lose your own job.

AI is going to get the hang of coding to fill in the spaces (i.e. the part you’re doing) long before it’s able to intelligently design an API. Correct API design requires a lot of contextual information and forward planning for things that don’t exist today.

Right now it’s throwing spaghetti at the wall and you’re drawing around it.

replies(2): >>44492474 #>>44492500 #
1. kragen ◴[] No.44492474[source]
Maybe. So far it seems to be a lot better at creative idea generation than at writing correct code, though apparently these "agentic" modes can often get close enough after enough iteration. (I haven't tried things like Cursor yet.)

I agree that it's also not currently capable of judging those creative ideas, so I have to do that.

replies(1): >>44493497 #
2. bbarnett ◴[] No.44493497[source]
This sort of discourse really grinds my gears. The framing of it, the conceptualization.

It's not creative at all, any more than taking the sum of text on a topic and throwing a dart at it. It's a mild, short step beyond a weighted random draw, and certainly not capable of any real creativity.

Myriads of HN enthusiasts chime in here with "Are humans any more creative?" and other blather. Well, that's a whataboutism, and doesn't detract from the fact that creativity does not exist in the AI sphere.

I agree that you have to judge its output.

Also, sorry for hanging my comment here. Might seem over the top, but anytime I see 'creative' and 'AI', I have all sorts of dark thoughts. Dark, brooding thoughts with a sense of deep foreboding.

replies(3): >>44493681 #>>44493926 #>>44495312 #
3. kragen ◴[] No.44493681[source]
I understand. I share the foreboding, but I try to subscribe to the converse of Hume's guillotine.
4. Dylan16807 ◴[] No.44493926[source]
Point taken, but if slushing up half of human knowledge and picking something to fit into the current context isn't creative, then humans are rarely creative either.
5. LordDragonfang ◴[] No.44495312[source]
> Well, that's a whataboutism, and doesn't detract from the fact that creativity does not exist in the AI sphere.

Pointing out that your working definition excludes reality isn't whataboutism; it's pointing out an isolated demand for rigor.

If you cannot clearly articulate how human creativity (the only other type of creativity that exists) is not impugned by the definition you're using as evidence that creativity "does not exist in the AI sphere", you're not arguing from a place of knowledge. Your assertion is just as much sophistry as that of the people who assert it is creative. Unlike them, however, you're having to argue against instances where it does appear creative.

For my own two cents, I don't claim to fully understand how human creativity emerges, but I am confident that all human creative works rest heavily on a foundation of the synthesis of the author's previous experiences, both personal and of others' creative works - and often more heavily the latter. If your justification for a lack of creativity is that LLMs are merely synthesizing from previous works, then your argument falls flat.

replies(2): >>44495397 #>>44495696 #
6. kragen ◴[] No.44495397{3}[source]
Agreed.

"Whataboutism" is generally used to describe a more specific way of pointing out an isolated demand for rigor—specifically, answering an accusation of immoral misconduct with an accusation that the accuser is guilty of similar immoral misconduct. More broadly, "whataboutism" is a term for demands that morality be judged justly, by objective standards that apply equally to everyone, rather than by especially rigorous standards for a certain person or group. As with epistemic rigor, the great difficulty with inconsistent standards is that we can easily fall into the trap of applying unachievable standards to someone or some idea that we don't like.

So it makes some sense to use the term "whataboutism" for pointing out an isolated demand for rigor in the epistemic space. It's a correct identification of the same self-serving cognitive bias that "whataboutism" targets in the space of ethical reasoning, just in a different sphere.

There's the rhetorical problem that "whataboutism" is a derogatory term for demanding that everyone be judged by the same standards. Ultimately that makes it unpersuasive and even counterproductive, much like attacking someone with a racial slur—even if factually accurate, as long as the audience isn't racist, the racial slur serves only to tar the speaker with the taint of racism, rather than prejudicing the audience against its nominal target.

In this specific case, if you concede that humans are no more creative than AIs, then it logically follows that either AIs are creative to some degree, or humans are not creative at all. To maintain the second, you must adopt a definition of "creativity" demanding enough to exclude all human activity, which is not in keeping with any established use of the term; you're using a private definition, greatly limiting the usefulness of your reasoning to others.

And that is true even if the consequences of AIs being creative would be appalling.

7. bbarnett ◴[] No.44495696{3}[source]
I'll play along with your tack in this argument, although I certainly do not agree it is accurate.

You're asserting that creativity is a meld of past experience, both personal and the creative output of others. Yet this really doesn't jibe, as an LLM does not "experience" anything. I would argue that raw knowledge is not "experience" at all.

We might compare this to the university graduate, head full of books and data jammed therein, and yet that exceptionally well-versed graduate needs "experience" on the job for quite some time before being of any real use.

The same may be true of learning how to do anything, from driving to riding a bike, or just being in conversations with others. Being told things on paper (or having them as part of your baked-in, derived "knowledge store") means absolutely nothing in terms of actually experiencing them.

Heck, just try to explain sex to someone before they've experienced it. No matter the literature, play, movie or act performed in front of them, experience is entirely different.

And an AI does not experience the universe, nor is it driven by the myriad of human totality, from the mind o'lizard, to the flora/fauna in one's gut. There is no motive driving it; for example, it does not strive to mate... something that drives all aspects of mammalian behaviour.

So intertwined with the mating urge is human experience, that it is often said that all creativity derives from it. The sparrow dances, the worm wiggles, and the human scores 4 touchdowns in one game, thank you Al.

Comparatively, an LLM does not reason, nor consider, nor ponder. It is "born" with full access to all of its memory store, has data spewed at it, searches, responds, and then dies. It is not capable of learning in any stream of consciousness. It does not have memory from one birth to the next, unless you feed its own output back at it. It can gain no knowledge, except from "context" assigned at birth.

An LLM, essentially, understands nothing. It is not "considering" a reply. It's all math, top to bottom, all probability, taking all the raw info it has and just spewing what fits next best.

That's not creative.

Any more than Big Ben's gears and cogs are.

replies(1): >>44497922 #
8. LordDragonfang ◴[] No.44497922{4}[source]
Experiences are not materially different from knowledge once they are both encoded as memories. They're both just encoded in neurons as weights in their network of connections. But let's assume there is some ineffable difference between firsthand and secondhand experience, which fundamentally distinguishes the two in the brain in the present.

The core question here, then, is why you are so certain that "creativity" requires "experience" beyond knowledge, and why knowledge is insufficient? What insight do you have into the human mind that top neuroscientists lack that grants you this gnosticism on how creativity definitely does and does not work?

Because, if you'll permit me to be crude, some of the best smut I've read has been by people I'm certain have never experienced the act. Their writing has been based solely on the writings of others. And yet, knowledge alone is more than enough for them to produce evocative creative works.

And, to really hammer in a point (please forgive the insulting tone):

> It's all math, top to bottom, all probability, taking all the raw info it has and just spewing what fits next best.

You are just biology, top to bottom, just electrical signals, taking all the raw info your nerves get, matching patterns and just spewing what fits next best.

Calling LLMs "just math" -- that's not creative, it's part of your input that you're predicting fits the likely next argument.

You didn't "reason, consider, or ponder" whether I would find that argument convincing or be able to immediately dismiss it because it holds no weight.

You're simply being a stochastic parrot, repeating the phrases you've heard.

...Etcetera. Again, apologies for the insult. But the point I am continually trying to make is that all of the arguments everyone tries to make about it not reasoning, not thinking, not having creativity -- they all are things that can and do apply to almost every human person, even intelligent and articulate ones like you or me.

When it comes down to it, your fundamental argument is that you do not believe that a machine can possibly have the exceptional qualities of the human mind, for some ineffable reason. It's all reasoning backwards from there. Human creativity must require human-like experience, the ability to grow, and a growing context cannot possibly suffice, because you've already decided on your conclusion.

(Because, perhaps, it would be too unsettling to admit that the alien facsimile of intelligence that we've created might have actual intelligence -- so you refuse that possibility)

replies(1): >>44498374 #
9. bbarnett ◴[] No.44498374{5}[source]
> Experiences are not materially different from knowledge once they are both encoded as memories.

The storage medium is not relevant here, and actual experience carries sensory additions which are not currently possible from reading. Memories are laid down differently as well.

In terms of what top neuroscientists know or do not know, you pulled them out of your back pocket and threw them onto the floor; perhaps you should describe precisely how they negate what I am saying?

Is there consensus that an LLM is indeed creative?

What you seem to be missing here is that I am not railing against machine intelligence, nor creativity. It is merely that an LLM is not it, and will never become it. This is no different than an argument over whether to use sysvinit or systemd; it is a discussion of the technical capabilities of a technology.

LLMs may become a backing store, a "library" of sorts for any future AGI to use as data and knowledge: an exceptionally effective way to provide a Wikipedia-like, non-SQL-backed data source.

But they provide no means for cognition.

And creativity requires cognition. Creativity is a conscious process, for it requires imagination, which is an offshoot of a conscious process. Redefining "creativity" to exclude the conscious process negates its very meaning.

You can say "Wow, this appears to be creative", and it may appear to be creative, yet without cognition the act is simply not possible. None would dare say that a large Goldberg machine, which spits out random answers dependent upon air currents was producing creative ideas.

Some may say "What a creative creation this machine is!", but none would attribute creativity to the output of any algorithmic production by that machine, and this is what we have here.

Should we derive a method of actual conscious cognition in a mind not flesh, so be it. Creativity may occur. But as things stand now, a mouse provides more creativity than an LLM; the technology simply does not provide the underlying requirements. There is no process for consciousness.

There are ways to provide for this, and I have pondered them (again, I'm clarifying here that my position is not "oh no, machines are NOT thinking!!", but instead "LLMs aren't that").

One exceptionally rough and barely back-of-napkin concept would be sleep.

No, I am not trying to mimic the human mind here, but when the concept is examined front to end, the caboose seems to be 'sleep'. Right now, the problem is how we bring each LLM onto a problem. We simply throw massive context at it, and then allow it to proceed.

Instead, we need to have a better context window. Maybe we should call this 'short term' memory. An LLM is booted, and responds to questions, but has a floating context window which never shrinks. Its "context window" is not cleared. Perhaps we use a symbolic database, or just normal SQL with fluff modz, but we allow this ongoing context window to exist and grow.

After a point, this short-term memory will grow too large to actively swap in and out of memory. Even RAM has bandwidth limits, and it becomes a drag on response speed and on energy requirements per query.

So -- the LLM "goes to sleep". During that time, the accumulated backend is converted into a model of its own, or I suppose in this case a small language model. We now have a fuzzy, almost RRD-like conversion of short-term memory to long-term memory, yet one which enables some very important things.

That being, an actual capacity to learn from interaction.
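
To make the shape of this a bit more concrete, here is a very rough sketch of the wake/sleep loop as I picture it. Everything in it is hypothetical: chat_model stands in for the frozen base LLM with the SLM overlaid, distill_into_slm stands in for whatever fuzzy compression turns short-term memory into the next SLM, and SQLite is just a stand-in for the short-term store:

    import sqlite3

    def chat_model(short_term, long_term_slm, user_input):
        # Placeholder: a real version would run the frozen base LLM with the
        # long-term SLM overlaid and the whole short-term memory prepended.
        return "(reply to: " + user_input + ")"

    def distill_into_slm(short_term, previous_slm):
        # Placeholder: a real version would train/update a small language model
        # on the accumulated short-term memory, layered over the previous SLM.
        return {"previous": previous_slm, "entries_absorbed": len(short_term)}

    db = sqlite3.connect("short_term_memory.db")
    db.execute("CREATE TABLE IF NOT EXISTS memory (turn INTEGER, content TEXT)")

    long_term_slm = None   # empty at "birth"; grows with every sleep cycle
    turn = 0

    def awake(user_input):
        """Waking phase: the short-term context only grows; it is never cleared."""
        global turn
        short_term = [r[0] for r in
                      db.execute("SELECT content FROM memory ORDER BY turn")]
        reply = chat_model(short_term, long_term_slm, user_input)
        for text in (user_input, reply):
            turn += 1
            db.execute("INSERT INTO memory VALUES (?, ?)", (turn, text))
        db.commit()
        return reply

    def sleep():
        """Sleep phase: fold the short-term memory into a new long-term SLM."""
        global long_term_slm
        short_term = [r[0] for r in
                      db.execute("SELECT content FROM memory ORDER BY turn")]
        long_term_slm = distill_into_slm(short_term, long_term_slm)
        # One possible choice: start the next waking window fresh, now that
        # everything in it has been folded into the SLM.
        db.execute("DELETE FROM memory")
        db.commit()

The interesting part, of course, is everything hiding behind distill_into_slm; the loop around it is trivial.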

The next step is to expand that capability, and the capabilities of an LLM, with senses. I frankly think the best option here is real, non-emulated robotic control. Give the LLM something to manipulate, as well as senses.

At that point, we should inject agency. A reason to exist. With current life, the sole primary reason is "reproduce". Everything else has derived from that premise. I spoke of the mating urge, we should recreate this here.

(Note how non-creative this all actually is, yet it seems valid to me... we're just trying to provide what we know works as a starting base. It does not mean that we cannot expand the creation of conscious minds into other methods, once we have a better understanding and success.)

There are several other steps here which are essential. The mind must, for example, reload its "long-term memory" backing-store SLM, and when "sleep" comes, overlay new short-term thoughts over the long-term ones. This is another fuzzy process, and it would be best to think of it (though technically not accurate) as unpacking its SLM, overlaying new thoughts, and creating an entirely new SLM. Since its short-term memory would hold output derived from the LLM plus the overlaid SLM, that short-term memory would carry forward output derived from its prior SLM.

So there is a form of continuity here.

So we have:

* A mind which can retain information, and is not merely popping into creation with no stored knowledge, and killed at each session end

* That same mind has a longer-term memory, which allows for ongoing concept modification and integration, e.g. new knowledge affecting the "perception" of old knowledge

* That SLM will be "overlaid" on top of its LLM, meaning experiences derived during waking moments will provide more context to an LLM (that moment when you ride a bike, and comprehend how all the literature you read, isn't the same as doing? That moment where you link the two? That's in the SLM, and the SLM has higher priority)

* A body (simple as it may be), which allows access to the environment

* Senses fed into the process

* Agency, as in "perform to mate", with aspects of "perform" being "examine, discover, be impressive", that sort of thing

I think this overlaid SLM, along with actual empirical data, would provide a more apt method to simulate some form of consciousness. It would at least allow a stream of consciousness, regardless of whatever debates we might have about humans dying when they sleep (which makes no sense, as the brain is incredibly active during sleep, and constantly monitoring the surroundings for danger).

I'd speak more about the sensory aspects of this, but it's actually what I'm working on right now.

But what's key here is independent data and sensory acquisition. I see this as the best available way to kickstart a mind.

replies(1): >>44524018 #
10. LordDragonfang ◴[] No.44524018{6}[source]
> It is merely that an LLM is not it, and will never become it.

Okay, I didn't want to put words in your mouth by claiming you said this, but now that you have, I can address it.

You have literally no way of knowing this. You don't understand how cognition actually works, because no one does, and you don't understand how LLMs actually produce a facsimile of intelligence, for very similar reasons. So you can't say that with certainty, and likewise you cannot claim to know what is actually required for cognition (without leaning heavily on human exceptionalism).

Skeptics of LLMs have been claiming that it "cannot possibly X" for the better part of a decade, and time and time again they have been proved wrong. Ironically, I was just reading an article this morning[1] that reiterated this point:

> [W]e’re still having the same debate - whether AI is a “stochastic parrot” that will never be able to go beyond “mere pattern-matching” into the realm of “real understanding”.

> My position has always been that there’s no fundamental difference: you just move from matching shallow patterns to deeper patterns, and when the patterns are as deep as the ones humans can match, we call that “real understanding”. This isn’t quite right - there’s a certain form of mental agency that humans still do much better than AIs - but again, it’s a (large) difference in degree rather than in kind.

> I think this thesis has done well so far. So far, every time people have claimed there’s something an AI can never do without “real understanding”, the AI has accomplished it with better pattern-matching.

While I can't claim to have been quite as prescient as the author, I agree with his position.

It wasn't so long ago that our standard conception of AI was that it could never make anything that could be called "art"[2], and now we have models that churn out images in seconds and poetry that average people rate as better than most humans'[3].

You have a whole, well-spoken argument, but, despite claiming otherwise, every point boils down to "in order to have a trait associated with human thought, it needs to be more like a human brain". Your reasoning is circular and all comes down to human exceptionalism.

> But they provide no means for cognition. And creativity requires cognition. Creativity is a conscious process, for it requires imagination, which is an offshoot of a conscious process.

"Consciousness", the philosophical term primarily infamous for the fact that no one understands or agrees how it works - only that humans have it and some other animals may or may not. But somehow you know that "imagination" (a term not otherwise defined or justified) requires it, and that cognition (again undefined except to assert that LLMs don't have it, despite being able to take in information and retain it) requires that, and therefore, LLMs can't be creative unless they are more human-like.

> At that point, we should inject agency. A reason to exist. With current life, the sole primary reason is "reproduce". Everything else has derived from that premise. I spoke of the mating urge, we should recreate this here.

Again, algorithms are infamous for following goals and chasing rewards even more intently than humans, but you only count that as a "purpose" if it's the same one humans have.

And so on.

[1] https://www.astralcodexten.com/p/now-i-really-won-that-ai-be...
[2] https://knowyourmeme.com/memes/can-a-robot-write-a-symphony
[3] https://www.nature.com/articles/s41598-024-76900-1