Experiences are not materially different from knowledge once they are both encoded as memories. The storage medium is not relevant here, though actual experience carries sensorially laden additions which are not currently possible from reading, and such memories are laid down differently as well.
As for what top neuroscientists know or do not know: you pulled such out of your back pocket and threw them onto the floor. Perhaps you should describe precisely how they negate what I am saying?
Is there consensus that an LLM is indeed creative?
What you seem to be missing here is that I am not railing against machine intelligence, nor against creativity. It is merely that an LLM is not it, and will never become it. This is no different from an argument over whether to use sysvinit or systemd; it is a discussion of the technical capabilities of a technology.
LLMs may become a backing store, a "library" of sorts, for any future AGI to use as data and knowledge: an exceptionally effective, Wikipedia-like, non-SQL-backed data source.
But they provide no means for cognition.
And creativity requires cognition. Creativity is a conscious process, for it requires imagination, which is an offshoot of a conscious process. Redefining "creativity" to exclude the conscious process negates its very meaning.
You can say "Wow, this appears to be creative", and it may appear to be creative, yet without cognition the act is simply not possible. None would dare say that a large Rube Goldberg machine, which spits out random answers dependent upon air currents, was producing creative ideas.
Some may say "What a creative creation this machine is!", but none would attribute creativity to the output of any algorithmic production by that machine, and this is what we have here.
Should we derive a method of actual conscious cognition in a mind not flesh, so be it. Creativity may occur. But as things stand now, a mouse provides more creativity than an LLM, the technology is simply not providing the underlying requirements. There is no process for consciousness.
There are ways to provide for this, and I have pondered them (again, I'm validating here that it's not "oh no, machines are NOT thinking!!", but instead "LLMs aren't that").
One exceptionally rough and barely back-of-napkin concept, would be sleep.
No, I am not trying to mimic the human mind here, but when the concept is examined front to end, the caboose seems to be 'sleep'. Right now, the problem is how we bring each LLM onto a problem: we simply throw massive context at it, and then allow it to proceed.
Instead, we need a better context window. Maybe we should call this 'short-term' memory. An LLM is booted and responds to questions, but has a floating context window which never shrinks; its context window is not cleared. Perhaps we use a symbolic database, or just normal SQL with fluff modz, but we allow this ongoing context window to exist and grow.
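To make the idea concrete, here is a minimal sketch of that never-cleared, SQL-backed context window. Everything here is an assumption for illustration: the class name, the `stm` table, and its columns are hypothetical, and a real system would also need to feed this log back into the model's prompt.

```python
import sqlite3

class ShortTermMemory:
    """A persistent, ever-growing context window backed by plain SQL."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS stm ("
            " id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " role TEXT, content TEXT)"
        )

    def append(self, role, content):
        # The window only grows; nothing is cleared between sessions.
        self.db.execute(
            "INSERT INTO stm (role, content) VALUES (?, ?)", (role, content)
        )
        self.db.commit()

    def window(self):
        # Reassemble the full, never-shrinking context for the next query.
        rows = self.db.execute(
            "SELECT role, content FROM stm ORDER BY id"
        ).fetchall()
        return [{"role": r, "content": c} for r, c in rows]

    def size(self):
        return self.db.execute("SELECT COUNT(*) FROM stm").fetchone()[0]
```

With a file path instead of `:memory:`, the window survives a restart, which is the whole point: the mind is not "killed at each session end".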
After a point, this short-term memory will grow too large to actively swap in and out of memory. Even RAM has bandwidth limits, and this becomes a drag on response speed and on the energy required per query.
So -- the LLM "goes to sleep". During that time, the backend is converted to a language model, or I suppose in this case a small language model (SLM). We now have a fuzzy, almost RRD-like conversion of short-term memory to long-term memory, yet one which enables some very important things.
That being, an actual capacity to learn from interaction.
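The RRD-like consolidation can be sketched as down-sampling: recent entries survive "sleep" at full resolution, older ones are merged into coarser buckets, the way RRDtool keeps coarser archives of older samples. The bucket sizes are arbitrary assumptions, and the bucket merge here is a stand-in for a fuzzy summariser or an SLM distillation pass, not a real one.

```python
def consolidate(entries, recent_keep=4, bucket_size=4):
    """Down-sample short-term memory into a coarser long-term store.

    entries: oldest-first list of short-term memory strings.
    Recent entries are kept verbatim; older ones are merged per bucket.
    """
    old, recent = entries[:-recent_keep], entries[-recent_keep:]
    long_term = []
    for i in range(0, len(old), bucket_size):
        bucket = old[i:i + bucket_size]
        # Placeholder for the fuzzy conversion: collapse a bucket to one line.
        long_term.append(" / ".join(bucket))
    return long_term + recent
```

Run this on every "sleep" and the store stays bounded while the oldest material fades into lower-resolution traces instead of vanishing, which is exactly the learning-from-interaction property wanted here.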
The next step is to expand that capability, and the capabilities of an LLM with senses. I frankly think the best here is real, non-emulated robotic control. Give the LLM something to manipulate, as well as senses.
At that point, we should inject agency. A reason to exist. With current life, the sole primary reason is "reproduce". Everything else has derived from that premise. I spoke of the mating urge, we should recreate this here.
(Note how non-creative this all actually is, yet it seems valid to me... we're just trying to provide what we know works as a starting base. It does not mean that we cannot expand the creation of conscious minds into other methods, once we have a better understanding and success.)
There are several other essential steps here. The mind must, for example, reload its "long-term memory" backing-store SLM, and when "sleep" comes, overlay new short-term thoughts over the long-term store. This is another fuzzy process, and it is best thought of (though technically not accurate) as unpacking the SLM, overlaying new thoughts, and creating an entirely new SLM. Since the waking short-term memory holds output derived from the LLM plus the overlaid SLM, each new SLM carries forward output derived from its prior SLM.
So there is a form of continuity here.
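That overlay step, reduced to its skeleton: reload the prior long-term store, write the new short-term "thoughts" over it, and let short-term win wherever the two disagree. The topic-keyed dict is a deliberate oversimplification; a real version would be a fuzzy re-training pass over an SLM, not a merge, and the keys shown in the test are hypothetical labels.

```python
def overlay(long_term, short_term):
    """Produce the next long-term store from the prior one plus new experience."""
    merged = dict(long_term)   # unpack the prior SLM contents
    merged.update(short_term)  # newer waking experience wins on conflict
    return merged              # becomes the "entirely new SLM"
```

Because each new store is built from the previous one, the prior SLM's contents persist unless overwritten, which is where the continuity comes from.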
So we have:
* A mind which can retain information, and is not merely popping into creation with no stored knowledge, and killed at each session end
* That same mind has a longer-term memory, which allows for ongoing concept modification and integration, e.g. new knowledge affecting the "perception" of old knowledge
* That SLM will be "overlaid" on top of its LLM, meaning experiences gained during waking moments will provide more context to the LLM (that moment when you ride a bike and comprehend how all the literature you read isn't the same as doing? That moment where you link the two? That's in the SLM, and the SLM has higher priority)
* A body (simple as it may be), which allows access to the environment
* Senses fed into the process
* Agency, as in, "perform to mate", with aspects of "perform" being "examine, discover, be impressive" that sort of thing
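The list above can be tied together in one wake/sleep loop. Every moving part here is a placeholder assumption: the observations stand in for senses and body feedback, the join stands in for the fuzzy SLM consolidation, and the threshold stands in for short-term memory growing too large to swap.

```python
STM_LIMIT = 8  # assumed threshold at which the mind "goes to sleep"

def run(observations, stm_limit=STM_LIMIT):
    """Accumulate sensed observations, sleeping to consolidate when full."""
    short_term, long_term, sleeps = [], [], 0
    for observation in observations:     # senses fed into the process
        short_term.append(observation)   # the growing context window
        if len(short_term) >= stm_limit:            # too big to keep swapping
            long_term.append(" / ".join(short_term))  # fuzzy consolidation
            short_term = []              # wake with a fresh window;
            sleeps += 1                  # continuity lives in long_term
    return short_term, long_term, sleeps
```

Agency is the piece deliberately left out: some reward signal ("perform to mate") would have to decide which observations are worth attending to in the first place.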
I think this overlaid SLM, along with actual empirical data, would provide a more apt method to simulate some form of consciousness. It would at least allow a stream of consciousness, regardless of whatever debates we might have about humans "dying" when they sleep (which makes no sense, as the brain is incredibly active during sleep, constantly monitoring the surroundings for danger).
I'd speak more along the sensory aspects of this, but it's actually what I'm working on right now.
But what's key here is independent data and sensory acquisition. I see this as the best available way to kickstart a mind.