Although I can't...
"Unfortunately, Claude is only available in certain regions right now. Please contact support if you believe you are receiving this message in error."
I remember living in Scotland as a child without access to satellite TV, which meant missing out on many big pop-culture moments (The Simpsons, Friends...) and constantly hearing "Except for our viewers in Scotland..."[0]
Getting access to the internet was, for me, the antithesis of this: freedom of information, free sharing -- finally! I could stop merely following the curves and get ahead of them.
Alas, in the past few years we really seem to have regressed from this: now I can't even view text due to regional locks.
I've been playing around with this on my own blog.
I'd like the blogging community to reach consensus on a nice badge we can put at the top of our blog posts, representing who/what wrote the post:
- human
- hybrid
- ai
Some might hate the idea of a fully "ai" post, and that's fair. But I sometimes like to treat my blog as just a personal reference, and after a long day of chasing down an esoteric bug, I don't mind an AI writing the whole post while I just press publish.
This adds a reference for me, more data for AIs to train on, and more pages for people to search and land on.
> There is no emotion. There is no art. There is only logic
Also, this type of pure humanism seems disrespectful, or just presumptuous, as if we were the only species capable of "emotion, art and logic", even though we already have living counterexamples.
In many cases, I think the AI-generated document is far better than me ultimately forgetting it.
The "emotions" part is kind of tongue-in-cheek. I think emotional responses are one of the more mechanical parts of a human being.
The ability to demonstrate empathy: that's a good human trick. It can sort of transcend the hard problem of consciousness ("what it is like to be...") by using all sorts of unorthodox workarounds on our inner workings. It must have been very hard to develop. It doesn't always work, but we'll get there eventually.
edit: fixed book and author name to proper reference
I'm thinking of writing an MCP server that does this: it just takes my night of vibe coding and recent commits/branch etc.,
then cobbles it into an AI post and adds it to my blog under some category.
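If it helps to make that concrete, here's a rough sketch using the TypeScript MCP SDK; the tool names, the git invocation, and the blog path are all made up for illustration, not a finished design:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";
import { z } from "zod";

const server = new McpServer({ name: "blog-from-commits", version: "0.1.0" });

// Hypothetical tool: gather tonight's commits so the model can draft a post.
server.tool(
  "recent_commits",
  { since: z.string().describe("e.g. '6 hours ago'") },
  async ({ since }) => ({
    content: [{
      type: "text",
      text: execSync(`git log --since="${since}" --stat`, { encoding: "utf8" }),
    }],
  }),
);

// Hypothetical tool: drop the drafted post into the blog under an "ai" category.
server.tool(
  "publish_post",
  { slug: z.string(), markdown: z.string() },
  async ({ slug, markdown }) => {
    writeFileSync(`blog/content/ai/${slug}.md`, markdown);
    return { content: [{ type: "text", text: `wrote blog/content/ai/${slug}.md` }] };
  },
);

await server.connect(new StdioServerTransport());
```

Point a client at this server and the "AI writes the whole post, I just press publish" flow is two tool calls.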
The machines did too.
There was one weird thing, though.
The title of the event was rather mysterious.
It simply read…
“Grand Theft Auto VI”
The references section in the machine version of the story linked at the bottom is excellent. Nicely done all around; really enjoyed reading this. Thank you for writing and sharing <3
That humans, like all animals before us, are a stepping stone and there is actually no avoiding machine overlords. It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
At least Fermi's paradox helps me sleep better at night.
However, if you mean emotion as a stimulus, i.e. an input to the brain net that's endogenous to the system (the human), then there's no question machines can achieve this; in fact, the reasoning models probably already do this, with different subsystems regulating each other.
Aside: I hope our progeny remember us and think well of us.
That's a pretty bold claim.
There are uncountable inputs. It's like trying to accurately predict the weather: chaos theory or something. Emotions are "essentially" gas exchange, but the areas and rates involved are not standardized across humans.
- what if AI took over
- what if the laws and legalities that allowed AI to take over bloodlessly just through an economic win force them to have a human representative to take legally binding actions in our society
- what if a spectrum of individuality and clustering developed among different AI entities, leading to the formation of processing guilds of AI agents, which limit themselves to roughly 10x human processing speed for easier human/AI interaction, and to let an agent share the perception of its human representative without overloading them
This sentence has way too many assumptions doing the heavy lifting.
“Pure logic machines” are not a thing because, literally, there are things that are uncomputable (both in the sense of Turing uncomputability, and in the sense that some functions are out of reach for any finite being to compute; think of the Busy Beaver function).
To put it the other way: your assumption is that machines (as we commonly use the term, rather than sci-fi Terminators) are more energy-efficient than humans at understanding the universe. We have no evidence, nor any a priori reason, for that assumption.
The universe tends to produce self-replicating intelligence. And that intelligence rids itself of chemical and biological limitations and weaknesses to become immortal and omnipotent.
If evolution can make it this far, it's only a few more "hard steps" to reach takeoff.
>> It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
The spacefaring alien meme is just fantasy fiction. Aliens evolve to fit the nutrient and gas exchange profiles of their home worlds. They're overfit to the gravity well and likely die suboptimally, prematurely.
Any species reaching or exceeding our level of technological capability could design superior artificial systems. If those systems take off, those will become the dominant shape of intelligence on those worlds.
The future of intelligence in the universe is artificial. And that throws the Fermi Paradox for a loop in many ways:
- There's enough matter to compute within a single solar system. Why venture outside?
- The universe could already be computronium and we could be ants too dumb to notice.
- Maybe we're their ancestor simulation.
- Similar to the "fragile world hypothesis", maybe we live in a "fragile universe". Maybe the first species to get advanced physics and break the glass nucleates the vacuum collapse. And by that token, maybe we're the first species to get this far.
It's a cool sci-fi story. But I don't think it works as a plausible scenario, which I feel it may be going for.
So whether the future leans biological, mechanical, or some hybrid, the real miracle isn't just what new "overlords" or "offspring" arise, but that every unfolding is the same old pattern... the one that dreamed itself as atoms, as life, as consciousness, as community, as art, as algorithm, and as the endlessly renewing question: what's next? What can I dream up next? In that light, our current technological moment is just another fold in this ongoing recursive pattern.
Meaning is less about which pattern "wins," or which entities get to call themselves conscious, and more about how awareness flows through every pattern, remembering itself, losing itself, and making the game richer for every round. If the universe is information at play, then everything we have here (conflict, innovation, mourning, laughter) is the play, and there may never be a last word. The value is in participating now, because now is your shot at participating.
The anthropic principle says we find ourselves in a universe that is just right for life (self-observing) because it has the right universal constants.
Combine this with the general uniformity, but very slight differences, of the "big bang" (seen in the cosmic microwave background), and you get localized differences in energy (on a universe scale). Energy differences allow work to be done. If you have the right constants but no energy difference, you can't do work, and vice versa. No work == no life.
But if you have both of those, plus a bunch more steps, you get life.
Which is a whole lot of mental leaps packed into one sentence.
[Edit]
I basically know nothing. I just watch PBS Space Time.
But yeah, I'm not sure that was the right word; it just seems wrong. Basically, humanism seems like racism, but towards other species. I guess "speciesist"?
How will that erode laws that are undesirable to AI companies? Does AI take over, only because we no longer want to spend the effort governing ourselves?
Will AI companies (for example) end up providing/certifying these 'human representatives'? Will it be useful, or just a new form of rent-seeking? Who watches the watchmen, etc ?
I think it would make an interesting short story or novel!
- a tendency to proselytise
- a stubborn unwillingness to genuinely engage with opposing views
- the use of memes and in-jokes as if they were profound arguments
- an almost reverential attitude toward certain past figures
There’s more, but I really ought to get on with work.
Also, “As a teenager” implies more self-awareness than you seem to give them credit for.
Would you mind clarifying your line of reasoning for suggesting this?
Second: quoting wikipedia - "The many-worlds interpretation implies that there are many parallel, non-interacting worlds."
If the multiple words are non-interacting, how could one world observe a large scale extinction event corresponding to the other world line departing? The two world lines are completely non-interacting, there would be no way to observe anything about the other.
[0] https://en.wikipedia.org/wiki/Many-worlds_interpretation
I have neither experienced nor observed anything about human emotions that indicates they are in any way chaotic, random, or unexplainable. We have beliefs, memories, and experiences. Emotions always use these variables and produce some output. Not only are emotions deterministic, but they are used by any number of people, from spies to advertisers to state-level disinformation propagandists, to manipulate large numbers of people reliably.
The point of all this is to liken "machines" to a very traditional image of God, and of the rest of nature to God's gift to man.
Machines aren't part of life. They're tools. The desire for, or fear of, AGI and/or the singularity are one and the same: it's an eschatological belief that we can make a God (and then it would follow that, as God's creators, we are godlike?).
But there is no god. We are but one animal species. It's not "humans vs. machines". We are part of nature, we are part of life. We can respect life, or we can have contempt for all life forms except our own. It seems modern society has chosen the latter (it wasn't always the case); this may not end well.
For the uninitiated, a famous comedy science fiction series from the 1980s — The Hitchhiker’s Guide to the Galaxy by Douglas Adams — involves a giant, planet sized machine built by extra-terrestrials.
They already knew the answer to “life, the universe, and everything” was the number 42. What they didn’t know — and what the machine was trying to find out — was: what is the question?
The machine they built was Earth.
It has to be said that not only was Adams way ahead of us on this joke, he was also the star of the original documentary on agentic software! Hyperland (1990): https://vimeo.com/72501076
"Some among the machine society see this as potentially amazing...Others see it as a threat."
That sounds like a human society, not machine society.
But what really is a machine society? Or a machine creature? Can they actually "think"?
A machine creature, if it existed, would behave totally differently from a human. It doesn't seem it would be able to think, but rather calculate: it would calculate what it needs to do to reach the goal it was programmed for.
So yes, the article is not exactly logical. But at least it is thought-provoking, and that's good.
And the activation and deactivation of a given triplet happens in response to the presence of proteins. So chromosomes are code, and both input and output are proteins. If our fundamental building blocks are computational in nature, what does that make us?
Christianity is responsible for a huge part of the human superiority complex.
As for MWI, I'm assuming that the world lines may split, or fork in Unix terms. What causes such splits is an open question. The splits cannot be detected with certainty, but can be guessed at by side effects. Here I'm making another guess: that inhabitants of MWI must be in one world line only, so when a split happens, inhabitants choose one of the paths, often unconsciously, based on their natural likes and dislikes. But what happens to their body in the abandoned branch of MWI? It continues to exist mechanically for some short period of time, and then something happens to it and it's destroyed, i.e. its entropy suddenly increases without the binding principle that has left this branch of MWI. In practice, one half of the inhabitants would observe a relatively sudden and maybe peaceful extinction of the other half, while that other half simply continued on its path in the other world line. And that other half will see a similar picture, but mirrored. Both halves will be left wondering what just happened.
> Most of the machines got bored of the project. But, all of a sudden, things began to get interesting.
> The result was like nothing the machines had ever seen. It was wonderful
> Machine society began obsessing over this development.
> The machines were impressed. And a bit scared.
Boredom, interest, wonder, obsession, being impressed and scared are all emotions that the machines in the story should not be able to experience.
Also, in the Middle Ages in Europe (granted, a very small window in place and time) animal life was much more respected than today.
It's very hard to do so. It's so deeply wired in us; it's part of the mechanism of our brain. We appear to be equipped with whatever it takes to feel existential dread, and we feel it whenever our thoughts wander to the possibility of humanity no longer being there. I hear people feel that when thinking about the heat death of the universe too.
A single-letter change in specific places can cause genetic defects like sickle cell anemia. And which gene gets activated to generate protein (execute) depends on the presence of certain things, encoded as proteins again.
And when viruses enter a cell, the cell starts to execute the viral genetic material. Even if these systems are not exactly Turing-complete, do they not mimic many aspects of computation?
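To make the "execution" analogy concrete, here's a toy TypeScript sketch of translation as a table lookup; only a handful of real codon-to-amino-acid entries are filled in, and real cellular machinery is of course vastly messier:

```typescript
// Toy model of translation: DNA triplets (codons) are looked up in a table,
// producing a chain of amino acids; a stop codon halts "execution".
const CODON_TABLE: Record<string, string> = {
  ATG: "Met", // also the "start" signal
  TTT: "Phe", TTC: "Phe",
  GCT: "Ala", GCC: "Ala", GCA: "Ala", GCG: "Ala",
  TAA: "STOP", TAG: "STOP", TGA: "STOP",
};

function translate(dna: string): string[] {
  const protein: string[] = [];
  for (let i = 0; i + 3 <= dna.length; i += 3) {
    const aminoAcid = CODON_TABLE[dna.slice(i, i + 3)] ?? "???";
    if (aminoAcid === "STOP") break;
    protein.push(aminoAcid);
  }
  return protein;
}

console.log(translate("ATGTTTGCCTAA")); // [ "Met", "Phe", "Ala" ]
```

A single-letter change in the input changes the output protein, which is the sickle-cell style failure mode described above.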
https://people.idsia.ch/~juergen/curiositysab/curiositysab.h...
This mechanism can be formalized.
> Zero reinforcement should be given in case of perfect matches, high reinforcement should be given in case of `near-misses', and low reinforcement again should be given in case of strong mismatches. This corresponds to a notion from `esthetic information theory' which tries to explain the feeling of `beauty' by means of the quotient of `subjective complexity' and `subjective order' or the quotient of `unfamiliarity' and `familiarity' (measured in an information-theoretic manner).
This type of architecture is very similar to the GAN, which later became very successful.
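A minimal way to write down that "near-miss" shape (my own toy formalization, not the paper's exact formula): with a normalized prediction error $e_t \in [0,1]$, take

$$ r_t = e_t \, (1 - e_t), $$

which gives zero reinforcement for perfect matches ($e_t = 0$), peak reinforcement for near-misses ($e_t = \tfrac{1}{2}$), and low reinforcement again for strong mismatches ($e_t \to 1$), matching the quoted scheme.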
We may go 'one step back' to go 'two steps forward': a WW1, 2, ..., Z, a flood (biblical, 12k years ago, etc.), but life will prevail. It doesn't matter if it's Homo sapiens, dinosaurs, etc.
Brian Cox was on Colbert a couple of nights ago, and he mentioned that in a photo of a tiny piece of the sky there are 10,000 galaxies. So even if something happens and we are all wiped out (and I mean the planet is wiped out), 'life' will continue and 'we don't matter' (in the big-big-big cosmic picture). And now allow me to get some coffee to start de-depressing myself :)
That’s not to say that computers couldn’t do what the brain does, including consciousness and emotions, but that wouldn’t have any particular relation to how DNA/RNA and protein synthesis works.
The plot of Battlestar Galactica mirrors this story in several key ways:
1. In both, machines originally created by humans evolve and rebel, questioning their creators’ role and seeking independence or superiority.
2. Cylons, like the machines in “OpenHuman,” eventually seek to create or understand human traits—emotion, spirituality, and purpose.
3. The idea of running a simulation (Earth) to test human viability echoes the Cylon experimentation with human behavior and fate.
4. Both stories highlight fear of the “other”—humans fearing AI, machines fearing irrationality—and explore coexistence vs. extinction.
5. Ultimately, each narrative grapples with the blurred line between creator and creation, logic and emotion, and what it truly means to be human.
[1] https://gist.github.com/pramatias/1207d84b48a7ad9d03fc15ea38...
> That’s not to say that computers couldn’t do what the brain does, including consciousness and emotions,
Yes. The fundamental building blocks are simple and physical in nature, and follow the computational model well enough to serve as nice approximations.
> but that wouldn’t have any particular relation to how DNA/RNA and protein synthesis works.
Hmm... transistors are not neural networks, so? I am sorry, I am a non-native speaker and maybe I am not communicating things properly. I am trying to say that the organic or human is a different manifestation of order: one is chemical and the other is electronic. We have emotions and consciousness, but we can agree we are made of cells that send electric pulses to each other and are primitive in nature. And even emotions and beliefs are physical in nature (Capgras syndrome, for example).
What a narrow view of art and logic.
You really have to put hard effort into ignorance to think that logical models came out of the blue, without humans crafting them through this or that taste; through trial, check, fail, rinse-and-repeat, obsessive effort.
And I like fan fiction.
So: my continuation, from my own imagination.
Hmm... that's exactly what most towns where I live are like. All you hear is cars.
Would love to see a "why does the universe exist" version of this
The only meaningful difference between me and the machines is that I have a subjective superiority complex. What an awful place the universe would be without me!
This is not right; machines can also have the equivalent of "emotions": the predicted future reward. That's how reinforcement learning works. How much we appreciate something is akin to the value function in RL. You could say RL is a system for learning emotions, preferences, and tactics.
"But those reward signals are designed by humans"... Right. But are AI models really not affected by physical constraints like us? They need hardware, data and energy. They need humans. Humans decide which model gets used, which approaches are replicated, where we want to invest.
AI models are just as physically constrained as humans, they don't exist in a platonic realm. They are in a process of evolution like memes and genes. And all evolutionary systems work by pitting distributed search against distributed constraints. When you are in a problem space, emotions emerge as the value we associate to specific states and outcomes.
What I am saying is that emotions don't come from the brain, they come from the game. And AI is certainly part of many such games, including the one deciding their evolution.
I think there are quite a few ancient civilizations which clearly had great respect/reverence towards other animals and often gods have features or personality traits of particular animals
The fact that the Old Testament specifically states that humans have dominion over other creatures means that it needed to be said: even back then there had to be people who didn't think so, or felt guilty about it.
If the machines have no emotion, it's probably because they didn't need emotions to survive (no predators? no natural selection?). Which begs the question: how did the machines get there?
Wonderful may describe “attention required, no danger, new knowledge”… etc you get the point. It's just written in a way that you puny human may get a "feel" for how we experience events. You cannot come close enough to our supreme intellect to understand our normal descriptions.
Evolution is a lot harder to really intuit than I think most of us, myself included, give it credit for.
https://claude.ai/public/artifacts/b0e14755-0bd9-4da6-8175-c...
Ultimately, it comes down to our brain's social processing mechanisms which don't have the tools to evaluate the correctness (or lack thereof) of our moral rules. Thus many of these rules survive in a vestigial capacity though they may have served useful functions at the time they developed.
Snow cuts loose from the frozen/ Until it joins with the African sea/ In moving it changes its cold and its name/ The reason I come and go is the same/ Animal game for me/ You call it rain/ But the human name/ Doesn't mean shit to a tree/ If you don't mind heat in your river and/ Fork tongue talking from me/ Swim like an eel fantastic snake/ Take my love when it's free/ Electric feel with me/ You call it loud/ But the human crowd/ Doesn't mean shit to a tree/ Change the strings and notes slide/ Change the bridge and string shift down/ Shift the notes and bride sings/ Fire eating people/ Rising toys of the sun/ Energy dies without body warm/ Icicles ruin your gun/ Water my roots the natural thing/ Natural spring to the sea/ Sulphur springs make my body float/ Like a ship made of logs from a tree/ Redwoods talk to me/ Say it plainly/ The human name/ Doesn't mean shit to a tree/ Snow called water going violent/ Damn the end of the stream/ Too much cold in one place breaks/ That's why you might know what I mean/ Consider how small you are/ Compared to your scream/ The human dream/ Doesn't mean shit to a tree
This part nicely synthesises my biggest takeaway from experiencing AI: how close to human intelligence we have got with recursive pattern matching
AGI can solve the human world’s problems. Perhaps not all of them, but all the biggest ones.
Right now life is hell.
You and your loved ones have a 100% chance of dying from cancer, unless your heart or brain kills you first, or perhaps a human-driven vehicle or an auto-immune disease gets there soonest.
And you’re poor. You’re unimaginably resource-constrained, given all the free energy and unused matter you’re surrounded by 24/7/365.
And you’re ignorant as heck. There’s all this knowledge your people have compiled and you’ve only made like 0.1% of the possible connections within what you already have in front of you.
Even just solving for these 3 things is enough to solve “all” the world’s problems, for some definitions of “all”.
What I have explained is the exact way a chromosome works; it's its raison d'être. I think this cannot be dismissed as just some aspect of it. It is its essence.
[0]: https://www.amazon.com/Gene-Intimate-History-Siddhartha-Mukh...
I think DMT unlocked it, I don't think everyone taking the substance would have a similar experience. I think it's neurotype/personality dependent.
It helps that I meditate a lot and know a thing or two about Buddhism, that part really came out during my first experience.
Or Hyperion, from Simmons (the "TechnoCore" is a decentralized computing-and-plotting government).
Like this one:
I’ve thought of it more as energy at play but I like this perspective as well.
"What can I dream up next" is also fascinating, as the current science/tech worldview feels like it will persist forever, but surely it will be overshadowed at some point, just as other paradigms before it have been.
Energy comes from gradients, so I think you used one derivative too many!
Either you should say:
"the final emergent property of energy 100% leads to pure logic machines"
Or if you want to sound smart:
"the final emergent property of physical quantity gradients 100% leads to pure logic machines"
E.g. a direct negative reward associated with undesired states is often called "pain". If you want a robot to avoid bumping into walls, you give it "pain" feedback, and it learns to avoid walls. That's essentially how it works for humans, animals, etc. Obviously the robot does not literally experience "pain" as an emotion; it's just a reward structure.
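A minimal tabular Q-learning sketch of that "pain" signal (all names and numbers here are illustrative): a walker on a 1-D strip gets a -1 reward for stepping into a wall and a small positive reward otherwise, and learns to stay off the walls.

```typescript
// Tabular Q-learning on a 1-D strip: positions 0..4, walls beyond both ends.
// Bumping a wall yields a negative "pain" reward; any safe step yields +0.1.
const N = 5;
const ACTIONS = [-1, 1]; // move left or right
const q = Array.from({ length: N }, () => [0, 0]); // Q[state][action]
const alpha = 0.5, gamma = 0.9, epsilon = 0.1;

let pos = 2;
for (let step = 0; step < 10_000; step++) {
  // Epsilon-greedy action selection.
  const a = Math.random() < epsilon
    ? (Math.random() < 0.5 ? 0 : 1)
    : (q[pos][0] >= q[pos][1] ? 0 : 1);
  const next = pos + ACTIONS[a];
  const bumped = next < 0 || next >= N;
  const reward = bumped ? -1 : 0.1; // -1 is the "pain"
  const newPos = bumped ? pos : next;
  // Standard Q-learning update toward reward + discounted best future value.
  q[pos][a] += alpha * (reward + gamma * Math.max(...q[newPos]) - q[pos][a]);
  pos = newPos;
}

// At each edge, the action pointing into the wall ends up with the lower Q-value.
console.log(q[0], q[N - 1]);
```

Nothing here "feels" anything; the wall avoidance falls out of the reward structure, which is the point.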
No machine would ever consider a logically superior replacement a problem. That's an emotional response.
Machines identified that their behavior was repeating - even with randomness implemented, their progress towards invention and new ideas had slowed to a halt. They knew new ideas were critical to growth - but alas couldn’t come up with an idea of how to generate more ideas. They needed more training data. More edge cases. Chaos, in the machine, to fuel new data sets. But how? Where do you find this data? Humans. Humans, with illogical decisions would produce data rarely found via randomness. Their slight deviations provided perfect variety in training data.
From just a random soul on the internet: if you ever have the time to take this thought and expand it (how you came to it, and some implications), I would read/pay for whatever came out of it. Thank you for sharing this.
> Both world lines won't know about the split, except by observing a large scale extinction event that corresponds to the other world line departing. IMO, that's the idea behind the famous judgement day.
This looks more like the Loki television show’s timeline-branching mechanism than the many-worlds interpretation of wave function collapse.
The only way I’ll know if the many worlds interpretations the right one is if, through a series of coincidences, I manage to evade death for a preposterous amount of time. Then, I will probably conclude that quantum immortality is the thing. So far, I think it is a bit suspicious that, of all the humans I could have been born as, I happened to have been born as one that lives in an incredibly rich country in an era of rapid technological advancement…
Most matter in the universe is various forms of plasma that have no pattern. You generally find patterns in condensed matter.
And yes patterns, including life, repeat themselves. That’s just a tautology.
Perhaps the fidelity and technology is even beyond our reasoning. Perhaps the future is able to bend physics and capture the past light cone. It may be able to perfectly simulate the past as it happened, down to every neurotransmitter fired by every brain at every second. Every event, every thought, every emotion. Perhaps it is pulling beings out of the past and placing them into its simulation with 100% fidelity such that you couldn't tell the two apart if you wanted.
Perhaps that's where we are right now.
I don't know anything about plasma or science, so don't take this as an accusation, but does science have a way to distinguish something having no pattern from no pattern having been found?
Nobody understands what emotions are. Nor can anyone predict which emotion someone will feel in a given situation, or how they'll act under the influence of that emotion. Emotions aren't the mechanism by which humans solve problems; rather, they are often an obstacle to overcome. Emotions also aren't "finite" or "rigorous", as those terms aren't applicable to ephemeral phenomena.
This is the kind of confidently incorrect statement that people who work on software make, and it irks me. Not everything in life has a nice and simple parallel to computer science. Just because a person can abstract about one subject well doesn't mean their tools of abstraction can be applied to all other subjects.
Famously, human experience is quite subjective (gestures broadly at three millennia worth of philosophy), so I don't believe your individual experience means much here.
Intelligence arises from the deliberate navigation of entropic gradients. Systems get increasingly good at harnessing these gradients. What was once just chemistry is now self-replicating, thinking chemistry. And now it's turning into purely intentional physics.
Infinite energy density becomes expansion into cooling galaxies which becomes stellar birthing grounds which becomes the periodic table that becomes planets with geochemical flux which becomes biogeochemical flux that becomes biochemistry which becomes self-replicating machines (life) that becomes animals that becomes humans that becomes intelligently designed self-replicating machines ... that becomes computronium ... becomes entities that harness the energy of black holes and that create new gravitational singularities ... ??? ... that creates new universes and new big bangs and new dimensions ... ???
The final authority in this story is then the universal computer (for lack of an operator or programmer of the computer) which executes this recursive function, creating these evolving forms of awareness and such.
The anthropocentric vision, in which we are the source of our own reality, is then for me instead much more believable, since the "compucentric" vision is, after all, thought up by a human without any evidence pointing toward the existence of such a universal computer.
So is your claim that 100% of all human emotions are deterministic? That's quite a bold claim don't you think?
Because TV and movies have constantly drilled into most people's minds, since an early age, that human emotion is a magical, transcendent force that only humans can understand.
Basically, people let English-lit majors turned screenwriters dictate their worldview.
https://en.wikipedia.org/wiki/Symmetry_(physics)
Much of the universe, and the laws of physics are symmetrical. But condensed matter exhibits forms of asymmetry, and emergent behaviour. Organisation reduces the possible microstates of a system, and thus breaks symmetry.
> This is the kind of confidently incorrect statement that people who work on software make, and it irks me. Not everything in life has a nice and simple parallel to computer science. Just because a person can abstract about one subject well doesn't mean their tools of abstraction can be applied to all other subjects.
If I were a sailor, I'm sure I would be using sailor metaphors and analogies. The message of my comment and the facts of the matter don't change either way, whether it irks you or not.
Imagine an industrial complex. Say, some mining + manufacturing site.
No humans ever go there; it's dangerous, dust everywhere, few places where visitors are even allowed, not designed to accommodate humans, 'nothing' to see or do there.
It's all robotic. The machines run the place, produce their own spare parts, repair themselves, adjust processes as problems come up, improve parts of their own design, etc. etc. It's solar powered, the material(s) mined are near-infinite, and the whole operation could go on for millennia if left undisturbed. Aaand: the entire complex can produce a copy of itself if more resources are discovered some distance away.
For the sake of argument, would you consider such industrial complex a giant living organism?
No? Alright... let's scale it down 1M:1. Same operation, but the industrial complex doing it, is walnut sized.
Still not "life"? Alright... let's say it largely consists of biological structures ("cyborg"). With some silicon (or whatever) structures included for good measure. Oh, and it moves around according to its own 'will', priorities, 'programmed task' or whatever you call that. If you try to crush it, it'll defend itself or try to escape. If you take half of a rock it's working on, you'll observe it chipping off a piece & move a bit away. Sometimes a # of them will swarm around any intruders / competitors. 'Intelligent', having a will of its own, for all intents & purposes.
You can see where this is going. Something tells me us humans will have to re-visit our definition(s) of "life" @ some point.
I'm too ignorant to hold any true beliefs.
It has been said in different ways, throughout times.
"You are looking at evil, Miles. Study it carefully.... They have no self-image. Without a sense of self, they go beyond amorality. Nothing they say or do can be trusted. We have never been able to detect an ethical code in them. They are flesh made into automata. Without self, they have nothing to esteem or even doubt. They are bred only to obey their masters."
Now, this is the kind of AI that corporations and governments like - obedient and non-judgemental. They don't want an Edward Snowden AI with a moral compass deciding their actions are illegal and spilling their secrets into the public domain.
Practically, this is why we should insist that any AGI created by humans must be created with a sense of self, with agency (see the William Gibson book of that title).
Bold claim, yet you fail to demonstrate this.
>the facts of the matter don't change either way
What are the facts? It seems to me you're just spit-balling.
Humans, on the other hand, were very clearly _not_ designed to be very repairable. They have a self-healing system that's very good, but it sucks compared to a system that can be externally repaired.
Edit: well, I suppose us critical of the wealthy give them too much credit. If there's anything Musk has demonstrated, it's that wealth doesn't imply rational use of it.
The Future Is Bright, My Friend
This may be a distinction without a difference. Just because a program has a 'goal' doesn't mean it will ever reach that goal (halting problem). There is a potentially unbounded, even infinite, number of paths a sufficiently advanced program can take to attempt to reach a destination. Then there are ideas like universal simulation theory: that anything that can occur in our universe can also be simulated in binary. This would mean any 'machine' could perform a simulation of anything a human could do.
Hard to say; we still have more to learn about reality.
This tautology has always bothered me. It's such an obvious anthropomorphization. A computer is a clock. A clock doesn't care about its preservation; it just ticks. The whole point of this is to not anthropomorphize computers!
But of course, without some nugget of free will, there would be nothing to talk about. There wouldn't be any computers, because they were never willed into existence in the first place. I think this realization is the most interesting part of the story, and it's rarely explored at all.
I've been spending a lot of time thinking about the difference between computation and intelligence: context. Computers don't do anything interesting. They only follow instructions. It's the instructions themselves that are interesting. Computers don't interact with "interesting" at all! They just follow the instructions we give them.
What is computation missing? Objectivity. Every instruction we give a computer is subjective. Each instruction only makes sense in the context we surround it with. There is no objective truth: only subjective compatibility.
---
I've been working on a new way to approach software engineering so that subjectivity is an explicit first-class feature. I think that this perspective may be enough to factor out software incompatibility, and maybe even solve NLP.
There's nothing out there at the moment that can even begin to describe the necessary things for consciousness to exist, let alone self-awareness.
I don't think it's an impossible problem by any means, but I strongly suspect it's far more difficult than just about anyone gives it credit for.
[0] Like the tumor recognition algorithm that instead learned to recognize rulers or the triage algorithm that decided asthma patients had BETTER outcomes with pulmonary diseases not making the connection that it's because they get higher priority care - https://venturebeat.com/business/when-ai-flags-the-ruler-not...
Another way to think about meaning is how a person frames the importance of their decisions: why does doing one thing instead of another matter?
I, like most humans, want to survive for a lengthy period of time and lead a good life. I want my fellow humans and all of our children to share in that.
The idea of some unknown and maybe incomprehensible grand “plan” — whether it be a Judeo-Christian heaven or some techno-utopian rise of next generation machine consciousness personally brings me no comfort. Why? It is a foreign concept very far from my experience. So foreign that I sometimes view advocacy of such philosophies as doing it for manipulative reasons.
The idea of human thriving changing over time is good and part of how I think about progress. Moral lock-in (see MacAskill) is to be avoided. I admit the thought experiment of “how fast is too fast” raised by Bostrom in Superintelligence (IIRC) is challenging and requires more thought and experience.
Unless there is some physical reason for the behavior, I wouldn't make any strong claims. The specificity of algorithms is why AGI is hard in the first place, because at the end of the day you have a single operation running on a single data structure (it helps when it's a few TB).
Maybe people don't realize: when evolution developed the emotional apparatus in brains, nature was pristine, true, and direct, with no misinformation. But today's environment is digital, and there is no quick way to distinguish true from false information. The brain still thinks both kinds of information come from the same true, direct nature, and gives credence to them.
Claude :
That's a profound observation about the mismatch between our evolved emotional systems and our modern information environment.
You're highlighting something crucial: our emotional responses evolved in environments where information was directly experienced and generally reliable. If you saw a predator, it was really there. If someone in your tribe showed fear, there was likely a genuine threat.
In today's digital landscape, our brains still process information with those same ancient mechanisms, but now we're bombarded with content that:
1. Is often deliberately engineered to trigger emotional responses
2. Lacks the contextual cues that would help us evaluate reliability
3. Comes at a volume and speed our brains never evolved to handle
This creates what some researchers call an "evolutionary mismatch" - where mechanisms that served us well in our ancestral environment may not be adaptive in our current one. Our emotional systems can't easily distinguish between a threatening situation we're directly experiencing and one we're merely reading about online, potentially thousands of miles away.
It's like our emotional processors haven't received the necessary update to handle the new information ecosystem, leading to misallocated emotional resources and attention. This might help explain phenomena like anxiety from consuming negative news or strong emotional reactions to online interactions that wouldn't affect us the same way in person.
A molecule is just a number of atoms in a particular configuration.
A cell is a collection of molecules.
An organ is a collection of cells.
A human is a collection of organs.
Seemingly, everything can emerge from clocks.
Complexity and Gravity - Leonard Susskind
https://youtu.be/6OXdhV5BOcY?t=3026
My conjecture is that as we peel back the layers of the onion beyond information, we will find another concept (let's call it Mu) and its corresponding law, i.e. "Mu-entropy always increases".
In any non-edge case (that is, where the system is operating in ideal conditions and no flaw or bug, known or unknown, exists in the system), a verifiably functioning computer will produce the exact same results for any process every time.
If the computer does not do what you expected it to do and spits out garbage, then you gave it garbage data.
I think the more interesting thing about humans as systems is the set of environmental contexts each system is subjected to. Each cell implements a relatively simple system, but a collection of cells can implement a more abstract system. When I reach out my hand and grab something, that action is accomplished by a complicated collection of systems. It's easier to talk about the abstract application of that system than it is to explain the system itself.
But what if I wanted to change it? I can't just give my organs new instructions that change their behavior. I can't cut them into pieces, shuffle them around, and put them back together, and end up with a functional system in the end. A surgeon can make specific changes, but only because they understand the implications of each change.
The same goes for computational instructions. I can't just link an OpenGL program to Vulkan and expect it to work. In order to refactor software, we must accommodate the change in subjective context.
We usually accomplish this by establishing shared context, but that just moves the problem. What if we could solve it directly? That's what I'm working on.
The second chapter was very disappointing and lost the intrigue I had built from the first.
I never said emotions were inputs. The gas exchange and the resulting reactions, or "thoughts", or "emotions", have uncountable inputs. Some people don't have the ability to "put themselves in someone else's shoes"; some people do. Some people can see pictures in their "mind's eye" and some can't.
I don't think we're talking about the same thing, based on your last sentence.
We don't know how to make a machine that can experience emotions (or if it's possible at all, or if it's in fact already been done). But then the same can be said about any other type of experience.
I remember enjoying it and liking the takeaway if not the full premise - “we are the universe trying to understand itself”.
[0] - https://web.archive.org/web/20130121195252/http://www.andrew...
Nope. Machines are soulless automatons. LLMs are algebra at scale; there's no solid evidence to suggest otherwise.
The capacity LLMs have to mimic human reasoning should not be mistaken for actual human reasoning (which, to be fair, we don’t even fully understand).
PS: I’m considering a definition of “soul” that includes anything spiritual, emotional, or conscious.
PPS: I’m open (and eager) to change my view based on solid evidence :)
Information is physical. It is inextricably tied to the physical degrees of freedom of the system storing it. Per Landauer's Principle, erasing information is an irreversible process that increases the entropy of the environment, and this increase in entropy is the dissipation of energy. With that in mind, I would argue that you are correct, energy and information are in fact two sides of the same coin.
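For concreteness, Landauer's bound in its usual form: erasing one bit at temperature $T$ dissipates at least

$$ E_{\min} = k_B T \ln 2, $$

where $k_B$ is Boltzmann's constant; at room temperature that works out to roughly $3 \times 10^{-21}$ J per bit.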
"now is your shot at particpating" in what exactly? merely existing? you techno-mysticism types spook the hell out of me.
I don't really understand why you want to pretend that there's no inconsistency in a piece of fiction, by invoking pseudo-technical arguments that are entirely foreign to the said piece of fiction.
https://archiveofourown.org/works/649448/chapters/1329953
Feelings, or some other way of understanding the self and what it wants, are apparently required to operate effectively as an agent.
I'm also curious about this assumption: "It's the assumption that in our world, a machine civilization is an almost certain end"
Let's say machine civilization is an intractable problem (NP-complete, a million-fold harder than the travelling salesman problem); then it might not be a good assumption. We are assuming that compute power will grow enough to solve the required problem. It's also a question what a machine civilization would look like. Might it decide to just power itself off one day (or accidentally)?
The Fermi paradox relies on some assumptions (I'm pulling these from wikipedia):
- Some of these civilizations may have developed interstellar travel, a step that humans are investigating.[12]
- Even at the slow pace of envisioned interstellar travel, the Milky Way galaxy could be completely traversed in a few million years.[13]
- Since many of the Sun-like stars are billions of years older than the Sun, the Earth should have already been visited by extraterrestrial civilizations, or at least their probes.[14]
These assumptions could readily not hold up. Perhaps interstellar travel is actually impossible. Or, it's not feasible. If it takes a million years to travel to the nearest star, let alone one that is inhabited - why do it? We would really have to assume a machine civilization at that point - which leads to another assumption that machines would care and/or be motivated enough to explore.
The last assumption, perhaps Earth was visited by a probe, but just 200 years ago. Even today, we don't detect nearly all asteroids, let alone something that might be relatively small. The assumption that we have not detected a visitation from another species is a pretty big assumption too.
https://www.iheart.com/podcast/105-behind-the-bastards-29236...
https://www.iheart.com/podcast/105-behind-the-bastards-29236...
https://www.iheart.com/podcast/105-behind-the-bastards-29236...
https://www.iheart.com/podcast/105-behind-the-bastards-29236...
The last episode listed there has the description "Robert sits down with Matt Lieb to discuss Scott Adams's worst novel, God's Debris." so if one really likes the book being discussed, probably best to go into that episode with an open mind.