https://www.inf.fu-berlin.de/inst/ag-ki/rojas_home/documents...
"However, we should be careful with the metaphors and paradigms commonly introduced when dealing with the nervous system. It seems to be a constant in the history of science that the brain has always been compared to the most complicated contemporary artifact produced by human industry [297]. In ancient times the brain was compared to a pneumatic machine, in the Renaissance to a clockwork, and at the end of the last century to the telephone network. There are some today who consider computers the paradigm par excellence of a nervous system. It is rather paradoxical that when John von Neumann wrote his classical description of future universal computers, he tried to choose terms that would describe computers in terms of brains, not brains in terms of computers."
I have no idea what the submitted MIT article is trying to say. Does the MIT article try to make the point that neural networks can be used for computation given ridiculous amounts of memory? They can, but that still does not explain real intelligence. Otherwise, the article makes the same mistakes as pointed out in the above quote.
So asking if life is a computation seems mostly like a semantic musing. Define "life" and define "computation", then see if they're the same.
[1] https://plato.stanford.edu/entries/computational-mind/#GodIn...
It’s likely that if different life forms exist on another planet, they will have a different “computation” model, because it’s defined by the different physics they experience during evolution. Though I suppose there will be some similarities, depending on some fundamental rules of the universe. Will propagation molecules like RNA or DNA always look like helixes, or will the radiation or physics of another planet create another form of propagation molecule we haven’t yet observed? Might make for an interesting experiment to simulate.
"It's not even wrong" - Pauli
Nothing about life is discussed here, it's not even defined once.
Computation really is a fancy word for calculation. What matters about computation is that it's teleological. Computers are physical systems designed towards a particular end. A computer is, physically, no different than any other system. What differentiates it is that it's designed and we're interpreting its behaviour in a particular way.
Unless you're trying to make a grand theological argument in which "life" is taken to be some Hitchhikers Guide-like machination towards some end, it's not a computation. Life doesn't compute anything, the same way a falling pen doesn't compute gravity unless in a metaphorical sense.
The article is honestly a pretty good example of the problems of taking metaphors literally, common in the AI space where the author hails from. A similar case is "artificial neurons," which are really metaphorical neurons. You have to be particularly careful when making comparisons between intentionally designed technological artifacts and biological and physical processes.
I'm no expert, so I can't judge the result of "drawing a missing hand by using a neural network on each pixel" (if that's what's being done? Again, not an expert).
In that sense life is obviously not a computation: it makes some sense to view DNA as symbolic but it is misleading to do the same for the proteins they encode. These proteins are solving physical problems, not expressing symbolic solutions to symbolic problems - a wrench is not a symbolic solution to the problem of a symbolic lug nut. From this POV the analogy of DNA to computer program is just wrong: they are both analogous to blueprints, but not particularly analogous to each other. We should insist that DNA is no more "computational" than the rules that dictate how elements are formed from subatomic particles.
[1] Turing computability, lambda definability, primitive recursion, whatever.
Enzymes in particular are a lot like unix pipelines. An enzyme catalyzes its substrate's conversion into its product which is the substrate of another enzyme. When cells ingest glucose, it flows through the glycolysis metabolic pathway until it becomes pyruvate, and may be reduced even further depending on available resources. It's a huge pipeline of enzymes. They just kinda float around within the cell and randomly perform their tasks when their substrates chemically interact with them. No explicit program exists, it emerges from the system within the cell.
Cell - Computer
Enzyme - Function / Process / Filter
Substrate - Data
Product - Data
Metabolic pathway - Program / Script
I've been playing in my mind with an idea for an esoteric programming language modeled around enzymes. The program defines a set of enzymes, which are functions that match on the structure of data, automatically apply themselves to it, and produce a modified version of the input which may in turn match against other enzymes. The resulting program metabolizes input by looping over the set of enzymes and continuously matching and applying them until the data is reduced to its final form. If no enzymes match, the output is the unmodified input.
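A minimal sketch of that loop in Python, just to make the idea concrete (the enzyme names and the glycolysis-flavoured rules are invented purely for illustration):

    # Toy sketch of the "metabolize" loop described above. Enzyme names and
    # rules are made up for illustration.
    def hexokinase(data):
        # Matches on structure: plain "glucose"; otherwise no match.
        return "glucose-6-phosphate" if data == "glucose" else None

    def rest_of_glycolysis(data):
        return "pyruvate" if data == "glucose-6-phosphate" else None

    def metabolize(data, enzymes):
        # Keep applying any enzyme that matches until none do,
        # then return the (possibly unmodified) data.
        while True:
            for enzyme in enzymes:
                product = enzyme(data)
                if product is not None:
                    data = product
                    break
            else:
                return data

    print(metabolize("glucose", [hexokinase, rest_of_glycolysis]))  # -> pyruvate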
There is no optimization; if organisms can reproduce, they'll continue to exist. That does not mean they are the "best adapted" or on a trajectory toward better adaptation.

It's entirely possible for a germ line to become less fit over time, even to the point of extinction, and that's still evolution. Time has shown that is the case for most germ lines.
A physical computer is still a computer, no matter what it's computing. The only use a computer has to us is to compute things relative to physical reality, so a physical computer seems even closer to a "real computer" or "real computation" to me than our sad little hot rocks, which can barely simulate anything real to any degree of accuracy, when compared to reality.
And the flux of geothermal and chemical energy
Or do you mean that optimization by definition must include intent, and evolution as a mindless process has no intentionality?
I'm just not sure what you're driving at.
Abstractions don't really exist, they're a product of the human mind, but then we apply them to nature. Calling DNA code, comparing NNs and the brain, etc. But those abstractions fall apart when you look a little too deeply at what actually happens in nature.
Is DNA code? Or is it more like a machine? Is it neither, or is it something embedded in such a complex space that our simple abstractions can't capture the full nature of its being?
When you look at the nature of DNA, it does more than simply act as code. It can edit and self-modify, self-assemble, self-replicate, it can turn genes on and off, and it can perform what can be argued to be computations itself. If you limit yourself to thinking of it as code, you might miss crucial ways it exists/performs in real life.
There are quite a number of people who believe this is the universe. Namely, that the universe is the manifestation of all rule sets on all inputs at all points in time. How you extract quantum mechanics out of that... not so sure
It's a shame because there *has* been a lot of deep work done on what kind of computer life is. People often use the Chomsky Hierarchy (https://en.wikipedia.org/wiki/Chomsky_hierarchy) to define the different types of computers vs automata. Importantly, a classical Turing machine is Type-0 on the Chomsky Hierarchy. Depending on what parts you include from a biological system, you could argue it's anywhere from Type-0 to Type-3.
Interestingly, the PhD thesis of well-known geneticist Aviv Regev was to show that certain combinations of enzymes with chemical concentration states are enough to emulate pi-calculus, and therefore are Turing machines! https://psb.stanford.edu/psb-online/proceedings/psb01/regev....
> It can edit and self-modify, self-assemble, self-replicate, it can turn genes on and off
Unless my knowledge of biology is very outdated or incomplete, all of those things you cited are done to DNA. They don't happen spontaneously.
DNA doesn't self-replicate, a whole bunch of enzymes come and actively copy it. Genes don't spontaneously turn on and off, some enzyme comes and attaches or removes a methyl group. DNA doesn't self-assemble, it is actively coiled around histones to form nucleosomes. Bacteria have a huge variety of enzymes for manipulating native and foreign DNA, they have their own CRISPR mechanisms.
My addition: it's funny that, for all the speculation we get in "hard cognitive science" (RIP), and given the big insights we get from Gödel, Turing, and Russell, many/most undergraduates and even post-graduates still haven't internalized Wittgenstein's work, especially the Tractatus. I feel like it gets us to: "the questions you're asking about how life works and the questions about what is at the core of logic and mathematics (language) are definitely related, but not in any of the fundamental ways you hope they are..."
For the uninitiated-- try reading the thing in one sitting. It takes about an hour:
https://wittgensteinproject.org/w/index.php/Tractatus_Logico...
I'm reminded of an old YouTube video [0] that I rewatched recently. That video is "Every Zelda is the Darkest Zelda." Topically, it's completely different. But in it Jacob Geller talks about how there are many videos with fan theories about Zelda games where they're talking about how messed up the game is. Except, that's their only point. If you frame the game in some way, it's really messed up. It doesn't extract any additional meaning, and textually it's not what's present. So you're going through all this decoding and framing, and at the end your conclusion is... nothing. The Mario characters represent the seven deadly sins? Well, that's messed up. That's maybe fun, but it's an empty analysis. It has no insight. No bite.
So, what's the result here other than: Well, that's neat. It's an interesting frame. But other than the thought to construct it, does it inform us of anything? Honestly, I'm not even sure it's really saying life is a form of programming. It seems equally likely it's saying programming is a form of biochemistry (which, honestly, makes more sense given the origins of programming). But even if that were so, what does that give us that we didn't already know? I'm going to bake a pie, so I guess I should learn Go? No, the idea feels descriptive rather than a synthesis. Like an analogy without the conclusion. The pie has no bite.
OP's specific phrasing is that they "map symbols to symbols". Analog computers don't do that. Some can, but that's not their definition.
Turing machines et al. are a model of computation in mathematics. Humans do math by operating on symbols, so that's why that model operates on symbols. It's not an inherent part of the definition.
But DNA is effectively separation of concerns: RNA-only systems evolved into RNA-mediated systems, with DNA as more inert and reliable storage and enzymes as more effective catalysts. Or so the RNA world hypothesis goes.
I learned something new today! Thank you.
It's impressive that RNA of all things can be folded in such a way that it also acts like an enzyme.
Your comment is only true if you take an excessively reductive view of "symbol."
Similarly, RNA and DNA "machines" could have existed before cellular life, in which genetic material self-assembled, transferred genes horizontally/vertically, etc, blurring the lines between genes as "code" and something else.
You keep referring to what we are interested in, but that's not a relevant quantity here.
A symbol is a discrete sign that has some sort of symbol table (explicit or not) describing the mapping of the sign to the intended interpretation. An analog computer often directly solves the physical problem (e.g. an ODE) by building a device whose behavior is governed by that ODE. That is, it solves the ODE by just applying the laws of physics directly to the world.
If your claim is that analog computers are symbolic but the same physical process is not merely because we are "interested in" the result then I don't agree. And you'd also be committed to saying proteins are symbolic if we build an analog computer that runs on DNA and proteins. In which case it seems like they become always symbolic if we're always interested in life as computation.
This is true, but that sure seems unfair. ;) You have multiple competing systems, in the case of a germ. The system that human related germs are competing with is around 30 trillion times the size, with the advantage of some fairly incredible emergent properties that come from that. The germ is evolving, but in a system that completely overwhelms it, with evolved tricks to specifically force the germ along the "unhappy path" of evolution.
> OP's specific phrasing is that they "map symbols to symbols". Analog computers don't do that. Some can, but that's not their definition.
How is that not symbolic? Fundamentally that kind of computer maps the positions of some rods or gears or what have you to the positions of some other rods or gears or what have you, and the first rods or gears are symbolising motion or elevation or what have you and the final one is symbolising barrel angle or what have you. (And sure, you might physically connect the final gear directly to the actual gun barrel, but that's not the part that's computation; the computation is the part happening with the little gears and rods in the middle, and they have symbolic meanings).
Computers are functional mappings from inputs to outputs, sure.
Analog fire computers are continuous mappings from a continuum, a line segment (curved about a cam), to another continuum, a dial perhaps.
Symbolic operations, mapping from patterns of 0s and 1s (say) to other patterns are discrete, countable mappings.
With a real valued electrical current, discrete symbols are forced by threshold levels.
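A trivial sketch of that thresholding (the 0.8 V cutoff here is arbitrary, purely for illustration):

    # Recovering discrete symbols (bits) from a continuous signal by
    # thresholding, as digital logic does with voltage levels.
    def to_bit(voltage, threshold=0.8):
        return 1 if voltage >= threshold else 0

    samples = [0.1, 0.05, 1.2, 1.1, 0.3, 0.9]
    print([to_bit(v) for v in samples])  # -> [0, 0, 1, 1, 0, 1]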
Human imagination allows us to explore as a simulation anything we want with a form of physicalized internal coherence.
Does internal coherence align with repeatable external coherence? That's what we call empirical.
Humans are the known meaning generators of the universe; we are interesting and special, and our unique/random walks are important in an uncomputable and unbound sense. Who knows what causal chains will lead us, where they'll take us, or how they might save us (from asteroids, let's say) or might reshape the topology of spacetime.
It's early days yet.
There are several notions that aren’t examined, which stands in the way of having a sensible conversation about the question.
1. The definition of computation.
2. The definition of life.
3. The difference between the real order and the logical and epistemic orders.
Searle famously pointed out that computation is observer-relative. Sure, we can establish some kind of abstracted correspondence between a computing formalism and a natural process, and this correspondence can be fun or even a useful metaphor, but it is senseless to ask whether life is computation. Objectively, without an observer, there is no computation going on. In fact, even your computer is not objectively speaking computing.
You can effectively draw this correspondence with anything (Seth Lloyd did this with quantum mechanics), and if everything is computation, then nothing is. It becomes a synonym for all of reality.
1. Things in nature have a maximum complexity, which is like computation.
2. Most things get this complicated.
3. Therefore most things are "computationally equivalent".
4. "For example, the workings of the human brain or the evolution of weather systems can, in principle, compute the same things as a computer."
The leap between things being in an equivalence class according to some relation and being "in principle, the same" might present difficulty if you've done any basic set theory, but that's just because you lack vision.
[1] https://mathworld.wolfram.com/PrincipleofComputationalEquiva...
"Everything can be understood through mathematics" is usually said by a mathematician.
One extension I'd make from your comment is how rich interdisciplinary work can be, because all the resonances between different fields can come to life and some really wonderful creativity happens.
It's sort of like a car mechanic telling you "SQL query, eh? It must be similar to what happens in an intake manifold." For all I know, there might be Turing-equivalency between databases and the inner workings of internal combustion engines, but you wouldn't consider that to be a useful take.
That's the important question indeed. In particular, classing life as a computation means that it's amenable to general theories of computation. Can we make a given computation--an individual--non-halting? Can we configure a desirable attractor, i.e. remaining "healthy" or "young"? Those are monumentally complex problems, and nobody is going to even try to tackle them while we still believe that life is a mixture of molecules dunked in unknowable divine aether.
Beyond that, the current crop of AI gets closer than anything we have had before to general intelligence, and when you look under the hood, it's literally a symbols-in symbols-out machine. To me, that's evidence that symbol-in symbol-out machines are a pretty general conceptual framework for computation, even if concrete computation is actually implemented in CPUs, GPUs, or membrane-delimited blobs of metabolites.
Think of it like saying water has the goal of flowing down the mountain along the path of least resistance. Of course it doesn't, it's just something that happens. There's no goal.
A shark is a pretty damn optimized bunch of molecules to survive in water, would you not agree?
I suppose this boils down to your definition of "optimize".
Proteins can also be seen as a sequence of symbols: one symbol for each amino acid. But that's beside the point. Computational theory uses Turing machines as a conceptual model. The theories employ some human-imposed conceptual translation to encode what happens in a digital processor or a Lego computer, even if those are not made with a tape and a head. Anybody who actually understands these theories could try to make a rigorous argument for why biological systems are Turing machines, and I give them very high chances of succeeding.
> These proteins are solving physical problems, not expressing symbolic solutions to symbolic problems
This sentence is self-contradictory. If a protein solves a physical problem, and it can only do so because of its particular structure, then its particular structure is an encoding of the solution to the physical problem. How that encoding can be "symbolic" is more of a problem for the beholder (us, humans), but as stated before, using the amino acid sequence gives one such symbolic encoding. Another symbolic encoding could be the local coordinates of each atom of the protein, up to the precision limits allowed by quantum physics.
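As a toy illustration of that first encoding (the peptide and the rounded residue masses below are example values, not taken from the article): once a protein's primary structure is written in the standard one-letter amino acid codes, it can be manipulated like any other symbol sequence.

    # A protein's primary structure as a symbol sequence, using the standard
    # one-letter amino acid codes. The peptide and rounded residue masses (Da)
    # are example values, only to show the encoding is symbolic.
    peptide = "MKTAYIAKQR"
    residue_mass = {"M": 131.0, "K": 128.1, "T": 101.0, "A": 71.0,
                    "Y": 163.1, "I": 113.1, "Q": 128.1, "R": 156.1}
    mass = sum(residue_mass[aa] for aa in peptide) + 18.0  # plus one water for the termini
    print(f"{peptide}: ~{mass:.0f} Da")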
The article correctly states that biological computation is full of randomness, but it also explains that computational theories are well furnished with revolving doors between randomness and determinism (Pseudo-random numbers and Hopfield networks are good examples of conduits in either direction).
> ... whatever.
Please don't use this word to finish an argument where there are actual scientists who care about the subject.
Just keep in mind though that you have to think of cells as very slow, but massively parallel computers.
How does mutation and selection entail that it's not optimization? You're motivating the lack of a goal for a process by describing its composition. That seems like a logical error (a non sequitur) and a categorical error.
For reference
> optimization = the selection of a best element, with regard to some criteria, from some set of available alternatives
What's the selection selecting from, what's evolution evolving towards?
Moreover, you motivate with conservation. Conservation is an optimization criterion.
“A man doesn't think. It's just probabilistic generation.”
Heretic!
> A symbol is a discrete sign that has some sort of symbol table (explicit or not) describing the mapping of the sign to the intended interpretation
Symbols do not have to be discrete signs. You are thinking of inscriptions, not symbols. Symbols are impossible for humans to define. For an analog computer, the physical system of gears / etc symbolically represents the physical problem you are trying to solve. X turns of the gear symbolizes Y physical kilometers.

Edit: On further reflection, I suppose he didn't, if we consider the effort to span Gödel Escher Bach and I Am a Strange Loop.
Isn't that the entire point of making abstractions? Understanding things "as they are" is impossible, so we need simplifications. Of course it should be appreciated that the abstractions are always "wrong".
"A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness."
https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation
At the root of this question is whether life is entirely deterministic. Either position - yes it is, or no it isn't - is unfalsifiable.
And in either case, one must live as if life is not deterministic, or else one's sense of agency and meaning dissolve.
Also, maybe the goal of the computation isn't to generate a deterministic output. Maybe it's just to compress a lot of very random input, in a way that smooths out the noise. In this way life could be essentially random on purpose, because all the varieties of randomness are better at modeling the data (the observable universe) than a classical deterministic function would be.
Anything can be used to implement a Turing machine.
Maybe? It is correct according to the Copenhagen interpretation of quantum mechanics, but there are other interpretations that are deterministic.
Given we have no evidence of the existence of anything effectively computable that is not Turing computable, it's a reasonable hypothesis, with no evidence pointing towards falsifying it, nor any viable theories for what a "level of computational power" that exceeds this hypothetical maximum would look like.
And, yes, if that hypothesis holds, then life is equivalent, to the point of at least being indistinguishable from when observed from the outside, computation.
A lot of people get upset at this, because they want life to be special, and especially human thought. If they want to disprove this, a single example of humans computing a function that is outside the Turing computable would be a very significant blow to this hypothesis, and to the notion of life as a computation. (It wouldn't conclusively falsify it, as to do that you'd also need to disprove that there might be ways to extend computers to compute the set of newly discovered functions that can't be computed by a Turing machine, but it would be a very significant blow.)
As I commented elsewhere...
Human brains are not computers. There is no "memory" separate from the "processor". Your hippocampus is not the tape for a Turing machine. Everything about biology is complex, messy and analogue. The complexity is fractal: every neuron in your brain is different from every other one, there's further variation within individual neurons, and likely differential expression at the protein level.
The symbolic nature of digital computers is our interpretation on top of physical "problems". If we attribute symbols to the proteins encoded by DNA, symbolic computation takes place. If we don't attribute symbols to the voltages in a digital computer, we could equally dismiss them as not being computers.
And we have a history of analogue computers as well, e.g. water-based computation[1][2], to drive home that computers are solving physical problems in the process of producing what we then interpret as symbols.
There is no meaningful distinction.
The question of whether life is a computation hinges largely on whether life can produce outputs that can not be simulated by a Turing complete computer, and that can not be replicated by an artificial computer without some "magic spark" unique to life.
Even in that case, there'd be the question of those outputs were simply the result of some form of computation, just outside the computable set inside our universe, but at least in that case there'd be a reasonable case for saying life isn't a computation.
As it is, we have zero evidence to suggest life exceeds the Turing computable.
[1] https://en.wikipedia.org/wiki/Water_integrator
[2] https://news.stanford.edu/stories/2015/06/computer-water-dro...
If life is not a computation, then neither of those are a given.
But it has other impacts too, such as moral impacts. If life is a computation, then that rules out any version of free will that involves effective agency (a compatibilist conception of free will is still possible, but that does not involve effective agency, merely the illusion of agency), and so blaming people for their actions would be immoral, as they could not at any point have chosen differently, and moral frameworks for punishment would need to center on minimising harm to everyone, including perpetrators. That is a hard pill to swallow for most.
It has philosophical implications as well, in that proof that life is computation would mean the simulation argument becomes more likely to hold.
LLMs run on billions, gigawatts, and all of human knowledge to predict the next word.
Life runs on scraps to predict the next world.
Sometimes it’s hard to believe it’s computation alone.
1. Complexity != computation. How does a weather system compute anything at all, for example? By any standard definition of these words it doesn’t. Since Wolfram never defines his terms rigorously, this statement is prima facie meaningless.
2. Computational complexity != equivalence. He’s talked about implementing the universe in 4 lines of Mathematica code, when clearly Mathematica itself is in the universe and takes more than 4 lines of code to implement. What he (actually his staff) has implemented in 4 lines is a cellular automaton that is Turing equivalent. That’s cool, but it’s not the universe. If you’re not drinking the Kool-Aid, it’s just nonsense.
3. How does any of that make life indistinguishable from computation? All life that I’ve observed seems to be very easily distinguishable from computation, and I would suggest that anyone who finds this confusing should probably get out more.
Turing equivalence applies to all computation. "Computer programs" has nothing to do with it.
> How does a weather system compute anything at all for example? By any standard definition of these words it doesn’t
By every normal definition of these words it does. Any computation with a digital computer is us applying an interpretation onto physical computation in the form of basic physical interactions that carry out operations that we interpret in terms of logic.
And we have computing devices that make this link more explicit, such as e.g. the Soviet "water integrator". Using physical interactions to compute is easy, ranging from the trivial (two pools of water merging is the computational equivalent of addition) to the slightly less trivial classic demonstration of the Pythagorean theorem with three interconnected triangles filled with fluid.
Every physical system carries out computations with every interaction, but most of them are useless to us. But every digital computer can carry out computations that are useless to us too, if we let them run chaotic programs on chaotic data.
> That’s cool but it’s not the universe.
It's not the universe, but that is irrelevant unless you can either disprove Turing equivalence or prove that the universe contains computation that exceeds the Turing computable. If you could, there'd likely be a Nobel prize with your name on it.
> 3. How does any of that make life indistinguishable from computation? All life that I’ve observed seems to be very easily distinguishable from computation, and I would suggest that anyone who finds this confusing should probably get out more.
If life does not exceed the Turing computable, then it can be fully simulated, to the point of giving identical responses to identical stimuli when starting from the same state, and at that point, if there is any distinction at all, it would require observing the internal processes of the entities involved.
Put another way: If life does not exceed the Turing computable, then you don't know whether or not you are simply a simulation, nor do you know whether or not the universe itself is.
On long enough scales - and they're not that long when you're talking about billions of years - we don't even know if the solar system is stable.
Bio-computability has the same issue at smaller scales. There are islands of conceptual stability in a sea of noise, but good luck to you if you think you can compute this sequence of comments on Hacker News given the position of every atom in the original primordial soup.
The universe is not clockwork. The concept of computability is essentially mechanical, and it's essentially limited - not just by conceptual incompleteness theorems, but by the fact that any physical system of computation has physical limits which place hard bounds on precision and persistence.
We have no evidence to suggest that is true. If no individual process in the universe exceeds the Turing computable - and we have no evidence it does, or that anything exceeding the Turing computable can even exist - then the universe itself would be an existence proof that it is computable. Now, we can't be 100% sure, because we'd have to demonstrate that every physical interaction everywhere is individually Turing computable. But we also have nothing that even hints of evidence to the contrary.
Note that it is possible the universe is not computable from within with full precision due to e.g. lack of compressibility.
> On long enough scales - and they're not that long when you're talking about billions of years - we don't even know if the solar system is stable.
That has zero relevance to whether or not it is computable. If it is computable, then any such instability is simply an effect of a computation.
In other words you're committing the logical fallacy of begging the question - your conclusion rests on your premise, as you're trying to argue that the universe is computable by using processes as evidence that can only be uncomputable if the universe as a whole is uncomputable.
> The universe is not clockwork.
That is irrelevant to whether or not it is computable.
> but by the fact that any physical system of computation has physical limits which place hard bounds on precision and persistence.
This is also in general irrelevant to whether or not a system is computable. We can operate symbolically on entities that require arbitrary (including infinite) precision and persistence within various constraints. E.g. we can do math with 1/3 to infinite precision for a whole lot of calculations.
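A mundane example of that kind of symbolic, arbitrary-precision manipulation, sketched in Python with the standard fractions module:

    from fractions import Fraction

    # 1/3 is kept as an exact ratio of integers, so no precision is lost,
    # unlike a fixed-precision floating-point representation.
    x = Fraction(1, 3)
    print(x + x + x == 1)    # True, exactly
    print(0.1 + 0.2 == 0.3)  # False: fixed-precision floats drop information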
Unless you can show specific processes that demonstrably happen with a precision that is impossible to simulate without the computation becoming infinite, this argument doesn't get you anywhere. Note that it would be insufficient to show a process that appears to have infinite precision in a way that would take infinite time to calculate, unless there is demonstrably no way to lazily calculate it to whatever precision you actually try to observe in a finite amount of time, as such a system can be simulated.
Length of time would also not be a problem unless you can show why such a simulation needs to run at full speed to work, rather than impose a subjective time on the inside of the simulation that can vary with computational complexity.
Space complexity is also irrelevant unless you can show limits on the theoretical maximum capacity of an outside simulator.
Now, to the question of whether life is computable: if the universe is computable, then life is too, but if the universe is not, life might still be, and so this is largely a digression from the original point I made.
I don't think ontology is quite that simple. They maybe don't exist in the same way as molecules and atoms do, but abstract concepts have some kind of reality to them.
HUMAN      FACTORY      COMPUTER
-----      -------      --------
Cell       Factory      Computer
Enzyme     Worker       Functions
Ribosome   Assembler    Compiler
Acids      Blue Print   Source Code
The difficulty with this type of analogy is that so many things need these various capabilities that it's not unique to a computer, or a factory, or even a human.

Life is computation not because it's life, but because an inert rock in a very cold place is also computation, as is plasma within the sun.
you can model the universe as physical matter plus energy, with computations to describe what happens over time, or you could do away with the physical model and just assume it's all computation.
(as a side note, don't assume digital anywhere, and don't assume analog either, the math would take the form of what is)
https://publicservicesalliance.org/2025/05/24/what-is-intell...
I happen not to believe in this, personally. It seems to me that the non-dual metaphysical teachings of the East show us that determinism happens within a sphere that is subject to free-will. The realm of phenomenology is a subset of something greater, where nothing is bound by conditions. This is the way things necessarily must be for free will to exist, by the way.
> Symbolic operations, mapping from patterns of 0s and 1s (say) to other patterns are discrete, countable mappings.
What definition of "symbolic" are you using that draws a distinction between these two cases? If it means merely something that symbolises something else (as I would usually use it), then both a position on a line segment and a pattern of voltage levels qualify. If you mean it in the narrow sense of a textual mark, that pattern of voltage levels is just as much not a "symbol" as the position on the line segment.
Shapiro was also the author of one of the two PhD theses that were a major influence to Inductive Logic Programming, a field at the intersection of logic programming and machine learning.
A lot of the kind of "deep work" you mention used to be done in the logic programming and ILP community in times past, before everyone seemingly switched to neural nets and statistical machine learning.
Ironically, the Copenhagen interpretation is the relatively easier one to grasp. Other interpretations, such as Many Worlds, make it much more complicated. Can we really speak of free will if we actually make every decision that we possibly could?
Unfortunately, we won't get any real answers related to consciousness, or specifically the "hard question" and the "even harder question" in our lifetimes, if humanity can ever crack those.
And 10 seconds later, we're still alive.
An astronomically tiny fraction of possible worlds would result in us being alive at all.
IMHO, we have total free will, but we can only be conscious of a world where we're not dead. That eliminates most of them. I don't know what further proof we actually need. If it's true, it will become more and more obvious over time as you outlive your peers, then outlive every other living thing on the planet. If not... well, I would have no way to prove that you didn't live forever (in your timeline). Best of luck to us, lol.
[edit] also, just back to the original topic... if the point of this universe algorithm is to monte carlo stuff, like compare a billion versions of the stock market on each of a billion planets, then each run is randomized and free will is a requirement. No individual timeline is pre-baked because the randomness has to go down to the quantum level to make it not be deterministic. If our universe were deterministic, there would be no reason to run the sim.
[edit 2] this is what I've thought for about 15 years, but I just pulled the above summary completely out of my ass. So you won't know until your brain is conscious in a tank orbiting Saturn 200 years from now, and Farrah Fawcett wakes you up and you go... "That guy from HN was right!"