Announcing Science’s Third Great Transition
The Reasonable Ineffectiveness of Mathematics in the Biological Sciences
In 1960, physicist Eugene Wigner wrote what many consider one of the most profound scientific papers of the 20th century: “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” He revealed how mathematics haunts physics in a most uncanny way.
Wigner marveled at how perfectly mathematics describes physical reality. Newton’s laws, quantum electrodynamics, relativity – all use elegant mathematical formulas that predict physical behavior with extraordinary precision.
The fact that abstract math, developed in the minds of humans, so perfectly describes the physical universe is, as Wigner put it, “a wonderful gift which we neither understand nor deserve.”
Many times in physics, solutions dreamed up by mathematicians decades or centuries before suddenly show themselves to be precisely applicable.
Everyone in science accepts this, to the point where many define science itself as the imperative to reduce everything in the cosmos to equations.
But there’s a twist to the story. Stuart Kauffman, Sy Garte and I just published a new paper in the journal Entropy. It’s called “The Reasonable Ineffectiveness of Mathematics in the Biological Sciences.”
Biology Rewrites the Mathematical Playbook
While physics bows obediently to mathematical formulations, biology stubbornly resists them. For decades, scientists have battled with this perplexing reality. Many deny it outright, suffering from what we call “physics envy” – desperately trying to force biological complexity into neat mathematical boxes.
It doesn’t work. Now there’s an indisputable reason why: In 1931, Kurt Godel proved that mathematics cannot explain mathematics. Recent papers by myself, Stu Kauffman and others extend this principle to biology.
Take evolution. The concept of “fitness” in natural selection is circular. We define fitness by what survives, then explain survival by fitness. There is no mathematical “law of evolution” that predicts how species will adapt and change. Stuart Kauffman proved using mathematical Set Theory that such a law is not merely difficult but impossible.
“There are no equal signs in biology”
Biology doesn’t operate in equilibrium like physics; it’s constantly creating, adapting, and inventing new possibilities that no equation could have encoded.
This isn’t just a practical limitation of our current knowledge. Biology transcends the limits of computation itself. Our thesis is not a matter of debatable scientific data; it is a mathematical proof. Any assumption that the universe is entirely mathematical runs into a mind-bending contradiction that Kurt Godel unearthed in 1931.
David Chalmers famously introduced the Hard Problem of Consciousness in the 1990s. This philosophical puzzle asks how subjective experience (feeling pain, seeing red) emerges from physical brain processes. To date we have no physical explanation for our experience of being.
Biology has its own Hard Problem: How do living cells make choices that cannot be reduced to algorithms?
When flatworm embryos are exposed to barium, a chemical they’ve never encountered in evolutionary history, their heads explode. Then they generate new barium-resistant heads within hours. No algorithm could possibly predict this specific adaptation. Flatworms do not have an “in case of barium” subroutine in their DNA. These worms are exercising agency, making choices from an indefinite set of possibilities.
This means: Biology doesn’t simply obey mathematics; it creates mathematics. Organisms perform induction, which by definition creates new mathematics, as they adapt to their environments.
“We believe humans really are choosing which scientific theories are good or bad, and our dogs really are choosing whether to urinate in the living room or back yard.”
The Power of Heuristics
If we can’t reduce biology to equations, how do we make progress? The answer isn’t more data – it’s asking superior questions and choosing more elegant models.
A heuristic is an educated guess that works well enough in practice, even if it’s not perfect or mathematically precise. A great heuristic is incredibly powerful.
In business strategy, Bruce Henderson’s Growth-Share Matrix divides businesses into just four categories (Stars, Cash Cows, Question Marks, and Dogs), providing a powerful framework that bypasses enormous underlying complexity. I’ve been teaching Henderson’s “Star Principle” to my business clients for a decade. Similarly, physiologist Denis Noble points out that simple metrics like blood pressure, heart rate, height and weight tell a doctor more about her patient than an entire genome analysis.
The most powerful scientific tools are not larger datasets, but ingenious heuristics that demand we wisely choose very small data sets and well-posed questions to solve specific problems. This fact will only prove itself more central as the AI age advances.
The Third Transition in Science
The world is in birth pangs, as everyone knows. AI is transforming the Internet; we are experiencing swings of economic cycles; and we’ve also entered what Stuart Kauffman calls a “third transition” in science.
The first was the Newtonian paradigm with its ‘clockwork universe’. The second came with quantum mechanics and its probabilistic nature. Now, biology forces us beyond both into a new framework that can accommodate life’s creative freedom.
This transition doesn’t diminish the achievements of mathematical biology. Rather, it places them in a wider context, reminding us that nature’s creativity transcends any single formalism.
By recognizing the reasonable ineffectiveness of mathematics in biology, we open ourselves to a more humble, wonder-filled science. We acknowledge that the world is not merely a theorem to be solved but a creative process unfolding in ways that cannot be fully prestated or predicted.
This isn’t a failure of science but an invitation to expand its horizons – to develop new concepts and frameworks worthy of life’s endless forms most beautiful.
The world is not a theorem. And that is why science will forever perplex and amaze us with its wonders.
Read our new paper “The Reasonable Ineffectiveness of Mathematics in the Biological Sciences.”
~
Watch the podcast conversation with Sy Garte that gave birth to this paper.
~
Download The First 3 Chapters of Evolution 2.0 For Free, Here – https://evo2.org/evolution/
Where Did Life And The Genetic Code Come From? Can The Answer Build Superior AI? The #1 Mystery In Science Now Has A $10 Million Prize. Learn More About It, Here – https://www.herox.com/evolution2.0
Where are Mendel’s laws?
Are Mendel’s laws absolutely 100% true?
Do some searching on that. This might be a starting point: https://www.perplexity.ai/search/are-mendel-s-laws-absolutely-p-B2c4UydkSgSb.qfzRiwdVA
https://grok.com/share/bGVnYWN5_41dfe9a2-659e-43d3-8c6c-ddcad7a9b2a6
Conclusion
In summary, while biology lacks strict laws like physics, it has fundamental principles such as cell theory, central dogma, and evolution. These principles differ from physical laws in their generality, context-dependency, and historical contingency, requiring a systems-level approach to capture emergent properties. By analogy with Feynman’s work, biological laws reflect the complexity and beauty of life, offering insights into how organisms function within the constraints of physical laws, shaped by evolutionary processes.
We do not know the fundamental principles of cell theory. If we did, we could build a cell.
The central dogma is wrong. Denis Noble has made this very clear in a half dozen papers.
Nobody knows what really drives evolution because nobody knows how to build anything that evolves the way life does.
This is a respectable answer considering it’s an AI, and I credit it for acknowledging we need a systems-level approach. But it’s still word salad.
We cannot build a galaxy, but we know the structure of galaxies (in general) and the laws of their motion.
We know quite accurately the structure of cells, biological molecules, single-celled organisms, and even viruses such as bacteriophages.
So it turns out that the laws of motion are not enough?
They are probably non-deterministic.
And organisms, starting from a certain level of complexity, are engaged in goal-setting.
What you said resonates with Karl Friston’s view on this. Small FEP systems are flat. They have generative models, but these are not hierarchical. As a function of the system’s size, among other things, the system develops internal information-theoretic structure, which means at some point the system is capable of planning – evaluating sequences of possible actions – and in order to evaluate and choose, the system must have a value matrix. Joscha Bach calls that – it matters to agents what’s going on (it doesn’t to a thermostat).
Teleology with goals happens when the system must remain as a system – this is the core goal (to resist dissipation and minimize free energy) – and from there other goals emerge as organisms become more complex.
A hypothesis on the progression of increasing complexity and organization (necessitating “code” in some sense) is expressed in Kevin Mitchell’s wonderful book – from hydrothermal vents at the bottom of the ocean (geochemistry to biochemistry) to the development of lipid membranes, leading to cells, then to multi-cellular organisms.
Mitchell, K. J. (2023). Free agents: how evolution gave us free will.
All of these are highly complex topics. However, there are certainly theoretical frameworks in which consciousness evolves at some point because it gives the system an adaptive advantage; similarly, complexity emerges, intelligence gradually increases, and agency emerges.
In Levin’s TAME one can see the expansion of the light cone. When the light cone is large enough, the organism has nested and embedded structure. All of this can develop from simple beginnings, like an ant colony from individual ants.
I challenge anyone to name any five equations in biology that are always absolutely true in the same fashion that gravity is absolutely true.
Gravity is not “absolutely true.” Gravitational law is no longer operational at subnuclear distances and in the black hole singularity. All laws in physics are models. All of them have limitations and are formulated within specific formal systems.
Alexey,
Well you are right… gravity is not absolutely unequivocally true. It’s almost entirely true for all practical purposes, and is still just a model.
I believe your observation does accord with the overall thesis of our paper. We are saying that while mathematical models work very very well in some parts of physics, it is impossible for them to be 100% accurate in biology.
Just to keep in mind: “Mathematics has been used in biology as early as the 13th century, when Fibonacci used the famous Fibonacci series to describe a growing population of rabbits.”
https://en.wikipedia.org/wiki/Mathematical_and_theoretical_biology
I find this argument to be a bit overgeneralized and simplified. What branch of mathematics, used how exactly and in which part of biology? On the surface, the argument of a generalized “ineffectiveness” of mathematics doesn’t hold water if one considers Karl Friston’s work. We use MRI machines in biological sciences and we would not have that without Friston’s statistical parametric mapping, which is math. His Free Energy Principle-based models and formulations are used very widely by biologists and psychologists – with specific tangible experiments and simulations that advance science forward. Helmholtz used math. Walter Freeman III used math, William Sulis uses math. All of these scientists contributed to the significant progress in biological sciences. The chaos theory that you like is a branch of mathematics – non-linear dynamical systems theory – and it is now used in neurology and cardiology.
The argument that math alone is insufficient is a less strongly worded statement vs the supposed “ineffectiveness” of all math.
Who measures the “effectiveness” of fundamental science and how exactly? Is Einstein’s general relativity “effective?” Is the Large Hadron Collider effective? Is geometry effective? By what measure? Is it effective to discuss Euclid’s axioms vs. Lobachevsky’s axioms?
There are examples of poor methodology in modeling – yes – in every science.
Mathematics (used appropriately) is an essential component of modeling in biology. I think decomposing the biological models into components and saying that component A is inefficient while component B is very efficient is not doing it justice. Collectively, the entire model can be evaluated by the accuracy of its predictions. And with this criterion specific computational models in biology and psychiatry can be evaluated based on the accuracy of their predictions and not based on their use of physics vs. computer science to build these models.
Alexey,
The nuance of this argument is made more clear in my paper “Biology Transcends the Limits of Computation” and if you very closely read the “Reasonable Ineffectiveness” paper I think we do explain exactly what we mean.
We are not throwing mathematics under the bus.
But we are saying that because biology makes choices, and because choices are not computable, a completely accurate “down to the nth decimal place” mathematical model of biology is impossible. Not just in practice, but in principle. This is a matter of mathematical proof, not scientific and evidential debate. The laws of logic make this so. They are natural consequences of Turing’s halting problem and Godel’s incompleteness theorem.
This does not invalidate Friston’s work, or any other biologist whose models produce reliable results. But there can in principle be no such thing as a successful “Laplacian” model of biology simply because biology does inductive reasoning and induction by definition is not computation.
I encourage you to read both of these papers very carefully because we do nuance this question very carefully.
Thank you, Perry. I’ll read the second paper.
My main issue with this one is a generalization of “mathematics.”
Essentially, my comment about this particular paper is that it seems to be of the “straw man” kind. It seems the co-authors of this “ineffectiveness” paper chose a very narrow and stereotyped view of “mathematics” and proceeded to label it as “ineffective.”
Mathematics is much broader than Laplace. Not just math, but physics moved on from determinism with Ludwig Boltzmann in 1877. By the 1920s, when quantum mechanics theory was born, strict determinism had become a historical artifact.
Why equate contemporary mathematics with Laplace?
When you say “logic” as in “The laws of logic make it so” – you seem to assume Aristotelian logic, where A and not-A can’t co-exist. Did I get that right?
This is not the only logic in mathematics and physics, as you know. In fact, you would not get very far with Aristotelian logic in quantum mechanics. Quantum logic or Schrödinger logic also exist in mathematics. Euclidean geometry is not the only geometry – we moved on from Euclid with Lobachevsky and Riemann. We would not have general relativity without them. Classical causality is not the only causality in mathematics – we have Granger causality.
So I appreciate the advice to read another paper, but it’s perfectly fine to comment on this one in its own right, since it is published and passed peer review. In my view, this particular paper would be much stronger had it been more specific. If you’d like to critique the applicability of Aristotelian logic and strict determinism to biology – you have an argument. This has been argued before, so it’s not new, but it is at least specific. If you must talk about all of mathematics, then I find the paper to be a straw man-type argument.
To bring this home, here is a paper written by biologists, whom you like and cite:
Pio-Lopez, L., Hartl, B., & Levin, M. (2025). Aging as a Loss of Goal-Directedness: An Evolutionary Simulation and Analysis Unifying Regeneration with Anatomical Rejuvenation.
Lots of math. Very efficient. No Aristotle, no determinism, no classical causality, but Granger-like causality. But there is much math, which requires attention and commands respect. There would be no code to complete the in-silico experiments without this math.
Transfer Entropy and Active Information Storage and Spatial Entropy in this paper are straight-up math. The information theory you like is all math. And there are many, many more papers like that in computational biology.
In our paper we enumerate many places in biology where math is quite helpful – see our comments about protein folding. As you say, information theory is math.
The problem is much deeper than this. Godel’s incompleteness theorem applies to all mathematical systems of any appreciable power. It’s a universal.
You ask:
When you say “logic” as in “The laws of logic make it so” – you seem to assume Aristotelian logic, where A and not-A can’t co-exist. Did I get that right?
Any formal mathematical system can be either consistent or complete. Never both. A and not-A cannot co-exist.
Godel never stated that his theorem is universal. Here are specific exceptions:
https://plato.stanford.edu/entries/goedel-incompleteness/
The natural first-order theory of arithmetic of real numbers (with both addition and multiplication), the so-called theory of real closed fields (RCF), is both complete and decidable, as was shown by Tarski (1948); he also demonstrated that the first-order theory of Euclidean geometry is complete and decidable. Thus, one should keep in mind that there are some non-trivial and interesting theories to which Gödel’s theorems do not apply.
Neither of these systems can model general computation or arithmetic with natural numbers (the most basic set of numbers used for counting).
So I agree they are non-trivial, yet they have strict limitations.
Can we take a step back and consider the original context of why we are discussing this?
This came up because my paper with Stu and Sy doesn’t merely assert, but proves that because mathematics and computation are incomplete, any science based on mathematics is necessarily incomplete as well; and furthermore, that because biology has agency (free will, scientists are really choosing theories and your dog really is choosing whether to urinate in the living room or back yard) it is in principle impossible to precisely model mathematically – by definition.
The RCF and Euclidean examples reinforce my thesis, rather than undercutting it.
This conversation feels very “tit for tat”. You are asking very good questions and I feel that as a good scientist I ought to respond to objections and defend my assertions insofar as they are defensible. I feel I am doing this. What I don’t understand is why you seem so uncomfortable with this. I’m not attacking you.
Many of the standard assumptions in molecular biology are flawed and the evolution textbooks are 30 years out of date. Furthermore the whole field is predicated on the often unstated assumptions of reductionism, which is proven false both by quantum mechanics and by Turing’s halting problem and Godel’s incompleteness theorems. Which is further reinforced by the work of Friston, Levin and other people we both admire.
By the way I find this to be true in every field. We haven’t even talked about cancer, which I’ve been deeply involved with now for 5 years. Same thing there. We’re using the exact same drugs to treat leukemia now as in 1977. Which is appalling.
A revolution in science is long overdue. Numerous authors have proven this. These are not scientific hypotheses, they are inviolable truths. They help to explain why so many areas of biology are making so little progress – basically any branch of biology that deals with the agency of the organism. That means cancer, immunology, origin of life, and evolutionary biology (which in public has been stuck in the 1970s for decades). It explains why the Human Genome Project has not yet delivered even 25% of what it originally promised. Remember when they told us they would be curing all kinds of diseases once we had our genomes?
23andMe just went bankrupt.
So is all this bad news, or is it good news?
I think it’s good news because there’s a way forward.
Okay, Perry, makes sense. This is your space. The discussion did deviate from the original topic. You don’t need to defend anything. We have a difference of opinions on some topics and agreement on other topics.
I do think that there are also various interests/angles involved here, some are commercial (to patent a highly efficient new technology) and some are scientific. At times these two interests can be in conflict with each other. I think it’s useful to separate the interests clearly and perhaps scientific debates are best on the pages of peer-reviewed journals. And of course, commercial competitions are fine, incentives to develop new technologies are fine. It is however, a different ball game.
What I am saying is that a scientific debate on the question “can physics produce code” is suboptimal when it must be constrained by the rules you have put forward, which are designed for the company offering the reward to be able to patent a marketable technology – there is a commercial incentive to constrain the rules of a scientific question.
This was a scientific and philosophical debate 5+ years before it was a prize or commercial endeavor. I believe the question of how nature produces classical Shannon information is one of the most fundamental questions in science that can be precisely defined. So I have defined this question very very carefully. More precisely in fact than anyone else I’ve seen.
I do believe the quantum information question is extremely important as well, and per my paper “The role of quantum mechanics in cognition-based evolution” (2023) quantum information has to be cracked in order to solve the classical problem. So I think everything you’ve posted about Markov blankets and Karl Friston et al is highly relevant to this. I intend to review those things in the near future.
I have been extremely rigorous in my definition because anything less stringent than what I’ve written in the prize specification does not solve the problem. Frankly I think my definition is easy: “Produce a classical Shannon information system with encoder, message channel and decoder that handles at least 5 bits of information with fidelity and with the ability to objectively determine whether the message has been reproduced correctly or not.” To a computer engineer who is allowed to use his own intelligence, this is absolutely trivial. He can do it with a few logic gates.
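To make concrete how trivial this is for an engineer, here is a minimal sketch in Python of the kind of system the specification describes – a 5-bit alphabet with an explicit encoding table, a symbol channel, a decoder, and a way to objectively flag corrupted messages. The particular alphabet and parity scheme are illustrative choices, not part of the prize text.

```python
# Minimal sketch (illustrative only) of a Shannon communication system:
# encoder -> channel -> decoder, with a 5-bit alphabet (32 symbols) and a
# parity bit so the receiver can objectively detect corrupted codewords.

SYMBOLS = [chr(ord("A") + i) for i in range(26)] + list("012345")  # 32 symbols
ENCODE = {s: format(i, "05b") for i, s in enumerate(SYMBOLS)}      # encoding table
DECODE = {bits: s for s, bits in ENCODE.items()}                   # decoding table

def parity(bits):
    """Even-parity bit over a string of '0'/'1' characters."""
    return str(bits.count("1") % 2)

def encoder(message):
    """Map each symbol to its 5-bit codeword plus a parity bit."""
    return [ENCODE[s] + parity(ENCODE[s]) for s in message]

def channel(codewords):
    """A noiseless channel; a real test would inject noise here."""
    return list(codewords)

def decoder(codewords):
    """Recover symbols, objectively rejecting any corrupted codeword."""
    out = []
    for cw in codewords:
        bits, p = cw[:5], cw[5]
        if parity(bits) != p or bits not in DECODE:
            raise ValueError(f"corrupted codeword: {cw}")
        out.append(DECODE[bits])
    return "".join(out)

assert decoder(channel(encoder("HELLO5"))) == "HELLO5"
```

With human intelligence allowed, the tables take a few lines; the prize asks for this same encoder/channel/decoder structure to arise from chemistry unaided.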
But to an origin of life chemist, so far it’s proven impossible.
Again, anything less does not satisfy Claude Shannon’s definition of communication.
The money is just skin in the game. And since we are putting money on the line, we can’t put out prize money for something that’s not useful in an engineering sense. After all, most valid scientific theories are commercially valuable. (This is why the government spends billions on science grants. Because it eventually pays off.) This is no exception.
I have never had any origin of life researcher, or other scientist who understands the question tell me that I have incorrectly defined the problem.
I like Steve Benner’s saying “If the airplane crashes your theory is wrong.” Because I have reframed origin of information as an engineering problem, it is more clear what science has yet to discover.
Again I think everything you’ve said about quantum information has relevance here. And there is an open question of how this connects to the classical world in a living cell.
I haven’t approved your posts yet because I intend to explore the links and give you a proper response.
Perry, in your paper
Marshall, P. (2021). Biology transcends the limits of computation. Progress in Biophysics and Molecular Biology, 165, 88-101.
You make the following claim: “Despite this prevailing view, there are no examples in the literature to show that the laws of physics and chemistry can produce codes, or that codes produce cognition.”
If we take one part of that statement – “there are no examples in the literature to show that the laws of physics and chemistry can produce codes” – then here are two examples, from quantum information theory / Chris Fields’s work and then from the FEP, of physics producing code. In fact, Fields takes communication through the holographic screen as primary and then derives time and space. This framework can produce communication of any complexity, as it scales with the number of qubits in the screen, and it contains an encoder and a decoder.
https://www.youtube.com/watch?v=RpOrRw4EhTo
https://chrisfieldsresearch.com/
An analogous, if slightly different formulation in FEP is a Markov blanket and a deep generative model – where the hierarchy of internal states is seen as indeed encoding inferences about external states. The communications are described as Markov chains comprising the blanket. The encoding happens by the sensory parts of the blanket, and the decoding happens by the influence of the sensory parts of the blanket on the internal states; conversely, communication back happens by the internal states encoding onto active states, which then decode this information through action onto the external environment.
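To make that encode/decode flow concrete, here is a toy sketch in Python – a minimal illustration only, not Friston’s actual formalism – in which the internal state never sees the external state directly; everything is mediated by the blanket’s sensory state (encoding) and active state (decoding back onto the world).

```python
import random

# Toy Markov blanket (minimal sketch, not Friston's formalism): internal
# states infer a hidden external state entirely through blanket states.

random.seed(0)
NOISE = 0.1        # probability a sensory reading flips
external = 1       # hidden external state to be inferred
belief = 0.5       # internal state: P(external == 1), starting ignorant

def sense(x):
    """Sensory blanket state: a noisy encoding of the external state."""
    return x ^ 1 if random.random() < NOISE else x

def update(belief, s):
    """Internal state decodes the sensory state via Bayes' rule."""
    like1 = 1 - NOISE if s == 1 else NOISE      # P(s | external = 1)
    like0 = NOISE if s == 1 else 1 - NOISE      # P(s | external = 0)
    return like1 * belief / (like1 * belief + like0 * (1 - belief))

for _ in range(20):
    belief = update(belief, sense(external))    # external -> sensory -> internal

action = 1 if belief > 0.5 else 0               # internal -> active -> world
print(f"belief that external == 1: {belief:.3f}; action: {action}")
```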
Here is a paper where you can see pure physics and nothing else producing very sophisticated code. Complexity and the code emerge here from a primordial soup (an ensemble of random dynamical systems). As a rigorous paper in physics, it makes some assumptions (e.g. ergodicity). Subsequent papers show similar results without assuming ergodicity.
Friston, K. (2013). Life as we know it. Journal of the Royal Society Interface, 10(86), 20130475.
This paper contains proof in math and in silico. It actually inspired prominent neuroscientist and neuropsychologist Mark Solms to create a model of consciousness based on FEP, so it has influence in biology and psychology.
Alexey,
I have deep respect for Karl Friston’s work, not because I’ve studied it closely, but because people I know, like and respect (Adam Goldstein, Mike Levin and Chris Fields) admire his work.
That said, what Friston is doing here does not solve the information problem in biology as I have defined it. (My definition is, BTW, essentially the same as how people like Walker and Davies and Wiener and Shannon have defined it; my definition is more precise than the others because of its rigorous specification, but still is fairly conventional):
Digital vs. Analog: Friston’s model shows statistical dependencies between internal and external states mediated by a Markov blanket, but it’s not clearly a digital encoding-decoding system as defined in the prize.
Concrete Encoding/Decoding Tables: The prize requires specific examples of encoding and decoding tables. Friston shows mathematical relationships but doesn’t produce explicit encoding/decoding tables that could be filled out as requested.
Symbolic Alphabet: While Friston’s model shows information transfer, it doesn’t clearly establish a finite symbolic alphabet with discrete encoding and decoding operations.
Implementation Gap: Friston provides a mathematical framework and simulations, but the prize requires a concrete natural example or laboratory demonstration of encoding/decoding.
Since I announced the prize at ASU’s Beyond Center in 2017, no one has come forward with counterevidence to my insistence that there are no examples in the literature to show that the laws of physics and chemistry can produce codes, or that codes produce cognition.
Perry, I didn’t suggest Chris’s work or Karl’s work as candidates to get the prize. I used these as illustrations of physics producing code, in response to your claim:
“Despite this prevailing view, there are no examples in the literature to show that the laws of physics and chemistry can produce codes, or that codes produce cognition.”
If you and I differ on the interpretation of their work or semantics, or definitions, it’s okay by me. For me, these examples are clearly examples of complex communications, which are encoded and then decoded, either on a holographic screen or on a system of nested and embedded Markov blankets.
The important part of your core argument, I think, is that you seem to require a thinker, a designer who, if I understood your videos correctly, produced the code.
The details of the “Life as we know it” paper by Friston and Chris’s course show code and complex communications emerging from physics – quantum information theory with Chris, and statistical mechanics plus other elements in Friston’s work.
Again, I am okay if we disagree.
I need to say though that your definitions limit the “code” to only specific kinds of computations. I don’t think your definitions cover qubits or imprecise computing and inferential/Bayesian computing.
These “alphabet” definitions are somewhat deterministic in the sense that the alphabet is fixed – I don’t think it is for living systems that compute, or for quantum systems.
If you check Chris’s course, then you’ll see his reference frames are analogous in principle to your alphabets, but again quantum mechanics or quantum information theory will not work like an alphabet. The English alphabet, for example, is a discrete and deterministic system.
You say
The important part of your core argument, I think, is that you seem to require a thinker, a designer who, if I understood your videos correctly, produced the code.
I believe life requires a thinker in a “wholesale” sense, but not in a “retail” sense the way a creationist or ID person would.
I believe the cosmos is divinely ordered. That doesn’t mean “God is a programmer and put his finger in the soup and rigged the genetic code.” I think it means something more like: the universe is imbued with potential for cognition down to the subatomic level and has the freedom to develop as it wants. I use “wants” deliberately. It is intrinsically teleological. This could mean qubits or imprecise computing and inferential/Bayesian computing.
It’s also consonant with the anthropic principle and the fine tuning of the constants of the universe. Most of the early titans of science prior to the 19th century shared my conviction of divine order. I think some implicit version of this is necessary to practice science, even though explicitly it is forbidden.
What the prize specification is highlighting is that the design problem in biology and the information problem in biology are really the same problem; and that the origin of life, as Bill Miller likes to say, is co-terminous with the origin of cognition.
Part of the OOL problem is: At what point and by what mechanism do we get simple classical discrete symbols like we see in the genetic code?
What machinery connects the micro levels of quantum mechanics to this more macro level of natural genetic engineering?
In modern science there is still a gap between physico-chemical reactions and communication, codes, Turing Machines, computation and all the rest. There are clues everywhere that might connect those worlds, like Mike and Adam’s sorting algorithms, Friston’s free energy, Fields’ treatment of Markov blankets.
Perry, this is where we disagree and this is entirely okay.
When you say:
“I believe life requires a thinker in a “wholesale” sense, but not in a “retail” sense the way a creationist or ID person would.”
I hear you. This is indeed a belief. To cite Yuval Harari, the scientific revolution started from agnosticism, in stark contrast with religion, which is gnostic. Religion knows what is true and not true – it is past-oriented, and the source of truth is documented in sacred texts, while questioning of anything written there is seen as “lack of faith.”
Agnosticism says: we don’t know, and what’s written in the sacred text is not enough. We really don’t know. We then form hypotheses in a specific way and we test them with data.
Belief is just that – a belief. Repeated many times over generations and vast distances, it becomes a meme in Dawkins’s sense and is then perceived as unquestioned fact, while all it is, essentially, is a story. Coherent, typically short, not overly complex, meaningful, but a story. Stories are powerful and important, but a coherent story is nothing like a falsification of a falsifiable hypothesis. By virtue of being human, we are prone to the Barnum/Forer effect and to the Narrative Fallacy:
https://www.psychologytoday.com/us/blog/embrace-the-unknown/202312/narrative-fallacy-in-clinical-psychology-and-psychiatry
Many stories are told all over, from pharma to politics to marketing and religion, which are coherent and meaningful – which doesn’t make them accurate.
A falsifiable hypothesis is “water boils under normal pressure at 100 degrees Celsius.” One experiment can falsify it. It is also highly specific.
When you say “there are no examples in the literature to show that the laws of physics and chemistry can produce codes” – this assumes you have read all of the literature on physics. If the statement above is firm as a conviction, then one might be inclined to pick the data and select only the data in support of one’s belief (and disregard other data) – which then becomes a self-fulfilling prophecy. (There is an interesting FEP perspective on persuasion and stories like that.)
When I cited two frameworks which show, from pure physics, how complex communications and code emerge from very simple beginnings, you suggested that they didn’t fit your criteria for code.
You may be inclined to consider these frameworks if you are flexible in your position and don’t declare that “all of physics can’t produce code” – that is a preconceived notion.
The frameworks I cited are highly complex; it took me years to get to some level of understanding of Karl’s work, and I admit I don’t 100% understand Chris’s course entirely, as I am far less fluent in quantum information theory. You can also ask them, if you’d like, whether it is in principle possible to show how, from basic axioms and laws, you get complex communications and code.
You can also consider Kevin Mitchell’s theory – he is a geneticist and a neuroscientist, he lays it out in minute details in his book – “Free Agents. How evolution gave us free will” – from basic non-organic chemistry to multicellular agents.
So here are three prominent scientists: Chris and Karl are indeed physicists (among other qualifications), and Mitchell is a neuroscientist/geneticist.
They do not talk on behalf of all physics, all biology, or all of mathematics. They show in specific work – reproducible, mathematically proven, and simulated in silico – how you get basic intelligence from random dynamical systems.
To summarise, I’d say that you present an opinion on the matter of “can physics produce code” and alternative opinions exist.
Debates are healthy and normal in science. You may have seen the debate between Mitchell and Sapolsky on Free Will, between Mark Solms and Lisa Barrett, etc. However, it is customary to formulate a question in a form that can be falsified, e.g. “does water boil at 100 degrees Celsius under normal pressure?” For example, in the Solms-Barrett debate, Mark asked Lisa if babies feel pain – specific enough.
So I most respectfully disagree on the subject of complex communications, intelligence, agency, consciousness and, as a small part of that, code – emerging from simpler forms and “physics”. I have data to believe this can indeed emerge – without any intelligent designers. Specific example – the “Life as we know it” paper by Karl Friston.
Then the question is – can we agree that there are two competing stories here and there is data in support of each alternative hypothesis? That it might not be warranted to speak on behalf of all physics, math and biology and to present one of these stories as the “state of science?”
Thank you, Perry. Again, if we disagree – it is fine, I don’t wish to convince you or be convinced and I enjoyed exchanging opinions with you.
I have written a specification based on a universally accepted definition of communication (Claude Shannon 1948) and offered $10 million USD (with all the hassles that entails – lawyers, securities laws, corporations, bank accounts, investors, accountants). I wrote it from a standpoint that it is solvable and will pay if someone solves it.
I believe that if/when this is solved it is bigger than Google or ChatGPT and rivals relativity and the discovery of genetic code in its importance. It creates new trillion dollar economies and breakthroughs in medicine.
My approach is as empirical as it gets.
Nobody has even come close to winning the money. There is no empirical data in support of the claim that chemicals unaided by intelligence can produce a 5 bit communication system as defined by Claude Shannon.
There are only theories and hypotheses, and bits and pieces of related data that indicate that this may nevertheless be solvable. For example, there is evidence that non-living matter can still exhibit very low levels of intelligence, which I do accept.
It is a matter of faith / hope / optimism / ambition / outlook / perspective / personal narrative of any particular individual as to how all this is interpreted, and whether this can be solved.
There is no such thing as a scientific model that does not invoke what is for all practical purposes religious (metaphysical, philosophical) assumptions.
In my book “Evolution 2.0” page 215 I list 10 Presuppositions of Science:
1. The existence of a theory-independent, external world
2. The orderly nature of the external world
3. The knowability of the external world
4. The existence of truth
5. The laws of logic
6. The reliability of our cognitive and sensory faculties to serve as truth gatherers and as a source of justified true beliefs in our intellectual environment
7. The adequacy of language to describe the world
8. The existence of values used in science (e.g., “Test theories fairly and report test results honestly”)
9. The uniformity of nature and induction
10. The existence of numbers
None of these are scientifically testable quantities. They are philosophical and metaphysical assumptions. Without them, science cannot be practiced. In this section I am citing the book: Moreland, J. P., & Craig, W. L. (2003). Philosophical Foundations for a Christian Worldview. Downers Grove, IL: IVP Academic.
There are also beliefs in science such as “the next layer of order I discover in science will be beautiful and elegant”. Another one might be “nature is nothing but blind pitiless indifference.” Both of those statements are likewise metaphysical beliefs.
Science can seldom absolutely prove anything the way mathematics can. Science can only rely on inference to the best explanation. Inference always requires preferences. It is subjective.
I agree there is data supporting the belief that the information problem in biology CAN be solved. There are also credible arguments that it can’t (the fact that OOL is one of the least successful or productive fields in science and technology to date, for example). I side with the former.
Alexey,
I just want to be very clear in saying I mean no disrespect to Friston’s work. I think his model may be a big key to solving these problems, and I believe these problems are solvable. In the article I wrote discussing this with Mike Levin at IAI – https://iai.tv/articles/patterns-cant-explain-lifes-complexity-auid-2922 – I acknowledge that there is an “analog,” shades-of-gray area between chemicals and codes that deserves very close exploration.
Joana Xavier and I have discussed this with some origin of life researchers. For example, how many bits does it take to legitimately have a code? One? Two? Eight? In the EV2 prize we set the number at 5. That allows for 32 discrete states. That’s half the states of the current genetic code (64), which has one bit more. It seems that with much fewer than that, your system would be too simple to do much of anything interesting.
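As a quick worked check of that arithmetic (a toy calculation, nothing more):

```python
import math

# 5-bit prize threshold vs. the 64-codon genetic code (figures from above).
prize_states = 2 ** 5                                 # 32 discrete states
codons = 64                                           # current genetic code
print(math.log2(codons) - math.log2(prize_states))    # -> 1.0 bit more
```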
What I am pushing against is a very common intonation that one gets from Origin of Life literature that makes it seem like this stuff is mostly solved. It’s not. It’s mostly unsolved and we’re barely into the foothills of the real exploration.
Just to clarify your 5 bits, Perry. You cite and like Information Theory. Would you accept Shannon’s definition of a bit, which is a reduction of uncertainty by a factor of 2?
https://www.youtube.com/watch?v=ErfnhcEV1O8
If so, then your five-bit (of useful information!) code would result in the recipient reducing their uncertainty by a factor of 32? Would this work for you? You see, this takes us from a discrete and deterministic framing of the question to a probabilistic one – which is Shannon. And this applies to biology front and center (we can ask Mike): you always have noise and chaotic phenomena there; no exchanges there are as pure as a telegraph’s, with discrete 0s and 1s deterministically interpreted.
If you frame the question that way – can physics produce systems capable of reducing their uncertainty by a factor of 32 – then the answer is YES, this has been done – by Fields and Friston and their colleagues.
You could also consider that genetic processing contains probabilistic computations and not deterministic ones (e.g. the latest paper on aging by Pio-Lopez and Levin; Mitchell’s book is all about it – “the future is not written”). You can also use qubits and not measure quantum information theory with regular bits, which is suboptimal. You will have Shannon’s measures in each of these frameworks – that would be a fair measure, I think; otherwise we are trying to measure stochastic and chaotic phenomena with discrete, deterministic mathematics, and then we’ll miss the phenomenon. So if Laplace’s demon is in the past, maybe he should be in the past in how this question is formed as well?
Is the question “can physics produce systems capable of reducing their uncertainty by a factor of 32?”
Or is the question
“How do you get an encoder, decoder, encoding table, decoding table, and symbolic relationships to emerge from physics and chemistry, unaided by human intelligence?”
For the origin of the genetic code, it is the latter.
Furthermore we have to also explain how the system instantiates error correction. That has to be there from the very beginning or the system will not reproduce properly. Walker and Davies have written about this.
You say: “no exchanges there are as pure as a telegraph’s, with discrete 0s and 1s deterministically interpreted.” That is incorrect. The default machinery for replication has one error in 10^4, and after error correction is applied by the machinery of the cell, the error rate is 1 in 10^9. (Shapiro, Noble.)
And we are still left with a further question which is: How does the system know how to evolve? If you read Evolution 2.0 and the extensive literature that it cites, or Levin’s subsequent work, evolution is not driven by random copying errors and selection. It’s driven by cognition and natural genetic engineering.
Cognition appears to be co-terminous with the origin of life (William Miller).
The former definition is only a small subset of this problem. I agree that it is relevant and helpful.
There is no escaping the question of how some of the most fundamental mechanisms in biology came to be and what principles caused them to occur. We do not yet understand those principles.
Perry, as the creator of the company, the inventor of the idea of the prize and its primary funder, you have the right to ask the question any which way you wish and certainly to insist on asking it this way specifically.
If Shannon’s definition of a bit doesn’t work for you and you prefer to define the problem your way, that’s okay. I thought I’d ask, since you refer to Shannon.
On your question: “How do you get an encoder, decoder, encoding table, decoding table, and symbolic relationships to emerge from physics and chemistry, unaided by human intelligence?”
If you want, you can go through this body of work; it’s complex and can’t be dismissed in a day, it takes time to digest. As H.L. Mencken said – “for every complex problem there is an answer that is clear, simple, and wrong.”
https://www.youtube.com/watch?v=RpOrRw4EhTo
You can also run this course and Karl Friston’s “Life as we know it” paper by the judges – if you want, sure, no need to award the prize, but for the sake of science advancement, it wouldn’t hurt to ask?
_
If we abstract from the prize and the money for a moment, and entertain the question – “can a system emerge from physics that can minimize uncertainty by a factor of 32” – then, using Shannon’s definition of a bit, you’ve got something to look at, purely as a matter of science and not prize winning.
A simple reframing of the question in a probabilistic way gives you an interesting result. Sure, no prize, fine. Something to hold on to. Good enough for some purposes, I hope 🙂
_
It matters how the questions are asked. Say, if I ask (to cite Mark Solms) – how does thunder produce lightning? Then no one will ever provide a correct answer, because thunder doesn’t produce lightning.
There are debates along these lines on the hard problem of consciousness, where Solms and Friston suggest that David Chalmers posed a question in a way that cannot be answered – but another question does move the science forward.
Solms, M., & Friston, K. (2018). How and why consciousness arises: some considerations from physics and physiology. Journal of Consciousness Studies, 25(5-6), 202-238.
Anil Seth said the same thing about the hard problem of consciousness in his book:
Seth, A. (2021). Being you: A new science of consciousness. Penguin.
__
Thank you again for the discussion and for your patience. I think we heard each other, exchanged opinions, agreed on some things (gravity), disagreed on other things, all of which is good fun.
Thank you
Shannon’s definition of a bit is not sufficient. You need a Shannon system with an encoder, channel that transmits symbols, and a decoder. Otherwise the bits have no context or meaning.
Perry, you replied the following:
“You say: “no exchanges there are as pure as a telegraph’s, with discrete 0s and 1s deterministically interpreted.” That is incorrect. The default machinery for replication has one error in 10^4, and after error correction is applied by the machinery of the cell, the error rate is 1 in 10^9. (Shapiro, Noble.)”
Let me be specific then. To cite Michael Levin – Take a human genome and place it in a different context – in an anthrobot made from epithelial cells from a human donor.
50% of the transcriptome in an anthrobot is different from how it is in a human body. Context-dependency is a feature of non-linear dynamical systems. Strict determinism (with one variation in 10^9) applies well to linear systems. Replication is not sufficient in and of itself; the expression matters a great deal at the end of the day, and if it is that context-dependent, then we have much context-dependent variability.
Agreed. That said, without high fidelity copying the system doesn’t work.
I read the peer-reviewed paper
Garte, S., Marshall, P., & Kauffman, S. (2025). The Reasonable Ineffectiveness of Mathematics in the Biological Sciences. Entropy, 27(3), 280.
1) Ineffectiveness does not seem to be defined, so it remains a word. It’s unclear how the judgement of effectiveness or not can be made without a definition. Say one can define the effectiveness of an engine with a measure of an efficiency coefficient. How one does this for all of math as applied to all of biology is unclear.
2) Section 3 – reasons for ineffectiveness – seems to be composed entirely of opinions. A repetition of the claim can hardly be a reason, but it is listed in this section: “The problem may be that biology, unlike physics, is simply not amenable to mathematical description.” No data is provided in support of this claim about the entire biology and entire mathematics.
3) Section 4, citation: “Our arguments are closely related to Godel’s incompleteness theorem.” Godel’s incompleteness theorems are highly specific – they apply to “formal systems,” which are strictly defined in mathematics. His first theorem deals with arithmetic manipulations. His theorems apply to a specific framework; he didn’t prove, nor state, that every theory in every science is incomplete. He was highly specific. Taking his work and applying his arguments made for formal systems in math to biology seems to be a stretch. He didn’t say that. If we must use Godel, let’s start from defining a formal system we are trying to apply his argument to.
To summarize, this paper is essentially an opinion piece. The claim is not specific enough to be proven or falsified. The subjects discussed are generalized – all math and all of biology.
It is fair to say that models in science are indeed incomplete. Science is never fully sure and we are perpetually refining the models based on new data coming in, which happened in physics – Einstein showed limitations of Newton’s work, and Aspect, Clauser and Zeilinger showed the limitations of some of Einstein’s work – and none of them called each other’s work, or the tools they used, “ineffective.” Math is quite simply a language and a tool, appropriate for some tasks in some aspects of biology and less appropriate for others. A tool can’t be inherently ineffective. It’s how it’s used and the context that determine the applicability.
1) We use the term ineffectiveness in the same sense that Eugene Wigner uses the term effectiveness.
2) Kauffman and Roli prove that science cannot quantify affordances, based on an argument from Set Theory. I prove that mathematical models of biology cannot describe induction, based on Turing’s halting problem, which is a model of a machine.
The notion that science can reduce everything to mathematics, and therefore mathematics can be fully effective in describing all physical phenomena, is no different than saying the universe is a Turing machine. You might well define reductionist science as the objective to do exactly that, and to build a model of that machine.
The very existence of inductive reasoning and the Halting problem prove that this is not possible. There are always more things that are true than you can prove (quantify).
3) The halting problem applies to physical mechanisms that can be described as sequences of steps, not just certain classes of academic math problems. This is why mathematics in physics can be “effective.” This is also why I framed my argument from a Halting point of view rather than a Godel point of view. But the Halting problem and incompleteness are different ways of saying the same thing, which is easily verified by a search of the literature.
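For readers who want the Halting argument itself, here is the standard diagonalization sketch in Python – a textbook construction, not something specific to our paper:

```python
# Classic diagonal argument: suppose a total function halts(program, arg)
# existed that correctly decided halting for every program. Then the
# following program defeats it.

def halts(program, argument):
    raise NotImplementedError("no such total decider can exist")

def diagonal(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:       # predicted to halt -> loop forever
            pass
    return                # predicted to loop -> halt immediately

# diagonal(diagonal) halts  <=>  halts(diagonal, diagonal) returns False
# diagonal(diagonal) loops  <=>  halts(diagonal, diagonal) returns True
# Either way halts() is wrong, so no such decider can exist.
```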
This gets us to the new reality that the math community had to reckon with when Godel proved his theorems in 1931 – that mathematics cannot explain or justify itself. It says that no tool can be 100% effective. The irony was that Godel used mathematics to prove the limitations of mathematics… and he proved it.
Godel talked about a highly specific problem in a specific system. He clearly did not make statements about all of mathematics (then the statement would not be falsifiable). Here is a citation of what he actually proved:
“Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F.”
He said “consistent” system
“formal system”
“within which a certain amount of elementary arithmetic can be carried out”
Specifically, Godel’s theorem does not apply to quantum logic. As you know, quantum theory is very well experimentally supported.
More to the point, just because a theory or a model is “incomplete” it doesn’t become “ineffective.” Science is based on agnosticism. We don’t know; we form hypotheses and we test them with data. Incomplete is the norm in science. It is good enough for the planes to fly and computers to compute – all of which were built based on incomplete models and turned out to be effective enough.
Your last paragraph is stating my thesis: That good enough is good enough. We are saying that while models in physics can (at least sometimes) be nearly perfect, models in biology are never better than good enough.
I have very much enjoyed reading this exchange, and learned quite a bit from it. I think it is possible, Alexey, that you are putting too much emphasis on the wording of the article’s title, which I confess was chosen largely as an eye-catcher, in contrast to Wigner’s famous paper. We actually never claim that mathematics is useless in all of biology. All three of us have published papers including mathematical analysis of biological systems. As we say in the paper,
The golden ratio and fractal geometry are used to describe shape. Differential equations [49] are used to describe population development, growth, etc. Probability theory is widely used in genetics. Game theory is used in models of behavior. Models of intelligence are based on discrete mathematics and algebra. Processing of any data in biology uses mathematical statistics. We are not throwing mathematics under the bus.
We are saying that 1) a chosen subset of mathematics always has to be selectively applied to a chosen subset of the physical system in question, and 2) that biology generates its own choices and cognitive models as the organism responds to its environment; non-living things do not.
What we are referring to by “ineffectiveness” (which, as Perry said, is meant in the same way that Wigner used “effectiveness”) is the application of mathematical laws to large and important areas of biology such as evolution. There is no mathematical law of evolution. While some versions of the Hardy-Weinberg equilibrium equation have been used to detect evolutionary effects, those rely on a measurement of fitness, which can only be defined empirically. As the paper states:
One of the key concepts in evolutionary theory is fitness, which underlies the principle of natural selection. The problem is that fitness has no precise scientific definition in biology… It is impossible to predict a priori what phenotypic trait will render a particular individual or population living in a particular environment more or less “fit.” The only way we can quantify fitness in any population is by comparative measurement of survival rates and reproductive success. This means that any mathematical description of the mechanism of evolution by natural selection is ultimately circular—a tautology.
This fact of biosphere evolution rules out any chance of a mathematical law governing evolution. Again from the paper:
evolution of the biosphere, besides the impossibility of being entailed, is inherently not mathematizable… The specific trajectory of evolution is radically contingent, rather than converging on one optimal mathematical solution. The system is not obeying a fixed set of equations but continuously creating new rules and possibilities.
The simple fact on which this paper is based is that life, unlike non-life, has agency. While this fact has been disputed by most biologists since the rise of neo-Darwinism, it is now being re-established by a large number of empirical and theoretical studies, as exemplified by the recent book from MIT Press titled “Evolution on Purpose: Teleonomy in Living Systems”. The literature is growing exponentially in support of this reality. It is the agency of all living organisms that renders every level of biological behavior resistant to mathematical treatment at some level. Storms, volcanoes, stars, and oceans do not decide to do anything. Bacteria, oak trees and squirrels do. And those decisions are not computable.