I know what you're thinking: anon's gone all kooky, mashing together a bunch of mysterious things, wrapping it in some gold foil, and worshipping it as an idol. Maybe, maybe not. Hear me out.
I attended Charles Rosenbauer's talk on NP at Avalon the other day, and while Charles is a notoriously disorganized speaker, the subject matter was fascinating. To reiterate, NP is the class of problems whose solutions can be verified quickly (polynomial time) and thus could be solved by an imaginary Nondeterministic (N-) computer in Polynomial (-P) time by trying all possibilities in parallel (impossible in reality). More specifically, we are interested in the NP-complete problems, which can't be solved quickly in the general case by real deterministic computers as far as we know (P!=NP), and to which all other problems in NP can be reduced.
NP-complete somehow rhymes with Turing-complete. There is a universality class here: any algorithm that can solve one of these problems can solve all of them, and many others besides, just as any universal computer can emulate any other. I have long suspected there is something like a universality class underlying intelligence and life, so I am taking note of this.
The most obvious and intuitive of these problems is the BSAT (Boolean SATisfiability) problem: given a set of binary variables and binary constraint expressions over those variables, is there a setting of the variables that satisfies all the constraints? A solution can be verified by simply checking the constraints, but one is not so easily found. You can build just about anything out of arbitrary binary logic circuits, so the ability to solve such problems is extremely powerful: for example, you could invert most functions, including cryptographic ones. However, if P!=NP, there is no polynomial-time algorithm that solves these in general, and as far as anyone knows the worst case is not much better than raw exponential brute force.
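To make the asymmetry concrete, here's a minimal Python sketch (the clause encoding is just my choice for illustration): checking a candidate assignment is one linear pass over the constraints, while the naive way to find one walks all 2^n assignments.

```python
from itertools import product

# A formula in CNF: each clause is a list of literals, where literal k > 0
# means "variable k is true" and k < 0 means "variable k is false".
# Example: (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
num_vars = 3

def verify(formula, assignment):
    """Check every clause in one pass -- polynomial (here linear) time."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force(formula, num_vars):
    """Try all 2^n assignments -- exponential time in the worst case."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(formula, assignment):
            return assignment
    return None  # unsatisfiable

print(brute_force(formula, num_vars))  # {1: False, 2: False, 3: True}
```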
But in practice, the real-world problems that are best expressed in NP-complete terms can in fact be solved quite rapidly. Cutting-edge BSAT algorithms rip through them in much less than exponential time, and the best of these are learning algorithms. They assume the problem has some repeatable structure, like a core of key variables that determine the rest, or repeated connections between seemingly distant variable clusters, and they learn that structure inductively. For example, when they hit a contradiction in one of their guesses, they trace the causal origin of that contradiction, invent a new constraint to capture that causality, and then backtrack and retry other possibilities with the now-explicit constraint taken into account. As inductive learners, they don't do well on random problems, only on problems constructed from actual reality.
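That conflict-learning loop is the core of CDCL ("conflict-driven clause learning") solvers like MiniSat. Here's a deliberately naive Python sketch of the idea, using the same clause encoding as above; real solvers analyze an implication graph to learn far sharper clauses than the "negate all current decisions" clause below.

```python
def unit_propagate(clauses, assignment):
    """Assign literals forced by unit clauses until a fixpoint.
    Returns None if some clause is fully falsified (a conflict)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif val == (lit > 0):
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None              # conflict: every literal is false
            if len(unassigned) == 1:     # unit clause: this literal is forced
                forced = unassigned[0]
                assignment[abs(forced)] = forced > 0
                changed = True
    return assignment

def solve(clauses, num_vars):
    learned = []  # constraints invented from conflicts, shared across branches

    def search(assignment, decisions):
        assignment = unit_propagate(clauses + learned, dict(assignment))
        if assignment is None:
            if decisions:
                # "Trace" the conflict (very crudely) back to the decisions
                # that caused it and record their negation as a new clause,
                # so other branches can prune without re-deriving it.
                learned.append([-lit for lit in decisions])
            return None
        free = [v for v in range(1, num_vars + 1) if v not in assignment]
        if not free:
            return assignment            # everything assigned, no conflict
        var = free[0]
        for lit in (var, -var):          # decide: try True first, then False
            trial = dict(assignment)
            trial[var] = lit > 0
            result = search(trial, decisions + [lit])
            if result is not None:
                return result
        return None

    return search({}, [])

print(solve([[1, -2], [2, 3]], 3))       # {1: True, 2: True, 3: True}
```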
This brings us to the connection to life and intelligence. Life is "verifiable in polynomial time" as the "solution" to a niche: just run the organism in the niche and see if it reproduces. Or, thinking about intelligence more broadly, you can verify skill in a domain by just seeing whether an agent can take on the domain and produce results within some resource constraint. But how to produce such organisms and such skills? Brute-forcing the genetics of an organism or the weights of a domain model is not tractable, and in the general case that's probably what you would have to do. Life is NP-complete.
But we don't live in the general case. We live in reality, which is not cryptographically secure against life, but has exploitable regularities all over the place. All you need is an inductive process to learn those regularities: Darwin tells us organisms come from an inductive process of natural selection over hereditary mutations. Hinton tells us at least domain-specific intelligence can be created by an inductive process (gradient descent with backprop). Both processes work by learning the repeatable structure of the problem.
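As a toy illustration of "selection over hereditary mutations" attacking this same class of problem, here's a (1+1)-style evolutionary loop that climbs toward a satisfying assignment by scoring candidates on how many clauses they satisfy. This framing is mine, not anything from the talk, and it only works because the fitness landscape of a structured instance is climbable.

```python
import random

def score(clauses, bits):
    """Fitness = number of satisfied clauses. bits[i] is variable i+1."""
    return sum(
        any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def evolve(clauses, num_vars, max_steps=100_000, mutation_rate=None):
    """(1+1) evolutionary algorithm: mutate, keep the child if no worse."""
    mutation_rate = mutation_rate or 1.0 / num_vars
    parent = [random.random() < 0.5 for _ in range(num_vars)]  # random genome
    for _ in range(max_steps):
        if score(clauses, parent) == len(clauses):
            return parent  # all clauses satisfied: a viable "organism"
        # Heredity with mutation: flip each bit with small probability.
        child = [b ^ (random.random() < mutation_rate) for b in parent]
        # Selection: the fitter (or equally fit) variant survives.
        if score(clauses, child) >= score(clauses, parent):
            parent = child
    return None  # gave up

print(evolve([[1, -2], [2, 3]], 3))
```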
There's something very interesting here. Could you construct BSAT-based AI, or solve BSAT problems with gradient descent?
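On the second question, you can at least sketch an attempt: relax each Boolean variable to a probability, write the expected number of violated clauses as a differentiable loss, run gradient descent on it, and round at the end. A minimal numpy version follows; this is my own construction, it can get stuck in local minima, and it is nowhere near a competitive solver.

```python
import numpy as np

def sat_loss_and_grad(clauses, z):
    """Relaxed SAT: p = sigmoid(z) is the probability each variable is true.
    Loss = sum over clauses of prob(clause violated), treating variables as
    independent. Returns (loss, dloss/dz), both computed analytically."""
    p = 1.0 / (1.0 + np.exp(-z))
    loss, grad_p = 0.0, np.zeros_like(p)
    for clause in clauses:
        # Prob each literal is false: (1 - p_i) for positive, p_i for negative.
        miss = np.array([1.0 - p[l - 1] if l > 0 else p[-l - 1] for l in clause])
        q = miss.prod()                  # prob the whole clause is violated
        loss += q
        for k, l in enumerate(clause):
            # d q / d miss_k = product of the other factors.
            others = q / miss[k] if miss[k] > 0 else np.prod(np.delete(miss, k))
            grad_p[abs(l) - 1] += -others if l > 0 else others
    return loss, grad_p * p * (1.0 - p)  # chain rule through the sigmoid

def descend(clauses, num_vars, steps=2000, lr=1.0, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(size=num_vars)        # logits for each variable
    for _ in range(steps):
        _, grad = sat_loss_and_grad(clauses, z)
        z -= lr * grad
    return [bool(zi > 0) for zi in z]    # round the relaxation back to Booleans

print(descend([[1, -2], [2, 3]], 3))
```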
There is indeed something interesting here about life and intelligence, but I don't think it has much to do with NP or BSAT.
It's more in the vicinity of why the No Free Lunch theorems do not, in practice, block induction or machine learning. A naive reading of those theorems says induction and learning should be impossible, but that's from a perspective ranging uniformly over all possible worlds, i.e., one with no informative prior. In reality, in our world, we know that induction and learning do work. And so do life and intelligence.
>>3500 Well, we know why the No Free Lunch theorems don't block practical compression and induction: because reality is in fact quite structured, and thus describable with simple inductive priors.
As for BSAT: no, life is not about BSAT in particular, but the nice thing about NP-complete problems is that they are all equivalent up to a polynomial-time translation. I gave my argument above that there's something NP-complete-ish about the problem life and intelligence are solving. I don't think it's literally NP-complete, because I don't think the problem is that formal and self-contained, but I think it's the same type of difficulty, with the same type of solution (induction).
But you're right that there's something additional in the problem of life that takes it beyond mere NP-completeness, and possibly beyond mere induction. We have good inductive models of, e.g., the world's text, but they aren't intelligent or alive per se. That extra thing is dynamism. Life solves a dynamic problem, where the "training data" or "constraints" are discovered as a direct result of the system's own output, rather than the system being statically fit to a fixed set of training constraints. What's that about?
But I'm also still curious whether there's a formal connection between what's going on with NP-complete problems and what's going on with deep learning. I suspect we'll get an answer one of these days, when Victor Taelin or someone trains a language model using his GPU-accelerated interaction nets or something instead of deep learning. Basically: are NP-complete problems also program synthesis problems, and are program synthesis problems NP-complete? (Yes, I think.)
Then the remainder is just that dynamism problem. What's the difference between a dynamic program synthesis problem and a static one?
NP problems, including NP-complete ones, can be verified in polynomial time. But it seems life sometimes generates solutions that can't be tractably verified. Many zero-sum games, like N x N Reversi, are PSPACE-complete. See True Quantified Boolean Formulas: https://en.wikipedia.org/wiki/True_quantified_Boolean_formula
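Concretely, evaluating a fully quantified Boolean formula is a game-tree recursion: the exists-player and the forall-player alternate picking values until a plain clause check at the leaves. A tiny sketch (the representation is mine) that uses only polynomial space, the recursion stack, but exponential time:

```python
def tqbf(prefix, clauses, assignment=None):
    """Evaluate a quantified Boolean formula such as
    exists x1 forall x2 exists x3 . (x1 or not x2) and (x2 or x3).
    prefix is a list of ('E', var) / ('A', var) quantifiers; clauses are
    CNF with signed-integer literals as in the earlier sketches."""
    assignment = assignment or {}
    if not prefix:
        # Leaf: just check the clauses under the completed assignment.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )
    (quant, var), rest = prefix[0], prefix[1:]
    branches = (tqbf(rest, clauses, {**assignment, var: val})
                for val in (False, True))
    # Exists-player needs some move to work; forall-player needs every move.
    return any(branches) if quant == 'E' else all(branches)

# exists x1 forall x2 exists x3 . (x1 or not x2) and (x2 or x3)
print(tqbf([('E', 1), ('A', 2), ('E', 3)], [[1, -2], [2, 3]]))  # True
```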
To square this with the idea of verifying an organism by placing it in an environment, consider that this only tells you how well it does in a fixed environment, against fixed competitors. It doesn't tell you how "meta" the strategy is, because the result is relative to that fixed set of competitors. Doing well at a zero-sum game in general isn't just doing well against a finite set of competitors; it's doing well against the space of possible competitors, which are themselves optimized to play well.
>>3503 Well, life is at least a solution to what has been, verifiable in reasonable time. Where it starts to break the model, as you point out, is where it becomes dynamic and reflexive. The metagame is a whole new level of dynamism, beyond even what I was talking about. But still, at the root, there's this NP-like asymmetry. Try every lifeform that could exist and see which ones work: is that a description of what Darwin's process is approximating, or a nondeterministic solution to the problem of life? Well, it's both.