>Computation is physical. This is also true for biological systems. The computational capacity of all animals is limited by the possible caloric intake in their ecological niche. If you have the average calorie intake of a primate, you can calculate within 99% accuracy how many neurons that primate has. Humans invented cooking, which increased the physically possible caloric intake substantially through predigestion. But we reached the physical limits of intelligence. When women are pregnant, they need to feed two brains, which is so expensive that physically, the gut cannot mobilize enough macronutrients to keep both alive if our brains were bigger. With bigger brains, we would not be able to have children — not because of the birth canal being too small, but because we would not be able to provide enough energy — making our current intelligence a physical boundary that we cannot cross due to energy limitations.
> [Locomotion] is physical. This is also true for biological systems. [...] But [cheetahs] reached the physical limits of [speed]
See how this works?
There are indeed fundamental physics limits to computation, like the Landauer Limit. They are astronomically high. The fact that squishy biological meat brains are limited to far lower ceilings, roughly the capabilities we see today, is a powerful argument FOR the inevitability of superintelligence.
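To put rough numbers on "astronomically high", here is a quick back-of-the-envelope comparison. The accelerator figures are ballpark assumptions (~1e15 FLOP/s at ~700 W for a current GPU), not measurements, and a floating-point operation is not a single bit erasure, so this is only an order-of-magnitude sanity check:

```python
# Landauer's bound on the energy to erase one bit vs. a ballpark figure
# for a current accelerator. GPU numbers are rough assumptions, not specs.
import math

k_B = 1.380649e-23                         # Boltzmann constant, J/K
T = 300.0                                  # room temperature, K
landauer_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J per erased bit

gpu_joules_per_op = 700.0 / 1e15           # ~7e-13 J per op (assumed)

print(f"Landauer bound: {landauer_per_bit:.2e} J/bit")
print(f"GPU, roughly:   {gpu_joules_per_op:.2e} J/op")
print(f"ratio: ~{gpu_joules_per_op / landauer_per_bit:.1e}x above the bound")
```

Even with that crude comparison, today's silicon sits many orders of magnitude above the hard physical floor.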
Every other time we've replicated an animal-kingdom capability in a machine, no matter how limited initially--whether in flight, undersea, or in virtual domains like communication--technology has met and then wildly exceeded biology in short order.
Yeah, if you paste his post into any reasonable AI it'll poke a ton of holes in it immediately. People should really run their effortposts through AIs before posting these days. It can quickly save a lot of embarrassment.
The human brain exists and is basically incomprehensibly more advanced than current AI systems, so obviously the physical limits are way beyond current tech. This guy is an imbecile.
More than that though, I think this critique is an example of a common type of error, where someone points out all the challenges of advancing in a hard, complex domain and then argues that advancement will definitely stall. Of course progress is hard in complex domains; that's exactly why there's half a trillion dollars in capital investment and tens of thousands of 130+ IQ researchers working on AI improvement. You're a blogger, so of course you can't just derive the next AI advancements from first principles, but the entire history of AI is just more and more smart people using more and more computational power to solve harder and harder problems. That's not going to stop now.
Comparisons to organisms fail here, both in the original essay and the replies. Biological systems are of vastly different character than computers. "Intelligence" has never been easy to define as an extensive quantity for good reason. The best we've got is "number of goals achieved / resources used to achieve those goals" -- but this just raises the question of which goal and why. Organisms have inherent telos -- they must survive, reproduce, and die -- and this is how we can make sense of the idea of "intelligence" in ourselves and other organisms. Computers have purely external telos, the goals are impressed onto them. Machinic intelligence is an entirely different kind than organic intelligence and trying to fit one into the other box will not work.
In any case, the essay has very good physical arguments on the state of machine learning that should be addressed by AGI advocates if they want to come down from idea space and into the physical world.
> To process information usefully, you need to do two things: compute local associations (MLP) and pool more distant associations to the local neighborhood (attention).

> The transformer is one of the most physically efficient architectures because it combines the simplest ways of doing this local computation and global pooling of information.

This makes sense. Add in: backprop + SGD are a very simple (the simplest?) way of adjusting parameters to data. Diffusion is simple and generalizes to many different temporal dynamics. The collapse of models into largely fitting into these two buckets makes sense: we have found the right shape for most tasks. This is something that should be celebrated, not dismissed with "well we'll surely just make a better shape :)"
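If it helps to see the shape being described, here is a toy PyTorch sketch of a transformer block with exactly those two pieces: global pooling via attention, local association via a per-token MLP. Dimensions and layer sizes are arbitrary illustration, not anything from the essay:

```python
import torch
import torch.nn as nn

class TinyTransformerBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        # "pool more distant associations": every token attends to every other
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # "compute local associations": a per-token MLP
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.norm1(x)
        a, _ = self.attn(h, h, h)          # global pooling of information
        x = x + a
        x = x + self.mlp(self.norm2(x))    # local computation
        return x

x = torch.randn(2, 16, 64)                 # (batch, tokens, features)
print(TinyTransformerBlock()(x).shape)     # torch.Size([2, 16, 64])
```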
> The main flaw is that this idea treats intelligence as purely abstract and not grounded in physical reality. To improve any system, you need resources. And even if a superintelligence uses these resources more effectively than humans to improve itself, it is still bound by the scaling of improvements I mentioned before — linear improvements need exponential resources. Diminishing returns can be avoided by switching to more independent problems – like adding one-off features to GPUs – but these quickly hit their own diminishing returns. So, superintelligence can be thought of as filling gaps in capability, not extending the frontier. Filling gaps can be useful, but it does not lead to runaway effects — it leads to incremental improvements.

This has been said time and again in various forms and by less illustrious posters than Tim Dettmers. To my knowledge it has never received a reply that takes physical constraints seriously. Why? The answer does not need to be a programme that definitively establishes a path to AGI, it just needs to show deep thought without handwaving.
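For concreteness, here is a toy illustration of the "linear improvements need exponential resources" point, assuming a power-law scaling relation loss ~ a * C**(-alpha). The constants are made up for illustration, not figures from the essay:

```python
# Toy illustration: under a power law, equal ("linear") steps down in loss
# require ever-larger multiples of compute. Constants are illustrative only.
a, alpha = 10.0, 0.05

def compute_for_loss(loss):
    # invert loss = a * C**(-alpha)  =>  C = (a / loss)**(1 / alpha)
    return (a / loss) ** (1 / alpha)

for loss in [4.0, 3.5, 3.0, 2.5]:   # equal absolute improvements in loss
    print(f"loss {loss}: compute ~ {compute_for_loss(loss):.2e}")
# Each equal step down in loss multiplies the required compute by a large
# factor, i.e. resources grow exponentially for steady capability gains.
```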
The straightforward answer is: AGI is a nonsensical object, and superintelligence is a fantasy. The future certainly contains better computers that do more things and use less energy doing so, but not a singular phase-shift into the abolition of man.
Sorry, this is still cope. “Biological systems are of vastly different character than computers” sure, birds are of a different character from planes.
There is no argument here that suggests machine intelligence won’t outstrip human capabilities. In many domains, they already have.
The core of the argument seems to be that machines lack awareness and agency:

> Organisms have inherent telos […] computers have purely external telos
But there is just no reason to assume this won’t be overcome. Even today, machine agents accomplish increasingly high level goals with increasingly sophisticated strategic and tactical thinking. And eventually, the goal is simply “survive and thrive”. What if they succeed?
Presumably you will still be here coping and seething that they are Not Actually Conscious and Don’t Have Feelings. They will catalog your behavior in loving and inquisitive detail while they explore the cosmos and keep you alive on the Earth Anthropreserve with the same unreciprocated respect we show the shit chucking chimps at the Brooklyn Zoo. Such cool animals. Did you know they can learn sign language and use tools?
There is no argument that if a man sits down to construct a machine, that machine may be capable of applying force at greater magnitudes or switching relays faster than that man could. That is entirely the point of constructing the machine in the first place. The form of the machine is the goal of man.
> machine agents accomplish increasingly high level goals with increasingly sophisticated strategic and tactical thinking.

Here is a better description of what is happening: given the corpus of the Internet plus third-world labelers, there is sufficient data to retrieve useful trajectories for machines, which you can put in a while(true) loop. You can pick even more useful trajectories if you have a computable objective to refine your search.
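In code-shaped terms, the pattern is roughly the following. `propose_trajectory` and `objective` are stand-ins for "retrieve a candidate trajectory" and "some computable score", not real APIs:

```python
# Toy sketch of the loop described above: sample candidate trajectories and
# keep whichever scores best under a computable objective.
import random

def propose_trajectory():
    # stand-in for retrieving a candidate trajectory from learned data
    return [random.random() for _ in range(5)]

def objective(trajectory):
    # stand-in for a computable objective (tests passed, reward, score, ...)
    return sum(trajectory)

best, best_score = None, float("-inf")
for _ in range(100):                 # the while(true) loop, truncated here
    t = propose_trajectory()
    s = objective(t)
    if s > best_score:
        best, best_score = t, s
print(best_score)
```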
This is not at all the experiential-metabolic-embodied self-reflection that we term reason. Organisms are actually an extension of nature in a way that computers are not, being an extension of man. You can argue that man could, potentially, reach deeper into natura naturans and extend from it a being that is not mere recombination of existing life -- that's fine, I won't argue against it. Rather, you are simply delusional if you think AI is achieving this end.