
Why AGI Will Not Happen

anon_kysw said in #4801 6d ago:

https://timdettmers.com/2025/12/10/why-agi-will-not-happen/

>Computation is physical. This is also true for biological systems. The computational capacity of all animals is limited by the possible caloric intake in their ecological niche. If you have the average calorie intake of a primate, you can calculate within 99% accuracy how many neurons that primate has. Humans invented cooking, which increased the physically possible caloric intake substantially through predigestion. But we reached the physical limits of intelligence. When women are pregnant, they need to feed two brains, which is so expensive that physically, the gut cannot mobilize enough macronutrients to keep both alive if our brains were bigger. With bigger brains, we would not be able to have children — not because of the birth canal being too small, but because we would not be able to provide enough energy — making our current intelligence a physical boundary that we cannot cross due to energy limitations.

referenced by: >>4824


egon said in #4804 6d ago:

What an odd and desperate cope.

> [Locomotion] is physical. This is also true for biological systems. [...] But [cheetahs] reached the physical limits of [speed]

See how this works?

There are indeed fundamental physics limits to computation, like the Landauer Limit. They are astronomically high. The fact that squishy biological meat brains are limited to far lower ceilings, roughly the capabilities we see today, is a powerful argument FOR the inevitability of superintelligence.
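To put rough numbers on "astronomically high": a back-of-envelope comparison between the Landauer bound and the brain's power budget (using the standard Boltzmann constant and the commonly cited ~20 W figure for the brain; a toy calculation, not a claim about actual brain operations):

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T is
# E = k_B * T * ln(2).
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # roughly room temperature, K

e_bit = k_B * T * math.log(2)  # ~2.87e-21 J per bit erased

# The human brain runs on roughly 20 W (a commonly cited estimate).
brain_power_watts = 20.0

# At the Landauer limit, 20 W would pay for this many irreversible
# bit operations per second:
max_erasures_per_s = brain_power_watts / e_bit

print(f"Landauer energy per bit: {e_bit:.3e} J")
print(f"Bit erasures/s affordable at 20 W: {max_erasures_per_s:.3e}")
```

Around 7e21 irreversible operations per second on a brain's power budget: whatever the brain is doing, it is nowhere near that ceiling, which is the point.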

Every other time we've replicated an animal-kingdom capability in a machine, no matter how limited the first attempts--whether flight, undersea locomotion, or virtual domains like communication--technology has met and then wildly exceeded biology in short order.


anon_magi said in #4838 2d ago:

Yeah, if you paste his post into any reasonable AI it'll poke a ton of holes in it immediately. People should really run their effortposts through AIs before posting these days. It can quickly save a lot of embarrassment.


phaedrus said in #4839 2d ago:

The human brain exists and is basically incomprehensibly more advanced than current AI systems, so obviously the physical limits are way beyond current tech. This guy is an imbecile.

More than that though, I think this critique is an example of a common type error, where someone will point out all the challenges of advancing in a hard, complex domain and then argue that advancement will definitely stall. Of course progress is hard in complex domains, so that's why there's half a trillion dollars in capital investment and tens of thousands of 130+ IQ researchers working on AI improvement. You're a blogger so of course you can't just derive the next AI advancements from first principles, but the entire history of AI is just more and more smart people using more and more computational power to solve harder and harder problems. That's not going to stop now.


adamjesionowski said in #4846 20h ago:

Comparisons to organisms fail here, both in the original essay and the replies. Biological systems are of vastly different character than computers. "Intelligence" has never been easy to define as an extensive quantity for good reason. The best we've got is "number of goals achieved / resources used to achieve those goals" -- but this just raises the question of which goal and why. Organisms have inherent telos -- they must survive, reproduce, and die -- and this is how we can make sense of the idea of "intelligence" in ourselves and other organisms. Computers have purely external telos, the goals are impressed onto them. Machinic intelligence is an entirely different kind than organic intelligence and trying to fit one into the other box will not work.

In any case, the essay has very good physical arguments on the state of machine learning that should be addressed by AGI advocates if they want to come down from idea space and into the physical world.

> To process information usefully, you need to do two things: compute local associations (MLP) and pool more distant associations to the local neighborhood (attention).
> The transformer is one of the most physically efficient architectures because it combines the simplest ways of doing this local computation and global pooling of information.
This makes sense. Add in: backprop + SGD are a very simple (the simplest?) way of adjusting parameters to data. Diffusion is simple and generalizes to many different temporal dynamics. The collapse of models into largely fitting into these two buckets makes sense: we have found the right shape for most tasks. This is something that should be celebrated, not dismissed with "well we'll surely just make a better shape :)"
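The local/global split in that quote is easy to make concrete. Here's a toy single-head transformer block in numpy (illustrative dimensions, random weights, no training): attention pools information across all positions, then a position-wise MLP computes local associations on each token independently.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 5  # model dim, sequence length (toy sizes)
x = rng.standard_normal((n, d))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Global pooling of information: single-head self-attention.
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = softmax(q @ k.T / np.sqrt(d)) @ v  # every token attends to every token

# Local association: position-wise MLP, applied to each token independently.
W1 = rng.standard_normal((d, 4 * d)) / np.sqrt(d)
W2 = rng.standard_normal((4 * d, d)) / np.sqrt(4 * d)

h = x + attn                         # residual around attention
y = h + np.maximum(h @ W1, 0) @ W2   # residual around ReLU MLP

print(y.shape)  # same shape in, same shape out
```

That's the whole block: one matmul pattern that mixes across positions, one that mixes within a position. It's hard to imagine a much simpler way to combine the two operations the quote names.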

> The main flaw is that this idea treats intelligence as purely abstract and not grounded in physical reality. To improve any system, you need resources. And even if a superintelligence uses these resources more effectively than humans to improve itself, it is still bound by the scaling of improvements I mentioned before — linear improvements need exponential resources. Diminishing returns can be avoided by switching to more independent problems – like adding one-off features to GPUs – but these quickly hit their own diminishing returns. So, superintelligence can be thought of as filling gaps in capability, not extending the frontier. Filling gaps can be useful, but it does not lead to runaway effects — it leads to incremental improvements.
This has been said time and again in various forms and by less illustrious posters than Tim Dettmers. To my knowledge it has never received a reply that takes physical constraints seriously. Why? The answer does not need to be a programme that definitively establishes a path to AGI, it just needs to show deep thought without handwaving.
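For anyone who hasn't internalized what "linear improvements need exponential resources" cashes out to, here is a toy version with made-up constants, assuming the log-linear scaling shape Dettmers describes (score ~ a + b*log10(compute)):

```python
# Toy illustration, invented constants: suppose a benchmark score scales as
#   score = a + b * log10(compute)
# which is the log-linear shape typical of scaling curves.
a, b = 10.0, 5.0

def compute_needed(score):
    # Invert score = a + b*log10(c)  ->  c = 10 ** ((score - a) / b)
    return 10 ** ((score - a) / b)

# Each additional +5 points costs 10x the compute of the last +5:
# linear gains, exponential resources.
for s in (20, 25, 30, 35):
    print(s, f"{compute_needed(s):.0e}")
```

Under this shape, every fixed increment of capability multiplies the resource bill by a constant factor. Any serious reply to Dettmers has to say where the budget for those multiplications comes from.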

The straightforward answer is: AGI is a nonsensical object, and superintelligence is a fantasy. The future certainly contains better computers that do more things and use less energy doing so, but not a singular phase-shift into the abolition of man.

