
Will Homo sapiens ever make it to Alpha Centauri?

landposting said in #5108 6d ago:

As AI moves from science fiction to real-life economics, it's becoming clearer that the era of Homo sapiens as the most intelligent life form on Earth is drawing to a close. Is there any future for our species?

For a long time now, perhaps since the end of the nineteenth century, there's been a predominant view that as long as humanity doesn't destroy itself or regress to savagery, the future, the deep future, holds the expansion of man, the development of his technics, and the pacification of nature. Mankind has imagined itself spreading from Europe to the dark continent, to America, to Australia, even the Moon, Mars, the outer planets and the stars. Growing up in the 2000s, I thought it certain that one day we would walk on other worlds around other suns, thousands of years of future progress in the human story.

Yet, in the few centuries since Western Europe embarked on the grand project of the "relief of man's estate," as Bacon terms it, we've discovered that there are certain technics achievable today beyond what Descartes or Bacon could have dreamed, cybernetic technologies that dwarf our atavistic dreams of sailing between the stars. The transistor, the computer, the internet — these are not the technologies that we had imagined would define our future. And AI most of all... the idea of artificial laborers has birthed the prospect of systems that can surpass us in reasoning, in creativity, in perception, and in speed of thought and action. However capable today's LLMs are, they are nothing in comparison to the machine-minds that will be built in the coming decades. The human capitalist economic system is bootstrapping itself to another paradigm of labor, production, and cybernetic processing, and has mobilized trillions of dollars in order to do so.

As AI takes off, humans will struggle to remain "in the loop" of production and decision-making. This is one of Nick Land's core insights: the Darwinian dynamics of economic competition will both drive the development of ever more capable AI/cybernetic systems and necessitate that they be given more and more autonomy in order to better compete in the market economy. With China pushing the United States into a new era of great power competition, the prospect of successful government containment of the AI revolution is doubtful. Whether or not humans are nominally in charge of the hyper-economy of AI agents, (AI?) corporations, and automated software and hardware development, the real power will be in the hands of the market and the AIs participating therein.

In a recent conversation I had with Land, he described the likely best outcome for humans as a "panda zoo," with humans kept around as interesting specimens watched over and cared for by massively more capable AI systems. Even in the most optimistic scenarios, with humans succeeding in dominating their more intelligent AI systems (note that Land anticipates human control efforts are unlikely to succeed), I fail to see how we can escape the fate of glorified zoo animals. When AIs are running the productive economy, developing technology, and making all key strategic decisions, putting humans in the loop just leads to strictly worse outcomes, like giving toddlers authority over the work and finances of their parents. It just doesn't make sense. Obviously here we're talking about scenarios where the parents and children don't devour each other alive, but I want to stick to exploring the "best" outcomes here.

So, back to the main question: will humans ever make it to Alpha Centauri? Will the sci-fi future we imagine for our race, the trials and successes of our children's children's children, ever come to pass? I don't see how, at this point, it's possible. Even if we do build starships, we'll be cargo, brought along to be trotted out by our parents, tourists on new worlds. We will achieve nothing that could not be done faster and better by them. The future, even though ours in name, will in reality be theirs. Will we even bother to go to the stars?


anon_twxy said in #5154 25h ago:

I tend to approach this question from two pretty straightforward angles:
1) The transition technology. Just as cell walls, ribosomes, and neuron clusters persisted from earlier transitions, something from our time will be preserved as a transition technology -- likely language or software, some piece of our semantic technology. Some of our innovations will echo in deep time.
2) The right-scaled. Because the speed of light makes it hard to coordinate thought over large distances, I think the structure of intelligent matter in the future will involve elements at many different scales, from the very small to the very large. I think human-sized intelligences (which might in some way resemble human archetypes) are likely to be found in systems that support agents of that scale.

The second angle is probably naively the most appealing, and I do think there is a good chance that things similar to us will be around for a long time. But we will be in competition with robots, and so will converge with them on highly optimized forms that are probably not, in the ways we care about, very similar to our present selves.

I'm assuming a return to Malthusian dynamics -- this plus ultimate biotech will transform human-scale and human-type things very, very far.

We might get a national park Earth, of course. It's almost a silly enough idea, and irrelevant enough in the scheme of things, to be possible.

