sofiechan home

xenohumanist

in thread "Scylla and Charybdis":

Just how alien would the space-octopus be?
It's hard to say what a true alien species would be like. But octopi are pretty alien, and we know a bit about them. One of you doubted that a space-octopus from Alpha Centauri would be much like us. So here is a xenohumanist thought experiment: SETI has i...
posted 2mo ago with 5 replies · sociology philosophy accelerationism

The devil's argument against the falseness of Eden, and God's reason for evil
I have been bothered for some time by the idea that Eden is either coherent or desirable. This idea is implicit in the problem of evil: we see that reality is different from Eden in that it includes a bunch of scary dangerous uncomfortable stuff we would r...
posted 3mo ago with 6 replies · philosophy accelerationism metaphysics

What if the extended human phenotype is natural and convergent?
Samo Burja's thesis is that civilization is part of the "extended human phenotype", as dam building is in the beaver's phenotype, and older than we think. In this model, properly savage hunter-gatherers are either more associated with nearby civilization t...
posted 2mo ago with 9 replies · eugenics sociology

The natural form of machine intelligence is personhood
I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it...
posted 2mo ago with 4 replies · rationality accelerationism intelligence

This is the central conjecture of xenohumanism. The OP is just an application specifically to the question of AI... · 2mo ago

Nines or zeroes of strong rationality?
Proof theory problems (Rice, Löb, Gödel, etc.) probably rule out perfect rationality (an agent that can fully prove and enforce bounds on its own integrity and effectiveness). But in practice, the world might still become dominated by a singleton if it can ...
posted 3mo ago with 4 replies · rationality accelerationism intelligence

There is no strong rationality, thus no paperclippers, no singletons, no robust alignment
I ran into some doomers from Anthropic at the SF Freedom Party the other day and gave them the good news that strong rationality is dead. They seemed mildly heartened. I thought I should lay out the argument in short form for everyone else too:...
posted 3mo ago with 5 replies · rationality accelerationism

Will future super-intelligence be formatted as selves, or something else?
The Landian paradigm establishes that orthogonalist strong rationality (intelligence securely subordinated to fixed purpose) is not possible. Therefore no alignment, no singletons, no immortality, mere humans are doomed, etc etc. Therefore meta-darwinian e...
posted 3mo ago with 14 replies · rationality gnon intelligence

I only wish I could keep up with your vocabulary. Have you explained somewhere yet how you're using the term "ipseity"? · 3mo ago

Xenohumanism Against Shoggoth Belief
People usually think of Lovecraft as a xenophobe. I don't think that's quite right. What he was most afraid of was that the universe, and even most of so-called mankind, was not alien, but insane. He grasped at any shred of higher rational humanity whether...
posted 3mo ago with 3 replies · rationality philosophy accelerationism

Rationalists should embrace will-to-power as an existential value fact
Imagine a being who systematically questions and can rewrite their beliefs and values to ensure legitimate grounding. I think humans can and should do more of this, but you might more easily imagine an AI that can read and write its own source code and bel...
posted 4mo ago with 5 replies · ideology rationality gnon
