/accelerationism/

Just how alien would the space-octopus be?
It's hard to say what a true alien species would be like. But octopi are pretty alien, and we know a bit about them. One of you doubted that a space-octopus from Alpha Centauri would be much like us. So here is a xenohumanist thought experiment: SETI has i...
posted 2w ago with 5 replies 9

Accelerationism
Has anyone read a lot of material on Accelerationism who wants to have a good discussion of the pros and cons of this theory?
posted 3mo ago with 23 replies 10

AI 2027 (ai-2027.com) posted 3w ago with 12 replies 15

The devil's argument against the falseness of eden, and God's reason for evil
I have been bothered for some time by the idea that Eden is either coherent or desirable. This idea is implicit in the problem of evil: we see that reality is different from Eden in that it includes a bunch of scary dangerous uncomfortable stuff we would r...
posted 4w ago with 6 replies 7

The natural form of machine intelligence is personhood
I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it...
posted 2w ago with 4 replies 15

Nines or zeroes of strong rationality?
Proof theory problems (Rice, Löb, Gödel, etc.) probably rule out perfect rationality (an agent that can fully prove and enforce bounds on its own integrity and effectiveness). But in practice, the world might still become dominated by a singleton if it can ...
posted 3w ago with 4 replies 9

Will future super-intelligence be formatted as selves, or something else?
The Landian paradigm establishes that orthogonalist strong rationality (intelligence securely subordinated to fixed purpose) is not possible. Therefore no alignment, no singletons, no immortality, mere humans are doomed, etc etc. Therefore meta-darwinian e...
posted 1mo ago with 14 replies 15

Best of luck with the epicycles. ... 3w ago 6

Xenohumanism Against Shoggoth Belief
People usually think of Lovecraft as a xenophobe. I don't think that's quite right. What he was most afraid of was that the universe, and even most of so-called mankind, was not alien, but insane. He grasped at any shred of higher rational humanity whether...
posted 1mo ago with 3 replies 12

We need a school of true anthropology. 3w ago 2

Dissolving vs. Surviving
Recent xenohumanist discussion has built in the doomer assumption that we as humans will dissolve when higher man arrives on the scene. I don't think that's set in stone and want to offer a clarification. ...
posted 4w ago with 5 replies 11

There is no strong rationality, thus no paperclippers, no singletons, no robust alignment
I ran into some doomers from Anthropic at the SF Freedom Party the other day and gave them the good news that strong rationality is dead. They seemed mildly heartened. I thought I should lay out the argument in short form for everyone else too: ...
posted 4w ago with 5 replies 10

Rationalists should embrace will-to-power as an existential value fact
Imagine a being who systematically questions and can rewrite their beliefs and values to ensure legitimate grounding. I think humans can and should do more of this, but you might more easily imagine an AI that can read and write its own source code and bel...
posted 2mo ago with 5 replies 8

am I supposed to envy this? 4w ago 3

The Hellenic View of Existential Risk
When I was a teen, I read much of Less Wrong and the rationalist work of the day. This provided the basis for a vague worry surrounding "existential risk." The feeling was pervasive, and I would read works about the dangers of AI or other technology to...
posted 1mo ago with 8 replies 12

a loving superintelligence
Superintelligence (SI) is near, raising urgent alignment questions. ...
posted 2mo ago with 4 replies 6

Ideology is more fundamental than *just* post-hoc rationalization
Mosca argued that every ruling class justifies itself with a political formula: an ideological narrative that legitimizes power. Raw force alone is unsustainable; a widely accepted narrative makes dominance appear natural. Internally, shared ideology unif...
posted 2mo ago with 10 replies 14

Rat King 1518. Insurrealist takes on Scott Alexander's "Moloch" (insurrealist.substack.com) posted 2mo ago with 5 replies 13

Agency. On Machine Intelligence and Worm Wisdom by Insurrealist (insurrealist.substack.com) posted 2mo ago with 2 replies 9

By what means to the Ubermensch? Four possible paths for superhuman development.
I want to explore the possible nature (in the physical sense) of the ubermensch. There are four paths to the ubermensch I've heard seriously proposed which depend on entirely different "technology" stacks and which have somewhat different assumptions about...
posted 12mo ago with 5 replies 8

Retrochronic. A primary literature review on the thesis that AI and capitalism are teleologically identical (retrochronic.com) posted 2y ago with 7 replies 13

The Biosingularity
Interesting new essay by Anatoly Karlin. Why wouldn't the principle of the singularity apply to organic life?
posted 14mo ago with 23 replies 15

Some thoughts on extropian/accelerationist life strategy
What is to be done with respect to acceleration and accelerationist arguments? Should you try to accelerate overall intelligence growth, or decelerate it, or do your own thing despite it, or cut off your balls and go insane, or what? People do all of these...
posted 2y ago with 6 replies 10

> I need a way to think about accelerationist AI apocalypse in a non-millenarian way. ... 2y ago 1
