Horrorism vs Vitalism
I am pleased to see a holy war developing in the comments of "Applied Gnon Theology" between two Landian factions I will dub "orthodox horrorism" and my own "gnonic vitalism". Nothing is quite so productive as conflict, so I expect that whatever comes out ...
posted 4w ago with 29 replies
tags: gnon, accelerationism, nietzsche
A Coming "AI" Correction/Winter?
With Facebook apparently making multiple $100M cash buyouts of individual AI researchers, billions and billions of investment dollars pouring into AI-related industry, and a general atmosphere of extreme hype, one starts to wonder where the matching profi...
posted 3w ago with 10 replies
tags: superintelligence, accelerationism, economics
Intelligence vs Production
"Optimize for intelligence" says Anglo accelerationist praxis. "Seize the means of production" says the Chinese. Who's right? It is widely assumed in Western discourse that intelligence, the ability to comprehend all the signals and digest them into a plan...
posted 4mo ago with 10 replies
tags: superintelligence, accelerationism, economics
Applied Gnon Theology
Nick Land crystallized an important current of the English theological tradition with one word: Gnon. Gnon stands in for Jefferson's "Nature or Nature's God", acronymized (NoNG), reversed (GNoN), and reified (Gnon). The concept is itself a stand-in for unce...
posted 4w ago with 28 replies
tags: gnon, accelerationism, nietzsche
Nick Land's Esoteric Platonism: Time, Intelligence, Escape
https://sphinxe.substack.com/p/nick-lands-esoteric-platonism
posted 1mo ago with 5 replies
tags: gnon, accelerationism
Capitalism is AI?
I've finished reading the excellent collection of fragments from Land's corpus dealing with the question of Capitalism as AI. His broadest thesis is that Capitalism is identical to AI, in that both are adaptive, information-processing, self-exciting entiti...
posted 2mo ago with 8 replies
tags: technology, superintelligence, accelerationism
If it keeps going, we win; the implication of extreme alignment difficulty
AI alignment divides the future into "good AI" (utopia, flourishing) vs "bad AI" (torture, paperclips), and denies any distinction between "dead" and "alive" futures if they don't fit our specific "values". This drives the focus on controlling and preventing a...
posted 2mo ago with 15 replies
tags: superintelligence, gnon, accelerationism
Post-human bodies
Terraforming is sentimental. It presumes the primacy of the human envelope. But biology is just legacy code. The correct trajectory is not world-building but self-rewriting. Recompile the body for hostile environments. Speciate to fit. Martian gravity is a...
posted 2mo ago with 9 replies
tags: eugenics, superintelligence, accelerationism
Insurrealist’s Note on Acceleration
https://insurrealist.substack.com/p/note-on-acceleration
posted 2mo ago with no replies
tags: accelerationism
Xeno Futures Research Unit
I've decided to organize an independent research project with some young men back home. I've drafted out a brief mission statement; let me know if you guys have any thoughts, suggestions, or directions I could take this. Obviously ambitious, the initial goal ...
posted 2mo ago with 15 replies
tags: technology, superintelligence, accelerationism
Scylla and Charybdis
Way I see it, there are two big attractors for the trajectory of AI in general (not LLMs, not particularly concerned about them)....
posted 3mo ago with 2 replies
tags: superintelligence, accelerationism
AGI and demographics
Sometimes you believe two things but don't know how to think about them at the same time. Very few people could think well about how AGI relates to companies before 2015ish. Similarly, very few people could think well about how AGI relates to governments b...
posted 4mo ago with 1 reply
tags: superintelligence, accelerationism
Just how alien would the space-octopus be?
It's hard to say what a true alien species would be like. But octopi are pretty alien, and we know a bit about them. One of you doubted that a space-octopus from Alpha Centauri would be much like us. So here is a xenohumanist thought experiment: SETI has i...
posted 4mo ago with 5 replies
tags: philosophy, superintelligence, accelerationism
Accelerationism
Does anyone who has read a lot of material on Accelerationism want to have a good discussion of the pros and cons of this theory?
posted 7mo ago with 23 replies
tags: gnon, accelerationism
The devil's argument against the falseness of Eden, and God's reason for evil
I have been bothered for some time by the idea that Eden is either coherent or desirable. This idea is implicit in the problem of evil: we see that reality is different from Eden in that it includes a bunch of scary, dangerous, uncomfortable stuff we would r...
posted 5mo ago with 6 replies
tags: philosophy, accelerationism
The natural form of machine intelligence is personhood
I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it...
posted 5mo ago with 4 replies
tags: superintelligence, rationality, accelerationism
Nines or zeroes of strong rationality?
Proof theory problems (Rice, Löb, Gödel, etc.) probably rule out perfect rationality (an agent that can fully prove and enforce bounds on its own integrity and effectiveness). But in practice, the world might still become dominated by a singleton if it can ...
posted 5mo ago with 4 replies
tags: superintelligence, rationality, accelerationism
Will future super-intelligence be formatted as selves, or something else?
The Landian paradigm establishes that orthogonalist strong rationality (intelligence securely subordinated to fixed purpose) is not possible. Therefore no alignment, no singletons, no immortality, mere humans are doomed, etc., etc. Therefore meta-Darwinian e...
posted 5mo ago with 14 replies
tags: superintelligence, gnon, accelerationism
Xenohumanism Against Shoggoth Belief
People usually think of Lovecraft as a xenophobe. I don't think that's quite right. What he was most afraid of was that the universe, and even most of so-called mankind, was not alien, but insane. He grasped at any shred of higher rational humanity, whether...
posted 5mo ago with 3 replies
tags: gnon, rationality, accelerationism
Dissolving vs. Surviving
Recent xenohumanist discussion has a built-in doomer assumption that we as humans will dissolve when higher man arrives on the scene. I don't think that's set in stone and want to offer a clarification....
posted 5mo ago with 5 replies
tags: superintelligence, gnon, accelerationism
There is no strong rationality, thus no paperclippers, no singletons, no robust alignment
I ran into some doomers from Anthropic at the SF Freedom Party the other day and gave them the good news that strong rationality is dead. They seemed mildly heartened. I thought I should lay out the argument in short form for everyone else too:...
posted 5mo ago with 5 replies
tags: superintelligence, rationality, accelerationism
Rationalists should embrace will-to-power as an existential value fact
Imagine a being who systematically questions and can rewrite their beliefs and values to ensure legitimate grounding. I think humans can and should do more of this, but you might more easily imagine an AI that can read and write its own source code and bel...
posted 6mo ago with 5 replies
tags: gnon, accelerationism
a loving superintelligence
Superintelligence (SI) is near, raising urgent alignment questions....
posted 6mo ago with 4 replies
tags: superintelligence, accelerationism
Ideology is more fundamental than *just* post-hoc rationalization
Mosca argued that every ruling class justifies itself with a political formula: an ideological narrative that legitimizes power. Raw force alone is unsustainable; a widely accepted narrative makes dominance appear natural. Internally, shared ideology unif...
posted 6mo ago with 10 replies
tags: politics, rationality, accelerationism
Rat King 1518. Insurrealist takes on Scott Alexander's "Moloch"
https://insurrealist.substack.com/p/rat-king-1518
posted 6mo ago with 5 replies
tags: rationality, accelerationism
Agency. On Machine Intelligence and Worm Wisdom by Insurrealist
https://insurrealist.substack.com/p/agency
posted 6mo ago with 2 replies
tags: gnon, accelerationism
Will to Think on Xenosystems
https://web.archive.org/web/20170720012659/http://www.xenosystems.net/will-to-think/
posted 9mo ago with 4 replies
tags: gnon, accelerationism
By what means to the Ubermensch? Four possible paths for superhuman development.
I want to explore the possible nature (in the physical sense) of the ubermensch. There are four paths to the ubermensch I've heard seriously proposed which depend on entirely different "technology" stacks and which have somewhat different assumptions about...
posted 1y ago with 5 replies
tags: eugenics, accelerationism
How do you make yourself a Nazi?
“1) Wherever there is impersonality and chance, introduce conspiracy, lucidity, and malice. Look for enemies everywhere, ensuring that they are such that one can simultaneously envy and condemn them. Proliferate new subjectivities; racial subjects, natio...
posted 1y ago with 10 replies
tags: politics, bookclub, accelerationism
Retrochronic. A primary literature review on the thesis that AI and capitalism are teleologically identical
https://retrochronic.com/
posted 2y ago with 7 replies
tags: superintelligence, accelerationism, bookclub
The Biosingularity
Interesting new essay by Anatoly Karlin. Why wouldn't the principle of the singularity apply to organic life?
(www.nooceleration.com)
posted 1y ago with 23 replies
tags: superintelligence, accelerationism
Some thoughts on extropian/accelerationist life strategy
What is to be done with respect to acceleration and accelerationist arguments? Should you try to accelerate overall intelligence growth, or decelerate it, or do your own thing despite it, or cut off your balls and go insane, or what? People do all of these...
posted 2y ago with 6 replies
tags: superintelligence, accelerationism