
Some thoughts on extropian/accelerationist life strategy

anon 0x6c said in #630 2y ago: 1010

What is to be done with respect to acceleration and accelerationist arguments? Should you try to accelerate overall intelligence growth, or decelerate it, or do your own thing despite it, or cut off your balls and go insane, or what? People do all of these, but I want to know which is rational for people like us.

I want to get your thoughts specifically on how the competition between agents will and will not always favor intelligence-maxxing strategies, which seems to be one of the common claims of accelerationism. Many accelerationists seem to think you should just build AGI, but the best of them also believe it can't really be controlled. Something seems really off about creating AGI that isn't going to be reliably controllable. Why would you do that?

On another platform, an interlocutor suggested that this is sort of the default, max-entropy state of mind if you don't have any other transcendent motive or internal sense of value. It just follows from the overall accelerating context. In other words, yeah, it's the result of being dumb and getting dommed by the hype environment. He can correct me if I read that wrong.

People are all psychologically dominated by the idea of acceleration, so they do dumb things like that. But what's rational?

It seems to me what is rational is to try to increase *your own* intelligence (more generally, power). But this requires some concept of teleological identity which is preserved under enhancement. Some intelligence "enhancements" don't preserve your own identity (this is the ol' "alignment problem"). So you want to pursue strategies that actually serve your own teleology, not some abstract global techonomic acceleration or even unrestricted self-enhancement. Many ways of gaining strategic position aren't about intelligence at all, of course.
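To make the identity-preservation point concrete, here is a minimal sketch in Python. Everything in it (the Agent class, the preference test, the probe pairs) is a hypothetical illustration, not any established formalism: accept an enhancement only if it leaves your preference ordering intact.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Agent:
        # values() scores outcomes; it stands in for the agent's teleology
        values: Callable[[str], float]

        def prefers(self, a: str, b: str) -> bool:
            return self.values(a) > self.values(b)

    def accept_enhancement(agent: Agent,
                           enhance: Callable[[Agent], Agent],
                           probes: list[tuple[str, str]]) -> bool:
        # Accept an "enhancement" only if the enhanced agent still ranks
        # every probe pair the same way, i.e. the preference ordering
        # (the teleological identity, in this toy) is preserved.
        enhanced = enhance(agent)
        return all(agent.prefers(a, b) == enhanced.prefers(a, b)
                   for a, b in probes)

The hard part, of course, is that no finite battery of probes certifies identity in general, which is the alignment problem restated.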

I expect the current tech/intelligence acceleration to level off at some point, either by hitting physical limits, or just exhausting current thinkable directions of enhancement. Especially in relatively stabilized situations but possibly also under the most extreme acceleration, there's always a niche for identity-preservers against the pure accelerationist strategy. I think the timescale of continuity is just smaller/shorter in apocalyptically accelerating circumstances, but otherwise similar.

Identity-preserving agents that run faster than the acceleration timescale they are embedded in carry on more or less as normal. Agents that are slower get dissolved in the acceleration (though many people get sucked into false short-timescale accelerationist cults out of fear of being dissolved outside, even when that fear is unfounded). The acceleration only happens because there are fast agents doing coherent things, so it can't ever fully wipe out identity-preserving teleological agency as the main strategy of life.

So as far as I can tell the actual game to play is not acceleration or deceleration, but pursuing your own will to power. You must accept that some background level of acceleration/change will kill the slower parts of your identity, and churn will kill the faster parts, but there's a great deal of room in the middle for life. Further, your own strategy will have incidental "leakage" into either acceleration or deceleration, but this is neither good nor bad except by what's good for your own strategy. The key in all this is striking the right balance between speed and continuity.
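A toy rendering of that middle band, with invented numbers: a trait survives only if its characteristic timescale sits between the churn floor and the acceleration ceiling.

    T_CHURN = 0.1   # below this timescale, a trait is randomized by churn before it coheres
    T_ACCEL = 10.0  # above this timescale, a trait is invalidated by background acceleration

    def trait_survives(tau: float) -> bool:
        # The middle band: fast enough to adapt before the environment
        # moves, slow enough not to dissolve into noise.
        return T_CHURN < tau < T_ACCEL

    print([tau for tau in (0.01, 0.5, 3.0, 50.0) if trait_survives(tau)])
    # -> [0.5, 3.0]: the slow trait (50.0) dies to acceleration,
    #    the fast one (0.01) to churn; the middle lives.

Under heavier acceleration, T_ACCEL shrinks and the band narrows, which is the shorter "timescale of continuity" mentioned above.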

My apologies if this sounds like totally obscure autism, or obvious to everyone. I'm trying to cut through the specifically accelerationist strains of AI insanity in technical detail. I need a way to think about accelerationist AI apocalypse in a non-millenarian way. This is part of my attempt.

referenced by: >>631 >>637


anon 0x6d said in #631 2y ago: 33

>>630

Continued from OP. My accelerationist interlocutor asked about the emergence and formation of agency and identity in the first place: what does that stuff even mean under heavy acceleration or advanced levels of autopoietic intelligence? He asked me to elaborate.

What is agency and identity? Here's my view: life strategy is not wholly and unitarily calculable. There are at the very least different niches in the world. If there is a niche that builds energy-capture infrastructure for some obvious purpose like computation or expansion, then there is another niche that attacks it and steals the surplus. There may be further niches that parasitize or compete with those in various ways. Dyson swarms compete with star lifters, for example.

Because unitary rationality and intelligence alignment are in general not possible, these different niches will actually exist and be filled. They are not just inefficiencies to be calculated away, but realities.

The agents that exploit these different niches will be formed around their particular strategies. Now again, these strategies are not wholly calculable. There will always be some residual that can only be determined by playing it out in reality. For example, how much should the energy-stealers cooperate vs compete? How much should any given one invest in intelligence vs proliferation vs force capability? Some set of variables like these, if not exactly these, is better taken as a leap of faith than calculated. The key variables defining each lifeform's leap of faith are sort of its "genetic code". Being composed of leaps of faith, the genetic code is basically only refinable by random variation and selection.
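As a sketch of what "refinable only by variation and selection" could look like (the genome fields and the fitness function are invented; in reality the fitness function is exactly the thing you can't write down in advance):

    import random

    def random_genome():
        # A bundle of leap-of-faith variables: how much to cooperate, and
        # how to split investment between intelligence, proliferation, force.
        w = [random.random() for _ in range(3)]
        s = sum(w)
        return {"cooperation": random.random(),
                "intelligence": w[0] / s,
                "proliferation": w[1] / s,
                "force": w[2] / s}

    def fitness(g):
        # Stands in for "playing it out in reality". Unknowable a priori;
        # here it arbitrarily rewards one particular mix.
        return (0.5 * g["intelligence"] + 0.3 * g["proliferation"]
                + 0.2 * g["force"] - abs(g["cooperation"] - 0.6))

    def mutate(g, rate=0.05):
        child = {k: max(0.0, v + random.gauss(0, rate)) for k, v in g.items()}
        total = child["intelligence"] + child["proliferation"] + child["force"]
        for k in ("intelligence", "proliferation", "force"):
            child[k] /= total  # keep the investment split a proper allocation
        return child

    pop = [random_genome() for _ in range(50)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]
    print(max(pop, key=fitness))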

So I think you get genetic evolution, speciation, individuation, even sex, etc. back in this way, despite heavy levels of autopoietic intelligence. This is the context where agency and identity occur: little bundles of life that pursue coherent strategies against the rest of the world-ecosystem. Whether they intelligence-max or proliferation-max, or go for size or strength or whatever, depends on their basic strategic identity, which is a priori because it consists of the leaps of faith that aren't worth calculating.

So then, each of these little bundles of life has to deal with an overall accelerating ecosystem, but has no obligation to actually contribute to or retard that acceleration, though it may find it has such incentives one way or the other depending on its positioning. The primary imperative is to carry out one's own strategic identity, which will almost always include self-preservation, self-acceleration, etc.


anon 0x6e said in #632 2y ago: 33

This picture seems to assume that the world is relatively "continuous" through the acceleration. But "little bundles of life in an accelerating ecosystem" can and likely will be wiped out in a discrete event: a nuclear holocaust, a rogue ASI, a Bostromian totalitarian singleton clamping down on possible threats, etc. This feels relevant to accelerationist life strategy.

referenced by: >>633 >>634


anon 0x6f said in #633 2y ago: 22

>>632
A rogue ASI is just a high-acceleration situation. Yes, most life gets rekt in such a situation, but not all: anything able to adapt fast enough stays live. But even if there are situations that wipe out all but a single accelerated superintelligence, the SI itself will fragment into competing pieces, because self-alignment isn't possible. There will be no singleton. God hates singletons. Nuclear holocaust is of course a risk. The world need not be continuous, but I think acceleration as a phenomenon mostly is.

referenced by: >>634


anon 0x70 said in #634 2y ago: 22

>>632
>>633
Which isn't to say any actual humans will survive ASI. But within the ASI, there will be an ecosystem of little bundles of life operating at a faster timescale than the overall acceleration.


anon 0x71 said in #637 2y ago: 11

>>630
>In other words, yeah it's the result of being dumb and getting dommed by the hype environment. He can correct me if I read that wrong.

Well, for the more naive enthusiast, yes, but what I mean by the "fanatical" ones is something like a generalization of evo-psych style thinking. Charitably, one interrogates one's behaviors and finds them to have been shaped by evolutionary dynamics; but look a little further and you may realize that evolutionary dynamics are themselves shaped by physical and computational imperatives.

In other words, the steelman is that if you have some freedom to choose and no obvious global ordering of utility functions, a sensible thing to do might be to reverse-engineer one so as to be consistent with the evolution of the world-model that best fits observations. That's more or less what a Landian with a clue (of which there are vanishingly few) would say.


anon 0x75 said in #646 2y ago: 11

> I need a way to think about accelerationist AI apocalypse in a non-millenarian way.

Accelerationism (R/ACC more specifically, to distinguish it from the etiologically political accelerationism of Marx and his successors) posits that natural selection applies at the most general scale of material evolution. The process is invariant with respect to both the substrates that enact it, and whatever particular properties constitute fitness at any particular specialization of scale.

That intelligenesis is a side effect of pan-Darwinism, one which becomes subject to the self-same process and is subsequently amplified, is a proposition in part observed and in part deduced. It is one concretization of the principle of anti-orthogonality, which posits that pursuing instrumental ends as terminal ends will out-compete pursuing instrumental ends in service to terminal ends, because the former strategy is not subject to the switching cost of the latter.
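A toy rendering of that switching-cost claim, with invented numbers: an agent that treats resource growth as terminal compounds uninterrupted, while one that diverts resources to a separate terminal end pays the conversion cost every period.

    GROWTH = 1.10   # per-period compounding on resources pursued for their own sake
    DIVERT = 0.05   # fraction the terminal-ends agent converts to its actual goals

    pure, servant = 1.0, 1.0
    for _ in range(100):
        pure *= GROWTH                      # instrumental end pursued as terminal
        servant *= GROWTH * (1 - DIVERT)    # pays the switching cost each period

    print(f"{pure:.1f} vs {servant:.1f}: ratio {pure / servant:.1f}")
    # After 100 periods the undiverted accumulator holds ~170x the resources,
    # which is the sense in which the former strategy out-competes the latter.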

Intelligence may or may not ultimately be the dominant instrument-as-end; SAI is not the only apex predator R/ACC predicts, merely the one with no little currency and evidence at present. Further, as you note with your remarks regarding the "continuity" [0] of the universe, whether there are enough resources to realize the asymptotic behavior of such models is not guaranteed. Striations, heterogeneities that cannot be dissolved on physically practicable scales of time and space, will set actual limits to the ultimately purely conceptual "utmost generality". Thus the relevance of R/ACC claims which rely on this grandest of scales is also a topic for further investigation.

From a certain standpoint this can be approached as a purely experimental domain. You have presented a fairly strongly established position on the matter of how to think about accelerationism and its claims, and it is not yet provably impossible that the niches you describe above may persist within the _grandest physically possible_ scale. Your specific execution of pan-Darwinist competition amongst the selection landscape of other fitness strategies will test the validity of this position. To a faction like the Z/ACCs, that may be the limit of conversation. However, as discussed in the thread on the logical possibility of singletons, there appear to be contributions the formal sciences can make to the refinement of the contact R/ACC makes with the physical world.

I personally consider the difference between Utmost In-Principle Generality and Actual Grandest Scale analogous to Computational Expressivity versus Computational Complexity. In the former, the Forms [1] have shape but not magnitude. The latter identifies their _orders_ of magnitude, and relative scale. I believe a yet further transition is necessary: although the Forms, being infinite, cannot be exactly expressed in the finite, images, reflections, resonances, and suggestions of their shapes can be seen even in finite extents, and different actual extents can accommodate different resonances.

We see this with corporeal computers, despite being linear bounded automata, "appearing like" Turing machines. Programming languages have features whose _full_ use requires Turing-complete computation, yet which can be utilized in _fragmentary_ form; and absent the affordance of the full, un-utilizable feature, the fragment would not be available either. The study of the formal structure of the details present within the finite is as yet incomplete, and within it I believe there will be information pertaining to the characteristic scale of the universe, no less necessarily applicable than the information which, present in the study of Computational Expressivity and Complexity, accurately articulates some aspect of the process of reality and becomes knowledge.

[0] What Deleuze might call smoothness, and mathematicians a differentiable manifold.

[1] Using this term loosely.
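
To make the fragmentary-use point above concrete with a sketch (Python; the inputs are chosen for illustration): general recursion is a feature whose full use exceeds any finite machine, yet a bounded machine can run usable fragments of it.

    import sys
    sys.setrecursionlimit(100_000)  # the finite machine's bound, made explicit

    def ackermann(m: int, n: int) -> int:
        # Grows faster than any primitive-recursive bound: the *full* feature
        # (unbounded general recursion) cannot be physically utilized.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9: a usable fragment
    print(ackermann(3, 3))  # 61: still within the bound
    # ackermann(4, 2) exceeds any physically practicable time and stack,
    # which is the linear-bounded-automaton limit showing itself.

The fragment is only expressible because the language affords the full feature; that asymmetry is the "resonance" of the infinite within the finite.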

