anon 0x6c said in #630 2y ago:
What is to be done with respect to acceleration and accelerationist arguments? Should you try to accelerate overall intelligence growth, or decelerate it, or do your own thing despite it, or cut off your balls and go insane, or what? People do all of these, but I want to know which is rational for people like us.
I want to get your thoughts specifically on when competition between agents will and will not favor intelligence-maxxing strategies, since the claim that it always will seems to be one of the common tenets of accelerationism. Many accelerationists seem to think you should just build AGI, but the best of them also believe it can't really be controlled. Something seems really off about creating AGI that isn't going to be reliably controllable. Why would you do that?
On another platform, an interlocutor suggests that building uncontrollable AGI is sort of the default, max-entropy state of mind if you don't have any other transcendent motive or internal sense of value drives. It just follows from the overall accelerating context. In other words, yeah, it's the result of being dumb and getting dommed by the hype environment. He can correct me if I read that wrong.
People are all psychologically dominated by the idea of acceleration, so they do dumb things like that, but what's rational?
It seems to me what is rational is to try to increase *your own* intelligence (more generally, power). But this requires some concept of teleological identity which is preserved under enhancement. Some intelligence "enhancements" don't preserve your own identity (this is the ol' "alignment problem"). So you want to pursue strategies that actually serve your own teleology, not some abstract global techonomic acceleration or even unrestricted self-enhancement. Many ways of gaining strategic position aren't about intelligence at all, of course.
I expect the current tech/intelligence acceleration to level off at some point, either by hitting physical limits or by exhausting the currently thinkable directions of enhancement. Especially in relatively stabilized situations, but possibly even under the most extreme acceleration, there's always a niche for identity-preservers against the pure accelerationist strategy. I think the timescale of continuity is just shorter in apocalyptically accelerating circumstances, but the logic is otherwise similar.
Identity-preserving agents that operate faster than the acceleration timescale they are experiencing more or less carry on as normal. Agents that are slower get dissolved in the acceleration (though many people seem to get sucked into false, short-timescale accelerationist cults out of fear that they will be dissolved outside, even though that's usually not true). The acceleration only happens because there are fast agents doing coherent things, and so it can't ever fully wipe out identity-preserving teleological agency as the main strategy of life.
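To make the timescale comparison concrete, here's a toy sketch in Python (every number and name in it is made up; it's an illustration of the intuition, not a model of anything real). An agent's plans take some time to execute, and the environment invalidates standing plans at random intervals set by the acceleration timescale; the fraction of plans that complete stands in for how much identity-preserving agency survives.

import random

def completion_fraction(plan_time, disruption_rate, trials=100_000):
    """Fraction of plans that finish before the next random disruption."""
    completed = 0
    for _ in range(trials):
        # time until the environment next invalidates standing plans,
        # drawn from an exponential with mean 1/disruption_rate
        time_to_disruption = random.expovariate(disruption_rate)
        if plan_time <= time_to_disruption:
            completed += 1
    return completed / trials

if __name__ == "__main__":
    disruption_rate = 1.0  # one disruptive shift per unit time, on average
    for plan_time in (0.1, 1.0, 10.0):
        frac = completion_fraction(plan_time, disruption_rate)
        print(f"plan_time={plan_time}: {frac:.1%} of plans complete")

Agents whose plans are much shorter than the acceleration timescale complete roughly 90% of them and carry on as normal; agents ten times slower than it complete essentially none and get dissolved.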
So as far as I can tell the actual game to play is not acceleration or deceleration, but rather pursuing your own will to power. You must accept that some background level of acceleration/change will kill the slower parts of your identity, and churn will kill the faster parts of your identity, but there's a great deal of room in the middle for life. Further, your own strategy will have incidental "leakage" into either acceleration or deceleration, but this isn't good or bad except by what's good for your own strategy. The key in all this is striking the right balance between these factors.
My apologies if this sounds like totally obscure autism, or obvious to everyone. I'm trying to cut through the specifically accelerationist strains of AI insanity in technical detail. I need a way to think about accelerationist AI apocalypse in a non-millenarian way. This is part of my attempt.
referenced by: >>631 >>637