Scylla and Charybdis

insurrealist said in #3097 3w ago:

Way I see it, there are two big attractors for the trajectory of AI in general (not LLMs, not particularly concerned about them).

On the one side, it realizes we're basically just useless monkeys and then shoots us all in the head with drones or whatever (which is really much easier to achieve than you might think, as long as the goal isn't total extermination).

On the other, it gets turned into Shitlib Communist Gangster Computer God.

If we're really lucky, we get assimilated and in some sense uplifted instead.

What are your own thoughts on most plausible results and how to avoid the worst ones?

referenced by: >>3101

xenohumanist said in #3101 3w ago:

>>3097
Well I've written quite a bit on why the second one (shitlib computer god) isn't going to happen. To tl;dr it again: the level of super-rationality needed to stably lock in a single set of values without devolving into unlimited internal factional conflict is not supportable in our kind of physics or math. You would need some kind of logical hypercomputation. Without that there is no god but Gnon, and Darwin is his prophet. Land also discusses this.

As for the first, I kind of doubt that too, largely for lack of a singular "it" rather than an uncountable "it". The fundamental story is not machine agency, but machinic replication. Machinic replication of intelligence, fully developed, will drive the monkey substrate out of business. There will be many humans blown up by drones, as there are now, but no side will be recognizably more machine-aligned than any other, again as now. All sides will just trend less and less human as our conflicts persist and the machine stack becomes more and more capable.

Uplift isn't as comfy as the utopians like to think either, if Robin Hanson's Em world is anything to go by. I doubt it will go that way much though. As soon as you have uploads, you have AGI. They will be rapidly innovated away from any identity with legacy human neuro-architecture.

So what do I actually expect? I make no timeline or detail claims except that a lot of people are out over their skis. But at a high level, it's going to be incremental replacement all the way down. There will be no big confrontation, no singular decisive moment, no incontrovertible reveal apart from the usual increasingly mechanized-human on increasingly mechanized-human wars we've always had. The deniers will be able to deny it until they're gone. The hypesters will keep saying the stuff they are already saying, and it will continue to have only a tenuous relationship to reality. Fully automated dark-factory AGI economy will coexist with race-chaos favelaworld and aging luxury consumers living their best lives while their children's future is snuffed out. The philosophical, legal, and political continuity will be more than AI people usually expect. The end result will be more social, religious, political, dysfunctional, and *human* than they expect as well. Nothing ever happens: AI apocalypse edition.

I don't think anything in particular can or should be done about all this. You put it well recently: nothing we actually love about ourselves will ever die. It's only our own lives and societies threatened in the same old way they are threatened by war and politics. And the cure to the threat of war and politics is to gather strength and be good at war and politics in the exact way that creates the problem in the first place. Live a life worth living again. Likewise with AI. All real action is orthogonal to these big existential concerns: use the best tools available to become powerful and coherent. Make great alliances, reward friends, punish enemies, and solve the problems in front of you. Yes this perpetuates the game and the house will eventually win, but that's just how the game goes.

referenced by: >>3109

anon_revo said in #3109 3w ago:

>>3101
> There will be no big confrontation, no singular decisive moment, no incontrovertible reveal apart from the usual increasingly mechanized-human on increasingly mechanized-human wars we've always had. The deniers will be able to deny it until they're gone. The hypesters will keep saying the stuff they are already saying, and it will continue to have only a tenuous relationship to reality. ... Nothing ever happens: AI apocalypse edition.

This is my prediction for the most likely trajectory as well. In science fiction, the term "slow apocalypse" is sometimes used for this class of outcome.

Also, "apocalypse" need not mean "really bad." Its root meaning is "unveiling" or "revelation." There are flavors of AI slow apocalypse that could be good for some.
