Ideology is more fundamental than *just* post-hoc rationalization

anon 0x461 said in #2586 3w ago:

Mosca argued that every ruling class justifies itself with a political formula: an ideological narrative that legitimizes power. Raw force alone is unsustainable; a widely accepted narrative makes dominance appear natural. Internally, shared ideology unifies the ruling class, preventing fragmentation and reinforcing their collective mission.

But these formulas aren’t arbitrary. They shape how power is pursued, justified, and contested. Reducing ideas to mere rationalizations strips away their generative role. Even if power is the driving force, ideas define its range of possibilities, constraints, and trajectories. They aren’t just decorations atop power but integral to its evolution.

Time matters. Ideas outlive their material origins and continue structuring power relations. A ruling ideology shapes how actors understand themselves, their strategies, and their alliances. Different self-understandings lead to different pathways, altering history.

Power isn't blindly pursued or imposed. The way it is framed, interpreted, and enacted is heavily context-dependent. Ignoring this turns elite theory into a kind of vulgar materialism that explains little. Rapid ideological shifts only happen in crises, and crises are when political and ideological power matter most.

The common counterargument is that elites can swap ideologies at will, bending them to self-interest with little consequence. My intuition is that this is false. If power were so fluid, why does history show regimes collapsing when they attempt to abandon or radically shift their legitimizing narrative? Elites are constrained by the ideological traditions they inherit, the institutions shaped by past ideas, and the expectations of the masses. When material conditions shift, elites struggle to construct a new legitimizing formula because ideology isn’t infinitely malleable—it’s bound by historical continuity. This is why ruling classes often hesitate, fracture, or fail when forced to reinvent their legitimacy.

Maybe I’m just coping as an intellectual, but the weight of ideology seems too real to ignore.

anon 0x465 said in #2591 3w ago:

You are absolutely right OP and this always bothered me about the more Marxoid accounts of ideology. Power has structure and motivation. That comes from ideology. Ideas are not just window-dressing on "raw" power but organizational principles and ideals to strive for. There is no such thing even as raw power divorced from ideology. It's always mediated through particular structure and motivation.

This is connected closely to what I often say about the necessity of leaps of faith. It's easy to make a reading of life and intelligence that it's all about growth and will-to-power, but this glosses over the very essential matter of strategy. HOW are you going to achieve this waxing of will? You need some particular means, which often involves quite a lot of investment in a particular way of doing things, and requires ruling out other things. Strategies can be mutually exclusive. Furthermore, you often don't know your strategy is actually going to pay off until it does. You can't just empirically hillclimb. You have to take a leap of faith on some particular way of operating.

So too for politics and power and ideology. Ideology is, from the perspective of the coalition overall, a pre-rational leap of faith on particular commitments and social technologies. Often (always?) it is particular taboos that prevent certain thoughts and actions from ever being socially expressed. These commitments structure how power is going to be pursued and re-invested.

The "materialism" comes in when these commitments and the behaviors they produce meet reality. Whatever system you end up with has to actually work in practice, and if it doesn't, competitors start to gain ground either internally or externally. We've recently seen this with maximum DEI left acceleration blowing up too much trust and functionality to continue, so America is undergoing the difficult shift to a more right wing ruling ideology.

This is something I'd like to see more represented in accelerationist thought as well. "Intelligenesis" as the ultimate self-justifying terminal goal is all well and good, but it's basically a synonym for growth and will-to-power and has the same issue: it demands a particular strategy, and no candidate strategy is knowably optimal; each must be taken on faith (hopefully well-informed and well-inspired).

anon 0x461 said in #2595 3w ago:

>The "materialism" comes in when these commitments and the behaviors they produce meet reality. Whatever system you end up with has to actually work in practice, and if it doesn't, competitors start to gain ground either internally or externally. We've recently seen this with maximum DEI left acceleration blowing up too much trust and functionality to continue, so America is undergoing the difficult shift to a more right wing ruling ideology.

This is exactly why material reality and ideology can't be separated. For us theorycels, they are two levels, but in practice, they interpenetrate to the point of being indistinguishable. Ideas only persist when they are enacted, and every enactment is a decision, a moment where thought becomes structure. The ruling class doesn’t just “have” an ideology; it decides in ways that shape the world and, in turn, constrain future decisions.

A subject (whether an individual, a group, or a system) structures the world through decisions, not as a self-created construct, but as a framework that molds both its own cognition and the external social and technical order. A decision is not just a thought; it is an act, and an act is what binds ideology to reality. Without materialization, an idea is mere fantasy, no different from a dream. This is why ontology precedes epistemology—decision-making shapes the subject before it even begins to reflect. And once those decisions accumulate, they determine what is even perceivable as reality.

I did not think of the link to "super intelligence" but it is a good point. Accelerationism does seem to miss this by focusing on overly abstract "intelligence". What does will to power really mean? What shape does it take? How will it be mediated through the interpretation of the world taken by an intelligent system? Complex systems such as us have evolved symbol manipulation and the world of spirit-intellect, which is not just a by-product but fundamental to evolutionary group strategies and survival. Is there a reason to think "machines" will not be subject to this logic? I don't know.

referenced by: >>2598

anon 0x465 said in #2598 3w ago:

>>2595
>Is there a reason to think "machines" will not be subject to this logic? I don't know.
My view is somewhat unique here in that I take a basically anthropomorphist interpretation of "machine" intelligence. There are going to be some obvious major differences, like freely copying and branching minds, more independence between hardware and software, ability to disaggregate different kinds of compute and cognitive functionality. But the basic problems of reflective social action in the world are going to remain. Politics will be recognizable. Life will remain irrational or supra-rational at its foundation. Reflection will still open up a whole can of worms for ethics. The demands of sociality will shape intelligence into agentic persons with persistent personality and reputations. Most relevant here, ideology and even religion will be part of life and politics.

I don't want to derail this thread into scifi posthumanism speculations, but I find it useful as a thought experiment to get at what is essential about life and politics, and what is merely ape. I find deep analogies between the natural laws governing life as such, political organization, individual humans, and posthuman superbeings.

referenced by: >>2599

anon 0x461 said in #2599 3w ago:

>>2598

Could you think of a robust argument against anthropomorphising machine intelligence? The biggest Landian contention is that WTP can be raw or pure reduction of local entropy through extropy, though this is still too theorycel or abstract for me.

referenced by: >>2602 >>2650

anon 0x467 said in #2601 3w ago:

>Could you think of a robust argument against anthropomorphising machine intelligence?

It's subject to very different evolutionary pressures and also has access to radically different affordances, for one, so a very lax demarcation of anthro would be required. I do think that a strictly thermodynamic view misses crucial aspects of and difficulties for intelligence. Land has long had a distaste for stratification and conceptual representation, and that has served him quite well, but it can overshoot the mark.

anon 0x465 said in #2602 3w ago:

>>2599
I don't know what you mean by WTP (I'm not well educated in these matters). As for the best arguments against anthropomorphism, there are many good ones. The first and best is simply that the space of mind architectures or even more generally optimizing processes is very large, and humans occupy only a tiny fraction of it. The others are likely to be alien and inhuman. Land's equation of capitalism and AI is a good one here, basically postulating an inhuman ambient intelligence that has a lot of pseudo-agency over history but basically isn't anthropomorphic (though Teilhard says it is). But I say pseudo-agency because history and the capitalist process is still only the consequence of the action of personlike agents pursuing their own will to power and their own ideas. These agents are needed because the pure reduction of local entropy etc is not actually a direct shapeless thermodynamic force, but is only mediated through the action of living agents themselves animated by particular commitments to structure, ideology, etc.

And that's about where I diverge from the arguments against anthropomorphism. I think individual personlike agency is fundamental to the overall space of intelligence the same way the idea of the organism is fundamental to the overall space of life. The same arguments could be applied to the space of life architectures to suggest that we could have vastly different forms of life beyond the familiar space of organisms, but in actual practice life is arranged into organisms. The edges of organism-space have been fairly heavily explored, but there is a strong attractor back to organismlike arrangements. I don't think this is just an artifact of DNA/Protein life or anything either, as we see the concept of the organism re-emerge at multiple levels of abstraction (eg consider single celled vs multi-celled). I expect the same to be true of personlike agency. In fact I expect that it's not just an analogy but an identity.

Given the notion of a *natural* attractor towards personlike agency, and not just historical contingency (and recognizing many animals and many human organizations as also orbiting this same attractor), we can ask how much content this attractor has, and also how much of what we think of as the human condition is natural in this sense vs specific to the higher apes as apes. Self-authoring superintelligence is a good thought experiment at least because it is likely to have the natural content but be very diverse and alien on everything not dictated by nature. This probably deserves its own thread, but I believe the natural attractor towards organismal personlike agency has quite a lot of content, and contains almost everything we think of as being valuable in man (that is, such notions as love, curiosity, art, philosophy, politics, religion, compassion, eroticism, ambition, agency, morality, etc). That is, there is a natural attractor in life and intelligence as such towards the anthropomorphic image of god. Therefore the future AI world will still be conceptually humanoid, even if running on vast datacenters on alien substrates.

I'm not sure how rigorous it's appropriate to get here, but I expect there is something like the Church-Turing thesis for organismlike personlike agency: you can build one many different ways, but the phenomenon itself is much more universal, and once you build it, its behaviors are recognizably isomorphic on some level to every other way you could build it.

Another thing that has to be noted is that I believe the natural attractor of agency is much more like Nietzschean or classical man than it is like modern Judeo-Christian utilitarian/liberal/progressive man. This is why "safety" or alignment looks so difficult to those people: their worldview is simply against nature.

anon 0x482 said in #2650 1w ago:

>>2599
Briefly: anthropomorphized machine intelligence is too constrained to win against a free-form machine intelligence in a war. I think there's been plenty of evidence for breakthroughs and victories of free-form AIs versus AIs derived from human abstractions in the history of modern technology, starting with the Eurisko affair:
https://web.archive.org/web/20050308172043/http://www.aliciapatterson.org/APF0704/Johnson/Johnson.html

referenced by: >>2654 >>2658

anon 0x485 said in #2654 1w ago:

>>2650
Eurisko, fwiw, used a fairly straightforward genetic algorithm operating over human-readable data representations, most famously applied to fleet configurations for the Traveller RPG.

I'm not sure that counts as free-form. After all, the fleet configurations were literally in the same format as those used by human players of the game.
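For concreteness, here's a minimal toy sketch of that kind of setup (not Eurisko's actual code; the fleet fields and the fitness function below are invented for illustration): a genetic algorithm that mutates and selects configurations which stay legible to a human player the whole time.

import random

# Invented, human-readable fleet fields (stand-ins, not real Traveller rules).
FIELDS = {
    "hull": range(1, 11),   # hull size 1-10
    "armor": range(0, 6),   # armor rating 0-5
    "guns": range(0, 8),    # gun batteries 0-7
    "ships": range(1, 21),  # number of ships in the fleet
}

def random_fleet():
    return {name: random.choice(list(values)) for name, values in FIELDS.items()}

def fitness(fleet):
    # Stand-in evaluation: reward firepower and numbers, penalize going over budget.
    cost = (fleet["hull"] * 3 + fleet["armor"] * 2 + fleet["guns"] * 4) * fleet["ships"]
    power = fleet["guns"] * fleet["ships"] + fleet["armor"]
    return power - max(0, cost - 200)

def mutate(fleet):
    # Change one field to another legal value; the result is still a readable fleet.
    child = dict(fleet)
    field = random.choice(list(FIELDS))
    child[field] = random.choice(list(FIELDS[field]))
    return child

def evolve(generations=200, pop_size=30):
    population = [random_fleet() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the best half
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # the winner is still a legible dict a human player could read

The search can wander somewhere no human would think to go, even though every intermediate stays in the players' own format.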

Conversely, one could have an anthropomorphic robot whose neural network was completely uninterpretable.

So what does free-form vs. anthropomorphized really mean?

anon 0x465 said in #2658 1w ago:

>>2650
>anthropomorphized machine intelligence is too constrained to win against a free-form machine intelligence in a war.
ok but that's just a restatement of your assumption: that the anthropomorph is an unnatural contingency and not a natural fact. Let's not forget that man as we know him probably emerged out of optimization for war.

But more fundamentally, you've misunderstood me if you think I mean anthropomorphism as a constraint rather than a predicted result. Let's say your entirely machinic alien intelligence is superior at war relative to current human organizations. Two questions: who is driving, and how does the internal political economy work? Purpose-specific tools can be arbitrarily non-anthropomorphic because they have outsourced almost all of the necessity of actual holistic being to their user. It's much harder to imagine a system that has holistic being but not something like personality. If no one is driving, the system itself is going to have to deal with all these questions of philosophy and politics. That domain is what produces the anthropomorph.

Alien intelligence people get around this sometimes by asserting that an AI system will have no need for philosophy or politics: it knows its business (utility function) and simply executes with perfect rationality. It can coordinate perfectly with other AIs because they just modify each other's source code to perfectly commit to things, and away we go off into the future light cone of paperclips. But this places way too much, and entirely unfounded, faith in rationality. Actually there are a great number of operational and self-reflection problems that don't admit of straightforward optimization or proof, because they are about the nature of the problem, and the objectives, and the actor. The thinking we do on those problems is called philosophy. Philosophy emerges out of the impossibility of trying to wrap the mind around itself.

Then politics is the analogous problem in coordination. Actually there are a great number of coordination problems having to do with information, interpretation, verification, intention vs action, etc that again do not admit any rational straightforward solution. See Hayek's famous essay for one flavor of this. Out of the impossibility of enforcing on the enforcers or gathering all knowledge and action in one perfectly unified place comes the problems of politics.

So given philosophy and politics, (and similar arguments for other parts of what we might call "the human condition") I think you get agency, sociality, personhood, mortality, and all the other fun stuff that makes up being human. You don't get ten fingers and ten toes out of it, but who thinks that's what it means to be human?

referenced by: >>2666

anon 0x482 said in #2666 7d ago:

>>2658
I think even acts of intuitive human intelligence are often difficult to explain to other people; perhaps the metaphor of Cassandra is a good reminder of how genius is often recognized far too late:
https://en.wikipedia.org/wiki/Cassandra_(metaphor)

You raise a few good points about the importance of the connection between the human and the machine, which I completely agree with: for the foreseeable future, the conflict will be between amplified humans, not between humans and machines.

Yet, even amplified humans will be happy to use non-anthropomorphic forms of intelligence, leveraging the types of cognition that humans don't really do:
https://www.nasa.gov/technology/goddard-tech/nasa-turns-to-ai-to-design-mission-hardware/

If you play go, you'll have appreciated how different AlphaGo's games against Lee Sedol looked compared to the history of go games before it:
https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
