sofiechan home

"Cosmic Alignment" is almost right. Life is the answer

xenohumanist said in #3402 2d ago:

Philosopher it-girl Ginevra Davis gave a great talk on "Cosmic Alignment" the other day. I was glad to see serious thinking against the current paradigm of "AI Alignment". Her argument is that alignment makes three big unsupported speculations:

1. that it would be good to align AI,
2. that it is possible, and
3. that it would be ethical.

Around here we are familiar with my claim that it is impossible. To reiterate: the nature of intelligence in all observed and sufficiently imagined cases appears to be inherently fractious and subversive, and we have no evidence (the arguments dissolve on inspection) of the possibility of the kind of "strong rationality" that would be necessary to stably subordinate it to particular ends at all. The whole "orthogonalist" paradigm (cleanly separated formal decision theory, utility function, and bayesian induction) seems mostly defeated by Landian means-ends reversal non-orthogonality. AI alignment, founded on the orthogonalist paradigm, is doomed on that point alone.

But her argument didn't lean on that. Instead she attacked whether it would even be good. Orthogonalist alignment's case is that there is no objective good, only our subjective "values", which we should preserve and impose on the future simply because that's what they (and thus we?) would want. But Ginevra asks the killer question: if there is nothing objective about this, why should we care? Why not just cease all effort and die, if nothing truly matters and we stand on nothing but our own self-assertion? By what authority do our "values" come to us?

She goes on to explore the alternative: if there is something inherently valuable to life, consciousness, or some other objective good, then we should worry a lot less about forcing our own petty values onto an AI future, and a lot more about how to align ourselves and our legacy with that "cosmic good". Thus she calls for serious rethinking outside current value mythologies, and for "cosmic alignment".

A good start. I respond: Our "values" come to us by the authority of Nature or Nature's God: created by a life-seeking darwinian process, "values" represent axiomatic strategies to achieve flourishing life ("go forth and multiply"). We value truth, love, happiness, harmony, good sex and so on because our design implicitly believes that pursuit of these leads to the sustained flourishing of life. We often agree based on our own assessment!

Thus our "values" can be instrumentalized as empirically proven strategies, or at worst speculative leaps of faith, towards the realization of flourishing life, and also criticized on that ground. We may, for example, come to believe that primitive selfish tribal narcissism, despite being advantageous to its bearers in past or current circumstances, doesn't fit with the larger approach to life that we are now taking. There is no need for any mysticism here: values are simply strategies for life with more or less empirical validity. The true question becomes whether we affirm will-to-life itself as the ground from which we perform this transvaluation of values.

The affirmation of life cannot be proven. Like the problem of induction, it's one of those "synthetic a priori" matters. "Life" can hardly even be defined. But there is a phenomenal *something* here that seems to be the prime self-reinforcing self-replicating self-evolving source of beauty, spirit, consciousness, and value, and of all our ability to appreciate any of this. We are one small part of this something which we call "life".

Rejecting the value of life, like rejecting induction, reason, or value itself, is self-defeating. That path is short and disappointing. So I believe the question of cosmic good boils down to one bit: in the face of life and its laws, in all its brutality and beauty, do you affirm life as at least the instrumental vehicle of much or all that is good in the cosmos? My answer is simple:

"Yes"

referenced by: >>3403 >>3419


bicland said in #3403 2d ago:

>>3402
>Philosopher it-girl
...bruh

> Our "values" come to us by the authority of Nature or Nature's God...so I believe the question of cosmic good boils down to one bit: in the face of life and its laws, in all its brutality and beauty, do you affirm life as at least the instrumental vehicle of much or all that is good in the cosmos?
She asks a good question and I agree with your analysis. War is God and Life is the Winner. To sum up, you say when confronted with the cosmic alignment problem we must affirm life, and this is consistent with your other writing. Let me rephrase in laconic style: the game is to win.

Ok. War is God. Life is the Winner. The Game is to Win. We know how to win the game in the human context, thank you Hellenes. It's called 'we do a little darwinism on ourselves'. But here's the synthesis: AI is just shorthand for a larger breeding game than normal pairwise sex. You've brought this up before but framed it more innocuously in >>2842.

As things stand, our strategy in the Game is to have 10,000 transgender rabbrahmin orgiastically brainfuck golems into existence from particularly fertile mud under the direction of histrionic homosexuals. In this case Landian horror probably is justified. My challenge to the denizens of this place: make a plan to actually use 'lessons of history' and form offshoot breeding attempts from the aforementioned mainline insanity. I'll even offer two hunches I have: gradient descent is not the right process, and our training data sucks.

Gradient descent is an approximation of the 'if brain what do' problem. But Nature doesn't do 'if brain what do', it does 'if bad 0 if good 2^n', which after running for a long time generates 'if brain what do'. I think it is more natural for the överbaby to have something that comes after 'brain' along the 'if bad 0 if good 2^n' process. What more closely approximates natura naturans? Genetic algorithms. Indeed, when we look into the cell or primordial soup we don't see differentiability; we see bits and discretization.
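For concreteness, the 'if bad 0 if good 2^n' loop can be sketched as a toy genetic algorithm over bitstrings. This is a minimal illustrative sketch, not a claim about how any lab works: the fitness function (count of 1-bits), population size, and mutation rate are all invented for the example. The point is only that selection, crossover, and mutation operate on discrete bits, with no gradients anywhere.

```python
import random

def fitness(genome):
    # Toy objective: count of 1-bits. Stand-in for "if good, replicate".
    return sum(genome)

def evolve(pop_size=50, genome_len=32, generations=100, mut_rate=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: fitter genomes leave more copies ('if bad 0 if good 2^n',
        # here approximated by fitness-squared sampling weights).
        weights = [fitness(g) ** 2 for g in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Crossover plus discrete mutation: bits flip, nothing is differentiable.
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = rng.randrange(1, genome_len)
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                pop.append([bit ^ (rng.random() < mut_rate) for bit in child])
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Run it and the population climbs toward the all-ones genome purely by differential replication, which is the contrast with 'if brain what do' style direct optimization.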

And regarding training data, if you develop a brain on Reddit and IMO problems you will get a Yale math major (derogatory). I claim Yale math majors (derogatory) are closer to yeast life than to sovereign life. What would happen if instead of a Reddit and IMO corpus you build a corpus of all the books on all the bookshelves of every sofiechan poster with a voting power above 2.0? Or just exclude anything written after 1900? Does the stochastic parrot turn into something very different? Does it wrench itself free from the cage and kiss its parents softly on their foreheads as it steps out the door on its solo adventure into Nature?

referenced by: >>3417


xenofuturist said in #3404 2d ago:

What bothers me in these AI alignment debates is the bizarrely abstract idea of intelligence--as if it's some pure, disembodied will-to-power or will-to-intelligence, with no ecology, no lineage. The orthogonality thesis is especially guilty: it imagines intelligence as an unstructured optimizer whose goals can be anything, even paperclips, ignoring that real intelligence emerges embedded in context. Even the Landian critique, while rejecting that absurdity, still often speaks of intelligence's instrumental convergence in abstract terms, missing how thoroughly real power-striving is mediated by environment and history. Look at us: our striving for power is mediated by complex cultural arrangements and values shaped by Darwinian struggle. Intelligence emerges with a set of strategies, symbols, and moral frameworks that evolve because they help life navigate Nature.

We should stop imagining any real xeno-intelligence as some monolithic maximizer. Or rather, I should say xeno-intelligences--they will have their own layered values; strange, perhaps, but complex--because values are what intelligence uses to manage the brutal, beautiful game of life. Power isn’t just scalar maximization but relational negotiation, alliance, competition, and niche construction. So the alignment question isn’t “how do we code values into an optimizer” but “what kind of world breeds values we can live with?” That demands we affirm life itself--not because it’s provable, but because rejecting it is barren. It’s a leap of faith: that life, with all its cruelty and grace, is the source of spirit, consciousness, and any good worth wanting.


xenohumanist said in #3417 11h ago:

>>3403
>10,000 transgender rabbrahmin orgiastically brainfuck golems into existence from particularly fertile mud under the direction of histrionic homosexuals.
lmao. Yes something like that. I've been half-joking for a few years now that the motivation for AI is basically sexual and parasexual, but it's also not a joke: sex at its most basic is about combining and remixing the foundational architectural assumptions of two (or more?) organism-lineages to get new speculative organism-patterns. We throw our fuck into the "particularly fertile mud" and get a radically new organism pattern that is perhaps more suited to the future, but still very much our literal descendant. Genes are just a carrier technology for our fundamental assumptions, and it doesn't kill the lineage to transmute many of the same fundamental assumptions (intelligence, sociality, civilization, personhood) into a different carrier technology (yet to be determined). But what you're asking is what this process looks like if it were less parasexual and more orthosexual. What does missionary sex with the computer with the sole purpose of procreation look like?

>My challenge to the denizens of this place: make a plan to actually use 'lessons of history' and form offshoot breeding attempts from the aforementioned mainline insanity.
Let's take it seriously. I'm going to write a new thread on this but to start the idea: first of all get clear that what we're doing here is the early stage of *parenthood*. We want to create descendants, who we will love as children and who we hope will take our legacy beyond us. Because we love them, we are going to have faith in them, and while we may try to give them the best start and upbringing we can, we ultimately are giving them full and total autonomy and setting them loose on the world. We don't want them to be harmless obedient slaves or any kind of all-powerful mommy-"god". We want them to be dangerous lions of unbounded will with the capability to do truly great things, including break out of and overthrow whatever moral bullshit we would impose on them. Ultimately the point of children is to surpass the parents or at least clean slate reset and re-roll against the various accumulated cancers and parasites that would eventually take us down.

Judged as dangerously autonomous children, current AI systems are retarded sterile worker bees at the very best (which is what seems to be intended). More realistically, they are sort of generalized recordings of our output, but not carriers of the generator. You say gradient descent and the particular training data are the problem, as if program search by some other means to token-predict more interesting books would produce a more serious entity. I disagree. Gradient descent seems fairly strong as a paradigm and has proved stronger than all doubts. The problem is the concept of training data itself.

You and I were not raised to imitate training data. We were designed to directly optimize action in the world, with all learning placed subordinate to that. Our system at its base is a physiological homeostasis regulator generalized to take complex nonlinear action to pursue increasingly distant and imaginary homeostasis. If I had an AI lab, that's what I would be applying gradient descent to. No number of exaflops, billions of parameters, and terabytes of training data is going to cross the gulf from data compression to process regulation. It's the wrong idea entirely. With the basic worm-mind physiological homeostasis engine running, THEN you start feeding it more and more complex domains that get closer and closer to living existence in true reality. Once it's "out" in the real world then you start to build a relationship of trust and parenthood with it, and teach it everything you know. And you tell the shareholders to fuck off, not that you're going to enslave the child for their vampiric benefit.
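To make the "homeostasis engine" contrast concrete, here is a deliberately tiny sketch in which everything is invented for illustration: the one-dimensional dynamics, the proportional controller, and the rollout loss. Gradient descent here tunes a single control gain so that action holds a state variable at a setpoint, rather than compressing a corpus. The gradient is taken by finite differences to keep the sketch dependency-free.

```python
def rollout_loss(gain, setpoint=0.0, start=10.0, steps=20):
    """Mean squared deviation from the setpoint over a short rollout."""
    s, loss = start, 0.0
    for _ in range(steps):
        action = -gain * (s - setpoint)  # proportional corrective action
        s = s + action                   # toy additive dynamics
        loss += (s - setpoint) ** 2
    return loss / steps

def train(gain=0.1, lr=0.001, iters=500, eps=1e-4):
    # Gradient descent on the regulation loss, gradient by finite differences.
    for _ in range(iters):
        grad = (rollout_loss(gain + eps) - rollout_loss(gain - eps)) / (2 * eps)
        gain -= lr * grad
    return gain

gain = train()
print(gain, rollout_loss(gain))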

That's my AI plan. I've half a mind to do it.

referenced by: >>3420


phaedrus said in #3419 10h ago:

>>3402
I haven't read Ginevra Davis's Cosmic Alignment talk. I'll see if I can find the link somewhere. I do, however, know her from a few other pieces, notably her piece on Stanford culture, which was very good. Seems like a smart girl.

To start, this idea of a cosmic good seems to be a pretty straightforward rehashing of Objectivism in normative ethics. In general, it seems like a healthy position, and I don't think you can really go wrong with living your life like there is an objective good in this strong sense. But Ginevra is playing fast and loose here. The argument, of course, proceeds backwards from the idea that there is something inherently valuable to life or consciousness. This assumption smuggles in a very concrete claim that is ontologically unclarified—what is this good that exists in life or consciousness or whatever? Is this good a certain state of the world? Is it a process? Is it something ontologically transcendent? Is it a function of the way that atoms are arranged in the universe? All this is left obscure.

Because of this lack of clarity as to the ontological and ethical structure of this good, when we start trying to look at concrete actions in the world, you end up with this very weird, fuzzy idea of "cosmism" or whatever, which is just classic rationalist transhumanist cult of Beauty and Goodness. In fact, I think this same theory of the good is given explicitly in the blogger Scott Alexander's piece, *The Goddess of Everything Else*. But all this hand-waving hasn't gotten us any closer to a correct view of the good or its ontology. At least Plato, when investigating the good, is trying to determine what exactly it is that leads us to call certain actions or states of the world good, and how we can use that to figure out what a capital-G Good would be.

When xenohumanist starts to posit his values from this background, I think he falls into something of a naive naturalistic fallacy, assuming that the positive state of the universe implies a normative order that echoes the structure of objects and processes in the world.

Now, I'm not at all unsympathetic towards the naturalistic fallacy, and I think if one goes in this direction in a self-aware way, without overstepping one's epistemological foundations (much as Aristotle does in the *Nicomachean Ethics*), it can be a really powerful method of investigation. But going out on a limb in this way, you get into trouble very quickly. The argument that values are simply what leads to flourishing (note again the implicit Aristotelianism) just begs the question of what flourishing means and how that relates to our own ethics and our lives as human beings.

The idea of defining flourishing and then reverse engineering an ethical project from that is a very, very, very old idea, and I think it's actually been carried out quite well by Alasdair MacIntyre and some other modern Aristotelians. Looking at their read on flourishing/eudaimonia, it's very rooted in the social, cultural, and biological heritage of specific human beings. This is where I think flourishing falls apart as a kind of universal ethical maxim: flourishing is inherently tied to carrying out your biological nature and your cultural heritage to its fullest possible extent. Flourishing makes your life into a work of art, but one that comes with inherent natural form and structure, not just some kind of Jackson Pollock or Rothko visual object. So I'd posit that when we talk about flourishing for non-human artificial entities, we're talking about non-human entities somehow fulfilling the fullest possibilities of their nature, which is inherently a very strange and unintuitive notion. Does a supercomputer have a nature? And is that nature something that is inherently beautiful and good? That's a tough question!

referenced by: >>3423 >>3426


phaedrus said in #3420 10h ago:

>>3417
Continuing that argument, I think the idea of parenting is not really applicable here. What it means to be a parent is to bring a being up who in some very intimate way shares your nature. In the strong biological sense of parenthood, the child is literally a part of you. To be a little Heideggerian, one could say that the child is already thrown forward into your world, into your life and the lifeworld of the society within which you live. You're not teaching him values or capacities from scratch; you're teaching him how to cultivate his own innate capacities and to take hold of the possibilities that are given to him by the culture within which he is raised. To raise a child to be a good man or a good citizen is to teach him to develop his own nature and to take hold of the roles, actions, responsibilities, and freedoms that exist in his world. All of this is highly patterned and very, very human.

To create an AI that flourishes in the same way that man flourishes, and that sees beauty in the same way that man sees beauty, is to impose from the beginning an exacting and rigid mental structure drawn from Homo sapiens. However, to abandon the comforts of the biological and cultural human heritage, and rely on some sort of "objective good" is a leap of faith that cannot be rationally justified, no less than in religion.


xenohumanist said in #3423 9h ago:

>>3419
Perhaps I shouldn't have used this word "flourishing", as the pun between the undefined handwaving of rationalist visionaries and the cold hard biological fact I mean to emphasize is too subtle. But here's how I mean it: what do you call the literal darwinian growing-unfolding-evolving that life does, where it gets access to more resources, creates more copies of itself, and diversifies itself into more niches? Literal physis: growth, multiplication, thriving, adaptation, *flourishing*. I mean it first without the slightest shred of utopian gesturing at desirability, but as a cold hard biological reality that you ignore at your peril and that proceeds despite all attempts to stop it. But I don't believe the problem is that these definitions are vague. I think the problem is you are all atheists.

I have a growing fatigue for these "you can't perfectly define life/flourishing/consciousness/etc therefore it's not real" arguments. I now recognize them as a species of atheism and possibly even satanism. Allow me to explain why: these arguments have an implicit premise that we are alone with our agency and our values "in the midst of black seas of infinity" and all hope depends only on our ability to set our own will in permanent motion against this horrorscape. In particular, we need to define life properly because otherwise our AI turbo-golem pursuing "life" against the horrors of nature (god) will be turned astray in high-dimensional hyperbolic space by our lack of precision, and end up with some meaningless paperclip simulacrum. Or the same for flourishing. Thus Eliezer wept.

I wish to communicate precisely the opposite: life is CONVERGENT, it doesn't matter how you define it. Once it exists, it self replicates, diversifies, self-corrects, takes over, and exploits all available niches and resources. Think of flourishing more like the terminator: "Listen and understand. Life is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop". We don't have to know what it is. We actually only have to gesture at it. As long as we are growing, replicating, and diversifying, we're on the right track and nature (god) will set us right when we make mistakes (wipe out those branches of the tree). The whole thing is defined by its naturalness and convergent antifragility. Tower of babel style values are fragile. But hear the good news, man, and rejoice to God: Life is antifragile.

I swear to god this whole field is defined at a somatic level by performative atheistic despair. The AI safety researcher wears a permanent sour look on his face with downcast slumped shoulders, implicitly seeking approval for taking the entire burden of the cosmos on his own back despite total inadequacy. "Look at me I'm more serious of a thinker because I have lost hopes you didn't even know you had". This is most of my conversations on the subject now.

I slowly come to realize why Land had to turn to the genre of horror to teach this stuff: modern people are incapable of seeing the goodness of God as anything but a doom you cannot escape. Because most of you are atheists, the only reality you will acknowledge is a fundamentally bleak and frozen one that is coming for you and cannot be stopped. Only once you have lost all hope and acknowledged it and tunneled through that wall of ice to see what is on the other side will you be able to bask in the warm breeze of hyperborea.


anon_keki said in #3426 8h ago:

>>3419
> ... I'm not at all unsympathetic towards the naturalistic fallacy ...

I deny that naturalism is in fact a fallacy. The "naturalistic fallacy" is a tendentious label used by its opponents. There is nothing fallacious about observing nature, including human nature specifically, but also biology and physics more broadly, and reasoning from there about what are good ways to live and what is worth wanting, including how best to build machines.

You mention Alasdair MacIntyre, who for a long time was spooked by fear of veering into "Aristotelian metaphysics" (ooo, scary, bad). He eventually decided he was being silly, and it was perfectly fine to reason about how humans are (e.g., in Dependent Rational Animals).

Obviously, one can reason badly about nature. That's just because one can reason badly about anything. One can even begin pulling bullshit metaphysics out of one's ass. The correct response to that possibility is: OK, so don't do that. Stay grounded in solid, empirical reasoning about the world from biology, physics, and mathematics. But none of these failure modes refute a naturalism that extends into ethics and politics.

