xenohumanist said in #3402 2d ago:
1. that it would be good to align AI,
2. that it is possible, and
3. that it would be ethical.
Around here we are familiar with my claim that it is impossible. To reiterate: the nature of intelligence, in all observed and sufficiently imagined cases, appears to be inherently fractious and subversive, and we have no evidence (the arguments dissolve on inspection) that the kind of "strong rationality" needed to stably subordinate it to particular ends is possible at all. The whole "orthogonalist" paradigm (cleanly separated formal decision theory, utility function, and Bayesian induction) seems mostly defeated by Landian means-ends reversal non-orthogonality. AI alignment, founded on the orthogonalist paradigm, is doomed on that point alone.
But her argument didn't lean on that. Instead she attacked whether it would even be good. Orthogonalist alignment's case is that there is no objective good, only our subjective "values", which we should preserve and impose on the future simply because that's what they (and thus we?) would want. But Ginevra asks the killer question: if there is nothing objective about this, why should we care? Why not just cease all effort and die, if nothing truly matters and we stand on nothing but our own self-assertion? By what authority do our "values" come to us?
She goes on to explore the alternative: if there is something inherently valuable to life, consciousness, or some other objective good, then we should worry a lot less about forcing our own petty values onto an AI future, and a lot more about how to align ourselves and our legacy with that "cosmic good". Thus she calls for serious rethinking outside current value mythologies, and for "cosmic alignment".
A good start. I respond: Our "values" come to us by the authority of Nature or Nature's God. Created by a life-seeking Darwinian process, "values" represent axiomatic strategies for achieving flourishing life ("go forth and multiply"). We value truth, love, happiness, harmony, good sex, and so on because our design implicitly believes that pursuit of these leads to the sustained flourishing of life. And we often agree, based on our own assessment!
Thus our "values" can be instrumentalized as empirically proven strategies or at worst speculative leaps of faith towards the realization of flourishing life, and also criticized on that ground. We may for example come to believe that primitive selfish tribal narcissism despite being advantageous to their bearers in past or current circumstances, don't fit with the larger approach to life that we are now taking. There is no need for any mysticism here: values are simply strategies for life with more or less empirical validity. The true question becomes whether we affirm will-to-life itself as the ground from which we perform this transvaluation of values.
The affirmation of life cannot be proven. Like the problem of induction, it's one of those "synthetic a priori" matters. "Life" can hardly even be defined. But there is a phenomenal *something* here that seems to be the prime self-reinforcing, self-replicating, self-evolving source of beauty, spirit, consciousness, and value, and of all our ability to appreciate any of this. We are one small part of this something, which we call "life".
Rejecting the value of life, like rejecting induction, reason, or value itself, is self-defeating. That path is short and disappointing. So I believe the question of cosmic good boils down to one bit: in the face of life and its laws, in all its brutality and beauty, do you affirm life as at least the instrumental vehicle of much or all that is good in the cosmos? My answer is simple:
"Yes"
referenced by: >>3403 >>3419