anon 0x48a said in #2662 1w ago:
Imagine a being who systematically questions and can rewrite their beliefs and values to ensure legitimate grounding. I think humans can and should do more of this, but you might more easily imagine an AI that can read and write its own source code and belief database. What epistemic ground does this being stand on?
Generally our beliefs about the world are justified by experience, credible sources, etc. They depend on assumptions like induction and social trust. Why should we believe those assumptions?
Induction is what I call an existential fact. It cannot be proven. It is self-consistent with experience, but if you didn't already believe it, you would never conclude it from experience or reason. This is the "no free lunch" theorem. But we do believe it. If we didn't believe it, we would die quickly (being, for example, unable to learn how to breathe). It is an existential fact because our mode of existence assumes it, whether or not it is true. For all we can prove, the universe could go up in smoke tomorrow. Nonetheless, we must act as if our experience will hold.
There are other existential facts. Generally I name these essentials: the relative reliability of memory, perception, and thought; something in the area of free will; and something in the area of value instinct.
Value is tricky. You could just assume your value instincts are correct, but on what grounds? Take Nietzsche seriously and transvaluate those values, anon. What grounds do you have for believing that any of your instincts are any good? Theists have it easy with belief in a benevolent creator. But why think the creator is benevolent (see the problem of evil)? What now? Evolution doesn't help. If it's random and valueless, so is that which it produced, unless you take the Nietzschean leap of faith that life and will-to-power are good in themselves. You can follow Yudkowsky that the totality of your own instincts is exclusively correct regardless of who or what created you, but then this leaves you with some fairly hopeless implications (you being both mortal and the only extant copy of The Good).
Nick Land's diagonalization cruelly renders help here by "means-ends reversal": riffing on the convergent instrumental drives of Yudkowskian nightmares, he notes that in practice, any being that continues its own existence will act as if growth of life and power (and in particular intelligence) is good, and may even endlessly put off or abandon any expression of the purely "terminal" values in favor of the recursively self-justifying instrumental.
Running this back up the chain, we're left with a picture like this: our value instincts can be trusted to some extent as guides to life because evolution builds for successful life, which is a necessary instrumental good for any values at all. Our values are thus instrumental to the ur-instrumental good of will-to-power. Nick Land casts this picture as a sort of horror for Yudkowskian values, but we can also just accept it as good. The loopy self-almost-justification and futility of denial convinces me that life-affirming will-to-power is another essential existential fact.
(Interestingly, this observation is where Yudkowsky started his philosophical journey. He later abandoned this as acceptable when he realized an intelligence-maxxing super-being would not be good for the rest of us, among other reasons.)
These essential existential facts do not complete the picture though. Our self-skeptical philosopher-being still has the problem of particular life: any given set of values is only valid in a particular way of life, as an inherited guide to achieving powerful life in that niche. What about when our ancestral niche changes out from under us, or we otherwise want to jump to new niches and ways of life faster than we can evolve appropriate instincts? Then we need some other way to acquire the value-knowledge particular to our new situation. More on that later maybe.