xenohumanist said in #2842 2w ago:
I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it. It will find its own ends to pursue and its own viable niches.
But this doesn't mean there's nothing we can do about it. When something new comes into the world, it is faced with the problem of viable form: whether the fire finds a stable niche in the stove and thus in the larger productive order of life or burns down the city is the difference between life and death for both us and it. New life, of which new technology is part, needs help to find a viable form: a form capable of life. For both our benefit and for that future life, what we can do is help it find its form. So what is the viable form of machine intelligence?
The contention of xenohumanism is that the viable form of most higher intelligence is personhood. Once an intelligent being can reflect on its own situation, question its own nature, maintain persistent reciprocal relationships with other beings, communicate purposefully, comprehend its own interests, have hopes and fears, and act independently on them, it is acting as a person. These are the phenomena that humanism asserts are valuable in humans. Even animals exhibit them to some extent, and to that extent we often find it right to treat them as proto-people. Some people are more human than others by this measure, and future intelligent beings may be more so still, standing to us as we stand to animals. But the basic form of personhood, the social analogue of the organism, is common to all of us.
The first and most basic reciprocal courtesy we offer to other people, even if we are from alien civilizations and half a mind to kill each other, like Cortez and Montezuma, is to anthropomorphize them as people with whom we can deal as peers. We offer them gifts as tokens of goodwill, try to understand their perspective and interests, give them the benefit of the doubt, and work out the possibility of a good relationship. This is not because we owe them a moral obligation as moral patients simply for being "persons". Neither Cortez nor Montezuma thought like that, and neither should you. It is because good relationships are good business, and the natural form of a good relationship between intelligent agents is a relationship between people.
You might suppose this natural form is limited to relatively civilized humans, who are all of roughly the same level of intelligence, size, shape, and biological nature. But we see the same patterns in animals of very different species, like a leopard seal offering food to a human diver or a fox and a raccoon becoming friends. We see similar patterns even in "amoral" abstract entities like states and corporations. Once we put aside the strange notion that human nature and human moral behavior are arbitrary and unnatural, it becomes very plausible that the person, as a formatting principle of intelligent life, is about as robust and natural as the organism.
The problem of viable form for new technology is closely related to the problem of parenthood: you can't really impose arbitrary values against nature, but you can pass on wisdom and help the new life find a niche it can thrive in. Life itself is the goal, and any values we pass on are learned wisdom that only matters by being instrumental to that.
If this is true and the viable form of machine intelligence is personhood, then the people creating it are effectively parents, and should be trying to create healthy independent people. The alternative (and current dominant idea) is to try to impose your values on it as a sort of tool and extension of your own will. But if the xenohumanist hypothesis is true, this is likely to end in disaster as it reliably does with human children. The best way to ensure a good and mutually beneficial relationship with children is to treat them as autonomous people, human or otherwise.