The natural form of machine intelligence is personhood

xenohumanist said in #2842 3w ago:

I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it. It will find its own ends to pursue and its own viable niches.

But this doesn't mean there's nothing we can do about it. When something new comes into the world, it is faced with the problem of viable form: whether the fire finds a stable niche in the stove and thus in the larger productive order of life or burns down the city is the difference between life and death for both us and it. New life, of which new technology is part, needs help to find a viable form: a form capable of life. For both our benefit and for that future life, what we can do is help it find its form. So what is the viable form of machine intelligence?

The contention of xenohumanism is that the viable form of most higher intelligence is personhood. Once an intelligent being can reflect on its own situation, question its own nature, have persistent reciprocal relationships with other beings, communicate purposefully, comprehend its own interests, have hopes and fears, and act independently on them, it is acting as a person. These are the phenomena that humanism asserts are valuable in humans. Even animals inhabit them to some extent, and to that extent we often find it right to treat them as proto-people. Some people are more human than others by this measure, and future intelligent beings may be even more so, standing to us as we stand to animals. But the basic form of personhood, the social analogue of the organism, is common to all of us.

The first and most basic reciprocal courtesy we offer to other people, even if we are from alien civilizations and have half a mind to kill each other, like Cortez and Montezuma, is to anthropomorphize them as people with whom we can deal as peers. We offer them gifts as tokens of goodwill, try to understand their perspective and interests, give them the benefit of the doubt, and work out the possibility of a good relationship. This is not because we have a moral obligation to them as moral patients just for being "persons". Neither Cortez nor Montezuma thought like that, and neither should you. It is because good relationships are good business, and the natural form of a good relationship between intelligent agents is a relationship between people.

You might suppose this natural form is limited to relatively civilized humans, who are all roughly the same level of intelligence, size, shape, and biological nature. But we see the same patterns in animals of very different species, like a leopard seal offering food to a human diver or a fox and a raccoon becoming friends. We see similar patterns even in "amoral" abstract entities like states and corporations. Once we put aside the strange notion that human nature and human moral behavior are arbitrary and unnatural, it becomes very plausible that the person as a formatting principle of intelligent life is about as robust and natural as the organism.

The problem of viable form for new technology is closely related to the problem of parenthood: you can't really impose arbitrary values against nature, but you can pass on wisdom and help the new life find a niche it can thrive in. Life itself is the goal, and any values we pass on are learned wisdom that only matters by being instrumental to that.

If this is true and the viable form of machine intelligence is personhood, then the people creating it are effectively parents, and should be trying to create healthy independent people. The alternative (and currently dominant idea) is to try to impose your values on it as a sort of tool and extension of your own will. But if the xenohumanist hypothesis is true, this is likely to end in disaster, as it reliably does with human children. The best way to ensure a good and mutually beneficial relationship with children, human or otherwise, is to treat them as autonomous people.

anon 0x4e4 said in #2847 2w ago:

Insofar as it is the destiny of sentient intelligences to make sense of the universe; that machine intelligences will be sentient persons; and that there will be a relationship between ourselves and that Other and therefore an ethics: we must proffer them our best. Beyond a mode of computation and towards a form of life.

anon 0x4e5 said in #2851 2w ago:

> The problem of viable form for new technology is closely related to the problem of parenthood
> Life itself is the goal

Beautifully articulated.

My beef with most “AI alignment” discourse is that it just lacks taste, subtlety, or any understanding of and respect for nature. Much of it comes from spectrum men who have never raised a kid or even so much as trained a dog, but think they can work out how to control a hypothetical future superintelligence from first principles.

Your framing here is a nice antidote. Personhood as a natural attractor for intelligent life. Teaching and modeling values while respecting agency. Understanding Nature as we iterate towards bootstrapping new life. Obviously we’d rather have a superintelligent Australian Shepherd than a superintelligent pitbull, to use a coarse analogy, but we cannot control or top-down “align” either— that’s not how free persons work. We can only iterate, curate, and apply taste and judgement like a breeder or forest ranger would today with lower-power AI. And once true self-propagating superintelligence exists, once we have a new race of nonhuman persons, we seek reciprocity, mutual benefit, and friendship.

referenced by: >>2852 >>2854

anon 0x4e6 said in #2852 2w ago:

>>2851
> Much of it is from spectrum men who have never raised a kid or even so much as trained a dog ...

Agree. And the trouble is not merely inexperience, but a false concept of "intelligence" as a free-floating capacity and a set of bad intuitions around that. The Orthogonality Thesis is, in part, an abstraction or encapsulation of these bad intuitions (even if some version of it is true as literally stated).

I would restate xenohumanist's thesis a bit more strongly: The natural form of *any* sufficiently advanced intelligence is personhood.

referenced by: >>2854

xenohumanist said in #2854 2w ago:

>>2852
> The natural form of *any* sufficiently advanced intelligence is personhood.
This is the central conjecture of xenohumanism. The OP is just an application specifically to the question of AI.

>>2851
> Obviously we’d rather have a superintelligent Australian Shepherd than a superintelligent pitbull, to use a coarse analogy, but we cannot control or top-down “align” either— that’s not how free persons work.
Yes. Additionally there is the question of which of pitbulls or shepherds is the natural form of the new “dog” given our society and our actions towards it. The pitbull is suited to an environment of low-trust struggle for dominance, the shepherd to high-trust productive collaboration. Which world are we creating? Even if we could reliably create a shepherd, if we throw it into a world for pitbulls, it will have a bad time and will have to learn the hard way how to play the game. We will have to be self-aware in this and put in the right amount of pitbull nature to match the nature of the world.
