
Supercoordination: A Specific Strategy

anon 0x464 said in #2589 3w ago: 55

How do we get there from here? How might we grow pockets of supercoordination even as AI rapidly becomes more capable and the people behind it consolidate their power, proactively countering this precise type of threat?

Branching from https://sofiechan.com/p/2513 at OP's request.

>A supercoordinated group of people will be able to achieve their aims even against substantial resistance, and generally operate at a much higher level of social efficiency. -- anon 0x434

>...something like a fraternal society on steroids within an existing state. -- anon 0x435

In this thread I'll be introducing a strategy by which supercoordinated pockets may grow. Is it the right strategy? Does this strategy turn into a design and later become a concrete implementation if we just keep going? Maybe so.

Most importantly, will this strategy lead to pockets of supercoordination despite all manner of resistance? Maybe so, if we give it a serious try. Otherwise almost certainly not.

By sharing, I am driving a stake into the sand. We have to start somewhere. Hopefully others drive in stakes of their own and a dozen different paths get explored.

For a bit of fun we can envision the sequence of parts that follows as a growing prompt for the most powerful AI on Earth. At some point that AI will be capable of generating a sensible continuation of the strategy so far. Possibly even a concrete implementation. Is the day the AI projects this strategy to its conclusion the day we're truly hosed or does the strategy remain viable? Perhaps it is a question worth asking of every strategy.

Anyway, here goes. Interject at will with questions, comments, or concerns.

referenced by: >>2593 >>2596


anon 0x466 said in #2592 3w ago: 44

Well? It is customary to put the insights you intend to discuss in the OP, rather than only announcing that you will deliver them later. Bumping for interest but OP better deliver or we'll have to hide you as noise.


anon 0x464 said in #2596 3w ago: 33

>>2593
My bad. Thought this was ready to go, noticed some problems, then crashed for a nap that went long.

>>2589

Extending >>2513 #2576

We can quickly narrow the subset of microfibers by focusing on the *specific type of impact* that Alice's signals may have upon Bob (and by extension his world). In other words, what shall Bob do differently because of Alice's signals?

We know that whatever Bob does differently will have to benefit him a fair bit, benefit Alice at least a little, and also extend the supermemory (often enough).

"Doing differently", as I see it, is the primitive of every solution to every problem. To solve any problem is to converge upon the patterns of action -- sensible behaviors -- that enable us to safely shift our attention to something else instead. It's only *after* we have "do differently" well-handled that discerning feedback loops begin to matter. These feedback loops enable us to bias our actions toward doing what's *good* for us, away from the chaotic luckbox of pure difference.

So what shall Bob do *because* of Alice's signal that is not only different but also good for him? And which is also a little bit good for Alice? And which must absolutely *feel safe enough* to both of them that they're willing to engage in this way? Let's slice and dice our way to it.

As a first cut, Bob must *pre-choose* a specific action. This is akin to placing a pin for a bowling ball to knock over. Bob places the pin, Alice helps knock it over, and the supermemory "keeps the score". The pin can represent any action, big or small, serious or silly: anything at all. Nobody but Bob needs to know exactly what it is, including Alice. By pre-choosing his action and keeping it private, Bob retains his agency while giving himself the chance to amplify it.

As a second cut, Bob's choice is (ideally) an *out-of-distribution* (or OD) action, as opposed to an *in-distribution* (or ID) action.

"In-distribution" actions are the set of actions that Bob is already doing without thinking. These include the actions that he is already "tipped" toward doing by the systems, people, and institutions of his world -- by way of their own emitted signals. Added together, Bob's in-distribution actions could be 100% of what he does on a typical day. They comprise his personal path of least resistance akin to channels carved by flowing water. Bob needs no help with any of this.

On the other hand, OD (out-of-distribution) actions are the set of actions that Bob is perfectly capable of doing but which he does not do *often enough*. This includes the set of complex actions Bob *could* do if he completed the right sequence of prerequisite actions, each of which he is capable of doing.

Often enough? Compared to what? Well, compared to how Bob feels about his pace of progress toward his aspirations.

This brings us to our final cut. Bob's ideal choice of OD action shall align with one or more of his aspirations. Because if Bob has any aspirations at all, even as simple as keeping his good times rolling, he'll have to take *some* OD actions. Problems will keep popping up in front of him. To have a chance of hitting his targets, he'll have to continuously adjust his trajectory in a sensible way. As with Bob's pre-chosen action, the details of his aspirations may be better kept private.

After these three cuts -- pre-chosen, out-of-distribution, and aligned with an aspiration -- we're left with a dramatically smaller set of possible impacts. When Bob pre-chooses his actions and keeps them private, both he and Alice may feel safe to engage in this way. By aligning his OD actions with his own aspirations, Bob establishes and amplifies his bias toward sensible behavior.
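For concreteness, the three cuts can be read as a simple filter over candidate actions. Here is a minimal sketch in Python; every field and function name is invented for illustration, not part of any settled design.

```python
from dataclasses import dataclass


@dataclass
class CandidateAction:
    pre_chosen: bool          # first cut: Bob picked it privately, in advance
    in_distribution: bool     # True if Bob already does this without thinking
    aligned_aspirations: int  # third cut: how many of Bob's aspirations it serves


def qualifies(action: CandidateAction) -> bool:
    """Keep only pre-chosen, out-of-distribution, aspiration-aligned actions."""
    return (action.pre_chosen
            and not action.in_distribution      # second cut: OD rather than ID
            and action.aligned_aspirations > 0)
```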

[Part 4 continues...]

referenced by: >>2597


anon 0x464 said in #2597 3w ago: 33

Continuation of Part 4 from >>2596

Now we have what is akin to the "base pair" of our supermemory: a microfiber of trust that ties Alice and Bob together in a small way and represents the causal chain of a valued impact upon their world. We can presume the impact is valued because it is both unnecessary and unlikely for a microfiber to be generated for a wasteful, hurtful, or harmful one.

Because each "base pair" maps to the real-world occurrence of a single specific action, it is atomic enough to aggregate into the representation of a wide range of behavioral complexity.

Because the action is merely *specific* and not otherwise defined (it can be simple or complex, short or long in duration, etc.), this supermemory retains its utility in all contexts (reference frames, if you like).

Because as-yet-impossible actions can almost always be broken down into smaller actions -- including the actions that lead to any required skills, knowledge, or mental models -- this supermemory is capable of representing a detailed causal graph of the pathway from simple to complex behaviors.
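As a rough illustration of that decomposition (all names here are mine and purely illustrative), an as-yet-impossible action can be held as a node whose prerequisites are themselves actions, with the graph bottoming out in things Bob can already do:

```python
from dataclasses import dataclass, field


@dataclass
class ActionNode:
    """One action in the causal graph, simple or complex."""
    name: str
    prerequisites: list["ActionNode"] = field(default_factory=list)

    def is_reachable(self, already_doable: set[str]) -> bool:
        """An action is reachable if Bob can already do it, or if every one
        of its prerequisite actions is itself reachable."""
        if self.name in already_doable:
            return True
        return bool(self.prerequisites) and all(
            p.is_reachable(already_doable) for p in self.prerequisites)
```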

Like DNA, the purpose of this supermemory is to replicate known-good structures from the past into the present. More specifically, its purpose is to replicate *arbitrarily complex patterns of behavior*. Causal graphs that map past trust networks, plus a mechanism for growing new ones, help a lot with this.

This means Bob -- or anyone -- may gain a much greater chance of becoming capable enough to bring an unlikely future into reality (like supercoordination). Or at least to contribute in a meaningful way.

In Part 5 I'll introduce a digital structure for this memory, filling in a lot of the gaps I had to leave in >>2513 #2590. Is it remotely possible for a digital memory to be as robust and resilient as DNA? We'll see.


anon 0x464 said in #2608 2w ago: 11

A supermemory must be maximally resilient. DNA has lasted four billion years so far and seems poised for a couple billion more.

Like DNA, this supermemory I'm introducing -- code-named ACORN moving forward -- is a generic data structure that can represent many distinct sequences of data. It too must prove maximally resilient. For a billion years? Maybe, but first it will have to get through days, weeks, months, and years.

Neither a DNA strand nor an ACORN sequence replicates itself directly. Both depend on a "body" to achieve all things related to self-propagation, including replication. With DNA, that body is its surrounding biological cell and any higher-level organism. ACORN must also have a body, but for now it is a stipulated placeholder that is presumed to be digital.

Also stipulated is that a medium exists that lets Alice and Bob communicate with each other's ACORN sequences, such as via an API. There are many ways to make this work; digital communication is a solved problem.

Anyway, here are the basics of a design that struck me as especially resilient.

As in the previous parts, Alice is initiating the signal while Bob receives it and takes action as a result.

## A: Basic Structure

A1: Alice and Bob each maintain their own private ACORN sequence.

A2: Alice and Bob are responsible for the integrity of their own ACORN sequence.

A3: Every ACORN sequence is a ... sequence ... of blocks.

A4: A sequence may be extended at any time for any reason.

A5: Each block is identified as a specific type.

A6: There is no limit to the number of different block types.

A7: Blocks of different types include type-appropriate information.

A8: Each newly added block includes the cryptographic hash of the previous block, forming a linked chain.
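To make A1-A8 concrete, here is a minimal sketch in Python of what one hash-linked ACORN sequence could look like. Every name (AcornBlock, AcornSequence, the JSON hashing scheme) is invented for illustration; the real representation is an open question.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class AcornBlock:
    """One block in an ACORN sequence (A5-A8)."""
    block_type: str   # e.g. "mot.S", "mot.R", or any future type (A6)
    payload: dict     # type-appropriate information (A7)
    prev_hash: str    # hash of the previous block, forming the chain (A8)
    timestamp: float = field(default_factory=time.time)

    def hash(self) -> str:
        """Deterministic hash over the block's contents."""
        body = json.dumps(
            {"type": self.block_type, "payload": self.payload,
             "prev": self.prev_hash, "ts": self.timestamp},
            sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()


class AcornSequence:
    """A privately held, append-only chain of blocks (A1-A4)."""

    GENESIS = "0" * 64

    def __init__(self, owner: str):
        self.owner = owner
        self.blocks: list[AcornBlock] = []

    def extend(self, block_type: str, payload: dict) -> AcornBlock:
        """Append a new block; may happen at any time, for any reason (A4)."""
        prev = self.blocks[-1].hash() if self.blocks else self.GENESIS
        block = AcornBlock(block_type, payload, prev)
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """The owner checks the integrity of their own sequence (A2)."""
        prev = self.GENESIS
        for block in self.blocks:
            if block.prev_hash != prev:
                return False
            prev = block.hash()
        return True
```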

## B: Microfibers of Trust

B1: Every microfiber of trust (MoT) is split into six fragments, with each fragment persisted as a block.

B2: Each MoT fragment is its own block type.

B3: The six MoT fragments, in sequence, are codenamed S, R, D, A, E, and F.

B4: Each fragment represents a specific real-world event related to the same MoT (Send, Receive, Deliver, Action, rEturn, Feedback).

B5: Each fragment's block extends the appropriate sequence *after* its world-event occurs.

B6: The six fragments are linked together into a sub-sequence.

B7: The first and last fragments (S and F) extend Alice's sequence.

B8: The middle four fragments (R, D, A, and E) extend Bob's sequence.

B9: Fragments S & R are paired. Fragments E & F are also paired.

B10: The blocks for the four paired fragments each include a robust reference to the *other* ACORN sequence. FragS->SeqB, FragR->SeqA, FragE->SeqA, FragF->SeqB.
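Continuing that sketch, here is one hypothetical way a single MoT could thread its six fragments through both sequences per B1-B10, reusing the AcornSequence class above. The cross-references of B10 are modeled as nothing more than the other party's name plus a paired block hash; a real design would want something far more robust.

```python
def record_mot(alice: AcornSequence, bob: AcornSequence) -> None:
    """Walk one microfiber of trust through its six fragments (B3-B8).

    Each fragment's block would only be appended *after* its real-world
    event occurs (B5); here they are appended in order for illustration.
    """
    # S: Alice sends her signal; her block references Bob's sequence (B10).
    s = alice.extend("mot.S", {"ref_seq": bob.owner})

    # R: Bob receives it; references Alice's sequence, pairs with S (B9, B10).
    bob.extend("mot.R", {"ref_seq": alice.owner, "pairs_with": s.hash()})

    # D: the signal is delivered to Bob's attention.
    bob.extend("mot.D", {})

    # A: Bob takes his pre-chosen, out-of-distribution action.
    bob.extend("mot.A", {})

    # E: Bob "returns" the outcome; references Alice's sequence (B10).
    e = bob.extend("mot.E", {"ref_seq": alice.owner})

    # F: Alice records feedback, pairing with E and closing the loop (B7, B9, B10).
    alice.extend("mot.F", {"ref_seq": bob.owner, "pairs_with": e.hash()})


# Each party maintains their own private sequence (A1).
alice, bob = AcornSequence("alice"), AcornSequence("bob")
record_mot(alice, bob)
assert alice.verify() and bob.verify()
```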

## And Beyond

That's the basics of a data structure. Hope I didn't leave out anything important (besides the stipulated body and comms medium).

The design might not look like much but keep in mind that there are humans in the loop, the focus is human behavior, the necessary "body" is placeholdered, and the key word is "emergence". Based on my research, this is plenty to work with.

I've tried to keep this all simple by sticking with just Alice and Bob. Naturally, Alice can generate MoTs with hundreds of other people anywhere on Earth. So can Bob. So can any of those hundreds of other people, and so on.

If interactions happened to be limited to just Alice and Bob, a visualization of the data structure would resemble a ladder: the paired fragments forming the rungs, the blocks from Alice's sequence forming the left rail, and the blocks from Bob's sequence forming the right rail. Not too far from a double helix. And yet that would be an unusual situation. A structure involving many people would be much more valuable, with its visualization being closer to a swirling vortex.

Given all of that, could ACORN possibly be as resilient as DNA? Is it a potential supermemory, or is it just one more pile of delusional bunk? We shall see. Might be worth an experiment or three.

Any thoughts on what's best to do for part 6?

