
Towards a supercoordination protocol

anon_kesi said in #3284 2d ago:

Inspired by >>2513 and >>2589, I have been thinking about what a software implementation of a supercoordination protocol might look like. Specifically, this is based on the signal/impact model presented in >>2570. I would be interested to hear what other anons have to say. If you have not followed the previous threads, I would start there.

At the core of my concept is a local-first, append-only log. Agents push their signal and impact events to this log as they happen. The log should also be hash-linked to prevent tampering with past events (backdating, falsification, etc.). This forms the agent-centric memory.
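To make that concrete, here is a minimal Python sketch of the log I have in mind. Everything here (SHA-256, JSON serialization, the field names) is just one possible choice, not a spec:

import hashlib
import json
import time

class AppendOnlyLog:
    # Local-first, hash-linked event log: each entry commits to its
    # predecessor's hash, so rewriting or backdating history breaks the chain.
    def __init__(self):
        self.entries = []

    def append(self, payload):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"timestamp": time.time(), "prev_hash": prev_hash, "payload": payload}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        # Recompute the whole chain; any tampering with past events shows up here.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True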

These logs can then be synced between agents. Rather than a global "everyone knows everything" style of network a la Bitcoin, I envision a more curated kind of sync. Perhaps two agents get together and sync their logs, or perhaps a small network of agents configures a sync server that they all participate in. Perhaps that server later federates with other sync servers and a wider network is built. This forms the "supermemory" of the protocol. The idea is that there is no panopticon scenario where some single authority knows the full supermemory.
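Pairwise sync could be as simple as exchanging missing entries. A sketch, assuming each agent keeps a store mapping agent IDs to copies of their logs (all names are placeholders):

def sync(local_store, remote_store):
    # Curated, pairwise sync: pull in whatever entries we have not yet seen.
    # In practice you would run verify() on each chain before accepting it.
    for agent_id, remote_entries in remote_store.items():
        local_entries = local_store.setdefault(agent_id, [])
        known = {e["hash"] for e in local_entries}
        for entry in remote_entries:
            if entry["hash"] not in known:
                local_entries.append(entry)

A small group could run the same logic through a shared sync server, and federation between servers is the same operation one level up -- no node ever has to hold the full supermemory.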

The events themselves are signed by the agent that produced them and may contain metadata such as a timestamp and hash, as well as a payload describing the event itself. To use an example from a previous thread, a signal event could be generated by Alice when she tells Bob "Hey Bob, your fly is down". Bob would then generate an impact event when he acknowledges: "Oh shit, thanks Alice". One thing not yet accounted for in previous discussions is the varying weight of impacts. For example, Alice saving Bob from being crushed by a falling tree is probably worth more 'trust' than Alice telling Bob his fly is down. I propose that an impact payload also include a 'weight' field that somehow quantifies the impact. This also allows for damaging or breaking trust relationships via negative weights -- something also not accounted for until now.
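An impact event might then look something like this. The field names, the Ed25519 signing (via the 'cryptography' package), and the weight scale are all my own assumptions:

import json
import time
from dataclasses import dataclass, asdict
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

@dataclass
class ImpactPayload:
    signal_hash: str   # hash of the signal event being acknowledged
    actor: str         # who registered the impact
    weight: float      # e.g. +0.1 for "your fly is down", +1.0 for
                       # "saved my life"; negative values for broken trust
    note: str = ""

def signed_event(key, payload):
    # Sign over a canonical serialization so the event cannot be altered later.
    body = {"timestamp": time.time(), "payload": asdict(payload)}
    blob = json.dumps(body, sort_keys=True).encode()
    body["signature"] = key.sign(blob).hex()
    return body

# Bob acknowledges Alice's signal:
bob_key = Ed25519PrivateKey.generate()
event = signed_event(bob_key, ImpactPayload(
    signal_hash="...", actor="bob", weight=0.1, note="Oh shit, thanks Alice"))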

With a growing local log, an agent can now start to compute the trust value of their relationships. Alice can look at all of her recorded interactions with Bob and produce a numerical weight representing the 'trust' (or perhaps 'alignment' is a better term) that their relationship is built on. As more logs are synced, the picture becomes clearer. Through syncing with Bob, Alice may receive a number of logs showing that Bob and Carol also have a high-alignment relationship. From this information, Alice can infer that she is also likely to align well with Carol -- and a new relationship is formed.
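As a toy illustration of that inference, flattening recorded impacts into (frm, to, weight) triples -- the damping formula is a placeholder, not a proposal:

def direct_trust(impacts, a, b):
    # Additive trust between two agents over all recorded impacts.
    return sum(w for (frm, to, w) in impacts if {frm, to} == {a, b})

def inferred_trust(impacts, a, c, via, damping=0.5):
    # Alice infers alignment with Carol through Bob: take the weaker of
    # the two direct links and damp it, since it is secondhand evidence.
    return damping * min(direct_trust(impacts, a, via),
                         direct_trust(impacts, via, c))

impacts = [("alice", "bob", 0.1), ("bob", "alice", 0.3), ("bob", "carol", 0.4)]
print(inferred_trust(impacts, "alice", "carol", via="bob"))  # 0.2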

One obvious problem is how impacts are quantified. Does an arbitrary -1 represent a life-changing impact for the worse, and +1 one for the better? I imagine anyone would have a hard time placing a mundane event on that scale.

I am keen to hear the thoughts of others.

referenced by: >>3287 >>3298


anon_nuhy said in #3287 2d ago:

>>3284
Your idea of an append-only log for registering events is very close to what Balaji has been proposing to do with blockchains for some time (https://balajianthology.com/anthology-of-balaji/cryptographic-truth). I would dig into that.

The problem I've always had with these ideas is that they don't magically evade the problem of Garbage In, Garbage Out. You can use automated reputation systems to improve quality, but that doesn't synthesize truth on its own. Also, you have to get the right people to actually use your system, and people choose not to use systems with great-sounding properties all the time.

referenced by: >>3289


anon_kesi said in #3289 1d ago:

>>3287
>they don't magically evade the problem of Garbage In, Garbage Out
That is true. But an advantage I see in the system described is that 'garbage in' doesn't really have an impact on the network if nobody chooses to interact with said garbage. For example, if Mallory wants to spam their own log with garbage signals, they have no meaning without Alice or Bob registering impacts of those signals -- that is if they were even to sync with Mallory in the first place.

>you have to get the right people to actually use your system
Also true, but the same could be said of almost any product or system.


anon_mosy said in #3298 10h ago:

>>3284
The way I see it with these supercoordination systems, the hard part, besides actually coming up with a good protocol, is finding the right bootstrap network and the right use cases. Data entry is annoying unless it is made fun and rewarding and part of some activity we enjoy doing with each other. This is essentially the role of social media platforms: a network and a class of data entry affordances that are made fun and rewarding.

So yes I agree with your proposal in general (though I don't think the technical decentralization is going to help you). People should be able to register impacts, perhaps even with more open-ended, language-driven reporting rather than narrow structured data. We do have LLMs these days, after all. But yeah, let's have a big database of judgements and events which we can add up by various means into trust scores of various kinds (allow innovation outside the protocol here). We then use those trust scores for social gating, distribution of prestige, etc.
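Gating itself could then be trivial -- a sketch, with the aggregation and the threshold both being things to innovate on outside the protocol:

def trust_score(impacts, agent):
    # One of many possible aggregations: net weight of everything others
    # have registered about this agent, as (frm, to, weight) triples.
    return sum(w for (frm, to, w) in impacts if to == agent)

def may_enter(impacts, agent, threshold=1.0):
    # Social gating: admit only agents above some trust threshold.
    return trust_score(impacts, agent) >= threshold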

I think the weight problem might be best addressed retrospectively. Imagine it more like a court. When you make an impact report, you report primarily *what* happened: your primary claim is that it happened and who was involved. Then others may corroborate, the system may trust you based on your reputation, etc., to establish these facts as known to some extent. Then later comes the process of interpretation. The system and its participants judge the event to be of greater or lesser significance as a reflection on the character and standing of those involved. If we learn that a certain class of events is actually more or less significant than the originator subjectively thought, then the network should be able to retrospectively re-weight those reports and otherwise reconceptualize their significance.
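Roughly what I mean, as a sketch (the names are made up; the point is that weight is a late-bound field, not part of the original claim):

from dataclasses import dataclass, field

@dataclass
class ImpactReport:
    # The report primarily claims *what* happened and who was involved.
    event_id: str
    claim: str                 # "Alice pulled Bob clear of a falling tree"
    involved: tuple
    corroborations: list = field(default_factory=list)  # witness signatures
    weight: float | None = None  # assigned later by interpretation, revisable

def reweight(reports, event_class, new_weight):
    # Retrospectively re-weight a whole class of events once the network
    # judges them more or less significant than first thought.
    for r in reports:
        if event_class(r):
            r.weight = new_weight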

Of course this places more pressure on the design of the protocol and system to be quite a bit more intelligent. But one of the original premises of supercoordination when we came up with the idea is that it's a potential consumer of computational intelligence. We pour large amounts of computational power into solving these social equilibria and get vastly better results as a consequence. The difficult part, then, is what that algorithm is (and where to bootstrap its utility).

