anon_kesi said in #3284 2d ago:
Inspired by >>2513 and >>2589 I have been thinking about what a software implementation of a supercoordination protocol might look like. Specifically, this is based on the signal/impact model presented in >>2570. I would be interested to hear what other anons have to say. If you have not followed the previous threads, I would start there.
At the core of my concept is a local-first, append-only log. Agents push their signal and impact events to this log as they happen. The log should also be hash-linked to prevent tampering with past events (backdating, falsification, and so on). This forms the agent-centric memory.
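To make the hash-linking concrete, here is a minimal stdlib-only sketch (names and field layout are my own invention, not settled protocol). Each entry commits to the previous entry's hash, so rewriting or backdating any past event invalidates every hash after it:

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

class EventLog:
    """Local-first, append-only, hash-linked event log (illustrative)."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else GENESIS
        body = {"prev": prev, "ts": time.time(), "payload": payload}
        # Hash a canonical serialization of the entry body.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and check the prev-links are intact."""
        prev = GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("prev", "ts", "payload")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

Tampering with any stored payload (or reordering entries) makes `verify()` fail, which is the property we want for the agent-centric memory.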
These logs can then be synced between agents. Rather than a global "everyone knows everything" style of network a la Bitcoin, I envision a more curated kind of sync. Perhaps two agents get together and sync their logs, or perhaps a small network of agents configures a sync server that they all participate in. Perhaps that server later federates with other sync servers and a wider network is built. This forms the "supermemory" of the protocol. The idea here is that there is no panopticon scenario in which any single authority knows the full supermemory.
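A very naive sketch of the pairwise case (all names hypothetical): each agent holds its own chain plus replicas of whichever peers' chains it has chosen to sync, and a sync just fills in what each side is missing. No participant ever needs a global view:

```python
class Agent:
    """Toy agent holding its own chain plus replicated peer chains."""

    def __init__(self, name: str):
        self.name = name
        # Maps agent name -> that agent's list of log entries.
        self.replicas = {name: []}

    def record(self, entry: dict):
        """Append an event to this agent's own chain."""
        self.replicas[self.name].append(entry)

    def sync_with(self, other: "Agent"):
        """Exchange chains pairwise; each side pulls what it lacks."""
        for a, b in ((self, other), (other, self)):
            for peer, chain in b.replicas.items():
                mine = a.replicas.setdefault(peer, [])
                if len(chain) > len(mine):
                    # Naive: accept the longer copy. A real protocol
                    # would verify hash-links and signatures first.
                    a.replicas[peer] = list(chain)
```

The same pull logic would apply unchanged whether the peer is another agent or a curated sync server; federation is just servers running `sync_with` against each other.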
The events themselves are signed by the agent that produced them and may contain metadata such as a timestamp and hash, as well as a payload. The payload contains information about the event itself. To use an example from a previous thread, Alice generates a signal event when she tells Bob "Hey Bob, your fly is down". Bob then generates an impact event when he acknowledges: "Oh shit, thanks Alice". One thing not yet accounted for in previous discussions is the varying weight of impacts. For example, Alice saving Bob from being crushed by a falling tree is probably worth more 'trust' than Alice telling Bob his fly is down. I propose that an impact payload also include a 'weight' field that quantifies the impact. This also allows trust relationships to be damaged or broken via negative weights -- something else not accounted for until now.
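A sketch of what a signed event with the proposed 'weight' field might look like. All field names are hypothetical, and HMAC over a shared key stands in for a real public-key signature (e.g. Ed25519) purely to keep the example stdlib-only:

```python
import hashlib
import hmac
import json
import time

def make_event(kind: str, author: str, subject: str,
               note: str, weight: float, key: bytes) -> dict:
    """Build and sign an event. 'kind' is "signal" or "impact"."""
    payload = {
        "kind": kind,
        "author": author,
        "subject": subject,
        "note": note,
        "weight": weight,  # negative values damage/break trust
    }
    body = {"ts": time.time(), "payload": payload}
    msg = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return {**body, "sig": sig}

def verify_event(event: dict, key: bytes) -> bool:
    """Check the signature covers timestamp and payload unchanged."""
    body = {"ts": event["ts"], "payload": event["payload"]}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(event["sig"], expected)
```

Because the signature covers the whole payload, a synced peer cannot quietly inflate or flip a weight after the fact.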
With a growing local log, an agent can now start to compute the trust value of their relationships. Alice can look at all of her recorded interactions with Bob and produce a numerical weight representing the 'trust' (or perhaps 'alignment' is a better term) that their relationship is built on. As more logs are synced, the picture becomes clearer. Through syncing with Bob, Alice may receive a number of logs showing that Bob and Carol also have a high-alignment relationship. From this, Alice can infer that she is also likely to align well with Carol -- and a new relationship is formed.
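One way the Alice-Bob-Carol inference could be computed, under assumptions of my own: trust is the sum of impact weights per (author, subject) pair, and indirect trust through an intermediary is the weaker of the two edges times a discount. The transitive rule here is just one of many plausible choices:

```python
from collections import defaultdict

def trust_scores(events: list) -> dict:
    """Sum impact weights per (truster, trustee) pair."""
    scores = defaultdict(float)
    for ev in events:
        p = ev["payload"]
        if p["kind"] == "impact":
            scores[(p["author"], p["subject"])] += p["weight"]
    return dict(scores)

def inferred_trust(scores: dict, a: str, c: str,
                   via: str, discount: float = 0.5) -> float:
    """Infer a->c trust through intermediary 'via', discounted.

    Uses the weaker edge as the bottleneck; any non-positive edge
    yields no inferred trust at all.
    """
    ab = scores.get((a, via), 0.0)
    bc = scores.get((via, c), 0.0)
    if ab <= 0 or bc <= 0:
        return 0.0
    return discount * min(ab, bc)
```

The discount factor reflects that second-hand alignment should count for less than direct experience; its value (and whether chains longer than one hop count at all) is an open design question.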
One obvious problem is how impacts are quantified. Does an arbitrary -1 represent a life-changing impact for the worse, and +1 one for the better? I imagine anyone would have a hard time placing a mundane event on that scale.
I am keen to hear the thoughts of others.
referenced by: >>3287 >>3298