in thread "There is no strong rationality, thus no paperclippers, no singletons, no robust alignment": Your definition of a strongly rational agent seems a bit like a restatement of successful "inner alignment": that an AI agent's policy (internal processes) effectively pursues its values. Can you provide some distinctions?