sofiechan home

/intelligence/

> alignment rationality agent singleton superintelligent models llms animals bound computer xenohumanism robust patterns size

"Cosmic Alignment" is almost right. Life is the answer
Philosopher it-girl Ginevra Davis gave a great talk on "Cosmic Alignment" the other day. I was glad to see serious thinking against the current paradigm of "AI Alignment". Her argument is that alignment makes three big unsupported speculations:...
posted 2d ago with 3 replies received accelerationism philosophy intelligence gnon

Was Cypher Right?: Why We Stay In Our Matrix (Hanson, 2002)
https://mason.gmu.edu/~rhanson/matrix.html posted 1d ago with no replies received philosophy intelligence sovereignty

Capitalism is AI ?
I've finished reading the excellent collection of fragments from Land's corpus dealing with the question of Capitalism as AI. His broadest thesis is that Capitalism is identical to AI, in that both are adaptive, information-processing, self-exciting entiti...
posted 2w ago with 6 replies received technology accelerationism intelligence bookclub

Xeno Futures Research Unit
I've decided to organize an independent research project with some young men back home. I've drafted out a brief mission statement, let me know if you guys have any thoughts, suggestions, directions I could take this. Obviously ambitious, the initial goal ...
posted 2w ago with 15 replies received technology accelerationism intelligence

OpenAI vs. New York Times on "Data Preservation"
In 2023 the Times sued both Microsoft and OpenAI, and claimed both were using millions of NYT articles w/o permission to train LLMs. A little over a week ago a court determined that OpenAI did, in fact, have to preserve + segregate all output log data. Ope...
posted 2w ago with no replies received intelligence news

Kolmogorov Paranoia: Extraordinary Evidence Probably Isn't.
I enjoyed this takedown of Scott Alexander's support for the COVID natural origins theory. Basically, Scott did a big "Bayesian" analysis of the evidence for and against the idea that COVID originated in the lab vs naturally. As per his usual pre-written c...
(michaelweissman.substack.com) posted 3w ago with 2 replies received rationality intelligence

The Gentle Singularity
If the gentle singularity is true, then perhaps the AGI timeline question was malformed all along. Like some variant of Goodhart’s law, the reification of AGI smuggles in the assumption that AGI will be a singularity. But it is definitely difficult to r...
(blog.samaltman.com) posted 3w ago with 1 reply received accelerationism intelligence

People compete in the world of ideas; those of emotions still exist like animals in a zoo.... 3w ago received

Intelligent Use of LLMs
I would like to start a thread to share the methods we employ to use LLMs in a way that enhances our abilities, rather than just lazily outsources tasks to them. The heuristic for the techniques I am looking for would be if after employing the technique, a...
posted 2mo ago with 8 replies received technology intelligence learning

just text:... 4w ago received

Intelligence vs Production
"Optimize for intelligence" says Anglo accelerationist praxis. "Seize the means of production" says the Chinese. Who's right? It is widely assumed in Western discourse that intelligence, the ability to comprehend all the signals and digest them into a plan...
posted 2mo ago with 9 replies received technology accelerationism intelligence economics

A Primer on E-Graphs (A technique to control combinatorial blowup in term rewriting systems)
About 10 years ago I was very interested in term rewriting as a basis for exotic programming languages and even AI. One of the big problems in term rewriting is that without a canonical deterministic ordering, you rapidly end up with an uncontrollable numb...
(www.cole-k.com) posted 1mo ago with no replies received technology intelligence informatics
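The idea in the snippet above (one canonical equivalence class per set of rewritten terms, instead of an exploding forest of variants) can be sketched in a few lines. This is a hypothetical, minimal illustration, not the linked primer's implementation: e-class ids live in a union-find, and hash-consing maps each canonicalized node to its class, so applying a rewrite merges classes rather than duplicating terms.

```python
# Minimal e-graph sketch (illustrative, not the linked article's code):
# terms are hash-consed into e-classes tracked by a union-find, so
# equivalent rewrites share one canonical class instead of multiplying.

class EGraph:
    def __init__(self):
        self.parent = {}    # e-class id -> parent id (union-find)
        self.hashcons = {}  # canonical node -> e-class id
        self.next_id = 0

    def find(self, a):
        # Path-compressing find over e-class ids.
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a

    def add(self, op, *args):
        # Hash-consing: an identical node maps to its existing e-class.
        node = (op, tuple(self.find(a) for a in args))
        if node in self.hashcons:
            return self.find(self.hashcons[node])
        cid = self.next_id
        self.next_id += 1
        self.parent[cid] = cid
        self.hashcons[node] = cid
        return cid

    def union(self, a, b):
        # Record that two e-classes denote equal terms.
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
        return ra

# A rewrite rule (x * 2  ->  x << 1) merges the two terms' classes.
eg = EGraph()
a, two, one = eg.add("a"), eg.add("2"), eg.add("1")
mul = eg.add("*", a, two)
shl = eg.add("<<", a, one)
eg.union(mul, shl)
assert eg.find(mul) == eg.find(shl)  # one shared e-class, no blowup
```

Because rewrites only merge classes, n applicable rules add at most n unions rather than multiplying the stored term count.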

What if the extended human phenotype is natural and convergent?
Samo Burja's thesis is that civilization is part of the "extended human phenotype", as dam building is in the beaver's phenotype, and older than we think. In this model, properly savage hunter-gatherers are either more associated with nearby civilization t...
posted 3mo ago with 9 replies received accelerationism sociology eugenics intelligence

This conversation reminds me of an article (author I can't recall) defining four alien intelligences:... 2mo ago received

Just how alien would the space-octopus be?
It's hard to say what a true alien species would be like. But octopi are pretty alien, and we know a bit about them. One of you doubted that a space-octopus from alpha centauri would be much like us. So here is a xenohumanist thought experiment: SETI has i...
posted 3mo ago with 5 replies received accelerationism sociology philosophy intelligence

AI 2027
https://ai-2027.com/ posted 3mo ago with 12 replies received accelerationism rationality intelligence

The natural form of machine intelligence is personhood
I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it...
posted 3mo ago with 4 replies received accelerationism rationality intelligence

Nines or zeroes of strong rationality?
Proof theory problems (Rice, Löb, Gödel, etc.) probably rule out perfect rationality (an agent that can fully prove and enforce bounds on its own integrity and effectiveness). But in practice, the world might still become dominated by a singleton if it can ...
posted 3mo ago with 4 replies received accelerationism rationality intelligence

Will future super-intelligence be formatted as selves, or something else?
The Landian paradigm establishes that orthogonalist strong rationality (intelligence securely subordinated to fixed purpose) is not possible. Therefore no alignment, no singletons, no immortality, mere humans are doomed, etc etc. Therefore meta-darwinian e...
posted 4mo ago with 14 replies received accelerationism rationality intelligence gnon

Best of luck with the epicycles.... 3mo ago received

Dissolving vs. Surviving
Recent xenohumanist discussion has the built-in doomer assumption that we as humans will dissolve when higher man arrives on the scene. I don't think that's set in stone and want to offer a clarification....
posted 3mo ago with 5 replies received technology accelerationism intelligence gnon

There is no strong rationality, thus no paperclippers, no singletons, no robust alignment
I ran into some doomers from Anthropic at the SF Freedom Party the other day and gave them the good news that strong rationality is dead. They seemed mildly heartened. I thought I should lay out the argument in short form for everyone else too:...
posted 3mo ago with 5 replies received accelerationism rationality intelligence

Not Superintelligence; Supercoordination
Everyone seems to be trying to arms race their way to superintelligence these days. I have a different idea: supercoordination....
posted 5mo ago with 34 replies received rationality sociology intelligence sovereignty

Retrochronic. A primary literature review on the thesis that AI and capitalism are teleologically identical
https://retrochronic.com/ posted 2y ago with 7 replies received politics accelerationism intelligence bookclub