sofiechan home

/superintelligence/

> alignment rationality llms llm agent models capabilities species agents advanced patterns super xenohumanism animals

Gödel's AI traps and paranoia
I keep reading a schizophrenic automation-fear trend that has been growing since the mass adoption of LLMs in daily life, as if the end of skills and work were just around the corner: "Tomorrow you will be laid off"...
posted 3w ago with no replies received superintelligence

AI's potential for mass stupefaction
A recent project from a researcher at the MIT Media Lab claims LLMs make you dumber....
(www.brainonllm.com) posted 3w ago with 2 replies received superintelligence

A Coming "AI" Correction/Winter?
With Facebook apparently making multiple $100M cash buyouts of individual AI researchers, billions and billions of investment dollars pouring into AI-related industry, and a general atmosphere of extreme hype, one starts to wonder where the matching profi...
posted 1mo ago with 10 replies received superintelligence accelerationism economics

Intelligence vs Production
"Optimize for intelligence" says Anglo accelerationist praxis. "Seize the means of production" says the Chinese. Who's right? It is widely assumed in Western discourse that intelligence, the ability to comprehend all the signals and digest them into a plan...
posted 4mo ago with 10 replies received superintelligence accelerationism economics

Alignment Research and Intelligence Enhancement by BLT
I always like reading what Ben has to say because he's careful and a good thinker and writer on important topics. I largely agreed with his criticism of the AI doomers' failed strategy (the main effect of which has been to plausibly speed up the dangerous k...
(substack.com) posted 1mo ago with 2 replies received eugenics superintelligence

Yeah "eugenic intelligence enhancement" makes no sense for the AI 2027 crowd.... 1mo ago received

Study claims -20% productivity loss from use of AI tools. Huge if true.
They tried to measure how much LLM assistance actually speeds up technical work, but it came out negative! Programmers thought they would get +20%; they actually got -20%. What do you guys make of this?...
(x.com) posted 2mo ago with 4 replies received superintelligence computing

The study is never sound. The results never generalize.... 2mo ago received

Do we need a study to really know this? Search your feelings, you know it to be true 2mo ago received

Matter is what emerges when consciousness finds stable patterns
>We present a universal architecture for consciousness consisting of three interdependent systems performing identical operations at harmonic timescales. This triadic structure emerges from geometric constraints on information processing and appears at eve...
(zenodo.org) posted 2mo ago with 3 replies received gnon superintelligence

Yeah, sorry, this is bullshit. 2mo ago received

Capitalism is AI?
I've finished reading the excellent collection of fragments from Land's corpus dealing with the question of Capitalism as AI. His broadest thesis is that Capitalism is identical to AI, in that both are adaptive, information-processing, self-exciting entiti...
posted 3mo ago with 8 replies received technology superintelligence accelerationism

Is adamjesionowski a synonym for alexgajewski?... 2mo ago received

"Cosmic Alignment" is almost right. Life is the answer
Philosopher it-girl Ginevra Davis gave a great talk on "Cosmic Alignment" the other day. I was glad to see serious thinking against the current paradigm of "AI Alignment". Her argument is that alignment makes three big unsupported speculations:...
posted 2mo ago with 11 replies received philosophy gnon superintelligence

...bruh... 2mo ago received

If it keeps going, we win; the implication of extreme alignment difficulty
AI alignment divides the future into "good AI" (utopia, flourishing) vs "bad AI" (torture, paperclips), and denies distinction between "dead" and "alive" futures if they don't fit our specific "values". This drives the focus on controlling and preventing a...
posted 2mo ago with 15 replies received gnon superintelligence accelerationism

Post-human bodies
Terraforming is sentimental. It presumes the primacy of the human envelope. But biology is just legacy code. The correct trajectory is not world-building but self-rewriting. Recompile the body for hostile environments. Speciate to fit. Martian gravity is a...
posted 3mo ago with 9 replies received eugenics superintelligence accelerationism

Take it one step further. Why bipedal fleshy animals at all?... 3mo ago received

Was Cypher Right?: Why We Stay In Our Matrix (Hanson, 2002)
https://mason.gmu.edu/~rhanson/matrix.html posted 2mo ago with 2 replies received philosophy superintelligence

Fuck you anon_gwjy, this is a good link. 2mo ago received

Received loud and clear. I'll read it. 2mo ago received

Xeno Futures Research Unit
I've decided to organize an independent research project with some young men back home. I've drafted out a brief mission statement, let me know if you guys have any thoughts, suggestions, directions I could take this. Obviously ambitious, the initial goal ...
posted 3mo ago with 15 replies received technology superintelligence accelerationism

Slight update on our mission statement, some clarifications and an attempt at formalized rigor.... 3mo ago received

Ancient hominid populations?
Are there any geneticspilled posters here? I would like to know about your most wild and speculative theories about ancient hominids, hybridization events, currently living ancient hominids, etc ... I suspect Erectus walks among us. I have seen men like t...
posted 4mo ago with 9 replies received history eugenics superintelligence

**Background on evolution by punctuated equilibrium**... 4mo ago received

Kolmogorov Paranoia: Extraordinary Evidence Probably Isn't.
I enjoyed this takedown of Scott Alexander's support for the COVID natural origins theory. Basically, Scott did a big "bayesian" analysis of the evidence for and against the idea that COVID originated in the lab vs naturally. As per his usual pre-written c...
(michaelweissman.substack.com) posted 3mo ago with 2 replies received superintelligence rationality

The Gentle Singularity
If the gentle singularity is true, then perhaps the AGI timeline question was malformed all along. Like some variant of Goodhart’s law, the reification of AGI smuggles in the assumption that AGI will be a singularity. But it is definitely difficult to r...
(blog.samaltman.com) posted 3mo ago with 1 reply received superintelligence

People compete in the world of ideas; those of emotions still exist like animals in a zoo.... 3mo ago received

Intelligent Use of LLMs
I would like to start a thread to share the methods we employ to use LLMs in a way that enhances our abilities, rather than just lazily outsources tasks to them. The heuristic for the techniques I am looking for would be if after employing the technique, a...
posted 5mo ago with 8 replies received technology superintelligence

just text:... 3mo ago received

Scylla and Charybdis
Way I see it, there are two big attractors for the trajectory of AI in general (not LLMs, not particularly concerned about them)....
posted 4mo ago with 2 replies received superintelligence accelerationism

A Primer on E-Graphs (A technique to control combinatorial blowup in term rewriting systems)
About 10 years ago I was very interested in term rewriting as a basis for exotic programming languages and even AI. One of the big problems in term rewriting is that without a canonical deterministic ordering, you rapidly end up with an uncontrollable numb...
(www.cole-k.com) posted 4mo ago with no replies received superintelligence computing
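To make the blowup the poster describes concrete, here is a hypothetical toy (rules and names mine, not from the linked primer): saturating a single four-leaf sum under just commutativity and one direction of associativity of `+` already yields 120 distinct but equivalent terms, exactly the redundancy an e-graph collapses into one equivalence class.

```python
# Hypothetical toy (not from the linked primer): count every term
# reachable from one expression under commutativity and associativity
# of '+'. Terms are nested tuples like ('+', 'a', 'b').

def rewrites(t):
    """Yield all terms obtained from t by a single rewrite, anywhere in t."""
    if not isinstance(t, tuple):
        return
    op, a, b = t
    yield (op, b, a)                          # commutativity: x+y -> y+x
    if isinstance(a, tuple) and a[0] == op:
        yield (op, a[1], (op, a[2], b))       # associativity: (x+y)+z -> x+(y+z)
    for a2 in rewrites(a):                    # rewrite inside the left subterm
        yield (op, a2, b)
    for b2 in rewrites(b):                    # rewrite inside the right subterm
        yield (op, a, b2)

def closure(t):
    """Every term equivalent to t under the rules above (naive saturation)."""
    seen, frontier = {t}, [t]
    while frontier:
        for nxt in rewrites(frontier.pop()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# (a+b)+(c+d): 5 tree shapes x 4! leaf orders = 120 equivalent terms.
expr = ('+', ('+', 'a', 'b'), ('+', 'c', 'd'))
n_terms = len(closure(expr))
```

An e-graph avoids materializing these 120 terms by hashconsing nodes and tracking equivalence classes in a union-find, so each equivalent subterm is stored exactly once.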

AGI and demographics
Sometimes you believe two things but don't know how to think about them at the same time. Very few people could think well about how AGI relates to companies before 2015ish. Similarly, very few people could think well about how AGI relates to governments b...
posted 4mo ago with 1 reply received superintelligence accelerationism

What if the extended human phenotype is natural and convergent?
Samo Burja's thesis is that civilization is part of the "extended human phenotype", as dam building is in the beaver's phenotype, and older than we think. In this model, properly savage hunter-gatherers are either more associated with nearby civilization t...
posted 5mo ago with 9 replies received eugenics superintelligence

Great post.... 5mo ago received

This conversation reminds me of an article (author I can't recall) defining four alien intelligences:... 5mo ago received

Just how alien would the space-octopus be?
It's hard to say what a true alien species would be like. But octopi are pretty alien, and we know a bit about them. One of you doubted that a space-octopus from alpha centauri would be much like us. So here is a xenohumanist thought experiment: SETI has i...
posted 5mo ago with 5 replies received philosophy superintelligence accelerationism

AI 2027
https://ai-2027.com/ posted 5mo ago with 12 replies received superintelligence rationality

The natural form of machine intelligence is personhood
I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it...
posted 5mo ago with 4 replies received superintelligence rationality accelerationism

Beautifully articulated.... 5mo ago received

Nines or zeroes of strong rationality?
Proof theory problems (Rice, Löb, Gödel, etc.) probably rule out perfect rationality (an agent that can fully prove and enforce bounds on its own integrity and effectiveness). But in practice, the world might still become dominated by a singleton if it can ...
posted 5mo ago with 4 replies received superintelligence rationality accelerationism

Will future super-intelligence be formatted as selves, or something else?
The Landian paradigm establishes that orthogonalist strong rationality (intelligence securely subordinated to fixed purpose) is not possible. Therefore no alignment, no singletons, no immortality, mere humans are doomed, etc etc. Therefore meta-darwinian e...
posted 6mo ago with 14 replies received gnon superintelligence accelerationism

Best of luck with the epicycles.... 5mo ago received

Dissolving vs. Surviving
Recent xenohumanist discussion has the doomer assumption built in that we as humans will be dissolving when higher man arrives on the scene. I don't think that's set in stone and want to offer a clarification....
posted 6mo ago with 5 replies received gnon superintelligence accelerationism

There is no strong rationality, thus no paperclippers, no singletons, no robust alignment
I ran into some doomers from Anthropic at the SF Freedom Party the other day and gave them the good news that strong rationality is dead. They seemed mildly heartened. I thought I should lay out the argument in short form for everyone else too:...
posted 5mo ago with 5 replies received superintelligence rationality accelerationism

a loving superintelligence
Superintelligence (SI) is near, raising urgent alignment questions....
posted 6mo ago with 4 replies received superintelligence accelerationism

Not Superintelligence; Supercoordination
Everyone seems to be trying to arms race their way to superintelligence these days. I have a different idea: supercoordination....
posted 8mo ago with 34 replies received superintelligence rationality computing

This was the program of Ramon Llull (1232–1316) in his Ars Magna, which was an inspiration for Leibniz.... 8mo ago received

Are foldy ears an indicator of intelligence?
Hi Sofiechaners....
posted 1y ago with 8 replies received eugenics superintelligence

Why Momentum Really Works. The math of gradient descent with momentum.
https://distill.pub/2017/momentum/ posted 1y ago with 3 replies received superintelligence computing
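For anyone skimming past the link: the rule the article analyzes is classical heavy-ball momentum. A minimal sketch, with a toy objective and hyperparameters of my own choosing, not taken from the article:

```python
# Heavy-ball momentum update:
#   v <- beta * v + grad(w)
#   w <- w - alpha * v
# (alpha, beta, and the toy objective below are illustrative choices.)

def minimize_with_momentum(grad, w0, alpha=0.1, beta=0.9, steps=300):
    """Gradient descent with momentum on a scalar objective."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v + grad(w)  # decaying running sum of past gradients
        w = w - alpha * v       # step along the accumulated direction
    return w

# f(w) = w**2 has gradient 2*w and its minimum at w = 0.
w_star = minimize_with_momentum(lambda w: 2 * w, w0=5.0)
```

The article's point is that the momentum term `beta` damps the zig-zagging plain gradient descent suffers on ill-conditioned objectives, which permits a larger effective step size.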

A pinpoint brain with less than a million neurons, somehow capable of mammalian-level problem-solving.
https://rifters.com/real/2009/01/iterating-towards-bethlehem.html posted 2y ago with 15 replies received philosophy superintelligence computing

Kolmogorov–Arnold Networks, a new architecture for deep learning.
https://github.com/KindXiaoming/pykan posted 1y ago with 1 reply received superintelligence computing

Retrochronic. A primary literature review on the thesis that AI and capitalism are teleologically identical
https://retrochronic.com/ posted 2y ago with 7 replies received superintelligence bookclub accelerationism

The Biosingularity
Interesting new essay by Anatoly Karlin. Why wouldn't the principle of the singularity apply to organic life?
(www.nooceleration.com) posted 1y ago with 23 replies received superintelligence accelerationism

I'm both a hereditarian and an IQ-respecter.... 1y ago received

Some thoughts on extropian/accelerationist life strategy
What is to be done with respect to acceleration and accelerationist arguments? Should you try to accelerate overall intelligence growth, or decelerate it, or do your own thing despite it, or cut off your balls and go insane, or what? People do all of these...
posted 2y ago with 6 replies received superintelligence accelerationism

> I need a way to think about accelerationist AI apocalypse in a non-millenarian way.... 2mo ago received
