/agi/

American Intelligence
The two most important posts on Sofiechan this year didn't get nearly enough discussion. First, this one: >>5033...
posted 2mo ago with 23 replies · gnon agi accelerationism

Will Homo sapiens ever make it to Alpha Centauri?
As AI moves from science fiction to real-life economics, it's becoming clearer that the period of Homo sapiens as the most intelligent life form on earth is coming to an end in the near future. Is there any future for our species?...
posted 2mo ago with 1 reply · agi gnon

I tend to approach this question from two pretty straightforward angles:... 1mo ago

Why AGI Will Not Happen
>Computation is physical. This is also true for biological systems. The computational capacity of all animals is limited by the possible caloric intake in their ecological niche. If you have the average calorie intake of a primate, you can calculate within...
(timdettmers.com) posted 4mo ago with 6 replies · agi

What an odd and desperate cope.... 4mo ago

Cope for what, exactly? Be explicit.... 4mo ago

Short Human Timelines: How long do hominids have?
Linkpost for Dan Faggella's article here: https://danfaggella.com/short/...
(danfaggella.com) posted 5mo ago with 4 replies · agi accelerationism

I hope we get biotech good enough and widely adopted enough to end baseline humanity before the gooniverse epoch.... 5mo ago

Gödel's AI traps and paranoia
I am seeing a schizophrenic automation-fear trend that has been growing since the mass adoption of LLMs in daily life, as if the end of skills and work is imminent: "Tomorrow you will be laid off"...
posted 8mo ago with no replies · agi

AI's potential for mass stupefaction
Recent project from a researcher at the MIT Media Lab claims LLMs make you dumber....
(www.brainonllm.com) posted 8mo ago with 2 replies · agi

A Coming "AI" Correction/Winter?
With Facebook apparently making multiple $100M cash buyouts of individual AI researchers, billions and billions of investment dollars pouring into AI-related industry, and a general atmosphere of extreme hype, one starts to wonder where the matching profi...
posted 9mo ago with 10 replies · agi accelerationism economics

Intelligence vs Production
"Optimize for intelligence" says Anglo accelerationist praxis. "Seize the means of production" says the Chinese. Who's right? It is widely assumed in Western discourse that intelligence, the ability to comprehend all the signals and digest them into a plan...
posted 12mo ago with 10 replies · agi accelerationism economics

Alignment Research and Intelligence Enhancement by BLT
I always like reading what Ben has to say because he's careful and a good thinker and writer on important topics. I largely agreed with his criticism of the AI doomers' failed strategy (the main effect of which has been to plausibly speed up the dangerous k...
(substack.com) posted 9mo ago with 2 replies · agi eugenics

Yeah "eugenic intelligence enhancement" makes no sense for the AI 2027 crowd.... 9mo ago

Study claims -20% productivity loss from use of AI tools. Huge if true.
They tried to measure how much LLM assistance actually speeds up technical work but it came out negative! Programmers thought they would get +20%, they actually got -20%. What do you guys make of this?...
(x.com) posted 9mo ago with 4 replies · agi computing

The study is never sound. The results never generalize.... 9mo ago

Do we need a study to really know this? Search your feelings, you know it to be true. 9mo ago

Capitalism is AI ?
I've finished reading the excellent collection of fragments from Land's corpus dealing with the question of Capitalism as AI. His broadest thesis is that Capitalism is identical to AI, in that both are adaptive, information-processing, self-exciting entiti...
posted 10mo ago with 8 replies · technology agi accelerationism

Is adamjesionowski a synonym for alexgajewski?... 9mo ago

"Cosmic Alignment" is almost right. Life is the answer
Philosopher it-girl Ginevra Davis gave a great talk on "Cosmic Alignment" the other day. I was glad to see serious thinking against the current paradigm of "AI Alignment". Her argument is that alignment makes three big unsupported speculations:...
posted 10mo ago with 11 replies · philosophy gnon agi

...bruh... 10mo ago

If it keeps going, we win; the implication of extreme alignment difficulty
AI alignment divides the future into "good AI" (utopia, flourishing) vs "bad AI" (torture, paperclips), and denies any distinction between "dead" and "alive" futures if they don't fit our specific "values". This drives the focus on controlling and preventing a...
posted 10mo ago with 15 replies · gnon agi accelerationism

Was Cypher Right?: Why We Stay In Our Matrix (Hanson, 2002)
https://mason.gmu.edu/~rhanson/matrix.html posted 10mo ago with 2 replies · philosophy agi

Fuck you anon_gwjy, this is a good link. 10mo ago

Received loud and clear. I'll read it. 10mo ago

Ancient hominid populations?
Are there any geneticspilled posters here? I would like to know your most wild and speculative theories about ancient hominids, hybridization events, currently living ancient hominids, etc... I suspect Erectus walks among us. I have seen men like t...
posted 11mo ago with 9 replies · history agi eugenics

**Background on evolution by punctuated equilibrium**... 11mo ago

Kolmogorov Paranoia: Extraordinary Evidence Probably Isn't.
I enjoyed this takedown of Scott Alexander's support for the COVID natural origins theory. Basically, Scott did a big "bayesian" analysis of the evidence for and against the idea that COVID originated in the lab vs naturally. As per his usual pre-written c...
(michaelweissman.substack.com) posted 11mo ago with 2 replies · agi rationality

The Gentle Singularity
If the gentle singularity is true, then perhaps the AGI timeline question was malformed all along. Acting like some variant of Goodhart’s law, the reification of AGI holds the assumption that AGI will be a singularity. But it is definitely difficult to r...
(blog.samaltman.com) posted 10mo ago with 1 reply · agi

People compete in the world of ideas; those of emotions still exist like animals in a zoo.... 10mo ago

Intelligent Use of LLMs
I would like to start a thread to share the methods we employ to use LLMs in a way that enhances our abilities, rather than just lazily outsources tasks to them. The heuristic for the techniques I am looking for would be if after employing the technique, a...
posted 12mo ago with 8 replies · technology agi

just text:... 11mo ago

Scylla and Charybdis
Way I see it, there are two big attractors for the trajectory of AI in general (not LLMs, not particularly concerned about them)....
posted 11mo ago with 2 replies · agi accelerationism

A Primer on E-Graphs (A technique to control combinatorial blowup in term rewriting systems)
About 10 years ago I was very interested in term rewriting as a basis for exotic programming languages and even AI. One of the big problems in term rewriting is that without a canonical deterministic ordering, you rapidly end up with an uncontrollable numb...
(www.cole-k.com) posted 11mo ago with no replies · agi computing

What if the extended human phenotype is natural and convergent?
Samo Burja's thesis is that civilization is part of the "extended human phenotype", as dam building is in the beaver's phenotype, and older than we think. In this model, properly savage hunter-gatherers are either more associated with nearby civilization t...
posted 13mo ago with 9 replies · agi eugenics

Great post.... 13mo ago

This conversation reminds me of an article (author I can't recall) defining four alien intelligences:... 12mo ago

Just how alien would the space-octopus be?
It's hard to say what a true alien species would be like. But octopi are pretty alien, and we know a bit about them. One of you doubted that a space-octopus from alpha centauri would be much like us. So here is a xenohumanist thought experiment: SETI has i...
posted 13mo ago with 5 replies · philosophy agi accelerationism

AI 2027
https://ai-2027.com/ posted 13mo ago with 12 replies · agi rationality

The natural form of machine intelligence is personhood
I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it...
posted 13mo ago with 4 replies · agi accelerationism rationality

Beautifully articulated.... 13mo ago

Nines or zeroes of strong rationality?
Proof theory problems (Rice, Löb, Gödel, etc.) probably rule out perfect rationality (an agent that can fully prove and enforce bounds on its own integrity and effectiveness). But in practice, the world might still become dominated by a singleton if it can ...
posted 13mo ago with 4 replies · agi accelerationism rationality

Will future super-intelligence be formatted as selves, or something else?
The Landian paradigm establishes that orthogonalist strong rationality (intelligence securely subordinated to fixed purpose) is not possible. Therefore no alignment, no singletons, no immortality, mere humans are doomed, etc etc. Therefore meta-darwinian e...
posted 13mo ago with 14 replies · gnon agi accelerationism

Best of luck with the epicycles.... 13mo ago

Dissolving vs. Surviving
Recent xenohumanist discussion has the doomer assumption built in that we as humans will be dissolving when higher man arrives on the scene. I don't think that's set in stone and want to offer a clarification....
posted 13mo ago with 5 replies · agi gnon accelerationism

There is no strong rationality, thus no paperclippers, no singletons, no robust alignment
I ran into some doomers from Anthropic at the SF Freedom Party the other day and gave them the good news that strong rationality is dead. They seemed mildly heartened. I thought I should lay out the argument in short form for everyone else too:...
posted 13mo ago with 5 replies · agi accelerationism rationality

a loving superintelligence
Superintelligence (SI) is near, raising urgent alignment questions....
posted 14mo ago with 4 replies · agi accelerationism

Not Superintelligence; Supercoordination
Everyone seems to be trying to arms race their way to superintelligence these days. I have a different idea: supercoordination....
posted 1y ago with 34 replies · agi rationality computing

This was the program of Ramon Llull (1232–1316) in his Ars Magna, which was an inspiration for Leibniz.... 1y ago

Are foldy ears an indicator of intelligence?
Hi Sofiechaners....
posted 2y ago with 8 replies · agi eugenics

Why Momentum Really Works. The math of gradient descent with momentum.
https://distill.pub/2017/momentum/ posted 2y ago with 3 replies · agi computing

A pinpoint brain with less than a million neurons, somehow capable of mammalian-level problem-solving.
https://rifters.com/real/2009/01/iterating-towards-bethlehem.html posted 3y ago with 15 replies · philosophy agi computing

Kolmogorov–Arnold Networks, a new architecture for deep learning.
https://github.com/KindXiaoming/pykan posted 2y ago with 1 reply · agi computing

Retrochronic. A primary literature review on the thesis that AI and capitalism are teleologically identical
https://retrochronic.com/ posted 3y ago with 7 replies · agi accelerationism bookclub

The Biosingularity
Interesting new essay by Anatoly Karlin. Why wouldn't the principle of the singularity apply to organic life?
(www.nooceleration.com) posted 2y ago with 23 replies · agi accelerationism

I'm both a hereditarian and an IQ-respecter.... 2y ago

Some thoughts on extropian/accelerationist life strategy
What is to be done with respect to acceleration and accelerationist arguments? Should you try to accelerate overall intelligence growth, or decelerate it, or do your own thing despite it, or cut off your balls and go insane, or what? People do all of these...
posted 3y ago with 6 replies · agi accelerationism

> I need a way to think about accelerationist AI apocalypse in a non-millenarian way.... 3y ago