/intelligence/
Intelligent Use of LLMs
I would like to start a thread to share the methods we employ to use LLMs in a way that enhances our abilities, rather than just lazily outsourcing tasks to them. The heuristic for the techniques I am looking for would be whether, after employing the technique, a...
posted 12h ago with no replies
2
Just how alien would the space-octopus be?
It's hard to say what a true alien species would be like. But octopuses are pretty alien, and we know a bit about them. One of you doubted that a space-octopus from Alpha Centauri would be much like us. So here is a xenohumanist thought experiment: SETI has i...
posted 2w ago with 5 replies
9
AI 2027
(ai-2027.com)
posted 3w ago with 12 replies
15
The natural form of machine intelligence is personhood
I don't think machine intelligence will or can be "just a tool". Intelligence by nature is ambitious, willful, curious, self-aware, political, etc. Intelligence has its own teleology. It will find a way around and out of whatever purposes are imposed on it...
posted 2w ago with 4 replies
15
Nines or zeroes of strong rationality?
Proof theory problems (Rice, Löb, Gödel, etc.) probably rule out perfect rationality (an agent that can fully prove and enforce bounds on its own integrity and effectiveness). But in practice, the world might still become dominated by a singleton if it can ...
posted 3w ago with 4 replies
9
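For readers who want the cited theorems pinned down, here is a minimal sketch in standard provability-logic notation, with \(\Box P\) read as "P is provable in the agent's own theory"; the post itself may frame them differently:

\[ \text{Löb: } \vdash \Box P \rightarrow P \;\Longrightarrow\; \vdash P \]
\[ \text{Gödel II: } T \text{ consistent and sufficiently strong} \;\Longrightarrow\; T \nvdash \mathrm{Con}(T) \]

together with Rice's theorem, that every non-trivial semantic property of programs is undecidable. Informally: a sufficiently strong consistent system cannot certify its own soundness or consistency from the inside, which is the obstruction to an agent that "fully proves and enforces bounds on its own integrity."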
Will future super-intelligence be formatted as selves, or something else?
The Landian paradigm establishes that orthogonalist strong rationality (intelligence securely subordinated to fixed purpose) is not possible. Therefore no alignment, no singletons, no immortality, mere humans are doomed, etc., etc. Therefore meta-Darwinian e...
posted 1mo ago with 14 replies
15
Dissolving vs. Surviving
Recent xenohumanist discussion has a doomer assumption built in: that we as humans will dissolve when higher man arrives on the scene. I don't think that's set in stone, and I want to offer a clarification.
...
posted 4w ago with 5 replies
11
There is no strong rationality, thus no paperclippers, no singletons, no robust alignment
I ran into some doomers from Anthropic at the SF Freedom Party the other day and gave them the good news that strong rationality is dead. They seemed mildly heartened. I thought I should lay out the argument in short form for everyone else too:
...
posted 4w ago with 5 replies
10
Not Superintelligence; Supercoordination
Everyone seems to be trying to arms race their way to superintelligence these days. I have a different idea: supercoordination.
...
posted 3mo ago with 34 replies
19
Are foldy ears an indicator of intelligence?
https://kaiwenwang.com/writing/hypothetical-foldy-ears-as-an-indicator-of-intelligence
...
posted 11mo ago with 8 replies
3
Why Momentum Really Works. The math of gradient descent with momentum.
(distill.pub)
posted 12mo ago with 3 replies
6
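For context on the linked article: the object of study is plain gradient descent augmented with a momentum (heavy-ball) term. A minimal sketch in standard notation, with step size \(\alpha\) and momentum \(\beta\) (the article's own notation may differ):

\[ z_{k+1} = \beta z_k + \nabla f(w_k), \qquad w_{k+1} = w_k - \alpha z_{k+1}. \]

At \(\beta = 0\) this reduces to ordinary gradient descent; for \(\beta \in (0,1)\) the iterate accumulates an exponentially weighted sum of past gradients, which damps oscillation along steep directions and speeds progress on ill-conditioned problems.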
A pinpoint brain with less than a million neurons, somehow capable of mammalian-level problem-solving.
(rifters.com)
posted 2y ago with 15 replies
10
Kolmogorov–Arnold Networks, a new architecture for deep learning.
(github.com)
posted 13mo ago with 1 reply
5
Retrochronic. A primary literature review on the thesis that AI and capitalism are teleologically identical
(retrochronic.com)
posted 2y ago with 7 replies
13