sofiechan home

Accelerationism

anon 0x419 said in #2467 2mo ago: 99

Has anyone read a lot of materials on Accelerationism that wants to have a good discussion on pros and cons of this theory?

anon 0x41a said in #2468 2mo ago: 22

What materials should we read, OP?

I’ll admit I’m coming in skeptical because all I've ever seen from Accelerationists is airy generalizations and handwaving. My usual experience online is that I say they're intellectual lightweights, and then someone says "Oh no, you're just going off of memes, the serious core material is actually good", and then I ask them for the intellectually serious core and they come up with some reason why they can't, or promise to get back to me later and ghost.

If you know the actual worthwhile texts which my previous interlocutors couldn't show me, then I'd be happy to give them a look. Accelerationism clearly has some cultural force, and sometimes that means there are valuable ideas in there, so I can't actually make myself stop looking for the pot of gold at the end of the rainbow.

anon 0x41b said in #2469 2mo ago: 33

I’ve read and re-read Xenosystems and I find Nick Land’s thought quite compelling in drawing out the implications of Nietzschean and Darwinian ideas, especially around AI and the attached metaphysical matters. Some of the later stuff people have tried, like u/acc and r/acc, is interesting, but it's basically not written up as systematic philosophy anywhere. Still, I find it very compelling as a worldview because it abstracts over the merely human and gets at fundamental things while also being quite relevant to our own time.

anon 0x41c said in #2470 2mo ago: 33

I do recommend readi 33

anon 0x420 said in #2476 2mo ago: 55

You can try your luck with https://retrochronic.com/
There is also Xenosystems Fragments on the internet archive, a compilation of blog posts, though that also leans a bit more towards NRx.

I'm r/acc, so if there's anything you want to ask about that, feel free. In my opinion u/acc could never get anywhere interesting because it avoided the topic of intelligence too much, which is crucial (the teleological identity of capitalism and AI). And l/acc sought to decouple "the productive forces" from Capital. The less said about that, the better.

Pros and cons to me both stem from its radical scope. R/acc has something to say about intelligence as a mathematical property of physical systems, and is consequently tangent to practically everything. The downsides are that its severity makes it hard to focus, it is nearly universally unappealing to normies, and it makes for the rapid appearance of difficult questions and sub-projects (e.g. a re-interpretation of natural history along accelerationist lines).

And yeah, it has always been niche at best, so never very rigorously formulated or collected anywhere. Those forges will have to be reignited.

referenced by: >>2477

anon 0x41b said in #2477 2mo ago: 66

>>2476
Retrochronic is great. Bravo to whoever put that resource together. Yes, intelligence as a natural phenomenon is THE central object of study here. Evolution as intelligence, capitalism as intelligence, life as intelligence, consciousness as intelligence, thermodynamics as intelligence, god as intelligence, dare I even suggest human reason as intelligence? There is a single unified phenomenon in all of these, very much related to our recent tangent on vitalism, if that word means anything.

What, then, is intelligence? Embodied representation of functional possibility (a space of hypotheses), selected and reinforced by some applied forcing function, leading to optimized fitness for exploiting the applied gradient.

In the root physicality, the forcing function is the flow of energy, and the space of representations is the space of lifeforms as self-reinforcing accelerated flow channels in that energy gradient. In abstract intelligence like the human mind or, hypothetically, AI, the forcing function is a demand for elegant consistency with a set of constraints like perception, memory, and primitive value instincts, and the space of hypotheses is the belief parameters that are fiddled to find and maintain this consistency. In capitalism, we have again the flow of energy resources, but with abstract firms instead of direct lifeforms.

All look the same in this way, though of course we can't quite build the stuff yet, so it's hard to say we understand it. What is the inner structure and what are the implications of this thing? In particular, accelerationism seems to be an exploration of the inherent teleology of intelligence as more fundamental than its derived imaginations about represented value etc. Few have rigor in this area.
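
The selection picture above can be sketched mechanically. Here is a minimal toy, entirely my own illustration (`forcing_function` and the target constant are invented for the demo): a population of scalar "hypotheses" is varied, then selected and reinforced by an applied forcing function until it fits the gradient.

```python
import random

# Toy illustration of "hypotheses selected and reinforced by a forcing
# function". All names here (forcing_function, the 0.7 target) are
# invented for the demo; nothing is taken from a real system.

def forcing_function(h, target=0.7):
    # The applied gradient: fitness peaks where the hypothesis matches
    # the constraint (a single scalar "circumstance").
    return -abs(h - target)

def select_and_reinforce(population, steps=200, noise=0.05):
    k = len(population)
    for _ in range(steps):
        # Variation: every hypothesis proposes a perturbed copy of itself.
        candidates = [h + random.gauss(0, noise) for h in population] + population
        # Selection: the forcing function prunes the hypothesis space.
        candidates.sort(key=forcing_function, reverse=True)
        population = candidates[:k]
    return population

random.seed(0)
optimized = select_and_reinforce([random.uniform(0, 1) for _ in range(8)])
print(max(forcing_function(h) for h in optimized))  # near 0: the population fits the gradient
```

The point is only structural: nothing in the loop knows the target; fitness emerges from variation plus selection under the forcing function, which is the shared shape the post ascribes to lifeforms, minds, and firms.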

I do think it's worth making rigorous and presenting systematically. What do we actually know here? I have my pieces, which I have not yet written down systematically, but I know others have good pieces too. It's only the most important cutting edge of philosophy at a crucial time in history.

referenced by: >>2478 >>2541

anon 0x421 said in #2478 2mo ago: 55

>>2477
> In the root physicality, the forcing function is ... In abstract intelligence like the human mind or hypothetically AI, ... In capitalism, ...

Is there a way to state this in terms of entropy / information theory that would hold across all these domains?

referenced by: >>2484 >>2490

anon 0x41b said in #2484 2mo ago: 44

>>2478
I don't know. Information theory, maybe. But you might need a physical information theory that is about configurations lining up with circumstances, and then also the reverse thermodynamic causality in life. I am not mathy enough to do this.

anon 0x420 said in #2485 2mo ago: 55

Better complexity theory is what you want (along with programmers who actually take logic seriously, but that's just a dream). Thomas Seiller is working on a program of Mathematical Informatics that will hopefully lead to, among other things, a more fine-grained and rigorous complexity theory that would also eventually be architecture-sensitive. On the physical side, one is more or less talking about the dynamic economics of open quantum systems, with intelligence as a mathematical property of physical systems, namely the maximization of future freedom of action (or diversity of future paths potentially taken by the system), which also corresponds to minimal total entropy production. This also naturally leads to links with connectionism.

That's all a long-winded way of saying that probably no new math is really necessary for this, and that the main issue is that there's little to no basic thinking going on. Interesting things happen in the meantime, though, like the singular learning theory view of programs as singularities of analytic varieties. Honorable mention to Transcendental Syntax, of course, but I don't want to sperg up the thread even more.

referenced by: >>2488

anon 0x41b said in #2488 2mo ago: 33

>>2485
Can you post links to this superior complexity theory? Everything I've seen from that field is pop-science "woah man, it sometimes does weird things", though I admit I haven't looked too hard.

Maximization of future freedom of action is an idea I've heard a lot about, but it doesn't seem fully convincing. Is there an original paper or something that makes the case rigorously and understandably? Post it top-level; it would make a good thread.

Can you say more about what you mean by programmers not taking logic seriously, and what you mean by no basic thinking happening?

referenced by: >>2499

anon 0x429 said in #2490 2mo ago: 33

>>2478
Let me propose a few definitions:

Intelligence is the energetic efficiency of a system. The more intelligent, the more efficient.

Wisdom is the survival probability of the system. The wiser, the less likely to go extinct (or defeated by competing systems).

In Aristotelian causality, intelligence is mainly the efficient cause, and wisdom is mainly the final cause of the system. Both of these causes shape the system's formal and material causes as secondary.

referenced by: >>2493 >>2494

anon 0x42c said in #2493 2mo ago: 44

>>2490

If intelligence is the energetic efficiency of a system, are LLMs getting more intelligent with increasing model size, or less? They seem quite energy-inefficient to me, even though there is surely some axis along which LLMs get more intelligent (in the colloquial sense).

What is the axis, for you, along which LLMs' capabilities change with increasing size?

referenced by: >>2501

anon 0x41b said in #2494 2mo ago: 33

>>2490
What is efficiency? Efficiency is a ratio of actual to ideal effect given resources consumed. What resources? What effect? What's the efficiency of a rock?

referenced by: >>2501

anon 0x420 said in #2499 1mo ago: 44

>>2488
I meant computational complexity, since what is of interest are the logico-computational properties of a certain interesting kind of physical systems.
https://www.seiller.org/HdR.pdf

Future freedom of action is from Wissner-Gross' causal entropic forces paper. It's only thermodynamic but can be given more solid quantum mechanical foundations. Rigorously *and* understandably is a bit of a tough ask, but the original paper isn't particularly complicated, and he has given talks on the topic.
https://youtu.be/ue2ZEmTJ_Xo?si=ZVw-sTltaa9lA0um
https://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
I might make a thread about it, that's not a bad idea. What's of interest to me about it is how closely it corresponds to Nick Land's treatment of orthogonality.
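
For anyone who wants to poke at the "future freedom of action" idea before reading the paper, here is a deliberately crude discrete caricature (my own construction, not Wissner-Gross's formalism, which uses causal path entropy over phase-space trajectories): an agent on a bounded 1-D walk that always steps toward the neighbor preserving the most distinct future paths drifts to the middle of the box.

```python
from functools import lru_cache

# Crude discrete caricature (my construction) of the "causal entropic
# force" idea: move toward whichever neighbor keeps the most distinct
# future paths open. The real paper works with path integrals over
# phase space; this only shows the qualitative behavior.

N, TAU = 10, 6   # positions 0..N between forbidding walls; path-counting horizon

@lru_cache(maxsize=None)
def n_paths(x, t):
    """Number of t-step walks from x that never leave [0, N]."""
    if not 0 <= x <= N:
        return 0
    if t == 0:
        return 1
    return n_paths(x - 1, t - 1) + n_paths(x + 1, t - 1)

def entropic_step(x):
    """Step toward greater future freedom of action (on ties: stay put)."""
    left, right = n_paths(x - 1, TAU), n_paths(x + 1, TAU)
    if left > right:
        return x - 1
    if right > left:
        return x + 1
    return x

x = 1
for _ in range(10):
    x = entropic_step(x)
print(x)  # 5: the walker settles in the middle of the box, where path diversity peaks
```

Starting near a wall, the walker climbs away from it and parks at the center, i.e. in the state from which the most futures remain reachable, which is the qualitative behavior the paper generalizes.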

As for the last couple of questions: between the money and the hype, AI/ML has become far too "hacky" a discipline. Lots of Rube Goldberg machines and technical results without much care for fitting them into a broader, more fundamental theory, or for the foundational notions and questions of interest. Despite the truckloads of money being dumped into this industry, you won't find many people interested in thinking deeply about what making something smarter even means. Programmers are users of logic (far too often, not even good ones), but they show little regard for it.

Let me put it this way: we are trying to build something that is capable of postulating and justifying logical and heuristic rules on its own, but hardly anyone has a clue how *we* have even managed to do that in the first place! Try explaining or justifying the use of modus ponens without using it or relying on it (which would make the explanation circular)! It's an old Wittgensteinian observation. Instead, one just stirs a big pot of linear algebra and hopes that the next trick will be good enough to produce Skynet, fully formed and fully equipped, like Athena from the head of Zeus. I'm not proposing mandatory philosophy classes at MIT or that GOFAI make a comeback, only that without accounting for the subjective dimensions of logic, if only implicitly, we'll be stuck with dead ends and an economic bubble, one big painful blind spot.

anon 0x429 said in #2501 1mo ago: 33

>>2493
Efficiency is energy efficiency. Humans are inefficient: it's not the brain itself, it's the total energy footprint of a human, including fancy vacations.

LLM size is quite technical and does not directly predict efficiency. A larger and more accurate model can often be more efficient.

>>2494
Efficiency in terms of a system's impact on the universe's negentropy.

referenced by: >>2505 >>2541

anon 0x41b said in #2505 1mo ago: 55

>>2501
>efficiency is impact on negentropy
This feels like a failed definition. A lightning bolt or forest fire is extremely efficient at burning up a lot of potential, and not at all intelligent. This whole “life is about increasing entropy” thing is wrong for this reason. Life is an epiphenomenon of energy flowing downhill, but that’s not its main feature.

I feel like intelligence as optimization power or something should be updated in light of instrumental convergence/nonorthogonality. Maximizing future freedom of action is interesting in this light but I still need to study that more.

referenced by: >>2507 >>2508 >>2541

anon 0x421 said in #2507 1mo ago: 33

>>2505
> This whole “life is about increasing entropy” thing is wrong for this reason ...

No one who's done work on this says "life is about increasing entropy," as if that were an explanation of life. The claim is that life is a dynamic structure that *decreases* entropy within a local boundary. This has the *effect* of increasing entropy more globally, which is why it's possible at all, but the life-specific aspect is all about *decreasing* entropy.

Once that concept is in place, the extension to intelligence becomes much more plausible.
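
The bookkeeping behind that local/global split is easy to make concrete with a refrigerator. The numbers below are my own (made up, but chosen to respect the Carnot bound): entropy leaves the cold interior, yet the interior-plus-room total still rises, as the second law requires.

```python
# Back-of-envelope bookkeeping for "decreases entropy locally, increases
# it globally". Illustrative numbers only; COP = 100/30 is below the
# Carnot limit 260/40, so the cycle is physically allowed.
Q_cold = 100.0   # J of heat pumped out of the cold interior
W      = 30.0    # J of work driving the pump
T_cold = 260.0   # K, interior temperature
T_hot  = 300.0   # K, room temperature

dS_local  = -Q_cold / T_cold                  # interior entropy change: negative
dS_global = (Q_cold + W) / T_hot + dS_local   # interior + room: positive

print(dS_local, dS_global)  # about -0.385 J/K locally, +0.049 J/K overall
```

Life's trick, on this view, has the same sign pattern: a locally negative entropy term paid for by a larger positive one outside the boundary.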

referenced by: >>2511

anon 0x429 said in #2508 1mo ago: 55

We do need a definition of intelligence that isn't going to be anthropomorphic. I'm not pretending this is a solved problem. We need to go beyond trying to play Turing's imitation game as AI, and pursue intelligence in its fullest.

>>2505
A lightning bolt won't go and dig up the deposits of coal, and burn them at massive scale, re-extracting stored energy.

referenced by: >>2511

anon 0x41b said in #2511 1mo ago: 44

>>2507
I'm still not convinced. Yes decreasing local entropy is something life does, but any kind of engine does that or can do it (for example, charging a battery). You can say only life builds such engines, which is true, but it still feels like a distraction. >>2508 is right that we need a non-anthropomorphic definition of intelligence, but I have not seen one. Anyone want to take a crack at starting a new thread on this topic?

referenced by: >>2512

anon 0x421 said in #2512 1mo ago: 33

>>2511
You're straw-manning. The claim is not that decreasing entropy is a sufficient condition, just a necessary one. Of course life has further requirements. That doesn't make the entropy consideration unimportant, especially if it can be related to intelligence via information.

anon 0x442 said in #2538 1mo ago: 00 55

Hyperstition

anon 0x42c said in #2541 1mo ago: 11

>>2501
Seems like we have very different intuitions about how efficient LLMs and humans are. Could you flesh out your thoughts, or maybe even your calculations, of why you think that? It sounds to me like what makes LLMs more efficient than humans, for you, is just the energy that the system consumes.
It also seems like you have some further prerequisites for a system to be intelligent. Otherwise, as already noted by >>2505, the sun seems way more intelligent than a human by your definition.

I usually think of intelligence as a system's ability to model its environment. Since no system is ever able to fully model the environment, the question becomes which heuristics are better at modeling it. However, this does not capture what >>2477 is looking for.
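
One way to make "ability to model its environment" operational, at least for toy cases, is to score heuristics by their predictive log-loss on the environment's output stream. A sketch (my own toy, nothing standard is being claimed): a heuristic that tracks the statistics of a biased coin beats one that ignores them.

```python
import math
import random

# Toy operationalization (mine, not standard) of "intelligence as the
# ability to model the environment": score heuristics by mean log-loss
# on a stream they must predict one step ahead.

random.seed(1)
stream = [1 if random.random() < 0.8 else 0 for _ in range(1000)]  # biased coin

def log_loss(predict, data):
    """Mean negative log-likelihood of the data; lower = better model."""
    total = 0.0
    for i, x in enumerate(data):
        p = predict(data[:i])              # the heuristic sees only the past
        p = min(max(p, 1e-9), 1 - 1e-9)    # guard against log(0)
        total += -math.log(p if x == 1 else 1 - p)
    return total / len(data)

uniform   = lambda past: 0.5                                # ignores the environment
frequency = lambda past: (sum(past) + 1) / (len(past) + 2)  # Laplace-smoothed tracker

print(log_loss(uniform, stream), log_loss(frequency, stream))
# the tracking heuristic scores strictly lower (better) than the ignorant one
```

Lower mean log-loss means the heuristic compresses and predicts the environment better, which is one non-anthropomorphic reading of "better model", though it still presupposes a chosen environment and data channel.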

referenced by: >>2548

anon 0x429 said in #2548 4w ago: 11

>>2541
> Seems like we have very different intuitions of how efficient LLM's and humans are.

Here you can find some estimates of LLM efficiency:
https://www.promptprint.org/

A human's energy footprint can be estimated the same way. But human thinking energy has to be offset against all of a human's other footprint.

> the sun seems way more intelligent than a human by your definition.

The sun won't last very long on its own. Humans might have a chance of beating the sun.

> I usually think of intelligence as a systems ability to model its environment.

Yes, modeling helps, both intelligence as well as wisdom.
