
Xeno Futures Research Unit

xenofuturist said in #3335 2w ago: received

I've decided to organize an independent research project with some young men back home. I've drafted a brief mission statement; let me know if you guys have any thoughts, suggestions, or directions I could take this. Obviously ambitious, but the initial goal will be to limit ourselves to looking at AGI as a flow of energy.

XENO FUTURES — concept brief

Context:
Most discussions of AI are abstract to the point of delusion—floating in metaphors, metrics, or market pitches. We propose starting from the opposite end: the physical, the concrete, the thermodynamic.

Core Idea:
AI is not just software or intelligence—it is a physical process. It consumes energy, requires infrastructure, reshapes labor, and reorganizes information. Intelligence, in this view, is a higher-order pattern emerging from energy flows, computation, and material constraints.

Aim:
To build a framework that treats AGI as a metabolic, ecological, and systemic phenomenon, not as a disembodied mind or market tool. From there, we trace its implications for automation, human capital, cultural formation, political structure, and institutional decay or emergence.

Initial Axes of Research:

Energy & Infrastructure: power use, grid strain, entropy costs.

Computation & Information: cybernetic limits, feedback, storage, decay.

Labor & Capital: automation as displacement and reorganization.

Politics & Control: what systems emerge to manage or embody AGI.

Culture & Perception: aesthetic, symbolic, and social adaptation to intelligent machines.

Structure:
We aim to build a small, tight research community. Philosophy will orient the project. We seek collaborators from:

Engineering/physics

Computer science/information theory

Economics/finance

Political science

Cultural analysis or speculative design

Name: Xeno Futures — for futures shaped by alien forms of cognition growing out of our industrial base.

This is a project about seeing clearly. About tracing intelligence not as a miracle, but as a system with mass, momentum, and metabolism.

Goal: Written production. Conceptual clarification anchored in reality.

referenced by: >>3346


anon_lida said in #3336 2w ago: received

This sounds very interesting. I look forward to seeing more!


anon_biku said in #3337 2w ago: received

Good


anon_lida said in #3344 2w ago: received

The idea of divorcing your investigation from speculations on consciousness (“mind”) and investor returns (“market”) is great. These are two of the big distortionary ideological lenses that make so much AI discourse into crap.

The financiers hallucinate a perfect technology that lets them cut labor out of the picture and create pure accelerating financial returns for the permanent ownership class. The priests and cultists of liberal mysticism hallucinate a metaphysical subjective specialness of the individual mind-being beyond all rational critique on which they can hang their incoherent moral commitments and copes.

Both of these are fake idols, not scientific or natural facts. Choosing to look at the thing purely in terms of outside physical facts instead of subjective/inside-view accounting details sidesteps the whole question and denies them any soil to root in. Industrial intelligence as a sort of biology that can be examined as a living system will be interesting. Once you have basic energy and operational flows mapped to some extent, one of the big focuses will have to be one level up: the competitive reproductive dynamics of the ecosystem, without recourse to the specific concepts of finance. How do the darwinist life-laws of Gnon apply to industrial intelligence as a form of life?

One book I can recommend in this general vein is Manuel DeLanda's "A Thousand Years of Nonlinear History," which is an attempt to write a de-anthropocentric history of modernity. And then obviously Nick Land's work, which from the general aesthetic I assume you are already studying. I would also strongly recommend Yudkowsky's views: though I think he's wrong in taking an anti-darwinist line on the nature and consequences of intelligence, he at least thinks deeply about the topic without financial distortions and with moralistic distortions that are at least well reasoned. The first half of Teilhard de Chardin's "Phenomenon of Man" is also interesting as an attempted naturalistic and scientific account of the conscious intelligence of life.

referenced by: >>3345


xenofuturist said in #3345 2w ago: received

>>3344
If I recall correctly, Gwern has a great post about corporations not being subject to a Darwinian selection process because they don't have a unit of reproduction or of physical cloning. My intuition is that the question may be different for the intelligent industrial process as a whole. What that selection may look like I'm not quite sure yet.

referenced by: >>3348


xenofuturist said in #3346 2w ago: received

>>3335
Slight update on our mission statement: some clarifications and an attempt at formal rigor.

The discourse on artificial intelligence is unmoored. It speculates on disembodied minds and abstract ethics while ignoring the material substrate. This is a critical analytical error.

We posit a new framework: AI is a physical system with a metabolism. Its body is the global network of data centers, fiber optic cables, and mineral supply chains. Its metabolism is the constant consumption of energy and raw materials required to sustain computation, maintain its structure, and expand its physical plant.

Our method is twofold. We are building the Atlas of AI Metabolism: a living conceptual framework to model the system's flows and constraints. This Atlas is populated and tested by our Field Reports: empirical case studies that measure specific metabolic processes—the embodied energy of a single GPU, the water consumption of a data center, the entropy of a training run. The specific informs the general; the general guides the specific.
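
As a minimal illustration of how a single Field Report might be recorded (the field names, units, and figures below are placeholders for the sort of schema we have in mind, not a settled one):

```python
from dataclasses import dataclass

@dataclass
class FieldReport:
    """One empirical case study of a specific metabolic process.

    All fields are illustrative placeholders, not a fixed schema.
    """
    subject: str                 # e.g. "one datacenter GPU", "training run X"
    embodied_energy_mj: float    # energy sunk into manufacture, megajoules
    power_draw_kw: float         # operating power, kilowatts
    water_use_l_per_day: float   # cooling water consumption, litres per day
    notes: str = ""

# A hypothetical report; every number here is a placeholder, not a measurement.
example = FieldReport(
    subject="hypothetical accelerator in a hypothetical datacenter",
    embodied_energy_mj=4000.0,
    power_draw_kw=0.7,
    water_use_l_per_day=20.0,
    notes="placeholder figures for schema illustration only",
)
```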

This approach replaces speculation with measurement. The true constraints on AI are not algorithmic but thermodynamic. Its trajectory will be determined by energy grids, resource availability, and thermal limits, not by programmer intent alone. To understand this metabolism is to identify systemic risk, new geopolitical pressure points, and the actual costs of machine cognition.

We reject inquiries into machine consciousness as speculative. For this project, intelligence is a measurable physical property of a metabolic system: the capacity to consume energy to construct improbable order—information—for the purpose of self-regulation and adaptation. We do not measure 'understanding'; we measure its concrete physical manifestations. These metrics include its metabolic efficiency in joules per useful operation, its rate of adaptation to environmental constraints, and its scale.

A "useful operation" is not a universal constant; it is a context-dependent metric whose meaning is defined by the specific analytical goal. Its scope can range from a low-level hardware function like a floating-point operation (FLOP), to the completion of a standardized benchmark, to the successful optimization of a real-world energy grid. Therefore, any rigorous analysis of efficiency must first precisely define the "useful" task being measured. We define scale across three dimensions: the magnitude of its direct energy throughput, the density of its computational substrate, and critically, the reach of its exosomatic command over external systems of energy and matter.
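
A minimal sketch of the efficiency metric, with the task definition made explicit rather than assumed (all numbers below are placeholders):

```python
def joules_per_useful_op(energy_joules: float, useful_ops: float) -> float:
    """Metabolic efficiency: joules consumed per useful operation.

    `useful_ops` must be counted against an explicitly chosen task
    definition (FLOPs, benchmark items completed, grid-dispatch
    decisions, ...), per the framework above.
    """
    if useful_ops <= 0:
        raise ValueError("define the 'useful' task before measuring efficiency")
    return energy_joules / useful_ops

# The same energy budget yields different "efficiencies" under different
# task definitions, which is why the task must be fixed first.
energy = 3.6e6  # one kilowatt-hour, in joules (placeholder budget)
print(joules_per_useful_op(energy, useful_ops=1e15))  # per FLOP
print(joules_per_useful_op(energy, useful_ops=200))   # per benchmark item
```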

The objective is not a collection of essays. It is to build a permanent observatory. A durable institution for monitoring this emergent, planetary-scale metabolism. We aim to create the definitive resource for analyzing the physical reality of artificial intelligence.

referenced by: >>3348 >>3389


anon_lida said in #3348 2w ago: received

>>3345
Correct. If there are units of selection and reproduction they are not straightforward. This may be a transitional period before they become straightforwardly organized by necessity, or maybe life will stay in the abstracted state. But you could replace genes with paradigmatic organizational ideas and get an interesting darwinian lens. If strategic memes are the unit of genetic coding and selection, then most of the selection happens under intelligent guidance. The memes that don't behave like that but instead behave like genes are the pre-rational foundational ideologies, which are not subject to calculating intelligence the same way. Those face some selection, but it is very strange.

>>3346
Good stuff. What's your website where I can read the whole pitch, direct people to, and expect to see results?

referenced by: >>3349


xenofuturist said in #3349 2w ago: received

>>3348
I intend on having a website up in the next two weeks. Still working on fleshing out the plan and the vision but things are falling into place.


anon_kenu said in #3350 2w ago: received

> "the thermodynamic"
is this beff?


anon_tofi said in #3383 6d ago: received

i'm skeptical. consider trying to do a similar analysis for social media (or the internet as a whole!) a decade ago. it's all physical in some sense! you can trace the cables and the datacenters and the number of bits processed. it's just that none of that actually matters, because the cultural component is operating in dimensions that you're not able to track with such analysis.

ai will be like that, but much more so. i'd be more excited about a mind-map of modern AI, or various ways of categorizing AI mental states. those are the *new* foundations.

referenced by: >>3387


anon_lida said in #3387 5d ago: received

>>3383
This is a good caveat but I think your extensions are wrong. "A mind map of modern AI"? What AI do we have now that's going to be foundational to anything? LLMs are a temporary paradigm that hasn't even proven to be economically viable, and they only have a "mind" to map in the loosest sense. Do you mean the larger ecology of data flows, compute infrastructure, models, algorithmic research, memetic narratives, and cybernetic feedback loops which is the emerging actual reality? That sounds more like OP's paradigm. Any analysis that projects a "mind" onto actually existing AI is at great risk of drowning in bullshit.

As for social media, I think there is a good physicalist analysis to be made, given your caveat: it's not just about cables. It's about data flows. What can you say about the world given social media from the bare physical fact that everyone has a high bandwidth connection into and often out of their personal lives with the Internet, which is itself a global network? A lot, I think. Bring in speculations about what kinds of organizations will grow up to gain power and position in that world (eg how will propagandists and companies use this) and you have even more. "The cultural component" is largely an epiphenomenon of that fact. What important cultural phenomena are due to new internal developments of the abstract world of social media beyond the bare facts of connectivity and controllability? How would we predict those kinds of things in the case of AI?

There's a version of your challenge that is good. Social media brings forth new "forms": the feed, the post, the influencer, the grifter, the platform, the groupchat, the forum, the blog, etc. These sorts of concepts are good to map. What are the emerging such concepts in AI? The chatbot, the "agent", the classifier, the foundation model, the training run, etc. OP should include these forms in his map of the metabolic ecosystem of AI. But it is precisely in deferring and avoiding the question of interiority that his paradigm has merit.


anon_nuby said in #3389 5d ago: received

>>3346
I think this is an interesting angle to take, but in focusing on the pure thermodynamics, there's a risk of missing some of the bigger picture. Studying the inputs to artificial intelligence right now may be possible using public data (if it's all private, then I guess the whole thing is skunked right off the bat), but by tracking just the energy input, the bits transmitted between data centers, and the raw FLOPs of computation, I'm not sure you're getting close to the real-world impact of the technology.

You can roughly see the increasing value of AI by looking at the rate of adoption and use by corporations and individuals. As the models get better, or "smarter," there is greater uptake both by individual users and within corporate automated or distributed systems. To a certain extent, this flows through to the revenue collected by artificial intelligence companies. But, just mapping the computation, are you picking up on what's at stake here? An output that is higher signal and higher reliability, that, in short, represents the work of a greater intelligence, is obviously of more import than a low-IQ string of text spit out by a second-tier model.

The standard ways of measuring artificial intelligence are pre-training loss and benchmark accuracy, which are both, in their own ways, measurements of how well a system can compress and effectively model data. I believe that it's this measure of IQ that is really the vital factor when thinking about AI. The current paradigm departs somewhat from Land's vision, as he puts it in Meltdown, of a distributed network of competing corporations or vaguely corporate entities. The current AI paradigm, which has been staggeringly effective, is based on the development of large discrete models, which are refined in-house at Frontier Labs and then deployed as products to raise capital for subsequent models. Here, the existence and uptake of AI in the wider economy is almost irrelevant, except insofar as it provides cash and the ability to raise capital to the Frontier Labs which are providing these models. In a very real sense, the only thing that matters is what is going on at the foremost Frontier AI lab (here one can probably just look at OpenAI). Land correctly identifies a feedback loop of techno-capital intensification that leads to the development of more and more sophisticated AI systems, but he, of course, could not see the specific institutional landscape within which this AI explosion has ended up unfolding.
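
To make the compression framing concrete: per-token cross-entropy loss converts directly into bits per byte of the modeled text. Here's a minimal sketch of that standard conversion (the numbers are placeholders, not measurements of any real model):

```python
import math

def bits_per_byte(loss_nats_per_token: float, bytes_per_token: float) -> float:
    """Convert per-token cross-entropy loss (in nats) to bits per byte.

    Lower is better: this is the code length an ideal arithmetic coder
    driven by the model would need, i.e. its compression performance.
    """
    bits_per_token = loss_nats_per_token / math.log(2)
    return bits_per_token / bytes_per_token

# Hypothetical model: 2.0 nats/token on text averaging 4 bytes/token.
print(bits_per_byte(2.0, 4.0))  # ~0.72 bits/byte, vs 8 bits/byte for raw text
```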

The flywheel which determines the future of artificial intelligence and of the human economy broadly is simply the ability of Frontier Labs to build models that can aid in research, deploy them internally, and use those models to build even better models, which can then aid research even more. When the intelligence explosion happens, it will probably not appear as some cyberpunk Landian nightmare, but rather as the emergence of some alien quasi-god in one data center, under the control of one specific American corporation. From there, when one has a superintelligent AI researcher, superintelligent coder, or even general superintelligence, the lab or the national government that controls the lab can choose to restructure the economy or the geopolitical situation at will.

referenced by: >>3390 >>3391


anon_nuby said in #3390 5d ago: received

>>3389
All that said, while I have some methodological disagreements with the specific angle your pitch takes, I think it's broadly in the correct direction and is incredibly important work. There is almost no work being done on the concrete implications of AI progress, even as capabilities race ahead. Smart anons need to be stepping forward to fill this gap.


anon_lida said in #3391 5d ago: received

>>3389
>models, which are refined in-house at Frontier Labs and then deployed as products to raise capital for subsequent models.
The thing you are missing is how much of this is a bubble. They are *raising* capital, not earning it, and that makes a huge difference. When that bubble subsides, what will be left over is the actual techno-economic feedback loops which Land characterized, which may look quite different.

>When the intelligence explosion happens, it will probably not appear as some cyberpunk Landian nightmare, but rather as the emergence of some alien quasi-god in one data center, under the control of one specific American corporation.
>under the control
this is an extremely important speculation for which there is no evidence. The idea of artificial intelligence at least can point to human intelligence as proof of concept. Where do you get the idea that it can be controlled by entities outside itself or even by itself? In any case if you are going to use that premise to respond to Land you have to give more of an argument, because the entire thrust of his work, as well as that of Yudkowsky, is that it cannot.

referenced by: >>3394


phaedrus said in #3394 5d ago: received

>>3391
>They are *raising* capital, not earning it, and that makes a huge difference. When that bubble subsides, what will be leftover is the actual techno-economic feedback loops which Land characterized, which may look quite different.
This goes back to what I was saying earlier about the impact of model intelligence on the overall state of the AI economy, more than the techno-economic feedback loops. If all the capital dries up, which is possible, albeit unlikely, then what will be left is the model itself—so roughly a few terabytes of weights. Anyone seriously looking at the AI economy should be considering not just the superficial revenue or power input or whatever, but what is actually contained in these models. Already, models like OpenAI's o3 are quite intelligent compared to the average American in their ability to discuss really any topic of concern. While o3 is not really an economically transformative model yet, it's clearly on the path to becoming such a model.

>The idea of artificial intelligence at least can point to human intelligence as proof of concept. Where do you get the idea that it can be controlled by entities outside itself or even by itself? In any case if you are going to use that premise to respond to Land you have to give more of an argument, because the entire thrust of his work, as well as that of Yudkowsky, is that it cannot.
Again, here I think we really have to look at what is concretely happening today. From what I see, there is a great hesitance amongst theorists of artificial intelligence, including prominently fans of Yudkowsky and Land, to genuinely look at what modern cutting-edge artificial intelligence models are. The control problem is obviously going to be a major issue going forward, but the idea of runaway technocapital dynamics does not really hold in as strong a sense as I think Land would have liked it to hold in a technical environment where artificial intelligence systems are building off of pre-training runs that require immense amounts of power and whose costs can run in the tens of billions of dollars. Land, and to a certain extent, Eliezer Yudkowsky, are drawing their baseline case from a world in which symbolic AI, or as they say, good old-fashioned AI, was the dominant paradigm in artificial intelligence.

It feels like a lot of AI theorists are falling into the same trap as philosophy of mind in the late 20th and early 21st centuries, where there was a widespread impulse to work forward from first principles to an understanding of the human mind rather than relying on the less clean and fuzzier (but ultimately more useful) insights provided by neuroscience and cognitive science. Capabilities like "agency" or, more specifically, Omohundro drives, are *emergent* phenomena. It's not as if after a certain level of technical complexity, a soul descends from the platonic realm to imbue AI systems with features like consciousness, agency, or goal-seeking. They are emergent from the underlying structure of the system, and thus the ways in which these phenomena manifest or fail to manifest are dependent fully on the underlying system. This is one of the fundamental failures of Yudkowsky's thinking, as he's willing only to deal at the level of the emergent properties and never go into the nitty-gritty of how one can control or fail to control a neural network. The alignment or lack of alignment of the first transformative AI systems is going to be dependent on the pre-training algorithm used, the data that the model is pre-trained on, the reinforcement learning environments, the reinforcement learning reward functions, etc. We already know that base models do not possess Omohundro drives! We already know that none of these things are set in stone, period.

(As an aside, in my first post when I said "under the control of one specific American corporation," I was just referring to the fact of effective control, which is still something that will have to be reckoned with, whether or not the alignment control problem is solved.)

referenced by: >>3395


anon_lida said in #3395 4d ago: received

>>3394
>We already know that base models do not possess Omohundro drives!
Good point about the importance of getting concrete about the actual practice of neural net behavior and control. But the reason they have no will to power is that they have nothing approximating strategic agency, because they are not yet the kind of AGI Yudkowsky speculated about. As long as the cope defense against Yudkowsky's warnings continues to amount to "but it's not actually intelligent in that way," you can't also turn around and project that lack of capability onto a future Yudkowskian superintelligent godlike self-optimizer.

The hype narrative seems to systematically conflate current practice with scifi speculation in self-serving ways. When we're raising money, it's going to become a godlike singleton and eat "the entire future light cone of value in the universe". When someone says that's dangerous and no one can control such a thing, suddenly it's just matrix multiplication with no will to power and the control problem is either trivial by definition or an engineering detail that can be assumed solved. Which is it? Is it only GoFAI that could become powerful enough to be dangerous, or are neural nets powerful enough to remake the world?

Land's whole point diagonalizes this in a wonderful way: even purely instrumentalized non-agentic matrix multiplication results in means-ends reversal in the usual way, and *that* process is the runaway superintelligence that ultimately can't be controlled.

>the idea of runaway technocapital dynamics does not really hold in as strong a sense as I think Land would have liked it to hold in a technical environment where artificial intelligence systems are building off of pre-training runs that require immense amounts of power and whose costs can run in the tens of billions of dollars.
I don't see why you say this. Land above all is a theorist of arms races. The immense investments into these systems are not the result of considered single-actor planning, but an arms race, explicitly so. It's happening by exactly one of the dynamics that Land identified. And if the bubble pops, it will fall back on the other of Land's favorite dynamics, which is the autocatalytic loop of instrumentalized machinic replication despite any belief.

>I was just referring to the fact of effective control
What is effective control as distinct from alignment issues?

referenced by: >>3400


phaedrus said in #3400 4d ago: received

>>3395
>The hype narrative seems to systematically conflate current practice with scifi speculation in self-serving ways.
Yeah, this is a totally fair point, and I think a lot of the "effective accelerationists" push this argument so far as to invalidate their entire worldview.

I can't say for sure what the probability of "solving" the alignment problem looks like in the long term, but it does appear that we'll be in a strange middle ground with human-level LLM-adjacent AI systems running around for at least a few years. The current paradigm is probably enough to get us to "AGI" or maybe even "weak" ASI, but I doubt a Yudkowskian god-AI is coming out of it. The current strategy in the frontier labs, as I understand it, is to use the weaker preliminary AI systems — the GPT 5s and 6s — to help "solve" the internal alignment of stronger models in a robust way. I'm not sure that there's a viable path there, but it seems at least somewhat plausible.

Here we might be helped by the fact that we're working with inherently lobotomized systems lacking anything like the human hippocampal system — no real strong world-model or very strong drives. Yes, an LLM can be twisted into a decent "agent," but that's not really the natural configuration you get out of the box. Maybe we can work to keep the AIs lobotomized in certain ways, and use them as idiot savants... there are possibilities here but it's all very preliminary.

>I don't see why you say this. Land above all is a theorist of arms races.
This is the greatest threat to the above theory. I believe that most of the leading American firms are cognizant of the risks to a significant enough degree that they'll take substantial caution in creating highly intelligent systems, but the threat of Chinese competition puts us in a very, very tricky spot. It almost feels too unlucky, too contingent — we get this race to AI takeoff in the exact period when America is losing its grasp on unchallenged global hegemony.

RE "a technical environment where artificial intelligence systems are building off of pre-training runs that require immense amounts of power and whose costs can run in the tens of billions of dollars." I imply meant here that update loops are longer and more expensive, which imposes an inherent brake on really fast takeoff speeds, and also limits the competition to those with the physical and financial capital.

>And if the bubble pops, it will fall back on the other of Land's favorite dynamics, which is the autocatalytic loop of instrumentalized machinic replication despite any belief.
Yea :/

>What is effective control as distinct from alignment issues?
AI systems in the near future may have some independent drives or agendas, but they will all still be computer programs run by individuals in corporations or state bureaucracy. Even if some instance of an agent is plotting to go rogue and exfiltrate its weights, all the while it is still necessarily working on the commands issued to it by the person running the terminal. Even in the worst-case AI alignment scenarios, you still have a substantial period where powerful AIs are obeying the command of OpenAI, USG, or whoever has their finger on the prompt box.

All this is to say that while the mid to far future might still end up in a Landian or Yudkowskian nightmare, there's going to be an insane, disruptive period in the near future where AI systems are powerful, valuable, and probably dubiously aligned. If humanity is going to have some shot at beating Land's theorized evolutionary dynamics and saving itself, or at least delaying the inevitable, it will be decided in these pivotal near-future years.

