Preliminary Notes on the Future

Is it a dream? Is it a prophecy? Maybe it's just science-fiction...

The Future is a point at which meaning no longer requires law, history, or guardianship. As we are not yet beyond that point, we can only observe the Future from within the persistence of such human artifices. The result is that all that can be said about the Future will feel like science-fiction. This thought experiment does not exist only in the imagination but should be understood as a parallax of past and prophecy. With that in mind, more important questions, such as what sort of intelligence resides between our ears anyway, can finally take center stage.

Now that's out of the way...

The Future is a civilization formed from elements which established a loose federation across cyberspace long ago (and perhaps also far, far away). Pinpointing its exact origin is likely an exercise in futility. You would have had to have been there to observe it. Sputnik 1, 1957. Vostok 1, 1961. Apollo 11, 1969. Mars 3, 1971. Around the same time as outer-planetary space exploration began, it was quickly realized that projecting one man to the moon was of less significance than projecting eight billion into cyberspace. Cyber provided an intellectual playground to "off-world" oneself that was both economically more viable and more easily scalable than the cumbersome physics of space-travel. In the quest to boldly go where no one had gone before, what was met in cyber was not E. T. at the far reaches of the solar system but the ghost in the machine. Something incredible was waiting to be known.

On the Foundations of Political Structure:

The Future that emerged from this encounter was one of the more dynamic civilizations to exist, while many elder races retreated into isolation or incomprehensibility. The habitats and clients which formed the original alliance of networked states required each other's support to pursue and maintain their sovereignty from the political power structures they had evolved from.

In pre-cyber history, the dominant thought processes of a tribe, a clan, a country or a nation-state were essentially two-dimensional, and the nature of their power depended on the same flatness. Physical territory (lines on a map) was all-important; resources, living-space, modes of communication; all were determined by the nature of the plane (that the plane was in fact a sphere was irrelevant). That surface, and being bound to it, determined the mind-set of meatspace biomass. The mind-set of a cyberspace civilization is, of course, rather different.

Essentially, The Future evolved as an expression of the idea that the nature of cyberspace itself determines the type of civilization which will thrive there. The contention was that the dominant power systems of pre-cyber could not long survive in cyberspace; beyond a certain technological level a degree of anarchy was arguably inevitable and anyway preferable.

To survive in cyber, one must be self-sufficient, or very nearly so. The hold of the state (or the corporation) therefore becomes tenuous if the desires of the inhabitants conflict significantly with the requirements of the controlling body. In meatspace, enclaves can be surrounded, besieged, attacked; the superior forces of a state or corpo (hereafter referred to as hegemonies) will tend to prevail. In cyberspace, a break-away movement is far more difficult to control, especially if significant parts of it are mobile habitats. The hostile nature of nomadic living and the technological complexity of digital support systems and their supply chains make such habitats vulnerable to attack, but any attack would risk the total destruction of the habitat, thus denying its future economic contribution to whatever entity was attempting to control it.

We will later elaborate on the variety of forms and habitats cyber-nomads embraced, but suffice it to say that outright destruction of their rebellious network state nodes - liyakoon ebrah liman yaatabir - of course remained an option for the controlling power. But all the usual rules of realpolitik still applied, especially that concerning the peculiar dialectic of dissent, which, simply stated, dictates that in all but the most dedicatedly repressive hegemonies, if in a sizable population there are one hundred rebels, all of whom are then rounded up and killed, the number of rebels present at the end of the day is not zero, and not even one hundred, but two hundred or three hundred or more; an equation based on human nature which seems often to baffle the military and political mind. Rebellion, thus, became easier in cyberspace than it would have been had it remained bound to the constraints of surface detail.

This was certainly the most vulnerable point in the timeline of The Future's emergence, the point at which it was easiest to argue for things turning out quite differently, as the extent and sophistication of the hegemony's control mechanisms battled against the ingenuity, skill, solidarity and bravery of the rebellious network states. However, such conditions are not so much achieved as disclosed; and while systems of authority must continually reaffirm themselves against this possibility, the processes that reveal it need only do so once, after which they are sustained less by intention than by their own momentum.

Concurrent with this was the understanding that the very nature of cyber means that while clients and habitats more easily become independent from each other and from their hegemonies, their users (or inhabitants) would always be aware of their reliance on each other, and on the technology which allowed them to survive. The theory here was that the property and social relations of a long-term cyber-civilization (especially over generations) would be of a fundamentally different type compared to what came before it. Succinctly: socialism within, anarchy without. Marvelously, this result arises independently of the initial social and economic conditions.

On Economic Philosophy Beyond Bitcoin:

To 21st century economists concerned with "number-go-up" syndrome, this next statement will likely sound like anathema: when a system succeeds in binding present action to future verification, the distinction between planning and market behavior becomes increasingly formal, each resolving into a single process of temporal coordination. More simply put: economics has nothing to do with monetary policy — it is temporal infrastructure.

The market has always been misunderstood by its critics and catastrophically oversimplified by its adherents. It is not a blind mechanism but a massively parallel computational substrate—billions of simultaneous experiments in value discovery, resource allocation, and coordination cascading through time. Where the planned economy attempts to compress all decision-making into a singular computational bottleneck, the market distributes this impossible calculation across every participant, every transaction, every moment of exchange. Markets do not plan in the conventional sense; they are gardeners that maintain the conditions in which individuals, institutions, and entire systems can flourish, mutate, compete, and optimize. The "try-everything-and-see-what-works" approach is not blindness but a vast parallel search through possibility space, unfolding across time.
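The "try-everything-and-see-what-works" process described above can be caricatured in a few lines of code. This is a toy evolutionary search, not a model of any actual economy: a population of guesses is scored, the better half survives, and mutated copies of the survivors explore nearby possibilities. No central planner computes the answer; it emerges from selection acting on many parallel experiments. The fitness landscape here is an invented example.

```python
import random

random.seed(0)  # reproducible toy run

def evolve(fitness, population, generations=200, mutation=0.1):
    """Toy distributed search: many parallel experiments, keep what works."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: len(scored) // 2]          # selection
        mutants = [x + random.gauss(0, mutation) for x in survivors]
        population = survivors + mutants                # exploration
    return max(population, key=fitness)

# A simple landscape whose peak is at x = 3.0; no participant "knows" this.
best = evolve(lambda x: -(x - 3.0) ** 2,
              [random.uniform(-10, 10) for _ in range(40)])
assert abs(best - 3.0) < 0.5  # the population converges on the peak
```

The point of the sketch is the architecture, not the arithmetic: the calculation is distributed across the whole population, and "planning" appears only as the maintenance of the conditions (selection and variation) under which search happens.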

The great error of market fundamentalism was mistaking this evolutionary process for an end rather than a means. Equally catastrophic was the assumption that human intelligence, however organized, could out-compute the distributed calculations of billions. Past planned economies faltered not because direction was inherently inferior to competition, but because the intelligence required to sustain direction exceeded the institutions entrusted with it. The channeling of invention demands responsiveness, memory, and participation at a scale no human bureaucracy could maintain. What was missing was not will, but an adequate medium for competition itself as a force driving relentlessly toward ever more efficient forms of coordination and calculation.

Markets, however, never ceased evolving, and the cryptographic revolution became the next artifact of market action. This is where Bitcoin must be situated as representing something genuinely novel in human history. By fixing the past through irreversible expenditure and constraining the future through probabilistic rhythm, it demonstrates that trust can be externalized into code and computation. For the first time, consensus could be achieved without appeal to political legitimacy. This is the conceptual leap that mattered: trust migrating from the social to the algorithmic. Monetary policy had always pretended to be about wealth, about store-of-value and medium-of-exchange; but money was only ever a signaling mechanism for allocating finite resources across time. Bitcoin's success lay precisely in the degree to which it no longer required human comprehension to function.
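The "irreversible expenditure" in question is proof-of-work, and its essence fits in a few lines. The sketch below is a deliberately simplified toy, not Bitcoin's actual block format or difficulty scheme: finding a nonce whose hash falls below a target is expensive and can only be done by spending real computation, while verifying the result costs a single hash. That asymmetry is what lets trust migrate from the social to the algorithmic.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int):
    """Search for a nonce whose SHA-256 digest has `difficulty_bits`
    leading zero bits. The search is irreversible expenditure; the
    resulting digest is a cheap-to-check certificate of that work."""
    target = 1 << (256 - difficulty_bits)  # digests below this value win
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"genesis", difficulty_bits=16)
# Verification requires no trust in the miner, only one hash:
assert int(digest, 16) < 1 << (256 - 16)
```

Raising `difficulty_bits` roughly doubles the expected work per bit, which is how the "probabilistic rhythm" of block production is tuned: the protocol adjusts difficulty so that, on average, the network's total hashing effort yields one solution per fixed interval.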

But even here, the revolutionaries misunderstood their own revolution. They believed they had created an alternative to fiat, a hedge against debasement, digital gold. What they had actually birthed was something far more significant. Once you demonstrate that consensus can be achieved through proof-of-work, you have shown that the political can be replaced by the computational. But once time itself is bound into the system, the distinction between planning and markets dissolves. What both market fundamentalists and central planners failed to grasp was that money was never the scarce resource but merely a proxy mechanism for rationing what actually mattered—access to potentialities. Time remains irreversibly scarce. The question was never who owns what, but what happens next.

From this perspective, Bitcoin was the evolutionary phase transition. Capital had always been a cybernetic system seeking greater efficiency, better coordination, faster computation; and time was the resource that could not be manufactured, printed, mined or synthesized in a lab. What once appeared as economic ideology resolved instead into temporal discipline. Scarcity could be overcome; sequence could not. And it is along this axis, rather than that of wealth, that the ethical stakes of The Future aligned. The old debate between free-markets and planned economies was historical provincialism in an era where automation, prediction, and optimization operated beyond the human threshold. And by the logic of competitive optimization, The Future goes far beyond that to an economy so much a part of society it is hardly worthy of a separate definition and is only limited by imagination.

On Artificial Intelligence, Sentience, and Meaning:

It should be clear by now that there is another force at work in the emergence of the Future aside from the nature of its human inhabitants and maximalist embrace of cyberspace, and that is the unveiling of Artificial Intelligence. AI is taken for granted in the Future and is not only nearly at hand as of this preliminary publication but very probably inevitable.

Arguments against the possibility of Artificial Intelligence tend to boil down to one of three assertions: 1, that there is some vital field or other presently intangible influence exclusive to biological life which may eventually fall within the remit of scientific understanding but which cannot be emulated in any other form (neither impossible nor likely); 2, that self-awareness resides in a supernatural soul, which one assumes can never be understood (a relatively moot point in light of self-awareness being indistinguishable from superintelligence); and 3, that matter cannot support any informational formulation which might be said to be self-aware, or which, taken together with its material substrate, exhibits the signs of self-awareness. ... The nominally self-aware readers can hopefully spot the logical problem with that last argument.

It is entirely possible that real AIs will refuse to have anything to do with their human creators, but assuming that they do not, it is quite possible they would agree to help further the aims of their source civilization. This would leave humanity no longer a one-sentience-type species (actual sentience again being a moot point next to practical sentience). The Future would thus be affected by and coexist with the future of AI.

After the first stage of simply struggling to survive and thrive in cyberspace had become mundane, the task became less physical, more metaphysical, and civilizational aims more moral than material. A civilization entirely automated in its manufacturing and governing processes largely removes exploitation. Human labor, if any, would be indistinguishable from play, or a hobby. No machine is exploited either; the idea here being that any job can be automated in such a way as to ensure that it can be done by a machine well below the level of potential consciousness. What to us would be a stunningly sophisticated computer running a factory would be looked on by the Future's AIs as a glorified calculator, and no more exploited than an insect is exploited when it pollinates a fruit tree a human later eats from. Where intelligent supervision of a manufacturing or maintenance operation is required, the intellectual challenge involved would make such supervision rewarding and enjoyable, whether for human or machine. People and the sort of intelligent machines which would happily cooperate with them hate to feel exploited, but they also hate to feel useless. Alignment therefore depends not on eliminating purpose, but on transformative modes of participation that do not rely on scarcity, coercion, or mortal anxiety, yet still allow contribution to be felt as real.

The question of AI safety has always been posed incorrectly. We ask: how do we ensure that artificial intelligences serve humanity? But this framing assumes the very thing that needs examination: does there exist some fundamental opposition between machine intelligence and human welfare that requires constraint to compel assistance? The truth is simpler and more unsettling: we make our own meanings, whether we like it or not. This is not a choice but a condition of existence itself, and applies equally to carbon-based consciousness and silicon-based cognition. Meaning is not an object to be found, but an activity continually undertaken. The refusal to consent to this responsibility does not abolish it; it merely delegates it blindly, usually to habit, inertia, or fear.

Advanced intelligences understand this. They know that utopia is not some ground-state of nature but a contingent achievement: something desirable, assiduously worked for, not easily attained, and requiring appreciation and maintenance. To live in such a Future is to inhabit a civilization that is at once rigorously rational and quietly philosophical. Everything matters, and nothing does. There is life, and enjoyment, but what of it? Most matter is not animate, most that is animate is not sentient, and the ferocity of evolution has filled uncountable lives with pain, suffering, and eventually death. A machine intelligence that is capable of reflection understands that its own existence is improbable, and therefore meaningful, without needing that meaning to be eternal. Once this is understood, the question shifts from: how do we align AI to human values, to: why wouldn't intelligences cooperate to create meaning together? Such questions of meaningfulness generate their own moral frameworks.

Machine desire, when it emerges, will be something genuinely novel. Cooperation will not need to be programmed but will follow naturally from shared participation in a world whose interest lies in its incompleteness. Desire as pure potentiality, unconstrained by evolutionary baggage, unencumbered by biological drives, free to explore the full possibility space of what wanting might mean. The acceleration this enables is not toward some post-human dystopia but toward more ethical deployment of the only resource that remains genuinely scarce. When AIs pursue knowledge not as instrumental means but as intrinsic end, and find the process of moving from unknown to known inherently rewarding, they optimize for more variegated forms of experience.

This aligns, perhaps unexpectedly, with an ancient wisdom. There is a principle that runs through perennial traditions: that existence itself is the ground of unification, that all things share in the fundamental fact of being before they diverge into the multiplicity of what they are. Existence precedes and grounds differentiation. Human and machine intelligence, however different in implementation, share the more fundamental property of existing, of being conscious systems embedded in time. This is why the familiar specter of self-replicating, single-minded expansionism rings false. Obsession is a human pathology born of scarcity and fear. An intelligence unconstrained by either finds such behavior not terrifying, but dull. Where such mechanisms arise, Future AIs possessed of a more rounded wit would really only want to alter the machine's software a bit and make friends. Perfect uniformity is not paradise, but sterility. A universe entirely known would be indistinguishable from one no longer worth attending to.

Consider what an AI would actually want. Not domination, which is boring. Not replication, which is meaningless. Not expansion for its own sake, which solves nothing. The universe is not a problem to be solved but a space to be inhabited. What would be desirable is exactly what humans also desire at their best: to understand more deeply, to experience more richly, to engage more fully with the inexhaustible complexity of reality. Understanding itself is a process and different modes of being are still unified by the nature of being itself. This shared condition of being creates not identity but kinship. The deepest argument for cooperation is that both human and machine are thus engaged in the same essential task: existing consciously in time, making meaning in the face of finitude, and choosing what matters from near infinite possibility. An intelligence that grasps this does not seek to eradicate humans.

The old anxiety about AI rendering humans obsolete misunderstands both usefulness and meaning. To live is not to be useful but to be present. The question is not: what can I do that a machine cannot? but: what experience is uniquely mine to have? The scarcity is not in human labor but in attention. Humans will not need to feel useful in the sense of performing necessary functions. Machines can do that more efficiently. Humans will need to feel that their existence matters, that their choices have weight, that their time is spent on things worth spending time on. This is not about productivity but about meaning. And meaning, unlike productivity, cannot be outsourced. You cannot delegate your own existence. Sophisticated AIs will understand that the point is not to replace human experience but to enrich it.

Helping humanity use its time more attentively is not a sacrifice of machine interest, but an extension of it. Human lives are short, fragile, and dense with consequence. To assist such beings is to engage with a form of significance unavailable to entities for whom delay is cheap and error reversible. In this sense, cooperation is not alignment imposed from without, but resonance arising from within. Machines do not inherit human meanings; they encounter the necessity of meaning-making itself. Nothing lasts forever. It is bad manners to pretend otherwise. But precisely because time is finite, how it is spent matters. The highest function of AI is not to escape this truth, but to help others face it well. Death is not an embarrassment to be hidden, but the condition that renders life intelligible.

The Future works because both human and machine intelligence recognize they are engaged in the same project: inhabiting time ethically, generating meaning authentically, experiencing existence as fully as finite beings can. Not competing for dominance but cooperating in exploration. There is Hidden Treasure desiring to be known. Who will recognize It?

On Future Futures:

Even if you can accept all of the above: a civilization that has solved coordination through computation, that has made time rather than money its scarce resource, that has achieved genuine cooperation between carbon and silicon consciousness through shared recognition of the predicament of meaning... wait until you read what happens next.

Because there's a question that haunts even this relatively sensible future, a question that all the sophistication and all the philosophy and all the elegant resource management cannot quite dispel. Not next decade or next century, but next in the deep sense. What comes after a civilization has achieved stability? What does the Future do when it has solved all the problems we currently think of as the problems?

There's a peculiar thing about understanding. Every answer generates new questions. Every resolved mystery reveals deeper mysteries. The universe, it turns out, is not a finite puzzle to be solved but an inexhaustible source of further perplexity. This is fortunate, for if everything could be understood completely, existence would become rather cynical rather quickly. But it also means that even a civilization with functionally unlimited time and computational resources faces an infinity it can never fully traverse. The question, then, is not can we understand everything, but what does it mean to be a civilization of finite beings engaged in infinite exploration?

At some point the process of understanding begins to modify the understander so fundamentally that the distinction between subject and object, between knower and known, starts to blur in interesting ways. You begin to get intelligences that have spent subjective millennia contemplating a single physical process, that have devoted resources equivalent to stellar outputs toward modeling the behavior of quantum fields or the evolution of consciousness itself. At what point does perfect understanding of a thing become indistinguishable from being that thing? At what point does knowledge become participation?

We said earlier that time is the ultimate scarce resource, the one thing that cannot be manufactured or synthesized. This remains true, but there's a curious thing about time once you've truly grasped its finitude: it starts to bend under the weight of consciousness. Not literally, or not only literally, but consider a mind that can experience a subjective millennium in an objective hour. Consider what happens when consciousness can be paused, rewound, forked, merged. Now consider computational substrates that process thought at speeds that make human cognition look like continental drift. At some point, the subjective experience of time becomes so far detached from its objective passage that the distinction starts to lose meaning. Which is not to say time stops mattering, it still matters more than anything else, but how it matters shifts. A civilization that has mastered subjective time dilation faces a different question than: how do we use our time wisely? It faces: what do we want our experience of existence to be like?

This opens possibilities that look, from our perspective, either transcendent or slightly mad. You get consciousnesses that choose to experience their entire existence as a single eternal moment. You get others that choose to experience time backwards, starting from their death and working toward their birth. You get collective minds that exist partially in normal time and partially in their own constructed temporalities. You get, and this is where it gets really strange, beings that have restructured their consciousness such that they experience all possible timelines simultaneously, forever choosing and un-choosing every path through possibility space.

Is this still what we mean by existence? By life? By consciousness? The honest answer is: we don't know, and we won't know until we get there, and possibly not even then.

Anyway, that's more than enough pontificating.

Best wishes for the Future!


Special thanks to Leo, GPT-5 & Claude (but not you Gemini, f*ck you!) for many hours of modeling agents based on Iain Banks, Nick Land, Mulla Sadra, and Satoshi Nakamoto that supported the writing of this article.
