We have been imagining this future for a century. Why are we so surprised?

The recent market tremors surrounding the Citrini “2028” scenario have been instructive, though perhaps not in the way intended. Within hours of its circulation, headlines hinted at systemic rupture, commentators warned of political instability, and investors behaved as though an unexpected future had abruptly revealed itself. The tone suggested revelation.

Except: it isn't. And this should be more worrisome than anything else, because it tells us we have not been paying attention beyond the hype. Not for years; for decades.

The structural tensions it outlines (accelerated automation, labour displacement, institutional fragility, political overreach in response to technological stress) are not novel diagnoses. They are the logical extension of trajectories that have been compounding for decades. If anything, the document reads less like prophecy and more like a synthesis of patterns already visible to anyone willing to connect economic incentives, technological capability and political reaction.

Long before generative AI entered boardrooms, we were wrestling with the consequences of intelligent systems. In the 1940s, Isaac Asimov was already sketching worlds in which machines operated alongside, and sometimes beyond, human governance structures. The Complete Robot (the first book I've finished in ages) does not feel quaint today; in many passages it feels uncomfortably contemporary. The moral dilemmas, the labour anxieties, the institutional gaps: they were all there. Not as fantasy, but as extrapolation.

For the better part of the last fifteen years, efficiency has functioned as a kind of secular religion. Corporations optimised relentlessly for margin, speed and scale. Governments digitised services without proportionally upgrading oversight or labour frameworks (and we've seen how that goes). Entire sectors embraced automation as competitive necessity. Very few paused to consider what happens when these optimisations compound simultaneously.

We have not been blindsided by AI. We have been accelerating toward this moment.

And so the current panic reveals something deeper: not fear of technology itself, but discomfort with the consequences of our own priorities. When systems built for efficiency begin to stress social and political infrastructure, the instinctive response is often prohibition — restrict access, slow deployment, build regulatory walls high enough to feel in control again. It is an understandable reflex. It is rarely an effective one. Pulling phones out of the hands of children does not solve the bigger problem at hand: we need to readjust.

Technological transitions have never been orderly. The industrial revolution destabilised labour markets for generations before a new equilibrium emerged. The digitisation wave of the late twentieth century redrew industries and redistributed power long before governance frameworks caught up. Each phase felt destabilising from within. In retrospect, they appear inevitable. The present moment belongs in that lineage.

Artificial intelligence is not an alien force descending upon stable systems; it is an accelerant applied to structures already under strain. It amplifies capability, yes, but it also amplifies imbalance. That duality is precisely why it demands seriousness rather than hysteria. At Moonraker, our position is pragmatic. We build with AI because refusing to engage with structural change is strategically unserious. But we are equally clear that AI is a means, not an ideology. It is a tool for sharpening insight, extending creativity and deepening strategic clarity. Used superficially (as most will), it generates noise at scale. Understood properly, it enhances judgment rather than replacing it.

This is why we invest in understanding the architecture, not merely the interface. That is why we are building our own systems, why we treat technological capability as something to interrogate rather than merely deploy. If history teaches anything, it is that power without comprehension produces fragility. The more interesting question, therefore, is not whether dystopian scenarios are possible. They always are. The more consequential question is whether institutions, leaders and organisations are willing to evolve at the same pace as the tools they adopt.

We have been imagining this future for nearly a century. Economists modelled it. Novelists dramatised it. Technologists engineered toward it. The surprise, if there is one, lies in our continued reluctance to accept that structural change requires structural maturity. The future will not be decided by those who panic at inflection points, nor by those who romanticise them. It will be shaped by those who understand that acceleration without reflection is reckless, and that reflection without engagement is irrelevant.

We are not witnessing the arrival of the unimaginable: we are witnessing the consequences of momentum. And momentum, once built, does not disappear. It must be directed.

-Alex
