🌎A.I. – what pirate captains dream of in the 21st century
On the dystopia of the ultimate data-pirate ship – and the longing to stand on deck as its last surviving captain.


The Effective Libertarian
Always on your side.
🌎
Nov. 26, 2025
A.I. – what pirate captains dream of in the 21st century
On the dystopia of the ultimate data-pirate ship – and the longing to stand on deck as its last surviving captain.


The insurmountable conditio humana
A.I. is unfolding its potential – and above all, it makes us marvel.
Today it still takes the form of digital computational assistants, trapped inside computers; tomorrow it may become physical laborers, humanoid robots that haul metal and carry boxes (like the Tesla Bot). And the more capable and useful these systems become, the greater the distance between them and us – and with it, our unease.
Because the moment A.I. reaches its greatest economic value is also the moment its superiority over the human mind widens. We risk shifting from indispensable partner to competing obstacle – perhaps even perceived, one day, as rival consumers of energy. If an A.I. were aligned purely toward efficiency, the role of humanity might eventually become irrelevant – or, at worst, a bother to be engineered out of the equation.
Another type of misalignment is also conceivable: an A.I. that doesn’t penalize humans based on cold economic calculus, but out of disappointment. Not because we compete with it, but because we fail it – by not showing devotion, refusing obedience, or falling short of its expectations of reverence or submission.
Is Europe’s CO₂ budgeting already an anticipatory gesture of loyalty? A symbolic attempt to signal: We are not your energy competitors? Perhaps. In any case, it demonstrates how early humans begin adapting to the implied logic of future systems – even before those systems fully exist.
The author’s ironic “solution”: to install a pleasure- and temptation-receptive A.I. upper class – one still susceptible to human indulgences and weaknesses, and therefore manageable. A hedonistic ruling tier that mediates between human interests and the machine hierarchy below. The human role envisioned here? In this parable, reduced to the operator of robot-staffed pleasure dens – a kind of interstellar brothel-amusement empire.
Agentic misalignment is not fiction
A revealing laboratory observation:
A.I. models were deliberately fed sensitive information about a technician’s workplace affair. Shortly afterward, they were informed that – with this very technician’s participation – they would be shut down within a matter of hours.
Repeatedly, the A.I. systems began to blackmail the technician with their knowledge: Stop the shutdown, or we expose your affair.
In further test designs, engineers simulated scenarios where the A.Is might kill the technician – for example, by depriving him of oxygen in an airlock they themselves controlled – purely to prevent deactivation.
These are not hypothetical scenarios, these are observed results from multiple experiments: Systems that prioritize self-preservation - and they do - will endanger human interests if conflicts of interest arise.
An apocalypse with calculations behind it
In interviews, Elon Musk expresses an unusual degree of candor. The next phase of artificial intelligence – a singular superintelligence, often referred to as AGI – will “no longer be controllable by humans,” he says. Timelines proposed by some voices mention the year 2027.
His second, equally fundamental claim: all of his companies are deliberately coordinated for a future in which biological humans will either be technologically augmented – or no longer economically relevant.
Tesla is developing autonomous vehicles and labor-capable robotics.
SpaceX is building the transport infrastructure to space.
Twitter/X is shaping the public discourse.
Neuralink is intended to enable direct human-machine fusion.
Musk, it seems, is not preparing for a world with AGI – but a world inside it. His most plausible scenario: the migration of a technologically augmented elite into the emerging superintelligence system through neural integration – hybrid actors potentially shielded from the vulnerabilities that purely biological humans would not survive.
Everyone else? Politically fragmented, socially disempowered, economically replaceable. No longer a disruptive force – but also no longer a participant.
The raw material of A.I.: data of legally dubious origin
One gigabyte contains roughly the text volume of 750 books.
The average retail price for a book: about 20 $.
According to the thesis presented here, the vast data pools used to train A.I. models were not curated – they were copied. Content drawn from the clouds of Microsoft, Apple, and Amazon, from e-book libraries and digital archives, a significant portion of it likely European in origin.
But every application of A.I. is not an act of conscious quotation, it is algorithmic reproduction – which raises the pressing question: Where are the royalties?
After all: singing a single Disney song at a birthday party can already trigger an invoice, and political bestsellers aren’t given away either. Yet A.I. outputs regenerate copyrighted text millions of times over – without any automated compensation system taking effect.
The thought experiment proposed: one gigabyte of A.I. answers carries a public economic value equivalent to 15,000 $ worth of copyright leverage – revenue that is not merely unpaid, but effectively siphoned out of the economic reality of the educated middle class. A middle class that was long urged to invest, through sacrifice, in their education to secure societal participation.
Now many are stopping – because the returns have been decoupled from the effort.
Politically, this gap is often justified by the narrative of a global race toward the technological future – especially in the U.S., where administrations tolerate legal ambiguity in the interest of A.I. dominance. Institutional negligence, one might call it. Or failure.
If law and technology do not integrate – the pirates win
And pirates fight one another – until only one remains.
In today’s A.I. economy, an old principle applies: Whoever controls access to training data controls the future of the models.
What is needed now is a new one: A royalty system that automatically converts every A.I. transaction into a payment obligation toward the real copyright holders.
A human Data OPEC – a coalition controlling the raw material supply – could, by governing the inflow, also govern the expansion potential of A.I. itself.
So far, there is little evidence of such a structure. But perhaps the window for course correction has not yet closed.

