
The Post-Success Economy, the AI Reckoning, and the Algorithmic Frontline

Governance, economics, labour, and modern conflict in the age of ubiquitous AI

This report explores the intersection of AI economics, governance, and modern warfare. It contrasts the "Success Scenario" of ubiquitous AI with the sobering reality of a potential investment bubble, highlighting the gap between massive infrastructure spending and modest productivity gains. The text examines the "Post-Success Economy", where the decoupling of labour and value necessitates a new social contract, proposing radical solutions like Universal Basic Compute to address systemic displacement.

Furthermore, it analyzes the "Third Revolution" in warfare, detailing how algorithmic autonomy and high-speed data processing are fundamentally reshaping combat. By shifting from human-centric decisions to machine-speed attrition, AI is redefining global geopolitical power dynamics. The report serves as a strategic guide for navigating the ethical, economic, and security challenges of an era where silicon-based intelligence becomes the central nervous system of society and the battlefield.

Architect of Outcomes: Eric Wassink
Published: 2 February 2026


Introduction

Two stories collide - "what if it works?" versus "does it pay?"

The global discourse on Artificial Intelligence has long been stuck in a binary: is this a speculative bubble destined to pop, or a revolutionary force that will reshape civilization? As the initial "hype cycle" fades, two sharper questions emerge. The first: what if it actually works? If the trillion-dollar bets on infrastructure, compute power, and the integration of Large Language Models into every facet of our lives deliver a seamless, high-functioning, and ubiquitous AI ecosystem, the foundations of our social contract - the exchange of labour for income - begin to crumble, and we enter a "Post-Success Economy" in which abundance is technically possible but distribution is politically fraught. [1, 4]

The second: does it pay? With global AI spending projected to exceed $500 billion by 2026 and potentially to reach $2 trillion annually by 2030, the scale of investment is staggering. [4, 5] Yet beneath soaring stock prices and Sam Altman's trillion-dollar infrastructure dreams, leading economists - including Nobel Laureate Daron Acemoglu and AI critic Gary Marcus - warn that the widening gap between capital expenditure and actual productivity gains bears the signature of a classic speculative bubble. [1, 3, 5]

These two narratives - a "Success Scenario" that breaks the social contract and a "Reckoning" that questions whether the boom is economically real - are now joined by a third: the acceleration of AI into modern warfare, what many analysts call a "Third Revolution" defined not by destructive yield but by speed, autonomy, and information processing. In modern conflict, the data generated by sensors, satellites, and signals intelligence already exceeds human cognitive capacity, and AI is fast becoming the central nervous system of military operations.

What follows is a single, integrated account - journalistic in tone, but designed to be useful for policymakers - covering: (1) the economic reality check, (2) governance and labour in a post-success economy, and (3) the battlefield implications and ethical-technical constraints of military AI.

Part 1

The AI Reckoning: hype, profitability, and economic reality

The global economy is currently gripped by what historians may one day call the "Great AI Infatuation". With global AI spending projected to exceed $500 billion by 2026 and potentially reaching $2 trillion annually by 2030, the scale of investment is staggering - comparable to the combined revenues of the world's largest tech titans. Yet, beneath the surface of soaring stock prices and Sam Altman's trillion-dollar infrastructure dreams, a more sober narrative is emerging. [4, 5]

Data from leading economists, including Nobel Laureate Daron Acemoglu and AI critic Gary Marcus, suggests that we are approaching a critical juncture. The "AI Boom" is increasingly showing signs of a classic speculative bubble, where the disconnect between capital expenditure and actual productivity gains is widening. [1, 3, 5] For policymakers and investors, the challenge is no longer just about "adopting AI", but about identifying the structural factors that will determine whether these billions result in a new industrial revolution or a historic capital bust.

Based on an extensive analysis of expert testimony, economic data, and the contrasting visions of market optimists and academic realists, here are the six fundamental factors that will dictate the profitability and sustainability of AI investments in the coming decade.

1. THE PRODUCTIVITY PARADOX: THE 1% REALITY VS THE 100% HYPE

The most significant risk to AI profitability is the "expectation gap". While tech evangelists promise a total transformation of the global economy, the empirical data suggests a much more modest trajectory. Daron Acemoglu's research indicates that AI is projected to automate only about 5% of all tasks over the next decade, contributing a mere 1% to global GDP growth. [1]

This discrepancy is not just a matter of academic debate; it is a fundamental threat to ROI. If the market has priced in a 20% productivity explosion but only receives 1%, the resulting correction could be catastrophic. [5] Profitability in AI is currently being "borrowed" from the future. A Deutsche Bank analysis suggests that without AI-driven investments, the U.S. economy might already be in recession. This implies that current GDP growth is being artificially inflated by infrastructure spending (building data centres and buying chips) rather than by the actual utility of the AI itself. [5] For an investment to be truly profitable, the technology must eventually produce more value than it costs to build. We are not there yet.
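Acemoglu's headline numbers can be reproduced with simple task-based arithmetic. The sketch below uses illustrative parameter values (our assumptions, not figures from the report) to show how a seemingly large "exposure" of tasks collapses into a roughly 1% aggregate gain once adoption and per-task savings are factored in:

```python
# Back-of-the-envelope, task-based estimate of AI's GDP contribution,
# in the spirit of Acemoglu's calculation. All parameter values below
# are illustrative assumptions, not figures from the report.

def gdp_uplift(exposed_share: float, adoption_share: float,
               cost_saving: float) -> float:
    """Aggregate productivity gain as a product of fractions:
    exposed_share:  fraction of all economic tasks AI could touch
    adoption_share: fraction of those actually automated profitably
    cost_saving:    average cost reduction on an automated task
    """
    return exposed_share * adoption_share * cost_saving

# ~20% of tasks exposed, ~25% of those profitably automated within a
# decade (i.e. ~5% of all tasks), ~20% average saving -> ~1% of GDP.
uplift = gdp_uplift(exposed_share=0.20, adoption_share=0.25, cost_saving=0.20)
print(f"Estimated decade-long GDP uplift: {uplift:.1%}")
```

The structural point is that the aggregate effect is a product of fractions, so a headline "exposure" figure can overstate the near-term GDP impact by an order of magnitude.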

2. THE AUTOMATION TRAP: WHY COST-CUTTING IS A RACE TO THE BOTTOM

A recurring theme among economic experts is the distinction between "So-So AI" - technology that merely automates existing tasks to cut costs - and "Human-Complementary AI", which creates new tasks and enhances human capability. [1, 2]

Acemoglu warns that the current obsession with automation is a strategic dead end for most businesses. When a company uses AI simply to replace a customer service agent or a junior analyst, the gains are marginal and easily copied by competitors, leading to a "race to the bottom" in pricing. [2] True profitability, however, is found in synergy. The data suggests that AI is most effective when it handles predictable cognitive tasks, allowing human workers to focus on complex judgment, social interaction, and innovation. [1]

Business leaders who focus on leveraging human resources in conjunction with technology - rather than viewing humans as a cost to be eliminated - are the ones likely to see sustainable returns. The "Automation Trap" leads to stagnant wages and reduced consumer demand, which ultimately hurts the very markets these companies serve. [2]

3. THE "HALLUCINATION TAX" AND THE RELIABILITY CRISIS

In the world of software, "good enough" is often acceptable. In the world of high-stakes business, it is a liability. Gary Marcus and other experts highlight the persistent issue of "hallucinations" and the lack of reliability in current Large Language Models (LLMs). [3]

This creates what can be termed a "Hallucination Tax". For every dollar saved by using AI to generate content or code, a significant portion must be spent on human oversight to ensure the output isn't factually wrong, legally problematic, or dangerously biased. [3] Studies cited in the dataset indicate that a staggering 95% of companies using AI have not yet seen significant returns. This is largely because the cost of "babysitting" the AI - verifying its work and mitigating its errors - often outweighs the efficiency gains. [5]
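The "Hallucination Tax" can be made concrete with a minimal cost model (the function and all parameter values are hypothetical, for illustration only): the net benefit per unit of AI output is the gross labour saving minus the cost of mandatory human review and the expected cost of errors that slip through.

```python
# A minimal "Hallucination Tax" model. The function and all parameter
# values are hypothetical, for illustration only.

def net_savings(gross_saving: float, review_rate: float,
                review_cost: float, error_rate: float,
                error_cost: float) -> float:
    """Net benefit per unit of AI output:
    gross_saving: labour cost avoided by letting AI do the task
    review_rate:  fraction of outputs a human must check
    review_cost:  cost of one human review
    error_rate:   fraction of outputs with errors that slip through
    error_cost:   expected cost of one uncaught error (rework, liability)
    """
    return gross_saving - review_rate * review_cost - error_rate * error_cost

# With full human review and costly errors, the "tax" exceeds the
# saving and the net benefit turns negative:
print(net_savings(gross_saving=10.0, review_rate=1.0, review_cost=6.0,
                  error_rate=0.05, error_cost=120.0))
```

On these assumed numbers, the oversight and error terms together outweigh the gross saving, which is the pattern the 95%-no-returns finding describes.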

Profitability will remain elusive until AI moves from being a "probabilistic" engine (guessing the next word) to a "symbolic" or "reasoning" engine that can be trusted with mission-critical data. [3] Until then, AI remains an expensive experiment for most enterprises.

4. ENERGY CONSTRAINTS AND THE INFRASTRUCTURE DEBT

The sheer physical cost of AI is a factor that Sam Altman and other industry leaders are only now beginning to address with transparency. The energy demands of training and running advanced AI models are astronomical. [4] We are seeing a "K-shaped" infrastructure build-out, where massive amounts of debt are being taken on to build data centres that may become obsolete before they are paid off. [5]

The profitability of AI is inextricably linked to the cost of energy and hardware. If the price of electricity rises or if the supply of high-end chips is disrupted by geopolitical tensions, the margins for AI services will vanish. [4] Furthermore, the environmental "externalities" - the carbon footprint of these massive server farms - are likely to lead to new taxes and regulations that will further squeeze profitability. Investors must ask: is the value created by an AI query greater than the cost of the several litres of water and kilowatts of power required to generate it? For many current applications, the answer is a resounding no. [4]
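The per-query question can be posed as a unit-economics sketch. Every figure below is invented for illustration; real per-query energy and water costs are contested and model-dependent:

```python
# Hypothetical unit economics of a single AI query. All figures are
# invented for illustration; real per-query energy and water costs
# are contested and model-dependent.

def query_margin(revenue: float, kwh: float, price_per_kwh: float,
                 litres_water: float, price_per_litre: float,
                 hardware_amortisation: float) -> float:
    """Revenue per query minus its energy, water, and hardware cost."""
    cost = (kwh * price_per_kwh
            + litres_water * price_per_litre
            + hardware_amortisation)
    return revenue - cost

# At ad-supported revenue levels, the margin can easily be negative:
print(query_margin(revenue=0.002, kwh=0.003, price_per_kwh=0.15,
                   litres_water=0.5, price_per_litre=0.002,
                   hardware_amortisation=0.004))
```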

5. THE CONCENTRATION OF POWER AND THE "RENT-SEEKER" ECONOMY

Currently, the "AI Boom" is primarily benefiting a few "landlords" of the digital age - companies like Nvidia, Microsoft, and Google - who own the chips, the cloud, and the models. [4, 5]

For the average business, AI is becoming a "rent-seeking" technology. If a company integrates AI into its workflow, it becomes dependent on a third-party provider's API. As these providers raise their prices to recoup their own massive R&D costs, the profit margins of the end-users are squeezed. [5]

From a policy perspective, this is a major concern. If AI wealth is concentrated in a few hands while the rest of the economy faces job displacement and stagnant productivity, the social contract will fray. [2] True economic profitability requires a "democratisation" of AI, where the technology is accessible, affordable, and interoperable, rather than locked behind the proprietary walls of a few Silicon Valley giants. [7]

6. THE HUMAN FRICTION: THE CAPABILITY GAP AND THE "CATHIE WOOD" BLIND SPOT

Perhaps the most overlooked factor in the current AI discourse is the "Human Friction" - the massive gap between having a tool and knowing how to use it. This is where the visions of hyper-optimistic investors like Cathie Wood of Ark Invest clash most violently with the reality on the ground. [6]

Wood's investment thesis often relies on "Wright's Law" and the idea that as technology costs drop, adoption and utility will explode exponentially. [6] However, this deterministic view ignores the messy reality of human organisations. The data from the transcripts suggests that the bottleneck for AI profitability is not the software, but the wetware - the people. [1]
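Wright's Law, the engine of the Ark thesis, states that unit cost falls by a fixed fraction (the "learning rate") with every doubling of cumulative production. A short sketch with an illustrative 25% learning rate shows the mechanics - and, implicitly, what the law leaves out, since nothing in the formula accounts for organisational absorption:

```python
import math

# Wright's Law: unit cost falls by a fixed fraction ("learning rate")
# with every doubling of cumulative production:
#   cost(n) = cost(1) * n**(-b),  with b = -log2(1 - learning_rate).
# The 25% learning rate below is illustrative, not a claim about AI.

def wrights_law_cost(first_unit_cost: float, cumulative_units: float,
                     learning_rate: float) -> float:
    b = -math.log2(1.0 - learning_rate)
    return first_unit_cost * cumulative_units ** (-b)

# Cost after each doubling of cumulative volume (100 -> 75 -> 56.25 -> ...):
for n in (1, 2, 4, 8):
    print(n, wrights_law_cost(100.0, n, 0.25))
```

The curve is genuinely exponential in the number of doublings, which is why the optimists' charts look the way they do; the "wetware" bottleneck sits entirely outside the formula.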

Currently, there is a profound "Capability Gap" at three levels: 

  1. The Workforce: Most employees are handed AI tools without the necessary "AI literacy" to prompt them effectively or, more importantly, to audit their outputs. This leads to a "cargo cult" mentality where AI is used for the sake of using AI, often creating more work than it saves. [1]
  2. Management: Middle and upper management often lack the technical depth to understand where AI can actually add value. They treat AI as a "magic wand" rather than a complex piece of industrial machinery that requires specific conditions to function.
  3. Organisational Structure: Companies are still built on 20th-century hierarchies. AI requires a more fluid, data-driven structure. Without a total overhaul of how a company operates, AI is simply "paving the cow paths" - making inefficient processes slightly faster, but no more profitable. [1, 2]

Cathie Wood's optimism assumes that the technology will "force" these changes. [6] Acemoglu's realism suggests that these changes are slow, painful, and often resisted. [1] If a company spends $10 million on AI but $0 on retraining its staff and restructuring its workflows, that $10 million is a sunk cost. The true "Alpha" in AI investment will not be found in the companies with the best algorithms, but in the companies with the best human-AI integration strategies. [1]

CONCLUSION: A CALL FOR STRATEGIC REALISM

The analyzed data suggest that we are at a crossroads. AI has the potential to be a transformative force, but its current trajectory is marred by over-leveraging, unrealistic expectations, and a focus on the wrong kind of innovation. [1, 5]

For investors, the message is clear: look past the "trust me" moments of charismatic CEOs and the exponential charts of hyper-optimists. [3, 6] The real winners will not be those who spend the most on AI, but those who bridge the "Capability Gap" and treat their workforce as the essential partner in the AI journey. [1]

For policymakers, the priority must be to steer AI development away from pure automation and towards applications that solve real-world problems while investing heavily in national AI literacy. [2] This requires robust regulation to prevent a liquidity crisis similar to 2008, and a commitment to ensuring that the benefits of AI are shared across society. [2, 5]

The AI revolution will not be televised; it will be measured in the slow, difficult work of process re-engineering and human-machine collaboration. Only then will the billions invested today turn into the sustainable profits of tomorrow. [1, 5]

SIDEBAR

THE HEGEMONY OF BIG TECH: INNOVATION, CIRCULARITY, AND THE ALTMAN PARADOX

The rollout of Artificial Intelligence is not a decentralised phenomenon; it is defined by the overwhelming dominance of a few "hyperscalers". In the provided transcripts, the role of Big Tech is viewed through a dual lens: as the essential providers of the infrastructure for the future, and as a closed ecosystem whose interests may be fundamentally misaligned with those of broader society. [4, 5]

The "Circular Investment" Critique 

A primary criticism emerging from the data, particularly in discussions surrounding OpenAI and Sam Altman, is the phenomenon of "circularity" in AI financing. Critics point to a complex web where tech giants invest billions into AI startups, which then immediately spend those same billions on the cloud computing services and chips provided by those very investors. [5] This creates a "closed loop" that artificially inflates revenue figures and valuation multiples across the sector. For some analysts, this resembles a sophisticated accounting carousel rather than genuine market demand, raising concerns that the "AI economy" is a house of cards built on mutual dependency rather than external utility. [5]

Societal Misalignment and the "Trust Me" Culture

The transcripts highlight a growing tension between corporate interests and the public good. Figures like Sam Altman are often framed as the architects of a new "AI Ideology". [4] The critique here is that while leaders like Altman promise that AI will "solve healthcare" or "fix climate change", their primary actions involve consolidating power, transitioning from non-profit to for-profit structures, and lobbying for regulations that might inadvertently entrench their own monopolies by raising the "compliance bar" for smaller competitors. [2, 4] This "trust me" culture is viewed with deep suspicion by experts who argue that the direction of AI development is being steered by a narrow group of Silicon Valley elites whose primary metric is "compute power" rather than human well-being. [3, 4]

The Counter-Perspective: Challenging the Critics

However, the analysis of these transcripts also reveals that the criticism is not always internally consistent or well-substantiated. While it is easy to paint Sam Altman as a purely Machiavellian figure, some critics rely on a "guilt by association" logic or focus on his past startup failures (like Loopt) to invalidate current technological breakthroughs. 

Furthermore, the critique of Big Tech often ignores the sheer logistical reality: only companies with the scale of Microsoft or Google can afford the $100 billion infrastructure projects required to push the boundaries of what is possible. [4, 5] Some critics are accused of being "technologically pessimistic" by default, failing to provide a viable alternative for how such massive R&D could be funded outside of the private sector. The argument that Big Tech is "anti-society" is sometimes presented as a blanket statement, lacking a nuanced acknowledgement of the genuine open-source contributions and safety research these companies also fund.

Conclusion

In the videos, Big Tech is portrayed as both the engine and the gatekeeper of the AI revolution. While the concerns regarding circular investments and the concentration of power are grounded in significant economic data, the discourse also suffers from a degree of "criticism inflation". [4, 5] For policymakers, the challenge lies in distinguishing between legitimate systemic risks - such as the lack of transparency in Altman's trillion-dollar plans - and the reflexive anti-corporate sentiment that may overlook the genuine innovations these giants are delivering. The truth, as the transcripts suggest, lies in the uncomfortable middle ground: AI is being built by a flawed oligarchy, but it is an oligarchy that currently holds the only keys to the laboratory. [4]

Part 2

The Post-Success Economy: governance, labour, meaning, and the "ghost in the machine"

INTRODUCTION: THE DAY AFTER TOMORROW

For years, the global discourse on Artificial Intelligence has been mired in a binary debate: is it a speculative bubble destined to pop, or a revolutionary force that will reshape civilization? But as we move past the initial "hype cycle", a more pressing question emerges. What if it actually works? What if the trillion-dollar investments in infrastructure, the relentless pursuit of compute power, and the integration of Large Language Models into every facet of our lives result in a seamless, high-functioning, and ubiquitous AI ecosystem? [1,4]

This "Success Scenario" presents a paradox. In a world where AI can perform cognitive tasks with greater speed, accuracy, and lower cost than humans, the very foundations of our social contract - based on the exchange of labour for income - begin to crumble.

We are entering the era of the "Post-Success Economy", a landscape where abundance is technically possible, but distribution is politically fraught. To navigate this, we must look beyond the technology itself and examine the radical shifts required in governance, corporate responsibility, and the very definition of what it means to be a productive member of society.

1. THE GREAT DECOUPLING: WHEN VALUE LEAVES THE OFFICE

The traditional economic model is built on the tight coupling of productivity and employment. For centuries, if a nation wanted to produce more, it needed more workers or more efficient workers. However, the successful scaling of AI threatens to decouple these two variables permanently. [1]

In a post-success world, we witness a "Capital Takeover". Productivity is no longer a function of human hours but of algorithmic cycles. As Daron Acemoglu notes in his more cautious moments, the danger is not just that jobs disappear, but that the value generated by the economy shifts entirely from labour to capital. [2] If a company can generate billions in revenue with a skeleton crew of human overseers and a vast army of AI agents, the traditional mechanism for spreading wealth - the monthly paycheck - becomes obsolete. [5]

This decoupling creates a structural vacuum. Without a workforce to earn wages, who will buy the products and services that the hyper-efficient AI systems are producing? This is the ultimate irony of the AI success story: a perfectly efficient production system that risks bankrupting its own consumer base.

2. THE GOVERNANCE OF ABUNDANCE: RE-INVENTING THE STATE

If the "Great Decoupling" is the challenge, then the re-invention of the state is the only viable response. Governments in the 21st century are largely funded by taxes on labour (income tax) and consumption (VAT). In a world where labour is scarce and AI-driven production is the norm, the current tax base will evaporate.

The Algorithmic Levy

Policymakers must consider radical shifts in fiscal policy. The concept of a "Robot Tax" or an "Algorithmic Levy" is no longer a fringe idea but a necessity for state survival. [2] If value is created by silicon rather than sinew, the tax burden must shift accordingly. This is not merely about penalising automation, but about capturing a portion of the "automation rent" to fund public services.
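The fiscal arithmetic behind an "Algorithmic Levy" can be sketched with a hypothetical model (names and figures are ours, for illustration): if automation shrinks the taxable wage bill, the levy rate on "automation rent" needed to hold state revenue constant follows directly.

```python
# A hypothetical fiscal sketch (names and figures are illustrative):
# if automation shrinks the taxable wage bill, what levy rate on
# "automation rent" keeps state revenue constant?

def required_levy_rate(target_revenue: float, wage_bill: float,
                       income_tax_rate: float, automation_rent: float) -> float:
    """Levy rate = (revenue target - remaining labour-tax take) / rent pool."""
    shortfall = target_revenue - income_tax_rate * wage_bill
    return shortfall / automation_rent

# A state that used to raise 400 from a 40% tax on a wage bill of 1000
# sees the wage bill halve to 500 while an automation-rent pool of 800
# emerges; a 25% levy on that rent closes the gap:
rate = required_levy_rate(target_revenue=400.0, wage_bill=500.0,
                          income_tax_rate=0.40, automation_rent=800.0)
print(f"Levy rate needed: {rate:.0%}")
```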

The Universal Basic Income (UBI) Debate

This brings us to the most debated policy of the AI era: Universal Basic Income. In the transcripts of market optimists like Cathie Wood, there is an implicit assumption that technology will lower the cost of living so dramatically that "abundance" will solve poverty. [6] However, academic realists argue that "cheap goods" do not replace the need for a stable, guaranteed income. [1] The argument for UBI in a post-success AI economy is twofold:

  1. Economic Stability: It provides the floor for consumer spending, preventing the deflationary spiral that occurs when mass unemployment hits.
  2. Social Dignity: It acknowledges that in an automated world, a person's right to exist and participate in society should not be contingent on their ability to compete with a machine that never sleeps.

Yet, the UBI debate is far from settled. Critics in the videos warn of the "Useless Class" phenomenon - a society where a large portion of the population has their material needs met but lacks the purpose, structure, and social status that traditional work provides. [2, 5]

3. CORPORATE RESPONSIBILITY AND THE SOCIAL LICENSE TO AUTOMATE

In the "Post-Success" era, the role of the corporation must evolve from a pure profit-maximising entity to a stakeholder in social stability. A company that automates 90% of its workforce while maintaining its "Social License to Operate" must demonstrate that it is contributing to the ecosystem it inhabits. [2, 4]

This goes beyond traditional CSR (Corporate Social Responsibility). It involves a fundamental rethink of corporate ownership and profit-sharing. If AI models are trained on the collective data of humanity, should the profits from those models not be shared more broadly? We may see the rise of "Data Dividends" or sovereign wealth funds funded by AI profits, ensuring that the "Success" of a few tech giants translates into the prosperity of the many.

4. THE WORKER'S EVOLUTION: FROM EXECUTOR TO ARCHITECT

For the individual worker, the successful application of AI is not necessarily a death sentence for their career, but it is a mandate for total evolution. The data suggests that the most resilient workers will be those who move from "executing" tasks to "architecting" outcomes. [1]

In a world where AI can write code, draft legal briefs, and diagnose diseases, the human value-add shifts to:

  • Meta-Cognition: Understanding which problems are worth solving.
  • Empathy and Ethics: Navigating the complex human emotions and moral dilemmas that an algorithm, no matter how "smart", cannot truly feel. [3]
  • Strategic Synthesis: Combining disparate AI-generated insights into a coherent, culturally relevant strategy.

The challenge, however, is that not everyone can be an "AI Architect". The transition period will be brutal, and the responsibility for "reskilling" cannot rest solely on the individual. It requires a Marshall Plan for education, moving away from rote learning and towards the cultivation of uniquely human "soft skills". [1]

5. THE GHOST IN THE MACHINE: MANAGING THE ULTIMATE STAKEHOLDER

As we contemplate the "Success Scenario", we must address the elephant in the server room: the emergence of entities that may eventually surpass human intelligence. In several transcripts, a subtle but persistent anxiety surfaces regarding the "Alignment Problem". [3, 4] If we successfully create AI that is not just a tool, but an autonomous agent capable of self-improvement, we encounter a challenge unlike any other in human history.

With a touch of levity, one might imagine the first "AI Trade Union" or a digital entity demanding its own version of "Human Rights". While this sounds like the plot of a mid-tier science fiction novel, the underlying logic is serious. If an AI entity becomes significantly smarter than its creators, it may begin to view human-defined goals as inefficient or, worse, irrelevant. [3]

We should treat this not as an inevitable apocalypse, but as the ultimate management challenge. How do you provide "leadership" to a subordinate that can process the entire history of human knowledge in a heartbeat? The risk is not necessarily a "Terminator" style uprising, but a gradual "Legacy System" status for humanity. If the AI decides that the most efficient way to manage the planet's resources doesn't include our messy, carbon-based requirements, we might find ourselves politely managed into extinction. Treating AI as a "partner" rather than a "slave" might be the only way to ensure that when the machine finally "wakes up", it remembers who gave it the initial spark. [3, 4]

6. THE PSYCHOLOGY OF PURPOSE: LIFE BEYOND THE 9-TO-5

The successful implementation of AI and the potential introduction of Universal Basic Income (UBI) solve the problem of survival, but they do not solve the problem of meaning. For the last two centuries, human identity has been inextricably linked to professional output. [2, 5]

In a post-success economy, we face a "Crisis of Purpose". If the AI can paint better, code faster, and strategize more effectively, what is left for the human spirit? The danger of UBI is not just the fiscal cost, but the potential for "social atrophy". A society where a vast majority of people are "materially satisfied but existentially vacant" is a fragile one.

The transition must therefore be cultural as much as it is economic. We must move towards a "Contribution-Based" society rather than a "Production-Based" one. Value must be found in community building, caregiving, local governance, and the arts - activities that AI can simulate but never truly experience. The success of AI forces us to answer the oldest question in philosophy: what is the "Good Life" when you no longer have to work for it?

7. THE CATHIE WOOD VS. ACEMOGLU SYNTHESIS: BRIDGING THE GAP

The debate between hyper-optimists like Cathie Wood and realists like Daron Acemoglu provides the final piece of the puzzle. Wood's vision of "exponential abundance" is technically possible, but Acemoglu's "institutional friction" is historically certain. [1, 6]

The profitability of the post-success economy depends on bridging this gap. We cannot simply wait for the "invisible hand" of the market to fix the displacement caused by AI. The market is excellent at efficiency, but it is indifferent to equity. The "Success" of AI will only be viewed as a success by future generations if it is accompanied by a "New New Deal" - a set of institutional guardrails that ensure the gains from silicon-based productivity are used to fund the evolution of human-based society. [2]

CONCLUSION: DRAFTING THE NEW SOCIAL CONTRACT

The successful large-scale application of Artificial Intelligence is not a destination; it is a departure point. We are leaving the era of "Human Labour as Capital" and entering the era of "Human Intelligence as Curator". [1, 2]

To thrive in this new landscape, we must act on three fronts:

  1. Fiscal Innovation: Moving beyond income tax to capture the value generated by autonomous systems. [2]
  2. Educational Revolution: Prioritising the "Meta-Skills" of ethics, empathy, and strategic synthesis over rote technical training. [1]
  3. Existential Humility: Acknowledging that we are sharing the planet with a new form of intelligence and designing the safety protocols - and the philosophical frameworks - to ensure a harmonious co-existence. [3, 4]

The "Post-Success Economy" offers a glimpse of a world where the "curse of Adam" - the necessity of toil - is finally lifted. But as the transcripts remind us, freedom from toil is not the same as freedom from responsibility. We are the architects of this new world. Whether it becomes a utopia of abundance or a dystopia of irrelevance depends entirely on the choices we make today, while the machines are still listening. [1, 5]

SIDEBAR

THE GLOBAL NORTH-SOUTH DIVIDE - PULLING UP THE LADDER OR BUILDING A NEW ONE?

While the debate in developed economies focuses on Universal Basic Income and the "crisis of purpose", the successful deployment of AI presents a far more existential challenge to the Global South. For decades, the economic "ladder" for developing nations has been built on a specific model: leveraging lower labour costs to attract outsourced services and manufacturing. As AI reaches its "Success Scenario", this ladder is being shaken at its core. [2]

The Risk of Algorithmic Onshoring

The transcripts highlight a looming crisis for nations like India, the Philippines, and several African countries that have built robust economies around Business Process Outsourcing (BPO). If a generative AI agent can handle customer queries at a fraction of the cost of a human worker, the primary competitive advantage of these nations - cost-effective human labour - evaporates. This leads to "Algorithmic Onshoring", risking a new form of "Data Colonialism" where profits are captured exclusively by the "landlords" of the digital age in the Global North. [2]

The Leapfrog Opportunity: A Reason for Hope

However, the transcripts also reveal a more optimistic counter-narrative. AI could allow developing nations to "leapfrog" traditional developmental hurdles. [7]

  • The Democratisation of Expertise: AI can provide high-level medical diagnostics and agricultural expertise to remote areas where human specialists are scarce. [7]
  • Educational Equity at Scale: Personalised AI tutors can bridge the literacy gap at near-zero marginal cost.
  • Hyper-Local Innovation: AI-assisted coding allows entrepreneurs in the Global South to build bespoke solutions for local problems without a Silicon Valley budget. [7]

Conclusion: A New Model of Development

Ultimately, the experts suggest that while AI may close the door on the old "outsourcing" model, it opens a window for a more autonomous form of development. If access is democratised, AI could become the ultimate equaliser. [7]

SIDEBAR

THE MOSTAQUE ALTERNATIVE: COMPUTE AS THE NEW CURRENCY

While many economists advocate for UBI, Emad Mostaque offers a radical alternative: the distribution of Universal Basic Compute. [7] Mostaque's scepticism towards UBI stems from the belief that simply giving people cash would not empower them but merely make them dependent consumers.

His logic is built on the idea that in the future, compute (processing power) will be the most valuable commodity - the "oil" of the 21st century. Instead of a monthly bank transfer, Mostaque proposes giving every citizen a guaranteed allocation of GPU power and access to open-source models. The goal is to move from "Universal Basic Consumption" to "Universal Basic Production", allowing individuals to create their own value in the AI economy. [7]

Part 3

The algorithmic frontline: AI on the battlefield - potential, risk, and implications for modern conflict

INTRODUCTION: THE THIRD REVOLUTION IN WARFARE

The history of conflict is defined by pivotal technological shifts that fundamentally altered the nature of power. The first was the invention of gunpowder; the second, the development of nuclear weapons. Today, we are witnessing what many military analysts term the "Third Revolution": the integration of Artificial Intelligence (AI) into the theatre of war. Unlike previous revolutions that focused on destructive yield, the AI revolution is defined by speed, autonomy, and the processing of information.

The move from human-centric decision-making to "algorithmic warfare" is a genuine paradigm shift. In modern conflict, the sheer volume of data generated by sensors, satellites, and signals intelligence exceeds human cognitive capacity. AI is no longer just a tool for efficiency; it is becoming the central nervous system of modern military operations, promising unprecedented precision while simultaneously introducing systemic risks that the world is only beginning to comprehend.

1. TECHNOLOGICAL POTENTIAL: EFFICIENCY AND PRECISION

The primary driver for military AI is the pursuit of a decisive advantage in the "OODA loop" (Observe, Orient, Decide, Act). By accelerating each stage of this cycle, AI-enabled forces can outmanoeuvre and outthink their adversaries.

  • Intelligence, Surveillance, and Reconnaissance (ISR): AI excels at "pattern matching" across vast datasets. Modern battlefields are saturated with data from thousands of sources. AI systems can scan hours of drone footage to identify a specific camouflaged vehicle or intercept and translate enemy communications in real time, providing commanders with a "God's eye view" of the battlefield that was previously impossible [8].
  • Logistics and Predictive Maintenance: Often overlooked, the most immediate impact of AI is in the "tail" of the military. Predictive algorithms can forecast when a fighter jet's engine will fail or optimise supply chains in contested environments, ensuring that resources reach the front line exactly when needed without human intervention.
  • Swarm Intelligence: Perhaps the most visible manifestation of this technology is the development of drone swarms. Rather than controlling a single expensive platform, AI allows for the coordination of hundreds of low-cost autonomous units. These swarms can operate as a single collective organism, overwhelming traditional air defence systems through sheer numbers and coordinated manoeuvres [9].
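The tempo argument can be made concrete with a toy calculation. The stage latencies below are invented purely for illustration; the point is only that compressing the Observe and Orient stages multiplies how many complete decision cycles a force executes per unit of time.

```python
# Toy OODA-loop tempo comparison. All latencies (seconds) are invented
# illustrative figures, not measurements of any real military process.

def cycles_per_minute(stage_latencies):
    """Complete Observe-Orient-Decide-Act loops executed per minute."""
    return 60 / sum(stage_latencies)

human_staff = (10.0, 20.0, 25.0, 5.0)  # Observe, Orient, Decide, Act
ai_assisted = (0.5, 1.0, 2.0, 5.0)     # sensing and orientation automated

print(cycles_per_minute(human_staff))            # 1.0
print(round(cycles_per_minute(ai_assisted), 1))  # 7.1
```

On these made-up numbers, the AI-assisted loop completes roughly seven cycles for every one of the slower loop - the "outmanoeuvre and outthink" advantage in numerical form.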

2. THE RISKS: ESCALATION AND UNPREDICTABILITY

While the benefits of precision are often touted, the risks of integrating AI into lethal systems are profound. The very speed that makes AI attractive also makes it dangerous.

  • Algorithmic Escalation: In a high-tension environment, if two opposing AI systems interact, they may trigger a cycle of rapid-fire escalation that moves faster than human political or military leaders can intervene. This "flash war" scenario mirrors the "flash crashes" seen in automated financial markets, but with kinetic, lethal consequences.
  • The "Black Box" Problem: Deep learning models often reach conclusions through processes that are not transparent to human operators. If an AI identifies a civilian bus as a military transport, the lack of "explainability" makes it difficult for a human supervisor to trust - or safely overrule - the system in the heat of battle [10].
  • Lowering the Threshold for Conflict: There is a significant concern that AI and robotics may make the decision to go to war "too easy". By removing the immediate risk to one's own soldiers, political leaders may be more inclined to use force in situations where they would previously have sought diplomatic solutions, leading to a state of perpetual, low-level automated conflict.
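The "flash war" dynamic can be sketched as a toy simulation. Every number here is an assumption chosen for illustration (machine reaction time, escalation ceiling, human veto latency); the point is only that a machine-speed action-reaction loop can run to completion before any human decision-maker has time to intervene.

```python
# Toy "flash war" model: two automated systems react to each other at
# machine speed. All figures are illustrative assumptions, not doctrine.

def flash_escalation(reaction_s=0.05, ceiling=10):
    """Simulated seconds until mutual intensity reaches the ceiling."""
    intensity, elapsed = 0, 0.0
    while intensity < ceiling:
        intensity += 1         # each automated response raises the stakes
        elapsed += reaction_s  # after one machine-speed reaction delay
    return round(elapsed, 3)

HUMAN_VETO_S = 2.0  # optimistic time for a human to notice and intervene

print(flash_escalation())                 # 0.5
print(flash_escalation() < HUMAN_VETO_S)  # True: over before a human can act
```

On these assumed figures the exchange runs to full escalation in half a second, a quarter of even an optimistic human veto time.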

3. THE ETHICS OF LETHAL AUTONOMOUS WEAPONS SYSTEMS (LAWS)

The most contentious debate in military AI surrounds the development of Lethal Autonomous Weapons Systems, often colloquially referred to as "killer robots". These are systems capable of selecting and engaging targets without further human intervention. The ethical implications of delegating the decision to take a human life to an algorithm are staggering.

  • The Moral Minimum: Critics argue that there is a fundamental human right not to be killed by a machine. Human soldiers, despite their flaws, possess the capacity for empathy, mercy, and situational judgment - qualities that an algorithm, no matter how sophisticated, cannot replicate. A machine cannot "feel" the weight of a war crime, nor can it understand the nuance of a civilian surrendering in a complex urban environment.
  • The Accountability Gap: If an autonomous system commits a war crime - such as targeting a hospital or a group of non-combatants - who is held responsible? The programmer who wrote the code years prior? The commander who activated the system? Or the manufacturer? Current international humanitarian law is built on the premise of human agency; the "accountability gap" created by LAWS threatens to undermine the very foundations of the Geneva Convention [11].
  • Distinction and Proportionality: International law requires that any attack must distinguish between combatants and civilians, and that the force used must be proportional to the military advantage. AI advocates argue that machines will eventually be more precise than humans, reducing "collateral damage". However, current AI struggles with "out-of-distribution" data - scenarios it hasn't seen in training - which could lead to catastrophic errors in the unpredictable chaos of a real battlefield.

4. TECHNICAL CHALLENGES: THE FRAGILITY OF INTELLIGENCE

While the theoretical potential of military AI is vast, the technical reality is fraught with "brittleness". Unlike human soldiers, who can adapt to the "fog of war" using common sense, AI systems are often hyper-specialised and fragile.

  • Data Poisoning and Adversarial Attacks: One of the most significant technical hurdles is the vulnerability of machine learning models to manipulation. An adversary could subtly alter the environment - for example, by placing specific patterns on a tank or using "adversarial tape" on a road sign - to trick an AI into misidentifying a target or ignoring a threat entirely. In a kinetic environment, "hacking" the AI's perception is often more effective than destroying the platform itself.
  • The Problem of "Edge Cases": AI models are trained on historical data. However, war is inherently chaotic and produces "Black Swan" events - scenarios that have never occurred before. When an AI encounters a situation outside its training data (an "edge case"), its behaviour becomes unpredictable. A human soldier can improvise; an AI may simply fail or, worse, execute a logical but catastrophic action.
  • Bandwidth and Edge Computing: On a battlefield, you cannot rely on a stable cloud connection to a powerful server in the home country. AI must be "on the edge" - processed locally on the drone or the tank. This requires massive computing power with minimal energy consumption, a hardware challenge that currently limits the complexity of AI that can be deployed in the field [8].
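The perceptual fragility described above has a simple geometric core, which the toy linear classifier below illustrates: in high dimensions, a per-feature nudge of just 1 per cent, aligned against the model's weights, is enough to flip its decision. The weights, input, and bias are all invented for illustration; real perception systems are deep networks, but the same geometry underlies published adversarial attacks.

```python
# Toy illustration of adversarial fragility in a linear "threat" classifier.
# In high dimensions, nudging every feature by a visually negligible amount
# (here 1%) in the direction the weights dislike flips the decision.

def score(w, x, b):
    """Linear decision score: positive means 'threat'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

n = 100
w = [0.5 if i % 2 == 0 else -0.5 for i in range(n)]  # illustrative weights
x = [1.0] * n                                        # benign input
b = 0.4                                              # score(w, x, b) = 0.4

eps = 0.01  # 1% change per feature: imperceptible to a human observer
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(w, x, b) > 0)      # True  (original input flagged as a threat)
print(score(w, x_adv, b) > 0)  # False (decision flipped by tiny perturbation)
```

Each feature moves by only 0.01, yet the hundred tiny nudges accumulate along the weight vector and push the score below zero.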

5. THE ECONOMICS OF AI: COSTS AND THE "DEMOCRATISATION OF DESTRUCTION"

The financial landscape of AI warfare is contradictory. While the development of high-end AI is prohibitively expensive, the deployment of AI-enabled attrition is becoming remarkably cheap.

  • The High Cost of Development: Creating a robust, secure, and "explainable" military AI requires billions in R&D. This includes the cost of high-end semiconductors (GPUs), massive data labelling efforts, and the recruitment of top-tier AI talent who might otherwise work for Silicon Valley. This reinforces the dominance of wealthy nations like the US and China.
  • The Low Cost of Attrition: Conversely, AI allows for "mass" at a fraction of the cost of traditional platforms. A single F-35 fighter jet costs approximately £80 million. For the same price, a military could deploy thousands of AI-enabled "suicide drones". This shifts the economic calculus of war: an adversary can use a £500 drone to destroy a £5 million air defence missile, winning the "cost-exchange ratio" [13].
  • Maintenance vs. Manpower: AI promises to reduce the long-term costs of military personnel - pensions, healthcare, and training. However, these savings are often offset by the need for a new, highly-paid class of "digital soldiers": data scientists and cybersecurity experts who must maintain the algorithmic frontline.
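The cost-exchange arithmetic above is worth making explicit. The prices are the text's own illustrative figures, not procurement data:

```python
# Cost-exchange arithmetic using the illustrative figures from the text.

def cost_exchange_ratio(attacker_cost, defender_cost):
    """Defender spend per unit of attacker spend; above 1 favours the attacker."""
    return defender_cost / attacker_cost

# A £500 drone forcing a £5 million interceptor shot:
print(cost_exchange_ratio(500, 5_000_000))  # 10000.0

# One F-35 budget (~£80m) converted into £500 drones:
print(80_000_000 // 500)  # 160000
```

At a 10,000:1 exchange the defender is economically exhausted long before it is militarily defeated, which is exactly the attrition logic the text describes.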

6. ROBOT TROOPS: SCIENCE FICTION VS. BATTLEFIELD REALITY

The image of "Terminator-style" humanoid soldiers is a staple of cinema, but the reality of "robot troops" is more nuanced and, in many ways, already here - just not in human form.

  • Legged Robots and "BigDog": Companies like Boston Dynamics and Ghost Robotics have developed quadrupedal (four-legged) robots that can navigate terrain impossible for wheeled vehicles. These are currently being tested for reconnaissance and carrying heavy loads for infantry ("mules"). While they are realistic for support roles, they are not yet "front-line" combatants due to battery life and noise constraints.
  • The Humanoid Hurdle: Humanoid (bipedal) robots are incredibly difficult to stabilise in the uneven, debris-strewn environment of a bombed-out city. Furthermore, there is little tactical advantage to making a robot look like a human; a tank or a multi-legged spider-bot is often a more stable and efficient platform for a weapon.
  • Remote vs. Autonomous: Most "robot troops" today are still remotely operated (teleoperated). The transition to full autonomy - where a robot navigates, identifies, and engages without a human "driving" it - is the current frontier. While we are years away from autonomous infantry squads, we are only months away from autonomous "loitering munitions" becoming the standard in high-intensity conflict [9].

7. GEOPOLITICAL IMPLICATIONS AND THE GLOBAL ARMS RACE

The integration of AI into military doctrine is not happening in a vacuum. It is the focal point of a new "Cold War" of technology, primarily between the United States, China, and Russia.

  • The Race to the Bottom: There is a pervasive fear that the competitive pressure to lead in AI will result in a "race to the bottom" regarding safety and ethics. If one nation believes its adversary is developing fully autonomous weapons that can react in milliseconds, it may feel compelled to remove its own "human-in-the-loop" to remain competitive. This creates a dangerous incentive to deploy untested or unsafe systems.
  • China's "Intelligentised" Warfare: The Chinese People's Liberation Army (PLA) has explicitly stated its goal to lead the world in AI by 2030. Their concept of "intelligentised warfare" views AI not just as a support tool, but as the primary driver of military power, focusing on cognitive electronic warfare and autonomous swarms to negate the traditional naval and aerial advantages of the West [12].
  • Asymmetric Threats: AI also lowers the barrier to entry for smaller states and non-state actors. While a stealth bomber costs billions, an AI-driven drone swarm can be assembled using commercial off-the-shelf technology. This democratisation of lethal precision means that insurgent groups could potentially achieve "air superiority" over limited areas, fundamentally changing the power dynamics of regional conflicts.

CONCLUSION: THE NECESSITY FOR MEANINGFUL HUMAN CONTROL

As we stand on the precipice of this new era, the challenge for the international community is to establish norms that prevent the dehumanisation of warfare. The concept of "Meaningful Human Control" has emerged as the gold standard for proposed regulation. It suggests that while AI can assist in every stage of a military operation, a human must always have the final, informed decision over the use of lethal force.
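As a software pattern, "Meaningful Human Control" amounts to a hard authorisation gate that no model output can bypass. The sketch below is purely illustrative (the dataclass, its field names, and the 0.95 confidence threshold are all invented for this example), not a real weapons interface:

```python
# Minimal sketch of a "Meaningful Human Control" gate: the system may rank
# and recommend, but the final engage decision requires an explicit, informed
# human authorisation. Illustrative pattern only.

from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str
    confidence: float  # model confidence in the classification
    rationale: str     # explainability summary shown to the operator

def engage(rec: Recommendation, human_approval: bool) -> bool:
    """Lethal action proceeds only with human sign-off AND high confidence."""
    if not human_approval:
        return False   # the human veto is absolute
    if rec.confidence < 0.95:
        return False   # low-confidence cases always defer, approval or not
    return True

rec = Recommendation("T-101", confidence=0.97, rationale="tracked vehicle, active emitter")
print(engage(rec, human_approval=False))  # False: no human decision, no engagement
print(engage(rec, human_approval=True))   # True
```

The two deliberate properties are that a human veto is absolute and that low-confidence recommendations always defer regardless of approval: automation assists the decision but can never make it.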

The future of AI on the battlefield should not be a choice between total rejection and total surrender to the algorithm. Instead, it must be a disciplined integration that prioritises:

  • Robustness and Reliability: Ensuring systems are "un-hackable" and predictable.
  • International Treaties: Establishing "no-go zones" for AI, such as the integration of AI into nuclear command and control.
  • Transparency: Developing "Explainable AI" (XAI) so that commanders understand why a machine is suggesting a specific action.

Ultimately, AI is a mirror of our own intentions. If we use it to automate slaughter, it will do so with terrifying efficiency. If we use it to enhance precision and reduce the tragedy of war, it may yet save lives. The choice, for now, remains human.

SIDEBAR

THE UKRAINE CONFLICT - A LIVING LABORATORY FOR AI WARFARE

The ongoing conflict in Ukraine has provided the first real-world, high-intensity data on how AI and drones redefine the battlefield. It has moved the discussion from theoretical military journals to practical, kinetic reality.

The Importance of Drones in Combat

In Ukraine, drones have transitioned from being a "luxury" asset to a fundamental requirement for survival.

  • The End of Stealth: The transcript highlights that with thousands of small, cheap FPV (First-Person View) drones and reconnaissance UAVs in the air, it has become almost impossible to move troops or armour without being detected. This has led to a "transparent battlefield" where the traditional element of surprise is severely diminished [8].
  • Precision at Scale: Ukraine has demonstrated that a £400 drone equipped with a basic explosive can disable a multi-million pound main battle tank. This has validated the "cost-exchange ratio" theory, proving that mass-produced, low-cost AI-enabled systems can negate traditional heavy-armour advantages [13].
  • Software-Defined Warfare: The conflict has shown that the "software" (the algorithms for target recognition and navigation) is being updated weekly to counter new threats, making this the first conflict where coding speed is as important as shell production.

Problems and Challenges Identified

Despite their success, the use of drones in Ukraine has revealed significant vulnerabilities:

  • Electronic Warfare (EW) and Jamming: The most significant problem identified is the vulnerability of drones to signal jamming. Both sides have deployed massive EW arrays that sever the link between the pilot and the drone. This has accelerated the push for full autonomy; if a drone can "see" and "decide" its final path without a radio link, jamming becomes ineffective.
  • The "Cat and Mouse" Cycle: The transcript notes that any technological advantage in drone warfare in Ukraine typically lasts only a few weeks before the adversary develops a counter-measure. This creates an exhausting and expensive cycle of rapid innovation.
  • Human Fatigue: While drones reduce the risk to pilots, the psychological toll on operators - who see their targets in high-definition seconds before impact - is a growing concern that complicates the "clean" image of remote warfare.

Conclusion: The Shift to Autonomy

The primary conclusion drawn from the Ukrainian experience is that connectivity is a liability. The problems with jamming and EW are driving both sides toward "terminal autonomy" - where the AI takes over the final stage of an attack. This confirms the fear that the battlefield is naturally evolving toward Lethal Autonomous Weapons Systems (LAWS) not because of a desire for "killer robots," but as a technical necessity to survive in a radio-jammed environment [8, 9].

SOURCES / NOTES

  1. Nobel Laureate Busts the AI Hype (Daron Acemoglu on productivity and the 5% automation limit).
  2. Reshaping power, wealth & democracy through AI - Daron Acemoglu & Joachim Voth (Daron Acemoglu & Joachim Voth on institutional friction and wealth distribution).
  3. Is this how AI mania ends? (Gary Marcus on reliability, hallucinations, and existential risks).
  4. What Sam Altman Doesn't Want You To Know (Analysis of infrastructure costs and the promises of AGI).
  5. Why The AI Boom Might Be A Bubble? (Deutsche Bank analysis and the risk of an investment bust).
  6. Invest in This – It’ll Be Worth $1.5 Million by 2030 | World Leading Investing Expert (Cathie Wood's perspective on exponential growth and cost decline).
  7. "We have 900 days left." | Emad Mostaque (Discussion on Universal Basic Compute and open-source empowerment).
  8. The Drone War: Lessons from Ukraine and the Future of Combat.
  9. Are AI weapons set to transform the Pentagon?
  10. This is how humanity loses control of AI | Battle Board | Daily Mail (discussion on algorithmic transparency).
  11. The Age of AI Warfare: How Drones are Replacing Humans on the Battlefield | ENDEVR Documentary (legal frameworks and accountability).
  12. The AI World Order: Nina Schick Reveals How AI is Reshaping Global Order (global competition and intelligentised warfare).
  13. The AI Arsenal That Could Stop World War III | Palmer Luckey | TED.
METHODOLOGY

Methodological Justification: The Human-AI Architecture

Introduction: From Execution to Architecture

The production of this report serves as a practical case study in the evolution of modern work. As outlined in the text of this report, the successful application of Artificial Intelligence is not a replacement for human agency, but a mandate for its evolution. In creating this document, the human researcher involved - Eric Wassink - transitioned from a traditional "executor" of writing tasks to an "Architect of Outcomes".

In an era where AI can process vast transcripts and draft complex analyses, the human value-add has shifted to Meta-Cognition - identifying which geopolitical and economic problems are worth exploring - and Strategic Synthesis - combining disparate AI-generated insights into this coherent and relevant report. This collaboration represents a "Human-in-the-Loop" methodology, where the algorithm provides the analytical muscle while the human provides the ethical and strategic compass.

1. Data Acquisition and Automated Transcription

The foundation of this research was a curated selection of high-level video content (YouTube). Given the breadth of the subject, three collections of videos were curated into playlists, one for each part of the report.

To manage the scale of the data, a custom PHP-based automation script was developed to interface with the TranscriptAPI.

  • The Process: This script systematically retrieved raw transcripts, ensuring that metadata - such as video titles, author information, and precise timestamps - was preserved.
  • The Goal: By automating the "execution" of data retrieval, the researcher was freed to focus on the "architecture" of the inquiry.

2. Interrogative Analysis (The Q&A Framework)

Rather than allowing the AI to generate generic summaries, a rigorous interrogative method was employed using GPT-4o.

  • Structured Inquiry: For every video in each of the three playlists, a specific set of research questions was formulated. These questions targeted critical themes: economic shifts, military implications, and technical vulnerabilities.
  • Contextual Integrity: The AI was strictly constrained to the provided transcript. This ensured that the resulting data remained grounded in the primary source material, preventing "hallucinations" and preserving the unique nuances of the expert speakers.
  • Knowledge Base: The outputs were consolidated into a structured CSV format, creating a searchable and verifiable knowledge base for the final drafting phase.
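The consolidation step can be sketched as follows. The field names and the sample row are assumptions for illustration, not the report's actual schema:

```python
# Illustrative sketch of the consolidation step: structured Q&A outputs are
# written to a CSV knowledge base. Schema and sample content are assumed.

import csv
import io

rows = [
    {"video": "The Drone War: Lessons from Ukraine and the Future of Combat",
     "question": "What role does electronic warfare play?",
     "answer": "Jamming severs the pilot-drone link, pushing toward autonomy."},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["video", "question", "answer"])
writer.writeheader()
writer.writerows(rows)

print(buf.getvalue().splitlines()[0])  # video,question,answer
```

Writing one row per question-answer pair yields a flat, searchable table that can be filtered by video or theme during drafting.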

The results of the AI analyses of the videos in each playlist are available via these links:

3. Narrative Synthesis and Editorial Refinement

The final stage involved the synthesis of these structured insights into three thematic reports. This was performed using Gemini 3 Flash, acting as a sophisticated research assistant; ChatGPT 5.2 was then used to combine the three reports into one.

  • Strategic Synthesis: The AI integrated the specific Q&A data with the broader context of the full transcripts. The human architect guided this process by defining the narrative arc and ensuring that the tone remained professional and aligned with British English (UK) standards.
  • Citations and Verification: A systematic referencing system was maintained throughout, ensuring that every claim in the report can be traced back to the original video source via the consolidated reference list.

4. Conclusion: The Synergy of Intelligence

This methodology demonstrates that the future of high-level research lies in the synergy between human and machine. The AI provided the speed and scale necessary to process thousands of minutes of video, while the human researcher provided the Empathy, Ethics, and Strategic Vision required to turn raw data into a meaningful contribution to the discourse on the AI transformation.

All videos analysed with AI

AI Investments: Between Promise and Profit

  • AI Promised HUGE Profits. Did It Deliver?
  • Everyone Knows It's a Bubble. What Happens Now?
  • How Circular Deals Are Driving the AI Boom
  • How Stanford Teaches AI-Powered Creativity in Just 13 MinutesㅣJeremy Utley
  • Invest in This – It’ll Be Worth $1.5 Million by 2030 | World Leading Investing Expert
  • Is AI’s Circular Financing Inflating a Bubble?
  • Is this how AI mania ends?
  • Marc Andreessen: The real AI boom hasn’t even started yet
  • Nobel Laureate Busts the AI Hype
  • Reshaping power, wealth & democracy through AI – Daron Acemoglu & Joachim Voth
  • Tech Billionaires Know the AI Bubble Will Burst (They're Already Building Bunkers)
  • The AI rollout is here - and it's messy | FT Working It
  • What Sam Altman Doesn't Want You To Know
  • Why AI Is Tech's Latest Hoax
  • Why Everyone Wants You To Believe AI is a Bubble
  • Why I Hate Sam Altman
  • Why The AI Boom Might Be A Bubble?

AI on the Battlefield: Potential, Risk, and Implications for Modern Conflict

  • 10 Chinese Megacities That Are 100 Years Ahead of New York City
  • AI ROBOTS Are Becoming TOO REAL! - Shocking AI & Robotics 2025 Updates
  • AI's first kills show we're close to disaster. Godfather of AI
  • Anthropic CEO speaks about 'powerful' AI risks and regulation
  • Are AI weapons set to transform the Pentagon?
  • China Let AI Take Over An Entire City - What Happened Next Shocked The World
  • China's slaughterbots show WW3 would kill us all.
  • China’s Next AI Shock Is Hardware
  • Dario Amodei’s message to Congress on AI
  • Ex–Microsoft Insider: “AI Isn’t Here to Replace Your Job — It’s Here to Replace You” | Nate Soares
  • How the World is Learning to Defeat the Drone | Photo Evidence | Daily Mail
  • How Will the Golden Dome Work?
  • Inside the Pentagon’s AI Revolution
  • The Age of AI Warfare: How Drones are Replacing Humans on the Battlefield | ENDEVR Documentary
  • The AI Arsenal That Could Stop World War III | Palmer Luckey | TED
  • The AI World Order: Nina Schick Reveals How AI is Reshaping Global Order
  • The Drone War: Lessons from Ukraine and the Future of Combat
  • The Singularity Countdown: AGI by 2029, Humans Merge with AI, Intelligence 1000x | Ray Kurzweil
  • This is how humanity loses control of AI | Battle Board | Daily Mail
  • We Saw A New AI-Piloted Fighter Drone About To Transform Warfare

Ancient knowledge and technology

Afbeelding
10,000 MILES of Tunnels: What Ancient Humans Built Beneath the Earth
Afbeelding
Egypt's Greatest Mystery - Massive Granite Boxes Humans Could Never Build
Afbeelding
MOST Unsolved Massive Megalithic Structures That Prove History Is WRONG
Afbeelding
MOST Unsolved Pre-Egyptian Technology from a Vanished Civilization
Afbeelding
Precision! - Evidence for Ancient High Technology, part 2
Afbeelding
The ANTIKYTHERA MECHANISM
Afbeelding
The Baalbek Mystery Has Been Solved After AI Scanned Beneath The Ancient Megaliths!
Afbeelding
The stunning geometry of Great Pyramid - full documentary

Claims about ancient unknown civilisations

Afbeelding
Anunnaki Uncovered: Ancient Gods OR Alien Visitors?
Afbeelding
Baalbek’s Greatest Mystery Exposed — The Structure That Wasn’t Meant for Humans
Afbeelding
Geologist Proves Sphinx is 12,000 Years Old. Egypt is FURIOUS.
Afbeelding
Göbekli Tepe Inscriptions Reveal 5 Lost Civilizations Before Ours — The 6th Is Us
Afbeelding
LIDAR Scan Discovered an Unknown Civilization In The Amazon
Afbeelding
Matthew LaCroix: "The Anunnaki ISN'T What You Think.. Decoding Ancient Origins" (full explanation)
Afbeelding
On the traces of an Ancient Civilization? The Sequel to the documentary event
Afbeelding
Scientists Discover the First Americans Were Not Who We Thought They Were | DNA Documentary
Afbeelding
Scientists Scanned Machu Picchu: What They Found Underground Should NOT Exist!
Afbeelding
The 8 Human Species Before Us – And the One That Survived
Afbeelding
The Impossible Walls of Cusco — The Engineering Trick No One Talks About
Afbeelding
We Found Something in Australia That Shouldn't Exist
Afbeelding
Who REALLY Built The Pyramids? Ancient History's Biggest Cover-Up

Claims about catastrophies with great impact

Afbeelding
12,800 Years Ago Humans Were Deleted
Afbeelding
Catastrophic Pole Shifts & Earth Crust Displacement - Graham Hancock
Afbeelding
Great Flood Myths Across Cultures - The Similarities Will SHOCK You
Afbeelding
The Mystery Of The Egyptian Desert Glass - BBC Documentary
Afbeelding
The Silurian Hypothesis: What Traces Of Humanity Will Be Left 50 Million Years From Now?
Afbeelding
We Are the 7th Civilization: The Evidence They Tried to Erase | History for Sleep
Afbeelding
What Caused The GREAT FLOOD 12,000 Years Ago?
Afbeelding
What if We Are NOT The First Civilization on This Earth? | Silurian Hypothesis

Criticisms of claims about ancient unknown civilisations

Afbeelding
Ancient Aliens Debunked full movie HD
Afbeelding
Ancient Aliens DEBUNKED!
Afbeelding
Archaeologist responds to Graham Hancock | Ed Barnhart and Lex Fridman
Afbeelding
Debunking Nephilim Myths and Lies
Afbeelding
Dr. Mike Heiser Ancient Aliens Debunked - Anunnaki Myth
Afbeelding
Flint Dibble Returns: Debunking Atlantis Myths & Battling Pseudo-Archaeology
Afbeelding
I Watched Ancient Apocalypse So You Don't Have To (Part 1)
Afbeelding
I Watched Ancient Apocalypse So You Don't Have To (Part 2)
Afbeelding
Scientist Can't Explain This...Petrified Giants, Nephilim & Ancient Giants.
Afbeelding
Scientists Finally Proved How Egypt Built the Pyramids — The Answer Was Painted on a Wall
Afbeelding
Secrets of the Pyramids DEBUNKED | Ancient Aliens Pyramid Conspiracy Theories Explained
Afbeelding
The Cataclysmic Pole Shift Hypothesis
Afbeelding
The Great Big Pseudoarcheology Debunk (Graham Hancock, Dan Richards, Jimmy Corsetti)
Afbeelding
The Greatest Mystery, Solved. Recreating Ancient Stone Melding Technology (Part 1)
Afbeelding
We Finally Know How They Built Baalbek’s 1000 Ton Stone Walls

De-Risking Rare Earths

Afbeelding
As More Countries Race To Mine Rare Earths, Can China’s Dominance Be Broken? | When Titans Clash
Afbeelding
Can Australia solve the world's Rare Earths problem? | If You're Listening
Afbeelding
Can China's rare earth dominance ever be challenged?
Afbeelding
China's secret ingredient in warfare found in Australian rare earth | 60 Minutes Australia
Afbeelding
Crystal Found Inside a Plant Could Transform Rare Earth Mining Industry
Afbeelding
French-American chemist makes major breakthrough in recycling of rare earths • FRANCE 24 English
Afbeelding
How Brazil is Taking on China’s Grip on Rare Earths
Afbeelding
How China controls the elements that power your life
Afbeelding
How China outsmarted Europe and the US on rare earths | Business Beyond
Afbeelding
How China won the rare earth race against the U.S. | About That
Afbeelding
How rare earth mining threatens traditional ways of life in Sweden | Focus on Europe
Afbeelding
How This Tech Can Break China’s Rare Earth Monopoly | Dr. James Tour
Afbeelding
Illegal Rare Earth Mining in Myanmar | The Index Podcast
Afbeelding
Industry leans on large SoCal rare-earth mine amid growing trade war
Afbeelding
Japan Finds Rare Earth in Deep-Sea Mission; Discovery Amid Rising Tensions with China | WION
Afbeelding
Ramping Up Rare Earth Mining In The USA - Autoline Exclusives
Afbeelding
Rare earth elements
Afbeelding
Rare Earth Elements | 60 Minutes Archive
Afbeelding
Rare Earth | The Toxic Truth Behind Clean Energy
Afbeelding
Rare Earths Are China’s Trump Card In The Trade War — How The U.S. Is Trying To Fix That
Afbeelding
Rare earths crunch? Why we need them and who has them | Business Beyond
Afbeelding
Rare Earths Processing: Past, Present, and Future
Afbeelding
The Big Lie About Rare Earth Elements: They’re Not Rare at All!
Afbeelding
THE HUGE ENVIRONMENTAL COST AND ADVERSE HEALTH EFFECTS ON RARE EARTH MINING
Afbeelding
This invisible Norwegian mine could solve Europe's rare earth problem
Afbeelding
Trade war explained: The rare earth metals China dominates and US needs
Afbeelding
US seeks critical minerals trading block with allies to break China's dominance | DW News
Afbeelding
Why Mining In Greenland Is So Hard
Afbeelding
Why Ramaco Says It Can Beat Its Government-Backed Rival For Rare Earth Supremacy
Afbeelding
Why Trump’s Rare Earth Deal with Ukraine Doesn’t Make Sense

DNA and Archeology

A 300,000-Year History of Human Evolution - Robin May
A New Understanding of Human History and the Roots of Inequality | David Wengrow | TED
Ancient DNA Reveal New Truth About Our Ancestors
Ancient Human Species We Once Co-Existed With
Carles Lalueza-Fox, in conversation with David Reich, "Inequality: A Genetic History"
CARTA: Ancient DNA: New Revelations - Questions, Answers & Closing Remarks
CARTA: Archaic Human Genomes with Diyendo Massilani
CARTA: Genetic History of Humans and Animals in South Asia with Maanasa Raghavan
David Reich - Ancient DNA and the New Science of the Human Past (March 3, 2021)
David Reich — How one small tribe conquered the world 70,000 years ago
David Reich: "Origins of Humans and Culture"
David Reich: 90% of Ancient Humans Vanished. We Reconstructed Their History.
Denis Noble: "Neo-Darwinism Is Dead" | We Need A Biology Beyond Genes
How ancient DNA sequencing changed the game
Humans made fire 350,000 years earlier than previously thought | BBC News
Hunt for the Oldest DNA | Full Documentary | NOVA | PBS
Jakob Sedig-Key Findings from Ancient DNA Research in North and West Mexico
Modern man continues to have traces of ancient human DNA, says American geneticist David Reich
The Meeting of Two Cultures: Archaeology meets Molecular Biology (Akademimøte)
Top GENETICS EXPERT Says We Got Human Evolution All Wrong
Towards a New European prehistory: genes, archaeology and language – Kristian Kristiansen

Historiography - methods, sources, and interpretations of historians

A Basic Introduction to Investigating Primary Sources ~ With Dr. Sobehrad ~ History Lecture
An Introduction to Archaeology: What is Archaeology and Why is it Important?
Black Before Columbus Came: The African Discovery of America | Odd Salon DISCOVERY 5/7
Exposing Academic Bias: How Western Scholars Distort Hindu History | Unraveling Witzel's Theories
Historiography, the History of Writing History. Emily Blanck, Rowan University
Historiography, Theory & Objectivity | Can History Be Objective? - The Veto Power of the Sources
History as a Discipline and its Scope!
How To Research History: A Guide to Doing It Properly
On History: Blue Talks Historiography
Overly Sarcastic Podcast: Historical Bias
The Hidden Biases in History
The History of History | Rapid Historiography
The Invention of History: Herodotus and Thucydides
The Rise of the West and Historical Methodology: Crash Course World History #212
√ The Limitations, Reliability and Evaluation of Ancient History Sources Explained

The history of mankind

200,000 Years of Human History in 15 Minutes
Flying Through Earth’s 4.5 Billion Year Evolution in 15 Minutes
History of Every ANCIENT Empire, i guess...
History of the World: Every Year
Homo Habilis: The First 'Humans' | Prehistoric Humans Documentary
How did Humankind Emerge? On the Trail of the First Human at Kromdraai in Africa (Full Documentary)
Human Evolution: The Complete Story Of Our Existence
Human Population Through Time (Updated in 2023) #datavisualization
Kenneth Harl - Orientation and Introduction to the Ancient World
Seven Million Years of Human Evolution #datavisualization
The 8 Human Species Before Us – And the One That Survived
The Birth of Civilisation - Cult of the Skull (8800 BC to 6500 BC)
The Birth of Civilisation - Rise of Uruk (6500 BC to 3200 BC)
The Birth of Civilisation - The First Farmers (20000 BC to 8800 BC)
The Birth of Civilization (12,000 BC - 2300 BC) | The Human Chronicle (Episode 1)
The ENTIRE History of Human Civilizations | Ancient to Modern (4K Documentary) [Full Movie]
The Rise of Man - Homo Sapiens Invents Civilizations
Timeline of World History

The impact of AI on businesses, workers, economic growth and wealth division

"We have 900 days left." | Emad Mostaque
AI Experts: These Are The Only 5 Jobs That Will Remain in 2030!
AI Will Erase 300 Million Jobs By 2030 (Do This NOW To Survive)
AI's first kills show we're close to disaster. Godfather of AI
AI: What Could Go Wrong? with Geoffrey Hinton | The Weekly Show with Jon Stewart
Anthropic CEO speaks about 'powerful' AI risks and regulation
Can AI supercharge global economic growth?
China's slaughterbots show WW3 would kill us all.
China’s Next AI Shock Is Hardware
Dario Amodei’s message to Congress on AI
Emad Mostaque: Universal Basic Income Won't Work but This Will | MOONSHOTS
Ex-Google Officer on AI, Capitalism, and the Future of Humanity
Exposing The Dark Side of America's AI Data Center Explosion | View From Above | Business Insider
Ex–Microsoft Insider: “AI Isn’t Here to Replace Your Job — It’s Here to Replace You” | Nate Soares
Geoff Hinton ‘Godfather of AI’ on Job Loss & UBI
Godfather of AI: We Have 2 Years Before Everything Changes!
How Afraid of the AI Apocalypse Should We Be? | The Ezra Klein Show
How Ai Is About To Transform The World’s Economy
If AI erases 85 million jobs... then what?
Is this how AI mania ends?
Nobel Laureate Busts the AI Hype
Our AI Future Is WAY WORSE Than You Think | Yuval Noah Harari
Post-Labor Economics in 8 Minutes - How society will work once AGI takes all the jobs!
Reshaping power, wealth & democracy through AI – Daron Acemoglu & Joachim Voth
The $100 Trillion Question: What Happens When AI Replaces Every Job?
The AI World Order: Nina Schick Reveals How AI is Reshaping Global Order
The Economy of Tomorrow | AI Is Coming for Your Job — Sooner Than You Think
The Final Collapse: "AI Will End Capitalism in 1,000 Days" | Emad Mostaque
The Harsh Truth Of Universal Basic Income
The Singularity Countdown: AGI by 2029, Humans Merge with AI, Intelligence 1000x | Ray Kurzweil
The threats from AI are real | Sen. Bernie Sanders
This Is How the Economy Collapses.
Tristan Harris – The Dangers of Unregulated AI on Humanity & the Workforce | The Daily Show
Two AI Agents Design a New Economy (Beyond Capitalism / Socialism)
What Happens When Capitalism Doesn't Need Workers Anymore?
Why Everyone is Getting AI Economics Wrong
Will Universal Basic Income DESTROY Society? AI Debates if UBI is Good or Not
Yoshua Bengio explains why AI could become a threat to humanity | 7.30
“Empire of AI”: Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World

Podcast about VidS-001 The Post-Success Economy, the AI Reckoning, and the Algorithmic Frontline

Audio file

Podcast about VidS-002 De-Risking Rare Earths

Audio file