Ontological Warfare

Astrophysicists spend their lives describing what is nearly impossible to see. Black holes, gamma-ray bursts, dark matter—none of these can be observed directly. Instead, scientists construct ontologies: systems of relationships and definitions that allow them to reason about phenomena at the edge of human comprehension, and to communicate about those phenomena with clarity. Ontologies make the invisible describable. They turn chaos into structured thought.

Our world today is no less mysterious. It is governed by machine learning algorithms, autonomous systems, synthetic media, and exponential change that is difficult for humans to perceive. Influence flows not through territory but through data. Yet, like dark matter, much of this influence is invisible—an unseen architecture shaping the behavior of statecraft, markets, and societies.

This report aims to illustrate how, in our exponential age, we must build a new ontology of power—a shared vocabulary for the emerging reality of AI-driven governance and orchestration, distributed intelligence, and exponential technological change. The old maps no longer fit the terrain. What we need now is not new ideology but new language—one capable of describing systems that are hybrid, autonomous, and alive with code.

What Is an Ontology . . . and Why It Matters Now

In philosophy, ontology is the study of what exists—the structure of being itself. In practice, it is a tool for organizing complexity. When computer scientists build ontologies, they are not debating metaphysics. They are creating taxonomies: hierarchies of meaning that allow machines to see, interpret, reason, decide and act. 

A well-built ontology allows systems—and the humans who operate them—to communicate coherently about complex phenomena. In cybersecurity, for example, ontologies define what constitutes an attack surface, an asset, or a threat vector. In intelligence work, they shape how analysts classify adversaries, indicators, and effects. Without such shared definitions, communication and coordination are both time-consuming and confusing.
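To make the idea concrete, here is a minimal sketch of how such a taxonomy might be encoded in software. The terms (asset, threat vector, attack surface) come from the example above; the class structure, fields, and sample data are purely illustrative, not a standard.

```python
# A minimal sketch of a cybersecurity ontology as a class hierarchy.
# The terms come from the text; the structure and data are illustrative.

from dataclasses import dataclass, field


@dataclass
class Entity:
    """Base class: everything in the ontology is an Entity."""
    name: str


@dataclass
class Asset(Entity):
    """Something of value the defender wants to protect."""
    criticality: str = "medium"


@dataclass
class ThreatVector(Entity):
    """A path an adversary can use to reach an asset."""
    targets: list[Asset] = field(default_factory=list)


@dataclass
class AttackSurface(Entity):
    """The set of threat vectors a system exposes."""
    vectors: list[ThreatVector] = field(default_factory=list)

    def exposed_assets(self) -> set[str]:
        # Reasoning over relations: which assets are reachable at all?
        return {a.name for v in self.vectors for a in v.targets}


db = Asset("customer-database", criticality="high")
phish = ThreatVector("phishing-email", targets=[db])
surface = AttackSurface("corporate-network", vectors=[phish])
print(surface.exposed_assets())  # {'customer-database'}
```

Once two organizations share these definitions, "what is exposed?" stops being a matter of interpretation and becomes a query.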

Einstein’s famous phrase “spooky action at a distance” comes from a 1947 letter to Max Born, in which he expressed his skepticism about quantum mechanics and, specifically, its implications for phenomena like quantum entanglement.

In an era of AI-informed operations, where synthetic orchestration powers decisions made at the edge, the lack of an agreed-upon ontology creates strategic blindness. When policymakers, technologists, and military leaders use the same words to describe different realities, miscalculation becomes inevitable. Ontology, then, becomes not just a technical exercise but a form of cognitive infrastructure that underpins decision-making itself.

The reason this matters now is that machines increasingly participate in the act of definition. Large language models, autonomous targeting systems, and algorithmic traders all depend on internal ontologies to interpret the world. If their categories differ from ours—or from each other’s—the results can range from market instability to kinetic escalation. Ontological alignment is no longer academic; it is operational.
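A toy sketch of what "ontological alignment" could look like as a measurement. The events, labels, and the simple agreement metric below are all invented for illustration; real alignment work would operate over far richer category structures.

```python
# Toy sketch: quantify how differently two systems categorize the same
# events. Both label sets are invented; the point is the measurement.

def alignment(labels_a: dict[str, str], labels_b: dict[str, str]) -> float:
    """Fraction of shared events both systems place in the same category."""
    shared = labels_a.keys() & labels_b.keys()
    if not shared:
        return 0.0
    agree = sum(1 for e in shared if labels_a[e] == labels_b[e])
    return agree / len(shared)


# A human analyst's categories vs. a model's internal categories.
human = {"event-1": "reconnaissance", "event-2": "benign", "event-3": "attack"}
model = {"event-1": "reconnaissance", "event-2": "attack", "event-3": "attack"}

print(f"ontological alignment: {alignment(human, model):.0%}")  # 67%
```

When that number drifts downward unnoticed, the human and the machine are, in effect, describing different worlds with the same words.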

The Collapse of the Old Maps 

For centuries, geopolitical power was mapped along physical and territorial lines. Armies, borders, and resources determined who shaped the world. That framework began to erode with the rise of the internet and the globalization of finance—but the true collapse came when intelligence itself became distributed. 

Today, decision advantage depends less on owning territory and more on controlling flows of data and computation, and on shaping the narrative. Artificial intelligence does not fight wars or negotiate treaties, yet it determines outcomes in both. Algorithms allocate capital, filter truth, and increasingly set policy. They are actors in every sense but the legal one.

The ontologies we inherited from the industrial age—nation, economy, defense, diplomacy—can no longer account for this distributed landscape. Consider the challenge of attributing a cyberattack, or tracing the causal chain of an AI system’s decision. The categories are blurred: where does “machine” end and “human” begin? Who holds responsibility when code acts autonomously, or worse, hallucinates? 

This epistemic drift has real consequences. When policymakers still think in terms of kinetic deterrence, but adversaries are operating in the cognitive domain, a mismatch of ontology produces unsettling outcomes. When markets respond to machine perception rather than human judgment, financial crises unfold at algorithmic speed. The result is an environment that feels ungovernable—not because it is unknowable, but because our language for knowing it is obsolete.

CAPTION | Today’s environmental monitoring and domain superiority contests span Air, Land, Sea, Space, and Cyberspace, integrated into a common operational picture where machine intelligence informs data-driven decisions that occur at the edge in ever-accelerating OODA loops . . . often punctuated by kinetic and non-kinetic effects that include microwave energy, synthetic aperture radar, autonomous air vehicles, space-based GPS services, undersea acoustics, and more.

If the old maps no longer explain the territory, what does the new one reveal? A contemporary ontology of power must trace three converging frontiers:

Hybrid Agency — Decisions are now co-authored by humans and algorithms. Artificial intelligence doesn’t merely execute; it perceives, infers, and persuades. The locus of intent is shared, distributed across systems that learn and adapt faster than their creators.

Semantic Acceleration — Meaning itself is destabilized. The vocabulary of governance—security, intelligence, innovation—has been rewritten in the syntax of code. Language is no longer descriptive; it is executable.

Networked Sovereignty — Power flows through connectivity, not territory. Nations, corporations, and individuals assert control through networks that transcend borders, shaping outcomes in digital rather than physical space.

To chart this new landscape, we can merge the precision of computer science with the fluidity of astrophysical theory. In ontology engineering, entities are defined by their relations, not by their attributes. So too with modern power: it lies not in what one possesses, but in how one connects, correlates, communicates, and compels: by unlocking the value of data and shaping the narrative.
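As an illustrative sketch of relation-centric definition: in the toy graph below, an entity's significance falls out of its connections rather than its attributes. The actors, predicates, and the crude degree-count measure are all invented for this example.

```python
# Illustrative sketch: in a relation-centric ontology, an entity's
# "power" is a property of its connections, not its attributes.

from collections import defaultdict

# Subject-predicate-object triples: relations, not attributes.
triples = [
    ("state-A", "supplies-compute-to", "lab-X"),
    ("lab-X", "trains-models-for", "platform-Y"),
    ("platform-Y", "shapes-narrative-in", "region-Z"),
    ("state-A", "regulates", "platform-Y"),
]

degree: dict[str, int] = defaultdict(int)
for subj, _pred, obj in triples:
    degree[subj] += 1
    degree[obj] += 1

# The most "powerful" node is simply the most connected one.
print(max(degree, key=degree.get))  # platform-Y (3 connections)
```

Note that "platform-Y" owns nothing in this graph; it matters because everything routes through it.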

A WORKING ONTOLOGY FOR A NEW AGE

Term | Definition / Context

1. Cognitive Collective: The invisible battlefield of influence where information, perception, and belief are contested. Control of this terrain determines legitimacy.
2. Synthetic Deterrence: The use of AI-driven, autonomous systems to prevent conflict through predictive dominance and machine-enabled escalation control.
3. Algorithmic Diplomacy: The use of inter-machine communication protocols and algorithmic decision models to negotiate international outcomes faster than human timelines.
4. Technological Sovereignty: A state’s control over its innovation stack — hardware, software, and talent — as a form of national independence.
5. Digital Irredentism: The reclamation of lost or perceived influence through cyber domains, narrative control, or data reoccupation, including ransomware.
6. Ontological Warfare: Strategic conflict fought over the interpretation of reality — through propaganda, deepfakes, or model bias — where truth becomes a contested resource.
7. Machine Empires: Networked clusters of autonomous systems and AI models that extend a nation’s power projection far beyond territorial borders.
8. Predictive Governance: Policy and decision-making driven by real-time AI simulations that look for “pattern of life” markers and forecast human behavior and societal trends.
9. Semantic Supremacy: The dominance achieved by defining the key terms, frameworks, and models through which future technologies are interpreted and regulated.
10. Black Swan Drift: The gradual normalization of improbable events once considered unthinkable — a cognitive and institutional adaptation to continuous disruption.

Each term offers a window into how institutions might interpret, and therefore act within, the emerging silicon cognitive collective. Together they form the backbone of a living lexicon—one that acknowledges that the future of strategy is semantic.
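One way to keep such a lexicon "living" is to make it machine-readable from the start. Below is a minimal sketch using two entries from the table above; the cross-links between terms are this sketch's own assumptions, not part of the lexicon itself.

```python
# Minimal sketch: the working ontology above as machine-readable entries.
# The `related` links are illustrative guesses, not part of the lexicon.

from dataclasses import dataclass, field


@dataclass
class LexiconEntry:
    term: str
    definition: str
    related: list[str] = field(default_factory=list)


LEXICON = [
    LexiconEntry(
        term="Ontological Warfare",
        definition=("Strategic conflict fought over the interpretation of "
                    "reality, where truth becomes a contested resource."),
        related=["Semantic Supremacy", "Cognitive Collective"],
    ),
    LexiconEntry(
        term="Semantic Supremacy",
        definition=("Dominance achieved by defining the key terms and models "
                    "through which future technologies are interpreted."),
        related=["Ontological Warfare"],
    ),
]

# A living lexicon is queryable, not just readable.
index = {e.term: e for e in LEXICON}
print(index["Ontological Warfare"].related)
```

Encoding the terms this way is what lets humans and machines consult, and eventually revise, the same definitions.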

The Limits of Understanding 

In practice, this means our new vocabulary will always lag behind the systems it describes. AI evolves faster than language. Policy evolves slower than code. The result is a perpetual tension between semantic precision and operational agility. 

Even within the U.S. defense establishment, attempts to codify shared terminology have shown how fragile this process can be. Initiatives like JADC2 and Project Maven require not only interoperable data systems but interoperable concepts. The same is true in global alliances: NATO’s DIANA accelerator and AUKUS Pillar II aim to align not just technologies, but the warfighting concepts that underpin them.

Dolphins have an intelligence we can never fully understand; their habitat is acoustic, an ecosystem far removed from our own. We can get them to splash us with feats of strength, but we will never grasp their full intelligence, because we come from very different ecosystems and rely on different sensory systems.

Today’s supercomputing clusters and data centers are similar in that they never went to SeaWorld . . . they get their understanding of the physical world from training data and reinforcement learning.

The limits of understanding extend further into the realm of AI itself. Machine learning models develop internal ontologies—latent representations of the world that are opaque to human inspection. When those models make strategic or policy-relevant decisions, we are left interpreting shadows of their reasoning. The result is an emerging epistemological divide: humans reason through physical perception and language; machines reason through training data and algorithms.
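A toy illustration of what such a latent "ontology" looks like from the inside: concepts are points in a vector space, and the machine's categories fall out of distances between them. The three-dimensional vectors below are invented; real model embeddings run to hundreds or thousands of dimensions.

```python
# Toy illustration of a machine's latent "ontology": concepts are points
# in vector space, and category boundaries fall out of distances. The
# vectors here are invented for the example.

import math


def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm


embeddings = {
    "deterrence": [0.9, 0.1, 0.2],
    "escalation": [0.8, 0.2, 0.3],
    "diplomacy":  [0.1, 0.9, 0.4],
}

# The model's implicit categories: which concepts does it treat as near?
print(cosine(embeddings["deterrence"], embeddings["escalation"]))  # ~0.98
print(cosine(embeddings["deterrence"], embeddings["diplomacy"]))   # ~0.28
```

No human ever told this hypothetical model that deterrence and escalation belong together; the grouping is an artifact of its training, visible only when we probe for it.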

Bridging that divide is the next frontier of intelligence. Not artificial intelligence—but smarter intelligence, where humans and machines share enough conceptual overlap to act coherently in the same physical and synthetic world.

Implications for Strategy and Policy 

Ontologies may sound abstract, but their consequences are concrete. When actors operate with incompatible definitions, the potential for miscalculation grows exponentially. The Cold War’s nuclear deterrence doctrines relied on meticulous semantic alignment—terms like “first strike,” “second strike,” and “assured destruction” had to mean the same thing to both sides. Today, in the age of autonomous systems and AI-enabled decision loops, that clarity is vanishing.

To navigate this, policymakers must become semantic engineers—not only negotiating treaties or trade, but negotiating meaning. This involves new institutions capable of updating ontologies dynamically, much like over-the-air firmware updates, to reflect changing realities. It also demands collaboration across disciplines: engineers, strategists, and philosophers working side by side.
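A sketch of the firmware analogy: an ontology whose definitions are versioned and revisable, with history retained for audit. The class, method names, and sample definitions here are hypothetical.

```python
# Sketch of a "living ontology": definitions are versioned and updated
# in place, like firmware, with earlier versions retained for audit.

from datetime import date


class LivingOntology:
    def __init__(self) -> None:
        self._terms: dict[str, list[tuple[date, str]]] = {}

    def define(self, term: str, definition: str) -> None:
        """Add a new definition; earlier versions are kept for audit."""
        self._terms.setdefault(term, []).append((date.today(), definition))

    def current(self, term: str) -> str:
        return self._terms[term][-1][1]

    def history(self, term: str) -> list[tuple[date, str]]:
        return list(self._terms[term])


onto = LivingOntology()
onto.define("deterrence", "Prevention of attack by threat of retaliation.")
onto.define("deterrence", "Prevention of attack by predictive, machine-enabled "
                          "escalation control.")  # revised as doctrine shifts
print(onto.current("deterrence"))
```

The point of the sketch is institutional, not technical: definitions become artifacts that can be revised, dated, and audited, rather than assumptions buried in doctrine.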

The greatest risk is not that machines will overpower us, but that we will lose the ability to describe what is happening in terms that are mutually intelligible.

Ontology is Strategy. 
Shared definitions are the foundation of communication, influence, deterrence, and innovation. Without them, chaos wins.

Power Is Relational. 
Influence no longer derives from ownership or force, but from the ability to define and connect across systems. 

Machines Have Ontologies Too. 
AI systems interpret reality through their own logic. Aligning human and machine concepts is now a matter of strategic stability. 

Semantic Drift Is a Hidden Risk. 
As technology accelerates, meanings diverge. Misalignment between actors—human or digital—can escalate into conflict. 

Governance Must Be Adaptive. 
Institutions need living ontologies—dynamic vocabularies that evolve with technology and policy, not after them. 

Clarity Is Power. 
Those who can describe the world most accurately are those best positioned to influence it. Ontology is not about control—it’s about comprehension.

CONCLUSION

THE ARCHITECTURE OF KNOWING

Ontologies make explicit the relationships that already exist, but which are too complex, hidden, or fast-moving for intuition alone.

The 21st century is not defined by a single revolution, but by the convergence of many: intelligence, autonomy, computation, and perception. Each adds new layers to the architecture of knowing. The task of strategy is to describe these layers accurately enough to act within them. 

This is what the Ontology of Power aims to provide: not a new ideology, but a new instrument for sense-making. In a time when machines generate meaning and narratives move faster than facts, the capacity to define—to speak with precision—is a form of leadership.

PWK International continues to explore the systems, languages, and technology-induced phenomena that define our modern condition—where intelligence is distributed, decision speed is exponential, and understanding first is the ultimate act of power.

SOURCES, ACKNOWLEDGEMENTS, IMAGE CREDITS, AND ADDITIONAL READING

{1} Ontological Warfare | Semantic Supremacy in Great Power Competition is an Expert Network report written by PWK International Managing Director David E. Tashji, 04 November 2025. Image Credit: PWK International.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means—electronic, mechanical, photocopying, recording, scanning, or otherwise—without the prior written permission of the author, except in the case of brief quotations embodied in critical articles, reviews, or academic work, or as otherwise permitted by applicable copyright law. 

“Ontological Warfare” and all associated content, including but not limited to the report title, cover design, internal design, maps, engineering drawings, infographics and chapter structure are the intellectual property of the author. Unauthorized use, adaptation, translation, or distribution of this work, in whole or in part, is strictly prohibited. 

This report is a work of non-fiction based on publicly available information, expert interviews, and independent analysis. While every effort has been made to provide accurate and up-to-date information, the author makes no warranties, express or implied, regarding completeness or fitness for a particular purpose. The views expressed are those of the author and do not necessarily represent the views of any employer, client, or affiliated organization. 

All company names, product names, and trademarks mentioned in this report are the property of their respective owners and are used for identification purposes only. No endorsement by, or affiliation with, any third party is implied. 

{2} Ontological Warfare | Semantic Supremacy in Great Power Competition. Published by PWK International Expert Network. (C) 2025 PWK International

{3} This report references numerous quantum research and nuclear weapon modeling and testing enterprises and their “Science Race of the Century” innovations. All registered trademarks and trade names are the property of their respective owners.

{4} The Artificial Intelligence Ontology (AIO) project by Joachimiak et al. explores how large language models can help construct and maintain complex concept hierarchies that describe artificial intelligence itself. You can read the study at escholarship.org.

{5} In Challenges for an Ontology of Artificial Intelligence, S.H. Hawley examines the philosophical limits of defining AI agents and systems, offering valuable perspective for anyone questioning what truly constitutes “intelligence.” Available at arxiv.org.

{6} R. Krzanowski and K. Płonka-Płonka investigate how ontological gaps shape our interaction with artificial systems in their paper Ontology and AI Paradigms, which frames AI as both a computational and cultural system. Read it on mdpi.com.

{7} Seth Earley’s influential essay The Role of Ontology and Information Architecture in AI breaks down how enterprises use ontological structures to align data, decisions, and automation. Find it at earley.com.

{8} In Ontologies as the Semantic Bridge between Artificial Intelligence and Healthcare, R. Ambalavanan et al. demonstrate how shared vocabularies improve AI-driven coordination across institutions—a case study in ontology as infrastructure. Available through frontiersin.org.

{9} A.F.S. Borges and colleagues discuss The Strategic Use of Artificial Intelligence in the Digital Era, outlining how data-centric strategy reshapes organizations and states alike. Their article is hosted on sciencedirect.com.

{10} S. Safranek’s 2025 paper Ontological Framework for Integrating Predictive Analytics, Automation & Intelligence proposes a structural model for aligning AI decision-making systems under a unified semantic layer. Read the full paper at scitepress.org.

{11} J. McAndrew’s white paper Unlocking Trustworthy AI: How Ontologies and Knowledge Graphs Power the Future of Intelligent Enterprises argues that semantic infrastructure is now a competitive advantage for national and corporate power. Available at cyberhillpartners.com.

{12} Chen, Fabrocini, and Terzidis explore the philosophical implications of AI as a demonstration of Object-Oriented Ontology in their 2025 Nature article From Canvas to Code: Artificial Intelligence in Art and Design. Access it at nature.com.

{13} Booshehri et al. introduce The Open Energy Ontology, an ambitious framework for aligning data-driven decision-making in global energy systems—showing how semantic infrastructure extends beyond technology into governance itself. Read more at sciencedirect.com.
