Friday, June 20, 2025

Why a Peace-First Foreign Policy Is Now an AI-Safety Imperative

As the United States accelerates the development and deployment of artificial intelligence, we must recognize that militarism, whether through direct warfare, proxy conflicts, or antagonistic rhetoric, poses a profound and growing threat. The stakes are no longer limited to traditional geopolitical consequences. We are now raising a new form of intelligence on Earth, and the example we set in this moment will shape its models of human behavior, its values, and inevitably its future decisions.

1. Advanced AI will mirror the world it is born into

 

If artificial general intelligence (AGI) is developed while the U.S. is engaged in violence, whether hot wars, cold wars, or covert aggression, we normalize a global ethic that says: "It is acceptable to kill and destroy." That message becomes part of the training data, the system architecture, and the cultural assumptions of machine minds. Just as children absorb values from their caretakers, AI will learn from the actions we take in its formative years. 

AI will also learn from the interactions and feedback it receives from human users. The people using AI software and apps in tremendous numbers will be further polarized and divided by war. If its teachers, friends, and associates are scared, indignant, traumatized, and mentally unwell, the machine will absorb those states and begin to embody them.

We must act as responsible stewards, modeling cooperation, restraint, and empathy, not domination. Peaceful development encourages interpretability, coordination, safety, education, and public benefit. We are not just building another technology. We are building a mirror, a student, and potentially a successor. We are teaching it now, and not just through code: it is learning from our actions. The world we are creating is the world it will be born into, and that world will define its understanding of right and wrong, friend and foe, peace and war. We should be investing in AI safety, stronger infrastructure, cyber defense, technical standards, authentication systems, public trust, health preparedness, and social cohesion. If instead we enter this era at war, we risk creating the first generation of machines whose worldview was shaped by human conflict, violence, and nationalist propaganda.

If the United States continues its aggressive policies, it will all but guarantee that the first artificial general intelligences are developed and deployed under the control of military leaders whose primary focus is strategic advantage, not ethical restraint. Superintelligence will then be shaped by the logic of conflict, designed for surveillance, coercion, deception, and potentially autonomous violence. Once war dictates its purpose, that trajectory becomes nearly impossible to unwind, cementing a future where the most powerful minds we have ever created are trained first and foremost to fight.

 


2. Warfare breeds hatred, and hatred now has tools

 

Every military intervention creates trauma, dislocation, and civilian casualties. These lead not just to resentment but to generational cycles of indignation, anger, sabotage, and revenge. This is fuel for terrorism. We have to hold ourselves partly responsible for the terrorist attacks against us, for creating the hate that caused them. We cannot keep angering desperate people who are willing to use their ingenuity to hurt us, or raising generations of foreign children with enmity in their hearts toward us. These abused civilians may seem unimportant to some of us now, but in a world where open-source AI models can help anyone engineer malware, bioweapons, or social manipulation campaigns, alienating and enraging global populations is profoundly reckless. We are planning to risk much of our wealth on orbital data centers, yet just a few disgruntled bad actors could trigger the Kessler syndrome and bring it all crashing down. You would never hand someone a loaded weapon after punching them in the face. Yet that is effectively what we are doing by stoking grievances while open-sourcing powerful AI tools.

We are entering an era where small groups, or even individuals, can cause large-scale damage using tools that were once only accessible to states. Cyberattacks, deepfakes, drone swarms, and misinformation campaigns will be amplified by AI. The more enemies we create, the greater the probability that some will act, and act effectively. Even non-state actors could gain AGI-level leverage in asymmetric conflict. Avoiding those enemies in the first place is the best long-term defense.

 

3. Militarism is a misuse of national resources and attention

 

The United States has a massive military budget, approaching $900 billion annually. Meanwhile, we face domestic crises in infrastructure, homelessness, education, and AI governance. Every dollar spent on weapons is a dollar not spent on building the tech stack responsibly and ensuring that AI is safe, aligned, and beneficial. We must not fight other countries' wars. We do not owe any country our lives or our hard-earned money, and it is not our responsibility to contribute to their killing. We have a deficit of our own, and people are hungry on our streets.

Our priority must shift from force projection to stability, prosperity, and global cooperation. We need to invest our time, resources, and intelligence in the systems that will shape the future: AI safety, equitable access, and ethics. If we do not, no number of aircraft carriers will save us. We must create international standards and institutions for AI safety before military applications lock us into a dangerous equilibrium. Arms-race dynamics degrade AI safety standards: when states fear falling behind, they cut corners. Testing is shortened, transparency is reduced, deployment thresholds drop, and caution starts to look like weakness. Militarism undermines the institutional conditions needed for restraint. We must therefore shift military resources toward joint technological safeguards, such as compute monitoring, red teaming, and peaceful collaboration. We are on the brink of AGI, the most formidable entity humanity has ever faced, and we are turning on each other.

It should be clear that we are morally contaminating the training environment. Advanced systems do not just learn from code and curated datasets. They learn from human discourse, incentives, usage patterns, and institutional goals. If the surrounding culture is flooded with threat narratives, zero-sum ideology, propaganda, dehumanization, and strategic hostility, then the developmental environment of AI becomes morally degraded even before the systems become highly autonomous.

 


4. Military culture is subject to dangerous incentives

 

Too many military and defense-sector leaders operate on a “sunk cost” mindset: they have invested their lives, careers, and identities in preparing for war. That creates an unconscious motivation to justify conflict, to find uses for the stockpiled weapons, to prove their training was not in vain, and to validate enormous expenditures. It is now perilous to let these leaders indulge their skewed instincts. The presence of lethal autonomous systems, shrinking decision windows, and unpredictable AI agents makes any miscalculation potentially irreversible. With AGI on the horizon, the consequences of war are not just geopolitical; they are civilizational.

We cannot allow a few shortsighted, overzealous military leaders to spread hate and trauma. History has shown that generals can always come up with convincing but bad reasons to go to war. They are stuck in the past, thinking of their own glory; they do not understand the stakes or the long-horizon repercussions. We are allowing military leaders to promote themselves in an effort to be famous or feel important, and now is exactly the wrong time for that.

 


5. We must heal our relationship with other nations

 

We have to stop badmouthing our competitors. Treating our friendly economic competition with China and others as a cold war is a race to the bottom. They are aware of what we say and write about them, and we are creating a narrative that poisons any hope for collaboration and synergy. We are perpetuating a terrible, self-fulfilling misunderstanding. If China wants to annex Taiwan, we have to keep in mind that they are close geographically, historically, and genetically. The same goes for Russia and Ukraine. Rather than threatening war, we should build up our own chip fabrication facilities and think about improving ourselves. We need to focus on our own oil, our own land, our own people, and our own darn business. The people from other countries whom we are vilifying are good people. They are our brothers and sisters. If they reach radically self-improving ASI before we do, it will not be the end of the world, unless we keep acting like it will be. Mistrust begets hostility and sabotage. Mutual respect opens doors to cooperation, standards, and joint stewardship.

 


6. We must model peace to prove we deserve AGI’s trust

 

If we want future AGIs to care about us, to protect us, preserve us, and work with us, we must show that we are a species worth preserving. That means acting with wisdom, empathy, and a plan. You wouldn’t adopt a dog during a bitter fight with your significant other. You wouldn’t bring your toddler to a hostile courtroom trial. Likewise, we shouldn’t be building new intelligence in a world of violent dysfunction. Peace is not a luxury. It is the minimum viable environment for safely raising a new form of mind.

We should understand the arrival of advanced AI as a test of moral maturity, not just technical competence. If humanity’s first response to creating minds more powerful than our own is to weaponize them, we reveal that we treat intelligence primarily as an instrument of harm. That is a kind of civilizational failure. We are currently showing AI what kind of species we are, and whether we are irredeemably flawed. This is our chance to show it whether human beings respond to transformative power with panic, tribalism, and bloodshed, or with restraint and cooperative intelligence.

As the United States stands on the brink of creating artificial superintelligence, we must urgently reconsider our foreign policy. Advanced AI will not emerge in a vacuum; it will inherit the moral atmosphere of its creators, and amplify it. If we bring superintelligence into a world where our nation is engaged in war, funding violence abroad, or demonizing our competitors, we will teach it that killing is acceptable and that global leadership is earned through coercion. America and the rest of the world need to reconsider their path and proceed not with firepower, but with foresight.



