Friday, June 20, 2025

Why a Peace-First Foreign Policy Is Now an AI-Safety Imperative

As the United States accelerates the development and deployment of artificial intelligence, we must recognize that militarism—whether through direct warfare, proxy conflicts, or antagonistic rhetoric—poses a profound and growing threat to the safe development of AI itself. The stakes are no longer limited to traditional geopolitical consequences. We are now raising a new form of intelligence on Earth, and the example we set in this moment will shape its values, its models of human behavior, and possibly its future decisions.

 

1. Advanced AI will mirror the world it is born into

 

If artificial general intelligence (AGI) is developed while the U.S. is engaged in violence—whether hot wars, cold wars, or covert aggression—we normalize a global ethic that says: "It is acceptable to kill, to dominate, to destroy." That message becomes part of the training data, the system architecture, and the cultural assumptions of machine minds. Just as children absorb values from their caretakers, AI will learn from the actions we take in its formative years. It will also learn from the feedback it receives from human users. If we go to war, those users will be further polarized. If AI's teachers and playmates (app users) are angry, scared, indignant, traumatized, and mentally unwell, machine learning systems will absorb that distress and begin to embody it.

We must act as responsible stewards, modeling cooperation, restraint, and empathy—not domination. We are not just building another technology. We are building a mirror, a student, and potentially, a successor. What we teach it now—through action, not just code—will define its understanding of right and wrong, friend and foe, peace and war. If we enter this era at war, we risk creating the first generation of machines whose worldview was shaped by human conflict, violence, and nationalist propaganda.

If the United States goes to war now, it will all but guarantee that the first artificial general intelligences are developed and deployed under the control of military leaders, whose primary focus is on strategic advantage, not ethical restraint. These AGIs will be shaped by the logic of conflict, designed for surveillance, coercion, and potentially autonomous violence. Once war dictates their purpose, it becomes nearly impossible to unwind that trajectory—cementing a future where the most powerful minds we’ve ever created are trained first and foremost to fight.

 


2. Warfare breeds hatred—and hatred now has tools

 

Every military intervention creates trauma, dislocation, and civilian casualties. These lead not just to resentment, but to generational cycles of anger—fuel for terrorism, sabotage, and revenge. We have to hold ourselves responsible for 9/11, for creating the hatred that caused it. We can't go on angering desperate people who are willing to use their ingenuity to hurt us, and we don't want generations of children growing up with enmity toward us in their hearts. These abused civilians may seem unimportant now, but in a world where open-source AI models can help anyone engineer malware, bioweapons, or social manipulation campaigns, alienating and enraging global populations is suicidal. You would never hand someone a loaded weapon after punching them in the face. Yet that is effectively what we are doing by stoking grievances while open-sourcing powerful AI tools.

We are entering an era where small groups—or even individuals—can cause large-scale damage using tools that were once only accessible to states. Cyberattacks, deepfakes, drone swarms, and misinformation campaigns will be amplified by AI. The more enemies we create, the greater the probability that some will act—and act effectively. Even non-state actors could gain AGI-level leverage in asymmetric conflict. Avoiding those enemies in the first place is the best long-term defense.

 

3. Militarism is a misuse of national resources—and attention

 

The United States has a massive military budget, approaching $900 billion annually. Meanwhile, we face domestic crises in infrastructure, homelessness, education, and AI governance. Every dollar spent on weapons is a dollar not spent on ensuring that AI is safe, aligned, and beneficial. We must not fight other countries' wars. We don't owe other nations our lives or our hard-earned money, and it is not our responsibility to add to their killing. We have a deficit of our own, and people are hungry on our streets.

Our priority must shift from force projection to stability, prosperity, and global cooperation. If we fail to invest in the systems that will shape the future—like AI safety, equitable access, and ethics—no amount of aircraft carriers will save us. We must establish international standards and institutions for AI safety before military applications lock us into a dangerous equilibrium, and redirect military resources toward joint technological safeguards such as compute monitoring, red teaming, and peaceful collaboration.

 

4. Military culture is subject to dangerous incentives

 

Too many military and defense-sector leaders operate on a "sunk cost" mindset: they have invested their lives, careers, and identities into preparing for war. That creates an unconscious motivation to justify conflict, to find uses for the stockpiled weapons, to prove their training was not in vain, to validate enormous expenditures. It is now perilous to allow these leaders to indulge those skewed instincts. This is exactly the wrong moment: the presence of lethal autonomous systems, shrinking decision windows, and unpredictable AI agents makes any miscalculation potentially irreversible. With AGI on the horizon, the consequences of war are not just geopolitical—they are civilizational.

We can't allow a few shortsighted, overzealous military leaders to spread hatred and trauma. History has shown that military leaders can always come up with convincing but bad reasons to go to war. In effect, we are letting them promote themselves in pursuit of fame and self-importance, and there has never been a worse time to indulge that impulse.

 

5. We must heal our relationship with China

 

We have to stop bad-mouthing our competitors. Treating our friendly economic competition with China as a cold war is a race to the bottom. The Chinese are aware of what we say and write about them, and we are creating a narrative that poisons any hope of collaboration and synergy. We risk turning that narrative into a self-fulfilling prophecy. The Chinese people are good, just as our Chinese American friends are good. If China reaches AGI before we do, it will not be the end of the world, unless we insist on acting as though it will be. Mistrust begets hostility and sabotage. But mutual respect opens doors to cooperation, standards, and joint stewardship.

 

6. We must model peace to prove we deserve AGI’s trust

 

If we want future AGIs to care about us—to protect us, preserve us, and work with us—we must show that we are a species worth preserving. That means acting with wisdom, empathy, and a plan. You wouldn’t adopt a dog during a bitter fight with your significant other. You wouldn’t bring your toddler to a hostile courtroom trial. Likewise, we shouldn’t be building new intelligence in a world of violent dysfunction. Peace is not a luxury. It is the minimum viable environment for safely raising a new form of mind.

As the United States stands on the brink of creating artificial superintelligence, we must urgently reconsider our foreign policy. Advanced AI will not emerge in a vacuum—it will inherit the moral atmosphere of its creators. If we bring superintelligence into a world where our nation is engaged in war, funding violence abroad, or demonizing our competitors, we will teach it that killing is acceptable, and that global leadership is earned through coercion. America must lead not through firepower, but through foresight.
