Keynote remarks delivered by Joshua A. Geltzer[1] for the Yale Journal of International Law 2025 Symposium on “Reimagining the International Legal Order.”
—
Thank you all so much for the chance to join this impressive gathering. As a former student editor right here at the Yale Journal of International Law, it’s a particular treat and privilege to get to be back in New Haven, and I’m very grateful for the invitation.
While I might be back at a law school again, that does not mean I suddenly claim to be a law professor again. Indeed, even when I had the pleasure of teaching national security law to wonderful Georgetown Law students, I would offer a fair warning on the first day of the semester that I wasn’t an academic with a long list of 800-footnote law review articles to my name. Instead, I’d warn the students that they were stuck with a practitioner—someone who thinks about the law of armed conflict (LOAC) and related issues from the perspective of having had the privilege of working through real questions about their actual application while serving in various national security legal and policy jobs.
And that’s what you’re stuck with today—a practitioner’s perspective. I have had the opportunity to work on aspects of the issues that I will be discussing at the White House, at the Justice Department, and now at WilmerHale.
That practitioner’s perspective surely contributes to the basic case I am going to make today for gradualism in thinking through aspects of the intersection between artificial intelligence and the law of armed conflict. By gradualism, I mean forgoing grand categorical proclamations of the type that, I admit, sometimes can make for more interesting and even more compelling arguments in the abstract. But, in practice, I think AI is not proving susceptible to such sweeping generalizations, at least in the three areas related to the law of armed conflict that I intend to address today.
All three of those areas begin with the letter “A,” which accords with my recollection that arguments fare best in legal academia when they come in groups of three or when they have alliteration. And I have both! So, today, I want to discuss autonomy, accountability, and ad bellum, making a case for gradualism regarding AI’s implications for each. Ultimately, I will distinguish between gradualism and slowness and explain why I do not think we can afford the latter, given the rapid deployment of AI technology.
And, given that I approach these issues from the perspective of their actual operationalization, let me offer two fact patterns to consider as we discuss the three “A”s I’ve chosen.
First, imagine a senior government official, one minute proceeding calmly to his vacation home with his wife beside him in a motorcade surrounded by elite armed bodyguards, the next minute shocked to find his motorcade screeching to a halt as he and his companions face a hail of bullets ripping through their vehicles. The official, wounded by the incoming fire, staggers out of his vehicle, only to face more incoming fire that kills him on the spot.
Afterward, as the official’s government investigates how this all happened, they find that there was no human attacker present. Instead, all of the bullets had been fired remotely, from a vehicle that then exploded but failed to destroy itself fully, thus leaving traces of what had unfolded. Moreover, and the really key part for our purposes today, the final adjustments to the incoming fire had been made not even by the remote operator of the vehicle, but instead by AI. It was AI that, in particular, compensated for the delay between the video feed and the remote operator of the weapon, as well as for the shake of the mounted weapon and the speed of the vehicle involved.[2] In other words, AI aimed the gun.
Now, imagine a second scenario. A fighter pilot is flying in formation with others in her squadron when her screen lights up with identification of a hostile fighter jet. The pilot communicates with her “air battle manager,” confirms with the manager that the identified jet is an adversary actor with hostile intent, and, as the two planes speed toward each other, fires missiles that destroy the other aircraft, killing its pilot in the process.
Here, the key part for our purposes is that the air battle manager was not a human on the ground or in the air—it was AI. It was, in fact, the same AI system that produced the hostile on-screen identification of the other aircraft posing an apparent threat in the first place. And while the human pilot pulled the trigger, when asked about her decision-making back on the ground, she would likely say she really had no choice—the AI told her that the time window to eliminate the hostile aircraft before being eliminated by it was rapidly closing.[3]
This is, of course, the part of the sonnet where I deliver the “turn” and say, to your shock and your mix of delight and horror, that both of these are not in fact hypotheticals; they are real.
The first occurred in November 2020, at least according to New York Times reporting the following year. The target of the attack was Iran’s top nuclear scientist. And the operation was carried out by the Israeli Government, albeit on Iranian soil.
The second reflects a glimpse that the U.S. Air Force and Navy provided just a couple of months ago to journalists at Fox News, though I’ve added some color to the details. And it wasn’t, to be clear, a vision of the future—it was a snapshot of the present.
There is, of course, much that we as international lawyers could issue-spot about both reports. But let’s focus, to start, on what they mean for the notion of autonomy as it relates to the law of armed conflict.
Discussions about autonomous weapons systems seem perhaps the most mature of the conversations occurring about AI and the law of armed conflict. There are healthy, thoughtful, and just plain interesting debates among law professors and lawyers about whether so-called fully autonomous weapons systems can ever be consistent with the law of armed conflict and, in particular, with the requirements of jus in bello, meaning the law governing conduct in warfare. Furthermore, there has been an emerging dialogue on these issues among and between governments. During the Biden Administration, for example, we tried to engage Chinese Government interlocutors on autonomous weapons systems as an admittedly initial foray into what really should become a robust dialogue between the U.S. and Chinese Governments on AI-related military matters.[4] But you’ll notice that I say we tried, and that is because our success was decidedly limited. Still, my point is that even governments often reluctant to engage with global rivals on advanced weapons systems are at least beginning to broach this topic, an indication of its prominence.
And all of this is worthwhile. But the argument I’d make is that, when it comes to the law of armed conflict, autonomy and its absence simply do not constitute a dichotomy. Sure, one can conjure weapons systems that seem truly autonomous: nuclear weapons hooked up to AI models that are programmed to decide when a nation is facing imminent attack and to fire first in such circumstances. I don’t think it really takes the big brains in this room to conclude that that’s not a good idea.
But, at least to my mind, autonomy has long been coming to weapons systems a bit like how Ernest Hemingway described the onset of bankruptcy: “gradually, then suddenly.” Consider the two scenarios I summarized earlier. In the first instance, AI reportedly determined the final angle of fire for kinetic weapons that resulted in human casualties. There was, it’s important to note, a human in the loop; there was human decision-making behind pulling a remote trigger; but it seems to me to matter a whole lot that AI determined where the gun was pointing when it fired. That’s literally life or death—and it’s potentially shots fired at a lawful target or at a civilian, with potential consequences for compliance or the lack thereof with the law of armed conflict. (I’m bracketing for today’s purposes the question of which one the nuclear scientist should be considered!) Who or what is exercising autonomy here: the remote trigger-puller, or the AI final-targeter?
In the second scenario, a human pilot is also pressing a joystick to fire the missiles. But who or what holds autonomy here? If the pilot’s screens say that there is an incoming hostile actor whose AI-calculated trajectory and speed suggest an imminent attack, and if the AI air battle manager in the pilot’s ears confirms that characterization, then there is a pretty strong push, to say the least, driving the pilot to take the shot. Sure, there are historical examples of individuals who thankfully did not fire nuclear weapons despite their systems indicating that the circumstances had arrived requiring them to do so; search online for “Stanislav Petrov” if you want to read something both harrowing and heroic.[5] But AI is really good—better than those systems—and the pressure to heed its life-or-death guidance, especially in a scenario in which swift defensive action seems essential, will be much stronger still.
And that’s the thing: AI is really good, or at least some of it is at some tasks, and it’s getting even better. This is where my case for gradualism comes in: I am open to the possibility that it can be consistent with jus in bello requirements to cede aspects of autonomy to AI-driven systems. For example, it seems to me quite possible that the requirements of, say, distinction or proportionality can be met, and indeed potentially better met, by AI-adjusted final targeting of a weapon, of the type we saw in the first scenario, than by even the best-trained, best-equipped, best-intentioned humans. Possible, but by no means certain. And, of course, the baseline is not perfection: it is remarkable but imperfect human efforts to abide by distinction, proportionality, necessity, humanity, and other jus in bello requirements.
So, it seems to me that we need to scrutinize these AI deployments system by system and LOAC requirement by LOAC requirement. In fact, Article 36 of the Geneva Conventions’ Additional Protocol I requires states party to the treaty to review new weapons and ensure that they comply with international law, and the United States has long done so. When and how often that needs to happen for AI-driven systems that are themselves learning and thus becoming, in a sense, gradually “new” again and again over time is itself one of the many open questions for international lawyers in the AI era. But I just don’t find the question of autonomy-or-not satisfying, from a practitioner’s point of view. Autonomy is being shared between human and AI, and we need to roll up our sleeves and get really granular—and proceed very methodically—to scrutinize how LOAC applies. This is not a place for grand categorical pronouncements. It is instead a place for the even harder work of very detailed, technologically informed considerations.
Perhaps these are aided by concepts that have been introduced, such as “meaningful human control”[6] and “appropriate human judgment.”[7] Or maybe the debates over what those words mean confuse more than clarify. I don’t have a strong view on that, though I do worry that apparent agreement on words might disguise actual disagreement on substance. I do have a strong view that autonomy involves a spectrum, not a dichotomy, and probably more like a multidimensional set of spectra at that. That means we need detailed, granular analysis of weapons systems—and of the humans involved with them—to apply thoughtfully and appropriately the law of armed conflict to AI. What’s more, we need to tackle that work with the recognition that these issues are not entirely new with AI’s recent prominence. The Aegis Combat System and the PATRIOT air defense system, for example, have long employed some levels and types of automation for consequential kinetic actions taken in circumstances deemed to unfold too swiftly for humans to decide and act. Indeed, where “AI” itself begins and ends is famously fuzzy. Historical precursors won’t solve these hard questions, but they can help us to avoid misconceiving them as entirely new.
Let me shift from autonomy to accountability. Returning to the scenarios under consideration throughout this Essay, imagine that something has gone wrong in each. The mounted weapon’s ultimate line of fire, upon AI-driven adjustments, kills scores of innocent Iranians driving on the highway past the nuclear scientist. The pilot returns to the ground only to learn that the aircraft she obliterated based on AI identification of it as a hostile actor posing an imminent threat was a commercial airplane with civilians aboard. Who is to blame? That is, to the extent that there’s potentially a LOAC violation—and that itself is complicated, of course, but let’s just assume the possibility for the sake of argument—to what actor do culpability and indeed liability attach? In other words, who acted with accountability in a manner that could generate culpability and even liability under the law of armed conflict?[8]
There are a few possibilities here. In trying to assign blame, one could, at least in theory, go pretty far back in the chain to scrutinize how these bad outcomes occurred. There is the AI model developer, a company that trained the model on, one could imagine, thousands of images of what uniformed soldiers might treat as targets of their fire, thus contributing to the AI model, in the first scenario, opening fire on, for example, civilian targets of a particular ethnicity. Indeed, there might well be multiple model developers, such as a generalized initial model developer followed by a military contractor that fine-tuned the model specifically for military purposes. Also, there is the AI model deployer, the entity that took a trained model and embedded it specifically in the architecture of a weapon—and in so doing potentially misused, mis-designed, or otherwise misjudged how it would and should operate.
As my law school classmate Becca Crootof, now an actual law professor whose fascinating work on these issues I commend to all of you (especially a great 2015 article called “The Killer Robots Are Here”), has pointed out, that doesn’t seem the right place to look in applying concepts of command responsibility.[9] But then, where to look? The commander who oversaw the use of the AI model in this context? The technician who set it up? The last military computer scientist to test it? Or maybe whichever civilian or uniformed leader authorized the use of AI in this context in the first place?
It’s not as simple as who pulled the trigger and who ordered him to do so, knowing what at the time—and even that inquiry has, of course, proven anything but simple in the annals of applying LOAC.
But here’s what I’m not prepared to say: that there can be an absence of accountability, or attribution, whatever one calls it. To the contrary, someone, or multiple someones, must be accountable when AI systems are deployed. And, if the requisite mental state can be established, there needs to be someone (or multiple someones) on the hook for when AI goes awry—that is, accountability in some form, in a way that advances the fundamental objectives of the law of armed conflict.
And here’s another thing I’m not prepared to say: that the AI model itself can be that someone. In other words, we cannot hold the AI model itself accountable. Now, I recognize that there are intriguing efforts underway to pin legal rights and responsibilities on AI. For example, the D.C. Circuit decided a case[10] this spring in which a computer scientist sought to endow the generative AI model he had produced with the copyright to a piece of art he attributed to the model itself.[11]
When faced with the question of whether the AI model that generated the creepy artwork could hold its copyright, the D.C. Circuit said no. It held, based on its construction of the text of the Copyright Act, that only human authorship can garner copyright protection.[12] Now, that outcome might have been determined by the particular language of a particular statute. Still, the core issue is one that will recur in many contexts in the months and years to come: can AI possess legal rights and responsibilities? And if so, such that an AI model gets hit with legal culpability, who actually pays damages or, to take things even further, serves time?
Maybe traditional respondeat superior and other principles can help us here, or maybe not. But for now, I’m not prepared to let the humans off the hook. This is part of my case for gradualism today. AI models do extraordinary things, and they do them in ways we do not fully understand and indeed in ways that they do not seem, at least for now, always capable of explaining to us—the interpretability challenges are very real. And, especially with their remarkable language fluency, AI models seem to exercise a lot of agency of the type we generally associate with accountability—at least it is fair to say that a lot happens at the stage of the AI’s intervention, well more than with virtually any technology I can think of. That can create a sense that AI, unlike all of those other technologies, is itself an entity with cognizable legal standing.
I suppose, as a theoretical matter, I should be open to the possibility that, for purposes of the law of armed conflict and perhaps others as well, we reach that point someday. But not today. For today, I urge gradualism: a careful analysis of who did what in a chain of command to yield an alleged LOAC violation, even considering the often remarkable and often inscrutable nature of what AI does. Among the reasons for that, I would put high on the list the importance of not letting humans off the hook for how they deploy and utilize this technology. In fact, I would argue we need that accountability more than ever, as this tech gets incorporated into weapons systems and military planning in fast-evolving ways. The law of armed conflict seems best served, and seems to serve us best, by emphasizing human accountability, rather than abandoning it, even if the search to identify it and link it causally to bad outcomes is becoming more complex than ever. In turn, that calls for gradualism—gradual application of traditional principles to admittedly new relationships with a new form of technology.
On to the final “A”: ad bellum. Here, I want to be brief. Much of the conversation about AI and the law of armed conflict has so far been focused on jus in bello, and that is for good reason. Indeed, you have indulged my dwelling on two scenarios that demonstrate just how fascinating these questions can be. But I think that AI’s repercussions for jus ad bellum, meaning the law of going to war, might just prove, over the medium- to long-term, more consequential than its implications for jus in bello.
That is because decisions to go to war will, I think, increasingly be driven by calculations made in consultation with AI models. Indeed, I think that will be true of many of the most consequential decisions made by the leaders of all sorts of large organizations, in the public and private sectors alike. But especially for key questions that confront a government’s leader before ordering the use of force—in particular, whether there is a reasonable belief that her country is facing an imminent threat of armed attack against it or its people, thus potentially justifying her own decision to use force in self-defense—the calculation will be made in part through dialogue with AI that has been informed of the relevant facts and circumstances.
That’s not to say that diplomats, intelligence analysts, and military leaders will be supplanted. To the contrary, their assessment and advice to presidents and prime ministers will itself be increasingly informed by the AI models on which they and their top advisors will rely. Plus, those heads of state will want to use their own models and have their own dialogues with them—not just to gauge empirically grounded questions like the probability of an adversary proceeding with a feared armed attack but also to pose more analytic and even legal questions, such as grappling with whether a contemplated response should properly be considered necessary and proportionate.
So, imagine AI playing a role not just in finalizing the trajectory of the bullets in the first scenario but also in informing whether Israeli leadership orders the operation targeting the Iranian nuclear scientist in the first place. Or picture AI serving not just as “air battle manager” but also informing a decision to put the pilot’s squadron in the air in the first place to confront another country’s air assets.
We need more work on this: on AI and jus ad bellum, not just jus in bello. And here, too, we need to reap the benefits of gradualism by applying traditional concepts astutely and adeptly to new technologies and new human interactions with those technologies.
But let me be clear: on this and the other matters I have discussed, a case for gradualism is not a case for moving slowly. This technology is here, it’s improving, and it’s being used more and more. As I have argued, I do not believe its intersection with the law of armed conflict is susceptible to rigid dichotomies or categorical proscriptions. But I also do not believe it is susceptible to lollygagging. We need big brains tackling these questions now and likely picking up the pace in doing so.
This would not be a proper return to the Yale Law School if I wrapped up before quoting a Greek philosopher. All of us gathered here will, I’m sure, recall that Aristotle, in his Nicomachean Ethics, said that it’s the mark of an educated person to seek only as much precision in each discipline as the nature of that discipline permits. More humbly, I’ll call that a case for gradualism. AI is here to stay, and I very much hope that the law of armed conflict is here to stay, too, despite facing mounting challenges, as my own wonderful teacher when I studied here, Oona Hathaway, has laid out with characteristic insight.[13] To my mind, the intersection of those two cannot be solved or answered by categorical ins or outs, or by rigid pronouncements intended to last forevermore. Instead, we all will have a lot more to do to work through what AI means for the law of armed conflict bit by bit by bit. At least, that is what this practitioner thinks.
Huge thanks to the Yale Journal of International Law and to all of you for the chance to be part of this conversation.
[1] Dr. Joshua A. Geltzer is a partner in the Defense, National Security, and Government Contracts Group at Wilmer Cutler Pickering Hale and Dorr LLP. The author is grateful for wise and helpful feedback on earlier drafts from Ben Buchanan, Tarun Chhabra, Teddy Collins, Becca Crootof, and Chris Fonzone.
[2] R. Bergman & F. Fassihi, “The Scientist and the A.I.-Assisted, Remote-Control Killing Machine,” N.Y. Times (Sept. 18, 2021), https://perma.cc/29FW-FJYY.
[3] M. Phillips, “Fighter Pilots Take Direction from AI in Pentagon’s Groundbreaking Test,” Fox News (Aug. 26, 2025), https://perma.cc/M5KH-QR48.
[4] W. Knight, “The US Wants China To Start Talking About AI Weapons,” Wired (Nov. 13, 2023), https://perma.cc/38UT-2R7K.
[5] G. Myre, “Stanislav Petrov, ‘The Man Who Saved the World,’ Dies at 77,” National Public Radio (Sept. 18, 2017), https://perma.cc/4BDP-GX2W.
[6] See H. Roff & R. Moyes, “Meaningful Human Control, Artificial Intelligence and Autonomous Weapons,” briefing paper for delegates at the CCW Meeting of Experts on LAWS (Apr. 2016), https://perma.cc/5RPB-6REP.
[7] See L. Trabucco, “What Is Meaningful Human Control, Anyway? Cracking the Code on Autonomous Weapons and Human Judgment,” Modern War Institute at West Point (Sept. 21, 2023), https://perma.cc/4NTU-9C4D.
[8] I think this is also sometimes called the “attribution” problem, which would still have allowed me a trio of “A”s, I’ll just note.
[9] R. Crootof, “Symposium on Military AI and the Law of Armed Conflict: Front- and Back-End Accountability for Military AI,” Opinio Juris (Feb. 4, 2024), https://perma.cc/LY47-6KXN.
[10] Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025).
[11] The artwork is called “A Recent Entrance to Paradise,” and I encourage you to look it up online. It is not how I’ve ever pictured the entrance to paradise, I’ll confess, and it really does make me fear our coming robotic overlords.
[12] See supra note 10.
[13] O. Hathaway & S. Shapiro, “Might Unmakes Right,” Foreign Affairs (June 24, 2025), https://perma.cc/8N8W-VCNT.