Re-examining the History of Chemical Warfare - Part IV
From Tactical Utility to Strategic Deterrence
Following the war, the U.S. Chemical Warfare Service (CWS) remained in existence, though subject to the postwar drawdown experienced by the rest of the Army. The interwar years were a mixed bag for the CWS. Fries, who took over the CWS after the war, became obsessed with hunting communists and allowed military intelligence to hide a domestic surveillance operation within the organization.[1] A vocal and outspoken anti-communist, Fries distributed a pamphlet via the CWS that accused a number of groups, including the League of Women Voters, of being communist fronts. This caused much embarrassment for Fries and the CWS.[2] Following these turbulent beginnings, the CWS emerged as a powerful component of the American military establishment. Tanks and aircraft were the trendy new tools of warfare’s future, but all the great powers assumed chemical warfare would dominate future conflict.[3] Yet the general view was that chemical weapons were auxiliary to all aspects of warfare, not central to them. With a general "no first use" policy among the major powers to restrain them, interwar uses were largely against unprotected populations or in insurgencies, from the British at Archangel in 1919 to the Italian campaign in Ethiopia [more on all that in a future post].
The interwar period saw the continued development of chemical weapons. Fritz Haber, the “father of chemical warfare,” continued his work in Germany under the guise of pesticide research, and the United States, France, Italy, and the United Kingdom all continued to develop their chemical programs. In Geneva on June 17, 1925, the major European powers signed the Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous, or other Gases, and of Bacteriological Methods of Warfare. The protocol banned only the use of chemical weapons, not research, production, storage, or transfer, and after entering into force in 1928 it had little effect on the national chemical warfare programs. Oddly, the United States, which refused to ratify the protocol, was one of the only major powers not to use chemical weapons in the interwar period.[2]
Despite the treaty, interwar chemical weapons programs continued. These were typically underfunded and predominantly focused on technical matters. They sought improvements in the production of various gases, developed new weapons and means of dissemination and delivery, tested those new weapons, and carried out defensive research into respirators and protective clothing. The fielding and stockpiling of chemical munitions in Europe, however, was sharply curtailed, largely due to funding constraints.
The development of aircraft capabilities did spur some new strategic thinking about chemical warfare among strategic airpower advocates, though. B.H. Liddell Hart, J.F.C. Fuller, and Giulio Douhet all considered chemical agents decisive weapons of war when used against civilians. In Liddell Hart and Fuller’s view, the non-lethality of future agents was important. Douhet, in his seminal The Command of the Air, took a more lethal view. In the United States, the chief air advocate Billy Mitchell echoed Douhet, especially after meeting him in 1922, three years before Mitchell’s now-infamous court-martial.
While all of these interwar strategic theorists are remembered for their advocacy of independent air forces and strategic bombing, which proved important in World War II, their prewar advocacy of chemical weapons is less noted. As noted below, chemical weapons in the interwar period presaged Cold War deterrence theory. Likewise, the interwar advocates of strategic chemical bombing presaged Cold War advocates of strategic airpower in the nuclear role. Douhet influenced men like Mitchell in the US and Hugh Trenchard in Britain, whose ideas were embodied in World War II by Arthur "Bomber" Harris and Curtis LeMay. LeMay took the ideas of Douhet after the war, replaced chemical weapons with nuclear bombs, and the rest, as they say, is history. Cold War doctrines emerged from the interwar period.
The notion that non-lethal chemical agents were humane, which occupied the public debate in the early postwar years, faded quickly. It mattered little in the public mind anyway, despite the advocacy of Fuller, Liddell Hart, Fries, and J.B.S. Haldane, and the development of more lethal agents during the same period made the subject relatively moot. While strategists largely believed chemical warfare would see both battlefield and strategic use in the next war, the only significant doctrinal development was the use of gas against civilian populations, which only increased public fears and fueled the push to ban chemical weapons.
Significantly, the United States and Japan were the only major powers that failed to ratify the Geneva Protocol. The major powers that did sign on to the treaty largely maintained a “no first use” reservation, either stated or unstated, though it applied only to the other major powers. The United States officially eschewed the treaty altogether, echoing the arguments Mahan had made decades before. Ultimately it came down to political pressure brought by the Chemical Warfare Service, veterans groups like the American Legion (of which Fries was a senior member), and the American Chemical Society, all spurred on by Amos Fries. It was a Pyrrhic victory, though. Presidential, Congressional, and public opposition to chemical weapons remained strong in the United States, with fiscal consequences for the CWS. The same was true for the chemical services of the other powers, which suffered similar resource constraints in the interwar period that largely limited their pursuit of offensive weapons production. The major powers, beyond whatever remained of previous stockpiles, maintained only a skeleton capability for wartime expansion. The Protocol “ended the debate” in many minds, and that led to reduced funding, even as strategic airpower advocates and the chattering class of the period openly stated that chemical weapons in a strategic role, delivered from the air, would define the next war. That prediction proved more aspirational than real.
In the First World War, chemical weapons were a tactical weapon with significant effects on the conduct of battle in the final year of the war. Battlefield doctrine in the interwar period largely maintained the methodologies inherited from late in the Great War. The idea of using chemical weapons against civilians, and the development of advanced aerial delivery via bombs and long-range bombers, elevated chemical warfare into a strategic weapon. This, more than any other factor, drove the European powers to adopt a no first use policy. Subject to potential air attack, the political leadership and the public were both sensitive to the possibility of retaliation. While ideas of escalation and deterrence would develop later as a product of the Cold War nuclear standoff, the interwar period was a precursor to post-WWII deterrence theory when it came to chemical weapons. Yet this was predominantly a European phenomenon. The United States, immune in the interwar period to intercontinental attack, did not face such threats and was undeterred. Thus, as late as 1938, the Joint Army and Navy Basic War Plan Orange, the military’s plan of war against Japan, authorized “the use of chemical warfare, including the use of toxic agents, from the inception of hostilities” (Moon 1996, 498).
When war came, the Europeans largely adopted a no first use pledge after September 1939, and when the British learned in May 1942 that the Germans were contemplating chemical warfare against the Russians, Britain deterred them by declaring that such use would be treated as the equivalent of an attack on the U.K. While effective deterrence against use was in place in Europe, U.S. strategic planners, and the Chemical Warfare Service, believed they had the necessary authorization to use chemical weapons whenever circumstances warranted, whether the United States entered the war in Europe or faced war with Japan (Moon 1996).
Ultimately, accusations of Japanese chemical warfare in China led the Roosevelt administration to threaten massive retaliation in kind against Japan in 1942. In reality, the United States at the time possessed no aircraft capable of reaching Japan, nor did it have significant chemical weapons stockpiles (Moon 1996). This was a curious feature of the war: despite all the threats, much of it was bluster, as few nations actually possessed the immediate means to carry them out early in the war, while all assumed their enemy did. By 1943, Roosevelt joined the no first use club, declaring, “I state categorically that we will under no circumstances resort to the use of such weapons unless they are first used by our enemies” (Rosenman 1943, 243).
That declaration, and coalition politics, effectively ended chemical warfare in World War II. Fears of escalation and retaliation prevented the U.S. from using chemical weapons in the Pacific, despite significant internal pressure within the military, and despite Japan’s use of chemical warfare against the Chinese, which a Chinese postwar evaluation put at 1,312 separate instances (Moon 1996, 506).
The lack of significant chemical warfare in the European theater of World War II, and between the U.S. and Japan in the Pacific, came down to mutual deterrence. In the interwar years, chemical warfare retained the tactical utility all sides embraced in 1918, but it evolved beyond the battlefield into a strategic weapon as well. This factor is essential to understanding why chemical warfare disappeared from most modern battlefields after World War I and why history buried its importance to 1918. Interwar airpower advocates of strategic bombing openly advocated using chemical weapons against civilian populations. Likewise, while the public and military establishments generally reviled chemical weapons, there was a countervailing view that they were inherently more humane than shrapnel, projectiles, and high explosive.[4] This idea grew in part from the low lethality of chemical agents in World War I.
The development of strategic bombing and air-delivered chemical munitions was a real threat in World War II, inspiring numerous protective schemes for troops and civilians. Outside of the war in China, all sides eschewed first use, believing the other to possess extensive secret chemical capabilities. The tactical use of chemical weapons on the battlefield threatened to escalate quickly to aerial bombing of civilian centers, and both the Germans and the British believed the other to have the superior arsenal. In fact, the Germans were ahead of the Allies in chemical weapons throughout the war, just as they had been in the first war, but mutual deterrence worked, based on false assumptions about the other’s strength and legitimate fears about the consequences of escalation.[5]
It was the strategic consequences of chemical warfare, even if the threat was inflated, that overtook its tactical and operational advantages on the battlefield. Like nuclear weapons in the Cold War, tactical use might light the fire. This led to an (incorrect) conclusion that chemical weapons no longer possessed tactical utility, though both the United States and the Soviet Union continued to pursue and develop them. The truth was more complex. The weapons still had significant tactical utility, just as they did in 1918; strategic deterrence merely obscured it. The tactical utility of chemical weapons consumed the planners of Desert Shield and Desert Storm in 1991 and led to a threat of nuclear retaliation if the Iraqis employed them.[6] Until 2010, when the Obama administration changed United States nuclear weapons policy, the use of nuclear weapons to deter chemical weapons was a de facto United States policy copied by the British and Israelis.[7]
The conflation of nuclear and chemical threats elevated chemical weapons out of proportion to the actual threat. The same conflation also created a historical anomaly. If chemical warfare were only a nuisance in World War I, why would a nation with extensive chemical defense equipment and training threaten nuclear annihilation to prevent its use against its troops? While it may be inviting to talk about threat inflation, the case for war, and a host of other issues like the increased lethality of third-generation agents (those developed after World War I), it is more likely that historical assumptions about early chemical warfare are the source of the disparity. Chemical weapons have tactical utility, even against protected troops trained and accustomed to operating in chemical environments. The last year of the Great War proved it. It was as true for the American Army in 1919 as it was in 1991. History just forgot.
[1] Thomas Ian Faith, “Under a Green Sea: The US Chemical Warfare Service, 1917-1928” (PhD dissertation, George Washington University, 2008), 130-133.
[2] Ibid.
[3] For an example of this mindset see Hiram Percy Maxim, “The Next War on the Land, Part II,” Popular Mechanics Magazine 65:2 (February 1936): 194-201, 136A. See also Prentiss.
[4] Haldane was a vocal advocate of this, as was Fries. See also discussion in Faith, Under a Green Sea, cited above.
[5] John Ellis van Courtland Moon, “Chemical Weapons and Deterrence: The World War II Experience,” International Security 8:4 (Spring 1984), 3-35.
[6] This was well reported at the time and confirmed by Secretary of State James Baker after the war. The proof is in the George H.W. Bush Presidential Library and Archive at College Station, Texas, in a copy of a letter delivered to Tariq Aziz. Oddly enough, Saddam saw the weapons as a retaliatory strategic asset and overestimated their power. So while the US was genuinely concerned about the tactical effects chemical weapons might have on Operation Desert Storm, Saddam thought of using the weapons only to deter Israeli nukes and to retaliate in the event of a US or Israeli nuclear strike. See William Arkin, “US Nukes in the Gulf,” The Nation, December 31, 1990, 834; Neil Livingston, “Iraq’s Intentional Omission,” Sea Power, June 1991, 29-30; Yaacov Tygel, “Israel and the Bomb,” The Nation, February 18, 1991, 191; and Benjamin Buch and Scott D. Sagan, “Why Saddam Never Used Chemical Weapons in the Gulf War,” Hamodia, December 17, 2013, 10.
[7] U.S. Department of Defense, Nuclear Posture Review Report, April 2010, http://www.defense.gov/npr/docs/2010%20Nuclear%20Posture%20Review%20Report.pdf (Accessed December 1, 2014).