A memorable scene in the 1983 film The Dead Zone provides an ethical justification for actions that harm innocent people. The protagonist presents his friend and psychiatrist with a well-worn hypothetical query: If he could travel back in time to pre-Nazi Germany, would he kill the young Hitler? His friend responds cannily, “I’m a man of medicine. I’m expected to save lives and ease suffering, and I love people. Therefore I would have no choice but to kill the son of a bitch.” I think of this scene often when reading about the scientists who made the first atomic bombs. They typically claimed they did so to deter Hitler. As ethical justifications go, invoking Hitler is the gold standard. But, of course, the justification does not fit the outcome: The atomic bombs were used against Japan.
After World War II, scientists were sensitive about the ethical implications of their work when it had obvious military dimensions. Some were proud of such work. After all, was deterring Stalin so different from deterring Hitler? Many claimed they were not responsible for what nonscientists did with their research. Wartime funding did not dry up after the war, and science flourished thanks to its military patrons. Publications such as Vannevar Bush’s 1945 report Science: The Endless Frontier encouraged scientists to play up the idea of “basic” research to continue pursuing pure science without calling it military work. Still others wondered what public responsibilities, if any, fell to scientists like themselves who developed instruments of immense destruction. In the late 1940s Albert Einstein warned that, by allowing the armed services to bankroll so much research, the United States was headed down the path of militarism.
Historian Sarah Bridger’s ambitious book Scientists at War rethinks scientists’ role within the Cold War military establishment, viewing it through an unusual lens: that of the 20th century’s continually evolving ethical norms. She frames her story as a paradigm shift in the sense of Thomas Kuhn’s well-known 1962 book The Structure of Scientific Revolutions, drawing heavily on Kuhn’s suggestion that worldviews are often generational. Kuhn himself makes an appearance in the book, although ironically representing the older generation. The book’s title refers not so much to the actual battlefield as to ethical conflicts among the scientists themselves.
Bridger has written a stimulating account of the clash of worldviews that divided the Manhattan Project generation from Vietnam-era scientists. Science superstars such as Edward Teller, Herbert York, and George Kistiakowsky saw their careers skyrocket after World War II, and they played important roles directing weapons laboratories and advising presidents. In contrast, the experiences of many Vietnam-era scientists, such as Agent Orange critic Matthew Meselson, were marked by doubt, disillusionment, and the desire to speak out.
Bridger carefully avoids depicting the war in Vietnam as the event that led scientists to consider ethical implications for the first time. Such a view would be far too simplistic. Scientists advocated for international control of atomic energy, as well as for arms-control agreements with the Soviet Union long before the Vietnam War, she notes. Scientific celebrities such as Einstein, J. Robert Oppenheimer, and many others openly expressed their concerns on the pages of publications such as the Bulletin of the Atomic Scientists. Yet they also wanted to keep the United States at the cutting edge of weapons technology, especially after the Soviet Union’s 1957 launch of Sputnik. The satellite may have begun the space age, but it also waved a bright red flag in the arms race, demonstrating that the Soviet Union was ahead of the United States on ballistic missiles, an important delivery system for bombs.
The title Making Weapons, Talking Peace, the 1987 memoir of York, who served from 1958 to 1961 as the Pentagon’s director of defense research and engineering, perfectly captures his generation’s ethos. These scientists believed what they were doing was justified—not merely from their sense of duty, but also because they felt they could pursue ethical choices best from within government.
That sense of ethical agency collapsed in the late 1960s and early 1970s as scientists sharply disagreed about their proper role as advisors and about whether universities should work closely with the armed services. The war in Vietnam drove a wedge through the scientific establishment, and the clashes were often generational.
Just as young people rallied on campuses to protest the war, younger scientists led attempts to decouple universities from military-sponsored laboratories such as the Lincoln Laboratory at the Massachusetts Institute of Technology, which by the late 1960s drew 98 percent of its funding from the U.S. Department of Defense. Elite bodies such as JASON, an independent group of researchers that advised the Pentagon, no longer attracted the cream of the young scientific crop. As Bridger describes it, they could not compete with the growing political momentum for a “sweeping ethical reckoning” demanded by the Scientists and Engineers for Social and Political Action (SESPA), which published leaflets, picketed military-sponsored labs, and railed against the Nixon administration’s 1970 decision to expand the Vietnam War into Cambodia. More vocal and influential than earlier pacifist science groups, SESPA was not exactly mainstream; nonetheless, it succeeded in shifting mainstream attitudes. Government work lost its appeal, Bridger notes, no longer attracting geniuses but rather “loyal talent eager to work within the defense establishment.”
Looking back, it is impossible to identify a clear-cut consensus about the scientist-military collaboration after Vietnam. Some universities, including MIT, severed official ties with military-dominated labs, although informal practice allowed connections to remain. Scientists who continued to work in defense-related realms fell back on the old argument that a strong military lowers the risk of provocation and war. That rationale was challenged in the 1980s, when President Ronald Reagan promoted the Strategic Defense Initiative (SDI), popularly called Star Wars, a technological wonder intended to protect American cities from incoming nuclear missiles. There were numerous reasons to oppose developing SDI, including the fact that the United States had signed the Anti-Ballistic Missile Treaty, which forbade such defense systems, and doubts about whether the system could work as advertised.
Bridger argues that scientists balked at SDI because it shook the foundation of deterrence, the precarious ethical justification that American scientists had been nurturing for decades. These scientists believed their research was keeping the Cold War from getting hot. By making nuclear war survivable, SDI seemed to normalize it, perhaps even making it more likely to occur. Undermining deterrence in this way drew the ire of not only Vietnam-era scientists but the older generation as well. While SDI had advocates, including Teller and the scientists in national laboratories who stood to gain from lucrative contracts, it received sharp criticism from major organizations such as the American Physical Society and the American Mathematical Society. Bridger proposes that the widespread opposition to SDI on ethical grounds represents a kind of “generational reconciliation after Vietnam.”
Although SDI might have put multiple generations of American scientists on the same page again, such a conclusion may be too tidy for a book about scientists and the ethics of weapons research. Bridger misses opportunities to engage directly with complex ethical issues connected to pacifism or the doctrine of “just war,” whereby a conflict must meet strict criteria to be considered morally justifiable. Religion does not enter the discussion at all, and she rarely moves beyond notions of patriotic duty or deterrence as ethical rationales.
During a 1983 speech to the National Association of Evangelicals, President Reagan used the term “evil empire” to describe the Soviet Union. In a religious context, invoking evil squarely resolves most ethical quandaries. He approvingly recounted the story of a man who said he would rather see his children die today than see them live under godless communism. Reagan used the term “evil” no fewer than eight times in that speech and stated outright that the Soviet Union was “the focus of evil in the modern world.” A couple of weeks later he introduced SDI. Because Bridger demonstrates so well that ethical considerations evolved alongside social change, it is fair to ask how religious doctrines or fervor influenced scientists’ choices, especially among those who considered the Soviet Union, with its official stance of atheism, to be a force for evil in the world.
The protagonists in Scientists at War are those who scrutinized their research not only for intrinsic value but also for its potential uses and who formulated their actions accordingly. Bridger portrays scientists such as physical chemist Kistiakowsky and electrical engineer Jerome Wiesner, both advisors to presidents Eisenhower and Kennedy, as weathervanes of broadly held ethical positions. They crafted arguments as responsible citizens, were outspoken in their views, and were respected by their peers. Both men were scientists of the Manhattan Project generation who tried to work toward disarmament from the inside. But after failing to sway the government against steady escalation of the war in Vietnam, Kistiakowsky left his advisory role. He explained that he had “not grown long hair” or become a “peacenik,” yet his resignation surprised colleagues who presumed their duty was to advise, not to publicize dissent. Wiesner similarly broke ranks over Vietnam, and as president of MIT he sympathized with efforts to decouple the university from its military partner. Conversely, Bridger sees physicist Lee DuBridge and electrical engineer Edward David—who did little politically in the Nixon years, neither commenting on Vietnam nor straying from the president on substantive policy issues—as “milquetoast science advisors”: yes-men who did not ruffle feathers.
To her immense credit, Bridger has avoided crafting Scientists at War as a heroic tale of student protesters and pioneering moralists changing American science for the better. It is less a morality tale than a story of evolving consensus about the ethical underpinnings of everyday scientific work. Its themes should resonate, as the question of military-supported scientific research is alive and well. Recent revelations about the complicity of the American Psychological Association (APA) in the use of torture during military interrogations provide a case in point. During the George W. Bush administration—at a time when, even within the Central Intelligence Agency, many professionals raised questions about enhanced techniques such as sleep deprivation and waterboarding—academic scientists turned a blind eye, hoping to maintain existing relationships and funding streams. One investigation concluded that APA officials had colluded with the Pentagon to ensure ethics policies were never more restrictive than military officials desired. Ethical standards in this instance proved to be quite flexible.
Today, every working scientist weighs his or her own ethical options, and each should do so consciously. Bridger’s book reminds us that such choices are rarely as straightforward as the hypothetical decision to deter Hitler. The consensus has changed before, and it likely will change again.
Sarah Bridger. Scientists at War: The Ethics of Cold War Weapons Research. x + 278 pp. Harvard University Press, 2015. $45.
Review by Jacob Darwin Hamblin, originally published as “An Ethical Evolution” in American Scientist (Nov/Dec 2015)