The Ethics of Climate Change
by Andrew Light, Kent Keller, Bill Kabasenche, Eugene Rosa | © Washington State University
In 2012 the Thomas S. Foley Institute for Public Policy and Public Service, in conjunction with the School of Politics, Philosophy, and Public Affairs, began a new public symposium series that focuses on the ethical and public policy ramifications of new scientific innovations and knowledge. Each semester the symposia, which are open to the public, bring together WSU faculty with other internationally prominent scholars. The first in the series, “Ethics and Global Climate Change,” was held in April 2012, and brought to WSU’s campus Andrew Light, director of the Center for Global Ethics at George Mason University and a fellow at the Center for American Progress. Other panelists included Eugene Rosa, the Boeing Distinguished Professor of Environmental Sociology at WSU; Kent Keller, professor in the School of Earth and Environmental Sciences at WSU; and William Kabasenche, associate professor in the School of Politics, Philosophy, and Public Affairs. The series will continue during the fall 2012 semester with a symposium focusing on ethical questions surrounding reproductive rights and religious freedom. For more information you can contact the Foley Institute at email@example.com.
The equity dilemma
It should be clear from the other contributions to this Forum that by its very nature the problem of global climate change requires a global solution. Once this reality is accepted, an immediate question is how to distribute most equitably the obligations for reducing emissions to safer levels. After an absence of some years, the question of the most equitable distribution of greenhouse gas reductions is now back on the table in international climate negotiations.
The global community has been working on the creation of a comprehensive treaty to reduce greenhouse gas emissions to safe levels for the past 20 years. The first climate treaty was proposed in 1992 at the Rio Earth Summit in Brazil, leading to the creation of the United Nations Framework Convention on Climate Change (UNFCCC), which was ratified by 194 parties, including the United States.
But while it represented a landmark piece of diplomacy at the time, the UNFCCC only called for voluntary reductions in greenhouse gases and so was considered by most parties to be inadequate to solve this problem. Since that time this body has been struggling to create a binding treaty that could achieve the reductions in these gases that the scientific community believes are both feasible and achievable for most countries. The chief sticking point, though, is finding the right balance of responsibilities among the various parties.
The only guidance provided by the language of the framework convention was that the assembled parties shall have “common but differentiated responsibilities” to reduce their emissions. This phrase has been interpreted as meaning that while all greenhouse gas polluters have some obligation to reduce their emissions, these responsibilities differ based on (1) their historical emissions and (2) their development needs. Historical emissions matter because the main anthropogenic greenhouse gas—CO2—continues to force temperature increases for hundreds and, by some estimates, thousands of years after it is emitted. The almost one degree Celsius of global warming humans have caused so far is due largely to the emissions produced by today’s developed world. Development needs matter because the often crushing poverty still experienced in many parts of the developing world may require a slower transition from dirtier carbon-intensive fuels, which still tend to be cheaper than cleaner fuels.
Unfortunately, all efforts so far by the UNFCCC to create a binding treaty that both reduces emissions to safe levels and creates a formula for distributing those reductions that all parties can accept have failed for various reasons. While the framework convention created the Kyoto Protocol in 1997, the interpretation of common but differentiated responsibilities embraced in this agreement legally bound only developed countries to reduce their emissions, while developing countries were merely asked to enact voluntary measures. Because of this perceived imbalance in responsibilities, the United States never ratified Kyoto. Since the United States is the second largest emitter of greenhouse gases (now behind China), and is still the largest historically, it’s difficult to imagine a workable international climate regime that does not include the United States as a full participant. It’s also impossible to achieve the reductions necessary to stop at some level of relatively safe warming without the participation of the biggest emitters in both the developed and developing world.
Last year at the UNFCCC’s annual summit, in Durban, South Africa, an effective reset was called and the parties agreed to start a three-year process to create a new climate agreement that would have the same legal force for all parties. Beginning this November, at the next meeting of the UNFCCC in Qatar, the parties will begin to create the language of a new treaty, which, if successful, will go into effect soon after 2020.
It is in this context that the issue of equity is being discussed once again. The conversation started this past spring at an intersessional meeting of the UNFCCC in Bonn, Germany, where a two-day workshop was held on equity in sustainable development. Already, though, old divisions that have haunted these talks have re-emerged.
A key problem is that many influential developing countries continue to take the UNFCCC mandate of common but differentiated responsibilities as the only acceptable outcome for division of responsibilities for reducing emissions. This has led them to embrace some solutions which effectively grant the right to all countries to emit some greenhouse gases into the atmosphere as a right of development. While there are many forms of this argument, they tend to look something like this:
1. Start with an assumption that the global commons can only absorb X trillion tons of carbon before reaching unacceptable levels of global temperature increase.
2. Divide X by the global population and allocate an equal per-capita amount of emissions on a country-by-country basis.
3. Subtract the amount any country has historically emitted (back to an accepted baseline) from its total allotment based on population.
4. From steps 2 and 3, compute the amount of future emissions allowable for each country starting now, expressing any positive allowable emissions as an emission right or “development right.”
5. If a country has already emitted more than its fair share of CO2 into the atmosphere over its history (such as the United States in all of these treatments), then it has a “carbon debt” and must either radically reduce its emissions to zero or compensate those countries which have not emitted their fair share of historical greenhouse gases for holding back on the emissions they still have a right to emit.
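The arithmetic behind this allocation scheme can be made concrete with a short sketch. The figures below are entirely hypothetical, invented only to illustrate the steps above; they are not real emissions or population data, and the treaty proposals themselves use contested baselines and budgets.

```python
# Illustrative sketch of the per-capita "development rights" arithmetic
# described in steps 1-5 above. All numbers are hypothetical.

GLOBAL_BUDGET = 1000.0  # step 1: total allowable emissions (Gt CO2), assumed

countries = {
    # name: (population in millions, historical emissions in Gt CO2)
    "A": (300, 350.0),   # smaller population, large historical emissions
    "B": (1200, 150.0),  # larger population, small historical emissions
}

total_pop = sum(pop for pop, _ in countries.values())

for name, (pop, historical) in countries.items():
    share = GLOBAL_BUDGET * pop / total_pop  # step 2: equal per-capita share
    remaining = share - historical           # step 3: subtract historical use
    if remaining >= 0:
        # step 4: a positive balance is a "development right"
        print(f"Country {name}: development right of {remaining:.1f} Gt")
    else:
        # step 5: a negative balance is a "carbon debt"
        print(f"Country {name}: carbon debt of {-remaining:.1f} Gt")
```

On these made-up numbers, country A ends up with a carbon debt (it has emitted more than its per-capita share) while country B retains a large development right, which is exactly the asymmetry the Indian submission quoted below invokes.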
We can see this reasoning at work in various communiqués over equity in the climate talks. In a submission to the convention on October 10, 2011, the Indian government put the point this way: “Equitable access [to sustainable development], for its part, must derive from the notion that all human beings have an equal entitlement to the global atmospheric space, and that in determining just shares of the remaining atmospheric space, past usage (or over-usage) of the global atmospheric space must be taken into account.” As one might imagine, the United States, and some other developed countries, have categorically rejected this idea.
Rather than weighing in on one side or another of these debates, here I only want to point out one significant hurdle that any scheme such as that just described would have in order to be accepted by the United States. If the United States were to sign onto an international treaty that accepted such a notion as outlined earlier, it could potentially dismantle the current basis for regulating these substances at home. The Indian submission describes greenhouse gases as the source of a positive right, a resource if you will, rather than as a pollutant. At present the only basis for regulation of CO2 in the United States is in negative terms as pollution.
The origin of this designation goes back to the 2007 Massachusetts v. EPA decision (549 U.S. 497). There, the Supreme Court ruled in favor of 12 states and several cities which had sued the Bush administration over its refusal to determine whether CO2 and other greenhouse gases constituted pollutants under the Clean Air Act. In the 5–4 decision the court determined that global warming could present a potential threat to these states and cities for various reasons, and so the EPA was required to undertake an “endangerment finding” to determine if these substances needed to be regulated to protect the health and safety of Americans. While the Bush administration never began producing this finding, the Obama administration started the process a few months into its first year and announced in December 2009 that these gases did meet the standard of a dangerous pollutant under the Clean Air Act.
While the Obama administration was making this executive determination, the U.S. Congress was trying to pass a comprehensive energy and climate bill. Unfortunately, while the House version of this legislation (the “Waxman-Markey” bill) passed, companion legislation in the Senate never even made it to a floor vote. As a result, the determination of greenhouse gases as a pollutant under the Clean Air Act became the “plan B” for the United States for joining the rest of the world in reducing its emissions. The results have been impressive, with EPA regulations passed on the basis of this authority to limit emissions from mobile sources, new stationary sources (particularly from coal-fired power plants), and most likely existing stationary sources if President Obama is reelected.
Nonetheless, many environmental critics of the administration find these regulations insufficient to meet the United States’ global responsibility to reduce our emissions given the amount we have historically emitted. But no matter how much one may disagree with these efforts, it is undeniable that the authority to regulate greenhouse gases in the United States stems from a description of them as harmful pollution. Since the 2010 midterm elections there have been numerous congressional attempts to overturn the EPA’s authority, all of which have so far failed by narrow margins in the Senate. If the Obama administration were to embrace a global treaty that defined greenhouse gases instead as the source of a positive right, it would either undermine its defense so far of this authority or force it to defend contradictory conceptions of the same set of substances in two different arenas.
While this consideration is not absolutely defeating for embracing something like a greenhouse development rights approach to equity in the international climate negotiations, it does at least demonstrate how a new treaty has to grapple with a delicate combination of philosophical and practical considerations that are made all the more difficult by national circumstances. While in the abstract there may well be an optimal allocation of global reductions in emissions, the reality is that a global environmental treaty may not be the best vehicle for carrying that allocation forward.
Pointing out tensions like these, though, does not mean that a new equitable, workable, and effective climate treaty is beyond our reach. Over the next few years we will see the emergence of several cooperative efforts among state actors and NGOs to try to produce a more flexible, less abstract notion of climate equity that has the potential to represent a consensus of views on a fair outcome allocation of global responsibilities to address this challenge.
A geologist’s view
We are children of the Pleistocene. From a geological perspective the past two million years of Earth’s history are characterized by the cyclic alternation of ice ages interspersed with relatively short warm periods (the last one of which, the Holocene, allowed agriculture, animal life, and human civilization to flourish). The Pleistocene coincides with the rise of our species, and every aspect of us, from DNA to culture, has been sculpted by its climate cycles. Indeed, climate change, over and over, is the rule of human existence. Yet these climate cycles have occurred within a remarkably resilient Earth-system climate framework: Pleistocene temperature variations have rarely exceeded 10 degrees C on a global basis over a full 100,000-year cycle. Such resilience has characterized Earth’s climate far back into deep time, sustaining life here continuously since its dawn at least 3.7 billion years ago.
Against this backdrop we must now consider what, if any, role humans have in altering Earth’s climate. Scientists with expertise, as in any field, have a particular ethical responsibility to interpret available data carefully, to place our interpretations within the context of the larger scientific community, and to explicitly acknowledge and describe uncertainty. Recent studies that contain billions of carefully filtered data points show unequivocally that Earth’s land surface has warmed about 1 degree C over the past 100 years. This trend is highly likely to accelerate over the coming decades. Our best models, incorporating the mechanisms by which sunlight is processed by Earth’s atmosphere, biosphere, and oceans, indicate that our greenhouse gas (GHG) emissions to the atmosphere are substantially responsible for these changes. There is great uncertainty regarding how the climate system will respond to our unprecedented and presently uncharted path of GHG emissions on centennial and longer timescales. Much of this uncertainty is due to our rudimentary understanding of a vastly complex climate system. These views regarding the nature and probable causes of changes to Earth’s climate, as well as the admission of uncertainty about how the climate system will respond, are fully in the mainstream of geologic and climate science.
What ethical and policy conclusions should be drawn from this? As ever, we all owe each other the fundamental responsibilities of citizenship: to learn as best we can about how the world works, and to make choices based on open-minded, critical inquiry, with due consideration of others with whom we share the planet. As we relate to climate, I believe that the obligations of common citizenship surely point us to a position of humility. Acknowledging our ignorance, we should engage in “intelligent tinkering”; that is, mitigate and adapt carefully, without sacrificing Earth’s subsystems or their parts. In practice this means erring on the side of caution by minimizing our disturbances to atmospheric chemistry, hydrologic and nutrient cycles, and plant and animal extinction rates, among other Earth processes.
The hardest work, and perhaps the critical ethical obligation in “climate citizenship,” is to join in the building of sociocultural resiliency at all scales. Sustaining soils and producing healthful food requires functioning communities and sound choices at a smaller set of scales than does managing the GHG composition of the atmosphere. More and more of us must contribute to all of this, leading in some cases, following in others.
Skepticism about climate change is one of the striking features of the public debate over this issue. But few of us are climate scientists who can claim expertise regarding the relevant projections. Two important ethical claims follow.
First, those of us who are not experts in this area seem to have some kind of ethical obligation to take seriously the views of those who are experts—perhaps even an obligation to defer to consensus expert judgment in these matters. Second, experts would seem to have a variety of individual and collective responsibilities regarding how they conduct their research, how they portray their collective and individual views, and how they treat one another within the scientific community. If we who are nonexperts must trust those who are, then we want to know, for example, that their funding sources aren’t inappropriately influencing the judgments scientists must make about what questions to study, what experimental designs to employ, and so on. We’ll also want those with views not in the majority among experts to say so frankly. And we’ll want to know that scientists who sincerely promote an outlier view are treated with the kind of respect that acknowledges that they may offer an important qualification of the received view.
Assuming the dire projections of most climate scientists are roughly correct, what ethical responsibilities follow? One intriguing line of thought might go as follows: The United States has gained an economic advantage in the world through emissions that we now know could harm all of us through global climate change. Therefore, we have a proportionately greater responsibility to curb emissions, making the sacrifices necessary to do so, and also a greater responsibility to research and disseminate any possible remediation technologies, assuming we can do so without imposing unacceptable risk on others.
Some proponents of this line of thought will distinguish between luxury emissions and subsistence emissions and argue that we should give priority to the latter under dire conditions. Another possible implication of this “you’re responsible for your mess” argument might be that the United States should develop and freely (or cheaply) disseminate technologies that would allow all of us to limit our harmful emissions without drastically changing our lifestyles. Of course, the history of World War II victory gardens might encourage us to think there is in the United States the kind of resolve that would enable us to make some important sacrifices in an effort to protect the well-being of others.
Some have suggested we might avert trouble if we could geo-engineer certain aspects of the earth’s climate. Suggestions range from seeding the ocean with iron filings to stimulate plankton growth, thereby sinking a lot of CO2 into the ocean, to placing mirrors in space to reduce the amount of solar radiation reaching earth’s surface. These proposals vary in terms of expense and also in terms of risk. And they also raise ethical questions, such as what the appropriate amount of precaution to exercise is when faced with high risks through action or inaction, or how we might gain the consent of all those who might be affected by such broadly impacting projects. All of the above questions are, of course, the focus of ethicists working in these areas.
Minding the gap
Eugene A. Rosa
Anyone who has ridden the London Underground—the “tube” to Londoners—has heard the repeated loudspeaker reminder to “mind the gap.” The warning is a risk management message to prevent passengers from tripping in the space between the platform and train car. A failure to mind the gap can have grave consequences.
A far more serious social and political gap has emerged in society, presenting unprecedented challenges for science-based governance. The gap consists, on the one hand, of the institutional constraints that keep scientists from bridging science and ethics and, on the other, of the challenge of addressing the growing divide between rapid advances in science and technology and the risks they inevitably generate.
The first had its roots in a centuries-old, universally held worldview. From roughly the fourteenth through the twentieth centuries the dominant view—attributable to the medieval scholar Nicole Oresme—was an image, shared by scientists and citizenry alike, of the perfection of the universe. Oresme captured that image of cosmic perfection in the metaphor of a grand, perfectly functioning clock.
It was a short step to take this perfection as a yardstick for judging ethical issues. If God was perfect, a rock-solid belief, then His creation, the universe, must be perfect, too. It followed, then, that if nature was operationally perfect, why not look there, with its underpinning of perfection, to develop systems of ethics? And many scholars did. However, the quest to translate the operations of nature into ethical codes encountered a serious bump in the road—a bump that would later become a pothole—the “gap”—for science. The bump appeared in the form of a 1903 book, Principia Ethica by the Cambridge philosopher G.E. Moore, considered by the sociologist Robert Merton to be possibly the most influential book of the twentieth century. Moore developed a compelling argument demonstrating a logical fallacy—the naturalistic fallacy—in drawing ethical principles from nature’s operations, however perfect. In short, there was no logical way to take observations of nature, expressed as categorical statements, and convert them into ethical prescriptions, expressed as normative statements.
This conundrum for philosophy would create even more mischief for science. The place of mischief was Vienna, Austria, in the 1920s and 1930s. It was there that a distinguished group of philosophers, logicians, scientists, and mathematicians who called themselves logical positivists met regularly around a unifying topic: how to distinguish science from other systems of knowledge. The key logical quest was to look for a demarcation rule separating science from other systems of knowledge. The first element of this rule was the is–ought separation codified by Moore. The rule was further elaborated to demarcate between pure logic (including mathematics), expressed in analytic statements that reduced to tautologies, and synthetic statements that were amenable to empirical verification. All other statements were metaphysical and, therefore, not scientific and of little use in generating knowledge.
Despite changes in logical positivism over time, two key components remained: (i) a commitment to evaluating theoretical statements on the basis of empirical evidence and (ii) an avoidance of ethical judgments. In the light of basic changes in science and society in the twentieth century, the ethical avoidance principle has become ever more difficult to sustain.
Two key developments in science have challenged the ethical avoidance principle (ii) and account for the challenge of minding the “gap.” The first is the transformation of science from an institution comprising a collection of quasi-isolated and devoted investigators into an enormous enterprise with much of its work in service to government. The single effort most responsible for this was the Manhattan Project, whose products included not only breakthroughs in subatomic physics but also a technology of unprecedented destruction—the atomic bomb. More important, the destructive magnitude of the bomb was a harbinger of the human capacity to destroy the entire planet. It was a harbinger of a growing number of topics at the horizon of science that are embedded with “grand” risks; that is, risks of potential global reach and of incalculable consequence—including the extinction of the human race.
The magnitude and pace of global climate change is now the exemplar of this new era of risk, bearing all the characteristics of the “grand” risks that punctuate our age. Warming the planet is truly global in scope with incalculable possibilities and outcomes that could potentially wipe out human and most animal species. It embeds numerous ethical challenges that bring into sharp relief the science–ethics “gap.” It raises questions such as the proper trade-offs between devoting resources to mitigation versus adaptation, between protecting more versus less vulnerable societies, and between the needs of current versus future generations. Addressing such gap questions, then, is the key ethical challenge for science: namely, how to conduct valid scientific research while offering recommendations for proper policy and action in the light of its findings. Whether science is up to the task is a major policy challenge of our age.