Beringia, Climate, and the Problem with Experts

July 06, 2019

Caveat Lector: I am not an expert in any of the below topics. As you’ll read below, I’m not sure how much my lack of expertise actually matters, but it’s at least worth pointing out. As author of this exploration, what I am is merely a somewhat intelligent, reasonably educated, very skeptical person who is curious and interested in lots of different topics across the scientific spectrum, including the process by which science itself is done.

This is also very link heavy. The links are all over the place so as to be available to foster your own explorations and interests. Wherever possible, the links are to Wikipedia; but some are to either opinion pieces or the writings of scientists.

Given the length, a PDF version is available as well.


A few months ago, I stumbled on a description of a place most of us have never heard of: Beringia. This is the name for a vast swath of land that connected Asia to North America until only about ten thousand years ago. A little further back - around 25,000 years ago - it was huge, over 1.5 times the size of Alaska today and spanning 620 miles from north to south.

Learning about this place made me say “Whoa”. When most people, including me, think about the Bering land bridge, we don’t think of a giant landmass. This was right in the middle of the Ice Age, and so we usually think of glaciers and ice. But there was no ice here (the glaciers were further east over top of Canada), just endless miles of grasslands and steppe with a climate slightly cooler than, but otherwise similar to, the Alaska of today. Our perspective is biased by what we know about the world of today, and so we think of a literal bridge possibly covered in ice.

After Beringia, I started digging even more and exploring other parts of the prehistoric Earth. As I did, it made me ask even more questions about the somewhat tenuous status of our Earth today and the concerns over climate change. The Earth has changed dramatically, and many times, since just the last Ice Age. All of those changes were caused by nature, while the last hundred years of change have been mostly caused by humans - but either way, there’s still a lot we can learn.

Science is the process we use for this kind of learning. We propose theories, evaluate them, and then either uphold or falsify the theory. Climate science seems to have a somewhat unique place amidst the pantheon of scientific fields. Nowhere else does the public constantly hear the refrain of scientific consensus. Nowhere else are lines of certainty drawn so tightly around future predictions. Nowhere else does the scientific evidence directly construct public policy so clearly in the public eye.

There’s a set of viewpoints put forward to the public on our changing climate that always go together. A Danish economist named Bjorn Lomborg calls this “The Litany”, and it goes as follows: the climate is warming, humans are the cause, the effects will be devastating, and we must act now and effect dramatic policy changes, usually by government edict.

None of these statements are a problem, but as part of The Litany they are tightly coupled to each other and to the public’s perception of science when perhaps they should each stand alone. That the climate is warming and that we’re the primary cause is squarely within the bounds of science. Future predictions about the effects of these changes are still science too, but they are, by definition, as yet unproven theories. And all of the public debate, laws, tax structures, government programs, and cultural changes we might make to effect change are only policy ideas, somewhat informed by science.

Why do we always put these together? And why does climate science and climate change seem to hold such a unique place in the public’s view of science? Perhaps more importantly, is our perception of this issue getting in the way of progress; in fact, if everyone is a part of the consensus group and thinks in the same way, how do we even know if we’re going in the right direction? As we’ll see, some of our well-intended thoughts and solutions to climate and our warming planet may not have the effects we hope.

Somehow, the world of the last Ice Age led me down a rabbit hole towards exploring the process of science and its relationship with society. So as much as this focuses on the amazing world we live in, it’s even more about the norms of science, the roles of skepticism and heterodox theories, and what science should do for us.

The Last Glacial Maximum

Let’s start out by moving through history from the last Ice Age to now. This tumultuous stretch of millennia takes us from the last Glacial Maximum, about 26,500 years ago, to today. We all think we know about glaciers and Ice Ages - at least we know the words. But there are some utterly incredible ways in which the world has changed that are either misunderstood by most people or not really discussed.

And of course, this time period also completely includes the Holocene Epoch, from about 11,000 years ago to present. Its name derives from its main feature: the rise and impact of humans on Earth. Interestingly, the Holocene coincides with what’s called MIS1: our current warm climate in the long history of alternating warm and cool paleoclimates that we know about.

Another spark for trying to understand this period of history more thoroughly was reading Harari’s incredible book Sapiens. Harari describes three Revolutions that are the defining points in humanity’s history: the Cognitive Revolution (70,000 BCE), the Agricultural Revolution (10,000 BCE), and the Scientific Revolution (1500-1800 AD). The Agricultural Revolution coincides with the Holocene era.

But to get to the Holocene, we’ll start with the Glacial Maximum and walk forward from there. Check out the below simulation of the earth from 21,000 years ago through present, and then moving forward through an estimation of the future until 3000 AD.

Pay keen attention both to where the white (glaciers) is and where the land is (there are some zoomed-in videos of specific areas too). The yellow outlines specify the current coastline around the world, and if you follow carefully, especially in the past, you’ll see a lot more land on the earth. We’ll get to all that land shortly.

Geologists and other scientists have done a lot of digging into Earth’s past and have a reasonable sense of this big picture over time. They’ve accumulated evidence across geologic, chemical, and biological domains to show how the climate and the earth have changed. A lot of this evidence is actually still pretty new. Even something as widely known and accepted today as Plate Tectonic Theory is not that old. As a theory, it’s only about a hundred years old. Wegener, who first proposed some of the ideas, was widely criticized and the theory wasn’t accepted until decades later. And it wasn’t validated by observation and evidence until the 1950s and 1960s, when seafloor spreading was discovered along ocean ridges. Chemical evidence, based on measuring different isotopes of oxygen in ice cores, is also from the 50s and 60s.

Eons and Epochs

We’ve used all of this different evidence from biology and geology and chemistry to divide the history of the earth into different time slices. The longest of these is the Eon, followed by the Era, the Period, and the Epoch. Here’s how that works out for us; we live in:

  • the Phanerozoic Eon, from 541 million years ago to present
  • the Cenozoic Era, from 66 million years ago to present
  • the Quaternary Period, from 2.5 million years ago to present
  • the Holocene Epoch, from about 11,700 years ago to present

The last glacial maximum was nearing the end of the Pleistocene Epoch, which took up the entire rest of the Quaternary Period, lasting 2.5 million years. When you hear people say “the Last Ice Age”, what they mean is the Pleistocene. Glaciers were the predominant feature over those couple million years, with a recurring cycle of glacial and interglacial periods. These seem to roughly correspond to something called Milankovitch cycles. These cycles describe how changes in Earth’s orbit affect how much solar radiation the Earth receives. The primary effect acts on a 100,000 year period and while there are still significant holes in our understanding, these cycles align with the glacial periods of the last Ice Age.
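One toy way to picture those overlapping orbital cycles is to superimpose sinusoids at their approximate periods. This is purely illustrative - a sketch, not real orbital mechanics - and the relative weights below are my own invented assumptions, not measured values:

```python
import math

# Approximate Milankovitch periods in years; the weights are invented
# purely for illustration and are NOT physically measured values.
PERIODS = {"eccentricity": 100_000, "obliquity": 41_000, "precession": 23_000}
WEIGHTS = {"eccentricity": 1.0, "obliquity": 0.55, "precession": 0.35}

def insolation_anomaly(years_ago: float) -> float:
    """Sum of sinusoids standing in for orbital forcing (arbitrary units)."""
    return sum(
        WEIGHTS[name] * math.cos(2 * math.pi * years_ago / period)
        for name, period in PERIODS.items()
    )

# Sample the toy forcing across the last glacial cycle.
for t in range(0, 125_000, 25_000):
    print(f"{t:>7} years ago: {insolation_anomaly(t):+.2f}")
```

The point of the sketch is just that three regular cycles, superimposed, produce an irregular-looking quasi-periodic signal - which is part of why matching cycles to glacial periods is hard.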

This is the most recent of five major Ice Ages in the Earth’s history, by the way. It’s also the shortest (so far). The second Ice Age, about 650 million years ago, was really rough. The Snowball Earth hypothesis is an open scientific debate here, with the premise being that the entire surface of the earth actually froze over. There would have been a lot of skiing available, but not much else to see.

The Glaciers

There would have been a lot of really good skiing during the last Glacial Maximum too, especially in Canada and Europe. The glaciers didn’t cover everything uniformly at different latitudes; instead they formed into two primary, very large ice sheets.

The first of these was called the Laurentide, and it’s the one we usually think of when we think of glaciers. It covered all of Canada and even sunk down over the Great Lakes and into Minnesota, Wisconsin, and Michigan. It gobbled up a lot of New England and would have covered Boston and most of New York state under at least hundreds of feet of ice. Further north in Canada it would have been miles deep, and some small remnants of it do still exist in Canada.

In the meantime, the Fenno-Scandian ice sheet spread out from Scandinavia to over the British Isles, northern Europe and some of Russia. It’s interesting to note that a lot of Siberia and Alaska were not covered by glaciers, they were simply alpine desert or tundra. Siberia today is not a terribly friendly place to live, and average global temperatures back then were approximately 10 degrees lower than today. Siberia has been a cold place for a very long time, but it wasn’t covered by glaciers. Meanwhile on the other side of the world, the tropics would still be tropical, with palm trees and coral reefs. They just wouldn’t be quite as warm and wet as they are today.

Charismatic Megafauna

It was only 25,000 years ago, but the defining wildlife of the planet was a lot different back then too. Everyone has heard of the Woolly Mammoth. The last of them died out probably only about four thousand years ago on Wrangel Island. There was the Saber-toothed tiger too, my son’s favorite. But the true diversity of the megafauna that existed only a few thousand years ago is staggering:

That list is kind of crazy, and fun, and it’s nowhere near complete. The megafauna of the last 100,000 years or so was pretty incredible, but most of them are gone now. This is called the Quaternary Extinction Event and it’s been ongoing for a little over 100,000 years. One of the most interesting things about this extinction is that it’s focused primarily around the diversity of megafauna. It’s certainly the case that large animal fossils are easier to find and track than other kingdoms, but it may also be the case that megafauna have simply been more affected.

The term megafauna just means “big animals”. There are different reasons animals can get big. The most commonly known reason is something called Island Gigantism, where relative isolation on a smaller landmass will cause a species to grow larger. This is true of the New Zealand Moas and some pretty rad looking insects. But colder temperatures can also cause body size growth. This is called Bergmann’s Rule, which states that animals in colder climates will grow larger based on a more favorable surface area to volume ratio. In other words, it’s easier to stay warm.
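Bergmann’s Rule falls out of simple geometry: for a sphere, the surface-area-to-volume ratio is 3/r, so a bigger body sheds proportionally less heat through its surface. Here’s a minimal sketch; the radii are invented stand-ins for illustration, not real animal measurements:

```python
import math

def surface_to_volume(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere; simplifies to 3/r."""
    surface = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface / volume

# Idealized animals as spheres (radii invented purely for illustration).
for animal, radius in [("vole-sized", 0.05), ("wolf-sized", 0.4),
                       ("mammoth-sized", 1.5)]:
    print(f"{animal:>13}: SA/V = {surface_to_volume(radius):6.1f} per meter")
```

The ratio drops as bodies grow, so in a cold climate the bigger animal loses proportionally less heat - which is the whole intuition behind the rule.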

Large animals also usually use a particular set of traits aligned with something called K-selection. Different species use different kinds of survival strategies to grow their population over time. K-selected species tend to be larger, have longer lives, and produce a smaller number of children that require lots of care after birth. Humans, elephants, and whales are all K-selected. On the other hand, R-selected species tend to spread more quickly and exploit ecological niches, produce lots of offspring, and die sooner. It’s interesting to note that humans are K-selected but that in a lot of ways we behave like R-selected species. That’s a different way to think about the adaptation of a larger brain size; it’s the trait that lets a K-selected species behave like an R-selected one.

There’s still a ton of specific questions around the megafaunal extinction, and there are competing theories about the primary cause. Most of us grow up thinking humans were the cause, but even in 10,000 BC there were still only about 4 million people spread over the entire world which means they had a lot of killing to do across a lot of different species. Up until the 19th century, tens of millions of bison still roamed North America, despite there being millions of humans around for millennia. We didn’t crash the population until we gained the technology to do so (guns).

The other primary theory is around climate change and the significant warmup since the Glacial Maximum. The world used to be significantly colder, and larger animals were a byproduct of that climate. As the Earth has warmed up, those same animals would have to adapt or die and because megafauna are all K-selected, they would have been slow to adapt.

Before the ice started retreating, all of these crazy big animals lived everywhere. Not all of them were everywhere, they each had their own ranges, but it’s hard to imagine that there were cheetahs in America or lions in Europe (both true). There were hippos and rhinos in Europe too, and camels in Arizona and Mexico. Imagine driving across Texas and seeing a camel herd grazing nearby.

These big animals didn’t just live where we live today either, they lived in places that don’t even exist anymore.

Glacialands

Rewatch the Blue Marble video again and you’ll see that the coastlines were a lot different. Florida was twice as wide as it is today. England and Ireland were connected to mainland Europe and the North Sea was mostly land. Alaska and Russia weren’t just connected but formed an entire landmass of their own. Where’d it all go?

Those ice sheets we talked about were massive, and they contained an enormous amount of water. In the last 25,000 years, most of it has melted - some of it very quickly - and it had to go somewhere. That somewhere is the ocean obviously, and the sea level has risen approximately 400 feet during that time.

Before we go into how the ice melted, let’s talk about some of the regions of land that used to exist but are gone now. These lands are all defined by the extent of the continental shelves. In some places, the shelf is right off the coast and the ocean gains depth very quickly, but in other places, often where there are lots of islands, the water is not that deep. Much of that land was exposed during glacial periods. These lands haven’t been classified together anywhere that I can find, so I’m grouping them together and calling them Glacialands. Here’s a list of the biggest:

  • Beringia. We’ve already said it, but Beringia was huge - over 600,000 additional square miles. That’s more extra land than all of Alaska today. Beringia was 620 miles from north to south. Which means that Siberia and Alaska were actually connected by a land bridge 620 miles wide. Despite existing during the glacial period, most of this area was actually land and not glaciated.
  • Doggerland. A decent amount of the North Sea was land, and a good amount of it was covered by glaciers. But after the glaciers started receding, the land remained for a long time. Great Britain was well-connected to Europe as recently as 6,000 years ago (here’s a great map).
  • Sundaland. This was a decent chunk of land in and around Indonesia and Malaysia. One of the interesting things about this area is that it helps to understand and define the Wallace Line that separates it from Wallacea. Ever wondered why marsupials dominate in Australia but not in Asia? This is the line that separates the Asian and Australian biogeographic areas.
  • Sahul. Not nearly as large, but it’s worth noting that Papua New Guinea, Australia, and Tasmania were all connected.
  • Lake Agassiz. Ok, this isn’t a land strictly speaking, it was an enormous glacial lake larger than all of the Great Lakes put together. It lasted up until about 5,000 years ago and affected everything from the present day lakes of Canada and Minnesota (which are small remnants) to the flow and path of the Mississippi River.

There were lots of other differences too, for example the Philippines, Taiwan, and Japan were all connected to Asia. It’s hard to imagine what we’re missing in our history that’s now hidden under a couple hundred feet of ocean. All of these lands were large enough for entire species to live and thrive. Our current coastal areas would have been far inland and a couple hundred feet above sea level, so there may have been significant differences in ecology. With the techniques we have today, we can barely scratch the surface (or plumb the depths, if you prefer) of what lived there.

And consider this: we’re talking about a time slice from 25,000 to about 5,000 years ago. Humans were alive and well and had populated six of the continents. Where do we naturally build towns and cities? We build on the coast or next to rivers. Almost anything from that era of our history is likely hidden under many feet of water. We have intriguing evidence of some cultures from up to 12,000 years ago at sites like Gobekli Tepe. Jericho has evidence of construction back to the same age, and one notable example that is under the water is Atlit Yam, a small 8,000 year old fishing village off the coast of Israel.

Never before have I thought about the importance of marine archeology.

But all of that land is gone now, submerged by massive ice melts over about 20,000 years. How did that happen?

Meltwater

Let’s start with a graph first that shows the sea level over this period of time:

Sea level rise since the glaciers

A few fits and starts in the middle, but the trendline climbs fairly steadily from 20,000-14,000 years ago and then again (but faster) from 11,000-7,000 years ago. Let’s pick apart the most obvious detail on the graph first, which is called Meltwater Pulse 1A.

This event was around 400-500 years long and reflects a massive and very quick change in sea level, presumably from a rapid melting of the ice sheets we discussed earlier. An interesting ongoing question is whether the meltwater came exclusively from the northern hemisphere’s ice sheets (usually called Heinrich events) or if Antarctica contributed (Antarctic Iceberg Discharge Event 6). Either way, it’s important to note that the rise came from the melting of glaciers over land. Sea ice, which makes up a lot of the Arctic, doesn’t contribute to sea level rise at all. It’s already in the ocean. The general population thinks a lot today about the vanishing Arctic Ice, but the primary potential causes for future sea level rise are the melting of ice on Greenland and Antarctica.

In any case, during Meltwater Pulse 1A sea level went up fast; between 50-80 feet over the whole event, or an average of about 2 inches every year. (For perspective, current estimates of sea level rise this century are 0.1 inch or less per year.) There were several other pulses too, which you can see on the graph as well, but 1A was the biggest.
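The arithmetic above is easy to check. Using the ranges quoted (50-80 feet of rise over roughly 400-500 years), a quick sketch gives the plausible bounds:

```python
# Back-of-the-envelope rates for Meltwater Pulse 1A, using the ranges
# quoted above: 50-80 feet of sea level rise over roughly 400-500 years.
INCHES_PER_FOOT = 12

def rise_rate_in_per_yr(total_feet: float, duration_years: float) -> float:
    """Average sea level rise in inches per year."""
    return total_feet * INCHES_PER_FOOT / duration_years

low = rise_rate_in_per_yr(50, 500)   # slowest plausible combination
high = rise_rate_in_per_yr(80, 400)  # fastest plausible combination
print(f"Meltwater Pulse 1A: {low:.1f} to {high:.1f} inches per year")

# Compare against the ~0.1 inch/year ballpark cited for this century.
print(f"Roughly {low / 0.1:.0f}x to {high / 0.1:.0f}x the modern estimate")
```

That works out to 1.2-2.4 inches per year, consistent with the “about 2 inches every year” average above, and one to two orders of magnitude faster than current-century estimates.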

250 Centuries

So, a lot of ice melted until the trend settled down about 8,000 years ago and sea level has been comparatively static since then. Our coastlines have thus been fairly static too. As the glaciers retreated, they carved up the earth, building beautiful lakes in Canada and Minnesota and depositing incredibly rich soil all across the midwest of the United States. Temperatures increased significantly all around the world (an average of 10 degrees, although it varies regionally). Dust levels, a major problem during the dry glacial period, decreased too. The ecology of the whole world changed rapidly; steppe tundra had previously been the most extensive habitat, and it rapidly changed regionally into forests, rangeland, and deserts. The Quaternary Extinction Event continued, enhanced to some unknown degree by humans.

25,000 years is a long time. It’s 250 centuries. The industrial age has only occurred over the last 2 or 3, and real human civilization has only been around for the last 30 or 40.

It’s a long time.

The danger is that it’s easy to compress timelines like this and perceive them as constant and gradual. They weren’t. We’ve been sweeping across these 250 centuries pretty quickly and providing highlights, but keep in mind there were lots of smaller and regional changes and events throughout this time. So here’s yet another list, moving backwards in time, of smaller climate change events in the last 250 centuries:

A lot of the above list are known as Bond Events, and they follow a cycle of about 1000-1500 years. These, in turn, are a possible subset of D-O events that have a longer history and a normal period of about 1500 years. These are two of the many cycles that seem to exist and relate to changes in climate. We already mentioned Milankovitch cycles, but there’s a load of them. So many, in fact, that they’ve got their own Wikipedia page of Climate Oscillations.

Now I Said All That To Say This

Up until a few weeks ago, I didn’t know all that much about any of the above. I knew there were glaciers and that the mammoths died, which I think is what most people know, but that’s it. But now I feel like I’ve discovered whole new lands (literally), new branches of evolutionary biology inquiry, and more understanding of how paleoclimate is measured and estimated. (I didn’t dive into the scientific measures and techniques used to figure these things out, but follow some of the links and you’ll find it.)

This exploration has put the entire topic of climate change in a bit more perspective. It’s made me ask some questions about the current debate and conversation around climate change. These seem like perfectly reasonable questions to ask, but I’m not sure everyone will see it that way. As we’ll see, I believe that’s a problem with our politics, not with our inquiry.

Comparatively speaking, we’ve had stable global climate for the last 8,000 years or so. But in just the 16,000 years before that, we also have an extensive history on our planet of mind boggling change, including sea level changes of hundreds of feet, glacial melts, massive extinctions, volcanos, and temperature changes of ten degrees or more. And while we understand a lot about these processes, we definitely don’t understand everything. Aside from possibly the human brain, the earth ecosystem is the most complicated system we have ever faced.

We’re able to measure and understand what has happened in the distant past, but usually only at the level of millennia and centuries; rarely in decades or years. In the more recent past - in the past few decades or perhaps a hundred years at most - we’ve had much more accurate, frequent, and widely distributed data on some of these phenomena. And we also have highly complex and fairly amazing atmospheric models that work to predict how the earth will continue to change in the future.

And all of these models, data, and consensus among those in the field point to the Earth warming over the coming decades and centuries, with us - and CO2 specifically - as the culprit.

CO2 has become the entire focus of everything around the climate change debate. It’s all about CO2 and temperature. A lot of other issues are intertwined and mentioned in passing, but they always seem to be afterthoughts, symptoms, or secondary effects.

So what’s the big deal? Why is our CO2 output in the last century so important?

Curves and Complications

Let’s start with the simplest and most incontrovertible graph of CO2 change that most people have never seen. The Keeling Curve:

Charles Keeling started keeping measurements from the Mauna Loa Observatory in Hawaii in the 1950s, and the record hasn’t stopped since. It is precise, capturing even the seasonal variability, and represents a reasonable measure of the atmosphere away from any large human populations (it isn’t increasing because of some urban heat island effect).

CO2 is increasing dramatically. This curve is incontrovertible. It’s easy to understand. And it’s one of the simplest things you’ll see in climate science, because the interactions between the atmosphere, the planet, and the sun are nuanced and complex.
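To get a feel for the shape of the curve, here’s a toy decomposition into a slowly accelerating trend plus a seasonal wiggle. The coefficients are rough values I’ve assumed for illustration, not the official Scripps fit:

```python
import math

# A toy model of the Keeling Curve: an accelerating trend plus a seasonal
# cycle. Coefficients are rough, hand-picked approximations, NOT the
# official Scripps/NOAA fit to the Mauna Loa record.
def co2_ppm(year: float) -> float:
    t = year - 1958                            # record begins in the late 1950s
    trend = 315 + 0.8 * t + 0.0125 * t ** 2    # ppm: baseline + accelerating growth
    seasonal = 3.0 * math.cos(2 * math.pi * (year % 1.0))  # ~6 ppm annual swing
    return trend + seasonal

for year in (1958, 1980, 2000, 2019):
    print(f"{year}: ~{co2_ppm(year + 0.5):.0f} ppm")
```

Even this crude sketch reproduces the two features everyone notices in the real graph: the sawtooth of the seasons (the Northern Hemisphere’s plants breathing in and out) riding on top of a steadily steepening climb.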

Greenhouse Gases

CO2 is a greenhouse gas. Greenhouse gases are good.

You might have gone, “Wait, what?”, so let me say that again. The greenhouse effect is a good thing. It lets our atmosphere capture energy from the sun and warm the Earth to a higher temperature than it would be otherwise. Without the greenhouse effect, the Earth’s temperature would be around 59 degrees F colder. Liquid freshwater would only exist near the equator.

The greenhouse effect works by radiative forcing, which describes how much of the sun’s energy is absorbed into the atmosphere or reflected back into space, and greenhouse gases are the key ingredient. CO2 is the second most important of these, behind water vapor, and it’s actually not even close. Water vapor - clouds and humidity - is far and away the most important component of the atmosphere to influence the greenhouse effect.

Of course, the greenhouse effect is only a good thing if we’re in the right zone. Too cold? No good. Too hot? No good. And we have other planetary examples of these close by. Too cold is equivalent to Mars, where there’s very little atmosphere to capture any heat. Too hot is Venus, where the greenhouse effect has created a 900 degree surface temperature.

So when we introduce more greenhouse gases into the atmosphere, the radiative forcing effect increases over time, and the atmosphere slowly warms up. While this sounds simple, there’s a lot going on. For example:

  • The Earth is a really big place, and thinking of it simply like this makes us believe that it’s uniform. But it’s not at all uniform. Global warming affects cold and dry places much more than it does warm and wet places. This means that global warming has a stronger effect at night, during the winter, and in cold places like the Arctic. In other words, it’s much more likely to cause mild winters than scorching hot summers.
  • The sun is the primary factor in this, but often seems lost in the fog of details around the causes of temperature changes. It’s important to remember that the whole greenhouse question is all about how much of the sun’s energy we capture. Consider the seasons. The difference between winter and summer is mostly caused by the axial tilt of the earth and the intensity/duration of the solar radiation we receive. If you’ll recall from earlier, axial tilt (or obliquity) is one of the contributing factors in the Milankovitch climate cycle.
  • There’s other correlations to solar activity that are linked to climate, like albedo. Albedo (which literally means “whiteness”) is a measure of how much a surface reflects solar radiation. Albedo plays a significant role in both global and regional reflection of solar energy.
  • Another example is sunspot activity. The Maunder Minimum, a long period of time with almost no sunspots, coincided with the Little Ice Age in Europe, although there’s questions around the exact timing and whether a potential climate relationship would be global or regional. Israeli physicist Nir Shaviv has recently published papers describing how much solar variability could contribute to climate change. He has argued that about half of 20th century warming was solar, while the other half was anthropogenic.
  • Measurement of something like “global temperature” is hard. There are many different temperature datasets from different sources, and most of them have calibration and adjustments built-in. For example, there are differences between buoy-based ocean temperatures and ship-based temperatures that require correction for unromantic factors like engine room heat that causes ship temperatures to read slightly warmer. There’s also significant regional variation; while some places are getting warmer, others are getting colder. My favorite regional example of this, simply for the name, is the North Atlantic Cold Blob. The Cold Blob is also a great example where a large phenomenon like global warming is causing a regional trend of cooling.
  • Prediction of changes is hard because we need to understand feedback loops and they are an entire branch of computations and predictions unto themselves. As the climate changes over time from CO2 and other inputs, secondary effects occur that can either dampen or increase the nature of the change. Examples of these feedback cycles include everything from rainforest loss and desertification to peat storage and decomposition. Freeman Dyson has a prosaic but fascinating example called root-to-shoot ratio. As CO2 increases, plants respond by naturally increasing the amount of root growth below ground compared to their shoot growth above ground, which makes sense since they don’t need to be as efficient to capture the CO2 they need. In this way, plants may act as a small damping effect, because increased root growth acts as a natural carbon sequester when atmospheric CO2 increases.
  • The UN’s IPCC reports starting with AR4 in 2007 have tried to take into account feedback loops when accounting for future change in their models. This introduces additional variables and increases the error bars for change.
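For CO2 specifically, there is a commonly used simplified expression for its radiative forcing (Myhre et al., 1998): ΔF = 5.35 × ln(C/C0) watts per square meter, relative to a reference concentration C0. A quick sketch:

```python
import math

# Simplified CO2 radiative forcing expression (Myhre et al., 1998):
#   dF = 5.35 * ln(C / C0)  [watts per square meter]
# relative to a reference concentration C0 (~280 ppm preindustrial).
def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    return 5.35 * math.log(c_ppm / c0_ppm)

# Forcing at a few concentrations relative to the preindustrial level.
for c in (315, 410, 560):
    print(f"{c} ppm: {co2_forcing(c):.2f} W/m^2")
```

The logarithm is the interesting part: each doubling of CO2 adds the same increment of forcing (about 3.7 W/m² under this expression), so going from 280 to 560 ppm matters as much as going from 560 to 1120 ppm would.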

So here’s the point: this shit is hard. A huge amount of fantastic and rigorous science has been done, but there’s still a lot of question marks too. Theories continue to be proposed, evaluated, and either upheld or falsified. Data is constantly evaluated in much the same way.

There’s a few easy concepts to remember as key points (like the Keeling Curve, the greenhouse effect, and radiative forcing), but it’s easy to rabbit hole in a particular area very quickly.

To that end, keep in mind that this diatribe isn’t meant to be exhaustive, rather it’s simply to provide some additional insight into the landscape beyond the headlines we all hear, which is simply: “CO2 is increasing, the world is getting hotter, and it’s all bad.”

Is It Bad?

CO2 is increasing. The world is getting hotter on average (more so in cold places, and at night, etc). Before we talk about possible solutions to this problem, let’s first ask an implicit question that we often forget.

Is it bad?

First, there’s the Goldilocks Principle, which we already discussed without mentioning. Just like our porridge, we want a planet that’s not too cold (Mars) and not too hot (Venus), and that has the right ingredients. It’s important to evaluate how large these parameters are - the range between Mars and Venus is pretty big - but also worth noting that throughout its history, and through many much larger climatic swings than what we’re experiencing today, the Earth has remained a Goldilocks planet.

There are also some different perspectives on exactly how to answer the question, “Is it bad”. In particular, there’s a naturalist perspective and a humanist perspective. Freeman Dyson describes the difference as follows:

Naturalists believe that nature knows best. For them the highest value is to respect the natural order of things. Any gross human disruption of the natural environment is evil. Excessive burning of fossil fuels is evil. Changing nature’s desert, either the Sahara desert or the ocean desert, into a managed ecosystem where giraffes or tuna fish may flourish, is likewise evil. Nature knows best, and anything we do to improve upon Nature will only bring trouble.

The humanist ethic begins with the belief that humans are an essential part of nature. Through human minds the biosphere has acquired the capacity to steer its own evolution, and now we are in charge. Humans have the right and the duty to reconstruct nature so that humans and the biosphere can both survive and prosper. For humanists, the highest value is harmonious coexistence between humans and nature. The greatest evils are poverty, underdevelopment, unemployment, disease, and hunger, all the conditions that deprive people of opportunities and limit their freedoms. The human ethic accepts an increase of carbon dioxide in the atmosphere as a small price to pay if worldwide development can alleviate the miseries of the poorer half of humanity. The humanist ethic accepts the responsibility to guide the evolution of the planet.

If you’re a naturalist, then it’s true that all of the things happening because of our influence are bad. We have irrevocably altered the path of nature, not just in the amount of CO2 we’ve put into the atmosphere, but in a multitude of ways. Single crop agriculture for hundreds of miles is not natural, nor are massive swaths of asphalt, cities, or even domestication. The amount of truly wild spaces across the globe has shrunk dramatically.

The problem with the naturalist argument is that it’s somewhat fatalistic, and it seems to break down at the extremes. People with this ethic will agree that nature knows best, but generally won’t be very excited about extreme examples of nature doing its thing, like the Snowball Earth state we mentioned in Earth’s distant past. Similarly, their naturalism can be tested and overridden by human goals easily. Naturalists will reject any human disruption of the environment as it is today, but when confronted with a growing population of 7 billion people, they rightly wouldn’t be willing to discuss or take any sort of extreme action to drastically lower the human population and “right the imbalance” between humans and nature.

The naturalist ethic also isn’t actually nature-first. Instead, naturalists tend to imagine a sort of romanticized, bucolic view of pre-industrial times when humans and nature lived more in harmony. It’s easy to do this, and humans always seem to romanticize a bygone era. But the pre-industrial human existence was a far more brutal life than what anyone would strive for today. It’s also easy to forget the hard, brutal and often gross parts of nature. We marvel at the beauty of mountain lakes, tropical beaches, or animals grazing the savannah, but we neglect the virulence of the tsetse fly, the pestilence of malaria, polio and influenza, or the disgust of parasitic worms. All of these are also a part of nature.

There’s another philosophy that wants to capture a harmony between nature and humans too: humanism. This sort of pastoral beauty and coexistence of nature and humans is a goal of humanism, along with the improvement of human lives everywhere. Freeman Dyson also talks about being heavily influenced by his home country of Britain where, over the last thousand years he claims, any true wilderness has been almost completely eliminated and replaced with a beautiful, natural, thriving but cultivated “wild” environment. The most interesting insight into this perspective, as we’ll see, is that it can be created most easily by prosperity and technology.

Given the humanist ethic, it certainly isn’t bad for humans to improve their environment to suit themselves, including eliminating diseases like malaria. It also isn’t bad in itself to support a thriving, growing, technologically advanced population. But it is also true that humans have a track record of ignoring all problems in the name of progress. Industrial humans have made places nearly unlivable at times. We’ve wiped out ecosystems without looking back.

Are we doing this now, on a global level, by warming the planet? This is the great fear of the environmental movement. The environment is certainly changing. Humans and nature are adapting in different ways and with different success rates. But the future has some degree of uncertainty, so let’s look at some of the predictions.

Trade-Offs

The most recent report from the IPCC, the United Nations Intergovernmental Panel on Climate Change, was released in late 2018 and titled The Special Report on Global Warming of 1.5°C. The whole thing can be found here, and it tries to highlight the potential differences between a world that warms up 1.5 degrees C and one that warms up by 2 degrees C. It’s especially noteworthy because it’s the origin of the most recent hype around humanity having “only twelve years” to adjust course and save ourselves. This view is what launched the Green New Deal. Here are a couple of key quotes from the IPCC’s Summary for Policymakers:

In model pathways with no or limited overshoot of 1.5°C, global net anthropogenic CO2 emissions decline by about 45% from 2010 levels by 2030 (40–60% interquartile range), reaching net zero around 2050 (2045–2055 interquartile range). For limiting global warming to below 2°C CO2 emissions are projected to decline by about 25% by 2030 in most pathways (10–30% interquartile range) and reach net zero around 2070 (2065–2080 interquartile range).

Avoiding overshoot and reliance on future large-scale deployment of carbon dioxide removal (CDR) can only be achieved if global CO2 emissions start to decline well before 2030 (high confidence).

We’re back to the CO2 focus again. Hopefully after all of the history and concepts we’ve covered so far it’s clear that this is a wildly complex topic, and this whole vast sphere of study has a lot to unpack. The Earth is definitely getting warmer, but simply painting the canvas with a global average temperature increase doesn’t make for a very realistic picture. The history of our planet, even in just the last 25,000 years, has had many extremes and changes.

Yet for all this complexity, the landscape of dialogue still seems incredibly simple. Along with the CO2 headlines we mentioned before, we’ll add some more examples to the unending, repeated litany on climate change: Fossil fuels are bad, CO2 increases need to stop, the Paris Accords and the Kyoto Protocol, polar bears are dying, we’re next, and anyone that disagrees is a “climate denier”.

There’s a word for a negative litany like this: alarmism. While it’s traditionally been a journalistic technique (for decades now), it now seems firmly tied into politics as well, where candidates use extreme cases or push the very worst of the other side to advance their own arguments.

Examples of alarmism around climate change seem pretty frequent, but often oversimplify a more complex situation. Polar bear populations, for example, have been stable or increasing over the last 30 years, due mostly to changed behavior and laws around hunting, which had been the greatest threat to polar bears.

The most recent alarmist-sounding call from Green New Deal democrats is that “we have 12 years left”. 2030 has become a line in the sand for many, based originally on the most recent special report from the IPCC. Sir David Attenborough even said, “(this may lead to a) collapse of our civilizations and the extinction of much of the natural world”.

Here’s a link to the complete Summary for Policymakers referenced above from the most recent report. And here’s Chapter 3, in particular the section starting on page 253: “Avoided Impacts and Reduced Risks at 1.5°C Compared with 2°C of Global Warming”. The report does call out risks and grave concerns, but it emphatically does not call out the end of our civilization or of the world. It’s actually not even significantly different from the previous report.

To be clear, there are significant economic and ecologic risks ahead, and they vary to some degree based on how much warming occurs. We need to consider them carefully. We need to understand the risks, as well as just how confident we are in our view of the future. The IPCC report itself does a great job of specifying high/medium/low confidence for its projections and correlations. But politicians just seem to botch this, and only give the alarming headline. It’s part of the Twitterization of our discourse.

One of the biggest impacts that climate change will have is purely economic. It will affect our GDP and our well-being in the future, and the estimates measure in the trillions of dollars (see Chapter 3 of the IPCC report again). But unfortunately, we don’t often see a comparison of this cost to the cost of our potential fixes.

Let me say this another way: climate change will in the future probably cost us trillions of dollars, but the fixes proposed right now to avoid those risks will also cost us trillions of dollars. Which is better?

The most practical treatment of this question is the Copenhagen Consensus, headed by Bjorn Lomborg. He worked with a bunch of economists and others to provide a practical ordering of importance of the world’s problems. We know there are a huge number of problems in the world. But given a concrete sum of money (in this case $50 billion), how can we use that money to do the most good in the world? Every time this study has been done, with groups ranging from Nobel-prize winning economists to college students, climate change efforts end up near the bottom of the list. Why? Because we can do a great amount of good by fighting climate change, but it’s incredibly, exorbitantly expensive. For far less money we could almost completely eradicate malaria, which still infects hundreds of millions of people each year and kills hundreds of thousands.

Back to the “we have 12 years left” line. The Green New Deal was launched around the same time. While it’s light on specific policy, it includes a lot of additional left-leaning policy under the focus of climate change. Things like universal health care and a jobs guarantee and the transformation of every building in America.

But if things are so dire that we only have 12 years left, why are we focused on such an agenda? Why isn’t nuclear power a huge part of the equation? How does a jobs guarantee help at all (it probably makes things worse)? And why are we posturing that transforming the United States will fix the problem when India, China, Russia and others are far more dependent on coal and related energy sources?

Al Gore says that climate change “offers us the chance to experience what very few generations in history have had the privilege of knowing: a generational mission; the exhilaration of a compelling moral purpose; a shared and unifying cause…” And a new crop of politicians say that this is our Space Race, our Apollo.

Wealth and Adaptation

But it’s not. We’re not united in cause. We don’t have a common opponent or even a common, concrete goal. The romance isn’t there either, as it is with a moonshot. Ted Nordhaus, an award-winning environmentalist writer, points out that if the issue really were that dire, the proposals being put forward aren’t far too radical, they’re far too modest. And even if we wanted to really change, the economic positions of populations like India and China wouldn’t allow them to follow us just yet. They need to get richer first, just like we did.

The richer the world gets, the better we’ll be able to deal with climate change. The richer the world gets, the more able we are to afford more expensive solutions like solar, wind, or nuclear. The richer the world gets, the more able we are to invest in the R&D costs that drive the economics of clean energy toward widespread adoption. The richer the world gets, the more we’ll be able to adapt to whatever climate change throws at us.

Our adaptation to the changing environment, something humans are incredibly good at, isn’t talked about enough in the context of climate change. A lot of the economic costs the IPCC discusses are focused on our methods of adaptation. Consider this example: over the 20th century when warming began to occur, have climate disaster death tolls increased or decreased?

![](https://wattsupwiththat.com/wp-content/uploads/2019/01/individual-climate-related-deaths.png)

As the world has gotten vastly richer and more advanced, we’ve been far more able to deal with droughts, floods, and other problems. And as we get richer in the future, we’ll be far better equipped to deal with the problems we encounter than we are now. We will continue to adapt well.

Politicians are trying to use alarmism to drum up a popular response and make this our generational mission. But if everything is a fire all the time, we lose our sensitivity to different kinds of threats.

And there are very valid scenarios of disastrous, cataclysmic, and fairly abrupt climate change. A shutdown of the ocean’s thermohaline circulation is one of the most concerning, but there are several ocean oscillations that can have significant and quick impact on the climate. Interestingly, the last time the thermohaline was interrupted by huge amounts of glacial meltwater, it helped cause the Younger Dryas cool period that lasted for a millennium. (A similar melting event also caused the Antarctic Cold Reversal previously.) A mass reduction in the Greenland ice sheet or the West Antarctic Ice Sheet would cause enormous sea level rise along with other changes. But the probabilities of these events happening in the next several hundred years are very low. (In fact, Antarctic ice sheets are currently showing net gains.)

So let’s eliminate the doomsday scenarios. They are still possible, but outlandish. The world is getting warmer, steadily but not disastrously. We should definitely be doing what we can about it, but the correct solutions are to economically incentivize the whole world towards clean energy use and to improve our technology. We can adapt to lots of changes - we’re really good at adapting - but we need a sense of what those changes are.

It turns out that we have a very good bellwether for how things may change: urban heat islands. Cities produce, absorb, and control a large amount of heat, a lot of it from incredibly prosaic things like asphalt and concrete. (It’s worth noting that moving cities from being predominantly dark surfaces to predominantly light surfaces would significantly affect the amount of solar radiation absorbed.) So cities end up several degrees warmer than the surrounding areas. As we said previously, warming affects cold areas more than warm areas, and thus more at night, and more in the winter. As cities grow larger, the effect increases dramatically. Houston’s nighttime surface temperatures increased 1.5 degrees over a ten-year period, while its daytime temperatures changed very little. Tokyo has gone up over 5 degrees in the last hundred years. Growing urban areas all over the world show the same trend.

The point of this is not to say that there are no ill effects to these increases; there obviously are. The point is that in large swaths of the population, we’ve already proven we can adapt to temperature changes in line with what’s coming. There is, of course, a huge difference between these changes existing only in cities and occurring over the entire world. But our adaptation in cities hasn’t just been complete, it’s been completely unnoticed. Cities have grown and thrived with hardly a thought to the rising temperatures. Only recently have we started to reduce the heat island effect in many places; London has plans in place to significantly decrease its heat island over time, a very good thing.

Mainstream Solutions

The litany of climate change concepts usually includes some commonly referenced solutions too, like wind and solar power. But the efficacy of the solutions we put forward is up for debate too. The future is wildly uncertain, something we can easily remember by looking back at the predictions of the past.

These policies almost always go along with the climate change debate:

  • Efficiency, efficiency, efficiency. Our cars need to be more efficient or become electric (thereby bringing another layer of efficiency). And everything we have that uses energy - from lightbulbs to shipping processes - needs to get more efficient.
  • Sustainable, renewable energy. In the canon of climate change, this means wind and solar power.
  • More efficient agriculture, including better genetic engineering, herbicides/pesticides, and practices like no-till farming.

There are a few holes in these policies, though, and they’re fairly big holes.

First, there’s Jevons Paradox. This is a frightening concept that not enough people have heard of, despite having held true since its conception. It states that when technology or policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), the rate of consumption of the resource rises because of increased demand.

The original variant referred to coal production and use in the 19th century, but it can be applied to almost anything. We made refrigeration more efficient and now middle-class households everywhere have 2 or 3 fridges in their home. Computing became more efficient, and the number of computers exploded. Cars became more efficient, and yet we drive more than ever.

Making our lives more efficient won’t limit our use of natural resources; it will increase it. The policy of efficiency pushed by environmentalists will increase the amount we use of all our natural resources.
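To see how the paradox can arise, here’s a toy sketch using a constant-elasticity demand curve. All of the numbers (the base demand, the elasticities) are illustrative assumptions, not measurements:

```python
# A minimal sketch of the Jevons rebound effect. A technology gets more
# efficient, the effective price of the service it provides falls, and
# demand for that service rises according to an assumed price elasticity.

def resource_use(efficiency, elasticity, base_demand=100.0):
    """Resource consumed when one unit of 'service' (miles driven, lumens,
    compute cycles) costs 1/efficiency units of the underlying resource."""
    effective_price = 1.0 / efficiency
    service_demanded = base_demand * effective_price ** (-elasticity)
    return service_demanded / efficiency  # resource actually burned

# Double the efficiency and see what happens to total resource use:
for elasticity in (0.5, 1.0, 1.5):
    before = resource_use(1.0, elasticity)
    after = resource_use(2.0, elasticity)
    print(f"elasticity {elasticity}: {before:.0f} -> {after:.0f}")
```

When demand is inelastic (0.5 here), efficiency gains do cut resource use; at elasticity 1 the gains are exactly eaten; above 1, total consumption actually rises, which is the full Jevons “backfire”.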

Next, let’s consider wind and solar energy production, the usual suspects for fossil fuel replacements. They have the potential to do a lot of good, but they have serious problems too, like reliability and the need for battery or natural gas backups. France has gotten the vast majority of its energy from nuclear plants for decades at a low cost and with a high reliability and safety record; it is also a net exporter of energy to the rest of Europe. When France started incorporating more solar and wind, following Germany’s lead despite Germany having higher energy prices, both its CO2 output and its price of energy went up! (Germany is, in fact, a cautionary tale for the choice of going anti-nuclear before focusing on CO2 emissions.) In fact, the two countries that have gotten closest to net-zero carbon emissions are France and Sweden, and their focus has been nuclear and a nuclear/hydroelectric mix respectively.

Solar and wind also take up massive amounts of real estate for power generation, real estate that could otherwise be used for wildlife or farmland. When solar farms are created in the desert, endangered species and lots of other wildlife are cleared out. Meanwhile, wind farms are threatening insect and bat populations.

Said another way, wind and solar’s energy density is very low. It turns out that, despite what most people hear, energy density is one of the principal drivers of environmental impact. On the other hand, the energy density of nuclear plants is incredibly high. A soda-can-sized piece of uranium holds enough energy to power the entire lifetime of a high-consumption American.
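The soda-can claim is easy to sanity-check. Every input below is a rounded assumption (can volume, uranium density, fission yield, average US per-capita power draw), so treat the output as order-of-magnitude only:

```python
# Back-of-the-envelope check: does a soda can of uranium cover a lifetime
# of American-scale energy use? Assumes essentially complete fission of
# the fuel, as in a breeder cycle; a conventional once-through reactor
# extracts far less.

CAN_VOLUME_CM3 = 355               # standard soda can
URANIUM_DENSITY_G_PER_CM3 = 19.1   # metallic uranium
FISSION_J_PER_KG = 8.0e13          # ~energy from fully fissioning 1 kg
US_PER_CAPITA_POWER_W = 10_000     # total primary energy, ~10 kW/person
LIFETIME_S = 80 * 365.25 * 24 * 3600

uranium_kg = CAN_VOLUME_CM3 * URANIUM_DENSITY_G_PER_CM3 / 1000.0
energy_j = uranium_kg * FISSION_J_PER_KG
lifetime_j = US_PER_CAPITA_POWER_W * LIFETIME_S

print(f"{uranium_kg:.1f} kg of uranium covers "
      f"~{energy_j / lifetime_j:.0f} lifetimes of primary energy")
```

Under these assumptions the can covers the claim with room to spare, even if only a few percent of the fuel is actually fissioned.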

And then there’s the safety record. Wind turbines actually kill more people than nuclear. Nuclear is the safest energy option we have. The US Navy has a flawless track record over many decades on their ships. And remember the Fukushima disaster? There were no associated radiation casualties. Zero. The WHO reported that “the residents of the area who were evacuated were exposed to so little radiation that radiation induced health impacts are likely to be below detectable levels.” Nuclear gets a bad rap because of the weapons. Most of the nuclear plants we have are still designs that came out of a previous era when weapons-grade material was a happy byproduct. But it’s possible to create nuclear plants without having weapons-grade fissile material as an output. If we fix the public perception of nuclear, then it becomes the primary solution for energy concerns, not solar and wind.

Last, there’s the problem of agriculture. I don’t think most people realize the incredible advances in agricultural productivity over the last couple hundred years. As late as the 1960s and 1970s, people thought millions would starve due to overpopulation. Ehrlich wrote his dystopian The Population Bomb based on the Malthusian predictions of two hundred years prior. But Ehrlich and others didn’t count on the changes in the production capacity of farming, brought on by heroes like Norman Borlaug and the Green Revolution. Rice yields tripled in India from the 1960s to the 1990s, and similar stories follow wheat and other staple crops all over the world.

That said, these dramatic increases ride on the backs of technologies like herbicides, pesticides, industrial farms and monocultures. And while there’s been a lot of good produced (enough to feed the world), there’s also cause for concern in the amount of fossil fuels required, the elimination of biospheres for single crops, and plain old soil erosion.

It’s tricky to estimate the impact and amounts of CO2 that actually come into play from agriculture, but it’s definitely true that we’re having a very large impact on the terrestrial biosphere, which is one of the five main reservoirs of carbon on earth. A simple solution like no-till farming, which is happily growing in popularity, has the potential to do an enormous amount of overall good. It eliminates some of the need for fossil-fuels in agriculture, it improves the albedo of croplands (which has a positive effect on radiative forcing and will reflect more solar radiation), and, most importantly, it protects soil deposits from erosion and allows topsoil to increase over time.

If this seems inconsequential to you, consider Freeman Dyson’s calculation on topsoil. Based on studies of biomass percentages in topsoil, he calculates that one tenth of an inch of topsoil, averaged over approximately half of the land area of Earth (i.e., the amount of land currently covered by dirt, soil and vegetation of some sort), would sequester enough carbon to halt the ongoing increase of CO2 in the atmosphere.
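Dyson’s arithmetic can be roughly reconstructed with round numbers. The soil density, carbon fraction, and annual atmospheric carbon increase below are my own assumed ballpark figures, not his:

```python
# Rough reconstruction of the topsoil calculation: how much carbon sits
# in 0.1 inch of topsoil spread over half the Earth's land area?

LAND_AREA_M2 = 1.5e14        # Earth's total land area
SOIL_FRACTION = 0.5          # "about half" is covered by soil/vegetation
DEPTH_M = 0.1 * 0.0254       # one tenth of an inch, in meters
BULK_DENSITY_KG_M3 = 1300    # typical topsoil bulk density (assumed)
CARBON_FRACTION = 0.03       # assumed ~3% carbon content by mass

ANNUAL_ATMOS_INCREASE_KG_C = 5e12  # ~5 GtC/yr stays in the atmosphere

soil_kg = LAND_AREA_M2 * SOIL_FRACTION * DEPTH_M * BULK_DENSITY_KG_M3
carbon_kg = soil_kg * CARBON_FRACTION

print(f"carbon in that layer: {carbon_kg / 1e12:.1f} GtC "
      f"vs ~{ANNUAL_ATMOS_INCREASE_KG_C / 1e12:.0f} GtC/yr increase")
```

With these inputs, the thin layer holds the same order of carbon as a year’s worth of atmospheric increase, which is what makes the claim at least plausible.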

In other words, it’s possible we could solve a significant portion of our climate problems with better land management and a better understanding of the nature and amounts of carbon in the dirt under our feet.

Overton Windows

If you’re still with me by this point, I’ve said an awful lot and I’ve still only scratched the surface (thus, the hyperlinks). If you’re naturally curious or interested in science and systems, that’s really exciting. It can be intimidating too, since even for the vaunted scientific cognoscenti there’s a diverse set of specialties and subspecialties at play, each with their own ongoing debates.

And yet, in public discourse at least, we don’t get much appreciation for the depth and the details. Instead, we get bludgeoned over the head with a single pair of metrics: CO2 concentration and temperature. And more often than not when this topic comes up, you’re either with us or against us.

I can understand this reaction to some extent. There are some fringes that won’t listen regardless of the evidence, just like the flat earthers. We call these people ‘climate deniers’ or ‘climate skeptics’, and the label has become equivalent to banishment in some circles.

But we seem to have lumped a very large group of people under that label, to the extent that the label loses some of its power. In this group are scientists, sometimes speaking outside of their expertise, who disagree with certain parts of the climate change litany we’ve discussed.

This is where things get really dangerous, because a great deal of this labeling is an effort to shrink what’s called the Overton Window around the topic. The Overton Window describes the range of ideas that are actually tolerated in public discourse. If you really want to control a debate, you don’t argue your case, you try to change the whole Overton Window.

Of all places, science should be where the Overton Window has its largest aperture.

Surveys

We’ve all heard about the overwhelming consensus of scientists on climate change. Usually, the number 97% gets thrown out. But like all complex issues, there’s a bit more nuance than that. Let’s take a look at a few of the studies:

  • Verheggen 2014 says that over 90% of well-cited climate scientists agree that greenhouse gas increases by humans are causing warming.
  • In Cook’s study in 2013 of almost 12,000 scientific papers on topics like ‘climate change’ and ‘global warming’, 66.4% of them expressed no position on anthropogenic global warming (AGW); of those that did, 97.1% endorsed the consensus position that humans are contributing to global warming. Their conclusion from this was that “the fundamental science of AGW is no longer controversial among the publishing science community and the remaining debate in the field has moved on to other topics.” Meanwhile, some scientists included in the study have disputed the figure, saying their work was misrepresented.
  • In Lefsrud and Meyer (2012), 99.4% of engineers and geoscientists agreed that the global climate is changing but that “the debate of the causes of climate change is particularly virulent among them.”
  • Of the respondents to Farnsworth/Lichter in 2011, 97% agreed that global temperatures have risen over the past century and 84% agreed that “human-induced greenhouse warming is now occurring.” When asked “What do you think is the % probability of human-induced global warming raising global average temperatures by two degrees Celsius or more during the next 50 to 100 years?’’, 19% of respondents answered less than 50% probability, 56% said over 50%, and 26% didn’t know.

Still very strong agreement, as there should be, but with some divergence around future impacts. The problem is that the public discourse takes those 97% figures and applies them to several questions all at once: that the climate is changing, that we’re the cause, that the effects will be devastating, and that government must step in and solve it.

Meanwhile, the results of surveys done in the 1990s, when the IPCC was still in its infancy and Al Gore’s movie wasn’t out, are a bit different.

  • In a 1997 survey of state climatologists, 44% considered global warming to be a largely natural phenomenon, 17% considered warming to be largely man-made, and 89% agreed that “current science is unable to isolate and measure variations in global temperatures caused ONLY by man-made factors.”
  • In a 1996 survey, climate scientists answered on a scale of 1 (highest confidence) to 7 (lowest confidence) regarding belief in the ability to make “reasonable predictions” and the mean was 4.8 and 5.2 for 10- and 100-year predictions.
  • In a 1992 review of surveys, scientists agree most on data and climate processes and agree least on causes of warming, predictions, and impacts. The conclusion is particularly striking and relevant for us to remember today:

    Surveys of scientists can provide important information by documenting disagreement and agreement among scientists, but it is less clear that they necessarily improve the use of scientific information in policy making. This is so because survey results may be misleading to the extent that they: a) encourage scientists to make judgments that they would not otherwise make without the requisite thought and preparation; b) provide the appearance of consensus by aggregating disparate views; c) enhance the appearance of disagreement through the use of vague questions that are subject to differing interpretations; d) encourage scientists to make value judgments that are beyond their expertise and confuse value judgments with scientific opinion; and e) are misinterpreted by their sponsors for use as political ammunition. Results from surveys should be interpreted with caution; this applies to the results of our own survey as well as those of others.

It seems reasonable to ask about the differences between the recent surveys and those from the 90s. For sure, the science and the modeling has improved over time. But so has focus on the issue, with a 5x funding increase in the same time period from the federal government alone. I don’t mean to imply that anything questionable is occurring; it’s simply another deep dive I want to take to understand the changes over time.

Deniers

Whatever the correlation, it’s certainly true that the science points to humans as causing a warming climate. Now let’s take a look at a few of the scientists who are lumped into the “climate denier/skeptic” category:

  • Richard Lindzen - atmospheric physicist from MIT who was a lead author on one section of the IPCC’s Third Report. He is critical of the alarmism around future climate change and its impacts, and of the power of the IPCC and UNFCCC, stating in a letter to the White House:

    Calls to limit carbon dioxide emissions are even less persuasive today than 25 years ago. Future research should focus on dispassionate, high-quality climate science, not on efforts to prop up an increasingly frayed narrative of “carbon pollution.” Until scientific research is unfettered from the constraints of the policy-driven UNFCCC, the research community will fail in its obligation to the public that pays the bills.

  • Freeman Dyson - theoretical physicist and, admittedly, one of my personal heroes. He is critical of the predictive abilities of the CFD models of the atmosphere over decadal timeframes, suggests that we don’t understand enough about the climate, and has questioned whether some warming could in fact be a net positive.
  • Judith Curry - Climatologist and geophysicist who is critical of the IPCC’s certainty. She claims that the IPCC ignores the “Uncertainty Monster” in its predictions. She also engages skeptics and encourages discussion around the topic.
  • Ivar Giaever - Nobel Prize winner in physics who calls global warming “a new religion”.
  • Nir Shaviv - An astrophysicist we mentioned previously. He has done his own calculations and published papers on additional potential links between cosmic rays, solar activity, and climate change.

I pulled these as useful examples from this list of scientists who disagree with parts of the mainstream position on climate change. These scientists are notable for a few reasons. Remember the litany of statements that get lumped together and tied to the scientific consensus: that the climate is changing, that we’re the cause, that the effects will be devastating, and (arguably) that government must step in and solve it.

In my list above, the only scientist who actually questions human-caused climate change is Nir Shaviv, and he is actively publishing astrophysical papers on the topic. The rest of my examples completely agree that the climate is warming and that it’s caused by humans. They simply question some aspects of our predictive accuracy or public policy stances. For this, they are ostracized to varying degrees as deniers.

(It’s interesting to point out that all of the above are physicists of some kind, and I’ve wondered before whether physicists are better attuned than other scientists to follow or pursue unusual paths or hypotheses. When you consider some of the completely abstruse and amazing parts of the universe that they study, it’s easy to imagine why.)

The concern of a lot of the scientists listed as climate skeptics is usually around the uncertainty of future predictions and effects, perceptions and alarms, and what exactly we should do about it. Which is exactly the division between science and policy.

The Nature of Science

A lot of people have trouble defining science, even if they think they know it when they see it. It’s always worthwhile to review the Scientific Method. What are the key aspects that make it special? Hypotheses and observations are critical, but are they the only defining attributes? After all, we wouldn’t consider the hypothesis that God made the sun and stars scientific.

Karl Popper, a philosopher of science, came up with a key insight into what sets science apart: falsifiability. A hypothesis or theory has to be capable of being proven false. If no experiment could ever show your theory to be wrong, it works more like religion: both sides can argue about it ad nauseam.

Judith Curry, one of our skeptics above, discusses in detail how science has been changing over time. She says that the rise of the social sciences after World War II was a key factor, and that:

social claims often cannot be tested against pre-established impersonal criteria consonant with observation.

The danger here is the loss of falsifiability. She goes on:

Critically important was the norm that science had to be constantly open to criticism and debate. Scientists held beliefs only tentatively, based on the evolving theories and evidence, always subject to falsification.

Migrating away from this keystone of science engenders both more activism and more relativism. Scientists feel more empowered to enter the policy debate. And why shouldn’t they?

Robert Merton would disagree. He came up with the Four Mertonian Norms of science, ideals that represent the goals and methods of science:

  1. Communism: all scientists should have common ownership of scientific goods (intellectual property), to promote collective collaboration; secrecy is the opposite of this norm.
  2. Universalism: scientific validity is independent of the sociopolitical status or personal attributes of its participants.
  3. Disinterestedness: scientific institutions act for the benefit of the common scientific enterprise, rather than for the personal gain of individuals within them.
  4. Organized skepticism: scientific claims should be exposed to critical scrutiny before being accepted, both in methodology and in institutional codes of conduct.

Curry worries about disinterestedness, saying:

disinterestedness, defined as “personal detachment from truth claims,” is the least popular contemporary norm, as academics align their research interests with funding opportunities.

It’s easy to see how advocacy or activism can follow closely after alignment with funding.

Perhaps more important is the degradation of organized skepticism. The act of organized skepticism is not a one-time judgement, like a court of law. It must be a continuous evaluation, always open to new theories and new data. But as Curry states, the values have changed:

Organized skepticism and openness to criticism have also come under serious pressure due to post-modern thinking that promotes “deconstruction,” “relativism,” and understands science as means of social control. An example of this trend is the issue of defining uncertainty in climate change science: “the mere occurrence of uncertainty talk is not interesting unless we can document and interpret its construction, representation, and/or translation. According to constructivist accounts, representations of uncertainty do not reflect an underlying ‘reality’ or a given ‘state of objective knowledge’ but are constructed in particular situations with certain effects.”

In other words, the narrative is what’s important. Error bars and statistical concerns matter less than the overall picture of the underlying reality. Consider MIT meteorologist Kerry Emanuel’s comments on Richard Lindzen’s perspective on climate change:

Even if there were no political implications, it just seems deeply unprofessional and irresponsible to look at this and say, ‘We’re sure it’s not a problem.’ It’s a special kind of risk, because it’s a risk to the collective civilization.

Not only did Lindzen never say “we’re sure it’s not a problem”; his criticism and openness - in other words, his organized skepticism - is deemed irresponsible precisely because it questions the risk that the consensus science claims to show.

The politicization of science is accelerating and we need to understand and deal with it. The norms of science have changed over time and introduced new problems and new perspectives. Why should we believe scientists are immune to argumentum ad populum? Or confirmation biases?

In the view of the wider population, this problem becomes even more important. Most people will give far more credit to a scientist if their views align with their own. None of us is immune to this.

Expertise and Specialization

Which is a problem. Over its history, science has moved from fighting authorities like the Church and philosophers to being a trusted and mostly unencumbered and neutral authority. But as the landscape of scientific inquiry and its relationship to the public discourse changes, we need to continually adjust course.

Science rightly puts heavy emphasis on deep knowledge of a particular field. This is called specialization, and today it is all but required for contributing anything new to a field. We build our scientists - and most academics - to be incredibly focused, specialized, and knowledgeable in a narrow field. And yet, here’s the perspective of two of the best scientists of the 20th century - Niels Bohr and Richard Feynman:

“Science is the belief in the ignorance of experts” -Feynman

“An expert is a man who has made all the mistakes which can be made in a very narrow field.” -Bohr

“The first principle is that you must not fool yourself — and you are the easiest person to fool.” -Feynman

There’s a humility in these experts that can be incredibly refreshing, but it’s important to remember that this healthy and continuous skepticism is a core part of what science should be. There’s a balance to be made between continually building on top of a foundation and rigorously inspecting what you’ve built.

Too often today we instead get arguments from authority, where we listen only to the specialists in a given field because they’re the expert. It’s a natural instinct and one that experts engage in as well. Re-read the Niels Bohr quote above; an expert simply understands the landscape of a very small minefield.

It’s gotten to the point that you can’t even step out of your lane. If you do, you get shamed. You can’t comment on a topic without showing your bona fides in that field.

We need to start recognizing this as a sort of ad hominem attack. I’ve deemed it credshaming: the act of shaming someone for speaking up about a subject where they don’t have the “appropriate” credential (e.g. a PhD in X). Outsiders in fields are dismissed frequently, and there are many examples of this practice, from the outlandish to the completely ordinary.

Outsider Examples

We already briefly touched on plate tectonics, whose precursor, continental drift, was first proposed by Alfred Wegener. He was a polar researcher and meteorologist and was considered an outsider by geologists, some of whom organized symposiums specifically to oppose his theory. The theory wasn’t widely accepted by geologists until over 40 years after it was first proposed.

The physicist Tommy Gold proposed the electromechanical mechanism of the inner ear that allows us to hear by translating sound waves into electrical signals. He performed an experiment in the 1940s to support it, but his work was rejected by physiologists and ear specialists until the 1970s, when the mechanism was finally verified by researchers in that field.

As an inverse example, when Theophilus Painter erroneously stated that humans have 24 pairs of chromosomes instead of 23, other researchers suffered from confirmation bias and, expecting to find 24 pairs, nearly always did.

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong. -Clarke’s First Law

I’ve mentioned him enough by now, but Freeman Dyson is in my personal pantheon of heroes. He is both brilliant and heterodox. Many dislike him for this, but this is exactly why I’m so taken by his manner of thinking. His friend Oliver Sacks said about him, “A favourite word of Freeman’s about doing science and being creative is the word ‘subversive’. He feels it’s rather important not only to be not orthodox, but to be subversive, and he’s done that all his life.”

Curious intellectuals from outside a given field are subversive, sometimes with huge consequence. Alan Sokal has proven this twice. The first time, he submitted a nonsensical paper to the academic journal Social Text to test the journal’s intellectual rigor. The paper was accepted, published, and then outed by Sokal as a complete fake. This sparked a debate about rigor in some of the social sciences, especially those focused on postmodern cultural studies. Almost two decades later, Sokal, with others, debunked the math and statistical analysis underlying a key tenet of positive psychology called the Losada Line, despite the original paper having already been cited by other scientific papers over a thousand times.

Subversive elements like Sokal or Dyson look at questions in new ways and suffer less from the natural biases and histories of those inside the field. They won’t always be right - in fact, they’ll very probably be wrong - but their participation and serious inquiry work to solidify the fulcrum on which the balance of science sits.

Synthesis

There’s another problem with specialization that we see more often now, and it’s inherent to specialization itself. We reward narrow focus, and so we end up with compartments of experts who rarely grapple with anything outside their lanes.

Said another way, there isn’t much in the profession of science that focuses on analyzing data and synthesizing results across fields. The aggregation techniques or overarching reviews that do exist, like meta-analysis, are still narrowly focused on a specific hypothesis, and there are natural biases embedded in the technique.

In a conversation with Russ Roberts on EconTalk, Paul Romer circles this topic in economics as well:

But there’s another problem that I think one should be worried about, which is that there may not be enough professional reward for the hard work of sorting through the facts. And summarizing them in a way that people can use them. But, yet, really being careful about the different, you know, ways to summarize them, and the possibility that you are building in some assumptions that are, you know, influencing how you summarize them. So, I, you know, I wish—over time, I’ve become more and more convinced of the importance of that pure hard work of collecting data and summarizing them in a manageable way. And, frankly, I think we give too many rewards to people who do theory, like I do. And not enough to the data that is actually the basis for judgments about, you know, like, growth, and growth and GDP. And so, as a tweak in the profession, I’d like to see us pay more attention to, you know, the analysis of data and the process of summarizing the data that we then use to make models and make policy recommendations.

As specialization continues to accelerate, we need to more coherently recognize that the ability to summarize and synthesize is different than the deep work of theory, and we need to better cultivate it.

Tragedies and Choices

In both the climate change debate and the issue of scientific specialization, some of the main problems are incentives. And when some of the ground rules change, the incentives do too. A scientist’s immediate goals are to publish, to do theory, to gain citations for their work. More recently and depending on the field, they can also be incentivized to weigh in on policy and public perception. Like it or not, they hunt for prestige in their field, moving as deeply as possible and specializing maximally. They become experts, and the public is thus incentivized to listen to them. Some of these incentives need to be tweaked, or at least have more humility injected into the expertise-building process.

A lot of people consider the problem of climate change a tragedy of the commons, where individual incentives aren’t aligned with the common good. I think this is one of the reasons that centralized and forceful government action is posited as a solution to climate change. It’s not enough to say that the climate is warming and that we’re the cause. If we also refuse to say that it will be devastating and that government must intervene, we’re still considered anathema. Thus, most of the alarmism that we see today seems constructed to force action on the issue.

We need to continue to consider exactly what kind of problem climate change is. Is it like an asteroid hit or is it diabetes for the planet? An asteroid would wipe us out quickly. Diabetes can be managed. It will certainly change things, but it’s not the end of the world. The Green New Deal isn’t actually constructed to fix an impending catastrophe, it’s simply a way to gather momentum behind a set of unrelated policy prescriptions. It leverages the idea of catastrophe to advance policy, but those policies don’t actually help much. As the above opinion article states:

Moreover, practically, nothing that Green New Deal advocates appear willing to seriously propose will actually cut US emissions at a scale or pace consistent with stabilizing emissions below two degrees, much less 1.5. Making the best of our chronic condition, rather, will require a climate movement that is less catastrophist about the problem and more ecumenical about its solutions.

Government and control is the usual solution to a tragedy of the commons: centralize through legislation and move towards the common good. But there’s another solution too: design incentive systems for individuals that align to the common good.

Arguably, that’s what government does, but it does so through force of law. We can do it through economic incentives too. Instead of forcing policy and taxes, we can pursue technology and R&D that bring green energy prices below those of fossil fuels. Once solar panels, wind, and nuclear are cheaper than coal-fired power plants, everyone will want to switch.

Precaution and Choosing Right

Which brings me back to the Copenhagen Consensus and something called the Precautionary Principle. This principle is a risk management technique, usually used to say that we should take action on climate change to ensure there is no impending harm.

Every run of the Copenhagen Consensus has put investing in climate change policies near the bottom. They’re still big and important problems, they’re just not what we should be working on right now. The payoff based on the amount of investment required doesn’t make sense. Is it possible that in the future more people will die and trillions in economic loss will be sustained? Yes. But it’s also true that millions die today from much more solvable problems. And it’s the same with economic losses. Investing in eradicating malaria is much cheaper than working on climate change and it would save millions of lives right now. Similarly, eliminating trade barriers would reduce economic losses significantly.

The problem with the focus on climate change isn’t that the science is wrong; it’s that it crowds out too many other important issues. Just in the environmental sphere, we’re missing proximate causes. It’s true that climate change will still be a threat to polar bears. But the primary issue there was actually hunting, and addressing that has allowed the populations to stabilize. There are plenty more specific problems for us to focus on, like the potential extinction of bees, deforestation, overfishing, plastics in the ocean, and any number of other issues. When we focus exclusively on our CO2 numbers over the next 50-100 years and nothing else, we miss the things we can do right now.

But even so, doesn’t the Precautionary Principle dictate we take action? As Bjorn Lomborg has stated, we could easily eliminate nearly 100% of traffic deaths and injuries by a simple rule: cutting the speed limit everywhere to 5 mph. As a society, we’re not willing to do that, and extreme examples like this show that any decision to do something (or not do it) is always going to be a cost-benefit tradeoff. We can’t just evaluate the problems of climate change. We need to compare the costs and the pros and cons of the solutions against the problem itself. It’s likely that our investments and focus could be better spent elsewhere.

The Key Insights

We started our journey a long time ago during the last Glacial Maximum and learned how much the climate around us has changed without our help. Then we accelerated into modern times and saw how the climate is changing because of our influence. But we also saw that both the problems and the solutions are perhaps not as cut and dried as we would like to believe. We want to do something, but determining what we can do that is actually beneficial on a large scale can be, as Judith Curry says, a “wicked problem”. Finally, we dove into the nature of science and the power of both the oft-ridiculed expert-outside-their-field and some of the dangers of specialization.

Now I will confess that I come into this entire subject with my own biases, which are likely already apparent. I agree completely with Dyson’s view on being subversive and maintaining a skeptical perspective on science and data. It’s completely vital to approach ideas this way, even if only to ensure that the agreements being formed are real. While groupthink and the other cognitive biases are very real, the Wisdom of Crowds is real too. But the Wisdom of Crowds has some important criteria, like independence and diversity of individual opinions. If a subset of the population tells the larger group the solution to a problem up front, the wisdom of the crowd goes out the window. You’re stuck with Painter’s 24 pairs of chromosomes or the Losada Line. We have to maintain our skepticism and our independent opinions to ensure the whole process is actually working.
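The independence criterion is easy to see in a toy simulation (all numbers here are hypothetical, chosen only for illustration): average many independent noisy guesses and you land near the truth, but let everyone anchor on one confident voice first and the crowd’s average inherits that voice’s bias.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0  # the quantity the crowd is trying to estimate
N = 1000            # number of guessers

# Independent crowd: each person errs randomly around the truth.
independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

# Anchored crowd: everyone first hears a confident "expert" claim 130
# and shades their own estimate 70% toward it, losing independence.
anchor = 130.0
anchored = [0.7 * anchor + 0.3 * (TRUE_VALUE + random.gauss(0, 20))
            for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(independent), 1))  # close to the true value of 100
print(round(mean(anchored), 1))     # pulled well toward the anchor of 130
```

The individual errors are just as large in both crowds; only the loss of independence moves the average away from the truth.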

We need to value specialization, but be wary of using expertise as a bludgeon. Outsiders can often bring key insights precisely because they aren’t hampered by expertise. Remember that science is falsifiable. Predictions are part of science - the theory and hypothesis side. Only running the experiment tells us how accurate those hypotheses are. We’ve been wrong a lot in the past.

We must continue to separate science from policy. Science can be politicized too, a process that goes directly against the Mertonian Norms. Scientists in the public eye must try not to politicize it (despite what politicians do), and can’t treat the public like children. There’s a fuzzy line between posturing and propaganda. Kerry Emanuel’s earlier quote is indicative of this. Skeptics can be convinced and change their minds. Religious zealots cannot.

And most importantly, learn as much of the data and science yourself. You will gain knowledge, insight and inspiration in a far richer way than getting the 15-second or 5-minute version from whichever TV network you prefer. There’s a lot of media hype out there on all sides; if you learn more about some of the underlying data you’ll be far more skeptical about everything you hear. That’s a good thing. That’s what got me so excited about Beringia in the first place. It was a fascinating new world I never knew existed and it made me learn a lot more about the history of our planet. Reality is always far more detailed and nuanced than you think.


Greg Olsen
Hi I'm Greg. Occasionally, I do things.