Guardian: Climate and Violence


Hot and Bothered: Climate Warming Predicted to Increase Violent Conflicts

A hot-off-the-presses SCIENCE study analyses historical and modern data gathered from around the world and finds a link between global warming and increased human violence.

By Bob O’Hara and GrrlScientist, August 2 2013.

As much of Europe and America swelter under the effects of unusually warm temperatures this summer, it may be cold comfort to learn that climate change affects more than the weather; it also influences our behaviour. A hot-off-the-presses study finds that as global temperatures increase, so does violent human behaviour. Further, thanks to climate change and extremes in rainfall, this study predicts that conflicts may increase between now and 2050.

This study, just published in the journal Science by Solomon Hsiang, Marshall Burke and Edward Miguel from Princeton University and UC Berkeley, synthesises a lot of previously published studies of violence, ranging from baseball pitchers throwing balls at batters to wars and even to the collapse of entire civilisations. Their conclusion? When it gets hotter, people become more violent.

Theirs is an assertion about causation: that warmer temperatures cause violence. But they didn't discover this by doing a controlled experiment (which would be infeasible and unethical); instead, they (and the researchers who carried out the original studies re-analysed in this paper) relied upon uncontrolled historical data.

But there are all sorts of ways that conclusions based upon uncontrolled studies can go wrong, so how did the authors try to avoid these pitfalls?


How did they do their study?

The obvious approach for comparing violence and climate is to measure violence (e.g. murder rates or outbreaks of war), plot these data against temperature and then fit a line to the data. This generally looks good, but even if it appears that there is a connection between these two variables, this relationship may be spurious.

So the first challenge is to make sure you collect the correct data. For example, you want to avoid data where either climate or rates of violence may be correlated with something else. This is the reason that Hsiang and his team omitted one type of study:

We do not consider studies that are purely cross sectional, i.e. studies that only compare rates of conflict across different locations and that attribute differences in average levels of conflict to average climatic conditions. There are many ways in which populations differ from one another (culture, history, etc.), many of them unobserved, and these “omitted variables” are likely to confound these analyses. … For example, a cross-sectional study might compare average rates of civil conflict in Norway and Nigeria, attributing observed differences to the different climate of these countries – despite the fact that there are clearly many other relevant ways in which these countries differ.

Instead the authors concentrated on studies that looked at violence over time. To continue using the authors' example, if you compare violence rates in Norway from 1995 and 1996, the country itself will be essentially the same, so a lot of the omitted variables will not differ. (Keep in mind that one obvious difference here is that Norway won the Eurovision song contest in 1995, which meant they hosted the contest in 1996, but that particular connection to violence has yet to be documented.)

A more difficult issue is that climate is not the only factor that affects violence: other variables can contribute, too. Normally, one would deal with this additional layer of complexity by including these other variables in the analysis. But that can create trouble:

This problem occurs when researchers control for variables that are themselves affected by climate variation, causing either (i) the signal in the climate variable of interest to be inappropriately absorbed by the “control” variable, or (ii) the estimate to be biased because populations differ in unobserved ways that become artificially correlated with climate when the “control” variable is included. … The difficulty in this setting is that climatic variables affect many of the socioeconomic factors commonly included as control variables – things like crop production, infant mortality, population (via migration or mortality), and even political regime type.

In other words, if the only effect of climate on violence was to reduce crop production, then including crop production in the model alongside climate would remove the climatic effect (or some of it, at least).

The authors have a simple solution: they don’t include any other variables in their analysis. This works because the assumption is that any correlation between climate and crop production is causal — in other words, climate affects crop production, and not the other way around. This is probably a reasonable assumption: it is unlikely that any of the variables will have a noticeable impact on climate, and averaged over all the studies used, it is doubtful that there are enough chance correlations to create any major biases.
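The "bad control" problem is easy to demonstrate with a toy simulation (my own illustration, not the paper's data or code): if climate affects violence only through crop production, then adding crops as a control variable absorbs the entire climate signal.

```python
import numpy as np

# Toy simulation: climate affects violence only through crop production.
# Controlling for the climate-driven mediator (crops) absorbs the signal.
rng = np.random.default_rng(42)
n = 5000
climate = rng.normal(size=n)                  # e.g. temperature anomaly
crops = -0.8 * climate + rng.normal(size=n)   # warming reduces yields
violence = -0.5 * crops + rng.normal(size=n)  # poor harvests raise violence

def ols(y, X):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(violence, [climate])               # climate only
controlled = ols(violence, [climate, crops])   # climate plus the "control"

print(f"climate effect, no control:  {naive[1]:+.2f}")       # ~ +0.40
print(f"climate effect, crops added: {controlled[1]:+.2f}")  # ~ 0
```

The uncontrolled regression recovers the full effect of climate (here 0.8 × 0.5 = 0.4), while adding the crop "control" drives the climate coefficient towards zero, exactly the failure mode the quoted passage describes.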

This is not the best solution to the problem: a better, but more complicated, approach would be to either use path analysis or structural equation modelling (depending on who your mentor was). The idea here is to model the direct and indirect effects, such as the effects of climate and crop production on violence, and then to model the effect of climate on crop production. Doing this, it is easy to add up the direct effect of climate on violence as well as its indirect effect through altered crop production. Alas, this is something to play with later, I suppose.
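A minimal sketch of that path-analysis idea, again on made-up numbers rather than anything from the paper: fit one regression for the mediator (crops on climate) and one for the outcome (violence on climate and crops), then the indirect effect is the product of the two paths and the total effect is direct plus indirect.

```python
import numpy as np

# Path-analysis sketch on simulated data: climate has a direct effect on
# violence (0.2) and an indirect one via crop production (-0.8 * -0.5 = 0.4).
rng = np.random.default_rng(1)
n = 10_000
climate = rng.normal(size=n)
crops = -0.8 * climate + rng.normal(size=n)                   # path a
violence = 0.2 * climate - 0.5 * crops + rng.normal(size=n)   # direct + path b

def ols(y, X):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(crops, [climate])[1]                     # climate -> crops
direct, b = ols(violence, [climate, crops])[1:]  # direct path; crops -> violence
indirect = a * b                                 # ~ +0.40
total = ols(violence, [climate])[1]              # ~ direct + indirect = 0.60

print(f"direct {direct:+.2f}, indirect {indirect:+.2f}, total {total:+.2f}")
```

The single-regression "total" coefficient matches the sum of the two decomposed pieces, which is what makes the decomposition worth the extra modelling effort.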

Does it matter?

Once one has fitted a model to the data, the next question is whether all this number-crunching means anything. There are two ways to think about this. First, what is an important effect? Different people will have different answers, but the authors define an important effect as one where an increase of one standard deviation (σ) in temperature (or rainfall) changes the risk of violence by 10 percent. Hsiang's group justifies it like this:

This … criteria [sic] uses an admittedly arbitrary threshold, and other threshold selections would be justifiable. However, we contend this threshold is relatively conservative since most policy makers or citizens would be concerned by effects well below 10%/σ. For instance, since random variation in a normally distributed climate variable lies in a 4σ range for 95% of its realizations, even a 3%/σ effect size would generate variation in conflict of 12% of its mean, which is probably important to those individuals experiencing these shifts.

This is reasonable but it also leaves plenty of room for discussion.
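The arithmetic in that quoted passage is easy to check for yourself (these are the quoted numbers, not anything new from the paper): about 95% of a normally distributed variable falls within roughly ±2σ, i.e. a 4σ range, so even a 3%-per-σ effect moves conflict by 3% × 4 = 12% of its mean across that range.

```python
from math import erf, sqrt

# P(|Z| < 2) for a standard normal: the "4-sigma range" coverage claim.
coverage = erf(2 / sqrt(2))

# A 3%-per-sigma effect, swept across a 4-sigma range of climate variation.
effect_per_sigma = 0.03
swing = effect_per_sigma * 4

print(f"P within 2 sigma: {coverage:.3f}")  # 0.954
print(f"conflict swing:   {swing:.0%}")     # 12%
```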

The second aspect to consider is how confident we can be that such changes are real. This is, of course, more subtle. Usually (and regrettably) scientists report the uncertainty in an effect by asking whether it is "significantly" different from zero (i.e. whether its p-value falls below some threshold, usually 0.05).

But statistical significance depends on two things: the size of the effect and the precision of the study. So a large effect with low precision (i.e. one that has been estimated poorly, perhaps because of a small sample size) can be declared less significant than a smaller effect that is better estimated:

To summarize the evidence that each statistical study provides while also taking into account its precision, we separately consider three questions for each study in Table 1 [Grrlscientist Note: these data not shown here]: (1) Is the estimated average effect of climate on conflict quantitatively “large” in magnitude … , regardless of its uncertainty? (2) Is the reported effect large enough and estimated with sufficient precision that the study can reject the null hypothesis of “no relationship” at the 5% level? (3) If the study cannot reject the hypothesis of “no relationship,” can it reject the hypothesis that the relationship is quantitatively large? In the literature, often only question 2 is evaluated in any single analysis. Yet it is important to consider the magnitude of climate influence (question 1) separately from its statistical precision because the magnitude of these effects tell us something about the potential importance of climate as a factor that may influence conflict, so long as we are mindful that evidence is weaker if a study’s results are less certain. In cases where the estimated effect is smaller in magnitude and not statistically different than zero, it is important to consider whether a study provides strong evidence of zero association – i.e. the study rejects the hypothesis that an effect is large in magnitude (question 3) – or relatively weak evidence because the estimated confidence interval spans large effects as well as spanning zero effect.

This is something I wish more scientists — and other people, too — would think about.
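The three questions in that quoted passage boil down to simple comparisons of an estimate against its confidence interval. Here is a sketch of them as a function (the thresholds are my illustrative choices, following the authors' 10%-per-σ criterion; this is not the paper's code):

```python
LARGE = 0.10   # "large" = a 10% change in conflict per 1 sigma of climate
Z = 1.96       # critical value for 5% two-sided significance

def assess(effect, se):
    """Classify an estimate by magnitude, significance, and equivalence."""
    large = abs(effect) >= LARGE                  # Q1: big in magnitude?
    significant = abs(effect) > Z * se            # Q2: rejects "no effect"?
    lo, hi = effect - Z * se, effect + Z * se
    rules_out_large = -LARGE < lo and hi < LARGE  # Q3: rejects "large effect"?
    return large, significant, rules_out_large

print(assess(0.15, 0.03))  # large and significant
print(assess(0.02, 0.01))  # small but precise: rules out a large effect
print(assess(0.05, 0.20))  # inconclusive: CI spans zero and large effects
```

The third case is the one the authors warn about: a non-significant result whose confidence interval is so wide that it is weak evidence for anything, rather than evidence of no effect.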

Long story short: here's what they found

A lot of work went into looking at 61 studies and re-analysing some of those data. In general, the authors found that increasing temperatures lead to more violence. What is unexpected is that they found this effect across all scales: from pissed-off pitchers to domestic violence, to inter-group conflicts, and even up to regime change, wars and the collapse of empires. The breadth of scale over which this is seen is truly impressive, and it's also surprising that the results are so consistent:

The magnitude of climate’s influence is substantial: for each 1 standard deviation (1σ) change in climate towards warmer temperatures or more extreme rainfall, median estimates indicate that the frequency of interpersonal violence rises 4% and the frequency of intergroup conflict rises 14%.

The cynic might suggest that this consistency is due to publication bias: the idea that researchers studying the effects of climate on violence tend to write up their results only if they see an effect. This sort of distortion is a common pitfall, and Hsiang's team specifically addressed it in their work, concluding that there is no evidence for publication bias.

My impression, however, is somewhat different: I think there is clear evidence that the low-power studies do have publication bias, but despite that, there is still an effect.

To analyse these studies for publication bias — to determine whether only statistically "significant" findings were used — I used an analytic method that's standard in medical statistics: a funnel plot. This plots the estimated size of the effect on the x-axis, and a measure of the uncertainty of each estimate on the y-axis (here I used the standard error). If there were no effect, the data points should be centred at zero on the x-axis and, as the standard error increases, spread out to resemble a funnel. (Less precise studies produce a wider range of outcomes.)

If there is publication bias, the estimates themselves will be correlated with study precision: less precise studies will report larger effect sizes, because an effect has to be larger to reach statistical significance. As studies become more precise, the estimates shrink back towards the true value.
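That signature can be checked numerically without even drawing the plot. A minimal sketch, using made-up effect sizes and standard errors for illustration (not the paper's estimates): under publication bias, the imprecise studies hug the 5% significance boundary at 1.96 × SE, so their z-scores all sit just above 1.96.

```python
import numpy as np

# Hypothetical study estimates: effect sizes and their standard errors.
effects = np.array([0.45, 0.30, 0.22, 0.10, 0.05])
ses     = np.array([0.20, 0.14, 0.10, 0.04, 0.02])

z = effects / ses            # distance from zero, in SE units
outside_funnel = z > 1.96    # outside the no-effect "significance funnel"

print(np.round(z, 2))        # all hovering just above 1.96
print(outside_funnel.all())  # True: the publication-bias signature
```

Real funnel-plot data would also show the missing lower-left corner discussed below: imprecise studies with small effects simply never appear.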

In fact, this is almost what we see:

Funnel Plots describing the relationship between temperature and (left) interpersonal violence and (right) intergroup violence.
Image: Bob O’Hara.

The inter-personal violence plot looks almost typical of biased studies: the points (particularly those towards the bottom of the plot, which have large standard errors) lie close to the positive "significance" line.

The inter-group violence plot is a bit more complicated: the three estimates with low standard errors are all strongly positive, but other than that the estimates cluster around the mean value.

Although there is some evidence for publication bias, this doesn't mean that there is no effect: the precise studies cluster around positive values, rather than zero. What is missing are the studies with low precision and small effects. Why? One reason might be conscious publication bias: if you find a non-significant result, you don't bother wasting your time writing it up.

But there may be another reason. Hsiang and his team offer this explanation:

[M]any analyses are not explicitly focused on the direct effect of climate on conflict but instead use climatic variations instrumentally or account for it as an ancillary covariate in their analysis while trying to study a different research question – indicating that these authors have little professional stake in the sign, magnitude or sta[s]tical significance of the climatic effects they are presenting.

So basically Hsiang and his colleagues included studies that mention climate but were actually looking at something else. If climate was not what those authors were mainly interested in, and they checked for an effect but didn't find one, they may simply not have put it in their model. That means a climate effect could be missed for two reasons: either it wasn't reported in those studies at all, or Hsiang and his team didn't see it because it was tucked away in one or two lines (or only mentioned in those studies' supplementary online material).

Based on this study, where should you holiday in 2050?

Basically, the prediction is that the world will warm up. The obvious conclusion from this study is that there will be more violence. (Hsiang's team does discuss the possible mechanisms for this, but I'm not a social scientist so I won't do violence to their explanations.) Further, they predict that the warmer a place becomes, the more violent it will be.

In what looks like a fit of pre-emptive self-preservation, Hsiang and his colleagues produced this intensely colourful map showing which bits of the world are warming up, i.e. which areas will become more violent (all else being equal):

Projected temperature change by 2050 as a multiple of the local historical standard deviation (σ) of temperature. [doi:10.1126/science.1235367]

In this map, the redder the colour, the more that violence is predicted to increase. For the darkest red colour, the prediction is an increase of around 60 percent in violence between groups, and 15 percent within groups. Of course, this will be modulated by a lot of other factors: the strength of central government, how much of the populace is "packing heat", or mitigation policies (like sending in UN peacekeepers), for example. So this map does not show where there will be more violence. I, for one, am relieved to see that the Middle East isn't going to be hit that badly: in fact, the increase in violence may be worst in the USA.

Being wimps, we're going to start looking for jobs in Svalbard. Those of you wanting to become war correspondents might look upon this map as a handy guide for your future, too.

See Bob O’Hara and GrrlScientist, Hot and Bothered, The Guardian, August 2 2013.

(Emphasis added)