Are Wars Becoming Less Lethal? The 2009 Human Security Report
The Human Security Report has become well-known for its argument, backed by careful statistical analysis, that wars have become less common since the early 1990s, and also less lethal. The new HSR study, ‘The Shrinking Costs of War’, brings new data to bear on this question and makes stronger claims than previous editions.
In particular, the Human Security Report argues that population death rates actually fall in many protracted wars, because improvements in health care for the general population outweigh the deaths caused by violence and disruption. This is sure to be controversial, and it would have been useful if the Report had made a clear distinction between smaller and larger wars and clarified whether this claim holds for both.
Another striking and controversial claim is that the very high estimates of excess mortality in the Democratic Republic of Congo (5.4 million) are exaggerated by a factor of three. If correct, this would compel a rethinking of the Congo conflict, though even the lower figure would still make it Africa’s most costly conflict in human lives over the last decade.
The Human Security Report press release is available here: Shrinking Costs of War Press Release. The full report is available on the HSR Website.
I didn’t see the correlation between child mortality and conflict when looking at the graphs. Child mortality rates have been falling in all countries since the early 1970s, and all I could see from the graphs was that the rates continued to decline at roughly the same pace as before. It would be interesting to compare this data with developing states not experiencing conflict, to see whether the drop in child mortality was more dramatic there; that would suggest that conflict could still have a negative effect on child mortality.
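The comparison suggested here could be sketched very simply: compute the average annual rate of decline in under-five mortality for a conflict-affected country and for a peaceful one, and see whether the first is slower. All numbers below are invented placeholders, not real data, and `annual_decline_rate` is a helper written for this illustration.

```python
def annual_decline_rate(series):
    """Average annual proportional decline over a mortality time series."""
    start, end = series[0], series[-1]
    years = len(series) - 1
    return 1 - (end / start) ** (1 / years)

# Hypothetical under-five mortality rates (deaths per 1,000 live births),
# one observation per year -- illustrative only.
conflict_country = [180, 172, 166, 161, 158]   # decline slows during war
peaceful_country = [175, 162, 150, 139, 129]   # steady pre-existing trend

print(f"conflict: {annual_decline_rate(conflict_country):.1%} per year")
print(f"peaceful: {annual_decline_rate(peaceful_country):.1%} per year")
```

A systematic version of this, run over the World Bank series the commenter mentions, would show whether conflict countries fall below the global trend even while their rates keep declining.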
I went to investigate HSR’s datasets to see if I could graph these trends in Excel or Stata, and found that they contain no child mortality statistics, meaning that everything was taken from IACMEG and the World Bank. I then looked at their conflict dataset to see how it was arranged and found that the HSR conflict database they cite throughout the report is actually a collection of conflict figures from the Uppsala dataset, fused with population data from the World Bank and presented as their own (with a citation at the bottom acknowledging the Uppsala set).
Additionally (and this may answer the question about the HSR failing to define conflict and war), the Uppsala database distinguishes intensity 1 from intensity 2 conflicts by their annual battle-death thresholds (intensity 1 = at least 25 deaths, intensity 2 = at least 1,000). When HSR transferred the Uppsala set into their own database, they lumped both types together, which means the effective casualty threshold for inclusion was 25. Ultimately, this makes figure 2.5 appear more dramatic than it actually is. The number of casualties per conflict may be dropping, but the number of recorded low-intensity conflicts, especially at the 25-casualty threshold, could be rising. This makes it difficult to know how dramatic the trend really is.
Despite these shortcomings, the DRC section seemed much more valuable, challenging the mortality reports in the same way that CRED challenged earlier surveys in Darfur. Poor extrapolation techniques and inaccurate time projections have been common themes lately, and hopefully reports like this will lead to more responsible surveys.
This HSR and the debate it is certain to spark serve an extremely important purpose: injecting demand for more and better data, and more and better analysis, into the question of the human costs of war. This is a field that has been too long dominated by unsubstantiated claims, repeated sufficiently often to acquire the status of ‘facts.’
In my experience, closer analysis of mortality in crises usually (but not always) leads to reduced figures. Early estimates are subject to various upward pressures, including the tendency among the media, aid agencies and advocacy organizations to translate the upper limits of mortality estimates into firm projections or even minimum figures. Once a figure has reached the public domain, anyone who suggests a lower number is placed in the morally uncomfortable position of being said to ‘minimize’ the crisis.
Methods for estimating violent deaths need to be developed. In 1992 in Mogadishu, Jennifer Leaning and I tried to develop a method based on ‘incidents’. On the grounds that every major incident (a shooting, bombing or artillery attack) that involved fatalities also involved wounded individuals who were taken to hospital, we figured that we could use admissions to the emergency rooms of the main Mogadishu hospitals to identify ‘incidents’ and, on that basis, obtain data on the number of fatalities. Unfortunately, the reviewer of our paper for publication rejected it on the grounds that our method was ‘unproven.’
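The logic of the incident-based method described above can be sketched in a few lines: count war-wounded emergency-room admissions, then apply an assumed wounded-to-killed ratio to back out fatalities. The ratio and the admissions figures below are invented for illustration, not taken from the Mogadishu work, and any real application would need a locally validated ratio and a way to handle incidents that leave no surviving wounded.

```python
# Sketch of incident-based fatality estimation from ER admissions.
# The wounded-to-killed ratio of 3.0 is an assumed placeholder, not a
# figure from the 1992 Mogadishu study.
def estimate_fatalities(er_admissions, wounded_per_death=3.0):
    """Estimate violent deaths from counts of war-wounded ER admissions."""
    return sum(er_admissions) / wounded_per_death

# Hypothetical weekly war-wounded admissions across the main hospitals
weekly_admissions = [45, 60, 30, 75]

print(estimate_fatalities(weekly_admissions))  # -> 70.0 deaths over 4 weeks
```

The key design choice is that hospital admissions are directly observable while deaths often are not, so the method trades one hard-to-measure quantity for an easier one plus a single assumed ratio.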
The secular decline in mortality levels due to increased public health provision also creates immense difficulties for estimating a valid baseline mortality rate. I found this in Ethiopia when trying to obtain some estimates for mortality due to the 2002/03 drought. In the years up to 2001, child mortality had declined consistently year on year. During the drought year it stopped declining. Should the non-decline in 2002 be translated into an ‘excess deaths’ figure on the grounds that without the drought the figure would have been lower? Or should a straight comparison with the previous year be the baseline? There is no obvious answer to this question, but it is important to note that baseline estimates, especially when extrapolated over several years and to large populations, can have an enormous impact on estimated numbers of excess deaths.
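The baseline problem described above can be shown with a worked example: the same observed crisis-year mortality yields zero excess deaths against last year's rate, but thousands against the extrapolated pre-crisis trend. All rates and the population figure below are invented for illustration and do not come from the Ethiopian data.

```python
# Worked illustration of how the choice of baseline drives excess-death
# estimates. All numbers are hypothetical placeholders.
POPULATION = 10_000_000  # assumed affected population

pre_crisis_rates = [12.0, 11.0, 10.0]  # deaths per 1,000 per year, declining
crisis_rate = 10.0                     # the decline stalls in the drought year

# Baseline 1: straight comparison with the previous year
baseline_prev = pre_crisis_rates[-1]

# Baseline 2: extrapolate the pre-crisis linear trend one more year
annual_drop = (pre_crisis_rates[0] - pre_crisis_rates[-1]) / (len(pre_crisis_rates) - 1)
baseline_trend = pre_crisis_rates[-1] - annual_drop

def excess_deaths(baseline):
    return (crisis_rate - baseline) / 1000 * POPULATION

print(excess_deaths(baseline_prev))   # -> 0.0
print(excess_deaths(baseline_trend))  # -> 10000.0
```

Scaled over several years and a large population, as in the DRC estimates, the gap between the two baselines alone can amount to hundreds of thousands of deaths, which is why the choice deserves to be made explicit.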