The exchanges on this blog on the issue of mortality in Darfur have been refreshingly sober. Let me add some further observations, on culpability for famine deaths, what constitutes a “normal” death rate, and on staying objective amidst powerful moral considerations.
One: Famine Crimes
First, are all the deaths in Darfur the criminal responsibility of the Sudan government? What is the moral and legal status of those killed by hunger and disease? Should we consider them to have been murdered by the Sudan government in the same way as those shot, stabbed or burned to death in massacres? My view is that killing by starvation and deaths in famine are both crimes, but in different senses.
For nearly twenty years I have argued that the verb "to starve" is transitive—it is something that one person does to another, like strangling, rather than something that happens through circumstance, like getting sunburned. What first alerted me to this was the exceptionally high death rates in camps for displaced Southern Sudanese during the famine of 1988. (For accounts of that famine, see Deborah Scroggins, Emma’s War: Love, betrayal and death in the Sudan, 2003, chapter 9; David Keen, The Benefits of Famine: A political economy of famine in Southwestern Sudan 1983-1989, 1994; African Rights, Food and Power in Sudan: A critique of humanitarianism, 1997.) In these camps, death rates were about sixty times higher than during the worst of the 1984-85 famine in Darfur and Kordofan. Moreover, while the deaths in the drought famine were caused by disease and undernutrition, among the war displaced, frank starvation unto death occurred. Outright starvation is both exceptionally rare and exceptionally horrible. I would argue that almost every instance of frank starvation on a significant scale in modern times has occurred through a deliberate act of war or genocidal policy (an example of the latter being the Ukrainian famine of the 1930s). I would further argue that the Geneva Conventions (1977 Additional Protocols I and II, articles 54 and 14 respectively) clearly outlaw this kind of action as a war crime. I have argued for broadening those provisions of international humanitarian law to outlaw actions such as destroying livelihoods and blocking trade and migration, when undertaken in such a way that widespread hunger and increased death are probable.
In Darfur, the situation in Kailak in 2004 came close to this, and may indeed have been such an instance, until the intervention of relief agencies and local authorities ended the militia policy of starving the displaced people there. Other instances of deliberate starvation during and after military-Janjawiid attacks on villages have also been reported. These crimes of starvation should, I aver, be investigated and prosecuted. Darfur could be the occasion for the first ever such prosecutions, setting new precedents in the laws of war, comparable to the first prosecutions of rape as a war crime in the former Yugoslavia.
I have also argued that famine is a crime. I wrote a book entitled Famine Crimes. But famine is not starvation, it’s a much broader phenomenon. Most famine deaths are not frank starvation. Famine is a crime in the broader political sense that it’s a culpable political action that should be called to account in any democratic political system. It’s much less easy to ascribe individual responsibility for famines than for instances of frank starvation.
The great majority of mortality attributable to hunger and disease in Darfur is the outcome of the way in which the war was fought by the Sudan government and its proxies, but the armed movements must also bear some responsibility, especially for what has happened in South Darfur. Is it correct to say that these people were killed by the Sudan government? I would argue that, observing certain important provisos, yes it is. The provisos are (a) that the movements bear a share of the responsibility, and (b) that while these deaths were the foreseeable and preventable outcome of the military strategy used, they were not the intention of that military strategy. So, I would stop short of describing them as "slaughtered" or "murdered" by the Sudan government. But their deaths should be on the conscience of Sudan’s rulers and there should be a political price to pay.
Two: What is normal and what is "excess" mortality?
Second, there are some very important technical issues about estimating "excess" mortality—issues raised in particular by Alberto Palloni in the paper he co-authored with John Hagan in Science.
What is "normal" mortality? The classic studies of famine mortality were conducted by demographers working with datasets whose sample sizes ran into the millions or tens of millions (see e.g., Tim Dyson, "On the Demography of South Asian Famines," Population Studies, 45, 1991, 279-97). These demographers, who could dismiss African datasets as "based on really tiny numbers," possess sufficiently rich long-range historical data to make inferences about expected non-famine mortality with a fairly high degree of confidence. Not so the demographer of current African crises.
The techniques used for gathering data are primarily intended to tell a humanitarian agency whether a population is suffering an "emergency" or not. The detailed age-specific data (both for population composition and for death rates) required for making proper estimates of excess mortality simply don’t exist. Much variability in overall mortality rates is attributable to simple variations in the age composition of a population, a variability that is accentuated during times of distress migration.
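The composition effect is easy to see with a toy calculation. The age shares and death rates below are purely illustrative, not Darfur data: the same age-specific death rates produce different crude mortality rates once distress migration shifts the age pyramid.

```python
# Illustrative age-specific death rates (deaths per 10,000 per day).
# Young children typically die at several times the adult rate.
rates = {"under_5": 4.0, "5_plus": 0.8}

def crude_rate(age_shares):
    """Crude mortality rate as the share-weighted average of the
    age-specific rates."""
    return sum(age_shares[group] * rates[group] for group in rates)

# Two hypothetical populations with identical age-specific risks:
settled = {"under_5": 0.17, "5_plus": 0.83}
displaced = {"under_5": 0.28, "5_plus": 0.72}  # more children after distress migration

print(round(crude_rate(settled), 2))    # 1.34
print(round(crude_rate(displaced), 2))  # 1.7
```

The crude rate rises by about a quarter even though no group's risk has changed, which is why comparisons made without age-specific data can mislead.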
In addition, there are seasonal and secular trends. How to account for the seasonality of "normal" deaths? For example, there is the fact that, because of nutritional and disease patterns, deaths in Darfur are "normally" concentrated in the months May-September. Should we have a seasonally-adjusted "normal" death rate accordingly?
Also, how to account for the fact that we might expect "normal" mortality rates to be declining because of the general worldwide improvement in health expenditure and medical technology? For example, a survey I conducted in Ethiopia to look at child survival during the 2002-03 drought found that child mortality had been declining in the decade prior to the drought, but didn’t decline further during that drought—did this indicate "excess" mortality or not? (A. de Waal, A. Seyoum Tafesse and L. Caruth, "Child Survival During the 2002-2003 Drought in Ethiopia," Global Public Health, 1: 2006, 125-32). There’s no obvious right answer.
With a large population, a small increase or decline—which may not even be noticeable at the level of a community—can lead to immense variability in the estimated figure for additional deaths or people saved from premature death. And a substantial and protracted increase in mortality rates in a country, for example the significant elevations in mortality in the eastern Democratic Republic of Congo, can add up to truly immense figures for excess deaths—3.8 million estimated for the case of Congo. (See Benjamin Coghlan et al.’s study in the Lancet in 2006).
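The scaling at work here is simply rate times population times duration, which makes the totals acutely sensitive to the assumed baseline. A minimal sketch with invented round numbers (not the actual Congo or Darfur inputs):

```python
def excess_deaths(cmr_crisis, cmr_baseline, population, days):
    """Excess deaths implied by crude mortality rates quoted in the
    humanitarian convention of deaths per 10,000 people per day."""
    return (cmr_crisis - cmr_baseline) / 10_000 * population * days

# A rise of 0.5 deaths/10,000/day sustained for a year over 10 million people:
print(round(excess_deaths(1.5, 1.0, 10_000_000, 365)))  # 182500

# Nudge the assumed "normal" baseline down by just 0.1 and the
# excess-death total grows by 36,500:
print(round(excess_deaths(1.5, 0.9, 10_000_000, 365)))  # 219000
```

A shift in the baseline that no clinic or community would even notice changes the headline total by tens of thousands.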
And of course, describing pre-conflict mortality in Darfur as "normal" might imply that a situation of acute poverty, lack of basic sanitation and health services, and recurrent nutritional emergency, is somehow acceptable.
Palloni is absolutely right: estimating a hypothetical figure for "normal" mortality in this situation introduces another source of error. In pursuit of rigor, we should enumerate these possible sources of error.
Three: It’s easy to allow activist enthusiasm to color the interpretation of data
Often in African crises, humanitarian activists have cited data in support of a particular course of action, and allowed their enthusiasm for that policy to color their interpretation of the data. (That’s why field staff are not always the best data collectors—they are too emotionally engaged.)
An example of how data can be misused with good intent is the survey conducted by the Centers for Disease Control in southern Somalia in November 1992 (CDC, "Population-based mortality assessment—Baidoa and Afgoy, Somalia," Morbidity and Mortality Weekly Report, 41.49, December 11, 1992, 913-17). The survey found eight-month mortality of 3.5 per 10,000 per day among 767 Afgoy residents, 5.5/10,000/day among 237 displaced in Afgoy and 16.9/10,000/day among 349 displaced people in Baidoa. These numbers are far too small to support statistical generalization. Still smaller was the absolute number of surveyed children under five among the Baidoa displaced who had died the previous month: five dead out of 21 children. These were five individual tragedies. Yet this rather theoretical rate of 69.4/10,000/day was what hit the headlines. The Guardian of London reported on December 11, "Somalia has amongst the highest death rate from starvation [sic] ever recorded, said the Centers for Disease Control yesterday. It said deaths among children under five in one village [sic] reached 69.4 per day for every 10,000 children—more than three times that of the 1984-85 famine." The CDC press release was colored by enthusiasm for the military intervention that the U.S. administration was launching. Subsequently, CDC staff were embarrassed by the way in which their data had been utilized.
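The instability of a headline rate built on five deaths can be shown directly. The denominator below is my own assumption (a cohort of roughly two dozen children observed over a month, a convention under which the arithmetic happens to reproduce the published 69.4); the point is how far the extrapolated rate swings if the death count differs by just one.

```python
def cmr_per_10k_day(deaths, population_at_risk, days):
    """Mortality rate per 10,000 person-days, the standard
    humanitarian-survey unit."""
    return deaths / (population_at_risk * days) * 10_000

# Assumed denominator: ~24 children at risk over a 30-day recall period.
for deaths in (4, 5, 6):
    print(deaths, round(cmr_per_10k_day(deaths, 24, 30), 1))
# prints: 4 55.6 / 5 69.4 / 6 83.3
```

One death more or fewer moves the extrapolated rate by roughly fourteen points per 10,000 per day, a swing comparable in size to the crisis-level death rates reported elsewhere in the same survey.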
Another instance, also forwarded in good faith, was the prediction made by staff of Medecins Sans Frontieres in late October 1996, estimating that death rates in refugee camps for Rwandese in eastern Zaire (now Democratic Republic of Congo) would climb to 10 per 10,000 per day if relief supplies were cut, causing 13,600 people to die within three weeks. MSF was a loud advocate for a humanitarian intervention and made the claim in support of that call. The relief supplies were cut, the intervention didn’t happen, and the death rates didn’t climb. A prediction is not a data point.
What happened was that the clamor for intervention by humanitarian agencies was interpreted as a politically partisan act by the government of Rwanda, which was intending to root out the genocidaires encamped on its border. The breakdown in trust between the humanitarians and the Rwandese government contributed to the latter’s complete exclusion of humanitarian agencies from Zaire during its military campaigns there, with disastrous consequences for refugees and local people alike.
And, following on from the importance of being dispassionate about something as morally charged as deaths from massacre and famine, we must beware of fetishizing numbers.
Let me conclude with an observation on James Smith’s article, based on its title, "Khartoum apologists — don’t exaggerate, only 200,000 murdered in Darfur." While ESPAC (the organization that brought the case against Aegis and Save Darfur) can accurately be described as an apologist for Khartoum, there are others (for example, myself) who prefer to use middle-to-lower range estimates of excess deaths for more respectable reasons. In an excess of zeal, a less-informed activist could read Smith’s title as referring indiscriminately to anyone who does not subscribe to the high-end figures he prefers.
There’s always a danger of the moral fervor that drives an energetic campaign turning into an intolerance of alternative interpretations of a complicated and often uncertain reality. This could easily happen if the debate on mortality statistics departs from scientific objectivity. We shouldn’t let that dismal fate overtake the campaign to end Darfur’s suffering.