Risking life and limb to count the war dead

  • 01 August 2007
  • NewScientist.com news service
  • Jim Giles
Counting the dead

HOW many people died during the second world war? Most historians put the figure at 50 million, while The Penguin Encyclopaedia of Modern Warfare says the number could be as low as 41 million. The Encyclopaedia of the Second World War has settled on 53 million. Other sources go as high as 70 million.

Why the confusion? While armies may keep good records of their losses, many civilian deaths go unrecorded. As well as the direct casualties from physical trauma, communities in wartime suffer malnutrition and disease, and some of those forced to flee their homes do not return. Are these people dead, or did they start a new life elsewhere?

If this sounds like an obscure argument over historical data, think again. Anyone keen to gauge the impact of conflicts and other humanitarian emergencies can fall foul of the same obstacles. Get the numbers wrong and starving people could be left without food, or war crimes could go unrecognised. What those trying to assuage the suffering caused by wars need are robust figures.

Over the past 20 years, such figures have begun to emerge thanks to the efforts of scientists, statisticians and emergency workers, who have forged a new discipline called conflict epidemiology. One of the toughest and most harrowing jobs in research, it requires investigators to go into conflict zones and survey communities face to face. Those who do it regularly place their lives in danger, and many witness violence and death at first hand.

Working in a war zone gives rise to formidable obstacles, and sometimes governments try to dismiss or even suppress the study findings. A study last year in The Lancet that was fiercely attacked for its high estimate of Iraqi deaths illustrates just how controversial a field this can be, but it was by no means the first time researchers have faced such criticisms. In the chaos of war, can conflict epidemiologists really make a difference?

In times of peace, most developed countries periodically gather information about their citizens through censuses, in which every single household is contacted by mail or phone. Such methods are scarcely practical in societies riven by war. Instead, researchers tend to use a technique known as cluster sampling, originally developed to assess the impact of vaccination campaigns. The method involves selecting clusters of households that together are representative of the country as a whole, then within each cluster picking households at random.
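
The two-stage design described above can be sketched in code. The following is an illustrative toy, not any survey team's actual protocol: the region names, population figures and cluster counts are all invented. Stage one allocates clusters to regions in proportion to population; stage two picks households at random within each cluster.

```python
import random

random.seed(0)

# Hypothetical regions with rough population figures (invented numbers).
regions = {"North": 120_000, "Centre": 300_000, "South": 180_000}
n_clusters = 30
households_per_cluster = 40

# Stage 1: allocate clusters to regions in proportion to population,
# so denser areas receive more clusters.
total = sum(regions.values())
allocation = {r: round(n_clusters * pop / total) for r, pop in regions.items()}

# Stage 2: within each allocated cluster, pick households at random.
# Each region is modelled crudely as a flat list of household IDs.
sample = []
for region, k in allocation.items():
    households = [f"{region}-{i}" for i in range(regions[region] // 5)]
    for _ in range(k):
        sample.extend(random.sample(households, households_per_cluster))

print(allocation)    # clusters assigned per region
print(len(sample))   # total households the teams must visit
```

The appeal of the method is that interviewers need only reach a few dozen locations rather than every household in the country, yet the proportional allocation keeps the sample representative.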

At this point the survey teams have to start knocking on doors. Researchers get daily reports from the field, checking rumours of skirmishes and travelling only when it seems safe, often with armed guards. Despite such precautions, sometimes things go wrong.

In 1992, Anne Hatløy, a nutrition researcher then based at the University of Oslo in Norway, was working in Mali in west Africa. Rebel Tuareg groups from the north of the country had been waging a two-year guerrilla war on the government in the south. A ceasefire had just been signed, and Hatløy's group had begun assessing the impact of the fighting on local people. The project was going well, until one night when the team was returning home after completing a round of data collection. "We knew we should not drive after dark," she recalls. "But we were very close to our destination. We'd been sleeping in the open and wanted a shower, so we pushed things."

The team was stopped by armed northern Tuaregs, who were suspicious of two men in the group who came from the south of the country. One was a researcher and the other the team's driver, but the rebels suspected the pair of being government officials. The two were separated from the group and beaten, while the others could only listen helplessly. The attack only stopped when local Tuaregs in the team managed to persuade the rebels that the southerners had no government links. Hatløy believes this was the only thing that saved the southerners' lives.

One night armed rebels ambushed the researchers and beat up two of them while the rest could only listen helplessly

The next morning the group was reunited with the two southerners, who were left badly bruised though without serious injuries. But the rebels stole the group's car and used it in an attack on the offices of a nearby Norwegian charity, during which they captured more cars and fuel, and killed two aid workers.

"The night felt like it wasn't real," Hatløy recalls. "That was not bad. Thinking about it afterwards was bad." Hatløy considered abandoning her career. But her group wrote its report, which she says the government and aid agencies used for many years to help them set spending priorities for the area.

The influence of such reports can extend beyond immediate aid efforts. Analyses of deaths during conflicts in Kosovo in the late 1990s featured at the war crimes tribunal of the former Serbian leader Slobodan Milosevic. And a 1992 study of living conditions in the West Bank and Gaza provided common ground for discussion between Israeli and Palestinian negotiators in the run-up to the Oslo peace agreement the following year.

Getting that information out is not always easy, however. Just ask Francesco Checchi, a former epidemiologist with the World Health Organization, who ran a study in Uganda in 2005. Two decades of conflict between government troops and rebels belonging to the Lord's Resistance Army had left 2 million people living in refugee camps in the north of the country, where healthcare was slim to non-existent. In conjunction with the government and the WHO, Checchi's team began a survey of the camps.

The team found that the death rate in the camps was more than three times that of the general population - about 1000 extra deaths a week. The main causes were treatable diseases such as malaria.

According to WHO guidelines, the death rate meant that the camps should be declared a humanitarian emergency. The United Nations cited Checchi's study in an appeal for $220 million to tackle the problem.

Once the press caught wind of the scale of the deaths, however, the government became unhappy. Although Checchi says officials had initially approved the report, they now issued a press release saying the team's methods were flawed and putting the death toll much closer to normal mortality levels. The president, Yoweri Museveni, summoned agencies that had been lobbying for international aid for the camps to a meeting; according to meeting notes seen by New Scientist, he told them that their "alarmist" claims were putting off tourists.

Even the UN pulled all mention of the figures in subsequent appeals to avoid antagonising the government. And because the Ugandan government was a partner on the project, but has now withdrawn its support, Checchi cannot publish his work. "It's frustrating," says Checchi, who is now at the London School of Hygiene & Tropical Medicine. "I was there. I saw how people were living. If we could bring in relief, things would get better."

A spokesman for the Ugandan government says Checchi's report was a draft, not intended for publication, and that since the study, some refugees have been resettled and the situation has "improved greatly".

The difficulties researchers face are not just political. One of the biggest obstacles to producing accurate figures is working out how to choose clusters so that they form a representative sample of the survey region. Areas with higher population density need to be covered by more clusters, for example.

Jon Pedersen, a social scientist at Fafo, a research centre in Oslo, has also seen nutrition surveys of Mali spoiled because researchers gave too much weight to data from houses at the centre of villages, which were unrepresentative of the general population. People near the village centre are often sedentary, while the outer areas are inhabited by poorer nomads. Richer nomads live still farther out in the bush. Only a method that samples each group in the right proportions will portray the true picture. "Capturing these differences is very important for the results," says Pedersen.
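
The distortion Pedersen describes can be demonstrated with a toy simulation. The group shares and malnutrition rates below are invented for illustration: a design that oversamples the easy-to-reach village centre understates the true prevalence, while sampling each group in proportion to its population share recovers it.

```python
import random

random.seed(1)

# Invented figures: (share of population, malnutrition prevalence).
groups = {
    "village_centre": (0.50, 0.10),
    "outer_nomads":   (0.35, 0.25),
    "bush_nomads":    (0.15, 0.40),
}

true_rate = sum(share * rate for share, rate in groups.values())

# Biased design: survey mostly the easy-to-reach village centre.
biased_weights = {"village_centre": 0.85, "outer_nomads": 0.10, "bush_nomads": 0.05}

def survey(weights, n=20_000):
    """Draw n households with the given sampling weights; return observed rate."""
    names = list(groups)
    hits = 0
    for _ in range(n):
        g = random.choices(names, weights=[weights[x] for x in names])[0]
        hits += random.random() < groups[g][1]
    return hits / n

proportional = survey({g: share for g, (share, _) in groups.items()})
biased = survey(biased_weights)
print(f"true {true_rate:.3f}  proportional {proportional:.3f}  biased {biased:.3f}")
```

With these made-up numbers the biased design misses roughly a third of the malnutrition, which is exactly the kind of error that sends aid to the wrong places.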

Surveys by non-governmental organisations - as opposed to bodies like the UN and the WHO - are notorious for such design flaws. NGOs often lack the time to do rigorous surveys, and most of their staff have clinical rather than epidemiological training.

When famine hit Ethiopia in 1999, for example, aid agencies rushed to the area, and many tried to quantify the problem's scale. Three years after the first tranche of studies, an analysis was published of 67 cluster surveys produced by nine NGOs (Journal of the American Medical Association, vol 292, p 613). Only six met standard criteria for accuracy.

Researchers at the London School of Hygiene & Tropical Medicine have found similar problems with surveys run by NGOs in Nepal, where there was fierce fighting between the government and Maoist rebels until last year's ceasefire. And of six studies of death rates in Darfur reviewed last year by the US National Academy of Sciences, none was rated as having a high level of accuracy.

Paul Spiegel of the UN High Commissioner for Refugees (UNHCR) in Geneva, Switzerland, who led the Ethiopian research, says such errors mean the most needy can miss out. "If a survey finds a high mortality rate then that is where the donor money will go," he says. "Even though there is somewhere that needs it more."

It was methodological criticisms that were levelled at the most controversial conflict research of recent years, the study in The Lancet of mortality in Iraq after the US-led invasion in 2003 by researchers at Johns Hopkins University in Baltimore, Maryland, and Iraqi colleagues (vol 368, p 1421). Previous estimates for the number of Iraqi casualties due to the military action had been in the range of tens of thousands. Those that covered the same time frame had, however, relied on "passive surveillance" - in other words keeping a running tally of deaths reported to the authorities or news media, a method acknowledged to produce underestimates.

The study in The Lancet, by contrast, used the cluster sampling method, selecting 47 clusters throughout the region, and questioning 40 families in each one. As is standard, the aim was to count all the extra deaths over and above the normal death rate that had occurred since the invasion, both direct casualties and people who had died as a result of hazards such as more crime and worse healthcare. The team reported that the death toll now stood at around 655,000.
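
The "excess deaths" arithmetic behind such estimates is simple in principle: project the survey's death rate onto the population, subtract the pre-war baseline, and scale by the period covered. The figures below are illustrative round numbers, not the Lancet team's actual inputs.

```python
# Illustrative excess-mortality calculation (all numbers invented for clarity).
population = 26_000_000   # rough national population
baseline_rate = 5.5       # deaths per 1000 people per year, pre-war
wartime_rate = 13.0       # deaths per 1000 people per year, from the survey
years = 3.3               # period covered by the survey

# Excess deaths = (wartime rate - baseline rate) applied to the whole
# population, accumulated over the survey period.
excess_per_year = population * (wartime_rate - baseline_rate) / 1000
excess_total = excess_per_year * years
print(f"excess deaths: {excess_total:,.0f}")
```

Because the whole estimate is the product of the population and a small difference between two noisy rates, modest errors in either rate translate into very wide uncertainty bounds, which is one reason such figures are so contested.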

President George Bush poured scorn on the figure the day it was released, and a spokesman for the then UK prime minister Tony Blair was equally dismissive. Some of the criticisms from the war's supporters appeared politically motivated and uninformed, including, for example, disparagement of the very concept of cluster sampling. But certain academics in the field have made more reasoned critiques.

One of the chief criticisms involves a possible source of error sometimes referred to as "main street bias". The researchers selected their households within each cluster by picking a main commercial street and then randomly selecting a residential street that intersected with it. They then picked a random house on that street and began from there. Critics such as Michael Spagat of Royal Holloway, University of London, say this method excludes streets that do not join the main street, which would tend to be quieter and less likely to be the scene of attacks such as car bombs. By focusing on busier areas, the team may have picked households with high death rates and inadvertently exaggerated the overall toll, he argues.
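
The effect Spagat describes can be illustrated with a toy simulation. The household counts and death risks here are invented: households near the main street are given a higher risk, so a design that only ever reaches them overstates the overall rate.

```python
import random

random.seed(2)

# Toy model: households on streets intersecting the main street face a
# higher risk than households on back streets (rates invented).
n_main, n_back = 4_000, 6_000
p_death_main, p_death_back = 0.06, 0.02

population = [("main", p_death_main)] * n_main + [("back", p_death_back)] * n_back
true_rate = (n_main * p_death_main + n_back * p_death_back) / (n_main + n_back)

def survey(households, n=2_000):
    """Sample n households and count simulated deaths."""
    sampled = random.sample(households, n)
    return sum(random.random() < p for _, p in sampled) / n

full_sample = survey(population)                                # unbiased design
main_only = survey([h for h in population if h[0] == "main"])   # "main street bias"

print(f"true {true_rate:.3f}  unbiased {full_sample:.3f}  biased {main_only:.3f}")
```

Whether the real survey suffered from this effect depends on how streets were actually selected, which is precisely what the two sides dispute.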

The Johns Hopkins researchers reject this criticism, saying any residential streets that did not intersect a main street were specifically added into the random street-selection process. If anything, they say, the study is likely to have underestimated the death toll because entire households could have been killed, leaving no one to report the deaths to the survey team.

This is a general problem with all cluster sampling studies. "If a whole family has been hit by a bomb then you can't interview them," says Pedersen. "That creates a sizeable downwards bias."

The debate goes on. It may well be reignited by further work due to be published soon. Several groups have been re-analysing the data on which the study in The Lancet was based, and expect to publish their results in the next few weeks, while the WHO has independently carried out another cluster sampling survey, which could be published within a few months.

It was not easy for the Johns Hopkins and Iraqi team to have their work dissected in public (see New Scientist, 25 April, p 44), but the process may ultimately benefit the field, says Pedersen. The discussion has prompted many researchers to think about how potential biases such as those outlined by Spagat could creep in. "The debate has had a positive effect in many ways," he says.

Partly because of studies such as Spiegel's, NGOs also seem to be paying more attention to their methods. In Ethiopia, the government now coordinates nutrition surveys and helps train NGOs in survey methods. And in 2004 a large international group of aid agencies and academics produced a set of guidelines on issues such as cluster selection. The renewed focus on accuracy may be starting to pay off. When Spiegel looked at surveys lodged with the UN, while mortality studies had not improved in quality, the proportion of nutrition surveys that met his criteria had increased from 11 per cent in 1993 to 52 per cent in 2004 (Emerging Trends in Epidemiology, vol 4, p 10). "Everyone now agrees that we need to make decisions based on evidence," says Spiegel.

Fifteen years after the ambush in Mali that left Anne Hatløy questioning her career, she is still in the same job. Like many of her colleagues, she says what keeps her going is the need to get reliable data on how people are suffering - which ultimately helps reduce it. "We need to be aware of the situation in places where others don't dare to go," she says. "Our work gives a voice to people in those situations."

From issue 2615 of New Scientist magazine, 01 August 2007, pages 34-37