voxdev.org

Did COVID-19 vaccination in Africa progress quicker than we thought?

Official records may have painted an overly bleak picture of vaccination progress in low- and middle-income countries, particularly in sub-Saharan Africa. New research suggests that these countries may have progressed faster than previously thought.

Good data is fundamental for sound policymaking, yet data quality often receives too little consideration and scrutiny (Dillon et al. 2020, Jerven 2013). In Markhof et al. (2025), we show that data quality issues substantially affect our understanding of one of the foremost policy challenges of recent years: COVID-19 vaccination progress.

In a sample of 36 low- and middle-income countries (LMICs), survey estimates of vaccine coverage from longitudinal phone surveys exceed the official records reported in administrative statistics by 47% on average (Figure 1). This pattern is particularly striking and consistent in sub-Saharan Africa, suggesting at times vastly different progress of vaccination campaigns depending on the data source consulted.

Figure 1: Baseline survey estimates compared with administrative records

Note: Estimated COVID-19 vaccine coverage rates in survey and administrative data. Survey estimates are for purposively selected (main) respondents. Administrative data are calculated for the adult population (aged 15+), assuming no vaccinations were administered to children below 15. EAP = East Asia and Pacific; ECA = Europe and Central Asia; LAC = Latin America and Caribbean; MEA = Middle East and Northern Africa; SSA = Sub-Saharan Africa.
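The adult-population adjustment described in the note can be sketched as follows; all figures below are hypothetical and serve only to illustrate the arithmetic.

```python
# Hypothetical figures: converting an administrative count of people
# vaccinated into an adult (15+) coverage rate, assuming no vaccinations
# were administered below age 15, as in the figure note.
people_vaccinated = 3_000_000   # hypothetical administrative count
population_total = 20_000_000   # hypothetical total population
share_under_15 = 0.40           # hypothetical share of population under 15

adults = population_total * (1 - share_under_15)
adult_coverage = people_vaccinated / adults
print(f"Adult (15+) coverage: {adult_coverage:.1%}")  # Adult (15+) coverage: 25.0%
```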

Such misalignment between different data sources has troubled development practitioners for decades, not least in the field of vaccination (Galles et al. 2021, Cutts et al. 2016, Sandefur and Glassman 2015, Burton et al. 2009, Murray et al. 2003).

Do errors in survey data bias COVID-19 vaccine uptake estimates?

To causally investigate potential errors in survey-based vaccine coverage estimates, we conducted six survey experiments in sub-Saharan Africa, including in Burkina Faso, Ethiopia, Malawi, Nigeria, and Uganda. Each experiment varied one survey design aspect at a time, allowing us to test whether, and by how much, changes in survey design cause changes in estimated vaccination coverage (Figure 2).

We find that the discrepancy between administrative records and survey estimates is almost cut in half (reduced by 42% on average) once the group of household members for whom data is collected closely represents the national population. This is often not the case in multi-topic surveys, which interview a respondent who is broadly knowledgeable across a wide range of topics concerning the household (e.g. the household head). For individual-level questions such as vaccine uptake, the resulting samples may consequently overrepresent certain population groups, such as men, older people, and household heads.

Compared to such respondent selection, we achieve greater representativeness of the sample in two ways: by selecting a respondent at random, or by asking a purposively selected respondent to also report on the vaccination status of all other household members (‘proxy reporting’). While proxy respondents in the latter approach at times miss some completed vaccinations by other household members, both approaches recover estimates of vaccine uptake that are much more representative of the national population.

At the same time, survey estimates are unaffected by several other design choices and potential biases. Estimated vaccine coverage is unchanged when inducing experimenter demand by telling respondents explicitly that we expect to hear that they are (not) vaccinated (de Quidt et al. 2018), when interviewing respondents for the first time instead of repeatedly, or when analysing a nationally representative sample of households instead of the sub-sample included in the phone survey. Further, the same large discrepancies between survey and administrative data arise when surveying respondents in person, instead of over the phone.

After adjusting for errors in the survey data, there remains a statistically significant, average gap of 9 percentage points between survey estimates and administrative records in sub-Saharan Africa, at a time when most of these countries had barely vaccinated 15% of their population, according to the official records.

Figure 2: Deviation of survey estimates from administrative records under different survey design choices

Are the administrative vaccination records flawed?

Since survey errors alone are unable to explain the puzzle of misaligned vaccination statistics, we next spotlight potential flaws in administrative vaccination records. These are the numbers that shaped public discourse and policymaking on COVID-19 vaccination progress: the official statistics released by each country’s government and compiled in the Our World in Data COVID-19 vaccination tracker (Mathieu et al. 2021).

We cannot test errors in the administrative data with the same rigor as in the survey data. However, the evidence we compile on flaws and inaccuracies in administrative records, using data from all 134 LMICs worldwide, suggests that, despite being designated ‘official’, administrative records provided an incomplete or at least delayed snapshot of the true vaccination rate at any given point in time.

Two patterns are evident when probing the official records. First, some data quality issues are widespread across all LMICs. Second, these issues tend to be more acute in sub-Saharan Africa than in other LMICs (Figure 3).

We find that the average interval between data reports is more than twice as long in sub-Saharan Africa as in LMICs outside the region, and virtually every country in sub-Saharan Africa has gaps of one month or longer without an update to its vaccination rate. Around two-thirds (67%) of sub-Saharan African countries have gaps of two months or longer, compared with 41% of other LMICs.
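Reporting gaps of this kind can be measured from the dates on which a country’s figures were updated. A minimal sketch, using hypothetical report dates:

```python
# A minimal sketch with hypothetical report dates: find stretches of
# 30 days or longer without an update to a country's vaccination figures.
from datetime import date

report_dates = [date(2021, 3, 1), date(2021, 3, 15),
                date(2021, 5, 20), date(2021, 8, 1)]  # hypothetical

# Pair each report with the next one and keep pairs at least 30 days apart.
gaps = [(start, end, (end - start).days)
        for start, end in zip(report_dates, report_dates[1:])
        if (end - start).days >= 30]
for start, end, days in gaps:
    print(f"{start} -> {end}: {days} days without an update")
```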

Infrequent reporting results in administrative figures often being updated in bulk. Over half of the countries in sub-Saharan Africa record jumps of 5 percentage points or more from one report to the next. Taken at face value, and after adjusting for the number of days that passed since the last report, the sudden progress reported in some LMICs would rival some of the most successful days of vaccination in high-income countries. Finally, we find that uncertainty over the correct population counts in many LMICs leads different sources to report vaccination rates that differ by up to 5 percentage points (in the largest decile) for the exact same number of people vaccinated, simply because they assume different population sizes.
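The denominator problem can be illustrated with hypothetical numbers: the same count of people vaccinated yields different coverage rates under different assumed population sizes.

```python
# Hypothetical illustration of denominator uncertainty: identical numerator,
# different assumed population sizes, hence different coverage rates.
people_vaccinated = 4_000_000                     # hypothetical
population_estimates = {"source A": 18_000_000,   # hypothetical population
                        "source B": 21_000_000}   # counts from two sources

rates = {src: people_vaccinated / pop for src, pop in population_estimates.items()}
spread_pp = (max(rates.values()) - min(rates.values())) * 100
for src, rate in rates.items():
    print(f"{src}: {rate:.1%}")
print(f"Spread: {spread_pp:.1f} percentage points")
```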

Figure 3: Irregularities in administrative records

What did we miss about COVID-19 vaccination progress in Africa?

Based on our experimental evidence, we can create ‘error-corrected’ survey estimates of vaccine uptake using countries and survey rounds where we are able to correct for respondent selection effects.

The adjusted survey estimates suggest a much quicker progression in countries’ vaccination campaigns than what these countries were credited for in the public discourse. Taking the initial COVAX target of 20% population coverage as an example, our error-adjusted survey estimates suggest that this milestone was reached a year earlier in the Gambia, at least five months earlier in Burkina Faso and Kenya, over four months earlier in Nigeria, and at least three months earlier in Malawi. In Uganda, where progress was quicker than in the aforementioned countries, vaccinations crossed the 40% mark at least a month earlier than the administrative data suggest (Figure 4).
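Milestone comparisons of this kind rest on estimating when a coverage series crosses a target. A minimal sketch, linearly interpolating between two hypothetical observations that bracket the 20% COVAX target:

```python
# Estimate when coverage crossed a milestone by linear interpolation between
# the two observations that bracket it (all dates and rates hypothetical).
from datetime import date, timedelta

target = 0.20                          # COVAX milestone: 20% coverage
before = (date(2022, 1, 1), 0.12)      # hypothetical observation below target
after = (date(2022, 4, 1), 0.28)       # hypothetical observation above target

(d0, r0), (d1, r1) = before, after
frac = (target - r0) / (r1 - r0)       # position of target between the two rates
crossing = d0 + timedelta(days=round((d1 - d0).days * frac))
print(f"Estimated 20% crossing date: {crossing}")  # 2022-02-15
```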

Importantly, these results do not overturn the fact that vaccination rates in Africa trailed those in wealthier parts of the world. Neither do they mean that vaccine supply to Africa was not inequitable, scarce, and delayed. Taking the error-adjusted survey estimates at face value would still mean that, on average, vaccination rates in sub-Saharan Africa were only a third as high as those in high-income countries at the time.

Figure 4: Progress in administrative records versus error-adjusted survey estimates

Investing in resilient health systems necessitates investing in strong data systems

Not only do our results aim to ‘set the record straight’, but they also deserve broad attention from researchers and development practitioners for two main reasons:

First, they highlight that key lessons from the COVID-19 pandemic are not confined to strengthening health systems: they should also include investments in reliable, robust, and resilient data systems.

Second, our results serve as a reminder that, while administrative data is increasingly available and popular in low-income countries, this data is equally subject to measurement error and deserves the same scrutiny as any other data source.

We thus close with a call to invest in, and give careful consideration to, data quality (in survey data as in administrative records) and to the possibly diverging policy conclusions that different data sources may support.

References

Burton, A, R Monasch, B Lautenbach, M Gacic-Dobo, M Neill, R Karimov, L Wolfson, G Jones, and M Birmingham (2009), “WHO and UNICEF estimates of national infant immunization coverage: Methods and processes,” Bulletin of the World Health Organization, 87(7): 535–541.

Cutts, F T, P Claquin, M C Danovaro-Holliday, and D A Rhoda (2016), “Monitoring vaccination coverage: Defining the role of surveys,” Vaccine, 34(35): 4103–4109.

Dillon, A, D Karlan, C Udry, and J Zinman (2020), “Good identification, meet good data,” World Development, 127: 104796.

Galles, N C, P Y Liu, R L Updike, N Fullman, J Nguyen, S Rolfe, A N Sbarra, et al. (2021), “Measuring routine childhood vaccination coverage in 204 countries and territories, 1980–2019: A systematic analysis for the Global Burden of Disease Study 2020, Release 1,” The Lancet, 398(10299): 503–521.

Jerven, M (2013), “Poor numbers: How we are misled by African development statistics and what to do about it,” Cornell University Press.

Markhof, Y, P Wollburg, and A Zezza (2025), “Beyond the records: Data quality and COVID-19 vaccination progress in low- and middle-income countries,” Journal of Development Economics, 174: 103449.

Mathieu, E, H Ritchie, E Ortiz-Ospina, M Roser, J Hasell, C Appel, C Giattino, and L Rodés-Guirao (2021), “A global database of COVID-19 vaccinations,” Nature Human Behaviour, 5(7): 947–953.

Murray, C J L, B Shengelia, N Gupta, S Moussavi, A Tandon, and M Thieren (2003), “Validity of reported vaccination coverage in 45 countries,” The Lancet, 362(9389): 1022–1027.

de Quidt, J, J Haushofer, and C Roth (2018), “Measuring and bounding experimenter demand,” American Economic Review, 108(11): 3266–3302.

Sandefur, J, and A Glassman (2015), “The political economy of bad data: Evidence from African survey and administrative statistics,” The Journal of Development Studies, 51(2): 116–132.
