Meaning and Measurement: Towards High EPMM Indicator Validity


By: Yeeun Lee, Master's Student at the Harvard T.H. Chan School of Public Health and Research Assistant at the Maternal Health Task Force

Whether you know it or not, indicator validity matters to you. To illustrate why, let’s take the problem of disrespect and abuse (D&A) of women during childbirth. D&A is a violation of human rights, negatively impacts quality of care, and creates mistrust of the health system. What’s more, it is a common phenomenon: pregnant women all over the world are abused during one of the most vulnerable times in their lives.

“We want nurses who are kind… who would be patient enough to tell us what is happening to us… Most of them (nurses) do not….”

Undeniably, we all want to reduce D&A during childbirth, which is why measuring its prevalence is necessary: it allows us to better understand and track the problem. This is where indicator validity comes in. Imagine a scenario in which researchers measured the prevalence of D&A during childbirth in several different ways and arrived at several different estimates. That is exactly what happened. In 2017, Sando et al. found that five studies using the same seven categories of D&A during childbirth obtained prevalence estimates that varied widely, from 15% to 98%, despite the facilities being similar in terms of volume of births, provider types, and clientele. If the five papers used the same conceptual framework, why were the prevalence rates so different? Were they actually measuring the same concept? Certainly, it could be that the true rates differ. However, since the five studies were all conducted in resource-limited settings with similar maternity delivery systems, it is unlikely that the wide range in prevalence rates is explained solely by real differences in the study settings and populations. Other explanations for these differences include issues with indicator validity: for instance, the researchers may have used different definitions and measurement methods to capture the construct of D&A.

There are many ways that experts define indicator validity. A review by Benova et al. (2019) grouped the most common definitions into four categories: meaning (what do we want the indicator to measure, and does it measure what we want it to?), measurability (is it practical to measure this indicator?), measurement (what measure is good enough to accurately assess what we want to measure?), and meaningfulness (what can be achieved from using this indicator in a given time and place?).

Figure: Infographic describing the most common definitions of indicator validity (from Benova et al., 2019).

Ways to assess the validity of an indicator vary depending on the level of the indicator. Some health measures track phenomena at the intervention level (e.g., coverage, quality), while others track them at the patient level (e.g., satisfaction). Yet others try to capture phenomena at the health financing, health system, or health policy level.

Policy-level indicators often suffer from unclear definitions that fail to capture the underlying construct being measured. This is a major threat to validity because if it is unclear what an indicator is designed to measure, it is difficult to assess whether it is succeeding. The lack of publicly available and standardized data sources further complicates measuring these types of indicators. Given the high burden of collecting and reporting on health indicators, it is imperative to assess their validity in each of the four categories described above. However, methods to assess the validity of policy-level indicators are not well described.

To address this gap, Jolivet et al. created a protocol to validate 10 of the 25 (40%) health system- and policy-level indicators developed in Phase II of the work to define a comprehensive monitoring framework for the World Health Organization (WHO) report, “Strategies toward Ending Preventable Maternal Mortality (EPMM).” Since these indicators were selected as the best available measures for tracking progress on the key recommendations outlined in EPMM, it is critical to know whether they effectively measure what they intend to measure. Let’s dig into the validation of one of them: “legal status of abortion.”

Unsafe abortion is one of the leading causes of maternal mortality across the globe; according to the WHO, between 4.7% and 13.2% of maternal deaths are due to unsafe abortion. Because legal abortions have been shown to be safer than illegal ones, “legal status of abortion” was chosen as a critical indicator for the EPMM theme “address all causes of maternal mortality, reproductive and maternal morbidities and related disabilities.”

However, even in countries where abortion is legal, providers may apply restrictions that are not required by law, potentially limiting access to safe abortion. For instance, imagine a country in which compulsory counselling and judicial authorization in cases of rape are not codified in the law. If providers in that country nonetheless apply those restrictions, abortion, despite being legal, is not consistently and equally accessible to all women who seek it. Providers who require a court ruling in cases of rape restrict women’s access to abortion on a ground for which it is legal, which could push women to seek unsafe abortions or to carry an unwanted pregnancy to term. Moreover, if extra-legal requirements vary by geography, this further threatens the validity of legality as a proxy for access. If judicial authorization is more common among providers in rural areas, for example, women who live in rural areas may have to travel long distances to reach a provider willing to perform the procedure without the extra-legal barrier.

As conceptualized, “legal status of abortion” assumes that abortion being legal will improve women’s access to safe abortion, but if there are extra-legal provider-level barriers, this assumption doesn’t hold. As the example above illustrates, legality of abortion may be a poor proxy for access to safe abortion in many settings.

The Improving Maternal Health Measurement (IMHM) Project is working towards improving indicator validity to ensure that we are a) measuring what really matters, b) measuring it accurately, and c) measuring it in a way that will lead to feasible and positive change towards ending preventable maternal mortality. The research described in the protocol by Jolivet et al. proposes novel methods to assess the validity of indicators used to monitor key distal determinants of maternal health and survival. As health system and policy factors gain salience in global and national efforts to end all preventable maternal deaths, indicators to track these factors become critical tools for gauging progress, and appraising the validity of such indicators takes on greater urgency.