Why the Global Health Security Index failed
Introduction
During the coronavirus pandemic, an image was mockingly circulated online. It claimed to show “the countries best and worst prepared for an epidemic,” based on data from the 2019 Global Health Security Index (GHSI). The joke was that this confident ranking, published just before the COVID-19 pandemic, turned out to be dramatically wrong: countries rated highest, like the United States and the United Kingdom, suffered catastrophic failures, while others ranked far lower managed the crisis with far greater success.
The story didn’t end there. After the original index was shown to have failed, its creators—among them the Nuclear Threat Initiative, Johns Hopkins Center for Health Security, and the Economist Intelligence Unit—went back to the drawing board. In 2021, they published a new version of the index. Yet strangely, despite everything that had happened, the results remained almost unchanged.
This index was no small affair: over USD $15 million was spent on the project, funded mostly by philanthropic organisations including the Open Philanthropy Project, the Bill and Melinda Gates Foundation (now simply the Gates Foundation), the Robertson Foundation, and the Rockefeller Foundation. The research team included three project leaders, 13 authors, 20 experts, and dozens more acknowledged contributors. Three large reports were produced in six languages (316, 260, and 63 pages respectively), along with an 85-page reference document for each of the 195 countries analysed. The sheer scope of the effort lends the GHSI an aura of authority. But as we will see, that authority is misplaced.
Part I: Data
If we want to identify problems with this index, a good place to start would be the criteria themselves. Quite a few of them contain serious flaws. For each of these I have used the newer 2021 report’s findings.
Criterion 6.1.6a)
“Does the government’s authority extend over the full territory of the country?”
Russia and Thailand are listed as not having full territorial control in 2021. This is highly questionable. Russia did not enter the war in Ukraine until February 2022 and maintained control over its territory in 2021. Thailand has no active territorial disputes or contested areas; its score is completely unjustified.
Both India and Pakistan are listed as having authority over their “full territory”, on the reasoning that every part of Kashmir is held by at least one of them. Sudan, conversely, is marked down despite the fact that, in 2021, all regions it claimed but did not control were held by recognized states (Egypt or South Sudan). Serbia is likewise penalized for Kosovo’s independence.
Criterion 3.3.1a)
“Does the country have in place an Emergency Operations Center (EOC)?”
Liberia is listed as lacking an EOC despite operating a Public Health EOC. The report claims “there is insufficient evidence”, but I found it easily: Liberia PHEOC, CDC report (p. 31), WHO source.
Cuba is also listed as lacking an EOC, despite having one as part of the regional organisation PAHO.
Criteria 3.3.1a) and 3.3.1b)
“Does the country have in place an Emergency Operations Center (EOC)?”
“Is the Emergency Operations Center (EOC) required to conduct a drill for a public health emergency scenario at least once per year or is there evidence that they conduct a drill at least once per year?”
Japan is somehow listed as lacking an EOC under 3.3.1a, yet under 3.3.1b, its apparently non-existent EOC is described as conducting regular drills. These cannot both be true.
Criterion 3.5.2b)
“Is there evidence that senior leaders (president or ministers) have shared misinformation or disinformation on infectious diseases in the past two years?”
Nepal is penalized because its president said COVID-19 was like the flu. Yet other countries made similar statements without consequence. Speaking about Covid, Estonian Interior Minister Mart Helme stressed that there was no emergency in Estonia and compared the virus to a common cold, to be treated with home remedies such as blueberries. Estonia still received a full score. The French health minister said there was “practically no risk” of importing Covid-19 and that the “risk of a spread of the coronavirus among the population is very small”. France also received a full score.
Criterion 6.1.5a)
“Is this country presently subject to an armed conflict, or is there at least a moderate risk of such conflict in the future?”
Ireland and Israel are apparently subject to the same level of armed conflict. Ireland’s last major conflict, the Troubles, ended in 1998 with roughly 2,000 total deaths over 30 years. In contrast, over 2,000 people died in Gaza during 2014 alone.
Criterion 6.1.4a)
“How likely is it that domestic or foreign terrorists will attack with a frequency or severity that causes substantial disruption?”
Israel is listed as facing only a moderate threat of terrorism. Whatever your opinions of the state of Israel, it is hard to justify the claim that the threat it faces is only moderate. North Korea, meanwhile, is inexplicably scored as facing a terrorism threat despite having no record of being a target.
Criterion 5.5.3a)
“Is there a publicly identified special emergency public financing mechanism and funds which the country can access in the face of a public health emergency (such as through a dedicated national reserve fund, an established agreement with the World Bank pandemic financing facility/other multilateral emergency funding mechanism, or other pathway identified through a public health or state of emergency act)?”
China is listed as lacking national emergency funds, but evidence clearly shows otherwise. There are well-established mechanisms to use these funds: emergency response laws, evidence of use, and an entire Ministry of Emergency Management.
The problems go beyond simple inaccuracies. Many of the criteria themselves are conceptually flawed.
Part II: Criteria
Criterion 6.1.6a again serves as a key example.
“Does the government’s authority extend over the full territory of the country?”
This question assumes that incomplete territorial control impedes national policy implementation. But it forgets that those territories are still governed by someone. The index should focus on how well a government promotes health security in the areas it does control. Moreover, the judgment about which country “should” control a disputed area is fraught with difficulty and cannot be answered with a single 0 or 1.
Criterion 5.5.4c)
“Has the country fulfilled its full contribution to the WHO in the past two years?”
While commendable, fulfilling WHO contributions has little bearing on internal health security capacity. International commitments are valuable but distinct from domestic preparedness.
All criteria in 3.7 (Trade and travel restrictions), e.g. 3.7.2a)
“In the past year, has the country implemented a ban, without international/bilateral support, on travelers arriving from a specific country or countries due to an infectious disease outbreak?”
Countries are penalized for implementing unilateral travel bans during outbreaks, despite the clear and widely acknowledged fact that travel restrictions can reduce the importation of disease. Whether such policies are politically desirable is another matter—but this is supposed to be a health security index.
Criterion 3.4.1a)
“Does the country meet one of the following criteria? Is there public evidence that public health and national security authorities have carried out an exercise to respond to a potential deliberate biological event (i.e., bioterrorism attack)? Are there publicly available standard operating procedures, guidelines, memorandums of understanding (MOUs), or other agreements between the public health and security authorities to respond to a potential deliberate biological event (i.e., bioterrorism attack)?”
This criterion rewards countries for publicly documenting bioterrorism exercises. But requiring such exercises to be public misunderstands national security: secrecy is often essential in defence. This is especially troubling given that this criterion is the most heavily weighted in the entire index.
Criteria from 5.5.4 (Commitments made at the international stage) and 5.6 (Commitment to sharing data), e.g. 5.5.4a)
Is there evidence that senior leaders (president or ministers), in the past three years, have made a public commitment either to:
- Support other countries to improve capacity to address epidemic threats by providing financing or support?
- Improve the country’s domestic capacity to address epidemic threats by expanding financing or requesting support to improve capacity?
These indicators assume that supporting others equates to internal preparedness. But a country can be highly secure without engaging in international aid or declarations.
Criteria 3.1.2a and 3.2.2a)
“Does the country have a specific mechanism(s) for engaging with the private sector to assist with outbreak emergency preparedness and response?”
“Is there evidence that the country in the past year has undergone a national-level biological threat-focused exercise that has included private sector representatives?”
Countries are evaluated on whether they engage with the private sector in their preparedness plans. But for nations with fully nationalized healthcare systems, this criterion is irrelevant. Penalizing countries for not incorporating private actors they do not have is to misunderstand how their health systems function.
Criterion 2.3.1b)
“Is there publicly available evidence that the country reported a potential public health emergency of international concern (PHEIC) to the WHO within the last two years?”
Only countries that have recently reported a health emergency to the WHO are rewarded. But the absence of such reports may simply reflect a lack of recent crises—not a lack of capacity or transparency. Effectively this criterion rewards countries for having health emergencies, since having no emergencies scores a 0, the same as having an emergency and not reporting it.
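The perverse incentive here can be made concrete with a tiny sketch. This is not the GHSI's actual code, just a hypothetical function (`score_2_3_1b` is my own name) expressing the binary scoring the criterion describes:

```python
# Hypothetical sketch of criterion 2.3.1b's binary scoring, as described
# above: a point is earned only if a recent potential PHEIC was reported.
def score_2_3_1b(had_recent_emergency: bool, reported_to_who: bool) -> int:
    """Return 1 only if there was a recent emergency AND it was reported."""
    return 1 if (had_recent_emergency and reported_to_who) else 0

# A country with no emergencies at all...
assert score_2_3_1b(had_recent_emergency=False, reported_to_who=False) == 0
# ...scores exactly the same as one that concealed an emergency:
assert score_2_3_1b(had_recent_emergency=True, reported_to_who=False) == 0
# Only having an emergency (and reporting it) earns the point:
assert score_2_3_1b(had_recent_emergency=True, reported_to_who=True) == 1
```

The first two assertions are the whole problem: a spotless record and a cover-up are indistinguishable to the criterion.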
Part III: Underlying problems
The issues with the GHSI extend beyond poor data quality or inconsistencies in individual criteria. There are underlying problems that many criteria share. One example is how the index equates passing legislation with implementing it. Numerous criteria (I found 40!) assign full credit merely for the existence of laws, “strategic frameworks”, or official plans—without assessing whether they are enforced or even effective. For instance, a country earns the same score for having a law requiring antibiotic prescriptions (1.1.2a and 1.1.2b) as it does for actually implementing and enforcing it. Zoonotic disease preparedness is judged by whether planning documents exist (1.2.1a–c), not by whether risk has been reduced in practice. Criterion 1.4.1a gives one point for simply having a legal framework for biosafety, while 1.4.1b gives a second point for enforcing it, as though legislation and implementation were equally valuable. If a country fails the second of those criteria, meaning it does not enforce its biosafety laws, why on earth should it get points for having those laws anyway? Hilariously, countries are given points for signing the Biological Weapons Convention (5.3.1) even if they have violated its terms.
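The law-versus-enforcement pattern can be sketched in a few lines. Again, this is an illustration of the scoring structure described above (the function and its arguments are my own invention, not the GHSI's code):

```python
# Illustrative sketch of the scoring pattern in criteria like 1.4.1a/1.4.1b:
# one point for a biosafety law existing on paper, a second point for
# enforcing it - so an unenforced law still earns half the credit.
def biosafety_score(has_law: bool, enforces_law: bool) -> int:
    points = 0
    if has_law:
        points += 1   # 1.4.1a: a legal framework exists on paper
    if has_law and enforces_law:
        points += 1   # 1.4.1b: the framework is actually enforced
    return points

assert biosafety_score(has_law=True, enforces_law=True) == 2
# An unenforced law still earns half the available points:
assert biosafety_score(has_law=True, enforces_law=False) == 1
assert biosafety_score(has_law=False, enforces_law=False) == 0
```

The middle assertion is the objection in miniature: paper alone is worth as much as the gap between paper and practice.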
Another underlying problem with many criteria is the insistence on participation in international governance frameworks. For example, countries are penalized for not participating in initiatives like the Global Health Security Agenda (GHSA), the Proliferation Security Initiative (PSI), or the “Australia Group”, regardless of whether these memberships have any measurable effect on actual pandemic performance. Similarly, variables such as the submission of annual reports to the WHO (Criterion 5.1.1a) or to the World Organisation for Animal Health (Criterion 1.2.3a) are used to measure transparency. The index penalises countries that haven’t completed Joint External Evaluations (JEEs) and Performance of Veterinary Services (PVS) assessments (Criteria 5.4 and 5.5.2). All of these may be good things to do, but it is hard to argue that they are the only path to health security.
The GHSI is not merely a flawed implementation of a sound idea. It reflects ideological assumptions about what health security “should” look like: liberal democracy, private sector engagement, transparency, and alignment with international governance. Countries reflecting these ideals are scored highly—without any consideration for actual outcomes.
In some cases, countries are marked down for issuing trade or travel restrictions without international or bilateral coordination (Criterion 3.7), as though independent action during a crisis were inherently irresponsible. Similarly, laboratory capacity is judged not only by actual diagnostic capabilities but by whether facilities hold specific accreditations like ISO 15189 or CLIA (Criterion 2.1.2a)—standards that may not reflect local efficacy but rather compatibility with international benchmarks. In all these cases, national capacity is assessed not on intrinsic effectiveness, but on alignment with Western systems of governance. Rather than assessing how well a country can protect its population, the index frequently evaluates how well it fits into a predefined and ideologically loaded model of what its experts think preparedness should look like.
Their defence
In the GHSI’s second report in 2021 the authors attempt to defend their index in the face of accusations that their first report had been disproven by the performance of countries during the COVID-19 pandemic. They argued that the index was not intended to predict outcomes, only to document the “capacities” countries had on paper:
“Some countries found that even a foundation for preparedness did not necessarily translate into successfully protecting against the consequences of the disease because they failed to also adequately address high levels of public distrust in government and other political risk factors that hindered their response. Further, some countries had the capacity to minimize the spread of disease, but political leaders opted not to use it, choosing short-term political expediency or populism over quickly and decisively moving to head off virus transmission.
Those factors do not excuse but may explain why countries that received some of the top marks in the 2019 GHS Index responded poorly during the COVID-19 pandemic. As a measure of health security, the Index assigns the highest scores to countries with the most extensive capacities to prevent and respond to epidemics and pandemics. With its vast wealth and scientific capacities, the United States was ranked first in the 2019 GHS Index and again in the 2021 edition, although in both cases, the highest position was still measured to have critical weaknesses. Despite its ranking, the United States has reported the greatest number of COVID-19 cases, and its response to the pandemic has generally been viewed as extremely poor. The result highlights that although the GHS Index can identify preparedness resources and capacities available in a country, it cannot predict whether or how well a country will use them in a crisis. The GHS Index cannot anticipate, for example, how a country’s political leaders will respond to recommendations from science and health experts or whether they will make good use of available tools or effectively coordinate within their government. The Index does, however, provide evidence of the tools that countries have and the risks they need to address to protect their communities. Countries that fail to use those tools or address those risks to thereby enable an effective response should be held accountable. Shortcomings observed during COVID-19 must be fixed before the next public health emergency.”
This is extremely disingenuous. Here, the Index claims to focus strictly on measurable health security capacity—laws, institutions, policies, and infrastructure—arguing that it cannot and does not account for how leaders might choose to use or ignore those resources in a crisis. It states explicitly that it “cannot predict whether or how well a country will use them in a crisis,” citing this as a limitation rather than a flaw. And yet, the Index embeds within its scoring system several political variables—such as political stability, democratic governance, transparency, and participation in international norms—which it treats as indicators of health security. These elements are scored on the premise that they increase the likelihood that technical capacities will be used when needed. In other words, political structure is assumed to influence implementation.
This is the contradiction: if the Index scores political stability and democracy on the grounds that they improve a country’s capacity to act effectively in a crisis, it cannot then disclaim responsibility for predicting how political leaders will behave. It cannot have it both ways. If leadership decisions are truly unpredictable and beyond the Index’s scope, then political indicators should not be scored as part of preparedness. If they are scored, then their predictive failure must be acknowledged as a fundamental flaw in the Index’s design—not brushed aside.
Democratic governance is often valorised precisely because it is designed to ensure accountable leadership, public deliberation, and evidence-based decision-making. If, as the GHSI assumes, these features increase health security by making effective implementation more likely, then the Index is already making a judgment about political behaviour—it is assuming that democratic systems will produce leaders who use capacity well. When that turns out not to be the case, as with the U.S. during COVID-19, the explanation cannot simply be “we can’t predict how leaders will behave.” That unpredictability is precisely what the Index implicitly claimed to reduce by scoring democratic structures in the first place.
Conversely, if an authoritarian regime outperforms its score by enacting lockdowns, issuing travel bans, and centralizing control of resources, then the Index must grapple with the implications: either those political indicators are not reliable predictors of implementation, or the entire assumption that capacity can be meaningfully measured without reference to decision-making behaviour must be reconsidered.
By now it should be apparent that there is a much larger problem with this index than anything mentioned thus far: it isn’t actually measuring health security. I’ve pointed out problems with specific data points. I’ve discussed how some criteria aren’t as related to health security as they seem. But even the criteria that are related to health security, including the ones I think probably do indicate improved health security, aren’t actually measuring it. Nearly every criterion simply asks: “Does this country have the health policies that we, the experts, want it to have?”
If they wanted this to be a scientific endeavour, they could have found ways to actually measure health security. They could then take the results of this index and look for patterns: Which policies are most strongly associated with health security? What novel approaches to health security have we overlooked? What conditions of a country, such as its economic development, its geography, or its political system, affect its outcomes?
Part IV: Not just the GHSI
There is one last underlying problem I haven’t mentioned until now. While most of the criteria have detailed explanations for their scores for every country, a few are left to other organisations to score. The Economist “Intelligence Unit” is cited 22 times, and Economist Impact 10 times (citing themselves, nice). The World Bank is cited 10 times, various UN organisations 8 times, the WHO 11 times, the CIA once, the “World Policy Analysis Center” twice, and the “Wellcome Trust Global Monitor” twice.
As for indices, the Economist Intelligence Unit’s Democracy Index is cited twice (citing themselves, nice), as are the Corruption Perceptions Index and the Gender Inequality Index, once each. As far as I can tell, the Gender Inequality Index is based on real data such as the maternal mortality ratio, the share of parliamentary seats held by each sex, and women’s participation in the workforce. The Corruption Perceptions Index is quite perverse. It apparently captures “expert” and “business leader” assessments of various public sector corruption practices. Its sources include the Economist Intelligence Unit (citing themselves, nice), Freedom House, the World Bank, and the World Economic Forum. Asking what business leaders think about various governments is not a legitimate way to assess corruption; it is basically a way to find out which national governments business leaders dislike most. Even worse is the Democracy Index, which operates entirely on a “trust me” basis.
The GHSI is part of a much larger trend. It is not just the GHSI, nor is it limited to the Democracy Index and the Corruption Perceptions Index. There are countless indices claiming to rank countries on everything from democracy and freedom to corruption and development. At first glance, these indices seem to offer objective tools for comparison. In reality, they often encode a narrow cultural and ideological perspective.
Take the Index of Economic Freedom, for example. This index supposedly measures the level of economic freedom in a country. One might imagine that it would identify the range of permissible economic activities in a given country and score each country on the breadth of that range. It would then be up to a country’s citizens to decide: is economic freedom more important than its potential drawbacks, and at what point does more economic freedom no longer justify them? I would argue that the ability to bribe a politician represents an aspect of economic freedom: the freedom to trade without regard to politics or governance. I would also argue that the economic freedom gained from allowing bribery is far less important than the benefits of banning it, such as free and fair democratic governance. The Index of Economic Freedom disagrees. It measures corruption as something that limits economic freedom. From the standpoint of measuring economic freedom, especially if you accept that more economic freedom sometimes comes with drawbacks, this decision makes no sense at all. But if you reinterpret the Index of Economic Freedom as measuring “the kinds of economic freedom that The Heritage Foundation likes”, then it makes perfect sense. This index doesn’t actually measure economic freedom. It measures whether or not countries adopt the policies of the Heritage Foundation.
Many indices work this way. They begin with an ideological template—what democracy, freedom, or press rights should look like—and then assess countries based on how well they match. But instead of testing whether these methods actually produce better outcomes, they assume they do. This leads to circular logic:
- Define democracy according to Western liberal norms.
- Rank countries by how well they match those norms.
- Observe that Western countries score highest.
- Conclude that Western countries are the best at democracy.
What this actually proves is: Western countries are most similar to Western countries. This tautology has political consequences. Countries that diverge from the Western model—due to cultural, historical, or structural differences—are automatically downgraded, not because they are undemocratic or unfree, but because they are different. These rankings then feed into policy prescriptions, foreign aid decisions, and media narratives, reinforcing a civilizational hierarchy.
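The circularity can even be demonstrated mechanically. In the toy sketch below (entirely made-up country names and numbers), countries are scored by their similarity to a "norm vector" that is itself defined from the countries that end up on top, so the winner is determined by construction rather than by any outcome:

```python
# Toy demonstration of the circular logic: rank countries by similarity
# to a template derived from the countries that will end up ranked first.
# All names and numbers here are invented for illustration.
template = {"multiparty": 1.0, "private_sector": 1.0, "iso_accredited": 1.0}

countries = {
    "Westland":  {"multiparty": 1.0, "private_sector": 1.0, "iso_accredited": 1.0},
    "Otherland": {"multiparty": 0.0, "private_sector": 0.2, "iso_accredited": 0.3},
}

def similarity(profile: dict) -> float:
    """Higher = closer to the template. This is the whole 'methodology'."""
    return -sum((profile[k] - template[k]) ** 2 for k in template)

ranking = sorted(countries, key=lambda c: similarity(countries[c]), reverse=True)
# The template's source country wins by construction, regardless of outcomes:
assert ranking[0] == "Westland"
```

Note that nothing in the scoring function refers to any outcome at all; "Westland" tops the ranking purely because the template was traced around it.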
European vs Chinese approaches to medicine
Let’s consider a parallel case in the field of medicine. Western biomedicine is often seen as the gold standard, and in many respects it is extraordinarily effective. But traditional Chinese medicine has its own solutions to problems, solutions that could be very useful if studied scientifically.
Chinese medicine has two main approaches that I think deserve closer examination: herbal medicine and holistic care. The stronger case is for holistic care, which holds that instead of seeing a doctor only when you have a specific condition requiring a specific treatment, it is often worth seeing a practitioner regularly so they can help improve your diet and lifestyle to promote health. Many TCM practices surely lack evidence, but that only reinforces the point: more studies should be done into the effects of diet and lifestyle on health, rather than only into treatments for isolated conditions. The second approach, herbal medicine, also has its advantages.
Western medicine evolved out of the traditions of iatrochemistry and ultimately esoteric alchemy. The alchemists believed in the power of extraction - if a plant provides some properties when eaten, it must be possible to extract the essences out of it to provide those properties directly. This led to centuries of experiments in extracting elements and compounds out of things until eventually it transitioned into a medical practice. The person most responsible for this shift from alchemy to medicine was Paracelsus, whose full name was Philippus Aureolus Theophrastus Bombastus von Hohenheim, and who was born in Egg, Switzerland.
He advocated chemical remedies based on purified minerals and plant substances. He introduced “spagyria”, a form of alchemy focused on separating and recombining substances to enhance their healing power. He used mercury, arsenic, sulfur, and antimony compounds medicinally. He emphasized specific remedies for specific diseases, prefiguring modern targeted drug action. He also believed the essence of a plant or mineral could be extracted and concentrated to produce more powerful effects. Paracelsus saw each disease as having three separate cures depending on whether it was caused by the poisoning of sulphur, mercury, or salt, an idea he imported from medieval alchemy. Important to his theories were the four categories of elementals: undines corresponding to water, sylphs to air, salamanders to fire, and gnomes, whose name he coined, to earth. Paracelsus encouraged the use of laudanum, a tincture of opium, to treat a variety of problems. He is also known as the “father of toxicology” and a pioneer of the “medical revolution”.
The fact is that, despite the mystical and esoteric origins of our medical system, the end result has still proven to be immensely useful. Because we employ the scientific method, and use evidence-based medicine, it doesn’t matter so much that the origins of western medicine are so anti-scientific. We should apply the same idea to other medical systems. The WHO has accepted the use of Ayurveda (Indian medicine), TCM (traditional Chinese medicine), Unani (a Greco-Arabic medicine), and other medicines into its system. One that is especially important to me, as I am from Aotearoa New Zealand, is the Rongoā Māori health system. While aspects of each of these medicinal and health systems may have origins in spiritual beliefs, mysticism, or other ideas problematic to medical treatment, if we use the scientific method and evidence-based medicine, they can surely provide useful ideas and insights just as the esoteric practices of alchemy provided useful insights.
There is a counterargument I have faced in response to this line of thinking: if a medical practice works, then it is just “medicine”. On this view, practices from Western medicine, Chinese medicine, and so on stop being Western or Chinese once proven to work and simply join the universal category of “medicine”. This argument fails for two reasons.
Firstly, it doesn’t account for the incompatibility of these systems. Chinese medicine doesn’t simply offer different treatments for the same problems as Western medicine; the two have completely different approaches to health in general. Even if a TCM practice has been proven effective, and many practices have produced evidence to that effect, a Western-style doctor would never employ it, because the two systems approach health in entirely different ways.
Secondly, it doesn’t account for the underfunding of different systems of health and medicine. We could call this the “French Census Problem”. France bans the collection of racial data in the name of equality, but this makes it impossible to prove or redress racial discrimination. In the same vein, if we refuse to call medical practices by their names and refer to them all as “medicine”, it becomes impossible to see what we are overlooking and where our biases lie.
I hope the relation to the Global Health Security Index is clear. Sometimes there are a range of potential solutions to a problem, and the task should be to find which solutions best solve it. Indices that measure countries based on various criteria could be an excellent opportunity to find out which systems excel at solving different problems, and to find out which approaches should be followed. Instead we find that most indices are rigidly designed to test whether countries adopt a strict set of imposed solutions without testing how effective they are.
Imagine if a Chinese charitable organisation had developed a Global Medicine Index that ranks countries based on their adherence to Traditional Chinese Medicine. The highest scores go to those with widespread use of acupuncture, national programs for qi regulation, regular integration of pulse and tongue diagnosis into primary care, and strong participation in the International Society for Chinese Herbal Pharmacology. Countries are docked points for overuse of antibiotics or for relying too heavily on surgery and synthetic pharmaceuticals. In such a scenario, many Western countries would score poorly—not because they lack effective health systems, but because their model of medicine does not conform to the ideological framework the index assumes. The index would not be measuring health or medical outcomes per se; it would be measuring compliance with a particular vision of what medicine “should” look like, rooted in a specific historical and philosophical tradition.
This is precisely what the GHS Index does with health security. It creates a rigid, ideologically inflected model—centred around liberal-democratic governance, international reporting mechanisms, and bureaucratic formality—and evaluates countries on their adherence to that model, not on the actual outcomes or adaptability of their systems. When those highly rated systems fail—as in the case of the United States during COVID-19—the response is not to question the index’s assumptions, but to blame unpredictable leadership or “failure to implement.”
Just as Western medicine emerged from alchemical mysticism yet became powerful through the application of the scientific method, so too should global health governance evolve by testing different models of preparedness empirically—regardless of their ideological origins. To do otherwise is not to measure health security but to enforce conformity.
Part V: Ideology and Imperialism
None of this is accidental.
Every economic and political system generates an ideological justification for its power. Feudalism relied on divine right and religious hierarchy to legitimize social order. American slavery constructed racial science to justify human bondage. Capitalism invokes market liberalism and the “invisible hand” to naturalize inequality. And imperialism, both past and present, cloaks itself in the language of liberation, modernization, and democracy.
Today, the global order is shaped by a softer, more technocratic form of ideology—one rooted in the language of indicators, metrics, and benchmarks. Global indices that measure democracy, health, press freedom, and governance claim to be neutral tools for understanding and improving the world. But they are also ideological instruments. They embed specific assumptions—particularly Western liberal values—as if they were universal truths, and they rank countries according to how well they conform.
These rankings matter. When a state is labelled as deficient—on democracy, on pandemic preparedness, on human rights—it becomes easier to justify intervention. Rarely is that intervention overtly military; more often, it takes the form of sanctions, structural adjustment, “capacity building,” or development aid tied to policy compliance. The index doesn’t fire the missile—but it legitimizes the idea that some countries need to be corrected, fixed, or saved.
This is what we might call soft imperialism: the projection of power not through conquest, but through norms. Anyone can now look back at the colonization of the 19th century and see how language functioned as justification: Europeans claimed they were “civilizing the savages”, “modernizing backward nations”, or “bringing order to chaos”. Today, the same structure of thought persists. Western nations are framed as the guardians of “freedom”, “transparency”, and “human rights”; others, by contrast, are portrayed as “corrupt”, “opaque”, or “undemocratic”. Intervention is no longer called conquest—it is called “capacity building”, “stabilization”, or “governance reform”.
Conclusion
The Global Health Security Index failed its most important test: it could not predict which countries would respond effectively to a pandemic. More than a technical flaw, this failure reveals a deeper issue. Indices like the GHSI are not neutral tools—they reflect a worldview shaped by Western institutions, measuring how closely countries align with their standards rather than how well they meet the needs of their own populations.
This reinforces a harmful narrative: that poorer countries are struggling because they mismanage their resources or fail to meet “global” benchmarks. In reality, global inequality is not the result of poor decisions or insufficient reform—it is the legacy of imperialism, colonisation, and a global economic system designed to benefit a few at the expense of many.
When we use metrics that ignore this history, we turn structural inequality into a story about national failure. We reward conformity and punish difference, often overlooking models that work simply because they don’t fit the expected mold.
The problem isn’t measurement itself. The problem is what we choose to measure, how we interpret the results, and who decides what counts. Until we recognize that inequality is structural—not accidental—we will continue to mistake power for progress.