Introduction
“Follow the science” became the defining slogan of the COVID-19 pandemic. It was reassuring, authoritative – and deeply misleading. Not because science was irrelevant to pandemic policy, but because the slogan collapsed at least three fundamentally different things into a single phrase: genuine scientific evidence, provisional working models presented as settled fact, and value judgements disguised as technical conclusions. In doing so, it damaged both democratic deliberation and scientific credibility in ways that will be felt for a generation.
This essay argues that the pandemic revealed a structural problem in the relationship between science and public policy – one that extends well beyond epidemiology. The same tensions between evidence, uncertainty, and values that plagued pandemic decision-making are present wherever technical expertise is used to justify political choices: in macroeconomic policy, in climate action, in regulation. The pandemic simply made the problem impossible to ignore.
The core argument is simple, and it is not an argument against science. When confronted with a novel pathogen in early 2020, public health authorities did what they had to do: they reached for the best available models, drew on existing pandemic plans, and made decisions under radical uncertainty. That was appropriate. No one can be faulted for starting with imperfect knowledge. The critique is about what happened next. The provisional nature of those early models was never communicated to the public. The best-guess character of the initial response was never acknowledged. The need to learn, adapt, and revise as evidence accumulated was never framed as the plan – so when revisions came, they looked like failure rather than progress. And throughout, value judgements about acceptable risk, who should bear costs, and how to weigh competing goods were embedded within ostensibly scientific frameworks, shielded from the democratic deliberation they required.
Starting With Models: Reasonable Under Uncertainty
It is important to begin with a concession that is sometimes lost in retrospective critique: the early scientific response to SARS-CoV-2 was an exercise in genuine epistemic fog. In January and February 2020, almost nothing was known with confidence about the novel coronavirus. How it transmitted, who was most vulnerable, what the fatality rate was, whether immunity would be durable – all of this was uncertain. Decision-makers could not wait for certainty. They had to act.
The models they reached for were the ones they had. Pandemic planning in most countries, including Australia, had been built around influenza. The assumption that respiratory viruses spread primarily through large droplets and contaminated surfaces – so-called fomite transmission – was the default framework. It shaped the emphasis on hand hygiene, surface disinfection, fixed-distance rules (six feet in the United States, 1.5 metres in Australia), and the initial scepticism about masks for the general public. The early case fatality estimates, drawn from overwhelmed hospitals in Wuhan and northern Italy, were the best available data. The assumption that children were significant vectors, central to influenza transmission dynamics, was imported into COVID planning because there was no COVID-specific evidence to replace it.
None of this was unreasonable. Starting with an influenza-based framework was not a failure of science; it was science doing what it does when confronted with novelty – reasoning from the closest available analogue while waiting for better data. The early decisions – border closures, initial lockdowns, social distancing – were defensible given what was known at the time. Even if many of these decisions turned out to be imperfectly calibrated, they were made under conditions where the cost of underreacting was potentially catastrophic and the information needed to fine-tune the response did not yet exist. The error was not in the starting point. It was in how the starting point was communicated, and how long it took to move beyond it.
“Trust Me” Versus “This Is Our Working Model”
The communication posture from the outset was one of authority and certainty. “The science says” rather than “our current best understanding suggests.” “We know” rather than “we believe, based on the models available.” This framing choice – certainty rather than provisionality – was the single decision that set up every subsequent failure.
If public health authorities had said, “We are applying an influenza pandemic model because it is the best framework we have for a novel respiratory virus. This virus may behave differently in important ways, and we will update our guidance as we learn,” the public would have had a framework for understanding change. Revisions would have been expected. Updates would have been evidence of the system working, not evidence of incompetence or dishonesty.
Instead, by projecting certainty, every revision became a credibility crisis. When masks went from “don’t wear them” to “you must wear them,” when surface transmission went from dangerous to largely irrelevant, when schools went from plague vectors to probably safe – each shift damaged trust because the original position had been presented as settled knowledge rather than a working hypothesis.
Why did institutions choose this posture? The incentives were structural, not conspiratorial. Public health communication has long operated on a deficit model: experts know, the public must be told. There was a genuine fear that communicating uncertainty would cause panic or non-compliance. There was political risk asymmetry: leaders who underreacted and people died would be held accountable; leaders who overreacted and the worst didn’t materialise could claim credit for prevention. Media dynamics rewarded certainty and punished hedging – a public health official who said “we think, but we’re not sure” would have been savaged for lack of leadership. And once a posture of certainty was adopted, walking it back carried reputational costs that compounded over time.
These incentives are understandable. They may even have been rational for any individual decision-maker in the short term. But collectively they produced a communication model that was structurally unsustainable. The first time guidance changed, it would crack. By the third or fourth reversal, it would shatter.
The mask reversal was particularly damaging because it involved an outright noble lie. In several countries, officials actively told the public that masks were not effective. The real driver was supply: N95s and surgical masks were critically short, and healthcare workers needed them. Rather than being transparent – “masks likely help, but we need to preserve supply for hospitals; here is how to improvise” – officials presented what was essentially a supply-driven decision as a scientific judgement about mask ineffectiveness. When the guidance inevitably reversed, people had legitimate grounds to ask: if the science on masks was shaped by supply concerns rather than evidence, what other guidance had been similarly shaded?
Honesty about uncertainty is not a weakness in scientific communication – it is the foundation of durable trust. People can handle “we don’t know yet.” What they cannot handle, and should not be expected to handle, is being told “we know” and later discovering that the authorities didn’t.
Where the Science Got It Wrong – And How Slowly It Updated
Given that the starting models were provisional, the critical question is: how quickly did institutions update as evidence accumulated? The answer, in several important areas, is: far too slowly.
Aerosol Transmission
The most consequential example was the shift from the droplet/fomite model to an understanding of aerosol transmission. To be fair, the early evidence for aerosol transmission was suggestive rather than definitive. There were genuine measurement ambiguities, infectious dose was unclear, and some of the institutional caution – particularly at the World Health Organization – reflected differences in evidentiary thresholds rather than simple obstinacy. Bureaucratic caution under uncertainty is structurally predictable, and not all of it was unreasonable.
But the resistance went beyond reasonable caution. The droplet paradigm had dominated infection control thinking since 1910, when the public health official Charles Chapin urged the medical community to focus on contact and large "sprayborne" droplets and dismiss airborne transmission as unlikely. As Jimenez, Marr, and colleagues documented in their historical analysis, this framework became entrenched over the following century, leading to what they described as "systematic errors in the interpretation of research evidence on transmission." Only a handful of diseases – tuberculosis, measles, chickenpox – were accepted as airborne before COVID, and in each case recognition came only after prolonged resistance. The bias was not new; it was structural and decades-old.
When aerosol scientists began presenting accumulating evidence for airborne spread of SARS-CoV-2 in early 2020, they encountered this entrenched framework directly. Linsey Marr, a Virginia Tech engineer who had found influenza virus floating in aerosols in day care centres as early as 2010, described the resistance bluntly: "It was in textbooks that this is how diseases are transmitted, so that's what medical students learned and took to be true." In July 2020, 239 scientists co-signed an open letter in Clinical Infectious Diseases urging the WHO to acknowledge airborne transmission. The WHO's response, which Marr called a "grudging partial acceptance," modified its guidance only marginally, using far more certain language for droplet transmission than for aerosols. It took until April 2021 – more than a year into the pandemic – for the WHO to formally accept that COVID could spread by aerosols, and until April 2024 for the organisation to update its formal definitions of airborne transmission. Jose-Luis Jimenez, an aerosol chemist at the University of Colorado Boulder, described the eventual resolution as "finally the end of the most stubborn and senseless resistance to accepting this science." Meanwhile, the practical consequences of the delay were significant: enormous resources went into surface disinfection – what was later called "hygiene theatre" – while ventilation, air filtration, and CO₂ monitoring received far less attention than the emerging evidence warranted.
In the language of Imre Lakatos's philosophy of science, this institutional response had the characteristics of a degenerating research programme: rather than confronting an anomaly that challenged the prevailing model, the profession protected its hard core by making ad hoc adjustments to the protective belt. The guidance was quietly updated in stages, but the underlying model shift – from droplets to aerosols – was never openly acknowledged as the fundamental reorientation it was. The scientists who eventually forced the change were not epidemiologists or public health officials but engineers and aerosol physicists – outsiders to the discipline whose framework they were challenging.
Early Case Fatality Rates
The frightening mortality figures from early 2020 reflected two things that were not clearly separated: a novel pathogen and suboptimal treatment. Patients were placed on mechanical ventilators aggressively, following conventional protocols for respiratory failure. Over time, clinicians learned that prone positioning, corticosteroids such as dexamethasone, high-flow nasal oxygen, and more cautious use of invasive ventilation significantly improved survival. The early case fatality rates conflated the lethality of the virus with the limitations of initial clinical practice – but this distinction was not communicated to the public. The numbers that drove the most consequential policy decisions were the numbers from the period of greatest clinical ignorance.
Australia’s AstraZeneca Debacle
Australia’s vaccine rollout provided a case study in how risk communication can undermine the very objective it is meant to serve. Australia had secured AstraZeneca as its primary vaccine – the one available in volume, manufactured domestically, and ready to deploy. For older Australians, who faced the overwhelming burden of COVID mortality, the risk-benefit calculation was unambiguous: the vaccine’s protection against severe disease and death vastly outweighed the extremely rare risk of thrombosis with thrombocytopenia syndrome (TTS), estimated at roughly four to six cases per million doses.
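The shape of that risk-benefit calculation can be sketched with a back-of-envelope comparison. The TTS incidence below follows the roughly four-to-six-per-million figure cited above; every other number (infection probability, infection fatality rate, TTS fatality, vaccine efficacy against death) is an illustrative assumption chosen only to show the orders of magnitude involved, not a sourced estimate.

```python
# Back-of-envelope risk-benefit sketch for an older cohort.
# All rates except TTS incidence (~4-6 per million doses) are
# ILLUSTRATIVE assumptions, not sourced estimates.

def expected_deaths_per_million(p_infection, ifr, tts_per_million=5.0,
                                tts_fatality=0.05,
                                vaccine_efficacy_death=0.9):
    """Expected deaths per million people, with and without vaccination."""
    unvaccinated = 1_000_000 * p_infection * ifr
    vaccinated = (1_000_000 * p_infection * ifr * (1 - vaccine_efficacy_death)
                  + tts_per_million * tts_fatality)  # add rare vaccine harm
    return unvaccinated, vaccinated

# Hypothetical 70+ cohort: 10% chance of infection, 3% infection fatality rate
no_vax, vax = expected_deaths_per_million(p_infection=0.10, ifr=0.03)
print(f"without vaccine: {no_vax:.0f} expected deaths per million")
print(f"with vaccine:    {vax:.2f} expected deaths per million")
```

Under these stylised inputs the expected vaccine harm is a rounding error against the expected benefit for the older cohort – a contrast of orders of magnitude, which is precisely the message the flattened safe/unsafe framing failed to carry.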
The Australian Technical Advisory Group on Immunisation (ATAGI) recommended against AstraZeneca for people under 50, later revised to under 60, on the basis of the TTS risk. That recommendation was defensible on its own terms for the younger cohort, for whom both the clotting risk and the COVID risk needed to be weighed differently. But the communication of the decision was catastrophic. Rather than clearly conveying that AstraZeneca was strongly recommended for older Australians and that the caution applied only to younger age groups, the public messaging created a generalised fear of the vaccine that spread well beyond the population for whom the caution was relevant.
The result was that older Australians – the very people who stood to benefit most – hesitated or refused AstraZeneca, preferring to wait for Pfizer. The rollout to the most vulnerable population slowed dramatically during the critical window before the Delta variant arrived. People who could have been protected months earlier remained unvaccinated because the risk communication failed to distinguish between populations with fundamentally different risk profiles.
There was also a status dimension. AstraZeneca became the second-class vaccine – the one people turned down if they could. Pfizer became the prestige option. This had nothing to do with the science of efficacy against severe disease in older populations, where both vaccines performed well, and everything to do with how a nuanced, age-specific risk assessment was flattened into a generalised message of danger.
This episode encapsulates the essay’s central argument. The science on age-stratified COVID risk was clear. The science on the TTS risk was clear. The science on the benefit of rapid vaccination for older Australians was clear. What failed was the communication – the inability or unwillingness to convey a message more complex than a binary safe/unsafe judgement. And the cost was measured in delayed protection for the people who needed it most, at the moment they needed it most.
Children, Schools, and the 2021 Melbourne Lockdowns
The assumption that children were significant vectors of transmission was imported from influenza pandemic models, where it is well supported. For SARS-CoV-2, the picture turned out to be different. By mid-to-late 2020, evidence was accumulating that children were less susceptible to infection, far less likely to experience severe disease, and less efficient transmitters than adults. Schools, particularly primary schools, were not the amplification centres that influenza models predicted.
In the early months of 2020, when evidence was genuinely scarce, precautionary school closures were defensible. By 2021, they were not – or at least, they demanded a far more rigorous justification than they received. The evidence base had shifted substantially, and the costs of closure – educational, developmental, social, and mental health impacts on children, falling disproportionately on disadvantaged families – were increasingly well documented.
Melbourne’s experience stands as a case study in policy persisting beyond its evidentiary foundation. Melbourne endured six lockdowns totalling approximately 262 days, making it one of the most locked-down cities in the world. The later lockdowns, particularly the extended lockdown from mid-2021 through to late October 2021, occurred at a point when vaccination was well underway, when the age-stratified nature of COVID risk was thoroughly established, when the limited role of children in transmission was well documented, and when the harms of prolonged lockdown were undeniable.
By this stage, the question was no longer “is the science uncertain?” The science was substantially clearer. The question was whether the policy response had developed a momentum of its own, detached from the evidentiary base that was supposed to justify it. And the incentive structures suggest it had. Political leaders who had imposed lockdowns had a stake in defending them; reversing course risked the admission that earlier restrictions had been excessive. Media coverage punished any case surge as a failure of government, creating asymmetric pressure to maintain restrictions regardless of proportionality. The political cost of a visible COVID death attributed to premature reopening was immediate and personal; the diffuse costs of lockdown – children’s lost development, mental health deterioration, economic destruction – were slow, distributed, and easy to attribute to the virus rather than to the policy.
By mid-to-late 2021, the burden of proof had shifted. It was no longer sufficient to justify population-wide lockdown by invoking precaution; the evidence now demanded that such an extraordinary measure demonstrate its proportionality against alternatives – protecting the genuinely vulnerable through vaccination and focused measures rather than confining an entire city, including its children, under some of the most restrictive conditions imposed anywhere in the democratic world. That case was never convincingly made. The human costs – children’s lost education, adolescent mental health crises, destroyed small businesses, family separation – were borne disproportionately by those least at risk from the virus and least able to absorb the damage.
Outdoor Transmission and the 1.5-Metre Rule
Early restrictions treated outdoor spaces as dangerous – playgrounds were taped off, beaches patrolled, people fined for sitting on park benches. Evidence that outdoor transmission was extremely rare emerged relatively quickly, yet outdoor restrictions persisted well beyond what the science supported. The 1.5-metre distancing rule, an artefact of the droplet model, provided false reassurance in poorly ventilated indoor spaces while restricting behaviour outdoors where the risk was negligible. A rule grounded in the correct model – aerosol transmission in enclosed spaces – would have looked very different: less concerned with precise interpersonal distance, far more concerned with ventilation, air changes per hour, and time spent in shared indoor air.
The Absence of Communicated Uncertainty
Perhaps the most consequential failure was the near-total absence of communicated uncertainty. Epidemiological models were presented as predictions rather than as scenarios conditional on assumptions that might prove wrong. Confidence intervals vanished from public discourse. Provisional findings were stated as established facts.
This is not how science works. Science is fundamentally a process of reasoning under uncertainty – forming hypotheses, testing them, revising them, and being transparent about what remains unknown. The public communication of pandemic science was the opposite: an appeal to authority that implied certainty where none existed.
The analogy with macroeconomic forecasting is instructive. As I have argued elsewhere, macroeconomic models are poor at predicting specific outcomes but valuable for illuminating causal mechanisms and trade-offs. The distinction between structural understanding and point prediction is crucial for calibrating the authority we grant to expert pronouncements. The same distinction applies to epidemiological modelling. Models projecting death tolls under various scenarios were valuable tools for thinking about policy – but only if their assumptions and limitations were clearly stated. When they were presented as forecasts rather than conditional scenarios, they generated expectations that could not be met and eroded trust when reality diverged.
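The sensitivity of projections to their assumptions is easy to demonstrate with the textbook SIR final-size relation, which links the assumed basic reproduction number R0 to the fraction of the population ever infected in an unmitigated epidemic. This is a generic illustration, not any specific pandemic model; the point is how sharply the "forecast" moves with a single uncertain input.

```python
import math

# Textbook SIR final-size relation: z = 1 - exp(-R0 * z), where z is the
# fraction of the population ever infected. Solved by fixed-point iteration.
# Generic illustration; not a model of any specific pathogen.

def final_attack_rate(r0, iterations=1000):
    """Solve z = 1 - exp(-R0 * z) for the epidemic final size (R0 > 1)."""
    z = 0.9  # starting guess; the iteration contracts toward the fixed point
    for _ in range(iterations):
        z = 1 - math.exp(-r0 * z)
    return z

# Three plausible early-pandemic assumptions about R0, three very
# different projected outcomes from the identical model:
for r0 in (1.3, 2.0, 3.0):
    print(f"assumed R0 = {r0}: projected attack rate = {final_attack_rate(r0):.0%}")
```

Shifting the assumed R0 from 1.3 to 3.0 – well within the range of early-2020 uncertainty – more than doubles the projected epidemic size. Presented as conditional scenarios, such outputs are useful; presented as point forecasts, they set up the credibility failures the essay describes.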
A public health establishment that had communicated in the register of “this is our current working model, here is what we are uncertain about, and here is how we will update” would have built durable trust. It would have prepared the public for changes in guidance. It would have framed revision as evidence of scientific progress rather than institutional failure. It would have invited the public into the process rather than demanding their compliance. The choice to project certainty instead was not a scientific decision. It was a communication strategy – and it failed.
What Was Not Science at All
Beyond the genuinely scientific questions – how does the virus transmit, who is most vulnerable, which treatments work – many pandemic decisions were not scientific at all. They were value judgements, resource allocation decisions, and political calculations presented in scientific clothing.
The mask supply decision, discussed above, is the starkest example. But the pattern was pervasive. The decision to favour lives over livelihoods was presented as though it were scientifically determined rather than a moral choice. “Lives versus livelihoods” was framed as though only one side was morally serious – as though raising concerns about economic devastation was callous rather than compassionate.
But livelihoods are not abstractions. They are dignity, purpose, mental health, family stability, housing security, and children’s futures. Long-term unemployment has well-documented effects on physical health, life expectancy, and mortality. The destruction of a small business someone spent decades building is not a minor inconvenience. Even on its own terms – if the objective is protecting life and wellbeing – the framing was false. Livelihoods are lives. Economic devastation is a health outcome. The question was never life versus money. It was which lives, which harms, which risks, distributed how, over what timeframe. That is a value question, not a scientific one.
The decision to protect the elderly at enormous cost to the young was a value judgement of the first order. Science could inform the trade-off by estimating the mortality risk to different age groups. It could not determine whether months or years of children’s education, the developmental trajectory of adolescents, or the formative experiences of young adults were an acceptable price for a given reduction in risk among the elderly. That determination required weighing incommensurable goods – a task that belongs to democratic deliberation, not to epidemiological modelling.
In Australia, these value choices played out with particular intensity. The prolonged school closures, the curfews, the restrictions on attending funerals and visiting aged care residents, the policing of outdoor exercise – all were justified by appeal to scientific necessity. Yet the science did not require any particular configuration of these measures. It could estimate their likely epidemiological effects; it could not determine whether those effects justified the human costs. That judgement was political and moral, and it was never honestly acknowledged as such.
Dissent Suppressed, Debate Foreclosed
If “follow the science” conflated evidence with values, it also served a more corrosive function: it delegitimised disagreement. If the policy is the science, then questioning the policy is questioning the science. And questioning the science places you outside the boundaries of acceptable discourse. This rhetorical structure made it almost impossible to have the debates that a democratic society needed.
The debates that needed to happen were genuinely difficult. They involved uncomfortable questions without clean answers: how to weigh competing goods, how to distribute unavoidable harms, when to update policy in light of evolving evidence, how much coercion is proportionate to what level of risk. Reasonable, informed, scientifically literate people could have reached different conclusions on every one of these questions. That is precisely why they required open deliberation.
Instead, the difficulty became the justification for suppression. Nuance was treated as dangerous because it might undermine compliance. Dissenting scientists – not fringe conspiracists but credentialed researchers at major institutions – were marginalised not because their evidence was rigorously evaluated and found wanting, but because their conclusions were inconvenient. The Great Barrington Declaration, authored by epidemiologists from Harvard, Oxford, and Stanford, proposed a strategy of focused protection of the vulnerable rather than population-wide lockdowns. Whatever its specific merits or limitations, it was a substantive policy proposal grounded in mainstream epidemiological reasoning. It was met not with substantive engagement but with an organised effort to discredit it – including, as subsequently released correspondence revealed, coordination between senior officials at the National Institutes of Health to produce a “devastating published takedown.”
The suppression of dissent was reinforced by a tribal dynamic that hardened over the course of the pandemic. Support for restrictions became, in many quarters, an identity marker – a signal of moral seriousness, trust in science, and social responsibility. Questioning the proportionality of lockdowns, the necessity of school closures, or the balance between lives and livelihoods became coded as politically partisan rather than analytically legitimate. This imposed a social cost on dissent that went beyond professional consequences: expressing doubt about the prevailing consensus risked placing you on the wrong side of a cultural divide. The effect was to silence precisely the people – left-leaning academics, public health professionals, policy analysts – who were best positioned to argue credibly for recalibration but who faced tribal costs for doing so.
This is not how a healthy scientific culture manages disagreement. A discipline confident in its evidence engages with challengers; a discipline protecting its institutional authority suppresses them. During the pandemic, public health orthodoxy too often behaved in the latter mode. The consequences were predictable: legitimate questions were driven underground where they merged with genuine misinformation, people who might have been persuaded by honest engagement were instead alienated, and conspiracy theorists gained credibility they had not earned – because they could truthfully point out that the official narrative was not being fully honest.
A functioning democracy needs institutional mechanisms for managing dissent civilly and systematically – not suppressing it, not ignoring it, but engaging with it on its merits. Structured red-teaming, where designated experts are tasked with challenging prevailing assumptions and stress-testing models, is standard practice in military and intelligence contexts precisely because those fields learned that unchallenged consensus leads to catastrophic failure. Public health needs the same discipline. Scientists who challenge prevailing views need protection from professional retaliation. Conformity driven by career incentives is not consensus – it is silence dressed as agreement.
Coercion and Its Costs
The pandemic saw an extraordinary exercise of state power over individual liberty: lockdowns, curfews, border closures, mandatory quarantine, vaccine mandates, and fines for breaching public health orders. In a democracy, coercion of this magnitude demands an equally extraordinary justification. The justification offered was: the science requires it.
But as the preceding sections have argued, much of what was presented as scientific necessity was actually a mix of provisional models communicated as certainty, value judgements embedded in technical frameworks, and policy choices that had developed institutional momentum independent of their evidentiary base. When the state compels behaviour on scientific grounds, and the science turns out to have been incomplete, provisional, or not really science at all, the damage extends beyond public health messaging. It strikes at the legitimacy of the social contract between the state and the citizen.
The costs of coercion were real and unevenly distributed. People were separated from dying relatives. Children were locked out of schools for months. Small businesses were destroyed while large retailers remained open. Workers in lower-income, culturally diverse suburbs – western Sydney, Melbourne’s north and west – were fined and heavily policed, while wealthier suburbs with greater capacity to work from home experienced lockdown as an inconvenience rather than a catastrophe. The coercion fell hardest on those with the least political power to resist and, frequently, the least epidemiological risk.
A society that had openly acknowledged the trade-offs – “we are asking you to bear this burden because we have collectively decided that protecting vulnerable people is worth this cost, and we recognise the sacrifice we are asking of you” – would at least have been making a transparent social contract. People could have engaged with it, contested the terms, and held leaders accountable. Instead, “the science says we must” closed off democratic agency while still imposing the cost. It supported state coercion without the honest justification that democratic coercion requires.
The Trust Deficit and the Next Pandemic
By the standards of historical pandemics, COVID-19 was moderate in severity. That is not to minimise the suffering it caused – more than seven million people died worldwide, and many more experienced lasting health effects. But the mortality was heavily age-stratified: overwhelmingly concentrated among the elderly and those with specific comorbidities. The infection fatality rate, once properly understood, was far lower than the terrifying early estimates. For healthy working-age adults, the risk of severe illness was low. For children, it was negligible. COVID-19 was not smallpox. It was not a highly lethal strain of influenza striking all age groups. Relative to worst-case historical pandemics and relative to the institutional response it provoked – the degree of coercion, the suppression of debate, the expenditure of public trust – it was a moderate event.
And yet it nearly broke our institutions. Not virologically, but in terms of trust, communication, and democratic governance. If this was the dress rehearsal, the performance was not encouraging.
The corrosion of trust is a compounding cost that will keep accumulating. In Australia, approval of the federal government’s pandemic response fell from 85 per cent in mid-2020 to 52 per cent a year later, according to ANU social cohesion surveys; by 2024, only 37 per cent of Australians expressed confidence in the federal government. COVID booster uptake declined markedly, suggesting that the erosion was not merely attitudinal but behavioural. People who complied in good faith, who made genuine sacrifices based on what they were told was scientific certainty, and who later discovered that the basis for those sacrifices was shakier than they had been led to believe – those people are not going to comply as readily next time. Not because they are anti-science or conspiratorial. Because they learned, rationally, that “follow the science” sometimes meant “trust us and do not ask questions,” and that the answers, when they eventually came, did not always justify what had been demanded of them.
Now imagine something genuinely catastrophic. A novel pathogen as transmissible as Omicron but with a case fatality rate of five or ten per cent across all age groups. Something where school closures genuinely are necessary, where lockdowns are clearly justified on any reasonable weighing of costs, where rapid mass compliance with public health measures is the difference between hundreds of thousands of deaths and millions. In that scenario, you need exactly what was squandered during COVID: public trust, institutional credibility, and a population willing to accept extraordinary measures because they believe the authorities are being straight with them. You need people who will comply not because they are coerced but because they trust the reasoning they have been given.
Instead, the next pandemic will arrive into a landscape of justified scepticism – not fringe scepticism, but mainstream, rational, experience-based scepticism from people who did the right thing last time and came to feel they were misled. The institutions will say “this time we really mean it” and a significant portion of the population will, quite reasonably, respond: “You said that last time.”
The people most responsible for this will not be the so-called anti-vaxxers or conspiracy theorists. They will be the institutions that chose “trust me” over “let me show you our working” – institutions that treated trust as an entitlement rather than as something that must be continuously earned through transparency, honesty, and respect for the public’s capacity to handle complexity.
Science, Values, and Democratic Governance
The deeper lesson of the pandemic is not about epidemiology. It is about the relationship between expertise and democracy – a relationship that applies wherever technical knowledge is used to justify political choices.
Science can illuminate the landscape of consequences. It can estimate what is likely to happen under various policy scenarios. It can identify mechanisms, quantify trade-offs, and reduce – though never eliminate – uncertainty. These are invaluable contributions, and no serious person argues for ignoring them.
But science cannot determine what we should do. The gap between “if you do X, Y will likely happen” and “therefore you should do X” is where values live. Whether to prioritise reducing mortality among the elderly or preserving children’s education. Whether the mental health costs of prolonged lockdown outweigh the transmission reduction they achieve. Whether the economic destruction of extended closures is justified by the epidemiological benefit, and for whom. Whether state coercion is proportionate to the risk, and who gets to decide. These are moral and political questions. They involve weighing incommensurable goods and distributing unavoidable harms. Science can inform these judgements. It cannot make them.
When science is used to foreclose democratic deliberation on these questions – when “follow the science” functions as “do not question the policy” – it damages both science and democracy simultaneously. Science loses credibility because it has been freighted with decisions it cannot justify on its own terms. Democracy loses legitimacy because consequential choices are removed from public contestation and placed in the hands of technocrats whose authority rests on a claimed objectivity that, at the level of values, does not and cannot exist.
What is needed is not less science in policy. It is more honesty about the division of labour. Science provides the evidence base – always provisional, always subject to revision, always carrying uncertainty that should be communicated rather than concealed. Values determine what to do with that evidence. Policy integrates both. Democracy is the mechanism by which that integration is made legitimate. Each element is essential; none can substitute for the others.
Conclusion
The pandemic taught us, at considerable cost, that “follow the science” is not a policy. It is, at best, an aspiration that requires specifying what the science actually says, how certain we are, where the evidence is still evolving, and where the science ends and value judgements begin. At worst, it is a rhetorical device for depoliticising political decisions, suppressing legitimate debate, and avoiding accountability for choices that impose real costs on real people.
The science during the pandemic was imperfect but genuinely valuable. It produced real knowledge about viral behaviour, treatment protocols, vaccine development, and disease dynamics. It deserved better stewardship than it received. The failure was not in the science itself but in the interface between science and public communication: the decision to project certainty rather than communicate honestly about uncertainty, to present working models as established fact, to embed value judgements within technical frameworks, and to treat dissent as a threat rather than as an essential component of a healthy epistemic culture.
We started, quite rightly, with the best models available. But the provisional nature of those models was never communicated. The best-guess character of the response was never acknowledged. The need to learn and adapt as evidence accumulated was never framed as the plan. And when the values embedded in policy choices were challenged, the challenge was deflected by appeal to a scientific authority that could not bear the weight placed upon it.
What would genuine engagement have looked like? Not agreement – disagreement was inevitable and healthy. But a public process in which competing proposals were evaluated openly on their merits: their assumptions stated, their costs and benefits estimated across all affected populations, their operational difficulties acknowledged, and their value trade-offs made explicit rather than concealed within technical language. Crucially, this would have required acknowledging that decisions under uncertainty are not choices between safety and risk. They are choices between different risks – viral harms on one side, economic devastation, educational loss, and psychological damage on the other – distributed unevenly across populations who have unequal capacity to absorb them. Uncertainty does not resolve the trade-off; it sharpens the obligation to be honest about it. When the Great Barrington Declaration proposed focused protection, the appropriate response was not a coordinated takedown but a structured public assessment: here is what this proposal assumes, here is what it would cost, here are the operational challenges, here is what it gets right, and here is why we believe an alternative approach better serves the range of goods at stake.
The same discipline should have applied to lockdown extensions, school closures, and vaccine mandates – not as a concession to sceptics, but as the basic requirement of democratic legitimacy when the state exercises extraordinary power over its citizens. The pandemic needed less “follow the science” and more “here is what we know, here is what we don’t, here are the options, and here is what each one costs – in lives, in livelihoods, in children’s futures, and in the trust we will need for next time.”
COVID-19 was a moderate pandemic that exposed severe institutional weaknesses. The trust that was spent freely on debatable measures and concealed value judgements will not be available when it is desperately needed for something worse. Whether the lesson has been learned remains, at best, an open question.