Did the CDC improperly block a study showing the COVID vaccines were effective?
Jay Bhattacharya pulled the paper for quality reasons. Attacking him for his decision is unfair. There are 25 questions I'd like the authors to answer, including: why wasn't there a negative control?
Executive summary
CDC acting director Jay Bhattacharya delayed and then blocked a COVID-19 vaccine effectiveness paper from appearing in the agency's flagship scientific journal, Morbidity and Mortality Weekly Report (MMWR).
The NY Times had a field day with the story (here and here).
After studying the paper and reading the numerous articles written by people on both sides of the issue (such as the MD Reports analysis and Jeremy Faust’s substack), my opinion is that Jay was justified in his decision for the reason he outlined to the Washington Post: the CDC needs to set a high bar for the quality of the research it publishes.
Test-negative design (TND) studies, like all observational methods, can have serious flaws. For example, there have been over 100 TND studies of the flu vaccine showing it works remarkably well. But those studies are all misleading. The more dispositive Simonsen papers (death and hospitalization) and the more recent Anderson discontinuity study (death and hospitalization) show that flu vaccines don’t reduce mortality or hospitalization.
Was this CDC paper the exception that got the right conclusion? I don’t think so.
Here are 25 unanswered questions about this study, e.g., why was there no negative control?
Why was the flu study (which used TND) approved?
Because it was approved before Jay got there. Duh.
25 questions for the authors of the CDC paper
Methodological transparency
1. Your reference group of “unvaccinated for the 2025-2026 dose” includes people with prior COVID vaccinations (median ~1,200 days since last dose) and likely prior infection. How should readers interpret a 50-55% VE estimate as a measure of vaccine benefit when both arms have substantial pre-existing immunity?
2. The median interval since vaccination was 47 days — within the peak antibody titer window. Did you analyze how VE changes at 90, 120, or 180 days post-dose? If not, do you agree the headline VE estimate represents an upper bound on durable protection rather than a typical seasonal estimate?
3. Why didn’t you include negative-outcome controls (e.g., VE against trauma admission, non-respiratory hospitalization) to quantify healthy vaccinee bias? Was this considered and rejected, or not considered?
4. Healthy vaccinee bias has been documented to inflate apparent influenza VE by 30-50% in cohort studies (Jackson 2006, Simonsen 2007). Why doesn’t your limitations section name this specific phenomenon or cite this literature?
5. Your adjustment model includes age, sex, race/ethnicity, calendar time, and geographic region, but not frailty, comorbidity count, prior healthcare utilization, or functional status. Hospitalized patients had a median of 4 underlying conditions versus 0 for ED patients — why no comorbidity adjustment?
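For readers unfamiliar with how a TND estimate is produced: VE is computed as one minus the odds ratio of vaccination among test-positive cases versus test-negative controls. A minimal sketch with illustrative counts (invented for this example, not the paper's actual table):

```python
# Test-negative design VE sketch, using made-up counts for illustration.
# Cases = test-positive patients; controls = test-negative patients.
cases_vacc, cases_unvacc = 60, 300         # hypothetical counts
controls_vacc, controls_unvacc = 400, 900  # hypothetical counts

# Odds ratio of vaccination among cases vs. controls
odds_ratio = (cases_vacc / cases_unvacc) / (controls_vacc / controls_unvacc)

# VE = 1 - OR, the headline number studies like this report
ve = 1 - odds_ratio
print(f"OR = {odds_ratio:.2f}, VE = {ve:.0%}")  # OR = 0.45, VE = 55%
```

Note what this does and does not measure: it is the odds of testing positive among people who already showed up for care, not a direct measure of hospitalization or death averted, which is why the adjustment-variable questions above matter so much.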
Power and pooling
6. Your hospitalization VE estimate rests on 60 vaccinated cases. What’s the smallest unmeasured confounder shift that would account for the entire 55% effect? Have you done sensitivity analysis for unmeasured confounding (e.g., E-values)?
7. The VISION Network has multiple years of accumulated data. Why didn’t you pool across seasons to power a death endpoint analysis? Was this considered?
8. Link-Gelles 2025 (JAMA Network Open) and Ma 2026 (JAMA Network Open) used multi-season pooling to estimate VE against critical illness. Why didn’t you reference those analyses or apply that approach here?
9. What is the all-cause 30-day mortality rate among the 1,022 hospitalized cases in your study? Why isn’t this reported, given that registry linkage to mortality data is feasible?
What the paper measures versus what gets claimed
10. Your paper measures VE against medically-attended laboratory-confirmed COVID-19. ACIP and CDC will likely cite this paper to support broad recommendations defended on mortality and severe-disease grounds. Are you concerned about that translation gap? How should the paper be cited and how should it not be cited?
11. Your conclusion uses the phrase “additional protection.” Did you consider more neutral framings like “reduced odds of test-positive presentation”? Why was “additional protection” chosen?
12. Your study cannot distinguish between (a) the vaccine causing biological protection, (b) vaccinated people being healthier and less likely to seek emergency care for COVID-like illness, or (c) some combination. Do you agree this is the case? If so, why isn’t this stated more prominently?
Comparison with prior CDC analyses
13. What were the equivalent VE estimates from VISION for the 2024-2025 booster, the 2023-2024 booster, and the bivalent booster? Has VE been consistent, declining, or improving across seasons? Why aren’t these comparisons included?
14. During the original Omicron BA.1 wave, when COVID hospitalizations were 10-20× higher than current rates, you would have had statistical power for death endpoints. Did CDC publish TND VE estimates against death during that period? If yes, what did they show? If no, why not?
The Bhattacharya objection specifically
15. Reports indicate Dr. Bhattacharya raised methodological concerns about this study before publication. What specifically were those concerns? Were any of them addressed in revisions? Were any methodologically valid concerns declined?
16. Did the author team consider any of the following analyses, and if so why were they not included: (a) negative-outcome controls, (b) brand-differential VE comparison, (c) registry-linked all-cause mortality follow-up, (d) waning analysis stratified by time since dose, (e) sensitivity analysis for unmeasured confounding?
17. The Health Department official quoted in the NYT said the data was already collected and the analysis already done, so changes weren’t possible. Could the limitations section have been expanded without re-analysis? Could additional sensitivity analyses have been added without changing the primary analysis?
On TND as a method
18. The 2023 Wiley paper “The test-negative design: Opportunities, limitations and biases” argued TND “cannot be used for studying the mortality effects of vaccines and is problematic for studies into the effect on hospitalization.” Do you agree or disagree? Why isn’t this critique cited or addressed?
19. Simonsen et al. (2005, 2007) demonstrated that pre-TND cohort studies of influenza VE produced apparent ~50% all-cause mortality reductions that were inconsistent with population-level mortality data. What evidence convinces you that current TND estimates aren’t subject to similar inflation, especially for severe-outcome endpoints?
20. If TND were systematically biased upward by 20-30 percentage points due to residual healthy vaccinee bias, how would CDC’s surveillance system detect this? What would falsify the current TND-based VE estimates?
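To make question 20 concrete, here is a toy expected-count calculation (all numbers invented) showing how a frailty confounder left out of the adjustment model can manufacture apparent VE in a TND even when the vaccine has zero true effect:

```python
# Toy TND calculation with a vaccine that has ZERO true effect.
# Invented assumption: frail patients are less likely to be vaccinated
# but more likely to test positive when medically attended.
attendees = {"healthy": 7000, "frail": 3000}   # expected attendee counts
p_vacc = {"healthy": 0.5, "frail": 0.2}        # vaccination probability
p_positive = {"healthy": 0.3, "frail": 0.5}    # P(test+), vaccine-independent

cases_v = cases_u = controls_v = controls_u = 0.0
for group, n in attendees.items():
    for vacc, p_v in ((True, p_vacc[group]), (False, 1 - p_vacc[group])):
        n_arm = n * p_v
        pos = n_arm * p_positive[group]  # test-positive regardless of vaccination
        neg = n_arm - pos
        if vacc:
            cases_v += pos
            controls_v += neg
        else:
            cases_u += pos
            controls_u += neg

odds_ratio = (cases_v / cases_u) / (controls_v / controls_u)
apparent_ve = 1 - odds_ratio
print(f"apparent VE = {apparent_ve:.1%} despite zero true effect")
```

With these invented inputs the TND reports roughly 20% VE for a vaccine that does nothing, and stronger frailty gradients push the number higher. The standard defenses are adjusting for frailty directly or running negative-outcome controls, both of which are raised in the questions above.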
Funding and conflicts
21. Several authors disclose institutional support from Pfizer, Moderna, Sanofi, GSK, and Novavax. The 2025-2026 vaccines studied are Pfizer, Moderna, and Novavax products. How do you respond to readers who view this as a conflict that warrants additional methodological transparency, not less?
22. This study was funded by CDC contracts to Westat (75D30121D12779) and Kaiser Foundation Hospitals (75D30123C17595). Does CDC retain editorial control over the paper’s framing and conclusions? Were there pre-publication disagreements between the author team and CDC about how to present the findings?
What would actually answer the policy question
23. What study design would you consider the gold standard for measuring whether 2025-2026 COVID vaccines reduce all-cause mortality in elderly adults? Why isn’t that study being conducted, and who would need to fund it?
24. The Czech national registry, UK ONS data, and several Nordic registries have linked vaccination and all-cause mortality data at population scale, with millions of person-years of follow-up. Has CDC analyzed any of these datasets, collaborated with their custodians, or attempted similar registry linkage in the US? If not, why not?
25. If a future analysis using population-registry data found no all-cause mortality benefit of the 2025-2026 booster in elderly adults, would that contradict your VE estimate? How would you reconcile the two findings?
AI analysis of the paper
For a much deeper dive into the paper, including the myriad of problems with TND, see this Claude analysis.
Summary
Bhattacharya made the right call here. Had the paper pooled numbers across seasons and included multiple negative controls (it had none), it would have been more convincing.
If the hundreds of TND flu studies had matched the truth about influenza vaccines revealed in the Anderson paper, we’d have higher confidence that the method is reliable. They didn’t. MedPage Today, the NY Times, and the Washington Post are all clueless as to how bad the methodology is.
Bottom line: Bhattacharya was correct in demanding high standards for CDC published studies. Good for him in doing the right thing. We need more people like Jay in public service.



Funny this pops up today, after I fell into an old-study rabbit hole that seemingly could have shut down the EUA:
(Wikipedia on Mexico's president, Claudia Sheinbaum:)
"During her administration, over 200,000 kits containing ivermectin were distributed to patients diagnosed with COVID-19 without their knowledge.[107][108]"
Reference [107]:
https://www.washingtonpost.com/world/2022/02/09/mexico-city-covid-ivermectin/
February 9, 2022
"Mexico City gave ivermectin to thousands of covid patients. Officials face an ethics backlash."
(but what happened to those 200,000 people?)
What was their Covid illness/death rate?
"Mexico City officials eventually declared their effort a success. They issued an academic paper last spring saying the medical kits had significantly reduced hospitalization rates. That finding, they said, “supports ivermectin-based interventions” to ease the coronavirus pandemic’s burden on health systems."
Now city authorities are facing a backlash. A U.S.-based academic site that had posted their paper, SocArXiv, withdrew it last Friday, charging it was “promoting an unproved medical treatment in the midst of a global pandemic.” The site accused city officials of bad science and unethical behavior — in effect, of using citizens like rats in a giant laboratory experiment, without their consent.
The decision has detonated a storm on social media. Opposition politicians are demanding an investigation.
What makes the scandal remarkable isn’t just the scale of Mexico City’s program — nearly 200,000 kits with ivermectin were distributed — but who was advocating it. Unlike in the United States, where ivermectin has been promoted by conservative commentators (and star podcaster Joe Rogan), the drug was championed in Mexico by leftist intellectuals in top government jobs."
.......
After handing out 83,000 kits, the government crunched the numbers. It reported a decline of at least 52 percent in hospital admissions among those who had received the kits, compared to others previously infected.
......
(🤯 - So they were lab rats for ivermectin experiments, but NOT lab rats for mRNA experiments???)
......
multiple problems.
One was that the medical kits included not just ivermectin but paracetamol, aspirin and oximeters. It wasn’t clear which of the items may have improved patients’ health, Cohen said, and the subjects of the study weren’t chosen randomly, as in a clinical trial. In addition, the city government hadn’t declared its conflict of interest — in other words, that it would benefit if the study portrayed the program as a success. And the city was mass-distributing a medication that international authorities, including the World Health Organization, said should be used to treat covid-19 only in clinical trials.
(The site that removed the paper then said:)
https://socopen.org/2022/02/04/on-withdrawing-ivermectin-and-the-odds-of-hospitalization-due-to-covid-19-by-merino-et-al/
"To summarize, there remains insufficient evidence that ivermectin is effective in treating COVID-19; the study is of minimal scientific value at best; the paper is part of an unethical program by the government of Mexico City to dispense hundreds of thousands of doses of an inappropriate medication to people who were sick with COVID-19, which possibly continues to the present; the authors of the paper have promoted it as evidence that their medical intervention is effective. "
....
This is the first time we have used our prerogative as service administrators to withdraw a paper from SocArXiv. Although we reject many papers, according to our moderation policy, we don’t have a policy for unilaterally withdrawing papers after they have been posted. We don’t want to make policy around a single case, but we do want to respond to this situation.
We are withdrawing the paper, and replacing it with a “tombstone” that includes the paper’s metadata. We are doing this to prevent the paper from causing additional harm, and taking this incident as an impetus to develop a more comprehensive policy for future situations. The metadata will serve as a reference for people who follow citations to the paper to our site.
Our grounds for this decision are several:
The paper is spreading misinformation, promoting an unproved medical treatment in the midst of a global pandemic.
The paper is part of, and justification for, a government program that unethically dispenses (or did dispense) unproven medication apparently without proper consent or appropriate ethical protections according to the standards of human subjects research.
The paper is medical research – purporting to study the effects of a medication on a disease outcome – and is not properly within the subject scope of SocArXiv.
The authors did not properly disclose their conflicts of interest.
We appreciate that of the thousands of papers we have accepted and now host on our platform, there may be others that have serious flaws as well.
We are taking this unprecedented action because this particular bad paper appears to be more important, and therefore potentially more harmful, than other flawed work. In administering SocArXiv, we generally err on the side of inclusivity, and do not provide peer review or substantive vetting of the papers we host. Taking such an approach suits us philosophically, and also practically, since we don’t have staff to review every paper fully. But this approach comes with the responsibility to respond when something truly harmful gets through. In light of demonstrable harms like those associated with this paper, and in response to a community groundswell beseeching us to act, we are withdrawing this paper.
We reiterate that our moderation process does not involve peer review, or substantive evaluation, of the research papers that we host. Our moderation policy confirms only that papers are (1) scholarly, (2) in research areas that we support, (3) are plausibly categorized, (4) are correctly attributed, (5) are in languages that we moderate, and (6) are in text-searchable formats. Posting a paper on SocArXiv is not in itself an indication of good quality – but it is often a sign that researchers are acting in good faith and practicing open scholarship for the public good. We urge readers to consider this incident in the context of the greater good that open science and preprints in general, and our service in particular, do for researchers and the communities they serve.
We welcome comments and suggestions from readers, researchers, and the public. Feel free to email us at socarxiv@gmail.com, or contact us on our social media accounts at Twitter or Facebook."