Introducing the Kirsch Cumulative Outcomes Ratio (KCOR) analysis: A powerful yet simple new technique for accurately assessing the impact of any intervention on any outcome
KCOR shows the vaccine increased excess mortality. KCOR is 100% objective with just one tunable parameter: observation start/end time. Needs only 3 data values per person.

Note
I invented and refined the KCOR algorithm described in this article, so the article is pretty long. I summarized all the key points up front so you don’t need to read the whole article.
Some people have claimed this is nothing new; just a hazard ratio. No, it’s not.
This is a novel method for analyzing data that hasn’t been done before. If you are skeptical, please see this analysis which compares KCOR to traditional methods.
Example: To analyze a vaccine to see if it is safe, all you need is the DoB, DoD, and DoV for each person in a population. That’s it. The result is both instant and objective.
High praise from ChatGPT
What is it? What can it be used for?
KCOR is a new analysis method that allows you to objectively answer questions of the form: “Did intervention X increase/decrease outcome Y?” It is especially useful for measuring the impact of an intervention on human outcomes.
We’ll use the example: “Did the COVID vaccine save lives?” in the description below to make it easy to understand.
The method automatically determines the NET impact from all sources, e.g., lives saved from COVID virus and lives lost from an increase in non-COVID ACM (NCACM).
You pick an enrollment date and assign people to “vaxxed” or “not vaxxed yet” cohorts. These cohorts are fixed over the observation period. You keep track of cumulative death COUNTS for the vaxxed and control cohorts and compare their ratio at the end of the observation period with their ratio during the baseline no-COVID period at the start. This works because deaths per day are nearly constant in any fixed-size cohort over a year. For example, a negative control test comparing all 40 year olds against all 80 year olds gives a value of 1 (no difference). This works because our outcome is SINGLE focused (e.g., death), so we match the cohorts on that outcome. In other words, we “match” the groups based ONLY on our specific OUTCOME of interest rather than trying to make the groups look identical on every possible confounder. This provides extremely accurate matching of cohorts without having to figure out how to neutralize differences in sex, comorbidities, SES, etc. The tradeoff is that the groups are perfectly matched for only one outcome, which is our one outcome of interest (in this case death).
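The cohort assignment and weekly death counting described above can be sketched in a few lines of Python. This is a minimal sketch of my own, not the article’s actual code; the field names, dates, and records are illustrative.

```python
from datetime import date

# Record-level data: the three fields KCOR needs per person
# (date_of_birth, date_of_death_or_None, date_of_first_vax_or_None).
records = [
    (date(1950, 3, 1), date(2021, 11, 15), date(2021, 4, 10)),
    (date(1950, 7, 9), None,               None),
    (date(1951, 1, 2), date(2022, 2, 1),   date(2021, 12, 5)),
]

enrollment = date(2021, 6, 14)  # cohorts are FIXED as of this date

def cohort(dov):
    """Vaxxed if vaccinated on/before enrollment, else control ('not vaxxed yet')."""
    return "vaxxed" if (dov is not None and dov <= enrollment) else "control"

# Weekly death counts per cohort, indexed by weeks since enrollment.
deaths = {"vaxxed": {}, "control": {}}
for dob, dod, dov in records:
    if dod is None or dod < enrollment:
        continue  # only deaths during the observation period are counted
    week = (dod - enrollment).days // 7
    c = cohort(dov)
    deaths[c][week] = deaths[c].get(week, 0) + 1
```

Note that the third person vaccinated after the enrollment date stays in the control cohort; nobody moves between cohorts.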
It excels at measuring differential response to a stress, e.g., when COVID wave hits, did the vaccinated group die at a lower rate than the control group (defined as not vaccinated at start time)?
For example, it can easily, objectively, and deterministically tell us whether or not the COVID vaccines were a net harmful intervention.
It works for humans or non-humans, mortality or other outcomes, vaccines or other interventions, etc. Each of these has certain aspects KCOR can adjust for. For example, humans die at a rate that increases with their age, and vaccines have a healthy vaccinee effect (HVE) which has to be taken into consideration.
If you just use the raw KCOR results on vaccine data, it is a very reliable CONSERVATIVE ESTIMATOR of vaccine harm as all of the unquantified biases (which can all be quantified with more effort) act in a way that reduces any vaccine harm signal. See this Grok discussion which covers this.
For example, the Levi study did a Pfizer-Moderna comparison. KCOR would completely fail to find a safety signal in a comparison study like this because it derives the baseline through measurement AFTER the cohort is determined. If you just compared two vaccinated cohorts, even if Pfizer had 2x the mortality of Moderna, KCOR would normalize it out and the vaccines would look the same. But for comparing vaccinated with unvaccinated, that’s a different story!
Here I explain the method graphically. The graph below is how people normally compare vaxxed vs. unvaxxed. The vaxxed die more because more people are vaxxed. But the curves look the same, with one simply having proportionally more deaths than the other.

Now here’s the magic trick. Take the ratio of the two curves above, plot it, and normalize by the cumulative value 11 weeks from the enrollment date (3 weeks skipped for HVE plus an 8 week baseline during a non-COVID period). Voila! You now know the NET MORTALITY impact of the intervention at any time t just by reading the y value at time=t. In this case, over 17 months (to the end of 2022), there was a 20% higher death toll in the vaxxed, meaning the vaccine was a 20% net harm. So we INSTANTLY know the COVID vaccines were net harm from injection through the end of 2022. Had we asked the question over a different time frame, we could have seen a net neutral response.
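That ratio-and-normalize step is easy to show numerically. The weekly counts below are made up purely for illustration; the 3-week skip and 8-week baseline are the article’s settings.

```python
# Made-up weekly death counts for the two fixed cohorts.
vaxxed_weekly  = [50, 52, 49, 51, 50, 48, 52, 50, 51, 49, 50, 70, 80, 90]
control_weekly = [25, 26, 24, 25, 26, 25, 24, 25, 26, 25, 24, 25, 26, 25]

SKIP, BASELINE = 3, 8  # weeks: HVE skip, then baseline length

def cumulate(xs):
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

# Start counting after the HVE skip.
Vc = cumulate(vaxxed_weekly[SKIP:])
Uc = cumulate(control_weekly[SKIP:])
R  = [v / u for v, u in zip(Vc, Uc)]

# Normalize by R at the end of the baseline period.
Rn = [r / R[BASELINE - 1] for r in R]
# Rn[t] > 1 reads as net harm at week t, < 1 as net benefit.
```

With these made-up numbers, Rn sits at exactly 1 at the end of the baseline and drifts above 1 afterward because the vaxxed weekly counts rise while the control counts stay flat.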

In a nutshell, it measures deaths/week in each cohort during a baseline period with no COVID. There is no need to try to match groups on other factors because we only care about death. So we can take ALL 40 year olds and ALL 80 year olds and we get no signal. We only get a signal when there is a differential response (e.g., vaxxed have fewer deaths than unvaxxed) to an external stimulus (e.g., COVID). Common mode external stresses are cancelled out. We proved that this worked with the 40/80 age difference negative control. We delay for 3 weeks to account for HVE; most people are vaxxed well before then.
How it works
Suppose you wanted to determine whether Fidelity Fund A was better than Fund B.
You’d set a start date, say 1 year ago. Then you’d invest $100 in each fund and track the absolute $ made or lost per day for each fund. Then you’d look at your total net worth in each fund and see which fund had more money. You wouldn’t care about the return on assets each day. You’d cumulate raw dollars and count the raw dollars at the end.
KCOR does exactly the same thing! That’s what I mean by simple.
The only difference is that we count deaths instead of dollars and the funds are “vaxxed” and “unvaxxed.”
Since we can’t specify identical “investment amounts” at the beginning, we have to create a synthetic control, which we do by measuring the deaths in each group over a non-COVID period where “nothing is going on.” This creates a baseline mortality ratio between the cohorts. It works because any fixed group of people will die at a nearly constant rate each week, e.g., 5 deaths a day on average.
So for an extreme example, suppose a vaccination offer causes selection bias that splits a population into two cohorts (I’m making up the numbers):
Vaxxed: 10,000 people dying at .1% per year (avg age 25)
Unvaxxed: 1000 people dying at 1% per year (avg age 56)
In both cases, the cohorts will have 10 deaths per year. But the weekly rate will be changing, and by the end of the year, deaths per week will be up by 5.7% in the vaxxed group and 6.9% in the unvaxxed group due to their age difference. So there will be a differential in the deaths counted over the year of (6.9-5.7)/2 = 0.6% (the gap ramps up roughly linearly, so its average over the year is half its end-of-year value). So unless we need to detect very small signals (or are measuring very old people, where the difference would be bigger), this works very well. We can divide the net asset value at the end of the period by the ratio determined at the start of the observation period.
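The drift arithmetic above can be checked directly. This is just a sketch of the back-of-envelope reasoning in the text, not a full mortality model.

```python
# Deaths/week growth over the year for each cohort (from the text).
vaxxed_growth, unvaxxed_growth = 0.057, 0.069

# End-of-year gap between the cohorts' weekly death-rate growth:
end_gap = unvaxxed_growth - vaxxed_growth       # 1.2 percentage points

# The gap ramps up roughly linearly from 0, so its average over the
# year (the bias in the cumulative death ratio) is about half of that:
avg_differential = end_gap / 2                  # ~0.6%
```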
Summary: Just like for comparing investment funds, we compare the two cohorts over the exact same period of time. We are trying to measure which fund generated more total dollars for us over the period (after normalizing for the fund size as explained above). It’s no more complicated than that.
The rules
The rules are fixed and set by the data to eliminate gaming. There is no bias in the method; it simply “counts the votes” (or in this case, deaths).
Find a baseline period where most of the people at risk had the intervention (COVID vaccine) and the corresponding threat (e.g., COVID) isn’t present
Pick an enrollment date at the start of that baseline period.
If you are doing a vaccine study, delay 3 weeks before starting the counters. This is to virtually eliminate any residual HVE from people who were vaccinated close to the enrollment date.
Cumulate counts for 8 weeks. That sets the baseline rate. This is sufficient to minimize noise.
If you want narrower CIs, you can use a longer baseline, but if the vaccine under study increases mortality during the baseline (which you can see from a time series analysis of the data), a longer baseline will underestimate the net harm. So 8 weeks is the standard.
Method detail
Pick the enrollment date of the study. We used Jun 14, 2021 because that is after most people were vaccinated and right before a long non-COVID period where we can measure the baseline mortality of the two cohorts.
The enrollment date (start point) is used to determine two cohorts (vaxxed vs. control) based on their vax status as of the start time. Note that some controls will get vaxxed during the study, but that’s fine; it simply depresses the differences between the cohorts. It does not change the sign of the effect (harm or benefit).
If it’s a vaccine study, baseline start = enrollment date + 3 weeks. Otherwise, add 0.
Set baseline end to (baseline start + 8 weeks). A longer baseline period leads to tighter confidence intervals, but in vaccine studies, stick to the 8 weeks. If the vaccine is very deadly right after it is given, using an 8 week period will give it the benefit of the doubt: if the vaccine kills people during the 8 week period, KCOR will underestimate the final harm, which could make a deadly vaccine look safe. If you see rising deaths during the baseline period and the net outcome is a safe vaccine, you’d want to do a time series version of KCOR to investigate further. That wasn’t necessary in our case; even with the rise during baseline, the COVID vaccine still generated a huge safety signal.
Cumulate death counts for each cohort on a weekly basis starting at the baseline start time, not the enrollment date.
Construct the charts from the data, the most important one being R(t), which is just the ratio at each t of cumulative vaxxed deaths / cumulative control deaths.
Plotting Rn(t), which is R(t) normalized to the value of R(t) at t = end of baseline, will show you the net harm/benefit of the intervention: >1 = harm, =1 = neutral, <1 = benefit at that point in time. So if we look at Rn(t) at t = 1 year from the end of baseline, that is effectively the net harm/benefit over a 1 year observation period.
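The dated steps above reduce to a small skeleton. The dates are from this article’s Czech analysis; the helper function names are mine, not the article’s.

```python
from datetime import date, timedelta

# Vaccine-study settings from the article's Czech analysis.
enrollment     = date(2021, 6, 14)
hve_skip_weeks = 3          # wait out residual HVE
baseline_weeks = 8

baseline_start = enrollment + timedelta(weeks=hve_skip_weeks)
baseline_end   = baseline_start + timedelta(weeks=baseline_weeks)

def R(cum_vaxxed, cum_control):
    """R(t) = cumulative vaxxed deaths / cumulative control deaths."""
    return [v / u for v, u in zip(cum_vaxxed, cum_control)]

def Rn(r, baseline_index):
    """R(t) normalized to its value at the end of the baseline.
    >1 = net harm, 1 = neutral, <1 = net benefit at that point in time."""
    return [x / r[baseline_index] for x in r]
```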
Why it works
From ChatGPT: KCOR relies on a simple observation about Gompertz mortality: for fixed-age cohorts with similar male-to-female ratios, the weekly death rate ratio between two groups with different frailty levels remains nearly constant over time — typically declining by less than 3% per year for cohorts aged 90 and younger. This stability allows cumulative outcome comparisons (e.g., deaths) to serve as a reliable proxy for hazard ratios, even when baseline frailty differs.

If we restrict to looking at cohorts of the same narrow age range (e.g., 5 year ranges), 90 years old and younger, the deaths/week ratio between vaxxed vs. unvaxxed (3x more frail) will increase by less than 3% per year. The effect would make the vaccine look more deadly than it really is.
Here’s the annual correction table for a 3X frailty difference in the groups based on the age of the group:
See details.
The graph below is for two cohorts with the same physical age (so same hazard function if the male/female mix is the nearly same in the cohorts), but a 3x higher frailty index (e.g., in the vaccinated group). It shows the deaths/week ratios will be very constant over a 1 year period and only when the people get very old, will the ratios change significantly over a year (the far right of the curve).
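A quick simulation illustrates this constant-ratio claim. This is my own sketch with illustrative Gompertz parameters, not the article’s exact model.

```python
import math

A, B = 3e-5, 0.085   # illustrative Gompertz parameters: annual hazard = A * exp(B * age)
age, weeks = 60, 52

def weekly_deaths(frailty):
    # Deaths/week for a fixed cohort under a Gompertz hazard, with depletion.
    n, out = 1.0, []          # n = surviving fraction of the cohort
    for w in range(weeks):
        h_week = frailty * A * math.exp(B * (age + w / 52)) / 52
        d = n * h_week
        out.append(d)
        n -= d
    return out

d1, d3 = weekly_deaths(1.0), weekly_deaths(3.0)   # 1x vs 3x frailty
ratio_start = d3[0] / d1[0]                        # ~3.0
drift = d3[-1] / d1[-1] / ratio_start - 1          # relative change over a year
```

For a 60 year old cohort, the deaths/week ratio starts at 3 and drifts down only about 1% over the year (the faster-depleting high-frailty cohort shrinks slightly faster), consistent with the “less than 3% per year” claim above.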
If the two groups have vastly different hazard functions, e.g., same age, but one group is all male, the other is all female, then the ratio will also change in time over 1 year:

In a nutshell, two groups of 60 year olds, enrolled at t=0 with no moving people from one cohort to the other, regardless of mix of people, will have deaths/week in a relatively fixed ratio over time for a year, regardless of external stresses.
So if there are non-random differences in responses to an external stress, that will show up in the ratio of the deaths/week. For example, if one group gets a vaccine and the other group doesn’t, the ratio won’t change if the vaccine is safe. But during COVID, we’ll easily see if there is a differential response to a common mode stress (since cohorts are tracked each calendar week). Basically, using ratios allows us to see differential responses to common mode stresses applied to groups that were made non-random (e.g., one group was vaxxed, the other not).
Let’s dive into the details.
Given a fixed cohort of humans at t=0, unless they are very old, they will die at a nearly constant rate, so cumulative deaths form a nearly straight line over a 1 year period. The annual drift in deaths/day depends on their physical age, not their comorbidities.
So for any cohort, regardless of its mix of ages, comorbidities, etc., if we care only about death, there are only two key numbers needed to characterize ANY group of people:
deaths per week (impacted by the effective frailty index of the group)
annual change in deaths per week (the hazard function, which is a function of the effective chronological age of the group)
The bonus is that if you compare groups of the same chronological age, the second value is the same (the hazard functions will be nearly identical if they are all the same age unless the male/female mix between the groups is vastly different and even then it will be minor).
So if you have two 50 year old cohorts and you know the baseline death rates of each cohort, the ratio of deaths per week will always remain constant over a 1 year time frame UNLESS there is a stress applied that is predicted to DIFFERENTIALLY impact one group (e.g., vaccinated) and not the other. So there is no need to “match” age, sex, comorbidities, or unmeasured confounders; you simply characterize each cohort by its “deaths per week” and look for differential responses to common mode external stresses.
This means that if we divide the cumulative curves, we will get a constant (it will have a slight slope if the physical ages of the cohorts are different). As long as the background fluctuations (e.g., seasonality) are common mode and proportional to mortality rates, everything will be exactly cancelled out. So we will easily be able to spot DIFFERENTIAL signals that happen when one group is non-randomly made different from the other group, e.g., one got a vaccine and the other didn’t.
The best matched cohorts (where people in the two cohorts die at the SAME rate) will have the lowest noise. You can construct these synthetic controls by modifying the mix of ages chosen until the u and v lines overlap each other as we saw above.
But for best results, compare vax and unvaxxed of people with the same age.
This works because of Poisson (people die randomly), law of large numbers (a big enough cohort will have a very stable deaths per week number), and the central limit theorem (whatever the mix of people is, we can characterize it with a single mean value even if the death distributions of each person are radically different). It’s all statistically guaranteed.
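This negative-control behavior is easy to sanity-check with a simulation. The sketch below is mine: it uses a normal approximation to Poisson weekly counts, made-up rates, and a made-up common-mode “wave” that hits both cohorts equally.

```python
import random

random.seed(42)
weeks = 52
# Common-mode stress: a "COVID wave" that multiplies BOTH cohorts' rates.
stress = [1.8 if 20 <= w < 30 else 1.0 for w in range(weeks)]

def weekly_counts(base_rate):
    # Normal approximation to Poisson weekly death counts.
    return [max(0, round(random.gauss(base_rate * s, (base_rate * s) ** 0.5)))
            for s in stress]

young = weekly_counts(300)    # e.g., all 40-year-olds
old   = weekly_counts(3000)   # e.g., all 80-year-olds: 10x the death rate

def cum(xs):
    out, t = [], 0
    for x in xs:
        t += x
        out.append(t)
    return out

R  = [o / y for o, y in zip(cum(old), cum(young))]
Rn = [r / R[7] for r in R]    # normalize at the end of an 8-week baseline
final = Rn[-1]                # sits near 1: no differential signal
```

Despite a 10x mortality difference and a large common-mode wave, the normalized cumulative ratio ends near 1, which is the negative-control result described above.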
The key benefit: unlike other methods, none of the risk factors need to be characterized and no matching is required because we are interested in a SINGLE OUTCOME (death). So we use the observed death rates to exactly match up the cohorts in a way far superior to other methods because of our single-minded purpose (death count differences). We simply OBSERVE the ACTUAL death rate of the cohorts. If we were interested in a different outcome, we’d observe THAT specific outcome during baseline. This obviates the need for “cohort matching,” which can lead to spurious outcomes such as in the highly cited Xu paper that set out to prove that the vaccines cause no harm, or the equally bogus Barbara Dickerman vaccine comparison study where, in order to match the cohorts, they assumed that the vaccines were perfectly safe. It’s no surprise that the outcome matched what they assumed.
This is what makes KCOR so powerful. It is objective. There is only one answer from the data. The truth. It’s a lie detector for data.
Key features
It’s simple to understand and use. The algorithm is extremely simple.
In practice, all you do is paste the summary of your record level data into the spreadsheet and update the pivot table. See the spreadsheet (Note: this is messy now but will be cleaned up in the near future).
It produces just 5 graphs of interest. On the main KCOR graph, you simply look at the y-value of the rightmost point and that tells you the answer, e.g., 1.2 means the intervention resulted in 20% net excess mortality.
Uses objective record level data as input.
Requires only 3 pieces of information from each person for human studies. This is really nice for COVID ACM studies since we don’t have to rely on classifying deaths as COVID deaths. We can just use ACM data, which is much more reliable, especially since in many places, if you were vaccinated, you didn’t have to test.
Date of birth
Date of outcome (e.g., death)
Date of intervention (e.g., date of first COVID vaccination)
It’s reproducible because it’s deterministic. All the parameters are set by the data. So given a dataset, you always get the same answer and it cannot be gamed.
It always gets the right answer. For example, for COVID, it determined instantly that the shots are net harmful.
It provides 95% confidence levels on the answer.
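The article doesn’t spell out its confidence interval formula. One conventional sketch (an assumption on my part, not necessarily KCOR’s exact method) treats the four cumulative death counts as independent Poisson counts and puts a normal interval on the log of the normalized ratio:

```python
import math

def rn_ci(v_base, u_base, v_end, u_end, z=1.96):
    """95% CI for Rn = (v_end/u_end) / (v_base/u_base).
    Log-normal approximation, treating the four counts as independent
    Poisson counts (an approximation; cumulative counts overlap)."""
    rn = (v_end / u_end) / (v_base / u_base)
    se = math.sqrt(1 / v_base + 1 / u_base + 1 / v_end + 1 / u_end)
    return rn, rn * math.exp(-z * se), rn * math.exp(z * se)

# Made-up cumulative counts at end of baseline and end of observation:
rn, lo, hi = rn_ci(v_base=400, u_base=200, v_end=1200, u_end=500)
```

With these made-up counts, Rn is 1.2 with an interval of roughly 0.98 to 1.47, i.e., the point estimate suggests harm but the interval still crosses 1.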
What are the KCOR graphs?
Vr, Ur raw weekly counts over time (two lines). These will be relatively flat lines with lots of noise. Generally not too useful, but you can see COVID waves and how they differentially impact the two cohorts.
Vc,Uc cumulative counts vs. time (2 lines): these will slope up and right. Sometimes you get lucky and the lines overlap and then it gets super interesting. You can very often see patterns here by looking at the slope of the tangent line. For example, below you can see the deaths tracked, but when the boosters rolled out, the vaxxed had higher death counts. This is so clear and so hard to explain away.
Vr/Ur ratio vs. time (one line): this should be a line with fluctuations, but a basically flat line. If it dips like it did below when Delta and Omicron hit, it means the vaccine worked (or there was a differential age response to COVID).
Tc(t) cumulative total death counts vs. time (1 line). If there is slope discontinuity after the intervention, you’ll clearly see it. You can compare population slope during low COVID prior to vaccine rollout vs. population slope during low COVID after vaccine rollout. This can be quite telling. The slopes should be the same.
Dc(t): Note how the slope during non-COVID before the vaccine rollout (see the line) is NOT the same as during non-COVID post-vaccine rollout. The vaccine has modified people’s mortality. This is the FULL population of those born in 1950. This is very strong evidence that the COVID shots increased NCACM. The Levi Florida study confirmed we got it right.
Tr(t): This is useful if you find that v/u(t) drops (e.g., during COVID); you can check whether total death counts were elevated less, which would be a true benefit. Look back in time at the weekly peaks before the vaccine rolled out. If deaths during COVID after the rollout aren’t much lower than those peaks, all you did was shift deaths between the cohorts when you created them. If there was a real benefit, deaths during COVID after the vaccine rollout would be a lot smaller. As you can see, the heights are similar. This is exactly why, in the cumulative COVID deaths of a country, there is no knee in the curve when the vaccines rolled out!! All that happened is case shifting from the vaccinated cohort to the unvaccinated cohort.
R(t)=Vc/Uc vs. time (the KCOR test): one line as above. You look for inflection points in the slope which means “something just changed the trajectory.” The ending value is the net benefit/harm. It starts cumulating events at the start point when you determined the fixed cohorts (nobody moves between cohorts, but the cohorts can die at different rates).
This is the main KCOR summary output graph. Calendar date is on the x-axis. The y value is R(t), the ratio of cumulative death counts for the vaxxed vs. control group. The value of the final point is 1.21, which means a 21% net harm caused by the COVID shots. Looking at the curve you can see that the COVID vaccines basically made you more likely to die every month, although during the COVID wave there was a small benefit. The beauty of this graph is that it nets out all the impacts and gives you a very accurate net impact score. All of the above ratio graphs should also be examined across different age groups, so the numerator is from one age group and the denominator is from a different age group. This allows you to validate that the data is well behaved and whether signals are real or artifacts of non-linear responses to external stress. This is a very powerful way to analyze the data.
R(t)=V1c/U2c vs. time, but use two different age cohorts. This enables you to see if the signal is present when the unvaccinated cohort is younger which allows you to better match the absolute mortalities between vaxxed vs. unvaxxed cohorts and see if the signal remains. If it does, the signal is real and not due to non-linear response. This is how I was able to verify that the COVID vaccines really did reduce risk of death because the signal was still there when comparing against a younger unvaccinated population.
R(t)=T1c/T2c vs. time. Here you look at all time, starting before vaccinations were rolled out, comparing total deaths in age cohort 1 vs. a different age cohort 2. Since hardly anyone was vaccinated before the rollout, you can simply compare total death counts of cohorts of different ages. It is interesting to do this over the entire timeline to see how vaccination impacts population mortality, rather than limiting to the time before the start time.
Interpreting the results
< will be annotated in the spreadsheet including adjustments for edge cases>
Trying it out
See the spreadsheet and go to the simulator tab.
Potential issues
With any new method, people who don’t fully understand it will raise concerns. I don’t believe any of these concerns are material. All of them are hand-waving attacks with no evidentiary support that they impact the outcome. Here is my reply to each concern that has been raised about KCOR itself and about KCOR as applied to the Czech data. Most of the issues below are from Grok.
“Your baseline methodology is flawed because there is a long-term “return to the mean” effect which causes the ACM of the vaxxed group to increase steadily for 30 weeks after vaccination; a “long term equivalent” of HVE but with a longer time constant.” Nope. Check out the v/u curve for 1950s in this article showing the v/u curve flatlines for a month. Any HVE effect is strongest on day 0, then exponentially declines. I also used a 4 cohort model split after the booster shots (which drew primarily from the healthiest people who got shot 2), showing that the shot 2 people didn’t diverge from the unvaccinated controls, which demolishes the “return to the mean” attack (details)
“You should be using mortality ratios, not death counts.” Nope. If you want to compare two investment funds for better performance, you’d never average the ROI each day. You’d observe which fund had more dollars (in our case deaths).
“Data quality issues (e.g., missing records, incomplete baselines) undermine the accuracy of baseline mortality ratios.” The missing records in 2020 are irrelevant because we don’t even start looking at the data until mid-2021! And even if there were randomly missing records, it doesn’t matter, since we rely on mortality ratios in the cohorts. If you randomly delete records, you get the same ratios!! Try it! And missing DOB doesn’t matter because we normally run the method on individual birth-year groups where we KNOW the birthdate.
“HVE is not accounted for.” It absolutely is. That’s why we skip 3 weeks after enrollment, for the HVE of the most recently vaccinated to fall to near zero. But most people were vaccinated over the previous months, making this effect very small. If you look at the v/u curves for 1950, for example, it’s flat for 4 weeks. That means no HVE at all. If there was HVE, the v/u curve would rise steeply at first and then level off.
“There were demographic differences in vaccine distribution: Moderna was given to older people.” KCOR main analysis results only differentiate whether the person was vaccinated or not. We don’t care who got vaccinated or with what vaccine. All we are saying is that the vaccination program is deadly. We can’t say anything about the individual vaccines unless we do a differential enrollment for each vaccine, which we haven’t done yet (so we’d enroll you in Pfizer, Moderna, other, and control cohorts). Great idea! I’ll do that next! As for demographic differences between the vaxxed and control cohorts, that’s what the baseline period adjusts for. We only require knowledge of the age of each person and that the male/female ratio is comparable for each group. And if the M/F ratio is very different, we can do subgroup analysis by sex to eliminate the minor effects caused by different hazard functions for M vs. F.
“KCOR assumes that vaccine effects on mortality are consistent over time and across contexts (e.g., COVID vs. non-COVID periods). KCOR’s cumulative approach may not capture these temporal dynamics.” Wow. This statement is misguided. We do not assume anything about what happens during the observation window. And we count every vote after the baseline period! If a vaccine creates excess NCACM in the first 8 weeks (the baseline period), the vaccine will appear to be safe. So if we find a safety signal (which is the case here), it cannot be due to early NCACM from the vaccine because that would increase the baseline and thus REDUCE the final value, not increase it! In short, if the baseline is distorted by an unsafe vaccine, KCOR will under-report the harm. So if there is a harm signal, it’s for real. And if you see the vaccine increasing mortality relative to the unvaccinated during the baseline period, that is a red flag that the final harm is underestimated.
“KCOR may misattribute mortality differences to vaccine harm rather than external factors like rollout schedules or population characteristics.” Nope. All we do is count the deaths in the two cohorts. If there is a differential harm or protection we will see it. No exceptions. We count everything. And we don’t attribute causes. We just point out that the vaccinated group had more or fewer relative deaths. It’s all done against a control.
“Neglect of COVID-Specific Outcomes” In our analysis, we COULD have tracked only COVID deaths instead of all-cause deaths. That’s what they want us to focus on… they want to ignore the elephant in the room! The analysis we did on ACM shows the vaccinated did relatively better than the unvaccinated during COVID waves. So we know it helped. The problem with a COVID death count is that they had a differential testing policy in Czechia in 2021. If you were unvaxxed, you had to be tested 2x/week. If you were vaxxed, testing was optional. With a policy like that, if you were vaxxed, you’d never get COVID and never die from COVID. So we stick to measuring what cannot be gamed. So we tell the truth. A lot of people don’t like that.
“Lack of COVID Period Differentiation. While KCOR compares mortality in COVID and non-COVID periods to establish a baseline, it does not account for the vaccine’s role in reducing mortality during high-COVID periods. Without incorporating COVID-specific outcomes, KCOR cannot fully address the trade-off between vaccine-related mortality and lives saved from COVID.” We track ACM through all periods. We report who did better. This objection makes no sense. It shows an unjustified focus on COVID benefits while ignoring the harms caused by the vaccine. Measuring ACM over a long period of time is the only way to balance the risk/benefit of a vaccine.
“Lack of Peer Review.” It’s been endorsed by my colleagues such as Norman Fenton. You can’t get much better than that. He just doesn’t endorse things lightly. None of my colleagues, nor any of the AI bots, have identified a credible issue with the approach.
“Failure to account for temporal mortality spikes unrelated to vaccines.” Huh? Effects like hurricanes, wars, etc. are common mode and are completely filtered out. They do not change the ratio in either direction. So a common mode benefit or common mode harm doesn’t change the baseline ratio.
“Challenges in Estimating Lives Saved because vaccines could have effects in the non-COVID baseline period like reducing long COVID.” If the vaccine reduced deaths in the baseline period in any significant amount, that would be extraordinary: a vaccine that reduces NCACM. It doesn’t do that. We can see that from the raw v/u curve, which goes up. It’s all visible in all the charts. The charts give you a complete view of what is going on. The R(t) is simply the most revealing chart. The KCOR method is all the charts, not just the R(t) chart.
“KCOR’s comparison of vaccinated and unvaccinated groups is complicated by the “mirage” effect, where the unvaccinated appear to have higher mortality due to undercounting (e.g., only counted at death) or HVE.” Nope. There is no undercounting here because it’s all record level data. We account for HVE by waiting 3 weeks from enrollment. Counting at death is fine… it means you never got vaccinated, because if you had, there would be a record. There is no data showing this is a problem.
“Ignores broader evidence showing benefit.” That’s deliberate. KCOR is a lie detector and tells you what the data shows in an unbiased way. It doesn’t care about scientific consensus. It tells you what the data shows. Unvarnished truth. Most people hate that because this method shows that we’ve been lied to. I get how people would not want that exposed so will do anything they can to attack this methodology, no matter how flawed their arguments are.
The problem all these critiques have is that they fail to acknowledge that the negative control tests return 1, i.e., no harm, exactly what you expect. For example, if we compare all cause deaths between a 40 year old cohort and an 80 year old cohort and look at all 1 year sliding observation periods over a 3 year period, the mean R(t) value at one year is .995. That is ridiculously close to perfect. And these cohorts have hugely different hazard functions, frailty indexes, and comorbidities. There were very high COVID periods and no COVID periods. Yet there was no differential signal between the full population cohorts on average because it was an “all” to “all” comparison rather than a “vaxxed” to “control” comparison. I am not aware of any method with a better outcome on a 40 year age gap negative control test than KCOR. That isn’t luck. Nobody gets that lucky. It’s all guaranteed by statistics.
The bottom line is if you want to continue to live a fairy tale, you’ll find a way to justify ignoring this method. It’s like taking the red pill.
Correction factors that may be applicable
If v/u ratio rises during baseline period, it’s a sure sign of an unsafe vaccine. The final R(t) value should be adjusted upward to get the true value.
Control group got partially vaccinated during the observation period. This simply depresses the net impact because there is less of a difference between the groups. It doesn’t change net harm vs. net benefit determination.
Cohort group is over 90 years old so depletion increases v/u ratio. Use the table to reduce the final R(t) value, e.g., 1.20 —> 1.17 for 90 year olds.
Pull forward effect (PFE) could differentially impact ratios. PFE is non-linear and only affects the most frail (the unvaxxed group). So this can cause the u curve (the most frail) to undershoot “normal” for a period before returning to baseline in older cohorts. It’s likely that there is some of this, but I haven’t completed this analysis yet. It’s tricky to do correctly. But we don’t see the “rebound dip” in the unvaxxed curves.
Uneven m:f ratio in the groups means the hazard functions aren't matched, so the ratio after 1 year will change slightly. This is easy to check because we can run the analysis on all males of age 60 and do a separate analysis on all females of age 60. When we do that, the mixes don't matter. Mixes only matter for older age groups and only if there are extreme differences. Running the analysis on a single sex/single age guarantees the hazard functions are the same, eliminating the need to adjust for sex.
Non-linear response to stress in the two cohorts. For example, suppose our unvaxxed group has 3X the deaths/week per capita of the vaxxed of the same age. Now we introduce COVID and there is a non-linear response: the vaxxed ACM goes from 100 to 110, but the unvaxxed goes from 100 to 150 instead of 100 to 130. Was this differential due to the vaccine? Or was it simply that COVID disproportionately kills the frailest people (of the same age) at a higher rate? Some studies indicate that mortality risk increases exponentially with frailty, particularly at higher frailty levels, suggesting the increase could be >3x. I don't have a good way to assess this, but using COVID death data during the vax rollout, we can clearly see that there was a flattening of the COVID mortality curve on a population basis when the shots rolled out, meaning the benefit was real (much to my surprise):
Short term HVE: we pause for 3 weeks from enrollment before we count.
Long term HVE: the baseline period determines the mortality ratio.
Feedback
Grok says KCOR is the best analysis method for objectively analyzing intervention/outcome data on a human population. Check this out. I asked it for a better method and it drew a blank.
UK Professor Norman Fenton reviewed the method and was unable to find any flaws. He thought it was quite clever.
Clare Craig also had very nice things to say:
Executive summary
To date, nobody has done what I would consider a “proper analysis” of the data from any country in the world to determine whether there was a net all-cause mortality (ACM) benefit from the COVID shots.
All the studies that have been done to date are very seriously flawed. They nearly always depend upon identification of COVID cases and COVID deaths in the vaccinated and unvaccinated cohorts. Almost all assume they can account for the unvaccinated mortality and healthy vaccinee effect (HVE) using mathematical models rather than measurements. It's a mess. Very unreliable. This is why they think the vaccines saved lives.
Because COVID cases and deaths are unreliable (e.g., due to differential testing policies for the vaxxed and unvaxxed), the correct way is to compare the ACM death counts of the vaccinated and unvaccinated groups during COVID and non-COVID periods. Such a study would use ONLY date of birth, date of death, and date of vaccination, and there should be no “adjustments.” It should just use the raw data, and that's it.
There are no such studies. Zero. Zip. Nada.
It’s actually very straightforward to do such a study. No privacy issues there. No reason every state can’t publish this information.
Lucky for us, this information from the Czech Republic has been in public view since March 2024.
Yet nobody has analyzed it in the manner outlined above.
So I’m going to show just how trivial it is to do it in this article.
It’s a super simple amazingly powerful method:
You pick a start-of-study date (e.g., when 70% of your population, which you divide up into same-age groups, e.g., born in 1950-54, has been vaxxed), which defines who is in the intervention vs. control groups. Next, you look for a time period after the start date when no external stress relevant to the intervention is present (e.g., the 3 months just after June 1 in the Czech Republic, when there was no COVID). You start cumulating event counts at that point in each cohort to establish a relative baseline rate in the two groups while under normal external stresses.
This start date for cumulation will ideally be as close to the start date as possible, e.g., the same as the study start date.
Now all you do is plot the ratio of (cum intervention events) / (cum control events) on the y-axis vs. time and look at the slope. The observation period should include times when the external stress is applied so the two counters reflect the differential outcome response in the two cohorts. The slope of a line drawn from the cumulative ratio at the start of the observation period to the end (typically 1 year later, but it could be any period of interest where the external common-mode stress is “supposed to” produce a differential outcome count, i.e., ACM deaths per week lowered in the vaxxed vs. unvaxxed) tells the story of benefit or harm:
Slope up—> vaccine is clearly killing people.
Flat slope —> no change.
Slope down —> net mortality benefit.

There are two caveats to be aware of if you are dealing with a vaccine intervention *AND* your outcome is DEATH:
If you find a short-term healthy vaccinee effect (HVE) in the two raw event-count lines vs. time (it looks like merging traffic lanes that later run parallel and is very obvious), you must start count accumulation after the lanes have merged. Below is what it would look like if present (it isn't present in our dataset since most people were vaxxed way before the start point). Typically, you'd only see this in real life if you are looking at time-series data where the deaths are relative to the time of the shot. See these Medicare time-series death plots for the COVID vaccine showing the effect is gone 30 days post-shot and declines exponentially (deaths rise quickly at t=0, more slowly as t increases, and you're at actual mortality by t=30 days or earlier). The slopes past that time are due to seasonality (look at ALL the graphs and you'll see that), not “unicorn HVE” (see below), which doesn't exist. See also the Pneumococcal vaccine curve (Medicare 2021, all ages) showing the HVE effect is gone in ~14 days.
For cohorts 80 and older, you should subtract the differences due to depletion using the table below.
That’s it!!! Simple and objective.
You can calculate confidence bounds from the 4 numbers in the ratio using the normal methods (the width will be dominated by the baseline counts, so longer baselines give narrower CIs). You can shift the observation start time and window size to show your result is robust.
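As a minimal sketch of what "confidence bounds from the 4 numbers" could look like: the function name and the specific approximation (a log-ratio normal approximation with Poisson count variance) are my choices for illustration, not necessarily what the published code does.

```python
from math import exp, sqrt

def r2_ci(dv0, du0, dv1, du1, z=1.96):
    """Approximate CI for R2 = (dv1/du1)/(dv0/du0) from the four
    cumulative death counts: vaxxed/unvaxxed at the baseline point (dv0, du0)
    and at the end of observation (dv1, du1).

    Uses a log-ratio normal approximation with Poisson variance 1/count per
    term. Treating the four counts as independent is a simplification: the
    end-of-period cumulations include the baseline counts, so the true
    interval is somewhat narrower than this.
    """
    r2 = (dv1 / du1) / (dv0 / du0)
    se = sqrt(1 / dv0 + 1 / du0 + 1 / dv1 + 1 / du1)
    return r2 * exp(-z * se), r2 * exp(z * se)
```

With illustrative counts of 800/960 at baseline and 5200/6360 at the endpoint, this gives an interval straddling 1, i.e., no statistically significant signal at those sample sizes.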
Some people say, “this method is flawed because the control group is getting the vaccine after the start time.” They are, but this simply depresses the differential mortality finding of the method (it moves the benefit or harm result closer to zero). And we can measure that precisely and make adjustments. So, for example, suppose our calculations show the vaccine increased mortality by 20%, and we find that 20% of the control group on average was vaccinated during the period. We just adjust the final benefit or risk estimate accordingly.
Some people erroneously think that you can measure the mortality of each population age group before the shots rolled out and set the baseline death rates of the two cohorts to those values. You cannot do that, for 3 reasons:
You can't calculate a baseline mortality rate of the 2 cohorts before the 2 cohorts are defined! People who opt for the shot typically die at the rate of someone 10 years younger than their age, and those who skip the shot typically die at the rate of someone 10 years older. So you can only measure the baseline mortality of the two cohorts post start point. There is no way to split a single mortality rate into two mortality rates. If you know how to do this, I'm all ears. See my article “FDA discovers Fountain of Youth” showing that people who study this are completely oblivious to this effect. This is one of the reasons why we have so many bogus vaccine studies.
The baseline rate of the age groups is corrupted in Czech data. The population mortality rate from the Czech data in 2020 is artificially low due to missing birthdates on 1M records and missing records in 2020. We don’t know the probability distribution function of the missing records. So there is no way to estimate the scaling factor accurately. And even if we could, there’s still the previous item which makes such a baseline useless.
The shots can impact mortality beyond the selection-bias effect, e.g., a vaccine which increases your mortality for 8 weeks post-shot, after which the effect disappears. We are assuming that if this were significant, the shots wouldn't have been approved. If early post-shot deaths are present, then our baseline mortality for the vaxxed will be higher than the true baseline and the vaccine will appear to save lives. So if there is a significant immediate boost in short-term mortality post-shot, then using this method but measuring events relative to the time each person got the shot is appropriate. There is a section below describing this modification of using relative time vs. calendar time.
To see it in action, open this spreadsheet and go to the simulation tab and try changing the death numbers and look at the result.
In this example, the vaccine was 100% effective during COVID months 4 and 5. The net impact was .98 meaning it had a 2% mortality benefit. The plot has the endpoint lower than the start point which confirms it was beneficial.

Here’s what the actual data from the Czech Republic looks like. The vaccine was a disaster.
We find that the COVID vaccines caused a net all-cause mortality increase exceeding 20% a year in the elderly.
This result is aligned with the 36% minimum all-cause mortality increase after 1 year in the Levi Florida study for Pfizer. If Moderna were perfectly safe, then with a 2:1 ratio of Pfizer to Moderna (which is what it was in the US), you'd get a mortality increase of 24%, a close match to the 23% computed here.
Taken together, the result is unambiguous: the shots were a disaster.
Nearly every health department has the data to do this analysis. Yet not a single one lifted a finger to do the analysis. One health authority, Te Whatu Ora (Health New Zealand), spent millions of dollars paying lawyers to criminally prosecute their own database administrator (Barry Young) who was simply trying to alert them to a safety problem, and not a penny on doing the actual data analysis that would have shown them he was right.
And the great thing about this method is it works for anything causing anything:
do vaccines given to those under 3 years old cause autism?
does the childhood vaccine schedule increase the risk of chronic disease?
does the MMR vaccine cause sudden death?
does vaccine X prevent infections, hospitalization, or deaths due to virus X?
does taking tylenol after a vaccine increase the rate of autism diagnoses?
does doing X increase the risk of Y?
Introduction
For a number of reasons, including differential testing requirements for vaccinated and unvaccinated, short of a double-blind randomized trial honestly performed, the only reliable way to assess whether or not the COVID shots had a mortality benefit is to do an observational study that compares the ACM of the unvaccinated vs. vaccinated of 5-year subgroups of an entire population over at least a 12 month period.
Such an observational study is trivial to do. I know that because I spent all of about 3 hours doing the analysis using the Czech data.
All you need is a record for each person of:
their date of birth,
date of first vaccination, and
date of death.
The effect size is so huge, no more data than that is needed if you are dealing with full population data like we are here.
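To make the "3 fields per person" claim concrete, here is a sketch of the cohort-assignment bookkeeping. The enrollment date, function name, and return convention are my illustrative choices, not taken from the published code:

```python
from datetime import date

ENROLLMENT = date(2021, 6, 14)  # illustrative enrollment date

def assign_cohort(dob, dod, dov):
    """Assign one person to a fixed cohort as of the enrollment date.

    dob: date of birth (picks the 5-year age band)
    dod: date of death, or None if alive at the end of observation
    dov: date of first vaccination, or None if never vaccinated

    Anyone first vaccinated AFTER enrollment stays in the 'unvaxxed'
    cohort for the entire study (the cohorts are fixed at enrollment).
    Returns None for people who died before enrollment (not in the study).
    """
    if dod is not None and dod <= ENROLLMENT:
        return None                      # dead before enrollment
    band = (dob.year // 5) * 5           # e.g., 1952 -> the 1950-54 band
    vaxxed = dov is not None and dov <= ENROLLMENT
    return band, ("vaxxed" if vaxxed else "unvaxxed")
```

From there, counting weekly deaths per (band, cohort) pair is a single pass over the records.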
Every country should publish those 3 pieces of information. There should be a law in the US to require states to publish this info. But only one country in the world has such a law: the Czech Republic. AFAIK, they have the only record-level data for a population that has been made publicly available in the history of the world for a vaccine. For a misinformation superspreader like me, it was something I could only dream about.
Once you have the data, it takes less than 30 minutes to write the code, and then a few hours to look at the data and see what it says. If you have guidance for where to look, you’re basically talking about less than 30 minutes because you can copy my code.
NOBODY IN THE WORLD HAS EVER DONE SUCH A STUDY USING THIS METHOD FOR THE COVID VACCINE.
Why not? Probably because it would reveal the truth.
They all do it “the wrong way,” relying on COVID cases and COVID death assessments, and almost always refusing to measure the non-COVID all-cause mortality of the unvaccinated group.
The Arbel study in Israel is exhibit #1. They made all sorts of assumptions in their models and never double checked with the data to see whether their models matched the non-COVID ACM of the unboosted. After they were caught (in the Hoeg letter), they doubled-down on their models and REFUSED to reveal the NCACM of the unboosted. When MIT Professor Retsef Levi asked for the data, they turned him down too. He had to sue them and they still didn’t turn over the data. Is this the way science works?
So I guess it’s up to the misinformation superspreaders like me and my friends to analyze the publicly available record-level data seeing how nobody else will do it the right way.
Quick description of the method
If a vaccine is a placebo, and we track the deaths over time in the vaccinated group vs. the unvaccinated group determined at a FIXED time t=ts, then the cumulative deaths will be proportional to each other over time and the slope of this ratio will be flat.
If the vaccine is saving lives, the slope of the line from a baseline point to a year later will be negative. If it is a net harm, the slope will be positive.
This relies on a mathematical fact that groups with different baseline mortalities die proportionally with each other.
The method detects EXTERNALLY APPLIED CAUSES OF DEATH that DIFFERENTIALLY IMPACT the mortality of the cohorts and it is VERY sensitive to detecting such interventions. Example: COVID virus that due to the vaccine SHOULD cause a differential impact on the cohorts.
The math is brain dead simple and the data required is minimal: DOB, DOD, DOV.
The chart above shows what you get when you plot the ratios of the cumulative deaths of each cohort for the COVID vaccine. The fact that the slope goes up during non-COVID means the vaccine is killing people as soon as it is given! The slope for a safe vaccine would have been flat during that period. Then during high COVID times, the vaccine redeems itself for a very short period (from November to December) then goes back to the differential kill rate.
You’ll be able to verify each of these statements in the underlying individual cumulative death count curves next.
The method explained visually
Here is the actual cumulative mortality of the vaxxed vs. unvaxxed in our 1950 cohort with a June 14, 2021 enrollment date, when the vaxxed and unvaxxed cohorts were defined. This shows the deaths in each cohort over time. They look the same, don't they? Just one grows faster than the other because it had a bigger cohort size. And they have different slopes because the slope depends on the cohort size at the start and the death rate. It turns out the vaxxed die less, but it's a bigger cohort, so it “looks” like the vaxxed are dying more. So at first, it looks like “move along, nothing to see here folks!”

What the method I created does is allow you to see the signal you wouldn’t normally notice.
So compare the two charts above. The chart with each of the cumulative event counts makes it hard to see a signal. The chart above it with the single line (we divide the vaxxed line by the unvaxxed line) shows very clearly what is going on!
I’m going to show you the hidden message in the data above right now.
In the graph above, I added two green lines and a pink line.
The slope of the green lines matches the starting slope of the cohort which was measured during no-COVID period. So I took the slope on the left of the graph and moved that line up and to the right to overlay the post COVID line.
Note: You cannot simply EXTEND the line because the COVID wave causes a shift up.
The pink line is just a straight line from October 2021 to December 2022.
For the unvaxxed, when COVID hits, cum deaths deflect upwards just as expected, and then, post COVID, the slope continues along at the same slope it was before the interruption. The prior trendline just got displaced upwards. But the death rate was the same. So the unvaxxed went on as usual after the COVID wave.
For the vaxxed, we see that they are dying at a rate that is clearly faster than they died in the no-COVID baseline period. Post COVID, the death rate didn’t return to baseline mortality. It got worse!! The pink line shows us that the death rate during Omicron just kept continuing at the same rate AFTER Omicron subsided. That’s a disaster. And no explanation they can offer fits the data as to why the mortality rate didn’t return.
So the unvaxxed returned to their pre-COVID wave death rate, the vaxxed did NOT.
The method I describe below is a simple mathematical technique to make this signal I just showed you crystal clear.
No magic required. No comorbidity adjustment required. We are comparing baseline mortality vs. later mortality. We compare each group with itself.
The method in detail

Definitions:
ts = start time of the study, when the vaxxed and unvaxxed cohorts are determined. We then watch how each cohort dies off each week and keep counters for each cohort. So if you get your first shot after t=ts, you are considered, for the purposes of this study, unvaccinated. This is very important.
t0 = time when the baseline R0 value is determined, typically 8 weeks after ts, which in our case is the end of a normal non-COVID mortality period = Aug 30, 2021
R1(t) = cumulative death ratio as of time t = (cumulative deaths from t=ts in the vaxxed) / (cumulative deaths from t=ts in the unvaccinated). So R1(t) will rise over time if the vaccine is preferentially killing the vaccinated. R1(t) will be a flat line if there is no differential mortality between the cohorts.
R0 = R1(t0). R0 is the value of R1 at the end of the non-COVID period in our case, so it's a baseline ratio of the mortalities when everything is “normal” and the threat we want to measure a differential response to is not present.
R2(t) = R1(t)/R0. This is simply R1(t) normalized to a baseline of 1 at the end of the non-COVID period. This is what we plotted above. So R2(Sep 1, 2022) = 1.22 means a 22% annual net mortality increase due to the shots.
R3(t) = R1(t) / R1(t - 1 year), which is just a way to show our baseline wasn't cherry-picked. R3(t) should be relatively constant regardless of t and should equal R2(t0 + 1 year).
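The definitions above amount to a few lines of arithmetic on two cumulative series. Here is a minimal sketch (the function name, weekly indexing, and 52-weeks-per-year assumption are mine, not from the published code):

```python
from itertools import accumulate

def kcor_series(vax_weekly, unvax_weekly, t0, weeks_per_year=52):
    """R0, R1(t), R2(t), R3(t) per the definitions above.

    vax_weekly / unvax_weekly: weekly death counts for the fixed cohorts,
    counted from t=ts onward.
    t0: index of the baseline week (end of the no-COVID period).
    """
    cum_v = list(accumulate(vax_weekly))
    cum_u = list(accumulate(unvax_weekly))
    r1 = [v / u for v, u in zip(cum_v, cum_u)]
    r0 = r1[t0]                        # baseline cumulative-death ratio
    r2 = [x / r0 for x in r1]          # R2(t0) == 1 by construction
    # R3 is only defined once a full year of R1 exists
    r3 = [r1[t] / r1[t - weeks_per_year]
          for t in range(weeks_per_year, len(r1))]
    return r0, r1, r2, r3
```

With constant proportional weekly deaths (the "placebo vaccine" case), R2 and R3 come out flat at 1, matching the claim that a flat line means no differential mortality.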
Pick a start date where a majority of the elderly people have been vaccinated and a non-COVID period is beginning,
divide each 5 year cohort into vaccinated/unvaccinated at that moment based on their vaccination status at that date.
Start tracking weekly deaths of each cohort at the “start date”
See how their mortality compares over time relative to their own mortality during the non-COVID baseline measurement period right after they were vaccinated. Do this by tracking cumulative deaths for the vaccinated and unvaccinated each week. Compute the ratio (the baseline ratio) of cum vaxxed/cum unvaxxed at the end of the non-COVID period, which is typically around 8 weeks after the start point. Call that R0. This tells you how people in the two cohorts die over time.
Note: You can pick any point in the baseline period you want; later points are better because the value is more stable. But if you want, feel free to pick a baseline point outside that range. Pick any point you want! Just try to avoid COVID waves, because that's not a “baseline” period. And no, the baseline period isn't “artificially low” due to short-term HVE depressing the ratio: the HVE effect decays exponentially from the time of the shot, so if it were significant you'd see something like the line in red, and as you can see from the chart above, we don't see the effect at all. It's gone. Only the pro-vaxxers still believe it is there and somehow must be causing a mortality differential that keeps going on for YEARS. Here's what the counts look like; I picked the t0 point at the end of the non-COVID period so there would be a stable ratio (it's increasing because of the vaccine, but that's not my fault):
R1(t) is the ratio of cum vax/cum unvax at time t. R2(t) = R1(t)/R0 is what we plot above. If R2(t) > 1 at a year from the R0 baseline date, the vaccine was likely net harmful.
So in this example, there was a 23% mortality increase from baseline.
You can simply plot R2(t) on a graph like I did. You can tell the vaccine is unsafe because the slope will be positive if there is net harm. You don't even have to establish the baseline: you can take any two R2(t) points a year apart and divide them to get R3(t). If R3(t) > 1, the vaccine was unsafe.
For the most accurate results, apply the appropriate HVE correction factor to each ratio. For ages under 70, the correction is less than .25% (very small). So in this case, which is for those born in 1950-54, 23% would become 22.75%.
So, for example, suppose you find there are an equal number of deaths in both cohorts during the baseline period. Then you just count up the number of deaths in each group over the period from the time of vaccination to the end of, say, 2022, and take the ratio. Once you have that, you need to apply the HVE correction factor (determined from the baseline mortality rates) to the answer.
A simple example
Start date 6/1/2021 after most 60 year olds are vaccinated.
Starting 6/1/2021, there are 100 vaxxed deaths and 120 unvaxxed deaths per week. This continues for 8 weeks. So now we have cumulated 800 vaxxed deaths and 960 unvaxxed deaths.
Take v/u = 800/960 = .833 = R0. That's the baseline mortality ratio.
Now let’s look out 52 weeks later.
In the ideal case where nothing is going on, we have
5200 vaxxed deaths and 6240 unvaxxed deaths.
You take the cum v/u ratio at that point: 5200/6240 = .8333 = R1(t).
You then calculate R2(t) = R1(t)/R0 = .8333/.8333 = 1. So the vaccine had no effect.
Now suppose we introduce a COVID week where the vaxxed are unaffected, but the unvaxxed die at 2X their normal rate for that week (120 extra deaths).
The unvaxxed death count is now 6240+120=6360.
R1 is now cum vaxxed/cum unvaxxed=5200/6360=.818
So now R2(at that point)=R1/R0=.818/.833=.98
So the vaccine had a benefit because it created relatively fewer deaths.
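The worked example above can be checked in a few lines. This is just the example's arithmetic made executable (variable names are mine):

```python
# Baseline: 8 weeks at 100 vaxxed and 120 unvaxxed deaths per week.
r0 = (8 * 100) / (8 * 120)                 # 0.8333... = R0

# A quiet 52 weeks at the same weekly rates: nothing differential happened.
r1_quiet = (52 * 100) / (52 * 120)
r2_quiet = r1_quiet / r0                   # exactly 1.0 -> no net effect

# One COVID week where only the unvaxxed die at 2x their normal rate,
# i.e., 120 extra unvaxxed deaths on top of the 6240.
r1_covid = (52 * 100) / (52 * 120 + 120)   # ~0.818
r2_covid = r1_covid / r0                   # ~0.98 -> ~2% net benefit
```

The key property on display: absolute death rates and cohort sizes cancel out of R2, so only the *differential* response to the stress moves the final number.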
Interpreting the R2(t) graphs
Vaccine beneficial: End point is lower than t0 start point.
Vaccine does nothing: End point is same height as t0 start point.
Vaccine harmful: End point is higher than t0 start point.
So in the R2(t) graph below for the 1950 cohorts, what we are seeing is a vaccine which is continually increasing mortality, especially during COVID waves for the vaccinated. On the back end of the wave, there is a mortality benefit, but more than likely this is a pull forward effect (PFE) where there is a death deficit after a rapid rise in deaths. This is because those on the verge of death are pulled forward to die sooner leaving a void of people to die soon after.
ChatGPT analysis
It doesn’t get much better than this ChatGPT summary:
Grok analysis
The initial Grok analysis of the methodology shows the method is robust, makes perfect sense, and is the way the data should be analyzed.
The second Grok analysis shows all of Grok’s objections were addressed to Grok’s satisfaction other than it hasn’t passed peer-review.
Here is the final assessment after I overcame all objections to Grok’s satisfaction:
The third Grok analysis concluded that KCOR doesn’t generate spurious results like claiming a vaccine is dangerous when it isn’t.
It later added this when I asked about my bet:
The fourth Grok analysis resulted from one of my readers asking Grok what it thought, and Grok spewed out objection after objection. I systematically dismantled each objection. Here's what Grok #4 had to say at the end of our discussion:
The code and analysis
Code and analysis can be found in my github.
Results
Recapping here’s what we did:
compute a baseline cumulative mortality ratio during the non-covid period following vaccination.
compare that with the same cumulative ratio at the end of the study period.
Graphically, you just look at the y-value of the final datapoint. If it is >1, the vaccine caused net harm. This requires some adjustment for the healthy vaccinee effect (HVE) for cohorts near 85 years old. This is more complicated to do, but it will detect if a vaccine is increasing non-COVID all-cause mortality (NCACM). For those born in 1950, the final value is 1.23 (1 year from the reference date), which means the vaccines caused a 23% net mortality increase in the first year, and that includes all mortality benefits from the vaccine preventing COVID.
The analysis showed the vaccine increases your risk of death compared to the unvaccinated group as shown in the images below. The differential decreases over time.
Amazingly, all the R2(t) curves are nearly identical. It was absolutely stunning to see this replication in real-world data.

A simple way to see the impact
See how the curve goes above the trendline when there is a “stress” applied? This then causes a pull-forward effect returning deaths to the trendline. Going above the trendline means the vaccine is helping the virus, not you.
HVE tutorial
If I had a nickel for every person who attributed higher vaccine mortality to the “healthy vaccinee effect” I could retire.
In this section, I will talk about the 3 types of HVE:
Short: lasting <3 weeks because we don’t vaccinate people who are going to die
Long : selection bias where the frail people refrain from taking the shots
Unicorn (aka “imaginary HVE”) which was invented to explain why the mortality of the vaccinated rises over time after getting the shots and is presumed to be a longer time constant version of short HVE. It’s not real because it’s too big, not symmetrical, and doesn’t follow an exponential decay (e.g., have a half-life).
If there are no external forces (like a COVID vaccine given to one cohort and not the other) causing differential mortality between the cohorts, then selection-bias-created mortality differences between cohorts is a zero sum game. If deaths increase in one cohort, they must decrease in the other cohort.
There is short term HVE. I know because I’ve seen the Medicare data. People who are going to die don’t get vaccinated. We ONLY would be able to see this effect shortly after the start date of our study when the cohorts are defined and it will be small because only recently vaccinated people would have it. Short term HVE is the reason for skipping 3 weeks before cumulation of counts to minimize the HVE impact of people who were vaccinated right before the enrollment date.
There is long term HVE as noted above.
People claim that the reason the vaccinated appear to be dying at a greater rate over time is that there is unicorn HVE which is the supposed “long tail” of the short-term HVE effect and that’s causing the vaccinated to die more.
I discuss unicorn HVE in detail here.
The so-called long-term HVE effect is simply another way to say, “the unvaccinated cohort has higher mortality than the vaccinated because of SES, access to health care, health-seeking behavior, etc.,” and the differential mortality rates are set at the time the cohorts were picked. While it is “possible” there is a long-term HVE effect, the data above proves that unicorn HVE is as mythical as a unicorn.
What I did was a simplistic calculation to estimate the magnitude of the mortality difference caused by the vaccines.
In a subsequent article, I’ll refine this estimate and account for the long-term HVE effect (healthy vaccinee effect).
The adjustment is because the baseline mortality of the unvaccinated cohort is much larger than the vaccinated cohort. That difference in mortality causes the deaths per month to change at a different rate in a fixed sized cohort. It has to do with picking two points on the curve below. If the slopes are the same, no problem. If the slopes are different (e.g., you pick a point to the left of the peak and the right of the peak), it creates a difference in deaths computed over a period.
For example, for those born in 1935, the vaccinated cohort is around a 9% annual mortality and the unvaccinated cohort is double that.
So if you had a perfectly safe vaccine, this kind of mortality difference would create a difference in the cumulative mortality.
A 20% annual mortality means deaths for the unvaccinated will go down every month since we are squarely on the right of the hump. A 10% annual mortality means deaths will fall only slightly every month (it’s just over the top of the hump). So it makes a neutral vaccine look bad for older age groups.
It turns out it’s hard to make the vaccine look good, and much easier for the HVE effect to make the vaccine look worse than it really is because the unvaccinated is always further right on the curve.
For younger age groups, the HVE effect is very small. For those around 85 years old, it’s significant.
This doesn’t change the risk benefit for anyone. It’s still a disaster.
For those born in 1950, the mortality rates of the cohorts are between 1.5% and 4% which means they have similar mortality slopes for deaths per month of a fixed size cohort. So no correction factor is needed.
HVE and “frail people” papers
There are a lot of bad papers out there about HVE.
“Chronic HVE,” referring to the long-term mortality disparity between vaxxed and unvaxxed (e.g., the unvaxxed dying at 3X the rate of the vaxxed), is a simple partitioning of a population into 2 cohorts. It does not create frail people. Those people were already there.
The partitioning of a population of a given age through an offer of vaccination instantly creates two cohorts, one with a higher effective frailty index than the other. This frailty index is a multiplier. The hazard functions (which depend on the person's chronological age) are the same, but the frailty multiplier of each person is different.
The Poisson statistics of a group still apply.
Here’s the kicker: any group of people even with different ages (hazard functions) and different frailty mix will have a single effective frailty multiplier and a single effective hazard function.
So we can characterize ANY large group, no matter what their mix is and no matter how they were selected (even if non-random), with just 2 parameters.
It’s not rocket science.
A fixed # of people who are young, will have monthly deaths that INCREASE over time.
A fixed # of people who are around age 85 will have monthly deaths that are relatively STABLE over time.
A fixed # of people who are over age 85 will have monthly deaths that DECREASE over time.
See the curve above. That curve is all about the tug of war between the annual increase in death rate and depletion from the baseline death rate. Once your baseline death rate gets really high (like over 10% a year), it exceeds the annual increase in mortality (typically 8% a year, but it's age dependent), and deaths per month fall each month in a fixed cohort.
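The tug of war between aging and depletion can be simulated directly. This is a toy model, not the article's calculation: I assume an 8%/year increase in the annual mortality rate (a rough Gompertz-style aging term, as the text suggests) and monthly compounding, just to show the direction of the effect at low vs. high baseline mortality.

```python
def monthly_deaths(n0, annual_mortality, annual_increase=0.08, months=36):
    """Monthly deaths in a FIXED cohort of n0 people whose per-person
    annual mortality starts at `annual_mortality` and grows by
    `annual_increase` per year as the cohort ages. No one is added;
    the cohort only depletes."""
    alive, out = n0, []
    for m in range(months):
        q_year = annual_mortality * (1 + annual_increase) ** (m / 12)
        q_month = 1 - (1 - q_year) ** (1 / 12)   # convert annual to monthly risk
        deaths = alive * q_month
        out.append(deaths)
        alive -= deaths
    return out

low = monthly_deaths(100_000, 0.015)   # ~1.5%/yr: aging wins, deaths RISE
high = monthly_deaths(100_000, 0.20)   # 20%/yr: depletion wins, deaths FALL
```

At 1.5% annual mortality the monthly death count climbs over time; at 20% the cohort depletes faster than its hazard grows, so monthly deaths fall, which is the asymmetry the depletion correction for the oldest cohorts addresses.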
The final calculation for 66-70 year olds (those born in 1950-54)
The mortality table below shows that at 1.5% and 4% annual mortality, deaths from a fixed-size cohort increase at 2.35% and 1.95% per year, respectively. When we apply the HVE correction factor, the vaccine is less deadly by less than 0.25% than what we calculated (see Grok for the calculation; search for “it slightly improves the ratio”).
The margin of error is .1 so our 95% CI range for the mortality increase is [1.1342,1.3369]. In other words, the mortality increase was between 13% and 34%. It’s statistically significant and a very troubling result.
Sensitivity analysis
As can be seen from the 1950 graph, shifting the reference point will always produce a point 1 year later that is much higher than the reference.
Changing the start date of the study didn’t change the results as shown below with 4 different start dates.
Implications
Not only did we kill people, but the medical community was facilitating it by doing flawed studies that led to physicians recommending it to patients as a helpful intervention when it was exactly the opposite.
Negative controls
This method is extremely powerful in detecting signals of harm or benefit, more so than anything else known to epidemiology. I say this because the method instantly found the harm signal in the COVID vaccine (and it only uses DOB, DOD, and DOV) whereas every other epidemiological technique fell flat using far more information.
I tested the method with negative controls by using the following to divide the cohorts instead of vaccination status, and found the measured 1-year slopes rarely deviated much from 1:
Age difference (5, 10, 15, 20, etc) showed flat slopes
DCCI >1 vs. 0
Sex (M vs. F split)
You can see the results in the spreadsheet. It’s mind blowing.
For example, the 1950 birth cohort with a 25-year offset still has a standard deviation of the 1-year slope measure (over weekly sliding windows) of just 1.3%, showing how accurate the method is even for a 25-year age difference (1950 vs. 1925, so the cohorts were 70 and 95 years old, and the mortality comparison was a flat line using this method).
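The flat-slope behavior in these negative controls can be mimicked with synthetic data. A minimal sketch, where the two cohorts’ death counts and growth rates are invented (standing in for, say, a 70-year-old vs. a 95-year-old cohort) rather than taken from the Czech data:

```python
# Sketch: negative control on two synthetic cohorts with very different
# mortality. The cumulative-ratio curve stays nearly flat, so the sliding
# 1-year slopes stay near 1. Rates here are illustrative.
weeks = 156  # 3 years
deaths_a = [100 * (1 + 0.06 * w / 52) for w in range(weeks)]  # ~6%/yr growth
deaths_b = [400 * (1 + 0.04 * w / 52) for w in range(weeks)]  # ~4%/yr growth

cum_a = cum_b = 0.0
ratio = []
for a, b in zip(deaths_a, deaths_b):
    cum_a += a
    cum_b += b
    ratio.append(cum_a / cum_b)

# 1-year slopes of the cumulative ratio, sliding forward one week at a time
slopes = [ratio[w + 52] / ratio[w] for w in range(weeks - 52)]
print(min(slopes), max(slopes))  # all within ~1% of 1.0
```

Even though cohort B dies 4x faster than cohort A, every 1-year slope of the cumulative ratio stays within about 1% of 1, which is the property the negative control tests rely on.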
Here are the standard deviations of the 1-year sliding-window slopes (computed weekly over a 3-year period) for a 10-year age difference, which generally means about 2x as many comorbidities (0.01 means 1%, which is very, very good):
Significance of the negative control tests
It shows that there is no need for comorbidity determination, matching by ages, cause of death, or any type of matching whatsoever.
As long as there is no external intervention that can impact mortality that is UNEVENLY applied to the two cohorts, the method will return a flat or nearly flat slope.
When there is a differential external intervention that impacts the mortality of one cohort relative to the other (such as a COVID vaccine given to one cohort and not the other), this method will instantly identify the net direction and magnitude of the effect.
This makes the method extremely powerful and useful, especially considering that no other method has been able to accomplish this with the COVID data.
The fact that all you need is 3 fields (DOB, DOD, DOV) is a huge plus.
Simplicity, transparency, accuracy make this an important new tool for honest epidemiologists.
Grok agreed wholeheartedly. Check this out. The comment about the age control test is there because Grok did its own estimate rather than letting the spreadsheet calculate it. The slope for the 1950 vs. 1975 case was 0.973, and that’s the mean over every 1-year interval in a three-year period (shifted by one week each time).
Here’s the corrected response confirming this is an extraordinary technique:
Why this method is infallible
The method relies on only a simple mathematical fact: any large group of people picked at t=ts will die off at a monthly rate governed by two variables, and those numbers are very stable over a 1-year timeframe. The two variables are A = the current death rate and B = the growth in the death rate over time. This is the basis of human mortality tables, and no large group of humans can escape it short of the invention of a new pill that stops aging that everyone takes. The differential in A is fully taken into account. The effect of B is small and can result in a very small delta in the calculated ratio over a year (e.g., for 1950, it’s less than a 1% effect), and that type of mortality adjustment can be made if 1% precision is required.
The method is a simple way to clearly see these differential mortality signals.
Limitations
My goal with this article is to show people that for the target cohort, those over 65, the vaccine was a train wreck and the medical community was none the wiser. With that in mind, here are the current limitations:
The method relies on math and statistics; on the law of large numbers and the central limit theorem. If those no longer hold, then this method will not work.
We don’t have cause of death data, so people claim that we can’t do a definitive causality study. I disagree. This is as definitive as it gets. Your mileage may vary.
We do have comorbidity information in the source data, but this was not used in the analysis. Instead, we characterized each cohort using actual measured mortality values of the cohorts during the non-COVID baseline. Removing people with comorbidities would be counter-productive as this is a full population divided into cohorts and we are trying to assess real-world efficacy. I’ve already shown in the negative control section that the % of comorbidities in a cohort does NOT matter. Even comparing people with age differences of 25 years resulted in a flat slope!
The method assumes that groups with different mortalities, when exposed to a hazard, will react roughly in proportion to their baseline mortality. This is essentially the assumption used in Cox proportional hazards. This assumption is not strictly true in practice, but it’s close enough in our case for 2 reasons: 1) we ran negative control tests on people with 25-year age differences through COVID and non-COVID periods and the slope of R2 was completely flat, and 2) we are looking at a vaccine which is touted as having a huge effect size, i.e., one cohort will be barely impacted while the other cohort will be proportionally heavily hit, so even if there is a differential response, it doesn’t matter because the differential is small relative to the effect size we are measuring. So the method should easily show a vaccine benefit if one exists. On the other hand, if the vaccine doesn’t work at all and the vaccine group doesn’t proportionally react to the stimulus in the same risk ratio as the unvaxxed group, then the vaccine may “appear” to be slightly effective or ineffective simply because it’s never strictly true that cohorts with large mortality differences react proportionally to any intervention.
Similarly, there was no adjustment for socio-economic factors. Such adjustments are difficult to do accurately and, once again, aren’t needed. If you have a cohort and you benchmark their mortality over time, you have a baseline control to use for when you apply a stimulus.
Data was not scrubbed for errors. This introduces a very small inaccuracy in the numbers that does not change the statistical significance of the result.
For space reasons, we don’t show all the graphs here. You can open the spreadsheet and use the year of birth to examine all the cohorts. They all look similar.
I haven’t shown the formal HVE adjustments to older groups. The HVE will always make the vaccine look better, i.e., with older ages there will be more of an adjustment so a 20% mortality increase may only be a 10% mortality increase after the adjustment.
“Inadequate confounder control” objection. Any LARGE cohort dies at a measurable rate that increases over time. So any large fixed-size cohort can be characterized by just 2 numbers: the average mortality and the increase in mortality over time. Why do I need to “adjust” for confounders? That just throws in more opportunities for errors. You are right that the vaccinated die at a higher rate, but this is completely accounted for by the baseline period. And this is only showing that, relative to the mortality of the unvaccinated group, the vaccinated group did worse. If we “correct” for confounders, we don’t have a real-world assessment anymore. And adjusting for external trends like lockdowns, improved treatments, etc. will affect both groups unless the groups were treated completely differently. But it doesn’t matter. The article is simply saying that with everything going on, the result was that the vaccinated had higher mortality. It’s not ascribing 100% of the blame to the vaccine.
This method is only saying, "This is the resulting difference in mortality in the actual real life measured deaths. This should be a wakeup call because it’s highly unlikely that any confounder can explain a mortality that is this large and this consistent over time and age groups.”
Finer points of the method
The date picking isn’t arbitrary. It’s set by the data. For the COVID vaccine, you’d pick the start of the study (where you separate the cohorts) right at the start of the low-COVID period. This gives you time to accumulate death counts in the cohorts during the baseline period. You can then compare the point on the graph (the cumulative ratio at that week) with the point 1 year to the right and take the ratio of the two points to determine whether there was net harm (ratio > 1) or net benefit (ratio < 1). The results will decline over time as the vaccine harms wear off (the most affected people die).
You do not need to track the vaccination % of the control group over time. This is optional and will simply adjust the final result, making whatever effect you measured even stronger. So in our case, if we did adjust for the % vaccinated in the control group (which you would do as a post-processing step after the final ratio is determined), the vaccine would be even more deadly than it already is. If the vaccine was found to be safe, this adjustment would show it to be even safer. This is because any differential mortality signal (which is what this measures) would be enhanced in magnitude (think of multiplying by a number like 1.2 to make the adjustment). This is the key point Professor Morris completely misses. If a percentage of the control group is vaccinated during the observation period (which absolutely happened), it does not change the baseline mortality of the cohort since that was already determined at the start time. There is no HVE effect because, unlike at baseline, there is no mortality change when they decide to get vaccinated after the groups were defined. Think of it this way: I pick 100 people and watch how they die over time. Then I ask: who wants to be vaccinated? This doesn’t change the mortality rate of the ENTIRE group at all. It simply identifies a subset of the group who will die at a lower rate. So the baseline mortality of the control group (baseline expected deaths per week) is unchanged from that measured during the baseline period for the control group. What is different after the vaccine is given (assuming it is a safe vaccine that only PROTECTS from COVID) is that we now have less of a differential response signal to a common-mode stress, since the cohorts are now less differentiated. This is what Morris is missing. He’s just not thinking it through. Sadly, these are the people we are supposed to respect. There is one exception to this (see the next point).
The COVID vaccines were not safe; they actually increased people’s non-COVID ACM as we know from the Levi Florida study. So how does that affect the control group? As the control group gets vaccinated over time, their group mortality increases from what it was at baseline. This means that the ratio decreases (since unvaxxed is the denominator). So the control group getting vaxxed makes the vaccine appear to be safer than it really is. So the result we calculated would be adjusted to be even more deadly if we took this into account.
There is no PFE (pull-forward effect) after the shots. The mortality rate kept rising! With a PFE, there is a deviation above the trendline followed by a deviation below the trendline.
There is no “long term HVE” where vaxxed die more and unvaxxed die less over time. While that could theoretically happen, it’s impossible in real life because death is hard to predict more than 2 weeks out. And anyone who lives more than 2 weeks is going to want to be vaccinated.
You can pretty much pick ANY point in the baseline non-COVID period (i.e., before Sept 1) to compute the 1-year impact of the vaccine. You then pick the corresponding point on the line 1 year later and divide the y-values of those two points. To ensure the method always finds the same number given the same data, always pick the longest possible baseline period.
There isn’t a correction for the short term HVE causing lower mortality in the vaxxed and higher mortality in the unvaxxed. This effect is insignificant after 3 weeks and since people didn’t get their shots all at once, this effect is very small. By starting the baseline 4 weeks or more past the start date, the effect is gone.
There is really no such thing as “long term HVE.” It’s all selection bias. The unvaxxed cohort dies more due to the selection bias when people made their choice to be vaccinated. There is a real short term HVE, but the “long term HVE” that people refer to is actually just the differential mortality caused by the selection bias.
You can see from the data when the non-COVID period starts and ends because ACM elevates at the start of a wave and then goes back down to normal. So for my non-COVID baseline period, I simply looked (in the 1950 group for example) when ACM deaths dropped under around 200 deaths a week and then when it climbed up again to verify start and end of the non-COVID mortality baseline period.
You don’t need fine grained record level data. This is sufficient:
5-year range of DOB (date of birth)
1-month range of DOV (date of vaccination) [including blank]
1-week range of DOD (date of death) [including blank]
If you are getting HIPAA pushback on the above request, you can replace the DOV field above with “Was this person vaccinated with dose #1 by <your start date>?” which is 1 or 0.
You can also replace the record-level data request by simply asking for summary data, and that works just fine too! That’s all I used for this calculation! The summary data columns are super simple:
Index columns: 5-year DOB range, week of death (including blank), week of first COVID vaccination (including blank).
Value columns: count of matching index-field records, count of vaccinated on or before the selected start date.
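Such a summary table can be produced from record-level data with a simple groupby. Here is a minimal stdlib sketch; the field layout and sample records are hypothetical, not the Czech dataset:

```python
# Sketch: building the summary table from record-level rows with a groupby.
# The sample records and week encoding are made up for illustration.
from collections import defaultdict

records = [
    # (5-year DOB range, week of death or None, week of first vax or None)
    ("1950-1954", "2021-W45", "2021-W10"),
    ("1950-1954", None,       "2021-W12"),
    ("1950-1954", "2021-W50", None),
]

START = "2021-W24"  # enrollment: "vaxxed" means dose 1 on/before this week

summary = defaultdict(lambda: [0, 0])  # key -> [record count, vaxxed-by-start]
for dob, dod, dov in records:
    row = summary[(dob, dod, dov)]
    row[0] += 1
    row[1] += 1 if (dov is not None and dov <= START) else 0

for key, (n, n_vaxxed) in sorted(summary.items(), key=str):
    print(key, n, n_vaxxed)
```

The same result comes from a one-line groupby/aggregate in a spreadsheet or pandas; the point is that nothing beyond these index and value columns is needed.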
The method in a nutshell is super simple: pick a start date when you define the two cohorts, measure the relative baseline mortality of the two cohorts (ideally right after the start point, which is ideally at the start of a low-COVID period), count deaths in both cohorts over a period of time (the length doesn’t matter), and compare the ratios. You then do an HVE adjustment if the cohorts are near 85 years old.
The constant factor used to set the reference point to 1 is the ratio of the CUMULATIVE mortality in the baseline period.
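The nutshell recipe above can be written out in a few lines. A minimal sketch with synthetic weekly death counts; the function and variable names are mine, and the HVE adjustment step is omitted:

```python
# Sketch of the KCOR ratio: fixed cohorts, cumulative death COUNTS, and a
# baseline ratio used to normalize the reference point to 1.
def kcor(vax_weekly, unvax_weekly, baseline_weeks, end_week):
    """R1(t) = cum vax deaths / cum unvax deaths; KCOR = R1(end)/R1(baseline)."""
    def cum(series, upto):
        return sum(series[:upto])
    r_base = cum(vax_weekly, baseline_weeks) / cum(unvax_weekly, baseline_weeks)
    r_end = cum(vax_weekly, end_week) / cum(unvax_weekly, end_week)
    return r_end / r_base

# Neutral scenario: both cohorts keep their baseline rates -> ratio stays 1
safe = kcor([50] * 52, [200] * 52, baseline_weeks=10, end_week=52)
# Harm scenario: vaxxed deaths rise 20% after the baseline window -> ratio > 1
harm = kcor([50] * 10 + [60] * 42, [200] * 52, baseline_weeks=10, end_week=52)
print(round(safe, 3), round(harm, 3))
```

Note that the absolute sizes of the cohorts never enter the calculation; only the death counts do, which is why the baseline normalization is what anchors the comparison.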
The cutoff date was WELL after most people were vaxxed. Any remaining short term HVE is completely irrelevant as you can see from the curves.
The size of the cohorts is FIXED at the start point. People do not get added or removed. You are either vaxxed or unvaxxed at the start of the study. Then we COUNT the deaths per week in each group.
The percentage of unvaxxed people is completely irrelevant. So yes, some people labelled "unvaxxed" do get vaxxed later, but they are in the "unvaxxed cohort" when they die. The unvaxxed cohort is technically a "mostly unvaxxed" cohort but is absolutely "unvaxxed as of start time."
There is a short term HVE effect, and it does make the vaccinated look slightly worse, but it’s a relatively minor effect because most people were vaccinated well before the start date and by that time, the differential has vanished. You can verify this by setting the baseline using the last few weeks of the baseline period. What matters is the slope of the line over time.
The method is simple and straightforward: divide into two cohorts at a fixed time (ideally right before a no-COVID period), count deaths over time in each cohort, and compare the cumulative death ratio with the baseline cumulative death ratio. Adjust for HVE effects if the cohort is near 85 years old. If the vaccine protected against COVID, the total deaths in the unvaccinated will be higher than in the baseline observation period.
If you look at the end of the timescale, the relative vaccinated death rate is around 10% (comparing the cumulative mortality ratio between April 1, 2024 and the end of June 2024), which means the harms caused by the vaccine are still there, but fortunately at a lower level than in earlier periods.
Cohorts with different mortalities die at different rates and I take that into account. This is precisely why there is the HVE correction. See Grok second conversation for the computation.
So why isn’t anyone else doing it the right way?
Covered fully in the Grok discussion, but here is the answer:
Is there a better way to analyze the data?
I asked Grok and it couldn’t think of a better way to use publicly available raw data to answer the question, “Did the COVID vaccines save lives?”
Has anyone else done the analysis anywhere using date of death and vaccination status?
Nope. Grok said:
Note about the data
The code processes the data without much sanity checking. There are a very small number of data entry coding errors which make it appear that people were vaccinated after they died or were vaccinated before the vaccines were available. There are a small number of these and none of these data entry errors change any outcomes in the method described here.
Slope table reference
Comparison with Cox Proportional Hazards Model
Cox tries to model what is going on. I just measure it.
Professor Jeffrey Morris’s attack
He created a simulation that does not adhere to the method as described and declared the method is “invalid.”
I agree: if you do not follow the algorithm, the results are “invalid.”
If you “use as directed,” you will get extraordinary insights. If you apply your own personal “corrected method” then it will fail to get the correct answer such as Morris did here:
What Morris did is create a completely bogus simulation of death by assuming deaths are split between the cohorts based on % vaccinated at the time of the death. This is just plain silly. Nobody who understands the algorithm would do that.
The algorithm sets cohort size at the start time. Let’s say there are 100 in each cohort. Then say over the next 6 weeks, everyone in the population gets vaccinated. Morris will, in his “simulation”, ascribe all the deaths to the vaxxed group since it is now a 100% vaccinated population. This means the 100 people in the unvaxxed group (who are now fully vaxxed) no longer die. All the deaths are in the original vaxxed group, which is hardly realistic. So all the real-world examples work fine. His contrived example doesn’t, since it is nonsensical if the vaccine was safe.
That’s not how it works. There is a simulation tab on my spreadsheet showing that if nothing is going on, the slope is flat. It’s instructive to play with it to understand exactly how and why the algorithm works!
Henjin’s attacks
Much like Morris, to attack this method you need to use your own personal modifications of the method.
In this case, Henjin, a troublemaker skilled in R, applied the Henjin-Stupidly-Modified KCOR (aka HSM-KCOR) to try to discredit the method.
To which I replied:
I asked him for specific errors in KCOR. Here’s his DM reply:
In the algorithm, both intervention and control cohorts are fixed. The sizes and the people NEVER move between the cohorts.
The two things that change are: 1) the number alive in each cohort changes over time differently, and 2) some of the people in the control group get the shot, which can potentially a) increase the control group’s NCACM mortality (and thus make the vaccine look safer) and b) provide better (or worse) protection during COVID (which basically reduces the observed signal, making it smaller in absolute magnitude).
The number of people alive doesn’t matter. I track only deaths per week. Depletion only becomes a factor for very old cohorts, so you must adjust for it for the most accurate results for cohorts age 80 and above. This effect makes the shot look more harmful than it really is, and it climbs fast over age 90.
So in our case, with an unsafe vaccine and looking at ages 80 and younger, we can completely ignore these effects since they don’t change the result (the COVID vaccine is unsafe).
Fixed-size cohorts die at a nearly constant rate that is slightly increasing over time (unless you are over 86, in which case it starts sloping down). When you take the ratio of two cumulative-death lines from different “effective age” groups, you get a line with a nearly flat slope; e.g., a line with a 6% slope divided by a line with a 4% slope will have less than a 2% slope. Think about it: pick two cohorts dying at drastically different but constant rates, e.g., 3x vs. 2x (which would be the cumulative slopes), plot the ratio, and it’s 1.5, perfectly flat. In reality there’s a slight slope to it because people die at higher rates as they age.
Depletion causes non-linear behaviors that are only noticeable over a 1-year period for those well over 100. Here’s what the raw deaths per week look like at age 100 (a 1M-person fixed-size cohort at t=0). Note you can hardly see the red line; the lines are on top of each other. So it’s a straight line, but at age 100 there is a strong slope.
The second point is counts vs. CMR.
Henjin and Professor Morris don’t want to make a cut date and define the two cohorts at that fixed point in time. They want to make it more complicated and calculate a CMR each week on the number of vaccinated who died vs. the number of unvaccinated who died. This is conventional thinking and it just doesn’t work.
I deliberately use counts and not CMR.
Counts and stable cohorts are guaranteed to give the correct answer (or an answer that can be easily corrected in a post-processing step).
Using vaxxed vs. unvaxxed and computing the CMR of both is a disaster and it can give false readings like inflated vaccine harm.
There are 2 reasons for that:
CMR gives different results depending on the ordering of the deaths (see the Henjin tab of my spreadsheet). This is very minor.
You’ve COMPLETELY lost the value of the baseline ratio because the mortality ratio of the groups has changed: you allowed people to move between groups after measuring their baseline mortality. So you have NO CLUE how to correct for that! Simple example: I walk into a room and segment the room into two cohorts, high death rate and low death rate, and I tell you the death rates of the two groups. Then I take the healthiest people in the sicker group and move them to the healthy group. What are the new mortalities of the groups? No way to know!
If you then allow all the healthy people to get vaccinated, you’ve now changed the mortality rate so your baseline is USELESS. It causes you to get wildly INFLATED harm numbers.
Here’s the graph from Henjin’s method showing even greater harm (a 60% ACM increase over 3 years) than my method.

That’s why I don’t do it.
Bottom line: Henjin and Professor Morris are both wrong. Their “improvement” decimates the accuracy of the method and leads to unrealistic results. It’s because they are shooting from the hip and haven’t thought it through. Are you surprised? You must have a FIXED cohort makeup when you enter the baseline determination period and that cohort cannot change (they can get shots because we know what happens when they do that, but no transfer to the OTHER group).
The full method and limitations
This method is extremely powerful. Given any intervention and any outcome, it can tell you the exact net change in the outcome due to the intervention. In my case, I created this method to win a $1M bet because it would precisely determine the impact the vaccine intervention had on the mortality outcome.
So this is widely applicable and can be used to determine things like:
Do vaccines cause SIDS?
Do vaccines cause autism?
Does the MMR vaccine cause autism?
Do childhood vaccines increase childhood mortality?
Does the xyz vaccine reduce cases/hospitalization/death for xyz?
Does the COVID vax reduce net ACM deaths?
Does the COVID vax reduce net COVID deaths? (this is tricky since the data can be easily gamed so be careful on this one)
So any intervention. Any type of outcome. It’s only as good as the data you give it.
Start date: You pick the start date (e.g., for vaccines, a good point is after 70% have been vaccinated and when vaccination rates have slowed) and the start/end of the observation period. This determines who is in the treatment group (e.g., got the shot on or before the start date) and who is in the control group.
Observation window: Typically 1 year but can be longer or shorter. You typically start when the slope is flat (nothing going on causing differential mortality). For example, for the COVID shots, you’d set your baseline at the end of the no-COVID period after people are vaccinated and the R1(t) line has a flat slope, i.e., the low-COVID period, assuming the vaccine is safe (this vaccine isn’t safe, so the line isn’t flat even during the no-COVID period, which is a huge safety signal).
Result: You compute the ratio R1(end window)/R1(start window), which tells you instantly whether the intervention created a net harm or benefit over the observation period, where R1(t) is the cumulative treatment deaths divided by the cumulative control deaths at time t.
All you need is record level data with 3 fields:
Year of birth (or 5 year range)
Date of the intervention (e.g., day or week of first COVID shot)
Date of the outcome (e.g., day or week of death)
You decide:
What the start date is to determine the cohorts.
Start and end date of the observation period
That’s it. No comorbidity reports, no cause of death, no adjustments for confounders, no model parameters that can be “adjusted.” You just compute a ratio, and you can then get confidence intervals.
The analysis is simple and straightforward and there is no gaming. Just a ratio.
You can slide the observation window forward and backward and change the length to do sensitivity analysis. It’s easy to compute confidence intervals.
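One standard way to compute such a confidence interval is to treat the four cumulative death counts as Poisson, so the log of the ratio-of-ratios has an approximate standard error equal to the square root of the summed reciprocal counts. A sketch under that assumption (the counts are invented, and this ignores the overlap between the baseline and end-of-window counts, so it is only a rough approximation):

```python
# Sketch: approximate 95% CI for KCOR = (Dv_end/Du_end) / (Dv_base/Du_base),
# treating each cumulative death count as an independent Poisson count.
# Counts below are illustrative, not from the Czech data.
import math

def kcor_ci(dv_end, du_end, dv_base, du_base, z=1.96):
    est = (dv_end / du_end) / (dv_base / du_base)
    se_log = math.sqrt(1 / dv_end + 1 / du_end + 1 / dv_base + 1 / du_base)
    return est, est * math.exp(-z * se_log), est * math.exp(z * se_log)

est, lo, hi = kcor_ci(dv_end=3020, du_end=10400, dv_base=500, du_base=2000)
print(f"KCOR={est:.3f}  95% CI=({lo:.3f}, {hi:.3f})")
```

If the interval excludes 1, the differential signal is statistically significant at roughly the 95% level under these assumptions.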
Things to be aware of:
Short-term HVE effect: If present, it will depress R(t) at the start. Generally, this is so tiny you can ignore it. I explain above how to spot it. It wasn’t present at all in the Czech COVID vaccine analysis, and I doubt you’ll ever see it because the start date is chosen well after most people have been vaccinated.
Differential mortality change rate: splitting into vax vs. unvaxxed creates a mortality differential, which may cause outcomes (i.e., deaths in our current case) in one cohort to increase at a slightly higher rate. This is really small and can generally be ignored. It works in different directions depending on the two mortality rates.
The control group getting the intervention (unvaxxed getting vaxxed) is fine. The actual signal will be larger than determined (in either direction). So if you measure a 10% harm, the real number could be an 11% harm; it all depends on how vaxxed your control group got and the timing of the vaccination. I usually ignore this because if there is harm or benefit, it will never change the sign (e.g., it won’t change a harm to a benefit).
There is no need to adjust for seasonality, comorbidities, etc. This is because the cohorts are all observed over the same time period.
There is basically one type of outcome this method as described won’t detect: outcomes that happen immediately or shortly after the intervention. That’s because the method above was designed to determine whether vaccines protect against an external stress (e.g., a COVID wave) that occurs some time after you get the intervention (i.e., the vaccine).
To look for vaccine toxicity, we can use events relative to the intervention for the event counts. So instead of using calendar time to cumulate events each week, we shift the calendar time on a per person basis to the elapsed time from the time of the intervention.
For example, suppose autism happens within 3 days of a vaccine shot.
You define a start time, and everyone vaccinated with the MMR in the last 6 months goes in the vaxxed group. Everyone who was never vaccinated with the MMR before the start date goes in the control group. Now you make the event counters for each cohort relative to the vax date. To eliminate seasonality, a placebo vax date is assigned to each unvaxxed person, matched to a vaxxed person. So if the MMR is causing autism within weeks at a higher-than-normal rate, it will be immediately spotted. It will actually produce a ratio lower than 1 if the vaccine is causing harm (because the baseline-period events are elevated), so you have to be a little intelligent about interpreting the result. Best to always look at the R1(t) curve rather than just relying on the number.
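The event-time shift described above (counting by elapsed time since the real or placebo shot, rather than by calendar week) can be sketched as follows; the person records, week numbering, and window length are hypothetical:

```python
# Sketch: counting outcomes by weeks-since-shot instead of by calendar week,
# for detecting outcomes that occur shortly after the intervention.
def elapsed_week_counts(people, horizon=26):
    """people: list of (shot_week, event_week or None). Unvaxxed people carry
    a matched placebo shot_week. Returns event counts by weeks since shot."""
    counts = [0] * horizon
    for shot_week, event_week in people:
        if event_week is None:
            continue  # no outcome for this person
        dt = event_week - shot_week
        if 0 <= dt < horizon:
            counts[dt] += 1
    return counts

vaxxed  = [(10, 10), (12, 13), (15, None), (20, 22)]
control = [(10, 30), (12, None), (15, 16), (20, None)]  # placebo dates matched
print(elapsed_week_counts(vaxxed)[:4], elapsed_week_counts(control)[:4])
```

Clustering of events in the first few elapsed weeks of the vaxxed cohort, relative to the placebo-aligned control, is the acute-harm signal this variant is meant to surface.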
Summary
I described a new method, KCOR, using just 3 data values (DateOfBirth, DateOfOutcome, and DateOfIntervention) that can determine whether an intervention impacts an outcome, e.g., does vaccination reduce net ACM deaths. You don’t need anything more. You don’t need sex, comorbidities, SES, DCCI, etc. Just the 3 parameters.
The method is simple, does no modeling, has no “tuning parameters,” adjustments, coefficients, etc.
You just load the raw data and run it on the raw record-level data or on a data summary (e.g., just do a groupby on the raw record-level data, as I did in the code above).
All parameters are determined by the data itself, not arbitrarily picked.
It is a universal “lie detector” for data impacts.
Given any input data, it basically will tell you the truth.
It is completely objective, including methods to further refine the answer (e.g., adjusting for the vaccination rate of the control group).
It is also deterministic: given the same data, you’ll get the same result.
So the kind of cheating that is going on now doesn’t work.
This method makes it easy to detect and visualize differential outcome changes (e.g., vaxxed vs. unvaxxed response to the COVID virus) caused by large-scale external interventions that impact an outcome (like death) and are differentially applied to the two cohorts, e.g., a vaccine given to 100% of one cohort and 20% of another cohort.
The method shows instantly that the COVID vaccines are unsafe.
How significant is this method? No other method was able to show a signal like this with crystal clarity. What other algorithm can similarly get the correct answer when fed the same dataset?
When scientists use other methods, they invariably get the wrong answer namely that the COVID vaccines have saved massive numbers of lives.
This method instantly and definitively gets the correct answer in a fraction of a second.
So this is very significant. Had the scientific community used this, we could have saved 10M lives or more worldwide (estimated killed by the COVID vaccines).
Bottom line:
We have a powerful new tool for answering questions of the form: does this intervention cause this outcome?
We now know that the medical community has very serious problems. They’ve been promoting a vaccine that causes net harm and even when notified, they don’t change their behavior.
This should destroy all trust in the CDC, FDA, and NIH because this can be applied to so many drugs and it will reveal the truth that the American people have been lied to; the biggest and most significant lie is that vaccines don’t cause autism.
https://grok.com/share/bGVnYWN5_34a91df2-8397-4daa-896c-5c703b467c75