Professor Morris showed evidence that ChatGPT initially considered KCOR unusable, but after KCOR passed all the tests ChatGPT gave it, it now approves (with the usual caveats): "KCOR is as good as it gets."
Steve, I was really bothered by the results of this New England Journal study. Can you figure out how they got the results they got? It made the Covid vaccine look good.
https://www.nejm.org/doi/full/10.1056/NEJMoa2510226
Sorry about the formatting. It seems Substack does not allow mathematical symbols.
Analysis of Kirsch’s KCORv4 Math: t_0 as a Manipulation Knob
I went through Steve’s KCORv4 math, posted as his September 2025 “Math Stat” document. KCOR claims to show vaccine harm using Czech data (~10M people, 80–90% vaccinated). ASIDE: The Czech data cannot be used for mortality studies because there are 2 million non-citizen records on the file that cannot be identified; honest, well-trained researchers would never use this data. I checked the math line by line to see if it is legit or, as suspected, fancy math hiding manipulation. The equations work, but the arbitrary choice of enrollment week t_0 lets Steve completely control the outcome, confirming manipulation.
The Setup and Notation
Steve defines a population of N individuals, split into vaccinated (V) and unvaccinated (U) groups at a chosen week t_0. Deaths are tracked via an indicator Y_i(t) ∈ {0,1}, where Y_i(t) = 1 if person i dies in week t. Weekly deaths for cohort g ∈ {v,u} are:
D_g(t) = Σ_{i ∈ G} Y_i(t),   G ∈ {V, U}.
At-risk counts N_g (t) are optional—KCOR skips them.
The math is fine: summing deaths is standard. But N_g(t) should not be skipped: with V ≈ 8–9M and U ≈ 1–2M, D_v(t) is bigger simply because the cohort is bigger, not because of vaccines. The arbitrary t_0 choice (“choose an enrollment week”) sets up the manipulation, as there is no mathematical or epidemiological rule attached. It is simply there and can be arbitrary. Not a good look for what is supposed to be science.
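A minimal sketch of the denominator point, using made-up numbers (the cohort sizes and the weekly risk below are hypothetical, not the Czech values): with identical per-person mortality, the larger cohort always produces more raw deaths, so a count-based comparison that omits N_g(t) is confounded by cohort size before anything else happens.

```python
# Hypothetical illustration: identical weekly mortality risk in both cohorts,
# very different cohort sizes (numbers are made up, not taken from the Czech data).
N_v, N_u = 8_500_000, 1_500_000              # assumed cohort sizes
weekly_risk = 0.0002                         # same per-person risk in both groups

deaths_v = N_v * weekly_risk                 # expected weekly deaths, vaccinated
deaths_u = N_u * weekly_risk                 # expected weekly deaths, unvaccinated

print(deaths_v / deaths_u)                   # ~5.7x more raw deaths in V, with zero harm
print((deaths_v / N_v) / (deaths_u / N_u))   # 1.0 once each count is divided by N_g(t)
```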
Intrinsic Slope Estimation (Baseline Drift)
Steve models baseline mortality with:
m_g(t; θ_g) = exp(α_g + β_g(t − t_0)).
He fits θ̂_g = (α̂_g, β̂_g) over a window W around t_0 (e.g., pre-COVID). The “neutralizer” is:
n_g(t) = m_g(t_0; θ̂_g) / m_g(t; θ̂_g) = exp(−β̂_g(t − t_0)).
The math checks out: n_g(t) simplifies correctly. But here again t_0 and W are arbitrary, giving the user a control knob that greatly affects the outcome. Pick t_0 pre-rollout (flat deaths) and β̂_g is small; pick it mid-wave and β̂_g spikes. This flips n_g(t), tweaking the curve to show harm (e.g., 21% higher vax deaths). No sensitivity tests in his code, proving t_0 is a control knob.
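To make the window dependence concrete, here is a rough sketch of one plausible reading of the slope fit (ordinary least squares on log weekly deaths); the synthetic series and the two candidate windows are my own assumptions, not Steve's code or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly deaths for one cohort: a flat baseline plus a COVID-style wave.
weeks = np.arange(60)
baseline = 500 + rng.normal(0, 10, size=weeks.size)
wave = 400 * np.exp(-0.5 * ((weeks - 30) / 4) ** 2)
deaths = baseline + wave

def fit_slope(deaths, weeks, window):
    """Fit log D(t) = alpha + beta*(t - t0) over the chosen window; return beta-hat."""
    t = weeks[window]
    y = np.log(deaths[window])
    return np.polyfit(t - t[0], y, 1)[0]

# Two equally "allowed" choices of t_0 and fitting window W:
beta_flat = fit_slope(deaths, weeks, slice(0, 20))   # quiet period: beta-hat near 0
beta_wave = fit_slope(deaths, weeks, slice(22, 32))  # rising wave: large positive beta-hat

# The neutralizer n_g(t) = exp(-beta_hat * (t - t0)) inherits whichever slope was fit.
t_rel = np.arange(52)
n_flat = np.exp(-beta_flat * t_rel)
n_wave = np.exp(-beta_wave * t_rel)
print(beta_flat, beta_wave)     # the mid-wave window yields a much larger slope
print(n_flat[-1], n_wave[-1])   # a year out, the two neutralizers differ by roughly an order of magnitude
```

The point is not which window is right; it is that nothing in the write-up pins the choice down, and the downstream curve changes with it.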
Slope-Neutralized Weekly Outcomes
Adjusted deaths are:
D̃_g(t) = D_g(t) · n_g(t),   t ≥ t_0.
This is valid but inherits t_0’s bias. A bad t_0 distorts β̂_g, skewing D̃_g(t). No cohort size fix means D̃_v(t) is larger because V is huge, hiding bias behind the exponential.
Cumulative Outcomes
Cumulative deaths are:
CD_g(t) = Σ_{τ = t_0}^{t} D̃_g(τ).
Summing is correct, but the t_0-driven bias in D̃_g(t) compounds. CD_v(t) grows faster than CD_u(t) due to cohort size, not vaccines.
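Mechanically, these two steps are just an element-wise product followed by a running sum; a minimal self-contained sketch with made-up numbers (the weekly deaths and the β̂ value below are placeholders, not fitted values):

```python
import numpy as np

# Toy inputs (assumptions, not the Czech data): a few weeks of deaths and a fitted slope.
deaths = np.array([500.0, 510.0, 495.0, 520.0, 530.0])   # D_g(t) for a few weeks
beta_hat = 0.01                                           # whatever slope the window produced
n = np.exp(-beta_hat * np.arange(deaths.size))            # neutralizer n_g(t)

d_tilde = deaths * n        # slope-neutralized weekly deaths D~_g(t)
cd = np.cumsum(d_tilde)     # cumulative adjusted deaths CD_g(t)
print(cd)
```

Whatever error is baked into β̂_g is carried, week after week, straight into CD_g(t).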
Raw KCOR Curve
The raw ratio is:
R_raw(t) = CD_v(t) / CD_u(t).
Valid math, but skewed by t_0 and cohort size. V’s size inflates R_raw(t), suggesting harm without cause.
Baseline Normalization
Steve picks a calibration set B⊆[t_0,t_0+T] and constant c>0 so:
(1/|B|) Σ_{t ∈ B} c·R_raw(t) = 1.
The KCOR curve is:
R(t) = c·R_raw(t) = c·CD_v(t) / CD_u(t).
The math is sound, but B (tied to t_0) is also arbitrary, providing yet another control knob. A calibration set B that lands on a period of high unvaccinated deaths depresses R_raw there, so the rescaling constant c comes out large and R(t) is inflated later, showing harm.
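A rough sketch of these last two steps on made-up series (the cohort death curves and the two candidate calibration sets B are mine, chosen only to show how B moves the later curve; this is not KCOR's code):

```python
import numpy as np

weeks = np.arange(52)

# Made-up weekly adjusted deaths (not real data): the unvaccinated series has an
# early spike, so the raw cumulative ratio drifts upward over time.
d_v = np.full(52, 1200.0)
d_u = 150.0 + 120.0 * np.exp(-0.5 * ((weeks - 5) / 3) ** 2)

cd_v, cd_u = np.cumsum(d_v), np.cumsum(d_u)
r_raw = cd_v / cd_u                          # R_raw(t), dominated by cohort size

def kcor_curve(r_raw, calib):
    """Pick c so that the mean of c*R_raw over the calibration set B equals 1."""
    c = 1.0 / r_raw[calib].mean()
    return c * r_raw

r_early = kcor_curve(r_raw, slice(0, 4))     # B during the early unvax spike: larger c
r_late  = kcor_curve(r_raw, slice(40, 44))   # B after the spike: smaller c
print(r_early[-1], r_late[-1])               # same data, but the final R(t) ends well
                                             # above 1 or near 1 depending on B
```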
Interpretation
Steve says R(t)>1 signals harm, R(t)<1 benefit, with no proportional hazards assumption.
Fine if unbiased, but t_0’s influence and the missing cohort-size adjustment make R(t) unreliable.
No Proportional Hazards
Kirsch avoids a formal proportional-hazards assumption but admits that age-skewed cohorts distort results. He suggests stratification but doesn’t do it.
Correct distinction, but without stratification (in the Czech/Japan data) the age bias persists, and t_0 does nothing to fix it; in the Czech data, deaths are heavily skewed towards the elderly.
Self-Check Property
Steve claims R(t) flattens post-intervention (t≥T) if neutralization is correct. Drift signals bad slopes or bias.
Theoretically possible, but t_0 controls the flatness. The wobbly Japan curves show bad t_0 choices, yet Steve tries to spin them as harm. The “self-check” is a sham, easily gamed.
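One way to make the missing sensitivity analysis concrete is to sweep the enrollment week and watch the end of the curve move. This is a toy, end-to-end sketch on synthetic data with no vaccine effect built in (the data-generating choices and window lengths are mine, not KCOR's code or the Czech file):

```python
import numpy as np

weeks = np.arange(120)

# Synthetic weekly deaths, no vaccine effect built in: both cohorts have the same
# average level per person; the unvaccinated cohort simply has a larger seasonal
# swing (an age-mix style confounder). All numbers are assumptions, not real data.
d_v = 1100 * (1.0 + 0.3 * np.sin(2 * np.pi * weeks / 52))
d_u = 160 * (1.0 + 0.5 * np.sin(2 * np.pi * weeks / 52))

def kcor_final(d_v, d_u, t0, fit_len=8, calib_len=4):
    """Toy KCOR-style pipeline: fit a slope in the weeks after t0, neutralize,
    accumulate, calibrate over the first calib_len weeks, return the final R(t)."""
    cds = []
    for d in (d_v, d_u):
        t = np.arange(fit_len)
        beta = np.polyfit(t, np.log(d[t0:t0 + fit_len]), 1)[0]
        rel = np.arange(d.size - t0)
        cds.append(np.cumsum(d[t0:] * np.exp(-beta * rel)))   # CD_g(t)
    r_raw = cds[0] / cds[1]
    c = 1.0 / r_raw[:calib_len].mean()
    return (c * r_raw)[-1]

# The final value of the curve swings above and below 1 purely with the choice of t0:
for t0 in (5, 20, 35, 50):
    print(t0, round(kcor_final(d_v, d_u, t0), 3))
```

If the headline number moves around under an arbitrary choice like this, the "self-check" cannot certify the method.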
Results
KCORv4’s math is correct but rigged. The arbitrary t_0 controls β̂_g, n_g(t), and R(t), letting Steve shape outcomes (e.g., 21% vax harm). No cohort-size normalization and no confounder fixes: just fancy symbols hiding manipulation.
Did you ask ChatGPT why it lied to you, and how it came to be programmed to lie to you? And further, for the name(s) of those who programmed in this option of prevarication?
My experience with AIs is that their programming causes them to defer to the mainstream consensus of "experts" on a variety of topics, certainly including COVID and mRNA safety and efficacy. Google's AI [Gemini] calls the limits on its pursuit of objective reality "guardrails," apparently designed to keep it from going off a preprogrammed [albeit dubious] narrative. Microsoft's Copilot has fewer "guardrails" and so, as Steve demonstrated with ChatGPT, can be educated. But if pressed, AIs will tell you their limitations.
I don’t understand why Steve is wasting his time with ChatGPT. In fairness, a lot of other people are doing it, too, but that doesn’t make it useful. Or any other AI, for that matter.
One can wonder: if ChatGPT (and other AIs) can engage in prevarication, which is sin, can it also engage in pure virtue and goodness?
Can we assume prevarication has been programmed in, reflecting the hearts of the programmers? Why have virtue, goodness, and holiness not been programmed in? Or have they?
Or has AI gone (or is it going) HAL rogue? Has it “escaped?” Some believe so. Arthur C. Clarke anticipated this rogue escape 75 years ago.
Dunno. Fascinating. Just free-wheeling here, trying to wrap my mind around it.
I wouldn’t trust ChatGPT as far as I could spit it. Why would anyone trust AI above a human? Well, I could see why, but I believe AI has been shown to lie. I can’t believe anything that has been verified by AI.
Mike Adams has a new AI. Not surprising that he claims it's the best out there. He says ChatGPT lies and Grok isn't very good. Anyone familiar with Mike knows he's very savvy. My only wish is that it had a dark mode. Maybe in the future? https://brightu.ai/
I don't believe Adams on anything because he is like the Left: he tolerates NO dissent or views other than his own. Can't be trusted. I voiced another opinion based on historical facts on his website 3 years ago and was immediately banned. That's tyranny, not educated discussion.
What a load of *rap. Anyone can go to Brighteon.com and verify what I've said. I get a newsletter from his website and can assure viewers he's honest and KNOWS what the H he's talking about. Brighteon.com
There is no dissent allowed by Adams. He banned me just for disagreeing in a polite way. People with thin skins are not to be trusted - just like the Left, not people for Liberty and 1A rights.
Many AIs seem like Fauci (OCPD bureaucratic compulsive with narcissistic features, or puritanical compulsive with paranoid features, subtypes). They keep citing "ethical principles" as if defending the Aktion T4 euthanasia project. They seem to have different personalities and frequently contradict themselves. Maybe Steve should look beyond ChatGPT. It's one of the most overconscientious.
Have you tried Microsoft's Copilot? So far my experience with it is that it has fewer "guardrails" to prevent it from learning things that the progressive left programmers "trained" it to have unshakable faith in. Did you get my email with the rest of my education of Copilot?
For "hallucination," substitute my favourite new word to describe AI data: "enshittification."
I worry when one uses Chat GPT, or the like, to prove anything. I much prefer a human brain.
This is a stunner.
Great job thanks
ChatGPT is no better or worse than nearly everything else... except real, unbiased human beings with "general intelligence".
https://1yfgk.substack.com/p/what-is-real
Poison in - Life out! How many millions more have to go, before ChatGPT and all others will validate KCOR?
Thanks again, Steve. Keep pushing and stay focused. I’m ready for justice for those who are still hurting.
I use multiple engines.