53 Comments

Anders Vinther:

But, Steve, all the other experts say the vaccines saved many lives. So that means your work can’t be correct. (I’m being sarcastic, but that is how some people really argue and think.)

Paul Fischer:

Thank you, Wayne. Yes, I finally received two copies. Spectrum servers are a problem.

Kaylene Emery:

Blessings and appreciation from Sydney, Australia.

Paul Fischer:

PEER REVIEW: Although you are not a PEER

Analysis of Steve’s KCOR Claims on COVID-19 Vaccine Mortality

I have carefully reviewed Steve’s KCOR claim of a 23% mortality increase associated with COVID-19 vaccines, focusing on the 1950-1954 cohort analysis. I replicated the analysis using multiple datasets, including the Otevrena-data-NR-26-30-COVID-19-prehled-populace-2024-01 CZECH DATA FILE.csv (12.6M rows, full Czech Republic population), vax_24.csv, CR_records.csv, and others. I also went over Steve’s Python code (e.g., cfr_by_week.py) and spreadsheets he referenced in KCOR’s Substack posts and on GitHub. My analysis found significant methodological issues that undermine the validity of KCOR’s conclusions. Below, I outline the key findings and concerns.

Replication of the 1950-1954 Cohort Analysis

I processed 672,876 records for the 1950-1954 cohort, covering 577,882 vaccinations (from December 28, 2020) and 56,154 deaths (in year-week format, e.g., ‘2021-41’). Filtering for valid dates (June 7, 2021, to August 29, 2022), I obtained 601,133 rows, with 10,304 vaccinated deaths and 5,935 unvaccinated deaths, totaling 16,239 deaths (matching Steve’s KCOR total). My raw ratio (10,304 / 5,935 = 1.7361) and vaccinated/unvaccinated (v/u) ratio (1.5102, using KCOR’s baseline of 1.1496) differ from Steve’s KCOR reported figures (9,517 vaccinated deaths, 6,722 unvaccinated, ratio 1.4158, v/u 1.2314). These discrepancies prompted further investigation into the methodology.

My 1950-1954 cohort (left) versus Steve’s KCOR figures (right), both from the Czech Republic data in his Substack post. “ratio” is cumulative vaccinated deaths / cumulative unvaccinated deaths; “v/u” is that ratio divided by its value on the baseline date (August 30, 2021).

date        cumvax  cumuvax  ratio    v/u      |  date        cumvax  cumuvax  ratio    v/u
2021-06-07      92      111  0.82883  0.66844  |  6/7/2021        92      111  0.82883  0.72087
2021-06-14     189      221  0.85520  0.68971  |  6/14/2021      189      221  0.85520  0.74381
2021-06-21     289      330  0.87576  0.70629  |  6/21/2021      288      331  0.87009  0.75676
2021-06-28     388      436  0.88990  0.71770  |  6/28/2021      386      438  0.88128  0.76649
2021-07-05     482      533  0.90431  0.72932  |  7/5/2021       480      535  0.89720  0.78033
2021-07-12     577      607  0.95057  0.76663  |  7/12/2021      570      614  0.92834  0.80740
2021-07-19     689      712  0.96769  0.78044  |  7/19/2021      678      723  0.93776  0.81561
2021-07-26     808      800  1.01000  0.81456  |  7/26/2021      795      813  0.97786  0.85049
2021-08-02     947      882  1.07369  0.86593  |  8/2/2021       930      899  1.03448  0.89973
2021-08-09    1084      976  1.11065  0.89573  |  8/8/2021      1060     1000  1.06000  0.92193
2021-08-16    1232     1053  1.16999  0.94359  |  8/16/2021     1199     1086  1.10405  0.96024
2021-08-23    1363     1120  1.21694  0.96147  |  8/23/2021     1318     1165  1.13133  0.96397
2021-08-30    1478     1192  1.23993  1.00000  |  8/30/2021     1428     1242  1.14976  1.00000
… (intervening weeks omitted) …
2022-07-18    9313     5541  1.68074  1.35551  |  7/18/2022     8608     6246  1.37816  1.19865
2022-07-25    9489     5613  1.69053  1.36341  |  7/25/2022     8770     6332  1.38503  1.20462
2022-08-01    9668     5666  1.70631  1.37613  |  8/1/2022      8940     6394  1.39819  1.21606
2022-08-08    9845     5757  1.71009  1.37918  |  8/8/2022      9093     6509  1.39699  1.21502
2022-08-15    9991     5816  1.71784  1.38543  |  8/15/2022     9225     6582  1.40155  1.21899
2022-08-22   10149     5878  1.72660  1.39250  |  8/22/2022     9368     6659  1.40682  1.22357
2022-08-29   10304     5935  1.73614  1.40018  |  8/29/2022     9517     6722  1.41580  1.23138

These are the methodological problems I found:

1. Normalization Approach

Steve’s KCOR normalization method adjusts only the denominator of the v/u raw death ratio by a baseline ratio (e.g., 1.1496). This approach is mathematically absurd and lacks justification in standard statistical or epidemiological practice. Adjusting only one term of a ratio distorts results, exaggerating vaccine-related effects. The correct approach is to normalize both numerator and denominator so that population differences are accounted for consistently.

Steve’s formula for normalization:

KCOR = raw_v / (raw_uv × [raw_v(t0) / raw_uv(t0)])

where raw_v and raw_uv are the cumulative vaccinated and unvaccinated deaths, and t0 is the baseline date, so the bracketed factor is the baseline ratio (e.g., 1.1496).

This is not how normalization is done. This is like saying "I want to normalize the fraction 10/5, so I'll multiply just the denominator by 3 to get 10/15" - you're no longer measuring the same thing at all.
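To make the distortion concrete, here is a minimal Python sketch; the death counts and the 1.1496 baseline are the figures quoted above, while the cohort population sizes are hypothetical placeholders for illustration only:

# Denominator-only "normalization" vs. a symmetric rate normalization.
v_deaths, uv_deaths = 10304, 5935                       # cumulative deaths (my replication)

kcor_style = v_deaths / (uv_deaths * 1.1496)            # scales only one term of the ratio
print(round(kcor_style, 4))                             # 1.5102, the v/u figure quoted earlier

v_pop, uv_pop = 480_000, 120_000                        # HYPOTHETICAL cohort sizes
rate_ratio = (v_deaths / v_pop) / (uv_deaths / uv_pop)  # both terms put on a per-capita scale
print(round(rate_ratio, 4))                             # ~0.434 with these made-up populations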

2. Baseline Period Selection

KCOR’s baseline date of August 30, 2021, is described as a “low or no COVID” period, but this is inaccurate. The Czech Republic reported approximately 1.6 million COVID-19 cases by this time, indicating significant disease activity. The choice of this date appears to influence the denominator in a way that affects the v/u ratio, potentially skewing results.

3. Lack of Population Normalization

KCOR’s raw death ratios do not account for differences in population size between vaccinated and unvaccinated groups. With 577,882 vaccinations, the vaccinated group likely represents over 80% of the 1950-1954 cohort, so higher death counts are expected. Without normalizing for population size, raw ratios are misleading. For example, consider a large city (1M population, 700,000 cars, 10,000 accidents) versus a small town (10,000 population, 7,000 cars, 100 accidents). The raw accident ratio (10,000/100 = 100) suggests a large difference, but the normalized rate (0.01/0.01 = 1) shows equivalence. Normalization is critical for valid comparisons and is standard practice for epidemiologists and demographers.
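The same arithmetic in a few lines of Python, with the numbers taken straight from the example above:

# Raw counts vs. per-capita rates for the city/town example.
city_accidents, city_pop = 10_000, 1_000_000
town_accidents, town_pop = 100, 10_000

raw_ratio = city_accidents / town_accidents                             # 100.0: looks dramatic
rate_ratio = (city_accidents / city_pop) / (town_accidents / town_pop)  # 1.0: identical risk
print(raw_ratio, rate_ratio)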

4. Inappropriate Use of Inferential Statistics

Steve applies confidence intervals to the entire Czech population dataset (16,239 deaths). This is a glaring example of statistical illiteracy. Inferential statistics are used to estimate population parameters from samples and to test hypotheses about those estimates, but the Czech dataset represents the full population, so no inference is needed. Statistical inference exists precisely because we don’t have access to the full population and need to estimate population parameters from sample statistics.

5. Static Cohort Definition

KCOR fixes vaccinated/unvaccinated status at June 14, 2021, ignoring subsequent vaccinations. This misclassifies individuals who were vaccinated later as unvaccinated, skewing ratios (e.g., 1.7361 vs. 1.4158). My analysis used dynamic tracking of 577,882 vaccinations starting December 28, 2020, to address this issue.
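Here is a minimal sketch of the difference between static and dynamic classification; the column names are hypothetical, not the actual Czech file schema:

import pandas as pd

# A death counts as "vaccinated" only if the first dose preceded the death date.
# Under a static June 14, 2021 cutoff, person 3 (dosed in July, died in October)
# would be misclassified as an unvaccinated death.
df = pd.DataFrame({
    "first_dose": pd.to_datetime(["2021-03-01", None, "2021-07-20"]),
    "death_date": pd.to_datetime(["2021-10-04", "2021-11-22", "2021-10-15"]),
})
static_cutoff = pd.Timestamp("2021-06-14")
static_vaxed = df["first_dose"].notna() & (df["first_dose"] <= static_cutoff)
dynamic_vaxed = df["first_dose"].notna() & (df["first_dose"] <= df["death_date"])
print(static_vaxed.tolist())   # [True, False, False]
print(dynamic_vaxed.tolist())  # [True, False, True]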

6. Year-Week Data Limitations

Using year-week data (e.g., ‘2021-41’) sacrifices daily precision, particularly around the baseline date (August 30, 2021). My counts (10,304 vaccinated deaths, 8.2% above KCOR’s 9,517; 5,935 unvaccinated, 11.7% below 6,722) suggest differences in week mappings, which may distort the vaccinated/unvaccinated split.
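For reference, a year-week label pins a death down only to a seven-day window, as this standard-library Python check shows:

from datetime import date

# The ISO week containing the August 30, 2021 baseline spans a full 7 days,
# so a death in that week could fall on either side of the baseline date.
monday = date.fromisocalendar(2021, 35, 1)
sunday = date.fromisocalendar(2021, 35, 7)
print(monday, sunday)  # 2021-08-30 2021-09-05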

7. Unadjusted Confounders

KCOR’s metrics do not account for confounders such as age, health status, or seasonality, which are critical in cohort studies. There’s no adjustment for Simpson’s paradox, where vaccinated and unvaccinated populations often have very different age structures, health statuses, and risk profiles. Additionally, the claim of methodological “novelty” is overstated, as cohort studies have been standard since at least the 1940s.

While KCOR’s dataset aligns with mine (16,239 deaths), the methodological flaws (a demonstrably incorrect normalization, inappropriate use of inferential statistics, a static cohort definition, and unadjusted confounders) undermine the claim of a 23% vaccine-related mortality increase.

WayneBGood:

Hi, Professor Fischer, I sent you an email.

Paul Fischer:

Sorry, I have no such email.

WayneBGood:

I sent it again with the subject beginning "Invitation" from Wayne. Please check your spam folder.

Paul Fischer:

Sorry WayneBGood, I have no email from you. If this is about Steve, he already has my email address.

Regardless, I am not interested in debating Steve about KCOR or anything else. I already replicated his KCOR work, explained exactly what was wrong, and in return he blocked me. I’ve also had the experience of sharing analysis with him in the past, only to see a similar version reposted without acknowledgment. I would have liked to work with Steve, but he has turned this into a childish confrontation. For that reason, I’m not providing him with any further work.

The larger point is the Czech datasets. I’ve spent months analyzing both the new and the old versions, and they are unsuitable for any type of mortality analysis. Treating them as if they represent the Czech population is a fundamental mistake.

Jasmin Cardinal:

Paul, contact me on here by direct message if you want me to take some time to explain what you don’t understand so you stop wasting everyone’s time. Your objections were just shrugged off at the time because they were bad.

Paul Fischer:

My analysis is meant for Steve. He’s well aware of the issues plaguing this data and his analysis, yet I’d bet my last dollar he won’t lift a finger to address them. The rest of this Substack crew seems utterly clueless about what constitutes valid research. With 45 years of research experience, much of it at the University of Wisconsin, I’ve been involved with quality research for decades. These days, though, the scientific community is bursting at the seams with statistical illiterates, and it’s a disgrace. Take the Czech dataset everyone’s so infatuated with…it’s a train wreck for mortality studies. It’s an insurance file riddled with bad records, nearly 2 million of them, hopelessly tangled with legitimate data. These include individuals who aren’t even Czech citizens but accessed the healthcare system, rendering the dataset a mess. Yet Steve, along with the other clowns parroting its use, hasn’t bothered to validate a single byte of it. Pathetic doesn’t even begin to cover it.

Paul Fischer:

These aren’t objections, they’re facts. KCOR is garbage. I replicated Steve’s work line for line — I know exactly what he did. If the truth bothers you, that’s your problem. Don’t read it.

WayneBGood:

I understand you would have liked to work with Mr. Kirsch based on what I've read of your posts and I also would like to see that happen. Sometimes misunderstandings can occur due to communicating only by internet posts.

I'd like to get you two into communication for a civil discourse, and I'm sure any misconceptions and mistakes about citing can be ironed out quickly. I was using the email you have for your Substack subscription, which is not being returned as invalid, but I'll send again and add another one.

Steve Kirsch:

Paul,

You misread my post.

I asked for comments on the CURRENT method in the GitHub repo.

It's 100% reproducible. Just type "make". Did you try that?

Is there a bug in the code you found?

Perhaps you can explain the correct way to normalize the death slopes of the fixed cohorts?

And the proper way to set the baseline.

And the proper way to compare hazards between the cohorts?

Does the Paul Fischer method produce a smaller differential signal on the negative control cases?

Why do the DS-CMRR method and the GLM method show the same thing?

What are the numbers when the analysis is done correctly?

Where can people find your analysis published with the sensitivity tests, negative control tests, and 3 or more validation tests using different methods?

Jasmin Cardinal:

None of your objections are valid; you don't understand KCOR.

henjin:

I think it's a valid objection that Kirsch should've looked at deaths relative to population size and not raw deaths. It's the same thing Harvey Risch and I told Kirsch. And he now adjusts for population size in KCORv4, but Fischer's comment was about the original version of KCOR.

Kirsch's initial KCOR analysis included only people born in 1950-1954 so the cohorts were matched by age. But the unvaccinated cohort still had a much higher mortality rate than the vaccinated cohort, so the unvaccinated cohort depleted faster, which Kirsch didn't take into account.

The slope adjustment he added in KCORv3 accounted for the different rate of depletion in a hacky way. In KCORv4 he now added a less hacky method of adjustment for population size and age, so I think the slope adjustment is now redundant, and I think Kirsch could simplify the KCOR formula if he omitted the slope adjustment altogether.

But anyway, if you want to do a KCOR-style analysis that is adjusted for population size and age, I think it would be easier to just use a standard GLM regression model, like in my code below where I did this Poisson regression: `glm(dead~dose*week+factor(born),poisson,a,offset=log(pop))`. The term `dose*week` is an interaction between the `dose` and `week` variables, where `dose` is 0 for unvaccinated and 1 for vaccinated. The `week` variable contains the number of the observation week, where the first level of the variable is a baseline level that consists of the sum of all weeks in the baseline period. The argument `offset=log(pop)` uses the person-weeks as an offset variable, which is similar to doing the regression on mortality rates instead of the raw number of deaths. The term `factor(born)` adjusts the regression for the age group as a categorical variable:

library(data.table)

# Download the bucketed deaths/person-weeks data (by dose, birth year, observation week).
t=fread("curl -Ls sars2.net/f/nzipbuckets.csv.xz|xz -dc")

# Map observation-week numbers to ISO year-week labels.
iso=as.Date("2020-3-5")+0:239*7;names(iso)=format(iso,"%G-%V")
t[,week:=names(iso)[obsweek]]

# Collapse to vaccinated (dose >= 1) vs. unvaccinated, clamping birth years to 1920-2000.
a=t[,.(dead=sum(dead),pop=sum(pop)),.(dose=pmin(dose,1),born=pmax(pmin(born,2000),1920),week)]
a=a[week>="2021-24"&week<="2024-26"]

# Pool all weeks up to 2021-34 into a single "base" level and make it the reference.
a=rbind(a[week<="2021-34",.(dead=sum(dead),pop=sum(pop),week="base"),.(dose,born)],a)
a[,week:=relevel(factor(week),"base")]

# Poisson GLM: dose-by-week interaction, birth year as a categorical, person-weeks as offset.
fit=glm(dead~dose*week+factor(born),poisson,a,offset=log(pop))

# Pull out the dose:week interaction terms and form approximate 95% intervals
# around the exponentiated estimates.
p=CJ(week=levels(a$week)[-1])
i=match(paste0("dose:week",p$week),names(coef(fit)))
est=exp(coef(fit)[i])
se=sqrt(diag(vcov(fit)))[i]
cbind(p,`colnames<-`(est+outer(qnorm(.975)*se,-1:1),c("lo","y","hi")))

I posted a plot of the results here: https://x.com/henjin256/status/1963328608530837507.

Steve Kirsch:

Kindly tell us the proper way to adjust the slopes that is superior to the hacky way?

Kindly explain the proper way to cumulate hazards?

Kindly explain the proper way to establish baseline mortality for the cohorts.

henjin:

1. I think in KCORv4 if you do the calculation separately for each age group, and you adjust for population size within the age groups, it takes care of what you tried to do with the slope adjustment in KCORv3. And my Poisson GLM is another proper way to do the adjustment.

2. A few days ago I sent you code on DM for a version of my GLM regression that uses cumulative data and fixed cohorts.

3. I think a more appropriate baseline would be 2023 or 2024, because by then the short-term HVE is not as strong as in mid-2021.

Steve Kirsch:

So if the vaccine increases your ACM by 20% and you then set the baseline to 2024, how is that the correct baseline? You just let the vaccine kill lots of people and you're promoting it as safe.

I already replicated your code in Python and ran it. It's posted in my GitHub. That shows a harm signal unless you believe that vaccines lower NCACM. How else can you interpret your GLM results?

Dawn Pier:

Also, please watch Dr. Scott's opening statement and help us refute his many inaccurate statements: https://thehighwire.com/watch/

Dawn Pier:

Hi Steve, have you seen the unpublished study by Dr. Marcus Zervos of Henry Ford institute comparing vaccinated to unvaccinated cohorts for health status? Zervos and his coauthors refused to publish the study because it was so clear that the unvaccinated cohort was much healthier than the vaccinated cohort. They did the study assuming that the results would be very different, and then refused to publish it because they knew they would lose their jobs.

Steve Kirsch:

See my next article on the Senate hearing, already published.

Purehearted TruthSeeker:

Steve, I don't know if this will help, and I cannot figure out how to post or upload to the GitHub shared files. I wanted to see if any of the data scientists could use this multi-theory, forward-looking regression-analysis approach, trying a tree model but tweaking it for your purpose of looking back. I have the whole white paper if need be, and I know the scientists who wrote it; they are on LinkedIn with more examples. (If you send me an email, or give me a way to upload this, I can do that, or I can connect you with the lead author, who just started a new health-diagnosis prediction company based on blood tests!)

"Many regression problems cannot adequately be solved by a single regression model. Decision tree algorithms provide the basis for recursively partitioning the data, and fitting local models in the leaves. The CART decision tree algorithm [2] selects the split variable and split value that minimizes the standard deviations of the target values in the two subsets. Identically, the CART algorithm finds a split that minimizes the Residual Sum of Squares (RSS) of the model when predicting a constant value (the mean) for each of the subsets. In contrast to Linear Regression Trees, the CART regression tree algorithm uses the mean for the target variable as the prediction for each of the subsets. Therefore, the splitting criterion is appropriate for the structure of the final model. Multiple authors have proposed methods to improve upon the accuracy of regression trees by generating models in the leaf nodes of the tree rather than to simply predict the mean. M5 [11], HTL [12], SECRET [4], incremental learning [10], SUPPORT [3], GUIDE [7], PHDRT [6] and SMOTI [8] are some of the model tree algorithms that have been proposed. They differ from each other mainly in their criteria for splitting the data, based on scalability and what their experiments show are likely to generate the most accurate overall model. Scalable decision tree algorithms evaluate (and commit to) individual attributes; this approach ignores dependencies between variables The motivation behind LLRT is that out of all the methods proposed to date, there has been no scalable approach to exhaustively evaluate all possible models in the leaf nodes in order to obtain a globally optimal split. Using several optimizations, LLRT is able to generate and evaluate thousands of linear regression models per second in order to perform a near-exhaustive evaluation of the set of possible model trees (1 level deep) for a given node. While this algorithm is slower than many other model tree algorithms, LLRT is sufficiently scalable to process very large data sets. Since it is less greedy, we observe it to obtain higher predictive accuracy for problems with strong mutual dependencies between attributes. The rest of this paper is structured as followed. We review related work in Section 2. We introduce terminology in Section 3 and present the LLRT in Section 4. Section 5 discusses experimental results, Section 6 concludes. 2. RELATED WORK Quinlan [11] presents the M5 algorithm based on the splitting algorithm in CART. The difference between M5 and CART is that M5 gains increased accuracy by fitting a linear regression model in each of the leaves of the tree, instead of using the mean. The evaluation criterion of CART is based on the variance of the target attribute in the nodes; this criterion is most appropriate for constant predictions in the leaves"...."4. MODEL TREE ALGORITHM

The first part of the description discusses the calculation of SSE for regression models in a k-fold validation setting so that models may be evaluated quickly. Normally SSE is calculated by first applying the model and then summing up the squares of the residuals. This requires an O(PN) operation. We will derive an O(P²) operation that accomplishes the same calculation without scanning the data set. The second part of the description explains how the SSE calculations are applied within the algorithm to be able to optimize (2) in the splitting process.

4.1 RSS Calculations

The linear regression equation within a given leaf of the model tree is $\hat{Y} = Xc$, where $c \in \mathbb{R}^{(P+1) \times 1}$ is the regression coefficient vector. In matrix notation, the RSS can be written as $\mathrm{RSS} = \|Xc - Y\|^2$. The regression coefficient vector has to be estimated such that it minimizes this RSS: $c^* = \arg\min_c \|Xc - Y\|^2$. In (3), we rephrase the RSS using elementary matrix algebra:

$$\mathrm{RSS} = \|Xc - Y\|^2 = (Xc - Y)^T (Xc - Y) = c^T X^T X c - Y^T X c - c^T X^T Y + Y^T Y = c^T R c - 2 c^T b + Y^T Y, \tag{3}$$

where $R = X^T X$ and $b = X^T Y$.

One can minimize the RSS by setting the derivative

$$\frac{\partial \mathrm{RSS}}{\partial c} = \frac{\partial}{\partial c}\left(c^T X^T X c - 2 c^T X^T Y + Y^T Y\right) = 2 X^T X c - 2 X^T Y$$

to zero, which gives $Rc = b$. Traditionally in regression models $c$ is solved by using Singular Value Decomposition to obtain the inverse of $R$:

$$c = R^{-1} b = (X^T X)^{-1} X^T Y. \tag{4}$$

Normally the execution time for this part of the modeling process is insignificant. However, in the LLRT algorithm, there will be millions of mini-regression models that need to be created. Therefore it is imperative that a significant portion of the calculations is carried out with a more efficient algorithm. There are quicker alternatives to Singular Value Decomposition, such as G.E. (Gaussian Elimination) or Cholesky Decomposition if the matrix R is symmetric positive definite. We use G.E. with full pivoting. One problem that needs to be addressed when using G.E. is that it will not work with singular matrices. For use of G.E. in regression problems, we use an implementation that carefully detects and removes singularities from the matrix R and sets the corresponding coefficients to zero. Using G.E., the full inverse of R need not be calculated. G.E. can solve directly for the coefficient matrix c.

The key to avoiding over-fitting in a model so complex as a linear regression tree is to incorporate sampling methods. In this case, we use k-fold validation because it works well within the given formulation. We define X and Y as above and separate the N examples of the data within each tree node into S partitions.

For these partitions we define $X_1 \dots X_S$ and $Y_1 \dots Y_S$. Now, the regression coefficient vector corresponding to a partition $k$ is defined by

$$c_k = \arg\min_c \|(X - X_k)c - (Y - Y_k)\|^2,$$

where $X - X_k$ contains all examples not included in the $k$th partition. This way, the model for a given partition is out-of-sample for that partition. The RSS of $c_k$ regarding partition $k$ is

$$\mathrm{RSS}_k = \|X_k c_k - Y_k\|^2 = c_k^T R_k c_k - 2 c_k^T b_k + Y_k^T Y_k,$$

where $R_k = X_k^T X_k$ and $b_k = X_k^T Y_k$. The total RSS is now given as

$$\mathrm{RSS} = \sum_{k=1}^{S} \mathrm{RSS}_k = \sum_{k=1}^{S} \left(c_k^T R_k c_k - 2 c_k^T b_k + Y_k^T Y_k\right). \tag{5}$$"
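Stripped of the optimizations, the computation in (3)-(5) is just the normal equations solved once per fold and scored out-of-fold. A minimal Python sketch with synthetic data (NumPy's solver standing in for the paper's Gaussian elimination):

import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(120), rng.normal(size=(120, 3))])  # intercept + 3 features
Y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=120)

S = 5                                    # number of k-fold partitions
folds = np.array_split(np.arange(120), S)
total_rss = 0.0
for k in range(S):
    train = np.setdiff1d(np.arange(120), folds[k])
    R = X[train].T @ X[train]            # R = X^T X
    b = X[train].T @ Y[train]            # b = X^T Y
    c = np.linalg.solve(R, b)            # solve Rc = b, as in (4)
    resid = X[folds[k]] @ c - Y[folds[k]]
    total_rss += resid @ resid           # RSS_k = ||X_k c_k - Y_k||^2, summed as in (5)
print(total_rss)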

DaveHawkins:

Hi Steve

I did some graph / correlation work on UK data up to October 2022

It's at:

https://davehawkins-shiny.shinyapps.io/JabsCasesDeaths/

For many age groups at the most granular level of data available, there does appear to be a good correlation between the #3 booster and deaths reported in the same period

(I only have limited time allowed per month on my free Shinyapps account, sorry)

Useless Liberal:

They did everything they could to destroy his presidency and his country.

Trump understands that.

Mostly.

He’s starting to recognize that they used Covid to destroy his presidency and his country.

Expose that.

Expose that, and he’ll do the rest for you.

David O'Halloran:

Excellent - thanks

Nat:

Ask Nick Hulscher. I'm sure he would love to support you on this!

David Pare:

I liked the readme too.

Do you have vax-death data on the flu vax? I remember you had something from Medicare, but it was a while back. The data with the declining mortality, with that initial pop on week 1.

Having a base case to test it against might help persuade people.

Santiago Miller:

Steve... the graph in the results section shows what seems to be a plateau of damage by 2024, which seems different from other graphs I have seen where there is a steady increase in harm amongst the vaccinated.

see:

https://i0.wp.com/theethicalskeptic.com/wp-content/uploads/2025/08/Vaccinal-Generation-Excess-Mortality-8.png?ssl=1

Am I misunderstanding something about the graph or is this perhaps a hopeful interpretation where there is at least some deceleration in sight?

I'll admit the growing amount of damage makes more sense as it goes along with what you would expect from accumulation of spike protein in tissues...

henjin:

Ethical Skeptic's plot with 77% excess deaths in ages 0-4 is fake: https://sars2.net/ethical3.html#Excess_deaths_from_natural_causes_in_ages_0_5.

I reverse-engineered his baseline by digitizing the weekly excess deaths in his plot, and then subtracting the excess deaths from the actual deaths.

The slope of his baseline roughly matched the pre-COVID trend between 2018 and 2019, but for some reason the baseline took an extremely sharp turn downwards between 2019 and 2020, so that each year the baseline dropped further below the real pre-COVID trend.

His baseline number of deaths dropped from an average of about 428 deaths per week in 2018 to about 278 deaths per week in 2024, so he assumed that the number of deaths would've dropped by about 35% between 2019 and 2024. In reality, the weekly average number of deaths dropped from about 422 in 2018 to 393 in 2024, so the deaths dropped by only about 7%. Therefore ES gets about 41% excess deaths in 2024 (from `(393/278-1)*100`).
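That arithmetic in one line of Python, using the weekly averages quoted above:

actual_2024, implied_baseline_2024 = 393, 278  # average weekly deaths vs. ES's implied baseline
print(round((actual_2024 / implied_baseline_2024 - 1) * 100, 1))  # 41.4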

Several people have requested ES to document how he calculated the baseline, but he has repeatedly refused to document his methodology.

About 89% of deaths in his plot are deaths at age 0, so his plot essentially shows infant mortality. When I fitted an exponential curve to the rate of infant deaths per live birth from 1968 to 2019, I got about -4% excess deaths in 2020, -4% in 2021, 0% in 2022, 0% in 2023, and 0% in 2024. So there was a slight uptick in deaths between 2021 and 2022, but it may have been because there was an unusually low number of deaths in 2021.
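A sketch of that trend-fit approach in Python; the rates below are synthetic placeholders, not the real infant-mortality series:

import numpy as np

# Fit an exponential (log-linear) trend to pre-COVID rates, extrapolate it,
# and measure excess as (actual / trend - 1). All rates here are HYPOTHETICAL.
years = np.arange(1968, 2020)
rates = 0.02 * np.exp(-0.03 * (years - 1968))          # made-up declining rates
slope, intercept = np.polyfit(years, np.log(rates), 1)
trend_2024 = np.exp(intercept + slope * 2024)
actual_2024 = 0.0038                                   # made-up observed rate
print(f"{(actual_2024 / trend_2024 - 1) * 100:+.1f}% vs trend")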

Barbara Charis:

What is the Science behind Vaccines? We know they do a great job at creating more diseases for the Medical Industry to treat! We know they have killed millions providing work for undertakers. We are learning that they contain deadly substances, which are harmful to the human body...What else should we know about the Science of Vaccines? Oh, they are created in labs by 'mad scientists' paid by vaccine manufacturers looking for profitable products to sell.

ILoveLiberty:

Try Sherri Tenpenny and Thomas Renz.

Davide Suraci:

Hi Steve, if you tell me exactly the text you want to have validated, I can ask ChatGPT the right questions to put your algorithm under cross-examination.
