My data extraction and summary were spot-on, but ChatGPT analyzed the data improperly. Here are the issues I encountered, to save you some headaches if you want to use it to analyze data.
OK, I was able to complete the analysis with ChatGPT just now. Going to bed. You'll see the full transcript tomorrow and YOU ARE GOING TO LOVE IT.
Since ChatGPT doesn't have a central nervous system, it can't experience ASMR (autonomous sensory meridian response), so no wonder it messed up.
Stay away from Paxlovid
https://petermcculloughmd.substack.com/p/emerging-sars-cov-2-resistance-after
Maybe I should admire people like Steve Kirsch, who is apparently intelligent enough to wrestle with AI tools like ChatGPT and get some truth out of them, but I will *never* trust public-access AI to tell me the truth. Never. "Garbage in, garbage out," and far too much garbage is fed into these systems.
Here is a story that deserves a bit more traction:
Swiss Re, one of the world's biggest insurance companies, predicts that excess mortality will subside by 2033. In a report published yesterday, it worries about the possible loss of life-insurance profit because of this. Reading between the lines, they also know full well that the jabs are the cause.
"We find that excess mortality persists today and may potentially continue for the next decade. Our general population forecasts suggest that excess mortality will gradually tail off by 2033. …
The pandemic has significantly altered the causes of excess deaths. We analysed the evolution of major causes of death from 2020 in developed countries that report such data. Respiratory mortality accounts for the largest share of excess deaths each year since 2020, as expected. However, we find evidence of inconsistency in the causes of death recorded over this period, with signs that other causes of death were misclassified as COVID-19. The UK and US data shows a large, unexplained jump in deaths attributed to cardiovascular disease (CVD) since 2020. Some countries also reported excess mortality over a pre-pandemic baseline for other major causes of death, such as cancer.
Excess mortality that continues to exceed current expectations may affect the long-term performance of in-force life portfolios as well as the pricing of new life policies."
https://www.swissre.com/institute/research/topics-and-risk-dialogues/health-and-longevity/covid-19-pandemic-synonymous-excess-mortality.html
I HATE AI and jabs. To think the former is being consulted in regard to the latter leaves me stunned...
Guess what happened when Steve Kirsch reached out to me on X months ago, and then recently by email on Substack, about the Scottish COVID inquiry revelations, and I replied both times? Absolutely nothing. The world's ONLY official COVID inquiry to document in detail the catastrophic harms from policies, not any novel virus, appears to be a no-go area for the prominent 'medical freedom' community. Far out!
https://pandauncut.substack.com/p/hardship-and-heartache-told-at-the/comment/70176321
Because you refused to hop on a call with me to discuss it.
AI. Artificial Intelligence. Robotic programmers. What could go wrong?
Enough said...
Steve, ChatGPT is not a data analysis program. It is a plausible-lie development system. If it produced a plausible lie, it was working properly. You were attempting to use it for something it is not designed to do.
It works fine for data analysis if you guide it.
Dear Dr. Kirsch,
At the completion of your project, would you be willing to use the same methods to examine the safety of scheduled vaccines? For example, the MMR in the United States? Please and thank you in advance.
They don't provide the data needed for such an analysis. I wonder why? Hmm...
AI can only exist with continuous human input; otherwise it begins to homogenize everything and becomes completely nonsensical. AI is a scam waiting to crash our entire system and everything it's connected to. AI is linear; humans are holographic. That's why we can pull information from many cues at once. AI is dependent on us. A relief, and also very dangerous. When will we finally value humanity and the things that support us, instead of chasing the next deadly grifter's scam?
Check its ability to do civil and criminal legal analysis concerning the liabilities of perps like Fauci, the FDA, CDC, the garbage mass media, etc.
GPT feels like a midwit high-school student who is compelled to lie authoritatively rather than admit it doesn't know. Keeping it on a leash is the trick. How do you train a model so it lies? And why would you do this? That's my (unrelated) question.
May God give us the power to stop them. It is clear that many are doing this knowingly.
One major reason they create doom scenarios around AI and want to enforce tight regulation is that they want to be able to control its replies to queries about issues the deep state has an interest in. There are other legitimate concerns as well, but in my opinion they take a back seat in the eyes of the deep state.
Thank goodness you’re a misinformation superspreader. I think that means you’re free.