It looks like 'household' is intended to be the sample unit for the second round of questions. A bit of numeric data might be handy: replace the size classes with 'how many people are in your household?', then cross-check the answers against the roughly 120,000,000 US households. The classes are pretty tight, but the 8+ bucket could be anything, e.g. people interpreting it as 'people for whom I claim I would know if they were vaccine injured or dead'. A numeric answer would sort that out.
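To make the cross-check concrete, here is a minimal sketch of what I mean. All the response data is made up for illustration; the only real figure is the ~120,000,000 US households, and the naive scaling deliberately ignores selection bias, so it is a sanity check rather than an estimate.

```python
# Hedged sketch: scaling numeric household-size answers up to the
# national level as a cross-check. Response data below is invented.

US_HOUSEHOLDS = 120_000_000  # approximate number of US households

# Hypothetical answers to a numeric "how many people in your household?"
# question, paired with reported vaccine injuries in that household.
responses = [
    {"household_size": 3, "injured": 0},
    {"household_size": 5, "injured": 1},
    {"household_size": 2, "injured": 0},
    {"household_size": 8, "injured": 1},
]

avg_household_size = sum(r["household_size"] for r in responses) / len(responses)
injuries_per_household = sum(r["injured"] for r in responses) / len(responses)

# Naive extrapolation: if respondents were representative (they aren't,
# which is the point of the check), the implied national figures would be:
implied_people_covered = US_HOUSEHOLDS * avg_household_size
implied_injured = US_HOUSEHOLDS * injuries_per_household

print(f"avg household size: {avg_household_size:.2f}")
print(f"implied people covered: {implied_people_covered:,.0f}")
print(f"implied injured nationally: {implied_injured:,.0f}")
```

If the implied people covered blows past the actual US population, or the implied injury count is implausibly large, that flags either the 8+ interpretation problem or selection bias in who answers.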
The 3/4 who had a covid death and thought the medical treatment contributed looks interpretable as selection bias, or whatever the term is for bias due to who is attracted to, and answers, the poll. What proportion thought the medical treatment was dangerous among those who had vaccines but no vaccine injuries?
Is there a way to see 'aborted' surveys? Does that have any use?
What happens if you strip the survey down to, say, 6 questions or so? Does the proportion of deaths, worked as a proportion of the population, start to come down?
"Selection bias", or whatever that is called in poll design, seems like it would be the weakest point of this data. Repeat polls are showing similar results, so you are getting a consistent view of something, but the view is not applicable to the population as a whole. High precision, low accuracy, I would guess.
Excellent initiative and thank you for doing all this.