So let’s assume for a moment that the survey George Till released is actually a survey of physicians, conducted without duplication. Even if that were true, there would still be multiple problems with both the data and the survey as presented.
To highlight a few examples, let’s start with the responses to question #2, which asks whether there should be additional protections for physicians or patients if physician-assisted suicide were to become legal.
The choices are yes, no, and no response, and they break down, respectively, to 71, 471, and 68. The survey’s interpretation simply ignores the skipped responses, presenting the results as 13.1% yes and 86.9% no. Counted against all 610 returned surveys, it should really be 11.6% yes, 77.2% no, and 11.1% no response.
So that’s kind of misrepresenting things to begin with.
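Redoing the arithmetic makes the gap obvious. Here’s a quick sketch using the counts above:

```python
# Question #2 responses as reported: yes, no, and no response
yes_count, no_count, skipped = 71, 471, 68
total = yes_count + no_count + skipped  # 610 returned surveys

# The survey's interpretation drops the skips entirely
answered = yes_count + no_count
print(f"skips dropped: yes {yes_count/answered:.1%}, no {no_count/answered:.1%}")
# -> skips dropped: yes 13.1%, no 86.9%

# Counting every returned survey
print(f"all returns:   yes {yes_count/total:.1%}, no {no_count/total:.1%}, "
      f"skipped {skipped/total:.1%}")
# -> all returns:   yes 11.6%, no 77.2%, skipped 11.1%
```

Neither presentation is wrong per se, but reporting only the first one quietly erases the 68 people who declined to answer.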
But here’s the part I just love: there’s a second section to that question. If you answer yes, there’s a follow-up: “If you think there are more protections necessary, what do you suggest they should be?”, shortened to “if so, what?”
Remember that there were 71 people who said “yes” to that question. So you would think that there would be 71 or fewer answers to the “if so, what?” question.
Not so much.
There are 98 answers to that question.
This, to me, suggests a certain lack of quality control.
These are just a couple of examples of problems with the presentation and use of the survey from a methodological standpoint. On the flip side, I’ll talk more about the philosophy of surveys and what they do and do not do.
So let’s talk for a moment about polling. You often see people surveyed on a variety of issues; such polls usually report the number of people polled and include a margin of error. This is based on statistical analysis of sample size versus population size. The larger your sample relative to the population, the more likely you are to get results which fall within a given margin. Everyone shoots for a confidence level of about 95%.
So when you see a margin of error of, for example, +/- 4%, it means that given the number of people polled and the size of the population being estimated, 95% of the time the result you get from that poll will land within 4 points of the true figure for the population at large. The other 5% of the time, it’s just sampling error.
When people present surveys like Till’s, they often do so as though it’s in some fashion similar to those polls and surveys.
It’s similar, in the same sense that a raccoon is similar to a horse. They both have fur and are capable of walking on all fours. But there are a whole lot of differences.
Till’s survey doesn’t do random sampling. It was sent out to a large swath of people, a portion of whom (36%) responded. This means the responses are self-selected: people who feel passionately about these issues, and think it’s worth their time to make their voices heard, shared their thoughts. Those who didn’t respond either didn’t have time, thought it was worthless, or just didn’t feel a strong need to weigh in.
You can’t interpret much from that. If I were trying to be political about this, I could claim that only 10% of those surveyed said they’d leave the state if single-payer were enacted, quietly counting every non-responder as a “no,” since all 1,600+ were surveyed but the response rate was only 36%. But that’s a ridiculous claim, because it (a) takes the “survey” seriously to begin with and (b) ignores proper methodology for interpreting and understanding survey results.
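To spell out that silly arithmetic: whatever fraction of respondents gave an answer, you can shrink it by the response rate if you pretend everyone who didn’t respond disagreed. A sketch, where the 28% respondent share is a hypothetical figure of mine, chosen only to land near 10%:

```python
# Figures from the post; the 28% respondent share is hypothetical,
# used only to show how a "10% of those surveyed" claim gets built.
surveyed = 1600
response_rate = 0.36
share_saying_leave = 0.28  # hypothetical share of *respondents*

respondents = surveyed * response_rate
leavers = respondents * share_saying_leave
print(f"{leavers / surveyed:.0%} of everyone surveyed")  # -> 10% of everyone surveyed
```

The trick works in either direction, which is precisely why self-selected response numbers can be spun however the presenter likes.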
So what you have here is something more akin to a poll in Cosmopolitan magazine or any other such pop culture thing which makes for nice press, but doesn’t say anything meaningful or valuable about the topic. I’d be fine with that, except for the fact that it’s presented as though it actually is academic and meaningful.
Specifically:
Rep. George Till (D-Jericho) today released the results of the 2011 Vermont Physician Legislative Survey. Till conducted the survey with a Departmental faculty support grant from the University of Vermont College of Medicine. Doctor Till had assistance from Anastasia Coutinho MPH, UVM College of Medicine class of 2014.
Now this offends me. At least the Doyle survey isn’t presented as anything but one man’s quirky attempt to take “the pulse of voters on hot issues.”
But Till? He presented this as though it’s science and put it under the auspices of actual meaning.
To me, that’s unacceptable.
14% of people surveyed by Dr Till know that.
Though it is known how ‘polls’ & ‘studies’ can be slanted due to demographics & other influences, this story is helpful as a close-up that puts into perspective one of the ways numbers can be skewed to manipulate data from a scientific standpoint.
The Dr. Till poll is particularly disturbing due to the many examples. Using “Survey Monkey” was really creepy, since it appears to originate from respected sources. Quite an eye-opener to observe the deception used in how this all was concocted.
I would also be interested to know the demographics & details of Doyle’s poll, especially since I simply do not agree that the results are accurate.
Vermont House leaders slipped into rep.’s survey
5:26 PM Fri., April 8, 2011 | By Terri Hallenbeck
http://blogs.burlingtonfreepre…
It’s very elitist-liberal of you to expect numbers to add up and reconcile.
For numbers pulled out of the air (or someone’s ass) they are perfectly good numbers. They seem fair and balanced.
For additional “behind the scenes” political discussion of this completely unscientific poll, I refer readers to jvwalt’s sidebar diary. I learned a few things I didn’t know.
And thanks, Julie, for reminding me of the characteristics of a true poll or survey, things I learned too long ago in a statistics segment of a psych course in college.
NanuqFC
We must accept finite disappointment, but we must never lose infinite hope. ~ MLK, Jr.