Playing the Field Poll
Media love them, but public-opinion polls are still a less-than-exact science
In an age of interactive excess, public-opinion polls have become the primary mechanism for the media to gauge which way the political winds are blowing. But, though surveys are intended to provide answers, some observers say there are plenty of questions discerning readers should be asking about the polls themselves.
Now more than ever, news organizations are regularly sponsoring their own surveys as well as running stories based on polls from independent polling companies. Many media companies heavily cross-pollinate in the process—CNN and USA Today, ABC News and The Washington Post, and NBC News and The Wall Street Journal, to name just a few.
Here in California, news organizations often turn to the Field Research Corp., which has produced the Field Poll since 1947. It’s a non-partisan, California-specific survey that does not undertake studies for political candidates or ballot initiatives. Although the company won’t work for politicos, it fervently tracks voter interest in candidates running for office and in the job performance of those already there. The company produces between 40 and 50 surveys a year, often in clusters around hot news issues, such as elections.
With the notable exception of the Los Angeles Times, which conducts its own polls, virtually every major daily in California subscribes to the Field Poll service. For their investment, these papers get a one-day advance release of each new poll as well as proprietary access in their own market. (For this area, The Sacramento Bee has it.) A full day after the poll has been released to the subscribers, the collected data is sent to the University of California and California State University systems and is posted to the Field Poll Web site.
Field Poll Director Mark DiCamillo wouldn’t reveal how much newspapers pay for the service, but he did say, “The fee is proportionate to the newspaper’s subscriber base. A large paper like the San Francisco Chronicle will pay more than a paper in a smaller town.”
Although skeptics might not believe it is possible to ask just 1,000 people questions and then extrapolate the answers into an accurate prediction of how millions would respond, Field Research has a pretty fair record to stand on. Since 1984, company officials note, the candidate leading in the final pre-election Field Poll has won 17 out of 17 top-of-the-ticket California races (for offices such as governor or U.S. senator). The same is true in 43 out of 47 races since 1948.
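The arithmetic behind that extrapolation is standard sampling theory rather than anything proprietary to Field: the margin of error of a simple random sample shrinks with the square root of its size, so 1,000 respondents yield roughly plus-or-minus 3 percentage points at 95 percent confidence. A minimal calculation, using the textbook worst-case formula rather than any particular pollster’s method:

    import math

    def margin_of_error(n, z=1.96):
        """95-percent-confidence margin of error for a simple random
        sample, assuming the worst-case proportion p = 0.5."""
        return z * math.sqrt(0.25 / n)

    print(round(margin_of_error(1000), 3))  # 0.031 -- about +/- 3.1 points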
Polls have become such an obsession that many politicians appear to avoid taking any action that isn’t supported by the numbers. But authorities warn that not all polls are created equal and that none should be taken solely at face value.
“Because polls are really looking at only a small piece of what is usually a very large picture, there are some very basic questions people should ask themselves when looking at them,” says Marlene von Friederichs-Fitzwater. She is the founder and chief executive officer of the Health Communication Research Institute Inc. (HCRI), a Sacramento nonprofit that conducts a variety of polls on its own and in partnership with other health organizations.
“First and foremost, when trying to evaluate any public-opinion poll, is to know how questions are worded and who was asked,” warns von Friederichs-Fitzwater, who also holds a doctorate and is a professor of communication studies at California State University, Sacramento (CSUS).
Val Smith, a veteran of survey research with more than 20 years’ experience working mostly for politicians and private organizations, agrees.
“The biggest issue in any survey is knowing how it was conducted,” said Smith, who chairs the communications department at CSUS. “It all comes down to how the questions are written and what methods precede those questions.”
In earlier times, surveys were done in person, but these days, the vast majority of them are done via telephone. Field Research is typical of most companies in its use of random-digit dialing (RDD), which uses computers to randomly create telephone numbers within a targeted prefix. This gets around the pesky issue of unlisted numbers, which some experts estimate make up approximately 30 percent of U.S. residential phone numbers. RDD also ensures that everyone in the sampling population within that prefix has an equal chance of participating. The computer also dials the number repeatedly if nobody answers, in order to ensure that people away from the phone have their chance, too. Field Research takes that a step further by conducting its interviews in both English and Spanish.
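A rough sketch of the random-digit-dialing idea follows; the area code, prefix and sample size are invented for illustration and do not reflect Field Research’s actual procedure:

    import random

    def rdd_sample(area_code, prefix, count):
        """Randomly generate the last four digits within a targeted
        prefix, so listed and unlisted numbers are equally likely
        to be drawn."""
        return [f"({area_code}) {prefix}-{random.randint(0, 9999):04d}"
                for _ in range(count)]

    numbers = rdd_sample("916", "555", 1000)  # e.g. '(916) 555-0482'

Because the digits are generated rather than pulled from a directory, a household’s decision to unlist its number has no effect on its odds of being called.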
Smith understands the logic of using RDD, but he also has problems with it. Because most of the surveys he conducts are for political campaigns or the people behind them, he prefers to use official voter files to acquire his sample audience.
“If a pollster asks questions about an election from a collection of people who are not likely to vote,” he reasons, “then that survey has an automatic bias built into it, particularly when compared to a survey that has taken the steps to ensure it is polling a sample audience that is statistically likely to vote.”
But getting someone on the phone is only half the battle. Even the most legitimate pollster easily can undermine his or her own credibility by framing questions or slanting them in a particular direction. For instance, questions using the term “abortion clinic” could elicit a much different response from questions referencing a “family planning center.”
Field Research, the Gallup Organization and others rely on computer-assisted telephone interviewing (CATI) technology to keep their questions aboveboard. This system brings questions up on a computer screen, with each question customized so it naturally follows the response given to the previous question. CATI also works to tabulate results quickly and compare them against known population factors, such as race and gender, as a means of correcting for any sampling biases. All of those adjustments are then factored into how the results are interpreted.
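That weighting step works roughly like the following sketch, in which answers from under-represented groups count for more until the sample’s mix matches known population shares (the gender figures below are invented for illustration, not actual census data):

    # Post-stratification weighting: scale each group so the weighted
    # sample matches its known share of the population.
    population_share = {"men": 0.49, "women": 0.51}  # assumed shares
    sample_share = {"men": 0.40, "women": 0.60}      # what the sample drew

    weights = {group: population_share[group] / sample_share[group]
               for group in population_share}
    # men: ~1.22, women: 0.85 -- men's answers count a bit more, women's
    # a bit less, correcting the sample's gender skew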
“We take our methodology very seriously,” said Steve O’Brien, executive publisher for the Gallup Organization in Washington, D.C. “A lot of work goes into making sure that we have done as impartial a job as is humanly possible.”
And for good reason, according to O’Brien, who said the cost for a typical national, single-issue Gallup Poll with a random sampling of 1,000 people starts in the $40,000 range. That figure is a drop in the bucket compared with the cost of some polls. Smith said the more-intricate surveys usually requested on mass-interest public-policy issues can reach close to $200,000 if organizers choose to use additional polling tools, such as focus groups.
But no matter the issue, all of this careful planning and technology does not mean a thing without public response.
“The biggest problem we face right now, by far, is a declining response rate,” said Smith. “People are so inundated with telemarketers that they don’t want to even listen when we ask about answering questions for a survey.”
Another player on the scene, Rochester, N.Y.-based Harris Interactive, thinks it has the answer to that quandary. Since 1998, Harris has conducted its research solely on the Internet. Dan Hucko, Harris’ vice president for corporate communications and investor relations, said the ease of answering surveys online makes Internet survey research the wave of the future.
“In 2000, we were able to survey over 300,000 adults concerning 72 elections across the nation,” he said. “We had results tabulated and ready in about three hours and were also 96 percent correct in our predictions. When you look at that kind of speed and the flexibility it offers for people to participate on their own schedule, it’s easy to see that we are on the cusp of a total paradigm shift in polling.”
Hucko said Harris’ version of a single-item random survey of 1,000 people costs only around $25,000. He also mentioned the ability of his researchers to display pictures, movies and even links to other Web sites in the course of a poll, something he said makes subjects more inclined not only to get involved in one survey but also to participate in more surveys down the road.
Not everyone sees it that way.
“We want to ensure that everyone can get into the system,” said Gallup’s O’Brien. “Not everyone uses the Internet like that, so for now, we feel the best system is still via the telephone.”
DiCamillo agreed, saying that although the technology is appealing, the lack of computer saturation severely limits survey accuracy.
“We know that more than 98 percent of the California population has a phone,” he said. “I don’t know what the percentage of home computers is, but it isn’t close to 98 percent.”
There is also the very real concern that in cyberspace, people often are not who they portray themselves to be. Hucko concedes there is potential for survey fraud, but he said that potential is no greater than what exists in telephone interviewing. He also said his company goes to great lengths to screen participants and to ensure that each household gives only one response.
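Harris does not spell out how that screening works; one straightforward approach, offered here purely as an assumption, is to key every submission to a household identifier and keep only the first:

    def first_response_per_household(responses):
        """Keep only the first response recorded for each household
        ('household_id' is a hypothetical key for this sketch)."""
        seen, kept = set(), []
        for response in responses:
            if response["household_id"] not in seen:
                seen.add(response["household_id"])
                kept.append(response)
        return kept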
Von Friederichs-Fitzwater acknowledged the usefulness of surveys but was quick to caution discerning poll readers to remember that self-reported research also can be highly inaccurate, regardless of whether the information comes from Field Research, Gallup, Harris or any other organization.
“I just wish politicians would remember that,” she said. “The truly sad thing is that politicians place so much of their faith in these polls and not very much at all with the public they serve.”