Analyzing Polls: Interpretive Analysis - Context
When shopping for household appliances, smart consumers investigate and compare features, dimensions, price, warranty, brands, delivery options, energy consumption, colors, and so on. Smart social science analysts likewise research and compare data in order to produce more thoughtful summaries. Only within the right context can they make well-informed decisions about, and interpretations of, what survey data mean. This section suggests contextual settings in which survey research studies should be considered.

Reliability Checking

Preparing an analysis of public opinion using the results of a single survey can be challenging. Sound research minimizes the effect of outliers and is well grounded in multiple sources, an approach sometimes called triangulation. Survey data are in abundant supply, and reputable survey organizations archive the results of their work with the Roper Center or other archives so that researchers can learn from their data. From sources such as these archives, multiple sets of data can be easily obtained. The examples below help illustrate this.
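As a rough illustration of the triangulation idea, the sketch below compares the same hypothetical question as asked by several firms and flags any reading that sits far from the rest. The firm names and percentages are invented for illustration, not drawn from any archive.

```python
from statistics import median

# Hypothetical estimates (% approving) of the same question from four firms;
# these figures are invented placeholders, not real archived poll results.
polls = {
    "Firm A": 52.0,
    "Firm B": 54.0,
    "Firm C": 51.0,
    "Firm D": 63.0,   # a noticeably different reading
}

center = median(polls.values())

for firm, pct in polls.items():
    # Flag any reading more than 5 points from the median as worth a closer
    # look at its methodology, timing, and question wording.
    status = "check methodology and timing" if abs(pct - center) > 5 else "consistent"
    print(f"{firm}: {pct:.1f}%  ({status})")
```

Comparing several independent readings in this way is a simple guard against building an interpretation on a single outlying survey.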
Non-Polling Sources Provide Context

Social scientists use multiple sources in order to better understand and communicate the broader scope of what is being measured. For example, a survey report that consumer confidence is plummeting can be juxtaposed with the Consumer Price Index. When unemployment figures rise in a community, how are opinions toward increased taxes affected? As much as we might like to claim that public opinion can be understood from survey data alone, we know that is not true. When multiple measures are involved in the analysis, assumptions and conclusions are far better grounded when they are supported by other types of data and evidence.
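Following that reasoning, the brief sketch below pairs a hypothetical consumer-confidence series with hypothetical inflation figures and computes their correlation. Both series are placeholders, not actual survey readings or Consumer Price Index values.

```python
# A minimal sketch of pairing a survey measure with a non-polling indicator.
from statistics import correlation  # requires Python 3.10+

# Hypothetical monthly consumer-confidence readings from a survey series
confidence = [98.2, 96.5, 94.1, 92.8, 90.3, 89.7]
# Hypothetical year-over-year inflation (%) for the same months
cpi_change = [3.1, 3.4, 3.9, 4.2, 4.8, 5.0]

r = correlation(confidence, cpi_change)
print(f"Correlation between confidence and inflation: {r:.2f}")
# A strong negative value would support, not prove, the interpretation that
# falling confidence tracks rising prices; other evidence is still needed.
```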
A poll offers a snapshot of opinion at a single point in time. Always be alert to the interview dates and to other events going on at that time. The timing of fieldwork can be affected by a wide variety of activities, some obvious, others less so. Did the President give a nationally televised speech on the topic, or a related topic, around the time of the poll? Was there an elevated alert for terrorist activity at the time of a survey about funding Homeland Security, or had no such alert occurred for several months or years? If the research project looks back at a historic point in time, sound analysis may require some digging. Consider the contextual climate at the time the survey was fielded: what conditions existed then that may have affected responses?
Trends

Many topics lend themselves to another analytic approach: trend lines. What does the longer-term trend on the question tell us? Long-term trends are available that illustrate how people view the political parties, for example, which party is better equipped to handle certain issues. Tracking these data shows how the public links the parties to their positions on the issues, and changes in that collective view can tell party leaders where their party appears to be strong or weak. Another important trend line is presidential approval ratings; the Roper Center offers these data as far back as Roosevelt's term.
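To make the trend-line idea concrete, the sketch below smooths a short, invented series of approval readings with a simple moving average. The numbers are placeholders, not actual Roper Center approval data.

```python
# A minimal sketch of reading a question as a trend rather than a snapshot:
# smoothing repeated approval readings with a moving average.

approval = [54, 52, 55, 49, 47, 50, 46, 44, 45, 43]  # hypothetical % approve, in time order

def moving_average(series, window=3):
    """Average each reading with its neighbors to reveal the underlying trend."""
    return [
        sum(series[i:i + window]) / window
        for i in range(len(series) - window + 1)
    ]

trend = moving_average(approval)
direction = "declining" if trend[-1] < trend[0] else "rising or flat"
print("Smoothed trend:", [round(x, 1) for x in trend])
print("Overall direction across the period:", direction)
```

Smoothing of this kind keeps a single unusual reading from dominating the interpretation of the series.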
When preparing secondary analyses of opinion data, do not forget to look at how different groups within the same sample responded. The full story may not be told by the results of the full sample alone. A nationwide sample of adults responding to a battery of questions on experience with job discrimination may look quite different when sub-setting for women or minorities in the sample. Analyzing group data is particularly useful when the overall results are very close. Similarly, when reviewing survey reports, it is important to check who was asked each question and whether the results describe everyone in the sample or only a subset of respondents. Analytical reports often provide results filtered on a set of respondent characteristics. During election season, survey firms frequently use a set of filter questions to gauge the likelihood that a respondent will vote. By asking early in the survey about voter registration status, prior voting behavior, and intentions for the upcoming election, a firm can report results for the full national adult sample, for registered voters only, or for registered voters who have voted in the past and plan to vote again. In addition to the size of the entire sample, reports should state the size of each sub-population being analyzed and the corresponding sampling error. The larger the number of people interviewed, the smaller the sampling error. Conversely, dissecting the sample into subgroups that are too small increases the error.
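To show why subgroup estimates carry more sampling error, the sketch below applies the standard 95% margin-of-error approximation for a proportion, MOE = 1.96 * sqrt(p * (1 - p) / n), at a few sample sizes. The sample sizes are hypothetical, chosen only to illustrate the pattern.

```python
# A minimal sketch of how sampling error grows as the reported group shrinks.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) 95% margin of error, in percentage points."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Hypothetical sample sizes for a national poll and two filtered subsets
groups = [
    ("Full adult sample", 1000),
    ("Registered voters", 750),
    ("Likely voters", 400),
]

for label, n in groups:
    print(f"{label} (n={n}): ±{margin_of_error(n):.1f} points")
# Prints roughly ±3.1, ±3.6, and ±4.9 points: the smaller the subgroup,
# the wider the margin of error around its reported percentages.
```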