A while back, while racking his brain for ideas on how to conduct educational research, some guy came to the glorious realization that the easiest way to get information from people is to just ask for it. Since that fateful day, surveys have been an indispensable tool for research in education. As critical as surveys are, though, it’s easy to lose sight of their pros and (especially) cons in the heat of a research project or teaching semester. Gathering and presenting information from surveys has limitations like any other method.
The freedom offered by survey methods to the researcher can be both liberating and dangerous. Consider the following scenario: you want to measure the physical activity of a student by surveying her about her workout schedule. So, you ask her how often she visits the gym. What you don’t (and indeed, probably can’t) know is that the student works at the gym, and is there to work just as much as she is to work out. How will she answer? If you’re lucky, she actually keeps track of when she visits the gym to work out. More than likely, you’ll get an answer that has been heavily fudged by the student. Your instrument doesn’t measure what you’d like it to; it lacks validity. In educational research, constructs derived from survey items are very often used to measure “grander” concepts, such as intelligence. To the extent that the underlying theory is incorrect, such survey instruments lack validity.
Survey instruments must also be reliable; that is, they must measure the same thing the same way over time, across different items targeting the same construct (internal consistency), and when administered by different people. A perfectly reliable instrument possesses no random error. In general, reliability argues for very specific survey items that account for all, or nearly all, of the variables that may be at play in each item. On the other hand, increasing the number of items in a survey increases the potential for misread or ill-considered items, and may dilute the impact of any one item from the perspective of analysis. There is a subtle balance here. The workout survey, for instance, would clearly benefit from an item about other activities at the gym besides working out. An item asking for the precise nature of these activities, however, is probably overkill, and would likely confuse the many respondents who just go to the gym to exercise.
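Internal consistency, at least, can be estimated numerically. As a rough illustration (not from any of the instruments discussed here, and using entirely made-up data), here is a minimal Python sketch of Cronbach’s alpha, a common reliability coefficient, applied to hypothetical responses to three related Likert items:

```python
# Cronbach's alpha: a common estimate of a multi-item scale's reliability
# (internal consistency). Hypothetical data: rows are respondents,
# columns are three Likert-scale items (1-5) meant to tap one construct.
scores = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                    # number of items
    items = list(zip(*rows))            # transpose: one tuple per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

print(round(cronbach_alpha(scores), 2))
```

For this toy data, alpha comes out around 0.90; by convention, values above roughly 0.7 are usually taken as acceptable for a scale. The point is not the arithmetic but the design pressure it creates: items that hang together raise alpha, while tacked-on, out-of-left-field items drag it down.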
Thus the question I pose in the title of this post: how much is too much? How much information is too much to collect? Too much to process? Too much to respond to? In my experience, there is really no reason for an educator to broach subjects or theories beyond his/her field if the research is solely for course or curriculum development. Students will invest earnestly in a survey that may benefit a course they’re currently taking, but exploring “grander” ideas about the relationship between external variables and an (in the scheme of things, unimportant) university course just confuses and alienates students (I can vouch for this one personally). Accept from the outset that students won’t care if you discover a hidden connection between a student’s appreciation for ’80s hip hop and their performance in your course, so it’s not even worth going there. Really, in 2010, there is no reason for educators in the trenches of teaching to design their own survey instruments except, perhaps, in bleeding-edge fields. The nice thing about adapting or borrowing an existing instrument is that you don’t have to sweat validity and reliability, which are built into the design of the existing instrument. Actually measuring what you set out to measure can be a pretty cool experience in and of itself. Trust me: the survey designers want educators to do this!
Honest teachers will accept that all they really want and need to do with educational research is improve their own teaching, and that the easiest way to improve is simply to ask students how well this or that intervention worked. My advice here is very simple: don’t neglect factors that could significantly affect the results of an intervention survey, such as the students’ performance on said intervention. Strive for completeness, but avoid irrelevant questions and items that come out of left field in the middle of a survey (“I will be holding a surprise pop quiz tomorrow. How much do you know about the Wittig reaction?”). In the broader context of the learning goals, planned activities, and assessment methods of a course, survey instruments can serve as valuable formative (pre-, mid-, or post-semester!) probing tools…but find an existing, adaptable framework that works! Here are some of my favorites from chem ed:
Chemistry Self-concept Inventory
Student Assessment of Learning Gains
Groundwater Pollution Survey – If this one seems out of place, check out their “exploratory factor analysis.” This is one of the better papers I’ve seen that actually describes the statistics behind instrument validity. The content is a little esoteric, but hey, they’re asking for what they want to know! Kudos!