Chemical Education Roundup, 7-4-11

Happy Independence Day! A brand-spanking-new issue of Chemical Education Research & Practice found its way into my RSS reader this week, so there’s plenty to talk about in today’s roundup.

Let’s begin with a paper that gives multiple-choice tests in chemistry a second look. A lot of educators are nagged by the feeling that multiple-choice tests reward factual recall and memorization rather than conceptual understanding. One reason for this, argues George DeBoer in a recent CERP paper, is that typical analyses of multiple-choice tests treat them as dichotomous—every answer is either right or wrong, and the incorrect choices are lumped together and thrown out. In most cases, however, instructors deliberately design the incorrect answer choices (also known as “distractors”) to highlight specific incorrect lines of reasoning. If that’s the case, we have a lot to learn from incorrect answers!

DeBoer applied Rasch modeling to a series of multiple-choice tests whose distractors were designed to pinpoint common chemistry misconceptions. Like other item response theory models, Rasch models assign ability levels to students and difficulty levels to items. The probability of a correct response on item x by student a depends on the difference between a’s ability parameter and x’s difficulty parameter. DeBoer’s model is even more finely grained: it specifies a probability for each choice on each item. Because each choice highlights a different misconception, one can plot the relationship between overall ability level and the probability of exhibiting a given misconception (e.g., see the graph below). Cool stuff!

[Figure: Misconceptions as a function of ability level]
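For the curious, the standard dichotomous Rasch model (the textbook form, not DeBoer’s exact per-choice specification) gives the probability that student a answers item x correctly as

$$P(X_{ax} = 1) = \frac{e^{\theta_a - \beta_x}}{1 + e^{\theta_a - \beta_x}},$$

where $\theta_a$ is the student’s ability parameter and $\beta_x$ is the item’s difficulty parameter. Whenever ability exceeds difficulty, the odds tip in the student’s favor; DeBoer’s polytomous version replaces this single right/wrong probability with a probability for each answer choice.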

In other news, Penn et al. have validated the usefulness of concept maps as a measure of understanding in organic chemistry, using correlations with problem-set scores and final course grades. To generate the maps, they used a freely available concept-mapping tool called Cmaps.
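For a rough sense of what that kind of validation looks like in practice (the numbers below are invented; only the correlate-maps-with-course-outcomes idea comes from the paper), here’s a minimal Python sketch:

```python
# Hypothetical data for illustration only: concept-map scores alongside
# final course grades (percent). Only the "correlate map scores with
# course outcomes" idea comes from the Penn et al. paper.
from scipy.stats import pearsonr

cmap_scores  = [12, 18, 9, 22, 15, 27, 11, 20]
final_grades = [68, 81, 61, 88, 74, 93, 65, 79]

r, p = pearsonr(cmap_scores, final_grades)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```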

The Journal of Computing in Higher Education has launched a special issue on interactions in distance education, and the first paper from that issue folds together two studies that examine how different instructional strategies facilitate group interaction in online classrooms. The studies used the SOLO taxonomy and the Community of Inquiry framework to evaluate the strategies; the results were largely complementary and fit together nicely in Kanuka’s article.

Finally, check out my friends’ blog on surviving in the wonderful, wild Midwest at Adventures in the Midwest!

Academic Performance and Concept Maps

“Mildly creepy yet thorough, with a number of ‘duh’ moments.”

That’s how I’d describe the Journal of Chemical Education’s latest cross-university look, from Szu et al., at the factors that influence academic performance in organic chemistry courses. Creepily, the researchers asked student volunteers to keep a diary of their daily studying activities, indicating at fifteen-minute resolution “when and where they were studying, with whom, and what materials they used.” As is so often the case (see my recent micoach lament), this rather invasive data produced some of their most interesting results. Does the data-supported conclusion that “higher-performing students study earlier, not more” constitute something interesting in an absolute sense? I’ll let you be the judge. To me it was a “duh” moment—staying ahead of the course material means that concepts make more sense when you hear them for the “first” time.

The Szu paper continues the recent trend (a fad of Gaga-esque proportions) of using concept maps to measure students’ conceptual understanding of a subject. I’m still not aboard the concept-map bandwagon, myself. Strangely, most human graders seem to treat concept maps like glorified open-response essays during assessment. How does it make sense to grade something built from discrete, explicit connections between concepts with a scale like “0 = total nonsense, 4 = scientifically complete”? It only makes sense if one “reads” a concept map as one would read an essay response, mentally talking out “[concept A] [verb expression] [concept B]” for each link. There must be a better way to grade these damn things!

Let me put on my Nostradamus cap for a second: visualization libraries for directed graphs on the web are not quite “there” yet, but once they get “there,” network analysis will bust onto the concept map scene in a big way. Human graders aided by computer analysis of concept-map networks will take education’s latest short-answer proxy to the next level as an assessment tool. Right now, for instance, the distance between concepts on a map is meaningless (a matter of aesthetics only). Even basic network metrics, such as relative in- and out-degrees, are impossible to get a grip on visually. Can you imagine the statistics educators could gain by pooling machine-readable concept maps from universities all over the country? It blows the mind!
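To make that concrete, here’s a minimal sketch of the kind of analysis I mean, written in Python with networkx; the tiny toy concept map below is entirely invented and doesn’t come from any of the papers above:

```python
# A toy concept map as a directed graph. Nodes are concepts; each edge
# carries the linking phrase a student would write between two concepts.
# The map itself is invented purely for illustration.
import networkx as nx

cmap = nx.DiGraph()
cmap.add_edge("acid", "proton", label="donates a")
cmap.add_edge("base", "proton", label="accepts a")
cmap.add_edge("acid", "conjugate base", label="becomes a")
cmap.add_edge("pH", "proton", label="measures the concentration of")

# Metrics that are hard to eyeball but trivial to compute:
for concept in cmap.nodes:
    print(f"{concept}: in-degree {cmap.in_degree(concept)}, "
          f"out-degree {cmap.out_degree(concept)}")
```

Even on this tiny example, a hub concept like “proton” (high in-degree) pops right out; pool thousands of student maps and you could start asking which connections students consistently miss.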