Chemical Education Roundup, 4-23-13

“It was the best of times; it was the worst of times.” This sentiment nicely sums up the state of chemical education right now. While sequestration threatens the largest sources of funding for chemical education researchers in the US, the literature has been on fire in the past few weeks with some intriguing studies. There’s a lot to talk about, so let’s get right into it!

First, the bad news. STEM education takes a painful hit in the President’s budget for FY 2014.

The single biggest consolidation proposed this year is in the area of science, technology, engineering, and mathematics (STEM) education, where the Administration is proposing a bold restructuring of STEM education programs—consolidating 90 programs and realigning ongoing STEM education activities to improve the delivery, impact, and visibility of these efforts.

Don’t be fooled by the rhetoric: this is almost certainly bad news for American chem ed researchers. It will be interesting to see how existing NSF-funded programs respond to these changes, but it’s almost certain to hurt the proliferation of new programs. It’s worth noting also that this is only a proposed budget, but if President Obama is throwing STEM education under the bus, I don’t see Congress fighting back.

Enough with the bad news! The bright side is that a lot of interesting research is happening these days. I’ve been digging into the general chemistry literature lately for professional reasons, and a very recent study out of Middle Tennessee State University caught my eye. The research addressed student conceptions of gases, focusing on a question about the effect of a temperature change on a particulate-level representation of helium gas (originally studied by Nurrenbern and Pickering). The conclusion of the research is typical: scaffolded, schema-activating assessment designs improve performance on conceptual problems relative to vaguer designs, but the authors were unable to pin down the exact source of the performance boost (despite a few controls).

One clue is provided by another recent study: that of Behmke and Atwood on the implementation of problems sensitive to cognitive load theory in an electronic homework system. The authors converted single, multi-step problems into sequences of related problems that “fade” from nearly complete to fully incomplete. Using an analytical approach based on item response theory, the authors observed that students exposed to the “statically fading” questions were very likely to perform better on subsequent related problems. Breaking a multi-step problem down and exposing its process over several problems can improve performance.

Jennifer Lewis and colleagues at USF have written a very important summary of the state of the art in psychometric measurement for chemistry education research. In addition to pointing out the typical methods researchers use to argue for the validity and reliability of survey results, Lewis et al. note that chemistry education research is becoming more interdisciplinary as evidence mounts for theoretical overlap between sub-fields of science education. They also draw attention to the need for qualitative research to complement quantitative efforts (see the MTSU study for a nice recent example of this idea). A nice read right after Lewis’s review is Barbera’s recent psychometric analysis of the Chemical Concepts Inventory.

In other news: a simple approach to assessing general chemistry laboratories; an investigation of apprenticeship in research groups; differential item functioning in science assessments; the evolution of online video in an organic chemistry course; teaching gas laws to blind students. Mouse over the links for full article titles!


Inside Students’ Heads

There is a definite difference between good teaching and good learning. On the one hand, good learning does not follow logically from good teaching, and on the other, learning can take place without the aid of a teacher per se. The two can, in theory, be completely decoupled. And unfortunately, one’s definition of good teaching does not always overlap with techniques that actually lead to student learning. This disconnect between comfortable teaching habits and effective ones is particularly common in chemical education, where the divide between those interested in the subject (teachers) and those not (let’s face it…students) is extremely wide.

Enter constructivism, a theory of psychology about mental development, the formation of knowledge, and the process of learning. The fundamental constructivist hypothesis is that knowledge is constructed: it is not “out there” on a silver platter ready to be assimilated unchanged. In reality, the learning process mangles what is actually heard into a system of constructs that make sense in the mind of the learner (and these constructs depend on what was there before, which is different for each learner). Applications of constructivism to education at the collegiate level basically assume that “hell, the students are paying for their education, so yeah…student learning should be our ultimate goal.”* So let’s get inside the students’ heads and hire instructors to cross the chasm and coach students to the other side. That’s a whole hell of a lot easier than yelling at students across the canyon (figuratively speaking) and expecting them to listen. To sell the strategy, let’s wrap it all in a fuzzy package and give it a fancy name, like student-centered teaching.

The interesting thing about student-centered teaching (SCT), to me, is that it seems to force the instructor to take a somewhat passive role in the classroom. The question immediately comes to mind: “how do I center ‘teaching’ on the student without making my job as an instructor just a little less necessary?” The short answer is that for student-centered approaches to work, instructor efforts must be redirected, not replaced. Indeed they can’t be replaced—we still need to get students across the chasm from ignorance. So what should the modern student-centered instructor be doing?

Providing relevant contexts for learning. Without relevant context, students will not see the subject as valuable. But, you’d be surprised what can serve as “relevant context.” I still remember a tangent one of my advanced organic professors made when we talked about aromatic substitution reactions, about the use of p-dichlorobenzene in urinal cakes.

Providing tools that can be applied to problem solving. Many tasks taught in chemistry curricula involve painfully rote activities—nomenclature comes to mind (and for me, all of general chemistry). This fact, coupled with the notion that the relevance of many low-level chemistry topics (e.g. molecular orbitals) is hazy at best for students, suggests the need for tools to automate and simplify necessary tasks with little educational value. Not to downplay physical chemistry, but MO calculators let students skip the long road from “this is an atom” to “here’s how to use MOs to predict reactivity, which actually has real-world applications.” Students find the latter discussion much more valuable, and from an instructor’s perspective, hey, it’s still chemistry.

Forgetting about covering content. A lifetime is not long enough to cover the entirety of the landscape of organic chemistry. Pressure to cover a great deal of content, in this day and age, comes from the need to demonstrate the relevance of chemistry to students—“see? It’s here, and here, and here, and here, and here, so it’s clearly relevant. And stuff.” But time is limited, and no teacher can have her cake and eat it too. There is a practical limit on how much can be covered in one semester, and it’s lower than most of us think (assuming student learning is the goal of all this education business). The good news is that sacrificing the fundamentals a little to cover more real-world applications is OK, provided effective tools are in place (see above).

Handing over the reins. In addition to providing contexts for learning, instructors of higher-level classes should establish means for students to provide contexts of their own. In other words, let your students do the work for you!  Not really, of course, as designing, building, and managing a system for students to contribute to course content is a full-time job in and of itself. It’s worth it, though—trust me.

* Not a given in the physical sciences at large research universities, even now.