My predecessor left a ton of books behind in my office, and among some of his old stuff I found a wonderful book called The Pupil as Scientist. The author, Rosalind Driver, makes the case that students develop scientific explanations of what they see long before they set foot into a science classroom. Children’s natural scientific curiosity is both an asset and a liability: teachers can tap into it to engage students in deeper scientific learning, but it can be a source of robust misconceptions, too.
Driver describes beautifully a tension I’ve observed teaching college freshman laboratories. On the one hand, we want students to observe and discover scientific principles in the laboratory, and leaving procedures open-ended is part of that goal. On the other hand, the impact of an experiment seems greatest when it’s done properly, according to a prescribed procedure that yields “good results.” Good data is typically a prerequisite for grappling with complex scientific concepts, but inquiry-based labs open the door to bad data or incorrect conclusions. How can we properly balance these two opposing forces?
In a recent authoring project on organic chemistry, I came across the following statement:
A chain mechanism involves two or more repeating steps.
Is this a true statement? Well, yes and no. Yes, a chain mechanism involves the same process happening again and again. But so does a catalytic mechanism—are both mechanisms the same? If they were, we’d just call all chain mechanisms catalytic (it sounds much better, right?). In fact, the two are not the same, and there’s far more to the definition of a chain mechanism than two repeating steps.
Naively, chain initiators (let’s use radical initiators for the present discussion) look a lot like catalysts. They’re around in substoichiometric amounts and they promote the combination of reactants that would otherwise sit dormant. Clearly then, they decrease the activation energy of the reaction relative to a situation without initiator. However, radical initiators are missing a key feature of catalysts: they are consumed by the reaction. They’re about as close as one can get to a catalyst without being a catalyst!
In chemistry, quantum mechanics and orbital theory often rub up uncomfortably against more naïve bonding theories, such as VSEPR and Lewis structures. For example, VSEPR tends to give the impression that the positions of lone pairs (or better yet, the orientations of filled non-bonding orbitals) are dictated by the number of electronic domains around the atom. Water, then, which has four electronic domains around oxygen—two single bonds and two lone pairs—apparently has lone pairs at 109.5° angles in a plane perpendicular to the H–O–H plane. The carbonyl oxygen, which VSEPR suggests is “really” trigonal, has two “rabbit-ear” lone pairs at 120° angles. These pictures make the lone pairs look equivalent, and help us slot these structures in mentally with analogous structures, like imines (for the carbonyl) and ammonia (for water).
Yet, MO theory suggests that the lone pairs shown are not in equivalent orbitals! The simplest explanation for water is that the atomic 2p orbital on oxygen perpendicular to the H–O–H plane cannot interact with the 1s orbitals on hydrogen (there’s zero net overlap), so one of the 2p orbitals on oxygen must show up as a non-bonding molecular orbital. But this orbital can only hold two electrons, so the other lone-pair-bearing orbital on oxygen must be a hybrid. In a nutshell, one lone pair is best characterized as a π MO (the pure 2p orbital) while the other is a σ MO. The two lone pairs are not equivalent. As it turns out, this situation holds up even for lone-pair-bearing atoms in larger molecules. The inequivalency holds for both canonical and natural bond orbitals (NBOs), but the paper that inspired this post focuses on the usefulness of NBOs in the correct description.
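The “zero net overlap” claim can be stated compactly. A sketch of the standard symmetry argument follows (the axis convention is my own): place the molecule in the yz-plane, so the out-of-plane oxygen orbital is 2p<sub>x</sub>. The hydrogen nuclei then sit exactly on the nodal plane of that orbital, so the positive and negative contributions to each overlap integral cancel:

```latex
% The H 1s orbitals are symmetric with respect to the molecular (yz) plane,
% while the O 2p_x orbital is antisymmetric across it, so the overlap vanishes:
S = \int \phi^{\mathrm{O}}_{2p_x}\,\phi^{\mathrm{H}}_{1s}\,d\tau = 0
```

With no hydrogen orbital of matching symmetry to mix with, 2p<sub>x</sub> comes out of the MO treatment unchanged, as the non-bonding π-type lone pair described above.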
One of my favorite books is Nate Silver’s The Signal and the Noise. Silver is well known as the founder of fivethirtyeight.com, a data-driven news site covering everything from the American economic outlook to the ethnic distribution of NBA fans. In his book, Silver describes his philosophy of prediction and champions Bayesian reasoning. He sensibly asserts that a firm understanding of statistics and probability is essential for making good predictions. Reading the book leaves me wondering about the intersection of statistics, probability, and chemistry.
Of course, chemical theory owes a great deal to statistics and probability. Quantum mechanics is an entirely probabilistic theory, although the concrete orbital shapes organic chemists tend to draw tempt us to think otherwise. Statistical mechanics is built on the idea that a collection of trillions upon trillions of molecules behaves like a massive sample. No social science experiment could ever hope to approach our sample size! In this context, the challenge is developing a theory that fits our clearly high-quality samples (and the theory of statistical mechanics is notoriously complex).
In other areas of chemistry, however, probability and statistics are unfortunately absent. Chemical reactions and synthesis come to mind: one can imagine a reaction system as governed by a set of probabilities—one for each reaction that might occur. The distribution of products formed before isolation depends on these probabilities. When it comes to organic chemistry, most compounds of appreciable size contain multiple functional groups, each of which is susceptible to reactions with sufficiently harsh reagents. Methods development and synthetic planning both involve minimizing the probability of undesirable processes—even if their likelihoods cannot be reduced to exactly zero. Using computer programs to aid synthesis has fallen out of fashion (unless SciFinder counts), but I can imagine a next-generation synthesis program as a Watson-esque guide that lays out several different routes with probabilities of success or “optimal-ness,” based on data from the literature.*
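The idea of a product distribution governed by per-pathway probabilities can be sketched with a toy Monte Carlo simulation. Everything here is hypothetical: the function name, the product labels, and the branching ratios are illustrative assumptions, not data from any real reaction.

```python
import random

def simulate_products(branch_probs, n_molecules=100_000, seed=42):
    """Toy model: each molecule independently follows one of several
    competing pathways, chosen according to its branching probability.
    branch_probs maps a product label to its probability (summing to 1)."""
    rng = random.Random(seed)
    products = list(branch_probs)
    weights = [branch_probs[p] for p in products]
    counts = dict.fromkeys(products, 0)
    for _ in range(n_molecules):
        counts[rng.choices(products, weights=weights)[0]] += 1
    # Return the simulated mole fractions of each product
    return {p: c / n_molecules for p, c in counts.items()}

# Hypothetical branching ratios for a substrate with two reactive sites
dist = simulate_products({"desired": 0.85, "side product": 0.15})
```

For large samples the simulated fractions converge on the input probabilities, which is exactly the point: before isolation, the crude mixture is a sample drawn from the reaction’s underlying probability distribution.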
Remember middle-school dances? I have fond memories of the gym floor transforming into a capacitor of sorts, flanked by rows of pubescent boys and girls pressed as far apart from one another as possible. Although a few daring couples would wander out to the dance floor, generally action would happen only when a sufficiently large clique worked up the collective courage to form an awkward dancing circle. Talking to the opposite sex—aptly called “making one’s move”—was positively painful back then.
Eventually, for the most part, we all get over our fear of the opposite sex (or the same sex, if that’s your gig) and move on with our lives. For me, the process of building up a solid conversational repertoire and comfort in my own skin took years. Imagine, then, how a teacher feels: he or she must use those conversational skills and a set of “moves” to help a new crop of students learn a complex topic over the course of a semester! It can be just as painful to watch a teacher barking at a disengaged student as it is to watch middle schoolers blow it at a dance. And yes, it can be just as awkward and gut-wrenching when you’re the teacher doing the barking (or the middle-school dancer failing hard with your crush).
How do teachers decide what to say? How can we distinguish good moves from bad? What are the motives behind different types of moves? I was reminded of these questions while reading an excellent J. Chem. Educ. article last week. Warfa and co-workers studied the moves used by teachers in a POGIL classroom—where new norms for teaching and learning can make both students and teachers uneasy. They categorized teachers’ moves according to whether the move occurred in a monologic or dialogic context. Monologic discourse involves a one-way transfer of information from teacher to student, with little to no student input (think Hamlet’s “to be or not to be”). Dialogic discourse, on the other hand, involves a social dimension and sharing of ideas between teacher and student (think Socrates). Both types of discourse are important in the chemistry classroom, but figuring out the proper balance to meet the needs of students in a particular classroom environment is tricky.
At some point during my personal education in chemistry, I abandoned the use of “H+” to represent “what forms when an acid is placed in water” and switched over to writing “H3O+.” The way I see it, the moment came when my desire to be right and rigorous finally surpassed my urge to be efficient—taking the time to write the extra symbols was suddenly worth the trouble, once I realized that it was apparently a matter of correctness. What one rarely considers as an undergrad is the idea that “H+” wouldn’t have survived to the present day if it didn’t have a kernel of truth to it. What beleaguered chemists hiding out in dark, dusty laboratories are still fighting for the proton? What evidence could possibly bolster these defenders of the proton? Read on!
There is an interesting pedagogical dimension to this whole discussion. Oftentimes when students are learning a new concept or how to solve a new type of problem, functional shortcuts become apparent. These tricks minimize the mental effort associated with problem solving while working with high enough frequency to be tolerable—any trick that works more than 90% of the time is a winner! The catch, of course, is that shortcuts leave out important conceptual details and leave the student’s learning at a disadvantage. As a result, teachers tend to be highly opposed to them while students lap them up. Complicating the situation further, the effectiveness of a particular trick depends on the thoroughness of the teacher and the complexity of assigned problems.
I can’t speak for the broader community, but there was a time when I branded the use of “H+” one of these counterproductive tricks, and I think it’s a fairly common sentiment. For a very wide range of problems, simply writing “H+” works. The nugget of knowledge that the proton is not bare in acidic solutions is very rarely essential to the solution of a problem. Perhaps that fact annoys a lot of teachers—students can get by ignoring it, even though the claim has broader bogus implications (e.g., acids just fall apart in the gas phase, other bare cations can exist in aqueous solution, etc.). The feeling of annoyance encourages the idea in professors that the perpetuation of “H+” is a student-driven conspiracy!
It may not be a stretch to say that the study of reaction kinetics has claimed more hours of chemistry graduate student labor than any other enterprise. Waiting for a reaction to go to “completion” can require hours or even days, and one must keep a watchful eye on the data collection apparatus to avoid wasted runs. There’s a good chance that the guy who’s reserved the NMR all night long is battening down for a kinetics run.
All of that effort, of course, leads to supposedly valuable data. The party line in introductory chemistry courses is that under pseudo-first-order conditions, one can determine the order of a reactant in the rate law just by watching its concentration over time. We merely need to fit the data to each kinetic “scheme” (zero-, first-, and second-order kinetics) and see which fit looks best to ascertain the order. What could be simpler? The typical method—carried out by thousands (dare I say millions?) of chemistry students over the years—involves attempting to linearize the data by plotting [A] versus t, ln [A] versus t, and 1/[A] versus t. The transformation that leads to the largest R² value is declared the winner, and the rate constant and order of A are pulled directly from the “winning” equation.
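The classic linearization procedure is easy to sketch in code. What follows is a minimal illustration, not a rigorous treatment (real data would call for weighted fits and residual analysis, precisely because R² comparisons of transformed data can mislead); the function name and the synthetic first-order data are my own.

```python
import numpy as np

def guess_order(t, conc):
    """Linearize concentration data under zero-, first-, and second-order
    schemes and return the order whose straight-line fit gives the best R^2."""
    transforms = {
        0: conc,           # zero order:   [A]   = [A]0 - k t
        1: np.log(conc),   # first order:  ln[A] = ln[A]0 - k t
        2: 1.0 / conc,     # second order: 1/[A] = 1/[A]0 + k t
    }
    best_order, best_k, best_r2 = None, None, -np.inf
    for order, y in transforms.items():
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
        if r2 > best_r2:
            best_order, best_k, best_r2 = order, abs(slope), r2
    return best_order, best_k, best_r2

# Synthetic, noise-free first-order decay: [A]0 = 1.0 M, k = 0.30 /s
t = np.linspace(0, 10, 50)
a = 1.0 * np.exp(-0.30 * t)
order, k, r2 = guess_order(t, a)  # recovers order 1 and k = 0.30
```

With clean data the ln [A] transform is exactly linear and the procedure recovers the true order and rate constant; the trouble, as any graduate student who has stared at three nearly-straight plots knows, starts when noise makes the three R² values uncomfortably close.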