The other day as I was recording a video (on planes of symmetry and chirality) for my organic chemistry class, I realized that the video wasn’t really about what I thought it was about. While my lips were moving and I was scribbling on the screen, I was busy contemplating a new way to think about chirality, one that had never really crossed my mind before. It was a very interesting moment!
One reason I like recording videos is that it gives me that feeling of being on the hot seat, of playing to an audience surrounded by distractions and burdened with a limited attention span. An interesting structure, logical consistency, and “sticky” take-home ideas are all essential. In a face-to-face environment where a student can meet you halfway and instructor and student engage in dialogue, much of that pressure is off (though dialogue has a completely different set of challenges!). Good questioning and a warm demeanor can draw students into a dialogue, but great videos have to be fundamentally compelling in and of themselves.
While recording this video on chirality, I got to thinking about the difficulties some students have in seeing that chiral molecules lack a plane of symmetry. I’ve seen students who gain great facility with identifying planes of symmetry in achiral molecules, but who don’t build enough confidence to assert that such-and-such chiral molecule has no planes of symmetry at all. I ended up having to pause the recording—how do organic chemists see the lack of a plane of symmetry in a chiral molecule? How do we see something that’s not there?
Brute force is one option: we could try reflection through every possible internal mirror plane and verify that none of them leaves the appearance of the (chiral) molecule unchanged. Though a computer might be able to approximate this in some reasonable time frame, no human could hope to apply this approach with any success. To really make progress, we need a set of candidate mirror planes that are the most likely to be planes of symmetry. Given an efficient method for generating a few candidate planes, we can try reflecting through them to arrive at a good educated guess about whether a molecule is chiral (“educated” in the sense that the guess is not rigorous but still correct something like 99% of the time).
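To make the single-plane test concrete, here’s a minimal sketch in Python: reflect every atom through a candidate plane and check whether the result is indistinguishable from the original molecule. The coordinate representation, example molecule, and tolerance are my own illustrative assumptions, not anything a working chemist would actually type out.

```python
def reflect(point, normal):
    """Reflect a 3D point through the plane through the origin with the given normal."""
    norm = sum(c * c for c in normal) ** 0.5
    n = tuple(c / norm for c in normal)
    d = sum(p * c for p, c in zip(point, n))  # signed distance from the plane
    return tuple(p - 2 * d * c for p, c in zip(point, n))

def is_mirror_plane(atoms, normal, tol=1e-3):
    """True if reflecting every atom lands on an atom of the same element."""
    for elem, xyz in atoms:
        image = reflect(xyz, normal)
        if not any(e == elem and all(abs(a - b) < tol for a, b in zip(image, x))
                   for e, x in atoms):
            return False
    return True

# Water (idealized coordinates, O at the origin, molecule lying in the xz-plane):
water = [("O", (0.0, 0.0, 0.0)),
         ("H", (0.76, 0.0, 0.59)),
         ("H", (-0.76, 0.0, 0.59))]
print(is_mirror_plane(water, (1.0, 0.0, 0.0)))  # True: reflection swaps the two H atoms
print(is_mirror_plane(water, (0.0, 0.0, 1.0)))  # False: the xy-plane flips the molecule over
```

The brute-force catch is that the space of candidate planes (every orientation and position) is continuous, so a test like this is only useful once something has whittled the candidates down to a handful.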
Enter the idea of “corresponding structures,” identical portions of a molecule that must either stay put or exchange positions upon reflection through a plane of symmetry. In practice, we use corresponding structures to whittle down the list of candidate planes of symmetry: a huge range of possibilities is cut down to three or four at most. If one of them is a “hit” we call the molecule achiral right then and there; if not, we also know with great confidence that the molecule is chiral (never mind inversion centers).
If a student in conversation had asked me “how do you see something that isn’t there?”, I might have awkwardly fumbled my way to this idea. But recording a video gave me a chance to do it in an artificial environment, which was very cool. I wonder if anyone has studied the development of teaching skills during preparation of digital content?
It’s that time of year again: labs are gearing up. Drawers are being filled with new glassware, students donning lab coats are beginning to fill the halls…the ol’ machine is revving up to roll again. For me, this time of the semester means emphasizing good practices when working up data and results. I’ve written in past semesters about significant figures and some of the interesting issues that come up when teaching them—it’s about the mindset, not the rules, I swear…!
Take percent yield, a measure that has been reported with false precision by countless students over the generations. Percent yield is really interesting because the balance is one of the most precise instruments in the general chemistry laboratory—depending on its range and precision, measurements with five, six, even seven significant figures are possible. Thus, it seems like percent yields (which are really just ratios of masses) should in turn have five, six, or seven significant figures. The measured mass of product points to this level of precision.
Students often struggle to understand that the precision of the balance is irrelevant—inevitable variations between runs of the reaction introduce massive uncertainty into yields. Such variations are gargantuan compared to imprecision in balance measurements and essentially render the precision of the balance meaningless.
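A quick numerical sketch makes the point. The masses below are invented for illustration; the only thing that matters is that the run-to-run scatter in yield (a few percent) dwarfs what the balance contributes (a few thousandths of a percent).

```python
# Illustrative numbers only: five replicate runs of the same reaction,
# product masses read from a balance good to ±0.0001 g.
from statistics import mean, stdev

theoretical_mass = 2.5000  # g, from the limiting reagent (assumed)
product_masses = [1.9213, 2.1047, 1.8562, 2.0331, 1.9778]  # g, one per run

yields = [100 * m / theoretical_mass for m in product_masses]
balance_uncertainty = 100 * 0.0001 / theoretical_mass  # in percent-yield units

print(f"mean yield: {mean(yields):.0f}%")
print(f"run-to-run scatter (std dev): {stdev(yields):.1f}%")
print(f"balance contribution: {balance_uncertainty:.3f}%")
```

With numbers like these the honest report is “79%,” not “79.1448%”: the scatter between runs, not the balance, sets the last meaningful digit.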
In my opinion, the most awkwardly named reaction in all of chemistry is electrophilic aromatic substitution (and all of its three-word cousins). This name suffers from the same problem as other named reactions: it is deceptively uninformative. I still recall raising an eyebrow in undergrad when I found out that the aromatic involved in this reaction is not the electrophile—the other reagents combine to generate the electrophile. The aromatic is the nucleophile. “Why the heck is the word ‘electrophilic’ stuffed before ‘aromatic’ in the name, then?!” When you really get down to it, the name doesn’t tell you much and has the potential to feed a novice a lot of incorrect information:
“So the reaction mixture is electrophilic, then?”
“Well no, the reaction involves a nucleophile and an electrophile, just like all polar organic reactions…”
“So the aromatic is electrophilic?”
“No, the aromatic is the nucleophile in these reactions.”
“But the name says electrophilic aromatic…!”
[Professor places face in palms]
Only once the student has seen copious examples of other electrophilic substitutions does it click that the adjective refers to the conditions surrounding the substrate, not the substrate itself. The naming convention makes sense to a synthetic chemist interested in “decorating” a given substrate: the substrate is what it is, and we treat it with electrophilic or nucleophilic conditions to add groups to it. The names of substitution reactions describe the reactivity of whatever comes into contact with the substrate (the reagents). To a student without a synthetic frame of mind, though, without an inkling of the primacy of the substrate or even its identity, I don’t think this naming convention comes naturally.
I recently returned from a vacation in the UK, where I spent a couple of days in the West End of Glasgow, near Kelvingrove Park. Yes, the same Kelvin of scientific fame! Seeing his statue got me thinking about the second law of thermodynamics—enough that I was inspired to jot down a few thoughts about it.
The second law and entropy are two of the hardest topics to write about at a general chemistry level, in my opinion. Not only has there been fierce debate over the years as to the ideal intuitive notions and analogies for these topics, but related derivations with mathematical rigor can be painfully complicated. There’s a gulf here between the theory and practice of chemical thermodynamics that is difficult to navigate.
A while back I tried just to get down on paper a rigorous derivation of the definition of entropy in terms of heat and temperature, using the second law and a hypothetical thermodynamic cycle. The work was mathematically correct, but rereading it recently made me—the author, mind you!—want to tear my eyeballs out. That text will never see the light of day in a general chemistry class. At that point, I wondered whether I was even capable of dispensing with rigor to write a more intuitive piece. I’ve always found it difficult to write while sacrificing rigor, because I still recall craving rigor and theory in the depths of my soul as a student.
The reality, of course, is that all chemists use heuristics, shortcuts, or metaphors when confronted with certain topics. The best chemist writers can navigate rigorous theory and metaphor with finesse, presenting derivations where the mind “wants” them and metaphors elsewhere. Tro is a good example—while he makes no effort to dumb down important equations, he also presents the practical metaphors that chemists most often use.
In the edition I have, he even manages to lay out all three general interpretations of entropy: entropy as disorder, entropy as energy dispersal, and the statistical interpretation. Color me jealous!
A simple question with a not-so-straightforward answer. Everyone learns about the infamous “bell curve” in one way or another—but why does randomly distributed data work this way? After all, we could imagine all kinds of wonky shapes for probability distributions. The strangeness of quantum chemistry even shows us that odd-looking probability distributions can occur “naturally.” Yet there’s something deeply intuitive about the bell shape.
For example, the normal distribution is consistent with the intuition that randomly distributed data should cluster symmetrically about a mean. Probability decays to zero as we move away from the mean in a kind of sigmoidal way: the drop is slow at first, steepest around one standard deviation out (the curve’s inflection point), and slow again as p inches towards zero. Yet correspondence with our intuitions doesn’t give the normal distribution theoretical legitimacy: the hydrogenic 1s orbital has similar properties, after all. What’s so special about the normal distribution?
Although this is a question I’ve had for many years, I stumbled into the answer recently in an unexpected context: random diffusion. The answer gets right to the heart of what we mean by the word random, particularly with respect to the behavior of data (or little diffusing particles, in a diffusion context). If we imagine data points jiggling like little particles in a fluid, then random errors “nudge” each point to the left or right with equal probability (50%); “random” here means it’s impossible to predict which way any given point will move. Randomly distributed data behaves just like little diffusing particles engaging in a random walk.
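This picture is easy to test numerically. The sketch below (all parameter choices are arbitrary) sends many walkers on symmetric ±1 random walks and checks two signatures of normality: the spread grows like the square root of the number of steps, and roughly 68% of walkers end up within one standard deviation of the mean.

```python
import random
from statistics import mean, stdev

random.seed(0)  # reproducible illustration
n_walkers, n_steps = 10_000, 200

def walk(n):
    # each step moves the particle +1 or -1 with equal probability
    return sum(random.choice((-1, 1)) for _ in range(n))

positions = [walk(n_steps) for _ in range(n_walkers)]

m, s = mean(positions), stdev(positions)
print(f"mean position: {m:.2f} (theory: 0)")
print(f"std dev: {s:.1f} (theory: sqrt(n_steps) = {n_steps ** 0.5:.1f})")

# Central limit theorem: final positions are approximately normal, so
# about 68% of walkers should lie within one std dev of the mean.
within = sum(abs(p - m) <= s for p in positions) / n_walkers
print(f"fraction within one std dev: {within:.2f}")
```

The bell shape, in other words, is what you get whenever many small, independent, direction-agnostic nudges pile up; that is the central limit theorem doing its work.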
I experienced a wake-up call recently when a student dropped by to ask about unit cells. Wow, I realized, I know nothing about crystal structures. Analyzing simple cubic on the fly is easy enough, but the close-packed structures require quite a bit of mental gymnastics. Working mostly on the lab side of things, I don’t often think about this topic (although some nice activities with solids as their focus have been developed).
While the visualization skills needed to understand unit cells inside and out can turn students off, they’re a classic example of how chemists use microscopic structure and properties to explain and predict macroscopic phenomena. Stripping away the messy details, there are relatively few properties of unit cells that general chemists care about:
- Packing fraction (also interesting from a physical and mathematical point of view)
- Hole geometry and count
- Dimensions and atomic/molecular radii
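The packing fractions, at least, take only a few lines to compute once you know along which direction the spheres touch in each cubic cell. A sketch of that standard geometry (unit-sphere radii are an arbitrary convenience):

```python
# Packing fractions for the three cubic structures, from the relationship
# between the sphere radius r (taken as 1) and the cell edge a.
from math import pi, sqrt

def packing_fraction(atoms_per_cell, edge_in_radii):
    """Fraction of the cubic cell volume occupied by spheres of radius 1."""
    sphere_volume = atoms_per_cell * (4 / 3) * pi
    return sphere_volume / edge_in_radii ** 3

# simple cubic: spheres touch along the edge, so a = 2r
# bcc: spheres touch along the body diagonal, so a*sqrt(3) = 4r
# fcc: spheres touch along the face diagonal, so a*sqrt(2) = 4r
for name, n, a in [("simple cubic", 1, 2),
                   ("bcc", 2, 4 / sqrt(3)),
                   ("fcc", 4, 4 / sqrt(2))]:
    print(f"{name:13s} {packing_fraction(n, a):.3f}")
```

The familiar values (about 0.52, 0.68, and 0.74) drop right out, and fcc’s 0.74 is the close-packed maximum for hard spheres.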
This video is a great introduction to the most important crystal structures from a materials science point of view. The best thing a student can do, in my opinion, is use a physical model to build up the structures herself—this is particularly true for the close-packed structures, which to me have a kind of magical allure. How can there be two ways to pack hard spheres as closely as possible? The answer becomes apparent after you’ve stacked up two planes of close-packed spheres…
Two layers of close-packed spheres. Note the two types of pockets for the next layer!
The next layer of spheres will sit down in the triangular “pockets” between the red spheres, but there are two inequivalent types of pockets: those above tetrahedral holes and those above octahedral holes. Because of the size of the spheres, the two types of pockets cannot both be occupied at the same time. Ergo, there are two inequivalent close-packed structures!
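Incidentally, the two kinds of holes also differ in size. For close-packed spheres of radius R, straightforward geometry gives the radius of the largest sphere that fits in each hole; the short computation below is a sketch of that standard result.

```python
from math import sqrt

R = 1.0  # radius of the close-packed spheres (arbitrary units)

# Octahedral hole: along an fcc cell edge, a = 2R + 2r while a = 2*sqrt(2)*R
# (face diagonal = 4R), so r = (sqrt(2) - 1) * R.
r_oct = (sqrt(2) - 1) * R

# Tetrahedral hole: the hole center sits at the centroid of four touching
# spheres, a distance sqrt(3/2)*R from each center, so r = (sqrt(3/2) - 1) * R.
r_tet = (sqrt(3 / 2) - 1) * R

print(f"octahedral hole radius:  {r_oct:.3f} R")
print(f"tetrahedral hole radius: {r_tet:.3f} R")
```

The octahedral hole (about 0.414 R) is nearly twice as roomy as the tetrahedral one (about 0.225 R), which is part of why hole geometry and count earn a spot on the short list of unit-cell properties above.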
My predecessor left a ton of books behind in my office, and among some of his old stuff I found a wonderful book called The Pupil as Scientist. The author, Rosalind Driver, makes the case that students develop scientific explanations of what they see long before they set foot into a science classroom. Children’s natural scientific curiosity is both an asset and a liability: teachers can tap into it to engage students in deeper scientific learning, but it can be a source of robust misconceptions, too.
Driver beautifully describes a tension I’ve observed teaching college freshman laboratories. On the one hand, we want students to observe and discover scientific principles in the laboratory, and leaving procedures open-ended is part of that goal. On the other hand, the impact of an experiment seems greatest when it’s done properly, according to a prescribed procedure that yields “good results.” Good data is typically a prerequisite for grappling with complex scientific concepts, but inquiry-based labs open the door to bad data and incorrect conclusions. How can we properly balance these two opposing forces?