In a recent authoring project on organic chemistry, I came across the following statement:
A chain mechanism involves two or more repeating steps.
Is this a true statement? Well, yes and no. Yes, a chain mechanism involves the same process happening again and again. But so does a catalytic mechanism—are both mechanisms the same? If they were, we’d just call all chain mechanisms catalytic (it sounds much better, right?). In fact, the two are not the same, and there’s far more to the definition of a chain mechanism than two repeating steps.
Naively, chain initiators (let’s use radical initiators for the present discussion) look a lot like catalysts. They’re around in substoichiometric amounts and they promote the combination of reactants that would otherwise sit dormant. Clearly then, they decrease the activation energy of the reaction relative to a situation without initiator. However, radical initiators are missing a key feature of catalysts: they are consumed by the reaction. They’re about as close as one can get to a catalyst without being a catalyst!
In chemistry, quantum mechanics and orbital theory often rub up uncomfortably against more naïve bonding theories, such as VSEPR and Lewis structures. For example, VSEPR tends to give the impression that the positions of lone pairs (or better yet, the orientations of filled non-bonding orbitals) are dictated by the number of electronic domains around the atom. Water, then, which has four electronic domains around oxygen—two single bonds and two lone pairs—apparently has lone pairs at 109.5° angles in a plane perpendicular to the H–O–H plane. The carbonyl oxygen, which VSEPR suggests is “really” trigonal, has two “rabbit-ear” lone pairs at 120° angles. These pictures make the lone pairs look equivalent, and help us slot these structures in mentally with analogous structures, like imines (for the carbonyl) and ammonia (for water).
Yet, MO theory suggests that the lone pairs shown are not in equivalent orbitals! The simplest explanation for water is that the atomic 2p orbital on oxygen perpendicular to the H–O–H plane cannot interact with the 1s orbitals on hydrogen (there’s zero net overlap), so one of the 2p orbitals on oxygen must show up as a non-bonding molecular orbital. But this orbital can only hold two electrons, so the other lone-pair-bearing orbital on oxygen must be a hybrid. In a nutshell, one lone pair is best characterized as a π MO (the pure 2p orbital) while the other is a σ MO. The two lone pairs are not equivalent. As it turns out, this situation holds up even for lone-pair-bearing atoms in larger molecules. The inequivalency holds for both canonical and natural bond orbitals (NBOs), but the paper that inspired this post focuses on the usefulness of NBOs in the correct description.
One of my favorite books is Nate Silver’s The Signal and the Noise. Silver is well known as the founder of fivethirtyeight.com, a data-driven news site covering everything from the American economic outlook to the ethnic distribution of NBA fans. In his book, Silver describes his philosophy of prediction and champions Bayesian reasoning. He sensibly asserts that a firm understanding of statistics and probability is essential for making good predictions. Reading the book leaves me wondering about the intersection of statistics, probability, and chemistry.
Of course, chemical theory owes a great deal to statistics and probability. Quantum mechanics is an entirely probabilistic theory, although the concrete orbital shapes organic chemists tend to draw tempt us to think otherwise. Statistical mechanics is built on the idea that a collection of trillions upon trillions of molecules behaves like a massive sample. No social science experiment could ever hope to approach our sample size! In this context, the challenge is developing a theory that fits our clearly high-quality samples (and the theory of statistical mechanics is notoriously complex).
In other areas of chemistry, however, probability and statistics are unfortunately absent. Chemical reactions and synthesis come to mind: one can imagine a reaction system as governed by a set of probabilities—one for each reaction that might occur. The distribution of products formed before isolation depends on these probabilities. When it comes to organic chemistry, most compounds of appreciable size contain multiple functional groups, each of which is susceptible to reactions with sufficiently harsh reagents. Methods development and synthetic planning both involve minimizing the probability of undesirable processes—even if their likelihoods cannot be reduced to exactly zero. Using computer programs to aid synthesis has fallen out of fashion (unless SciFinder counts), but I can imagine a next-generation synthesis program as a Watson-esque guide that lays out several different routes with probabilities of success or “optimal-ness,” based on data from the literature.*
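To make the picture of a reaction system as a set of probabilities concrete, here’s a toy Monte Carlo sketch. The pathways and their branching probabilities are entirely invented for illustration; the point is just that a product distribution falls out of sampling the probabilities many times over.

```python
import random

# Toy sketch: treat a substrate's competing reactions as a set of
# probabilities and sample the resulting product distribution.
# The pathways and weights below are made up for illustration.
pathways = {"desired product": 0.80, "over-oxidation": 0.15, "decomposition": 0.05}

random.seed(42)  # reproducible "flask"
counts = {p: 0 for p in pathways}
for _ in range(10_000):  # 10,000 "molecules" react independently
    outcome = random.choices(list(pathways), weights=list(pathways.values()))[0]
    counts[outcome] += 1

for product, n in counts.items():
    print(f"{product}: {n / 10_000:.1%}")
```

The sampled percentages hover near the input probabilities, and they sharpen as the number of molecules grows—our flasks, with their trillions of molecules, are very large samples indeed.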
Remember middle-school dances? I have fond memories of the gym floor transforming into a capacitor of sorts, flanked by rows of pubescent boys and girls pressed as far apart from one another as possible. Although a few daring couples would wander out to the dance floor, generally action would happen only when a sufficiently large clique worked up the collective courage to form an awkward dancing circle. Talking to the opposite sex—aptly called “making one’s move”—was positively painful back then.
Eventually, most of us get over our fear of the opposite sex (or the same sex, if that’s your gig) and move on with our lives. For me, building up a solid conversational repertoire and comfort in my own skin took years. Imagine, then, how a teacher feels: he or she must use those conversational skills and a set of “moves” to help a new crop of students learn a complex topic in the span of a semester! It can be just as painful to watch a teacher barking at a disengaged student as it is to watch middle schoolers blow it at a dance. And yes, it can be just as awkward and gut-wrenching when you’re the teacher doing the barking (or the middle-school dancer failing hard with your crush).
How do teachers decide what to say? How can we distinguish good moves from bad? What are the motives behind different types of moves? I was reminded of these questions while reading an excellent J. Chem. Educ. article last week. Warfa and co-workers studied the moves used by teachers in a POGIL classroom—where new norms for teaching and learning can make both students and teachers uneasy. They categorized teachers’ moves according to whether the move occurred in a monologic or dialogic context. Monologic discourse involves a one-way transfer of information from teacher to student, with little to no student input (think Hamlet’s “to be or not to be”). Dialogic discourse, on the other hand, involves a social dimension and sharing of ideas between teacher and student (think Socrates). Both types of discourse are important in the chemistry classroom, but figuring out the proper balance to meet the needs of students in a particular classroom environment is tricky.
At some point during my personal education in chemistry, I abandoned the use of “H+” to represent “what forms when an acid is placed in water” and switched over to writing “H3O+.” The way I see it, the moment came when my desire to be right and rigorous finally surpassed my urge to be efficient—taking the time to write the extra symbols was suddenly worth the trouble, once I realized that it was apparently a matter of correctness. What one rarely considers as an undergrad is the idea that “H+” wouldn’t have survived to the present day if it didn’t have a kernel of truth to it. What beleaguered chemists hiding out in dark, dusty laboratories are still fighting for the proton? What evidence could possibly bolster these defenders of the proton? Read on!
There is an interesting pedagogical dimension to this whole discussion. Oftentimes when students are learning a new concept or how to solve a new type of problem, functional shortcuts become apparent. These tricks minimize the mental effort associated with problem solving while working with high enough frequency to be tolerable—any trick that works more than 90% of the time is a winner! The catch, of course, is that shortcuts leave out important conceptual details and leave the student’s learning at a disadvantage. As a result, teachers tend to be highly opposed to them while students lap them up. Complicating the situation further, the effectiveness of a particular trick depends on the thoroughness of the teacher and the complexity of assigned problems.
I can’t speak for the broader community, but there was a time when I branded the use of “H+” one of these counterproductive tricks, and I think it’s a fairly common sentiment. For a very wide range of problems, simply writing “H+” works. The nugget of knowledge that the proton is not bare in acidic solutions is very rarely essential to the solution of a problem. Perhaps that fact annoys a lot of teachers—students can get by ignoring it, even though the claim has broader bogus implications (e.g., acids just fall apart in the gas phase, other bare cations can exist in aqueous solution, etc.). The feeling of annoyance encourages the idea in professors that the perpetuation of “H+” is a student-driven conspiracy!
It may not be a stretch to say that the study of reaction kinetics has claimed more hours of chemistry graduate student labor than any other enterprise. Waiting for a reaction to go to “completion” can require hours or even days, and one must keep a watchful eye on the data-collection apparatus to avoid wasted runs. There’s a good chance that the guy who’s reserved the NMR all night long is battening down for a kinetics run.
All of that effort, of course, leads to supposedly valuable data. The party line in introductory chemistry courses is that under pseudo-first-order conditions, one can determine the order of a reactant in the rate law just by watching its concentration over time. We merely need to fit the data to each kinetic “scheme” (zero-, first-, and second-order kinetics) and see which fit looks best to ascertain the order. What could be simpler? The typical method—carried out by thousands (dare I say millions?) of chemistry students over the years—involves attempting to linearize the data by plotting [A] versus t, ln [A] versus t, and 1/[A] versus t. The transformation that leads to the largest R² value is declared the winner, and the rate constant and order of A are pulled directly from the “winning” equation.
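The classic procedure is easy to sketch in a few lines of Python. The data below are synthetic, noise-free first-order decay (the rate constant and initial concentration are made-up values), so the ln [A] versus t transformation should win by construction:

```python
import numpy as np

# Synthetic first-order decay: [A] = [A]0 * exp(-k t)
# (k = 0.35 and [A]0 = 1.0 are invented for illustration)
t = np.linspace(0.0, 10.0, 20)
A = 1.0 * np.exp(-0.35 * t)

def r_squared(x, y):
    """R^2 for a straight-line (degree-1) least-squares fit of y vs. x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Linearize under each kinetic scheme and compare R^2 values:
# zero-order:   [A]  vs t,  first-order: ln [A] vs t,  second-order: 1/[A] vs t
fits = {
    "zero-order": r_squared(t, A),
    "first-order": r_squared(t, np.log(A)),
    "second-order": r_squared(t, 1.0 / A),
}
winner = max(fits, key=fits.get)
print(winner, fits[winner])  # first-order, R^2 essentially 1 for this clean data
```

With real, noisy data the three R² values can land uncomfortably close together, which is exactly why declaring the largest one “the winner” deserves more scrutiny than it usually gets.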
An interesting thing happened this week in my labs. We do a neat little exercise that treats pennies as isotopes. Pennies minted before 1982 have a different mass than pennies minted after 1982—like all things “of the future,” pennies got lighter in 1982. Students are provided with opaque film canisters of fifteen pennies and asked to determine the “isotopic abundance” of pre- and post-1982 pennies in their canisters. The canisters are glued shut, but standard pennies and empty canisters are available for weighing.
Student: Do my calculations look good?
mevans: Looks great.
The student had calculated something like 10.1025 pre-1982 pennies and 4.8975 post-1982 pennies. Variance in the masses of the canisters and pennies causes some non-ideality.
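The arithmetic behind the exercise is a two-equation linear system: total mass and total count. Here’s a sketch in Python, assuming the standard penny masses (pre-1982 copper pennies weigh about 3.11 g, post-1982 zinc pennies about 2.50 g); the measured contents mass is a hypothetical number chosen to land near the student’s result:

```python
# Back out the penny "isotope" counts from a measured total mass.
# Per-penny masses are approximate; measured_mass is hypothetical.
m_pre, m_post = 3.11, 2.50   # grams per penny, pre- and post-1982
total_pennies = 15
measured_mass = 43.67        # hypothetical mass of the canister's contents, g

# Solve: n_pre * m_pre + n_post * m_post = measured_mass
#        n_pre + n_post = total_pennies
n_pre = (measured_mass - total_pennies * m_post) / (m_pre - m_post)
n_post = total_pennies - n_pre

print(n_pre, n_post)                  # fractional "pennies" from the algebra
print(round(n_pre), round(n_post))    # fractional pennies are nonsense: round
```

The raw solution comes out fractional precisely because the measured masses carry uncertainty; rounding to whole pennies is where the physics of the situation overrides the arithmetic.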
Student: How many significant figures should I include?
Ouch. This question, to me, was evidence that the point of significant figures was lost on this student. Of course, I thought, just round to whole numbers, since fractional numbers of pennies are nonsensical.
mevans: Does a fraction of a penny make sense when counting them?
mevans: Then throw all the digits after the decimal away. Boom! Done.
Humans have a disturbing natural attraction to numbers, even when said numbers are nonsensical, so small as to be meaningless, or outright lies that ignore statistics (as when a calculation based on measured, uncertain values is reported to too many decimal places). Throwing numbers away is cognitively hard! Deep down, we know that a number with more significant digits is more precise, and we cling to those digits even if the precision is imaginary or nonsensical. A big part of science education is training the mind to overcome this deception and deal with numbers in a healthy way.
The saga continued. Students got the sensible idea to report isotopic abundances as percentages of pennies in the entire sample. New set of numbers, same set of issues with significant figures!
Student: How many significant figures should I include in the percentage?
Things suddenly got interesting! I have to admit that this question caught me off guard. The calculation is simple enough: 10 / 15 * 100. Dogmatic application of the “rules” for significant figures would produce the number “67%.” Yet, the exact ratio of pennies is known, since we know that there are fifteen pennies and—relying on the idea that fractional pennies are nonsensical—there is no uncertainty in the numbers of pennies. There is no uncertainty in the percentage at all, so it’s fine to report the percentage as “66.6 repeating.” Hm, perhaps there is more to significant figures than meets the eye!
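Python’s fractions module makes the same point nicely: because the penny counts are exact, the percentage can be carried as an exact rational number rather than rounded prematurely. (The numbers below are the ones from the exercise.)

```python
from fractions import Fraction

# Counts of pennies are exact integers: no measurement uncertainty at all.
pre, total = 10, 15
percentage = Fraction(pre, total) * 100

print(percentage)         # 200/3 -- exactly "66.6 repeating" percent
print(float(percentage))  # a decimal approximation, only when we need one
```

The exact value 200/3 never loses information; any decimal we write down is a choice about presentation, not precision.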
I’m fascinated by the link between significant figures and scientific misconduct. I think it’s rarely appreciated, but significant figures really are an issue of scientific misconduct. Reporting too many digits in a number is tantamount to lying about the precision of one’s instruments and ignoring (willfully or not) the impact of uncertainty on reported values. How do you get students—and other number-obsessed humans in the general public—to appreciate the contingency of scientific quantities?
I can’t resist one more fun fact about significant figures to finish this post. A value calculated from a logarithm (say an energy calculated from –RT ln K) has only as many significant figures as digits after the decimal place. Why? Think of the logarithm as a stand-in for a power of 10 (never mind the conversion of ln to log for a second). The integer part of a logarithm, then, is just a simpler way of writing “times ten to the power of…” It’s just the exponent part of a number in scientific notation—a placeholder that is never significant! The decimal portion of a logarithm, on the other hand, actually represents a number with meaning. Hence, only the numbers following the decimal point are significant in a logarithm-based value. Slick, eh?
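A quick numerical illustration of that split (the equilibrium constant below is a made-up value with two significant figures):

```python
import math

# Split log10(K) into its integer part (the scientific-notation exponent,
# never significant) and its decimal part (which carries the significance).
K = 4.7e8  # hypothetical equilibrium constant, two significant figures
log_K = math.log10(K)

exponent = math.floor(log_K)       # 8 -- "times ten to the power of...", a placeholder
mantissa_part = log_K - exponent   # ~0.672 -- the digits that actually mean something

print(log_K, exponent, mantissa_part)
```

Because K has two significant figures, log K deserves only two digits after the decimal point (8.67), no matter how many digits sit in front of it.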