It may not be a stretch to say that the study of reaction kinetics has claimed more hours of chemistry graduate student labor than any other enterprise. Waiting for a reaction to go to “completion” could require hours or even days, and one must keep a watchful eye on the data collection apparatus to avoid wasted runs. There’s a good chance that guy who’s reserved the NMR all night long is battening down for a kinetics run.
All of that effort, of course, leads to supposedly valuable data. The party line in introductory chemistry courses is that under pseudo-first-order conditions, one can determine the order of a reactant in the rate law just by watching its concentration over time. We merely need to fit the data to each kinetic “scheme” (zero-, first-, and second-order kinetics) and see which fit looks best to ascertain the order. What could be simpler? The typical method—carried out by thousands (dare I say millions?) of chemistry students over the years—involves attempting to linearize the data by plotting [A] versus t, ln [A] versus t, and 1/[A] versus t. The transformation that leads to the largest R² value is declared the winner, and the rate constant and order of A are pulled directly from the “winning” equation.
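The linearize-and-compare procedure fits in a few lines of Python. The concentration data below are synthetic (an ideal first-order decay with an invented rate constant and initial concentration), so the first-order transformation wins by construction; with real, noisy data the three R² values can be uncomfortably close.

```python
import math

# Hypothetical first-order decay: [A]0 = 0.50 M, k = 0.05 s^-1 (invented values).
t = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]
conc = [0.50 * math.exp(-0.05 * ti) for ti in t]

def r_squared(xs, ys):
    """R^2 of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# The three textbook transformations; the largest R^2 "wins."
fits = {
    "zero-order: [A] vs t": r_squared(t, conc),
    "first-order: ln[A] vs t": r_squared(t, [math.log(c) for c in conc]),
    "second-order: 1/[A] vs t": r_squared(t, [1 / c for c in conc]),
}
best = max(fits, key=fits.get)
```

Because the synthetic data follow the first-order rate law exactly, ln [A] versus t is perfectly linear and its R² is 1 to within floating-point error.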
An interesting thing happened this week in my labs. We do a neat little exercise that treats pennies as isotopes. Pennies minted before 1982 have a different mass than pennies minted after 1982—like all things “of the future,” pennies got lighter in 1982. Students are provided with opaque film canisters of fifteen pennies and asked to determine the “isotopic abundance” of pre- and post-1982 pennies in their canisters. The canisters are glued shut, but standard pennies and empty canisters are available for weighing.
Student: Do my calculations look good?
mevans: Looks great.
The student had calculated something like 10.1025 pre-1982 pennies and 4.8975 post-1982 pennies. Variance in the masses of the canisters and pennies causes some non-ideality.
Student: How many significant figures should I include?
Ouch. This question, to me, was evidence that the point of significant figures was lost on this student. Of course, I thought, just round to whole numbers, since fractional numbers of pennies are nonsensical.
mevans: Does a fraction of a penny make sense when counting them?
mevans: Then throw all the digits after the decimal away. Boom! Done.
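For the record, numbers like the student's fall out of two simultaneous equations: the counts must sum to fifteen, and the count-weighted masses must sum to the measured total. A sketch, using the typical per-penny masses (the pre-1982 copper and post-1982 zinc figures below are standard values, and the measured total is invented for illustration):

```python
# Per-penny masses: ~3.11 g (pre-1982 copper), ~2.50 g (post-1982 zinc).
m_pre, m_post = 3.11, 2.50
n_total = 15
measured = 43.66          # g of pennies (canister mass subtracted; invented)

# n_pre + n_post = n_total  and  n_pre*m_pre + n_post*m_post = measured.
# Substituting the first equation into the second and solving for n_pre:
n_pre = (measured - n_total * m_post) / (m_pre - m_post)
n_post = n_total - n_pre  # fractional "pennies" arise from measurement noise

# Fractional pennies are nonsensical, so throw the decimals away:
counts = (round(n_pre), round(n_post))
```

Here `n_pre` comes out to roughly 10.1, and rounding to whole coins gives ten pre-1982 and five post-1982 pennies.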
Humans have a disturbing natural attraction to numbers, even when said numbers are nonsensical, so small as to be meaningless, or outright lies that ignore statistics (as when a calculation based on measured, uncertain values is reported to too many decimal places). Throwing numbers away is cognitively hard! Deep down, we know that a number with more significant digits is more precise, and we cling to those digits even if the precision is imaginary or nonsensical. A big part of science education is training the mind to overcome this deception and deal with numbers in a healthy way.
The saga continued. Students got the sensible idea to report isotopic abundances as percentages of pennies in the entire sample. New set of numbers, same set of issues with significant figures!
Student: How many significant figures should I include in the percentage?
Things suddenly got interesting! I have to admit that this question caught me off guard. The calculation is simple enough: 10 / 15 * 100. Dogmatic application of the “rules” for significant figures would produce the number “67%.” Yet, the exact ratio of pennies is known, since we know that there are fifteen pennies and—relying on the idea that fractional pennies are nonsensical—there is no uncertainty in the numbers of pennies. There is no uncertainty in the percentage at all, so it’s fine to report the percentage as “66.6 repeating.” Hm, perhaps there is more to significant figures than meets the eye!
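Python's exact rationals make the point nicely: with exact integer counts, the percentage is an exact ratio, not a measurement.

```python
from fractions import Fraction

# Ten of fifteen pennies: exact counts, so an exact percentage.
# No instrument was involved, so no significant-figure cutoff applies.
pct = Fraction(10, 15) * 100   # reduces to 200/3, i.e. 66.666...% exactly
```

The result is the fraction 200/3, which no finite decimal can capture; the uncertainty is genuinely zero.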
I’m fascinated by the link between significant figures and scientific misconduct, a link that I think is rarely appreciated. Reporting too many digits in a number is tantamount to lying about the precision of one’s instruments and ignoring (willfully or not) the impact of uncertainty on reported values. How do you get students—and other number-obsessed humans in the general public—to appreciate the contingency of scientific quantities?
I can’t resist one more fun fact about significant figures to finish this post. The value of a logarithm (say, the ln K in an energy calculated from –RT ln K) has only as many significant figures as it has digits after the decimal point. Why? Think of the logarithm as a stand-in for a power of 10 (never mind the conversion of ln to log for a second). The integer part of a logarithm, then, is just a simpler way of writing “times ten to the power of…” It’s just the exponent part of a number in scientific notation—a placeholder that is never significant! The decimal portion of a logarithm, on the other hand, actually represents a number with meaning. Hence, only the numbers following the decimal point are significant in a logarithm-based value. Slick, eh?
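The “exponent as placeholder” idea is easy to demonstrate: numbers with the same digits but different powers of ten share the decimal portion of their base-10 logarithm exactly (the digits 2.34 below are an arbitrary example).

```python
import math

# The integer part of a base-10 log only records the power of ten;
# the decimal part (the mantissa) carries the measured digits.
for x in (2.34e3, 2.34e7, 2.34e-2):
    lg = math.log10(x)
    print(f"log10({x:g}) = {lg:.4f}, decimal part = {lg % 1:.4f}")
```

All three lines print the same decimal part, 0.3692: moving the decimal point in x changes only the placeholder integer.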
What was your least favorite part of your science education? People who only took a few science courses in high school or college will often name an entire field (the answer I hear most often: “chemistry”). My least favorite part of science is more specific than that, and it’s not even close: error blew all the competition out of the water. I hated all things error (including the king of terrible error-related things, error propagation) with a burning, blinding passion. Reflecting on those feelings now makes me realize how far I’ve come intellectually since then, so it felt like a natural topic to write about. One hears a lot about intellectual development in books on teaching and student learning these days, and I’ve discovered that my experience with error mirrors some of the theories out there. Maybe they’re on to something…!
I was an idealist in college (and still am today, to a large degree). Really, I started out as what one might call a “deterministic idealist.” Science described the world in deterministic terms in the form of equations, and if I learned these equations I would become privy to all the secrets of the universe. A particular system (experiment, say) either conformed to one of these immutable equations or didn’t, and if it didn’t, there was another, deeper equation that just hadn’t been written down yet that could describe the system. Hence, my hatred of the idea of error: “I get it. Experiments are subject to error. That’s not the important part of this experiment; why the hell are we worrying about it?!” Take the compressibility factor of a gas: why care about error when what matters is measuring P, V, n, and T and calculating Z?
That haughty attitude persisted well into my early years in graduate school. It corresponds roughly to the first stage of Perry’s scheme of intellectual development, dualism. “To solve a problem in the lab, all I really need to do is measure the relevant stuff and apply the Right Equation(s) to The Measurement to obtain The Answer.” The deferential capitalization is intentional! I would get so angry propagating error because it felt utterly irrelevant to The Answer. It was nothing but busy work!
That perspective seems so immature and proud in retrospect…what about uncertainty? Why trust your instruments? Why trust your own shaky hands, or your blurry eyes? Somewhere during my education—well after undergrad, mind you—I figured out that the world doesn’t operate in black and white. Uncertainty is a fact of life, and it needs to be accounted for. This viewpoint is more like Perry’s third level, relativism. “Answers need to be backed up by good reasoning (and good scientific reasoning must take error into account).” Better, I thought. I had learned how to do error propagation in college, but I never really understood why it worked. Honestly, because of my intellectual level back then, I doubt I was even capable of learning how it worked. A frightening thought! I remember blindly applying the formula but never fully wrapping my mind around it.
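The formula in question was ordinary first-order propagation for uncorrelated errors. For a pure product/quotient like the compressibility factor Z = PV/nRT, it reduces to relative uncertainties adding in quadrature. A sketch, with all measured values and uncertainties invented for illustration:

```python
import math

# Z = PV/(nRT). For a product/quotient of uncorrelated measurements,
# (dZ/Z)^2 = (dP/P)^2 + (dV/V)^2 + (dn/n)^2 + (dT/T)^2.
P, dP = 1.013e5, 0.002e5   # Pa (invented measurement and uncertainty)
V, dV = 2.48e-2, 0.03e-2   # m^3
n, dn = 1.00, 0.01         # mol
T, dT = 298.0, 0.5         # K
R = 8.314                  # J/(mol K); treated as exact here

Z = P * V / (n * R * T)
rel_dZ = math.sqrt((dP / P) ** 2 + (dV / V) ** 2 + (dn / n) ** 2 + (dT / T) ** 2)
dZ = Z * rel_dZ            # absolute uncertainty in Z
```

With these numbers Z comes out near 1.014 with a relative uncertainty of about 1.6%, dominated by the (invented) volume term: the quadrature sum makes it obvious which measurement is worth improving, which is exactly the insight blind application of the formula hides.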
I’m very fond of “ranking problems” that ask students to order a series of compounds in some way and provide an explanation. One of my all-time favorites is the famous “rank these acids from most to least acidic…” problem, which might include compounds like the following.
To really understand a problem like this deeply, the student has to be able to connect the structures provided with either physical/chemical properties or theoretical constructs (such as electron-donating and electron-withdrawing groups). Structure-property relationships are at the root of the appropriate thinking here. Unfortunately, as a recent paper in J. Res. Sci. Teach. that caught my attention points out, students often struggle with structure-property relationships. Without experience under one’s belt, the incredible utility packed into a Lewis structure can be lost on students. It’s staggering, really—what other devices in science approach the information density of a chemical structure?!
If you show an organic chemist the structures of two stereoisomers, she’ll probably have no issue characterizing them as conformational or configurational isomers. Structures differing in the arrangement of atoms about one tetrahedral stereocenter are naturally configurational isomers; structures differing only with respect to rotation about a single bond are naturally conformational isomers. Baby stuff.
The criterion for making these judgments appears to be whether the isomers are related by one or more rotations about formally single bonds (conformational) or any other sort of topological change (configurational). The terms “configuration” and “conformation” only have meaning, then, when another isomer is considered. Put another way, a single structure can be both a configurational and conformational isomer simultaneously depending on context. That’s not so bad, although the situation causes issues for the novice, to whom the relevant “virtual isomer” may not immediately spring to mind.
The real issue comes to the fore when one considers why the hell “rotation about formally single bonds” is the criterion for conformational isomerism. This is one of those theoretical facts that you’ll tell a student, and afterwards they might just stare back at you blankly (if they aren’t of the disposition to accept theoretical facts at face value). They’ll wonder what’s so special about single-bond rotations.
What’s your message for BRSM?
Good luck, have fun, and be nice to the graduate students!
Any post-doc survival tips?
Not being a post-doc myself, I can’t speak from direct experience, but the best post-docs I have known have maintained their cool in the face of a lot of pressures. Graduate students (and others) will look up to you, but don’t let that pressure get to you—be yourself. Don’t be afraid to challenge your advisor. Get out, explore wherever you’ll be, get into a hobby that allows you to meet people and do good work (for me, it was community theater).
Do you have a fun story to share from your experience in US chemistry academia?
As I mentioned before, be nice to the graduate students. A few years back, a very tall and very strong post-doc came through the lab. His first day, he walked up to me and asked where to find one of the other graduate students…I made the mistake of asking if he was an undergraduate. Talk about not starting things off on the right foot!
Any survival tips for living in the US?
Looking for an adjective? “Awesome” is the American’s go-to.
What would you like to see on BRSM blog in the future?
Whatever you want to write about! Just keep on truckin’.
I’ve been reading The Righteous Mind by Jonathan Haidt, and lately it’s gotten me thinking about the role of morality in education. If education is a garden, morality is the soil. What implicit moralities best cultivate learning? What keeps thirty students itching for A’s from cornering the teacher in his/her office and demanding that grade?
That’s a little far-fetched, but you see where I’m going. The classroom is bound by certain ethical principles, but what keeps students (or instructors) from violating them? Part of that can be explained by student self-interest: “this content will improve me, so I have incentive to follow the rules,” or “I want the grade, so I’ll go along with what the instructor says.” But there’s good reason to believe that’s not the whole story. For example, many instructors take an arbitrary approach to assigning grades, and for these teachers doing that is in their self-interest: it keeps students off their backs and frees up more time for [writing grants|lab work|time with family|anything else]. Of course, the best instructors know better. They understand that arbitrary grades (e.g. curves) are demotivating and encourage cutthroat behavior in students. They know that students must have a reason to buy into the morality of education, and that many practices in the classroom undercut education’s lofty foundations. What’s the core reason to buy into education, and what practices have evolved to promote that buy-in? Consider an evolutionary perspective.