On Significant Figures

An interesting thing happened this week in my labs. We do a neat little exercise that treats pennies as isotopes. Pennies minted before 1982 have a different mass than pennies minted after 1982—like all things “of the future,” pennies got lighter in 1982. Students are provided with opaque film canisters of fifteen pennies and asked to determine the “isotopic abundance” of pre- and post-1982 pennies in their canisters. The canisters are glued shut, but standard pennies and empty canisters are available for weighing.

Student: Do my calculations look good?
mevans: Looks great.

The student had calculated something like 10.1025 pre-1982 pennies and 4.8975 post-1982 pennies. Variance in the masses of the canisters and pennies causes some non-ideality.
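The arithmetic behind a result like that is a simple mass balance. A minimal sketch, assuming the standard masses of roughly 3.11 g for a pre-1982 (copper) penny and 2.50 g for a post-1982 (zinc) penny, and a hypothetical measured total penny mass:

```python
# Estimate the "isotopic abundance" of pre-1982 pennies via mass balance.
# Assumed standard masses: pre-1982 (copper) ~3.11 g, post-1982 (zinc) ~2.50 g.
M_PRE, M_POST = 3.11, 2.50   # grams per penny (assumed values)
N_TOTAL = 15                 # pennies per canister

def pre_1982_count(penny_mass_total):
    """Solve n_pre * M_PRE + (N_TOTAL - n_pre) * M_POST = penny_mass_total."""
    return (penny_mass_total - N_TOTAL * M_POST) / (M_PRE - M_POST)

# Hypothetical measurement: total canister mass minus empty-canister mass.
n_pre = pre_1982_count(43.66)
print(n_pre)           # ~10.1 -- a fractional count of pennies, which is nonsensical
print(round(n_pre))    # 10 -- the only sensible number to report
```

Variance in the penny and canister masses lands the raw answer between whole numbers; rounding to an integer count is the honest fix.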

Student: How many significant figures should I include?

Ouch. This question, to me, was evidence that the point of significant figures was lost on this student. Of course, I thought, just round to whole numbers, since fractional numbers of pennies are nonsensical.

mevans: Does a fraction of a penny make sense when counting them?
Student: Nope.
mevans: Then throw all the digits after the decimal away. Boom! Done.
Student: OK…

Humans have a disturbing natural attraction to numbers, even when said numbers are nonsensical, so small as to be meaningless, or outright lies that ignore statistics (as when a calculation based on measured, uncertain values is reported to too many decimal places). Throwing numbers away is cognitively hard! Deep down, we know that a number with more significant digits is more precise, and we cling to those digits even if the precision is imaginary or nonsensical. A big part of science education is training the mind to overcome this deception and deal with numbers in a healthy way.

The saga continued. Students got the sensible idea to report isotopic abundances as percentages of pennies in the entire sample. New set of numbers, same set of issues with significant figures!

Student: How many significant figures should I include in the percentage?

Things suddenly got interesting! I have to admit that this question caught me off guard. The calculation is simple enough: 10 / 15 * 100. Dogmatic application of the “rules” for significant figures would produce the number “67%.” Yet, the exact ratio of pennies is known, since we know that there are fifteen pennies and—relying on the idea that fractional pennies are nonsensical—there is no uncertainty in the numbers of pennies. There is no uncertainty in the percentage at all, so it’s fine to report the percentage as “66.6 repeating.” Hm, perhaps there is more to significant figures than meets the eye!
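Because the counts are exact integers, the percentage is an exact rational number, and Python's fractions module makes the point nicely (using the 10-of-15 counts from the example in the text):

```python
from fractions import Fraction

# Exact integer counts carry no measurement uncertainty, so neither does the ratio.
abundance = Fraction(10, 15) * 100
print(abundance)          # 200/3
print(float(abundance))   # 66.666... -- "66.6 repeating", every digit meaningful
```

No rounding rule applies here, because there is no uncertain measurement anywhere in the calculation.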

I’m fascinated by the rarely appreciated link between significant figures and scientific misconduct. Reporting too many digits in a number is tantamount to lying about the precision of one’s instruments and ignoring (willfully or not) the impact of uncertainty on reported values. How do you get students—and other number-obsessed humans in the general public—to appreciate the contingency of scientific quantities?

I can’t resist one more fun fact about significant figures to finish this post. A value calculated from a logarithm (say an energy calculated from –RT ln K) has only as many significant figures as digits after the decimal place. Why? Think of the logarithm as a stand-in for a power of 10 (never mind the conversion of ln to log for a second). The integer part of a logarithm, then, is just a simpler way of writing “times ten to the power of…” It’s just the exponent part of a number in scientific notation—a placeholder that is never significant! The decimal portion of a logarithm, on the other hand, actually represents a number with meaning. Hence, only the numbers following the decimal point are significant in a logarithm-based value. Slick, eh?
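The point is easy to verify numerically: the integer part of a base-10 logarithm (the characteristic) encodes only the power of ten, while the decimal part (the mantissa) carries the measured digits. A quick check, using an assumed equilibrium constant K = 1.8 × 10⁵ (two significant figures):

```python
import math

K = 1.8e5                            # assumed value, 2 sig figs
log_K = math.log10(K)                # 5.2553...
characteristic = math.floor(log_K)   # 5 -- just the exponent, never significant
mantissa = log_K - characteristic    # 0.2553... -- this part carries the "1.8"

# Changing only the power of ten leaves the mantissa untouched:
assert math.isclose(math.log10(1.8e8) - 8, mantissa)

# So log K gets 2 digits after the decimal point: 5.26
print(round(log_K, 2))
```

The assertion shows that scaling K by 1000 shifts only the characteristic, which is why the digits before the decimal point in a logarithm are mere placeholders.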


I’m an Error Convert

What was your least favorite part of your science education? Many people who took only a few science courses in high school or college might name entire fields (the one I hear most often: “chemistry”). My least favorite part of science is more specific than that, and it’s not even close: error blew all the competition out of the water. I hated all things error (including the king of terrible error-related things, error propagation) with a burning, blinding passion. Reflecting on those feelings now makes me realize how far I’ve come intellectually since then, so it felt like a natural topic to write about. One hears a lot about intellectual development in books on teaching and student learning these days, and I’ve discovered that my experience with error mirrors some of the theories out there. Maybe they’re on to something…!

I was an idealist in college (and still am today, to a large degree). Really, I started out as what one might call a “deterministic idealist.” Science described the world in deterministic terms in the form of equations, and if I learned these equations I would become privy to all the secrets of the universe. A particular system (experiment, say) either conformed to one of these immutable equations or didn’t, and if it didn’t, there was another, deeper equation that just hadn’t been written down yet that could describe the system. Hence, my hatred of the idea of error: “I get it. Experiments are subject to error. That’s not the important part of this experiment; why the hell are we worrying about it?!” Take the compressibility factor of a gas: why do we care about error when what matters is measuring P, V, n, and T and calculating Z?

That haughty attitude persisted well into my early years in graduate school. It corresponds roughly to the first stage of Perry’s process of intellectual development, dualism. “To solve a problem in the lab, all I really need to do is measure the relevant stuff and apply the Right Equation(s) to The Measurement to obtain The Answer.” The deferential capitalization is intentional! I would get so angry propagating error because it felt utterly irrelevant to The Answer. It was nothing but busy work!

That perspective seems so immature and proud in retrospect…what about uncertainty? Why trust your instruments? Why trust your own shaky hands, or your blurry eyes? Somewhere during my education—well after undergrad, mind you—I figured out that the world doesn’t operate in black and white. Uncertainty is a fact of life, and it needs to be accounted for. This viewpoint is more like Perry’s third level, relativism. “Answers need to be backed up by good reasoning (and good scientific reasoning must take error into account).” Better, I thought. I had learned how to do error propagation in college, but I never really understood why it worked. Honestly, because of my intellectual level back then, I doubt I was even capable of learning how it worked. A frightening thought! I remember blindly applying the formula, but never fully wrapped my mind around it.


The Formula for error propagation. But why does it work…?
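For reference, "The Formula" is presumably the standard first-order propagation of uncertainty: for a quantity f calculated from independent measured values x, y, … with uncertainties σ<sub>x</sub>, σ<sub>y</sub>, …,

```latex
\sigma_f = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 \sigma_x^2
                + \left(\frac{\partial f}{\partial y}\right)^2 \sigma_y^2 + \cdots}
```

Each partial derivative measures how strongly a small error in one measurement shifts the calculated result; the weighted variances add in quadrature because independent errors are as likely to cancel as to compound.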


Chemical Education Roundup, 2-9-13

What’s new in the world of chemical education in 2013? In this edition of the CE Roundup, I’ll engage in a bit of shameless self-promotion, and we’ll look at articles that shed new light on the costs of publishing, innovations in laboratory instruction, student evaluations, and more.

Let’s get the shameless self-promotion out of the way first. Two weeks ago, the Introductory Organic Chemistry MOOC (massive open online course) kicked off on Coursera. The materials for this course were prepared by my colleagues and me at UIUC for use with our organic chemistry 1 course for non-majors. I’m leading the Intermediate Organic Chemistry (organic chemistry 2) effort, and although that class hasn’t started yet, I’ve been knee-deep in the MOOC world for a while now. I’ve got a whole series of blog posts planned on the MOOC experience, so stay tuned!

What is it about the winter months and great literature articles? Perhaps the cold bores people into writing. Who knows? Either way, the literature’s been very interesting in early 2013.

First, teacher reflection and cognition in the classroom. Reflective teachers generally see better student evaluations than unreflective ones. No surprise there: drivers who actually watch the road are better than those who don’t! But how much reflection is enough? A recent study in Brit. J. Educ. Technol. sheds some light on the question. The authors found that formative (weekly) student evaluations increased teachers’ reflective practice, and that increased levels of the latter led to higher student evaluations over a multi-year period. Some would say, however, that formative student evaluations could promote a “consumer culture” in education. There’s an interesting debate brewing there. In a study focused on science teachers, a team of researchers writing in J. Res. Sci. Teach. found that teachers’ “noticing patterns”—patterns in their attention during class—indicate the ways in which they frame the classroom. Particular noticing patterns point to particular frames. Furthermore, the authors add, a given teacher is capable of multiple frames, depending on the classroom’s context. Their theoretical ideas are elegantly demonstrated in a video-based study of a high school biology teacher in action.

Laboratory instruction came under the qualitative microscope this month in a report by Bretz, Towns, and co-workers. They studied how instructors of different laboratories prioritize cognitive (thinking), affective (feeling), and psychomotor (doing) learning goals. This work draws attention to a potentially concerning decline in affective learning goals as students move from general chemistry to organic chemistry. In other laboratory news, a simple apparatus for flash chromatography gives results comparable to traditional columns and “obviates the need for students to handle silica gel”, and instructors at South Dakota State University have reported on instructional design for a laboratory sequence aimed at producing student researchers.

The editor-in-chief of J. Chem. Educ. has written an editorial describing the costs of publishing and rationalizing some recent price increases. It’s worth a look, particularly if you’re interested in the broader forces acting on academic journals these days. Also interesting are the editorial’s citations, which include familiar language from the journal’s past editors.

Other news: a really nice piece on learning progressions in Science; a perspective on the scale of acidity; development and evaluation of a chemoinformatics curriculum.

Chemical Education Roundup, 10-14-12

What’s new in the world of chemical education this fall? Evidence is mounting that the community is taking something of a breather and re-examining basic assumptions, which is always a good thing. @RethinkChemEd is a new account on Twitter that I would encourage readers to check out. Ten Dichotomies We Live By is a must-read; it examines the dichotomies at the root of most chemical educators’ thinking and how they influence research and teaching. Somewhat off the beaten path, but still fundamental, a recent J. Res. Sci. Teach. article examines the nature of scientific argumentation in the classroom. The authors recognized the great importance of scientific argumentation, but identified several barriers to its application by students. Inquiry approaches to the teaching laboratory come to mind, but even these face challenges, as a recent Int. J. Sci. Teach. article suggests.

In recent years, a number of groups have taken up very long-term, mixed-methods studies that use qualitative research approaches to establish a foundation for subsequent quantitative work. The absolute master of this approach is, in my opinion, Bretz, who has notably addressed acid-base reactions and enzyme-substrate interactions using qualitative-then-quantitative work. The primary goal here is to identify alternative conceptions via interviews with students, then to rapidly nip them in the bud in subsequent semesters using survey instruments validated by the initial qualitative work. Bretz and McClary’s recent work on acid-base chemistry is a masterpiece in this field—definitely worth a look!

Educational technology research marches on. I had originally planned on an entire post on social media in education, but instead, I’ll just point you to a nice review of research on microblogging in education published earlier this year in Brit. J. Educ. Technol.—heck, the entire issue is an awesome look at social media in the classroom. Exciting news this month for chemists interested in ed tech: Jmol has been ported to Javascript! Check out the demo of “JSmol” here. Without too much comment I have to say that JSmol is a technological dream for chemical educators, since it opens the door to interactive models on all manner of portable devices.

Other random highlights: oral examinations in the undergraduate organic chemistry curriculum (!?) piloted by Mark Lautens (!?) at the University of Toronto; William Wulf’s Responsible Citizenship in a Technological Democracy course (mentioned in a letter to Science); a closer look at virtual chemistry laboratories in Res. Sci. Educ.; and an article on FoldIt, one of my favorite educational time-wasters.

Demo This!: Trautz-Schorigin Reaction of Polyphenols in Green Tea

Periodically, I plan to cover a new demonstration from the recent chemical education literature in a feature I’m calling Demo This! Today’s featured demonstration comes from a recent J. Chem. Educ. article, which highlights the use of polyphenols in green tea for the luminescent Trautz-Schorigin reaction.


Pyrogallol, or what we might call 1,2,3-trihydroxybenzene, undergoes an interesting set of transformations under oxidative conditions. In the presence of water, formaldehyde, base, and hydrogen peroxide, pyrogallol is oxidized and excited singlet oxygen is produced. Relaxation of singlet oxygen to its ground state produces red luminescence.

The Trautz-Schorigin Reaction: see if you can draw a mechanism for this beast!


A quick literature search has revealed that this reaction has been understudied (or at least underpublished) over the years. See if you can draw a mechanism accounting for all the products! All manner of oxygen-containing species may be present under these harsh conditions, including superoxide anion and hydroperoxide anion.

This reaction can be slowed or prevented by treatment with boric acid (forming cyclic borate esters, which are resistant to oxidation) or by treatment with ascorbic acid, which can reduce the 1,2-keto intermediate back to pyrogallol and (in a separate reaction) react with singlet oxygen. Considering these quenching reagents, this demonstration has all the trappings of a “green,” easy-to-prepare experiment.


This demo can be carried out either with the parent pyrogallol or with polyphenols found in green tea. Either way, setup is straightforward, and the article claims that the entire kit and caboodle takes less than one hour. Assuming that a tea bag holds about 2 grams of tea leaves, infusing for ~3 minutes in 200 mL of hot water is long enough to push a sufficient quantity of polyphenols into the water. Paraformaldehyde and sodium carbonate are then added to the hot tea with stirring, and the solution is allowed to cool to room temperature in a water bath. The pH of the solution is checked using pH paper or indicator before adding hydrogen peroxide (it should be ~11). 50 mL of the pH 11 solution are transferred to an empty beaker, and the lights are killed. Finally, 50 mL of dilute (3%) hydrogen peroxide are added. Luminescence should be instantaneous and lasts for 5-10 seconds.

Ascorbic acid completely shuts down the reaction, while boric acid only slows it down. These quenching reagents should be added just before the lights are killed, right before the addition of hydrogen peroxide.


Panzarasa, G.; Sparnassi, K. J. Chem. Educ. 2012, ASAP. DOI: 10.1021/ed200810c

Chemical Education Roundup, 8-16-11

It’s been a while since I’ve done a roundup! The world of chemical education has been relatively quiet over the last month, although a few interesting things have happened (mostly in education-at-large). A while back, I blogged about Moskovitz and Kellogg’s intriguing idea of “double-blind science writing”—setting up laboratory experiments and reports so that neither students nor graders had an expectation of what their results should be. The aim of the exercise is to rip the bed of procedural and predictive comfort out from under students’ (and graders’) feet. Such a setup, argue the authors, forces students to use well-supported, rational arguments in lieu of the regurgitative, droning garbage that one usually sees in lab reports, and forces graders to evaluate students’ arguments as arguments—just as they would evaluate an academic paper.

On July 29, Science published a brief retort to the Moskovitz paper by Michael Goggin, a physics instructor who argues…

The first priority should be ensuring that the students get the correct result; their ability to articulate that result is secondary. (emphasis mine)

Goggin’s stated objection is that Moskovitz’s approach aims to teach writing more than science. However, in my opinion, a sufficiently open-minded scientist should take issue with Goggin’s assumption that the ideal lab experiment has “the correct result.” On the contrary, conservative experiments with spelled-out “correct results” lead students to believe that a career in science consists of proving what is already known. As any blue-blooded scientist knows, the opposite is true—most scientists spend their careers convincing others that their work is new! The work of undergraduates does not have to be new per se, but it should be new enough to them that constructing a convincing argument requires learning, not just regurgitation. Moskovitz’s approach to scientific writing is thus a step in the right direction. In a response to Goggin, Moskovitz and Kellogg offer this argument and others (among them: lectures give ample opportunity for students to find “correct answers”) in support of their ideas.

A little closer to home for me personally, Neil Selwyn has written an intriguing editorial in the British Journal of Educational Technology about the need for “pessimism” in the field. I put “pessimism” in quotes because what Selwyn argues for is less pessimism and more healthy skepticism. Selwyn states (truly) that there is an obsession among educational technologists with the use of technology as representing “progress” in education. Technology use seems to be associated with progress everywhere else in our lives—why should education be any different? Of course, in all aspects of human life, technology has its downsides. Selwyn argues (again, truthfully) that educators who use technology are often blind to the limitations, pitfalls, and “everything old is new again”-ness of what they do. How much in educational technology is actually new, he asks? Less than we think. ETs need a fresh challenge, a kick in the pants, a wake-up call that alerts us to the fact that what we’re doing may not be all it’s cracked up to be—which could be a good thing! Connections to past scholarship (and challenges to move beyond it) will only do good for the field of educational technology in the long run.

Other news and editorials: an interesting study of central nervous system drugs using calculated electrostatic potential energy surfaces, the harsh realities of narcissism and grade inflation, and a piece from the EIC of the Journal of Chemical Education on striking a balance with assessment. If you haven’t already, read about the epic standardized-test cheating scandal in Atlanta referenced in the last article.