What Does “Inquiry” Mean?

The phrase “inquiry-based labs” has been buzzing around my department for a while now. If it’s possible to crown a king of buzzwords in the realm of chemistry laboratories, “inquiry” is probably it.

On the surface, the idea of inquiry-based laboratories seems straightforward: design and implement experiments that require students to engage in the process of scientific inquiry—exploring questions using the scientific method and making claims based on empirical evidence. To some degree, inquiry-based experiments have to “take the training wheels off” and throw students into a situation whose outcome is unknown. The catch is that the extent to which students should be left to explore on their own is by no means clear. Some great work has been done to clarify the continuum of inquiry labs.

Dirty little secret: these kinds of experiments make professors uncomfortable too! When a student makes a mistake during a prescriptive (procedural) experiment, it’s often easy to point to what they did and say “you made a mistake in step x.” The egregiousness of the mistake is related to how far the student is from the expected outcome. But when the outcome and procedure become uncertain, how can students or faculty know when a mistake is made? Anyone who has engaged in scientific research knows that this is a constant theme: did I make a mistake, or am I really observing something new? (Personal aside: I found this tension soul crushing during my early years in graduate school.)

Eventually, every professional scientist has to look this issue square in the face and become comfortable—on an emotional level—with the difference between sloppy technique and novel results. Much of that comes with experience learning and practicing science professionally. However, there’s a great argument to be made that the affective side to inquiry—the cosmic comfort one develops with uncertainty—can be developed through inquiry-based experiments in college.

So what keeps many faculty from implementing inquiry-based labs? You rarely see the other side of the coin in the chemical education literature, of course. Some have raised the point that students don’t learn as much from open-ended experiments. I don’t agree that students learn less from well-designed inquiry-based labs, but I will admit that what they learn changes drastically: the focus shifts from verifying existing knowledge to constructing arguments based on data and observations.

I’m excited to get into the business of running inquiry-based experiments at large scale—I’ve always enjoyed shaking things up!


Percent Yield, Movie Times, and the “Science Unseen”

It’s that time of year again: labs are gearing up. Drawers are being filled with new glassware, students donning lab coats are beginning to fill the halls…the ol’ machine is revving up to roll again. For me, this time of the semester means emphasizing good practices when working up data and results. I’ve written in past semesters about significant figures and some of the interesting issues that come up when teaching them—it’s about the mindset, not the rules, I swear…!

Take percent yield, a measure that has been reported with false precision by countless students across the generations. Percent yield is really interesting because the balance is one of the most precise instruments in the general chemistry laboratory—depending on the range and precision of the balance, measurements with five, six, and even seven significant figures are possible. Thus, it seems like percent yields (which are really just ratios of masses) should in turn have five, six, or seven significant digits. The measured mass of product points to this level of precision.

Students often struggle to understand that the precision of the balance is irrelevant here—inevitable variations between runs of the reaction introduce massive uncertainty into yields. Such variations are gargantuan compared to the imprecision of balance measurements, which they essentially render meaningless.
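
To put rough numbers on this, here’s a minimal sketch in Python (the masses and uncertainties are made-up values for illustration, not real data) comparing the balance’s contribution to the uncertainty in a percent yield with typical run-to-run scatter:

```python
# A sketch with made-up numbers: compare the balance's contribution to the
# uncertainty in a percent yield against typical run-to-run scatter.
import statistics

theoretical_yield_g = 2.5000    # hypothetical theoretical yield
balance_uncertainty_g = 0.0001  # readability of a typical analytical balance

# Hypothetical product masses from five repeat runs of the same reaction
product_masses_g = [2.1042, 1.9870, 2.2315, 2.0533, 2.1588]

percent_yields = [100 * m / theoretical_yield_g for m in product_masses_g]

# Contribution of the balance alone to a single yield's uncertainty
balance_contribution = 100 * balance_uncertainty_g / theoretical_yield_g

# Spread due to run-to-run variation
run_to_run = statistics.stdev(percent_yields)

print(f"balance contribution: ~{balance_contribution:.3f}% yield")  # ~0.004%
print(f"run-to-run std dev:   ~{run_to_run:.1f}% yield")            # a few %
```

Even a generous balance contributes roughly a thousandth of the scatter that the reaction itself does, so the extra digits on the printout are noise.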


Yin and Yang: Electrophilic and Nucleophilic Reactions

In my opinion, the most awkwardly named reaction in all of chemistry is electrophilic aromatic substitution (and all of its three-word cousins). This name suffers from the same problem as other named reactions: it is deceptively uninformative. I still recall raising an eyebrow in undergrad when I found out that the aromatic involved in this reaction is not the electrophile—the other reagents combine to generate the electrophile. The aromatic is the nucleophile. “Why the heck is the word ‘electrophilic’ stuffed before ‘aromatic’ in the name, then?!” When you really get down to it, the name doesn’t tell you much and has the potential to feed a novice a lot of incorrect information:

“So the reaction mixture is electrophilic, then?”
“Well no, the reaction involves a nucleophile and an electrophile, just like all polar organic reactions…”

“So the aromatic is electrophilic?”
“No, the aromatic is the nucleophile in these reactions.”
“But the name says electrophilic aromatic…!”
[Professor places face in palms]

Only once the student has seen copious examples of other electrophilic substitutions does s/he realize that the adjective refers to the conditions surrounding the substrate, not the substrate itself. The naming convention makes sense to a synthetic chemist interested in “decorating” a given substrate: the substrate is what it is, and we treat it with electrophilic or nucleophilic conditions to add groups to it. The names of substitution reactions clarify the reactivity of whatever’s coming into contact with the substrate (the reagents). To a student without a synthetic frame of mind though, without an inkling of the primacy of the substrate or even its identity, I don’t think this naming convention comes naturally.

Writing About Writing About the Second Law

I recently returned from a vacation in the UK, where I spent a couple of days in the West End of Glasgow, near Kelvingrove Park. Yes, the same Kelvin of scientific fame! Seeing his statue got me thinking about the second law of thermodynamics—enough that I was inspired to jot a few thoughts down.

The second law and entropy are two of the hardest topics to write about at a general chemistry level, in my opinion. Not only has there been fierce debate over the years as to the ideal intuitive notions and analogies for these topics, but mathematically rigorous derivations of the related results can be painfully complicated. There’s a gulf here between the theory and practice of chemical thermodynamics that is difficult to navigate.

A while back I tried just to get down on paper a rigorous derivation of the definition of entropy in terms of heat and temperature, using the second law and a hypothetical thermodynamic cycle. While the work was mathematically correct, rereading the writing recently made me—the author, mind you!—want to tear my eyeballs out. That text will never see the light of day in a general chemistry class. At that point, I wondered if I was even capable of dispensing with rigor to write a more intuitive piece. I’ve always found it difficult to write while sacrificing rigor because I still recall craving rigor and theory in the depths of my soul as a student.

The reality, of course, is that all chemists use heuristics, shortcuts, or metaphors when confronted with certain topics. The best chemist writers can navigate rigorous theory and metaphor with finesse, presenting derivations where the mind “wants” them and metaphors elsewhere. Tro is a good example—while he makes no effort to dumb down important equations, he also presents the practical metaphors that chemists most often use.

In the edition I have, he even manages to lay out all three general interpretations of entropy: entropy as disorder, entropy as energy dispersal, and the statistical interpretation. Color me jealous!
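
As a taste of the statistical interpretation, here’s a toy sketch (an Einstein-solid example of my own choosing, not drawn from Tro) that counts microstates W for q energy quanta shared among N oscillators and applies S = k ln W:

```python
# A toy Einstein-solid illustration of S = k ln W: count the microstates W
# for q energy quanta shared among N oscillators, then compute the
# Boltzmann entropy.
from math import comb, log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(q_quanta: int, n_oscillators: int) -> float:
    """Entropy of an Einstein solid with q quanta among N oscillators."""
    # Microstate count: ways to distribute q identical quanta among
    # N distinguishable oscillators ("stars and bars" counting)
    w = comb(q_quanta + n_oscillators - 1, q_quanta)
    return K_B * log(w)

# Adding energy (quanta) to a tiny solid increases its entropy
print(boltzmann_entropy(10, 5))   # ~9.5e-23 J/K
print(boltzmann_entropy(20, 5))   # ~1.3e-22 J/K
```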


Why Does the Normal Distribution Work?

A simple question with a not-so-straightforward answer. Everyone learns about the infamous “bell curve” in one way or another—but why does randomly distributed data work this way? After all, we could imagine all kinds of wonky shapes for probability distributions. The strangeness of quantum chemistry even shows us that odd-looking probability distributions can occur “naturally.” Yet there’s something deeply intuitive about the bell shape.

For example, the normal distribution is consistent with the intuition that randomly distributed data should cluster symmetrically about a mean. Probability decays to zero as we move away from the mean in a kind of sigmoidal way: the drop is slow at first, steepest right at one standard deviation from the mean (the curve’s inflection point), and slow again as p inches towards zero. Yet correspondence with our intuitions doesn’t give the normal distribution theoretical legitimacy: the hydrogenic 1s orbital has similar properties, after all. What’s so special about the normal distribution?

Although this is a question I’ve had for many years, I stumbled into the answer recently in an unexpected context: random diffusion. The answer gets right to the heart of what we mean by the word random, particularly with respect to the behavior of data (or little diffusing particles, in a diffusion context). If we imagine data points jiggling like little particles in a fluid, then random errors “nudge” the points to either the left or the right. What we really mean by “random” is that it’s impossible to predict which way a given point will move: left and right are equally likely (50% each). Randomly distributed data behaves just like little diffusing particles engaging in a random walk.
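
Here’s a minimal sketch (standard library only; the walker and step counts are arbitrary choices of mine) that simulates this random walk and prints a crude text histogram of where the walkers end up:

```python
# A random-walk simulation: each "data point" takes many +/-1 steps with
# equal probability, and the endpoints pile up in a bell shape around 0.
import random
from collections import Counter

random.seed(42)
N_WALKERS = 10_000
N_STEPS = 100

def endpoint(steps: int) -> int:
    """Final position of a walker taking unit steps left or right at random."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

counts = Counter(endpoint(N_STEPS) for _ in range(N_WALKERS))

# Crude text histogram (endpoints of an even number of +/-1 steps are even,
# so sample every fourth position): counts cluster symmetrically about 0
# and thin out toward the tails, tracing the familiar bell shape
for position in range(-32, 33, 4):
    bar = "#" * (counts.get(position, 0) // 20)
    print(f"{position:>4}: {bar}")
```

The binomial distribution of endpoints converges to a Gaussian as the number of steps grows; this is the central limit theorem in action.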


Unit Cell Hell

I experienced a wake-up call recently when a student dropped by to ask about unit cells. Wow, I realized, I know nothing about crystal structures. Analyzing simple cubic on the fly is easy enough, but the close-packed structures require quite a bit of mental gymnastics. Working mostly on the lab side of things, I don’t often think about this topic (although some nice activities with solids as their focus have been developed).

While the visualization skills needed to understand unit cells inside and out can turn students off, they’re a classic example of how chemists use microscopic structure and properties to explain and predict macroscopic phenomena. Stripping away the messy details, there are relatively few properties of unit cells that general chemists care about (the first two are worked out in a short sketch after the list):

  • Packing fraction (also interesting from a physical and mathematical point of view)
  • Density
  • Hole geometry and count
  • Dimensions and atomic/molecular radii
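
Here’s a minimal sketch working out packing fraction and density for the common cubic cells (textbook geometry; the copper cell edge is a standard literature value, quoted approximately):

```python
# Packing fractions for the common cubic cells, from textbook geometry.
# Each entry: (atoms per cell, cell edge a in units of the atomic radius r).
from math import pi, sqrt

CELLS = {
    "simple cubic": (1, 2.0),        # atoms touch along the edge: a = 2r
    "bcc": (2, 4 / sqrt(3)),         # touch along the body diagonal
    "fcc": (4, 2 * sqrt(2)),         # touch along the face diagonal
}

for name, (atoms, edge) in CELLS.items():
    # total sphere volume / cell volume, both in units of r^3
    fraction = atoms * (4 / 3) * pi / edge**3
    print(f"{name:>13}: packing fraction = {fraction:.4f}")
# simple cubic: 0.5236, bcc: 0.6802, fcc: 0.7405

# Density check: copper is fcc with a cell edge of roughly 361.5 pm
N_A = 6.02214e23                 # Avogadro's number, 1/mol
a_cm = 361.5e-12 * 100           # cell edge converted from m to cm
molar_mass_cu = 63.55            # g/mol
density = 4 * molar_mass_cu / (N_A * a_cm**3)
print(f"predicted Cu density = {density:.2f} g/cm^3")  # ~8.9 (measured ~8.96)
```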

This video is a great introduction to the most important crystal structures from a materials science point of view. The best thing a student can do, in my opinion, is use a physical model to build up the structures herself—this is particularly true for the close-packed structures, which to me have a kind of magical allure. How can there be two ways to pack hard spheres as closely as possible? The answer becomes apparent after you’ve stacked up two planes of close-packed spheres…

Two layers of close-packed spheres stacked one on top of the other. Note the two types of pockets for the next layer!

The next layer of spheres will sit down in the triangular “pockets” between the red spheres, but there are two inequivalent types of pockets: those above tetrahedral holes and those above octahedral holes. The spheres are too large for both types of pockets to be occupied at once. Ergo, there are two inequivalent close-packed structures!
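
For a feel for the two hole types, here’s a short sketch of the standard radius-ratio results: the largest guest sphere of radius r that fits in each hole formed by host spheres of radius R (textbook geometry, not tied to any particular source):

```python
# Radius-ratio limits for the two hole types in a close-packed array of
# host spheres (radius R): the largest guest sphere (radius r) that fits.
from math import sqrt

# Octahedral hole: guest centered in a square of four touching hosts;
# the square's diagonal gives 2*(R + r) = 2*sqrt(2)*R, so r/R = sqrt(2) - 1
octahedral_ratio = sqrt(2) - 1       # ~0.414

# Tetrahedral hole: four hosts at alternate corners of a cube; the cube's
# edge and body diagonal give r/R = sqrt(3/2) - 1
tetrahedral_ratio = sqrt(3 / 2) - 1  # ~0.225

print(f"octahedral  r/R = {octahedral_ratio:.3f}")
print(f"tetrahedral r/R = {tetrahedral_ratio:.3f}")
```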


The Odd Oxygen

There is a weird smattering of oxides, many of them organic, whose molecules contain a foreign oxygen latched onto an otherwise familiar framework. I’ve written about DMSO before, which is essentially dimethyl sulfide with an extra oxygen atom along for the ride. N-Oxides fit into this group of compounds as well.

The structure of nitrous oxide, also known as laughing gas.

Perhaps no molecule better typifies the class than nitrous oxide, N2O. Even the molecular formula sets off neural fireworks: that can’t be right. A central nitrogen flanked by N and O? Something’s wrong here. The central nitrogen seems overworked, while the oxygen seems to be missing a bond. Despite its bizarre structure, nitrous is surprisingly unreactive—I learned this recently while helping out a teacher friend with one of his students’ science fair projects. Thanks to its lack of reactivity at sane temperatures, detecting N2O is a pain.

Synthesizing nitrous, on the other hand, is quite easy. Upon heating, ammonium nitrate breaks down into N2O and two water molecules. The melting point of NH4NO3 is eye-poppingly low for an ionic compound: 170 °C. Some rather unsafe methods actually produce nitrous from molten ammonium nitrate at high temperatures.

NH4NO3(l) → N2O(g) + 2 H2O(g)

One must be careful, as the dissociation of ammonium nitrate into gaseous nitric acid and ammonia competes with the N2O-forming reaction (“decomposition”).

NH4NO3(l) → HNO3(g) + NH3(g)

Dissociation is endothermic and decomposition exothermic, so heating can set up an interesting situation where the dissociation reaction can “quench” the heat released by decomposition. When dissociation is suppressed, the decomposition reaction can become a runaway exotherm. Although this document on the safe production of nitrous oxide from ammonium nitrate is written for chemical engineers—they even go to the trouble of writing out “gram-mole”!—it’s an illuminating read with some additional information about the synthesis of nitrous oxide.
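
To put rough numbers on that tug-of-war, here’s a back-of-the-envelope sketch using approximate tabulated enthalpies of formation (textbook values quoted from memory; treating molten NH4NO3 with the solid’s value is a rough approximation, so these are illustrative numbers only):

```python
# Back-of-the-envelope reaction enthalpies from approximate standard
# enthalpies of formation (kJ/mol). Molten NH4NO3 is approximated with
# the solid's value.
DELTA_H_F = {
    "NH4NO3(s)": -365.6,
    "N2O(g)": 82.1,
    "H2O(g)": -241.8,
    "HNO3(g)": -134.3,
    "NH3(g)": -45.9,
}

# Decomposition: NH4NO3 -> N2O + 2 H2O
decomposition = (DELTA_H_F["N2O(g)"] + 2 * DELTA_H_F["H2O(g)"]
                 - DELTA_H_F["NH4NO3(s)"])

# Dissociation: NH4NO3 -> HNO3 + NH3
dissociation = (DELTA_H_F["HNO3(g)"] + DELTA_H_F["NH3(g)"]
                - DELTA_H_F["NH4NO3(s)"])

print(f"decomposition: {decomposition:+.1f} kJ/mol")  # ~ -36 (exothermic)
print(f"dissociation:  {dissociation:+.1f} kJ/mol")   # ~ +185 (endothermic)
```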