I recently returned from vacationing in the UK, where I spent a couple of days in the West End of Glasgow, near Kelvingrove Park. Yes, the same Kelvin of scientific fame! Seeing his statue got me thinking about the second law of thermodynamics—enough that I was inspired to jot down a few thoughts.
The second law and entropy are two of the hardest topics to write about at a general chemistry level, in my opinion. Not only has there been fierce debate over the years as to the ideal intuitive notions and analogies for these topics, but mathematically rigorous derivations of the key results can be painfully complicated. There’s a gulf here between the theory and practice of chemical thermodynamics that is difficult to navigate.
A while back I tried just to get down on paper a rigorous derivation of the definition of entropy in terms of heat and temperature, using the second law and a hypothetical thermodynamic cycle. While the work was mathematically correct, rereading it recently made me—the author, mind you!—want to tear my eyeballs out. That text will never see the light of day in a general chemistry class. At that point, I wondered whether I was even capable of dispensing with rigor to write a more intuitive piece. I’ve always found it difficult to write while sacrificing rigor, because I still recall craving rigor and theory in the depths of my soul as a student.
The reality, of course, is that all chemists use heuristics, shortcuts, or metaphors when confronted with certain topics. The best chemist writers can navigate rigorous theory and metaphor with finesse, presenting derivations where the mind “wants” them and metaphors elsewhere. Tro is a good example—while he makes no effort to dumb down important equations, he also presents the practical metaphors that chemists most often use.
In the edition I have, he even manages to lay out all three general interpretations of entropy: entropy as disorder, entropy as energy dispersal, and the statistical interpretation. Color me jealous!
A simple question with a not-so-straightforward answer. Everyone learns about the infamous “bell curve” in one way or another—but why does randomly distributed data work this way? After all, we could imagine all kinds of wonky shapes for probability distributions. The strangeness of quantum chemistry even shows us that odd-looking probability distributions can occur “naturally.” Yet there’s something deeply intuitive about the bell shape.
For example, the normal distribution is consistent with the intuition that randomly distributed data should cluster symmetrically about a mean. Probability decays to zero as we move away from the mean in a kind of sigmoidal way: the drop is slow at first, is steepest around one standard deviation from the mean, and slows down again as p inches towards zero. Yet correspondence with our intuitions doesn’t give the normal distribution theoretical legitimacy: the hydrogenic 1s orbital has similar properties, after all. What’s so special about the normal distribution?
Although this is a question I’ve had for many years, I stumbled into the answer recently in an unexpected context: random diffusion. The answer gets right to the heart of what we mean by the word random, particularly with respect to the behavior of data (or little diffusing particles, in a diffusion context). If we imagine data points jiggling like little particles in a fluid, then random errors “nudge” the points to either the left or right with equal probability. What we really mean by “random” is that it’s impossible to predict which way the data points will move: they may go to the left or to the right with equal probability (50%). Randomly distributed data behaves just like little diffusing particles engaging in a random walk. Continue reading →
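The random-walk picture is easy to check with a quick simulation (my own sketch, not from the post): give each of many independent walkers a series of ±1 nudges with equal probability, and the final positions pile up in a bell shape centered on the starting point, with a spread that grows as the square root of the number of steps.

```python
import random
import statistics

# Each walker takes n_steps nudges of +1 or -1 with equal probability (50%).
random.seed(42)  # arbitrary seed for reproducibility
n_steps, n_walkers = 400, 5000
finals = [sum(random.choice((-1, 1)) for _ in range(n_steps))
          for _ in range(n_walkers)]

print(round(statistics.mean(finals), 1))   # clusters near 0 (the mean)
print(round(statistics.stdev(finals), 1))  # near sqrt(400) = 20
```

Plotting a histogram of `finals` reproduces the familiar bell shape; the central limit theorem guarantees it, since each final position is a sum of many independent, identically distributed steps.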
I experienced a wake-up call recently when a student dropped by to ask about unit cells. Wow, I realized, I know nothing about crystal structures. Analyzing simple cubic on the fly is easy enough, but the close-packed structures require quite a bit of mental gymnastics. Working mostly on the lab side of things, I don’t often think about this topic (although some nice activities with solids as their focus have been developed).
While the visualization skills needed to understand unit cells inside and out can turn students off, they’re a classic example of how chemists use microscopic structure and properties to explain and predict macroscopic phenomena. Stripping away the messy details, there are relatively few properties of unit cells that general chemists care about:
- Packing fraction (also interesting from a physical and mathematical point of view)
- Hole geometry and count
- Dimensions and atomic/molecular radii
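The packing fractions of the cubic cells fall out of a few lines of geometry. Here’s a minimal sketch (my own, using the standard touching-sphere relationships between atomic radius and edge length):

```python
import math

def packing_fraction(atoms_per_cell, radius_per_edge):
    """Fraction of the cubic cell volume occupied by spheres (edge a = 1)."""
    return atoms_per_cell * (4 / 3) * math.pi * radius_per_edge**3

# Where the spheres touch fixes r in terms of the edge length a = 1:
#   simple cubic: touch along an edge,          2r = a          -> r = 1/2
#   bcc:          touch along the body diagonal, 4r = a*sqrt(3) -> r = sqrt(3)/4
#   fcc:          touch along the face diagonal, 4r = a*sqrt(2) -> r = sqrt(2)/4
cells = {
    "simple cubic": (1, 1 / 2),
    "bcc":          (2, math.sqrt(3) / 4),
    "fcc":          (4, math.sqrt(2) / 4),
}
for name, (n, r) in cells.items():
    print(f"{name:>12}: {packing_fraction(n, r):.4f}")
# simple cubic: 0.5236, bcc: 0.6802, fcc (close-packed): 0.7405
```

The 0.7405 figure for fcc is the famous close-packed maximum for hard spheres (Kepler’s conjecture), and hcp achieves exactly the same value.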
This video is a great introduction to the most important crystal structures from a materials science point of view. The best thing a student can do, in my opinion, is use a physical model to build up the structures herself—this is particularly true for the close-packed structures, which to me have a kind of magical allure. How can there be two ways to pack hard spheres as closely as possible? The answer becomes apparent after you’ve stacked up two planes of close-packed spheres…
Two layers of close-packed spheres. Note the two types of pockets for the next layer!
The next layer of spheres will sit down in the triangular “pockets” between the red spheres, but there are two inequivalent types of pockets: those above tetrahedral holes and those above octahedral holes. Because of the size of the spheres, the two types of pockets cannot both be occupied at the same time. Ergo, there are two inequivalent close-packed structures! Continue reading →
There is a weird smattering of organic oxides whose molecules contain a foreign oxygen latched on to an otherwise familiar framework. I’ve written about DMSO before, which is essentially dimethyl sulfide with an extra oxygen atom along for the ride. N-Oxides fit into this group of compounds as well.
Nitrous oxide (also known as laughing gas).
Perhaps no molecule better typifies the class than nitrous oxide, N2O. Even the molecular formula sets off neural fireworks: that can’t be right. A central nitrogen flanked by N and O? Something’s wrong here. The central nitrogen seems overworked, while the oxygen seems to be missing a bond. Despite its bizarre structure, nitrous is surprisingly unreactive—I learned this recently while helping out a teacher friend with one of his students’ science fair projects. Thanks to its lack of reactivity at sane temperatures, detecting N2O is a pain.
Synthesizing nitrous, on the other hand, is quite easy. Upon heating, ammonium nitrate breaks down into N2O and two water molecules. The melting point of NH4NO3 is downright eye-popping for an ionic compound: a mere 170 ºC. Some rather unsafe methods actually produce nitrous from molten ammonium nitrate at high temperatures.
NH4NO3(l) → N2O(g) + 2 H2O(g)
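The 1:1 stoichiometry makes yield estimates trivial. A quick sketch (the 10 g sample size is my own arbitrary choice for illustration):

```python
# Stoichiometry for NH4NO3(l) -> N2O(g) + 2 H2O(g)
M = {"N": 14.007, "H": 1.008, "O": 15.999}  # standard atomic weights, g/mol
M_NH4NO3 = 2 * M["N"] + 4 * M["H"] + 3 * M["O"]  # ~80.04 g/mol
M_N2O = 2 * M["N"] + M["O"]                      # ~44.01 g/mol

grams = 10.0
moles = grams / M_NH4NO3        # 1 mol NH4NO3 gives 1 mol N2O
print(f"{moles * M_N2O:.2f} g N2O")        # ~5.50 g
print(f"{moles * 22.4:.2f} L N2O at STP")  # ~2.80 L (ideal gas)
```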
One must be careful as the dissociation of ammonium nitrate into gaseous nitric acid and ammonia competes with N2O formation (“decomposition”).
NH4NO3(l) → HNO3(g) + NH3(g)
Dissociation is endothermic and decomposition exothermic, so heating can set up an interesting situation in which dissociation “quenches” the heat released by decomposition. When dissociation is suppressed, the decomposition reaction can become a runaway exotherm. Although this document on the safe production of nitrous oxide from ammonium nitrate is written for chemical engineers—they even go to the trouble of writing out “gram-mole”!—it’s an illuminating read with some additional information about the synthesis of nitrous oxide. Continue reading →
My predecessor left a ton of books behind in my office, and among some of his old stuff I found a wonderful book called The Pupil as Scientist. The author, Rosalind Driver, makes the case that students develop scientific explanations of what they see long before they set foot into a science classroom. Children’s natural scientific curiosity is both an asset and a liability: teachers can tap into it to engage students in deeper scientific learning, but it can be a source of robust misconceptions, too.
Driver describes beautifully a tension I’ve observed teaching college freshman laboratories. On the one hand, we want students to observe and discover scientific principles in the laboratory, and leaving procedures open ended is part of that goal. On the other hand, the impact of an experiment seems greatest when it’s done properly, according to a prescribed procedure that yields “good results.” Good data is typically a prerequisite for grappling with complex scientific concepts, but inquiry-based labs open the door to bad data or incorrect conclusions. How can we properly balance these two opposing forces?
Continue reading →
In a recent authoring project on organic chemistry, I came across the following statement:
A chain mechanism involves two or more repeating steps.
Is this a true statement? Well, yes and no. Yes, a chain mechanism involves the same process happening again and again. But so does a catalytic mechanism—are both mechanisms the same? If they were, we’d just call all chain mechanisms catalytic (it sounds much better, right?). In fact, the two are not the same, and there’s far more to the definition of a chain mechanism than two repeating steps.
Naively, chain initiators (let’s use radical initiators for the present discussion) look a lot like catalysts. They’re around in substoichiometric amounts and they promote the combination of reactants that would otherwise sit dormant. Clearly, then, they decrease the activation energy of the reaction relative to a situation without initiator. However, radical initiators are missing a key feature of catalysts: unlike a true catalyst, an initiator is consumed by the reaction. They’re about as close as one can get to a catalyst without being a catalyst! Continue reading →
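The distinction can be caricatured in a toy turnover model (my own sketch, not from the post): a catalyst is regenerated after every turnover, so conversion is limited only by the substrate supply, whereas each initiator molecule is destroyed starting one chain, which then propagates only a finite number of times before termination.

```python
def catalytic_turnovers(catalyst, substrate):
    """A true catalyst survives every cycle, so it converts substrate
    until the substrate runs out; the catalyst count never changes."""
    product = substrate
    return product, catalyst  # (product formed, catalyst remaining)

def chain_turnovers(initiator, substrate, avg_chain_length):
    """Each initiator molecule is consumed launching one chain, which
    converts roughly avg_chain_length substrate molecules before dying."""
    product = min(substrate, initiator * avg_chain_length)
    return product, 0  # (product formed, initiator remaining: all gone)

print(catalytic_turnovers(5, 1000))      # (1000, 5): full conversion, catalyst intact
print(chain_turnovers(5, 1000, 100))     # (500, 0): conversion capped, initiator spent
```

The caricature makes the kinetic signature plain: run out of initiator and the chain reaction stops dead, while a catalyst just keeps turning over.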
In chemistry, quantum mechanics and orbital theory often rub up uncomfortably against more naïve bonding theories, such as VSEPR and Lewis structures. For example, VSEPR tends to give the impression that the positions of lone pairs (or better yet, the orientations of filled non-bonding orbitals) are dictated by the number of electronic domains around the atom. Water, then, which has four electronic domains around oxygen—two single bonds and two lone pairs—apparently has lone pairs at 109.5º angles in a plane perpendicular to the H–O–H plane. The carbonyl oxygen, which VSEPR suggests is “really” trigonal, has two “rabbit-ear” lone pairs at 120º angles. These pictures make the lone pairs look equivalent, and help us slot these structures in mentally with analogous structures, like imines (for the carbonyl) and ammonia (for water).
MO theory suggests that the lone pairs shown are not equivalent.
Yet, MO theory suggests that the lone pairs shown are not in equivalent orbitals! The simplest explanation for water is that the atomic 2p orbital on oxygen perpendicular to the H–O–H plane cannot interact with the 1s orbitals on hydrogen (there’s zero net overlap), so one of the 2p orbitals on oxygen must show up as a non-bonding molecular orbital. But this orbital can only hold two electrons, so the other lone-pair-bearing orbital on oxygen must be a hybrid. In a nutshell, one lone pair is best characterized as a π MO (the pure 2p orbital) while the other is a σ MO. The two lone pairs are not equivalent. As it turns out, this situation holds up even for lone-pair-bearing atoms in larger molecules. The inequivalency holds for both canonical and natural bond orbitals (NBOs), but the paper that inspired this post focuses on the usefulness of NBOs in the correct description. Continue reading →
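The zero-net-overlap argument can be checked numerically. Below is a rough sketch with made-up Slater exponents and a schematic bent geometry (none of these numbers come from the post): because the out-of-plane 2p orbital is odd under reflection through the molecular plane while the H 1s orbitals are even, their overlap integral vanishes on any grid symmetric about that plane.

```python
import math

def p_perp(x, y, z):
    """Schematic 2p_y on oxygen at the origin; y is out of the H-O-H plane."""
    r = math.sqrt(x * x + y * y + z * z)
    return y * math.exp(-2.2 * r)  # made-up Slater exponent

def s_on(hx, hz):
    """Schematic 1s orbital on a hydrogen sitting in the xz plane."""
    def s(x, y, z):
        r = math.sqrt((x - hx) ** 2 + y * y + (z - hz) ** 2)
        return math.exp(-1.2 * r)  # made-up Slater exponent
    return s

# Two hydrogens placed symmetrically in the xz plane (bent, schematic geometry)
h1, h2 = s_on(0.76, 0.59), s_on(-0.76, 0.59)

# Midpoint-rule integration on a grid symmetric about y = 0
n, L = 40, 6.0
h = 2 * L / n
pts = [-L + (i + 0.5) * h for i in range(n)]
overlap = sum(p_perp(x, y, z) * (h1(x, y, z) + h2(x, y, z))
              for x in pts for y in pts for z in pts) * h**3
print(f"net overlap ≈ {overlap:.2e}")  # effectively zero
```

Contributions at +y and −y cancel exactly, so the 2p orbital perpendicular to the plane has no choice but to remain non-bonding: it must appear as a pure-p lone pair, forcing the other lone pair into a σ-type hybrid.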