As a holdover from grad school, I still get a little pain in my stomach whenever someone mentions “kinetics studies.” I never had the displeasure of running one myself, but I’ve heard many stories of others’ painful nights camped out at the NMR, running hours-long kinetics runs on slow reactions. And really, not a whole lot has changed with respect to reaction kinetics over the years. Sampling rates have gotten faster, and the repertoire of analytical methods used to follow concentration(s) has grown, but the underlying theory of reaction kinetics has largely remained the same.
Historically, the development of reaction kinetics has been a story of increasing cleverness. At some point, someone figured out that using a reactant in “drowning” concentrations causes its concentration to remain basically constant over the course of the reaction, removing its influence on the reaction rate—and thus was born the “isolation method.” Yet another clever chemist figured out that only initial rates are necessary to determine kinetic orders, provided multiple runs of a reaction are feasible—and thus emerged the “method of initial rates.”
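To make the method of initial rates concrete, here’s a minimal sketch in Python. All the numbers are made up for illustration (a hypothetical rate law rate₀ = k·[A]₀ⁿ with k = 0.5 and n = 2): measure the initial rate at several starting concentrations, plot log(rate₀) against log([A]₀), and the slope of the line is the kinetic order.

```python
import math

# Hypothetical data: assume rate0 = k * [A]0**n with k = 0.5, n = 2
# (both values invented for this example).
k_true, n_true = 0.5, 2.0
conc0 = [0.1, 0.2, 0.4, 0.8]            # initial concentrations [A]0, in M
rate0 = [k_true * c**n_true for c in conc0]  # "measured" initial rates

# Since log(rate0) = n*log([A]0) + log(k), a least-squares line through
# the log-log data recovers the order (slope) and rate constant (intercept).
xs = [math.log(c) for c in conc0]
ys = [math.log(r) for r in rate0]
N = len(xs)
xbar, ybar = sum(xs) / N, sum(ys) / N
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

print(round(slope, 3))                # recovered order n
print(round(math.exp(intercept), 3))  # recovered rate constant k
```

With noiseless synthetic data the fit returns the order and rate constant exactly; real initial-rate data would scatter around the line, which is part of why each run must be repeated at a fresh [A]₀.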
Why stop there? Increasingly complicated mechanisms (especially catalytic mechanisms) have created a demand for ever more clever methods of kinetic study. Plus, technological advancements are pushing the Δt between data points ever smaller and the sizes of data sets ever larger. Concentration versus time data are basically continuous these days (as are rate versus time data), so why not use the entire span of a kinetics run to the best of our ability? A recent article by Blackmond shows just how far this approach can take chemists studying reaction mechanisms. With data for just a couple of cleverly structured reaction runs, one can propose pretty good guesses for reaction mechanisms.
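As a toy illustration of using an entire run rather than just its start, here’s a hedged Python sketch. The data are simulated (an assumed first-order decay with an invented k = 0.3): differentiate the near-continuous concentration-versus-time curve numerically to get rate at every point, then check whether rate/[A] is constant over the whole run, which is the graphical signature of first-order behavior.

```python
import math

# Simulated "continuous" run: assume first-order decay
# [A](t) = [A]0 * exp(-k*t), with made-up k = 0.3 and [A]0 = 1.0 M.
k, A0, dt = 0.3, 1.0, 0.05
ts = [i * dt for i in range(200)]
A = [A0 * math.exp(-k * t) for t in ts]

# Estimate rate = -d[A]/dt at each interior point by central differences,
# then form rate/[A]; for a first-order reaction this ratio is flat
# across the entire run, not just near t = 0.
rates = [-(A[i + 1] - A[i - 1]) / (2 * dt) for i in range(1, len(A) - 1)]
ratios = [r / a for r, a in zip(rates, A[1:-1])]
print(round(min(ratios), 4), round(max(ratios), 4))  # both close to k = 0.3
```

The same rate-versus-concentration plot would curve for other orders, so one densely sampled run already constrains the rate law in a way that a single initial-rate measurement cannot.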