Kenneth Hanson over at Chemistry-Blog set his students the task of re-inventing the Jablonski diagram, a kind of schematic used in chemistry to show how molecules become excited and de-excited. Those processes drive things like phosphorescence ("glow in the dark") and fluorescence ("day-glo"), and they're really important to organic solar cells and organic LEDs. (If you've seen the first-generation PlayStation Vita's eye-popping display, you can thank the latter technology.) Having a good, clear way of conveying this information is important, which is why the Jablonski diagram has hung around for the best part of a century - it works!
To come up with their own versions, the students had to really understand what's going on behind the diagram, so this is a great teaching exercise. If you're at an early stage of your chemistry career, take it from me: nothing gets your brain going on a problem like trying to describe and explain it to someone else. Quite aside from that, though, they're all really imaginative. I wish I could have a go on Jablinko.
Sunday, 24 November 2013
Sunday, 25 August 2013
Calculating Carbyne
A group of researchers at Rice University have performed some nice calculations on carbyne, or linear acetylenic carbon, determining its mechanical properties and lots of other fun stuff. Carbyne is a carbon chain with alternating single and triple bonds, so there's plenty of reason to expect it to have some interesting mechanical properties, like being really strong. Of course, being basically a whole lot of acetylene units strung together by single bonds, it's supposed to be really reactive, and it's strongly suspected that it'd cross-link to death if left to its own devices. (Alternatively, it could be double bonds all the way down; more on that later.)
The group did periodic DFT calculations in VASP to calculate the material's strength - the point at which it breaks - and its Young's modulus, a measure of its resistance to stretching. If you're not familiar with periodic boundary conditions, imagine a Pac-Man or Asteroids game board: everything that goes out one side wraps around to the other. So a snippet of the molecule sees an image of itself continuing at each of its own ends, and those ends in turn see their own duplicates, and so on. This is an excellent model of an infinitely long chain that doesn't change anywhere along its length, which is in turn a good approximation of a really long but finite carbyne molecule - and by stretching that chain they could calculate its Young's modulus.
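If it helps to see that wrap-around concretely, here's a minimal sketch in Python - my own illustration, not anything from the paper - of the "minimum image" bookkeeping that periodic boundary conditions boil down to in one dimension. The cell length and coordinates are made up.

```python
# Toy 1D periodic cell: an atom near one end of the cell is effectively
# right next to the periodic image of an atom near the other end.

L = 10.0   # hypothetical cell length, arbitrary units

def minimum_image_distance(x1, x2, cell=L):
    """Distance between two atoms, measured to the nearest periodic image."""
    dx = x2 - x1
    dx -= cell * round(dx / cell)   # wrap dx into [-cell/2, +cell/2]
    return abs(dx)

# A naive subtraction says these atoms are 9.3 apart; with the wrap-around
# they are only ~0.7 apart, just as Pac-Man is right next to the far edge.
print(minimum_image_distance(0.5, 9.8))
```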
They also did some molecular calculations on some rings of the material to estimate its resistance to bending, and on finite-length carbyne molecules (capped at the end with different functional groups) to measure its resistance to twisting. To finish it off they determined the energy barrier to cross-linking when two molecules come together, how well it conducts electricity, and whether the single-triple or double-double bond structures are preferred.
Thorough work!
It transpires that it's not only really rigid but spectacularly strong. Its Young's modulus is double that of the next stiffest material, graphene, and three times that of diamond. It's also comfortably stronger than either. That's right: if you're looking for something to build a space elevator cable out of, miles of nanotubes are no longer the cool hypothetical material to go for. Its resistance to bending and twisting is on a similar order of magnitude to that of double-stranded DNA, a whopping great hydrogen- and covalent-bonded monster of a material. (Interestingly, carbyne's properties are strongly dependent on which capping groups are used, which could make for some interesting fine-tuning.)
Fun stuff. The real question is, will it hang around when you make it?
Well, by these calculations, when two chains are brought together there's an energy barrier of 0.6 eV to their cross-linking. That's pretty substantial. On the other hand, it's very unlikely that two carbyne molecules would just benignly wander up to each other like this. I suspect that when you put the molecule into a real-world soup of radicals and ions, it's not going to have a hard time finding a way around that wall. They also determined that two carbyne chains won't cross-link along their whole length; the difficulty of pulling the two chains into register means that you end up with alternating stretches of untouched and cross-linked carbyne. Again, I'm not sure that carbyne would behave itself quite so well in the messy real world, but it's promising.
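For a rough sense of why 0.6 eV counts as substantial, here's a back-of-the-envelope Boltzmann factor at room temperature. This is my own illustration of the scale of the number, not a calculation from the paper:

```python
import math

k_B = 8.617e-5   # Boltzmann constant in eV/K
T = 300.0        # roughly room temperature, in K
barrier = 0.6    # cross-linking barrier quoted above, in eV

# Crude estimate of the fraction of thermal encounters energetic enough
# to clear the barrier.
print(math.exp(-barrier / (k_B * T)))   # ~8e-11
```

In other words, left to gentle thermal motion alone, essentially no encounters clear that wall - which is what makes the barrier look so substantial on paper.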
You can read the paper on the arXiv: "Carbyne from first principles: Chain of C atoms, a nanorod or a nanorope?" by Mingjie Liu, Vasilii I. Artyukhov, Hoonkyung Lee, Fangbo Xu, and Boris I. Yakobson. A quick Google News search for carbyne should give you plenty of news coverage.
As always, all errors here are my own, and I would be really grateful for any corrections you might want to send my way.
Tuesday, 20 August 2013
When someone says "formula for the perfect", I reach for my red pen
Since this morning, I've upgraded my reaction to the RSC's formula for the perfect cheese on toast from "not impressed" to "deeply not impressed".
It starts off pretty well. They actually did some experimentation, which is a far cry from most "formula stories" cut from whole cloth at a PR firm's behest, many of them from the same few chronically underworked researchers. By carefully varying the amount and type of cheese, the thickness and type of bread, and the distance from the heat source, they determined the combination most appealing to a panel of experts. That's how science is done! Science is about thinking and experimentation, which are really, really accessible, so a cheese on toast experiment was a great idea. People could recreate this test at home. (And I encourage you to!) Great job RSC.
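If you do fancy recreating it, the experiment is really just a grid of conditions plus a tasting panel. Here's a tiny Python sketch of such a grid; the variables are the ones the RSC varied, but the specific cheeses and numbers are placeholders of my own, not their values:

```python
# Hypothetical condition grid for a kitchen-table replication.
from itertools import product

cheeses = ["mild cheddar", "mature cheddar", "red leicester"]
cheese_grams = [25, 50, 75]
bread_thickness_mm = [10, 14, 18]
grill_distance_cm = [5, 10, 15]

trials = list(product(cheeses, cheese_grams, bread_thickness_mm, grill_distance_cm))
print(len(trials), "slices of cheese on toast to taste-test")   # 81 - bring friends
```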
Then they present the results like this:
That's not okay!
By cloaking the results in a mish-mash of confusing abbreviations, the formula just reinforces the message that science is an opaque and needlessly complicated field, making grand prognostications about topics in which it is hardly the be-all and end-all of judgement. You wouldn't actually write a reaction like that in a report, anyway - it would defeat the purpose. Simply telling us how to make the cheese on toast would take less space than the diagram does, and be clearer. So it's not just complicated, it's inauthentically complicated!
Outreach efforts like these should show that science is an activity for everyone, and do their best to explain that when scientists use jargon, it's only when needed for clarity or precision. Science can be, and should be, universal. Science is about trying things out, testing and iterating. We can all do that, even if we're just trying to figure out the best way to make our favourite snack.
Here's an equation I think we can all get behind:
Cheese + bread ---SCIENCE---> Deliciousness
For good articles on bad formulas, you could do worse than start with these:
"Transparent excuse for printing a nice pair of hooters" by Ben Goldacre
"X+Y/Z=BS" by Dan Rutter
"Formulaic fashioning of fun formulas" by Marc Abrahams
"Stupid formulae" by Andrew Taylor.
Monday, 5 August 2013
A Chemical Imbalance
Professor Polly Arnold of the University of Edinburgh's chemistry department is launching a campaign to understand and improve gender equality in science, titled "A Chemical Imbalance", which the BBC have written a story about. She's taken her Rosalind Franklin Award funding and used it to produce a free movie and eBook, hosted on the Chemical Imbalance web site. Full disclosure: I'm knackered and preoccupied with work right now, so I've not had a chance to check out either in any depth. However, the campaign clued me in on an Edinburgh science story that feels familiar (perhaps via the Surgeons' Hall museum) but that I never learned about in any depth: the Edinburgh Seven.
The Seven were a group of women, led by Sophia Jex-Blake, who banded together so that they could be allowed to study for degrees at the venerable old University of Edinburgh medical school in the late 1800s. Despite their successful studies, carried out in the face of opposition from scientific notables like Alexander Crum Brown - later president of the Chemical Society, forerunner of today's RSC - and an actual riot over their anatomy exams, the university refused to allow them to graduate. Most of them ultimately went on to be granted medical degrees by more progressive institutions, and blazed a trail for future equality in education and medicine. You can read more about them in the eBook on the site, or if that's too long, on Wikipedia and this blog.
With a wonderful sense of cosmic justice, Professor Arnold is now the Crum Brown Chair of Chemistry, while Professor Lesley Yellowlees was not only the first female head of chemistry at Edinburgh, but is now the first female president of the Royal Society of Chemistry, successor to the society Crum Brown himself once led.
Such overt discrimination has waned in much of the world, but the gender balance in the sciences is still ridiculous. The chart on the site shows the stark drop-off between undergraduate level, where the split is nearly equal, and professor, where it's ten to one. Worse, as Prof. Yellowlees relates in the BBC article, open contempt for female scientists is still depressingly common. I'm not sure what I can do to help, but passing this along, reading the book and watching the documentary seem like a good start.
Saturday, 27 July 2013
The Signal and the Noise
I'm working on getting the blog back up and running; rather than the ad hoc approach of last time, I'm planning out a writing project that should keep the blog fed with content at least once a month. I'll fill you all in in due course. In the meantime, I've finished reviewing Nate Silver's The Signal and the Noise, which I've attached below.
Nate Silver has shot to fame as the oracular figure who decoded political polling data into plain English and successfully predicted the US election. His debut book brings him back down to earth, using familiar examples as diverse as moneyball and warfare to demonstrate the sore lack of and need for better prediction in our lives, and the path to improvement through critical thinking and Bayesian reasoning.
Each chapter uses a particular area of prediction to teach broader lessons. The book opens with great momentum, using the financial crisis as a set of unambiguous examples of how not to make predictions before drilling into the all-too-human reasons that political commentators make poor election forecasts. There are good lessons here about how the need to feel confident and a single-minded focus on a few issues can lead one astray; he turns back to the financial crisis to emphasise the same failures there.
It's not all about the human factors, though, and Silver then turns to "moneyball" - statistics-based sports recruitment - to provide an overview of the more technical aspects of the art of prognostication. The idea of a predictive model is well articulated and applied to common-sense issues with surprising complications. With the reader warmed up, he spends several chapters digging into the fundamental reasons why level-headed, critically thinking scientists are unable to predict earthquakes. Some things - weather, disease, tectonic plates - are inherently challenging to forecast for interesting reasons, and he is equally quick to emphasise the technical traps that researchers can fall into when building their models.
The heart of the book, however, is Bayesian reasoning: the idea that new evidence should adjust, rather than replace, whatever our existing prediction said - a sort of rolling improvement to our models. As a simple illustration, a test result indicating that one may have a rare disease should be combined with the low probability that one had the disease before the test results were in. Even if the test is 95% accurate, if the disease only affects one in a million people then the odds are far, far lower than 95% that one actually has the condition.
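To put numbers on that illustration - my arithmetic, not a worked example from the book - here is the standard Bayes' theorem calculation, taking "95% accurate" to mean both a 95% true-positive rate and a 5% false-positive rate:

```python
prior = 1e-6            # one in a million people have the disease
sensitivity = 0.95      # P(positive test | disease)
false_positive = 0.05   # P(positive test | no disease)

# Bayes' theorem: P(disease | positive test)
posterior = (sensitivity * prior) / (
    sensitivity * prior + false_positive * (1 - prior)
)
print(posterior)   # ~1.9e-5: still only about a 1-in-50,000 chance
```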
This is the tool Silver uses in the latter half of the book to show the way to better predictions, while still taking the time to illuminate other forecasting challenges. Whether it's poker or chess, the stock market or the battlefield, making a good model and refining it with new data is the key to victory. He lays out how the problems arise in these fields, be it a new raft of human frailties or the hefty challenge of trying to beat the "wisdom of the crowds"; sets out how these failures in prediction can be capitalised on, by good agents or bad; and suggests Bayesian solutions.
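The "rolling improvement" idea is just that same update applied over and over as new data arrive. Here's a minimal sketch using a made-up poker example of my own (is an opponent a habitual bluffer?), with invented likelihoods; it isn't taken from the book:

```python
belief = 0.30   # prior: how likely the opponent is a habitual bluffer

# For each hand we watch: (P(what we saw | bluffer), P(what we saw | honest))
observations = [(0.7, 0.3), (0.6, 0.5), (0.8, 0.2)]

for p_if_bluffer, p_if_honest in observations:
    # Bayesian update: fold each new observation into the running belief.
    belief = (p_if_bluffer * belief) / (
        p_if_bluffer * belief + p_if_honest * (1 - belief)
    )
    print(round(belief, 3))   # 0.5, then 0.545, then 0.828
```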
A chapter on climate change in a book aimed at those in big business has huge potential to be a train wreck, but Silver manages to weave a fairly acceptable course through the problem. This chapter draws the book together, combining issues of complex models, noisy new data, and incentives to mislead, with Bayesian reasoning as the knight in shining armour. The overall theme is that climate models are difficult to make for fundamental reasons, and that the warming consensus that has come out of those models has stood up to new results - despite the claims of think tanks who wish it otherwise.
This section has annoyed commentators on both sides of the issue. Silver manages to make good points without falling into the many huge rhetorical traps that the denialist movement has laid in any writer's path, but he's never particularly strong on the issue either. I liked the unspoken conclusion that less-confident predictions - 95% confidence rather than 99%, say - are more resilient to contradictory data in a Bayesian world, and Silver makes no false equivalencies and is unambiguous in accepting global warming. However, this is not a strong introduction to climate science, or a real challenge to many of the incorrect claims made by denialists.
Truth be told, this is a deliberate stylistic choice and a potential issue throughout the book. Silver avoids bringing in controversies over the fundamental results that feed forecasts, except where they are directly relevant to a chapter's lesson. In the section on the financial crisis, human incentives are raised as a source of bias, but the humans responsible are hardly taken to task. If you want to find out about the failures of reasoning that permitted the 9/11 attacks, you'll have to read elsewhere. (Donald Rumsfeld appears, but only as a lead-in to the "unknown unknowns" idea.) The implications of Scott Armstrong's work with the notoriously vociferous anti-climate-change Heartland Institute are left for the reader to find out about on their own.
This will variously come across as refreshingly expedient, frustratingly wishy-washy, focussed or cowardly depending on your reading preferences and ideological views. Consider yourself forewarned and take the book on its own terms.
The Signal and the Noise is certainly cleanly written and well-structured. Silver's introduction sets the book up as a toolbox, first outlining the failures of prediction and their causes before moving on to the successes and the processes that enable them. In truth, though, he allows the book to digress around the broader themes raised in each chapter, be it the problems and benefits of the "wisdom of the crowds" or the failure to account properly and quantitatively for the uncertainty in a prediction. These digressions are brief and enlightening, and echo back and forth between the chapters to make a more cohesive whole.
With the aforementioned caveat, this is a superb route into the whole issue of modelling and forecasting. It's accessible, clearly written, technically sound and meticulously reasoned. It's recommended reading on a difficult subject, although it's probably not going to prove to be the definitive work.
(If you want a primer on thinking about statistics before you dig into this, I strongly recommend Darrell Huff's "How to Lie with Statistics". It's inexpensive, funny, brief, and makes a good companion piece.)