In my DP Chemistry studies, we have discussed energetics, the study of the relationship between energy and chemical changes. The common trend among concepts in this topic is taking a complex process and reducing it to a single number; the tables of such numbers in the data booklet are therefore models of the theory of energetics. This is a common pattern in the natural sciences, mathematics, and beyond: these fields make extensive use of models, constructs built to reduce something complicated to a simpler form. As George Box noted, something must be lost in the reduction from reality to model, meaning any model must be wrong—incomplete or otherwise flawed in a way that makes it inaccurate on the whole.[1]:792 However, despite their drawbacks, many models are still useful, because of the comprehensibility gained in the process of modelling.
The tables in the chemistry data booklet are reference, or literature, values—the “right” numbers as opposed to whatever we might deduce from our own experiments—and we treat them as always correct. However, they are defined only for standard conditions and may not hold under other conditions. Bond enthalpy, the energy required to break a particular bond, is known to be only an average value, because a bond between the same two atoms behaves differently in every compound in which it is found. The notion of such a “literature value” for bond enthalpy is thus a model within chemistry, and though it is “wrong” in this way, it remains useful to chemists and chemistry students as a point of comparison for experiments.
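The averaging behind a tabulated bond enthalpy can be made concrete in a few lines of Python. The figures below are rough, illustrative values for breaking methane’s four C–H bonds one at a time; they are assumptions of this sketch, not data-booklet entries:

```python
# Approximate enthalpies (kJ/mol) for breaking methane's four C-H bonds
# in succession. Illustrative values only: real tabulated averages are
# taken over many compounds, not just one.
successive_ch_enthalpies = {
    "CH4 -> CH3 + H": 439,
    "CH3 -> CH2 + H": 462,
    "CH2 -> CH  + H": 424,
    "CH  -> C   + H": 338,
}

# A "literature value" collapses this spread into one number: the mean.
average = sum(successive_ch_enthalpies.values()) / len(successive_ch_enthalpies)
print(f"average C-H bond enthalpy: {average:.0f} kJ/mol")  # about 416 kJ/mol
```

The mean lands close to the commonly tabulated C–H average of about 414 kJ/mol, yet no single bond here requires exactly that energy: the variation between bonds is precisely what the model discards.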
Another well-known model in the natural sciences, at the edge of modern physics, is the Standard Model. This is the theory that seeks to construct physics, from the smallest constituents of the Universe up to everything we observe, out of seventeen fundamental particles. It breaks matter down even further than atoms: into six “flavors” of quarks, which make up particles like protons and neutrons; six leptons, which include electrons; and five bosons, four of which carry the forces by which the others interact, while the fifth, the Higgs, gives the massive particles their mass.[2] Since the groundwork for quantum physics was laid around the turn of the twentieth century, and increasingly since the Standard Model was fully theorized in the 1970s, hundreds of experiments have confirmed the behaviors it predicts, and these experiments have helped physicists continue to build and refine the theory.[3][4] Recently, the discoveries of quantum physicists have also contributed to the growing field of quantum computing, which promises to revolutionize information technology with previously unattainable computing power.[5]
The Standard Model is not, however, without its shortcomings: many of the major open questions in physics revolve around its holes. Notably, it does not explain “dark matter” or “dark energy,” which together are believed to make up about 95% of the Universe, and it is irreconcilable with Einstein’s general theory of relativity and thus the modern theory of gravity.[3] The existence of the Higgs boson was also not confirmed until 2012, and even now many of its properties are somewhat contentious.[2] Despite these apparently significant gaps, the Standard Model has thus far done a very good job of predicting and, to some extent, explaining the results of various experiments in quantum physics and beyond. It could be argued that it is “wrong” because of what it fails to explain, but then most laws and theories in physics only work in certain conditions; the Standard Model is remarkably useful because it provides a framework for describing what classical physics long struggled to (and still cannot) make sense of.
In 1931, Kurt Gödel published two results that now underlie the study of mathematical logic, or “meta-mathematics” in a sense. The Incompleteness Theorems state, in simplified terms, that for any consistent “formal system” expressive enough to describe basic arithmetic: (1) there are statements that can be neither proven nor disproven within it, making the system incomplete; and (2) the system cannot prove, from within itself, that it is consistent (that no statement is both provable and disprovable).[6] Though the actual proofs are rather complicated, and jargon tends to render discussions of them nearly incomprehensible, the implications of these theorems are staggering. They mean that as long as we avoid contradictions in math, a fundamental requirement of proof, it is impossible to prove every possible result: there will always be things we do not know, some part of the field perpetually beyond our reach.
Gödel’s Incompleteness Theorems could be construed to show that math itself is wrong. Indeed, many people have done so, calling Gödel’s notion “math’s fundamental flaw,”[7] a “paradox at its heart.”[8] At the time of his writing, other mathematicians had begun to define “formal systems” of mathematics, identifying the axioms—fundamental statements assumed to be true—on which the discipline rests.[6] These are models, but really they are models of a model, our system of notation and expression of mathematical concepts being itself one large model for the abstract nature of math. Gödel has thus not demonstrated that math is wrong but that our model of it—along with any other model of it we could think to create—is wrong. Yet it must still be useful, or else we would not require children to spend twelve years learning it, nor build from it an entire academic discipline in which researchers cling to the hope of improving the tools and knowledge available to the rest of the world.
One of the most impactful books on my education—though I did not read it for school—is Math on Trial, in which mother-daughter author duo Leila Schneps and Coralie Colmez provide ten examples of “how math is used and abused in the courtroom.”[9] Several involve models in some form, among them the case of Sylvia Ann Howland’s death.
In 1865, one of her signatures was accused of being a forgery, and Benjamin Peirce, a Harvard mathematician, was summoned as an expert witness in the ensuing fraud case. In his testimony, he used data from pairwise comparisons of 42 genuine samples of Howland’s signature to judge whether the disputed one was real or fake. He approximated these comparisons with a statistical distribution that was close to the observed data, but not identical to it, and from it calculated the probability that two of her signatures, taken at random, would coincide exactly, as did the two on two pages of her will: 1 in 931 quintillion. A chance so microscopically small, he argued, meant the specimen must be a forgery. However, the model he constructed was flawed: it overestimated the number of signature pairs that were very different and underestimated the number that were quite similar. His approximation therefore severely discounted the possibility of two signatures being exactly equal, making forgery seem a much more likely explanation than it may actually have been.[9]:177–181
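Peirce’s headline number can be reproduced from the standard account of the case, in which each signature is reduced to 30 downstrokes and any two signatures agree on a given downstroke with probability 1/5. Treating those two figures as exact is part of the sketch, and the names below are mine:

```python
from math import comb

N_DOWNSTROKES = 30   # downstrokes per signature, per the standard account
P_COINCIDE = 1 / 5   # chance a given downstroke matches between two signatures

def prob_k_matches(k: int, n: int = N_DOWNSTROKES, p: float = P_COINCIDE) -> float:
    """Binomial probability that exactly k of n downstrokes coincide,
    assuming (as Peirce did) that downstrokes match independently."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability that all 30 downstrokes coincide, i.e. that two randomly
# chosen signatures are identical under the model:
p_identical = prob_k_matches(N_DOWNSTROKES)
print(f"1 in {1 / p_identical:,.0f}")  # roughly 1 in 931 quintillion
```

The arithmetic itself is simple; the flaw lay in the fit, since the binomial curve undercounted the near-identical pairs that actually occurred among Howland’s genuine signatures.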
Peirce’s model was nearly useless as evidence for his point because it missed the nuances of the situation: not only did the approximation diverge from the truth in a crucial part of the data, but he also ignored practical considerations, such as that two signatures written at the same desk with the same pen are rather likely to look nearly identical.[9]:181 This makes the model wrong and, though the binomial distribution he used is valuable in many situations, not useful in a court of law. There are many more examples, from 1865 to the present, of mathematical models constructed in ways that mislead people, whether intentionally or not, and they usually involve statistics or probability, as this case did. A model can mislead by ignoring (or simply failing to consider) information that would counter the claim it seeks to make, or by omitting the assumptions made in order to support that claim. Though such presentations combine inaccuracy and inutility, they do not make Box’s statement false overall: he said some, not all, models are useful.
In chemistry, a model simplifies an infinite number of possibilities down to a single value. In physics, a model attempts to describe the workings of the entire Universe with just seventeen particles. In math, a model draws a conclusion from incomplete information, while another shows that the whole system in which it exists is inherently incomplete. All of these point to the idea, which the last proves, that no model can be infallibly correct: there are holes in it somewhere. Yet, at the forefront of research in mathematics and in the natural sciences, experts all over the world use these models daily to study things as concrete as heat and energy and as abstract as mathematical logic. Even while other “experts” use similar models to manipulate information, genuine researchers demonstrate that some models are incredibly useful across a wide range of applications despite their inevitable limitations. George Box may have oversimplified his claim, then, but his idea was accurate.
This has implications for knowledge in all areas and for all knowers. Everything we know comes in models because our minds do not contain all of reality. Yuval Noah Harari, in his book Sapiens, thus extends Box’s statement to all knowledge: “The real test of ‘knowledge’ is not whether it is true, but whether it empowers us.… [N]o theory is 100 per cent correct. Consequently, truth is a poor test for knowledge. The real test is utility. A theory that enables us to do new things constitutes knowledge.”[10]:264
1. George E. P. Box, “Science and Statistics,” Journal of the American Statistical Association, vol. 71, no. 356, doi:10.1080/
2. “The Standard Model,” CERN, home.cern/
3. Corey S. Powell, “Relativity versus quantum mechanics: the battle for the universe,” The Guardian, theguardian.com/
4. Margot Michel, “The origins of the Standard Model,” CNRS News, news.cnrs.fr/
5. Kristiane Bernhard-Novotny, “Report explores quantum computing in particle physics,” CERN Courier, cerncourier.com/
6. Panu Raatikainen, “Gödel’s Incompleteness Theorems,” Stanford Encyclopedia of Philosophy, plato.stanford.edu/
7. Derek Muller, “Math’s Fundamental Flaw,” Veritasium, yt:HeQX2HjkcNo
8. Marcus du Sautoy, “The paradox at the heart of mathematics: Gödel’s Incompleteness Theorem,” TED-Ed, yt:I4pQbo5MQOs
9. Leila Schneps and Coralie Colmez, Math on Trial
10. Yuval Noah Harari, Sapiens: A Brief History of Humankind, available at 1pezeshk.com/