I often encounter Facebook memes denouncing pharmaceutical companies with words to the effect of: “Big Pharma isn’t interested in making cures, they’re interested in making customers,” as if this were some deep insight into a grave moral failing on the part of the entire industry. Now, I’m quite sure that there are perverse incentives and inappropriate goals scattered throughout the medical industry in general, from the level of the individual physician, to pharmaceutical manufacturers, to the insurance industry, and everywhere else in the healthcare field. There’s little doubt that these injustices and inefficiencies gum up the works for everyone, making healthcare overall worse than it might otherwise be. But simply to complain about the fact that most medications don’t “cure” our many modern ailments is to confess a misunderstanding of the nature of biology and medical treatment.
If you’ve studied or read much physics, or science in general—or, more recently, information theory—you’ve probably come across the subject of entropy. Entropy is one of the most prominent and respected, if not revered, concepts in all of science. It’s often roughly characterized as a measure of the “disorder” in a system, but this doesn’t refer to disorder in the sense of “chaos”, where outcomes are so dependent upon initial states, at such high degrees of feedback and internal interaction, that there’s no way to know based on any reasonable amount of information what those specific outcomes will be. Entropy is, in a sense, just the opposite of that. A state of maximal entropy is a state where the outcome is always—or near enough that it doesn’t matter—going to be pretty much the same as it is now. A cliché way of demonstrating this is to compare a well-shuffled deck of cards to one with the cards “in order”. Each possible shuffled configuration is unique, but for practical purposes nearly all of them are indistinguishable, and there are vastly greater numbers of ways for a deck to be “out of order” than “in order”.
Let’s quickly do that math. The number of orders into which a deck of fifty-two cards can be randomly shuffled is 52 × 51 × 50 × … × 3 × 2 × 1, traditionally notated as 52!. It’s a big number. How big?
To quote Stephen Fry on Q.I., “If every star in our galaxy had a trillion planets, and each planet had a trillion people living on it, and each person had a trillion packs of cards, which they somehow managed to shuffle simultaneously at 1000 times per second, and had done this since the Big Bang, they would only just, in 2012, be starting to get repeat shuffles.” Now, how many ways are there to have a deck of cards arranged with the cards in increasing order (Ace to King) within each suit, even without worrying about the ordering between the suits? If my math is correct, there are only 4! ways to do that, which is 4 x 3 x 2 x 1, or 24. To call that a tiny fraction of the above number is a supreme understatement. This comparison should give you an idea of just how potent the tendencies are with which we’re dealing.
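If you’d like to check those numbers for yourself, a few lines of Python (my own quick sketch, not part of the original example) will do it:

```python
import math

# Every possible ordering of a 52-card deck
shuffles = math.factorial(52)  # about 8.07 x 10^67

# Orderings with each suit in a contiguous Ace-to-King block,
# with the four suit blocks themselves in any order
ordered = math.factorial(4)  # 24

print(shuffles)
print(ordered / shuffles)  # about 3 x 10^-67
```

That ratio, roughly three chances in ten to the sixty-seventh power, is the “supreme understatement” made explicit.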
You could describe entropy as a state of “useless” energy. Entropy is, famously, the subject of the Second Law of Thermodynamics, and that law states that, in any closed system, entropy always tends to stay the same or increase, usually the latter. (The First Law is the one that says that in a closed system total energy is constant.)
When energy is “partitioned”—say you have one part of a room that’s hot and another part that’s cold—there’s generally some way to harness that energy’s tendency to equilibrate, and to get work done that’s useful for creatures like us. Entropy is the measure of how far that energy has progressed toward equilibrium, the state in which there’s no useful difference between one part of the room and the other.
This draws attention to the irony of entropy. The tendency of systems to become more entropic drives the various chemical and physical processes on which life depends. Energy tends to flow down gradients until there’s no energy gradient left, and it’s this very tendency that creates the processes that life uses to make local order. But that local order can only be accomplished by allowing, and even encouraging, the entropy of the overall world to increase, often leading to a more rapid general increase than would have happened otherwise. Think of burning gasoline to make a car go. You achieve useful movement that can accomplish many desired tasks, but in the process you convert the fuel into smaller, simpler, less organized, higher-entropy states that it would otherwise not have reached, if left alone, for a far longer time. The very processes that sustain life—that are life—can only occur by harnessing and accelerating the increase of net entropy in the world around them.
Although it seems like the principle most vividly embodied in Yeats’s “The Second Coming,” wherein he states, “Things fall apart; the centre cannot hold; / Mere anarchy is loosed upon the world,” entropy is held in highest regard—or at least unsurpassed respect—by physicists. Sir Arthur Eddington famously pointed out that, if your ideas seem to contradict most understood laws of physics, or seem to go against experimental evidence, it’s not necessarily disreputable to maintain them, but if your ideas contradict the Second Law of Thermodynamics, you’re simply out of luck. And Einstein said of the laws of thermodynamics, and of entropy in particular, “It is the only physical theory of universal content concerning which I am convinced that, within the framework of applicability of its basic concepts, it will never be overthrown.”
The reason the Second Law of Thermodynamics is so indisputable is that, at root, it owes its character to basic mathematics, to probability and statistics. As I demonstrated in the playing card example, there are vastly more ways for things to be “disordered”—more arrangements of reality that are indistinguishable one from another—than there are configurations that contain gradients or differences that can give rise to patterns and “useful” information. WAAAAAAY more.
The Second Law isn’t at its heart so much a law of physics as it is a mathematical theorem, and mathematical theorems don’t change. You don’t need to retest them, because logic demands that, once proven, they remain correct. We know that, in a flat plane, the squares of the lengths of the two shorter sides of a right triangle add up to the square of the length of the longest side. (You can prove this for yourself relatively easily; it’s worth your time, if you’re so inclined.) We know that the square root of two is an irrational number (one that cannot be expressed as a ratio of any two whole numbers, no matter how large). We know that there are infinitely many prime numbers, and that the infinity of the “real” numbers is a much larger infinity than that which describes the integers. These facts have been proven mathematically, and we need no longer doubt them, for the very logic that makes doubt meaningful sustains them. It’s been a few thousand years since most of these facts were first demonstrated, and no one has needed to update those theorems (though they might put them in other forms). Once a theorem is done, it’s done. You’re free to try to disprove any of the facts above, but I would literally bet my life that you will fail.
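For a taste of why such results stay settled, here is the classic proof that the square root of two is irrational, compressed into a few lines:

```latex
\text{Suppose } \sqrt{2} = p/q, \text{ with } p, q \text{ whole numbers sharing no common factor.} \\
\text{Then } 2q^2 = p^2, \text{ so } p^2 \text{ is even, so } p \text{ is even: } p = 2r. \\
\text{Then } 2q^2 = 4r^2, \text{ so } q^2 = 2r^2, \text{ so } q \text{ is even as well.} \\
\text{But then } p \text{ and } q \text{ share the factor } 2, \text{ a contradiction.}
```

No amount of future evidence can touch an argument like this; the assumption refutes itself, and the matter is closed.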
The Second Law of Thermodynamics has a similar character, because it’s just a statement of the number of ways “things” can be ordered in indistinguishable ways compared to the number of ways they can be ordered in ways that either carry useful information or can be harnessed to be otherwise useful in supporting—or in being—lifeforms such as we. Entropy isn’t the antithesis of life, for without its tendency to increase, life could neither exist nor sustain itself. But its nature demands that, in the long run, all relative order will come to an end, in the so-called “heat death” of the universe.
Of course, entropy is probabilistic in character, so given a universe-sized collection of random elementary particles, if you wait long enough, they will come together in some way that would be a recognizable universe to us. Likewise, if you shuffle a deck of cards often enough, you will occasionally shuffle them into a state of ordered suits, and if you play your same numbers in the Powerball lottery often enough, for long enough, you will eventually win.
Want my advice? Don’t hold your breath.
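For a sense of scale behind that advice, here is a quick back-of-the-envelope calculation (assuming the current Powerball format of five white balls drawn from 69 plus one red ball drawn from 26):

```python
from math import comb

# One jackpot combination out of all ways to pick 5 of 69 white balls,
# times 26 possible red "Powerball" numbers
jackpot_odds = comb(69, 5) * 26
print(jackpot_odds)  # 292201338 -- about 1 in 292 million per ticket

# Playing the same numbers twice a week, the expected wait for a jackpot:
draws_per_year = 104
print(jackpot_odds / draws_per_year)  # roughly 2.8 million years
```

And those odds are astronomically *better* than the odds of shuffling a deck into ordered suits.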
According to General Relativity, our experience of space and time is a bit like seeing shadows in a higher-order, four-dimensional space-time. This is probably not news to many of you; the basics of Relativity have become almost common knowledge, which is no doubt a good thing. But many people may not realize that the tenets of General Relativity and Special Relativity, with their abolition of simultaneity and of any privileged point of view in space-time, also imply that the entire past, and the entire future, of every point in space and every moment in time, already or still exist, permanently. I’m not going to get too much into the hows of this—I refer you to, and heartily recommend, Brian Greene’s The Fabric of the Cosmos, which has an excellent explication of this notion.
The upshot of this principle is that, in a very real sense, our past is never gone, but is still there, just where it was when we lived it. Similarly, the future is also already in existence (applying time-specific terms to such things is a little iffy, but we use the words we have, I suppose, even though we must accept them as metaphors). In this sense, a human life is not an isolated, ever-changing pattern in some greater, flowing stream so much as a pre-existing rope of pattern in a higher-dimensional block of space-time, like a vein of gold running through a fissure in a rock formation. Its beginning is as permanent as its end.
We know that General Relativity cannot be absolutely and completely correct—its mathematics breaks down at singularities such as those in the center of black holes, for instance. But within its bailiwick, it seems to be spectacularly accurate, so it’s not unreasonable to conclude that it’s accurate in the above description of a human life—indeed, of all events in the universe.
But what does this mean for us? How does it impact the fact that we experience our lives as though the sands of the future are flowing through the narrow aperture of the present to fall into the receiving chamber of the past? How does General Relativity interact with consciousness? We seem to experience the present moment only as an epiphenomenon of the way fundamental principles translate themselves into chemistry and biology as measured along some fourth-dimensional axis. We can’t decide to reel ourselves backward and reexperience the past, or fast-forward into the future, even though it seems that our existence has much in common with the permanently encoded data of a digital video file. We cannot choose to rewind our lives any more than can the characters within a movie we are watching.
Similarly, according to this implication of General Relativity, we could not, even in principle, have lived our past differently. Were we to rewind and then replay events, they would work out exactly as they had before, just as a movie follows the same course no matter how many times you watch it. The characters in a movie might learn later in the film that they had made some tragic error, yet when you rewind the show, they revert to their previous selves, ignorant of what they are always ignorant of at that point in time, subject to the same story arc, unable to change anything that they did before. Likewise, it’s conceivable that, when our lives end—when we reach the point where our pattern decomposes, diffuses, and fades—we may go back to the start and reexperience life again from the beginning. (This depends heavily on the nature of consciousness.) Indeed, we may be constantly reexperiencing it, infinitely many times.
Though this seems to be a kind of immortality, it’s not a particularly rewarding one, as we wouldn’t gain anything no matter how many times we replayed our lives. For those of us with regrets it would be a mixed blessing, at best. For those who have endured lives of terrible suffering, it seems almost too much to bear. But, of course, reality isn’t optional. It is what it is, and there is no complaint department.
Ah, but here’s the rub. We know, as I said, that General Relativity cannot be quite right; crucially, it does not allow for the implications of the Uncertainty Principle, that apparently inescapable fact at the bedrock of Quantum Mechanics. Quantum Mechanics is, if anything, even more strongly supported by experiment and observation than is General Relativity; I’m aware of no serious physicists who don’t think that General Relativity will have to be quantized before it can ever be complete.
But of course, as the name implies, the Uncertainty Principle says that things are—at the fundamental level—uncertain. How this comes about is the subject of much debate, with the two main views being the Copenhagen Interpretation—“interaction is everything; the wave function just collapses, probabilities turn into actualities, and there’s no point in asking how”—and the Many Worlds Interpretation, originated by Hugh Everett, in which, at every instance where more than one possible outcome of a quantum interaction exists, the universe splits into appropriately weighted numbers of alternate versions, in each of which one of the possible outcomes occurs. It’s hard to say which of these is right, or if both are wrong—though David Deutsch does a convincing job of describing how, among other things, quantum interference and superposition imply the many-worlds hypothesis (see his books The Fabric of Reality and The Beginning of Infinity).
But what does the Everettian picture imply for our higher-dimensional block space-time that is at once all of space and time, already and permanently existing? Are there separate, divergent blocks for every possible quantum divergence? Or does the space-time block just have a much higher dimensionality than merely four, instantiating not just one but every possible form of space-time at once?
If this is the case, why do we conscious beings each seem to experience only one path through space-time? Countless quantum events are happening within and around us with every passing Planck time (about 10⁻⁴³ seconds). The vast majority of these events wouldn’t make any noticeable difference to our experiences of our lives, but a small minority of them would.
This is the new thought that occurred to me today. It’s thoroughly and entirely speculative, and I make no claims about its veracity, but it’s interesting. What if, whenever we die, we start over again, as if running the DVD of our lives from the beginning yet again, but with this important difference: Each time it’s rerun, we follow a different course among the functionally limitless possible paths that split off at each quantum event? Even though most of these alterations would surely lead to lives indistinguishable one from another, everything that is possible in such a multiverse is, somewhere (so to speak) instantiated. Reversion to the mean being what it is, this notion would be hopeful for those who have suffered terribly in a given life, but rather worrisome for those who’ve had lives of exceptional happiness. At the very least, it implies that there would be no sense in which a person is trapped in the inevitable outcome of a given life. You can’t decide to behave differently next time around, but you can at least hope that you might (while reminding yourself that you may do even worse).
Of course, all this is beyond even science fiction—well, the earlier parts aren’t, just the notions of a person’s consciousness reexperiencing life, either the same or different, over again. But it was and is an interesting thought to have on a lazy, early Sunday afternoon in the spring of the year, and I thought I would share it with you.
I don’t know how often most of you notice the occasional noises of Flat-Earthers online, and particularly on social media, but I notice. Encountering such absurdities can at times lead a reasonably educated person to feel that the world is going mad, that society is collapsing, and that—despite the cornucopia of information available to us—humans are breathtakingly stupid.
However, I’ve recently been reading John Stuart Mill’s “On Liberty,” and it gave me a new insight: The fact that we encounter such vociferous and seemingly ridiculous expressions of contra-factual ideas is a sign of the health and strength of our discourse, rather than of its deterioration.
Some years ago, when I first read Carl Sagan’s The Demon-Haunted World, I encountered a notion that stuck in my mind and has grown more prominent as the years have passed. This is the idea that laws, as made in a democracy, are a form of experiment, but experiments carried out without any of the sensible objective measures and controls that make scientific experiments so useful. I think this is clearly true, and I think we should all try to petition our legislators to approach laws in this scientific fashion.
Many—perhaps most—new laws are proposed to prevent, or correct, or create some specific situation…presumably altering something that isn’t quite the way we want it to be. Unfortunately, the way laws are proposed and assessed is through public debate—at best. As civil and criminal courtrooms demonstrate, when an important matter is addressed mainly through debate, the outcome isn’t necessarily that the best or truest idea is chosen, but that those who are most skilled at rhetoric and manipulation rule the day. This is not a much more reliable way to make good decisions than holding a jousting match. It’s not good in court, and it’s worse in the halls of the legislature, where the quality of discourse is often even lower than one finds among courtroom lawyers (“If the glove does not fit, you must acquit” is at least mildly clever, as opposed to the appalling spectacle of an elected legislator in the Federal Government bringing a snowball into the Capitol Building as evidence against climate change).
Wouldn’t it be wonderful if, with every proposed new bill, the proposer had to articulate what problem was to be addressed by the legislation, and what result was being sought? Then, in the subsequent discussions, legislators could better focus their inquiries, bringing existing information to bear, including the outcomes of prior, similar legislation. Also—and here is a key point—each bill could contain specific language detailing by what means its relative success or failure would be measured, how that data would be collected, at what frequency it would be evaluated, and at what point—if ever—the measure would be deemed to have failed. We know that most measures, if measured, would fail, based on the experience of science, in which the vast majority of hypotheses end up disproven, even when proposed by the best and brightest minds in the world. How much more likely is it that ideas proposed by the likes of our legislators are going to be shown to be ineffective?
Of course, the real world—the laboratory where each new law would be tested—is a messy place, with innumerable confounding variables, correlations which have nothing to do with causation, unreliable data, and so on. So, we wouldn’t necessarily want to hold legislative outcome checks to quite the same standards of rigor as those to which we hold particle physics. But simply requiring each new bill to contain a statement of hoped-for outcomes, of measures by which it would be considered to have succeeded or failed, and a required time of review, could produce better laws, influenced from the beginning by more information and logic than rhetoric. Even if no definitive answer was available at the time of a planned review, that review might still inspire new ideas about how better to measure outcomes, and perhaps even ways to tweak a law to make its outcome more clearly beneficial. Most importantly, it would be much easier to recognize and discard the failures.
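To make the idea concrete, here is one way such a bill’s “experimental protocol” might be structured, sketched as a simple data record. Every field name and value here is my own invention, purely for illustration; real legislative drafting would of course be far messier:

```python
from dataclasses import dataclass

@dataclass
class Bill:
    """An illustrative sketch of a bill framed as a testable hypothesis."""
    title: str
    problem_statement: str        # what the law is meant to fix
    intended_outcome: str         # the measurable result being sought
    success_metric: str           # how success or failure is quantified
    review_interval_months: int   # how often the data is evaluated
    failure_threshold: str        # at what point the law is deemed to fail

# A hypothetical example, not any real piece of legislation
example = Bill(
    title="Clean Air Improvement Act (hypothetical)",
    problem_statement="Urban particulate levels exceed recommended limits",
    intended_outcome="20% reduction in average PM2.5 within five years",
    success_metric="Annual averages from air-quality monitoring stations",
    review_interval_months=24,
    failure_threshold="No measurable decline after two review cycles",
)
print(example.title)
```

The point isn’t the code, of course; it’s that every field in the record is something a legislature could be required to fill in before a vote.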
Of course, to initiate such a policy of lawmaking would require something even more sweeping than a Constitutional amendment. It would require that we elect representatives capable of bringing a scientific mindset to matters of fact. This, in turn, would require a voting population with the ability to judge among and select such individuals, rather than the charlatans and hucksters they tend to elect. This in turn would require both a change in the educational style of the country and a cultural shift in which we give greater precedence to logic and reason, rather than our usual approaches to life, which are only more sophisticated than those of chimpanzees in that they are more complicated, but which are not necessarily any more rational.*
It’s a tall order, I know. But the possible improvements in our laws, in the way public policy is carried out, and in the general health and well-being of the nation would be potentially vast.
They would also be much more measurable.
*See last week’s post on teaching probability and statistics, for instance.
Among the many educational reforms that I think we ought to enact in the United States, and probably throughout the world, one of the most useful would be to begin teaching all students about probability and statistics. These should be taught at a far younger age than that at which most people begin to learn them—those that ever do. Most of us don’t get any exposure to the concepts until we go to university, if we do even there. My own first real, deep exposure to probability and statistics took place when I was in medical school…and I had a significant scientific background even before then.
Why should we encourage young people to learn about such seemingly esoteric matters? Precisely because they seem so esoteric to us. Statistics are quoted with tremendous frequency in the popular press, in advertising, and in social media of all sorts, but the general public’s understanding of them is poor. This appears to be an innate human weakness, not merely a failure of education. We learn basic arithmetic with relative ease, and even the fundamentals of Newtonian physics don’t seem too unnatural when compared with most people’s intuitions about the matter. Yet in the events of everyday life, statistics predominate. Even so seemingly straightforward a relation as the ideal gas law (PV=nRT, relating the pressure, volume, amount, and temperature of a gas) is the product of the statistical effects of innumerable molecules interacting with each other. In this case, the shorthand works well enough, because the numbers involved are so vast, but in more ordinary interactions of humans with each other and with the world, we do not have numbers large enough to produce reliable, simplified formulae. We must deal with statistics and probability. If we do not, then we will fail to deal with reality as accurately as we could, which cannot fail to have consequences, usually bad ones. As I often say (paraphrasing John Mellencamp), “When you fight reality, reality always wins.”
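As a small illustration of how badly intuition serves us here, consider a quick simulation of my own devising: how often does a run of at least six consecutive heads or six consecutive tails show up somewhere in a hundred fair coin flips?

```python
import random

random.seed(1)  # fixed seed so the result is reproducible

def has_run(n_flips: int, run_len: int) -> bool:
    """Return True if n_flips fair flips contain a same-side run of run_len."""
    streak, last = 0, None
    for _ in range(n_flips):
        flip = random.random() < 0.5
        streak = streak + 1 if flip == last else 1
        last = flip
        if streak >= run_len:
            return True
    return False

trials = 10_000
hits = sum(has_run(100, 6) for _ in range(trials))
print(hits / trials)  # about 0.8
```

Most people guess that such streaks are rare flukes; in fact they appear in roughly four out of every five hundred-flip sequences. That gap between intuition and reality is exactly what early statistical education could close.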
I occasionally wonder what physicists really think regarding those hypothetical particles, gravitons, the carriers of the gravitational force mandated by the need* to quantize all the forces of nature. Specifically, I wonder how they behave in and around black holes.
I know, from my understanding of General Relativity, that the influence of gravity travels at the speed of light, and the recent LIGO results, and all other experimental results of which I’ve heard, are consistent with that. This must surely mean that the proposed gravitons travel at the speed of light, and are thus massless particles. And if they are carrying a force, they must have some form of inherent energy, which means that, according to Einstein at least, their path would be affected by gravity. This seems contradictory in some ways, but it’s my understanding that the electrical force produced by a moving electron also acts back on itself, so I guess that’s not completely unreasonable…though here I’m veering further away from any deep knowledge, much to my sorrow.
My real question applies to the surface of an event horizon, that boundary in space-time within which all things are separated from the outside by the strength of the gravitational force – more particularly, according to Einstein, by the degree of curvature of space-time. If gravitons are particles carrying the gravitational force, are they constrained by the effects of the event horizon, or – presumably because they wouldn’t be self-interacting – do they simply pass through it, it being irrelevant to their motion, unlike all other things with finite speeds…which means everything? That sometimes seems contradictory to me, though by no means am I certain that I’m thinking correctly about this. Could it be that the gravitons within and outside of an event horizon are two separate populations, with the external ones somehow being generated at the horizon? If not, then how can a particle ignore gravity, unless, of course, as I mentioned above, gravitons are not self-interacting – which wouldn’t be unusual, since, if I understand correctly, photons also don’t interact with other photons. But photons would obviously interact with gravitons; otherwise they wouldn’t be affected by gravity, as we know they are…the most extreme example of this being at a black hole.
I know that a possible explanation for this might be found in M theory, in which we exist in a 3-brane that floats in a larger, higher-dimensional “bulk,” and that gravitons, unlike all the more “ordinary” particles are not constrained to remain within that brane, but can go above and below it, so to speak, thus bypassing any barrier that is exclusive to the brane. But I don’t know if this really deals with the issue.
And, of course, how can the idea of gravity as a force, mediated by a quantum particle, be reconciled with the convincing and highly fruitful model of gravity as the consequence of the curvature of space-time? Obviously, I don’t expect anyone to know the deep answer to this question, since it’s the biggest, most fundamental problem in modern physics: our two best, most powerful theories of the world don’t work when brought together. But if anyone out there has any idea of at least the form of such a possible reconciliation – i.e. do proponents of quantum gravity think that it will eliminate the notion of curved space-time, or do they think, somehow, that it will be an expression thereof – I would be delighted to hear from you. My best reading to date on things like string theory hasn’t given me any real insight into the possible shape of such a unification.
Anyway, these are some of the thoughts that are troubling me this Monday morning. I’d love to know any of your thoughts in response, or if you have any recommendations on further study materials, I would welcome those as well.
* due to the Uncertainty Principle, among other things.