Never hate your interlocutors

There’s a moment in “The Godfather: Part III” when Michael Corleone says to Vincent, Sonny Corleone’s hotheaded illegitimate son, “Never hate your enemies; it affects your judgment.”  These may be some of the most useful words in that whole excellent movie series, words that apply to human interaction generally, perhaps more than ever in our modern era of politics and social media.

Anyone who has spent a significant amount of time on social media, at least when dealing with political and social issues, has seen the face of the problem this aphorism addresses.  Anyone who has followed politics has also seen it.  We tend to address our issues and disagreements in the real world as though they are zero-sum games—contests in which there can be only one winner and one loser, where any gain by the “other side” is a loss for “our side.”  Perhaps as an automatic defense against the distress of having to face our fellow humans in such a contest, we demonize our “enemies.”  Unfortunately, this approach quickly becomes counter-productive, because—as Michael Corleone rightly points out—to demonize others, to hate them, impairs our judgment.  If we see another person as inherently reprehensible, then to give him or her any ground, at any level, is to seem to reward what we perceive as evil and, given the zero-sum assumption, to penalize the good.

The good/evil number line

During the last presidential election (some of you may remember it) occasional memes floated through social media making pronouncements to the effect that choosing the lesser of two evils (in these memes’ case, Hillary Clinton vs. Donald Trump) is still choosing evil.  These memes often came from Bernie Sanders supporters (first hopeful, then frustrated), but it’s a notion that’s by no means confined to such groups.  Ideologues of all stripes, from the religious, to the political, to the social-scientific and beyond, fall prey to the classic mental fallacy of the false dichotomy—the notion that the world is divided into two absolute, opposite natures, and that if their own ideas are pure and good (nearly everyone, on all sides, seems to believe this of themselves), then any choice other than the pure realization of their ideas in all forms is somehow a descent into evil.  Many people implicitly believe that even to choose the “lesser of two evils” is somehow to commit a moral betrayal that can be even worse than simply choosing evil for its own sake.

I hope to explode this notion as the destructive claptrap that it is.

Remember the reason for the season

As we in the United States prepare to celebrate the 4th of July, also known as Independence Day, I want to remind my readers to think about the real reason behind the holiday.  This has a bit of the character of a devout Christian enjoining everyone to remember “the reason for the season” at Christmastime, and I’m far from embarrassed by the comparison (though we have more immediate reasons to connect the 4th day of July with this celebration than Christians do with December 25th).

The celebration of the 4th in modern America—and for a long while before, as far as I know—tends to center on the launching of fireworks, nominally in recollection of the battle commemorated in “The Star-Spangled Banner,” and on the celebration of the flag itself.  While I have no deep problem with enjoying those symbols, I am impatient with the fact that the flag has become the center of that celebration, and the focus of American patriotism, as well as with the blind and thoughtless idea of American exceptionalism, especially in the era of “Make America Great Again” and other such vacuous statements.  We have become a people that, on the surface, seems to think of America as exceptional for reasons of fate, or Divine Providence, or some other mere happenstance.  But if America is great, it is not great in any set of its current circumstances, but in the ideas upon which it was founded.

I encourage everyone to reread, on the occasion of the celebration of the birth of the United States of America, The Declaration of Independence, and preferably also the United States Constitution, especially the Bill of Rights.  These are the ideas at the heart of what America really means, and if America is ever to achieve the greatness made possible by those ideas, in any durable and important way, it will need to do so by commitment to the principles there described.

The key sentence of the Declaration of Independence is the second one:

“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. — That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, — That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.”

These were revolutionary concepts, though they had their roots in the Enlightenment principle that the purpose of government is to serve the people governed.  In this they are very different from traditional “Judeo-Christian values” (despite those frequently being claimed as our government’s basis), for those ideas are inherently authoritarian and dogmatic, while those of America have more the character of the scientific method.  Governments are a means of solving problems, and must always, in principle, be open to revision and improvement.  This is perhaps the most important, the most crucial aspect of the Constitution:  not the famous division of powers, with its various checks and balances, excellent though those things are, but the idea that the Constitution is amenable to continual and constant revision and amendment, as new, hopefully better, ideas come to the fore.

The first ten such amendments are the Constitutional framers’ attempt to codify more fully the notion of “unalienable Rights,” as described by Jefferson in the Declaration, and are sensibly called The Bill of Rights.  They are the explicit statements that, no matter what expediency might seem to justify it, and even if a majority desire it, a government has no business infringing the rights of the citizens, even be it one individual whose rights would be infringed.

The very nature of American government, as it was founded, contradicts any notion of blind patriotism.  The nation, the law, the government, these are not ends in themselves, but are means to an end, and they serve the rights and well-being of the citizenry.  Government derives its just powers from the consent of the governed, and when it fails to protect those governed, it is the right—many would say the duty—of the citizenry to make changes, for an infringement of the rights of any individual is a threat to the rights of every individual.

The United States is only great to the degree that it strives to live up to these ideals, which are still probably best expressed in its founding document.  This is what we should remember and celebrate on this anniversary of the birth of the nation:  not a flag, though the flag is nice; not a song, though the song is stirring; not fireworks, though they are fun, and the battle they recreate was no doubt impressive.  The United States of America is not a place, but an idea—the idea that government exists for the sake of the individual citizens of the country, not the other way around.  It is the duty of the government to protect and nurture the rights, the liberty, the pursuit of happiness of the people it serves, and dissent should not merely be allowed but is a fundamental duty.

Read the Declaration on this Independence Day.  Read the Bill of Rights.  Participate fully in your local, state, and federal government.  Vote.

And by all means, if you disagree with me (or with your government) feel free to say so and to do something about it.

I don’t want to believe

Fans of The X-files will no doubt recall the poster on Mulder’s office wall, with its stereotypical picture of a flying saucer and the words, “I want to believe” written on it.

Well, I for one don’t want to believe.

Despite being a fan of at least the first four seasons of The X-files, I don’t want to believe.  It seems bizarre to me that, as a culture (as a species?) we have elevated the notion of belief as a good thing in and of itself, and we often respond to people based upon the strength of their belief, as though it were a sign of personal strength, as though it were something we should admire or even emulate.  We’re often told that we need to believe in ourselves*, that we need to find something in which to believe:  a religion, our nation, a set of ideals, what have you.  Rarely are we ever enjoined to question whether this is always a good thing.

Given, however, that humanity’s greatest strength—the attribute that sets us apart from the rest of the animal kingdom—is our ability to reason, it seems absurd that so many of us respond to, and even reward, those who accept the truth of propositions not adequately founded on evidence and argument.

I don’t understand this tendency, and I don’t think I really want to understand it.  To me, belief beyond the level justified by reason is not a strength—it’s not even neutral—it’s a weakness.  If your conviction doesn’t scale with the evidence, then you are, in a very real sense, deluded.  If your understanding of reality is out of sync with what reality is, then sooner or later you are going to collide with reality.  In such collisions, reality always comes out on top.  It can’t do otherwise; it’s reality.  I don’t just believe this; I’m thoroughly convinced of it.

Of course, at times I run afoul of the multiple meanings and connotations of the word “belief”, and I understand that this is a legitimate issue.  After all, when someone counsels others to “believe in” themselves, they rarely mean for them to believe without restriction or reservation.  They’re just advising people to have confidence in their own abilities—not to think that there’s nothing at all that they can’t do, but to recognize that they do have abilities, and can accomplish many things.  It’s hard to feel too critical about such advice, if it’s not carried too far.  But even when we’re confident in ourselves—for good, sound, experiential reasons—it’s rarely good to believe that we’re the best in the world at everything.  It’s rarely even accurate to say that a given individual is the best in the world at anything.  If we can’t decide, from among a pool of a handful or fewer, who the world’s greatest basketball player is, then it’s pretty unlikely that there’s any one thing in the world at which any one person could readily be called the best.

We’re exhorted to believe in God, to believe in our country, to believe in a set of ideals, to believe in our political party, to believe in a philosophy—not as conclusions, but as starting points and, ultimately, as endpoints.  To maintain such beliefs persistently requires a near-paranoid protection from argumentation.  Such beliefs are delusional in character, even if they happen to be correct; if you believe something without having arrived at that belief honestly, then even if that belief happens to accord with reality, you can’t claim to be right.  You can only really consider yourself lucky to have stumbled into a valid conclusion.

I don’t want to believe; I want to be convinced by a body of reasoning, with my level of conviction always on a sliding scale, adjustable by new inputs of evidence and argument, and always—in principle—open to refutation.  If I’m not amenable to correction, I’m as much a victim of self-deception as a person who thinks he’s Napoleon.
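That sliding scale has a well-known formal counterpart in Bayes’ theorem, which specifies exactly how far a level of conviction should move when new evidence comes in.  Here is a minimal sketch in Python; the prior, the probabilities, and the “evidence” are all purely illustrative, not drawn from any real case:

```python
# Bayesian updating: conviction is a probability that moves with evidence.
# All numbers below are illustrative placeholders.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a hypothesis after one piece
    of evidence, via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start mildly skeptical of some claim...
conviction = 0.10
# ...then observe three independent pieces of evidence, each four times
# more likely if the claim is true than if it is false.
for _ in range(3):
    conviction = update(conviction, 0.8, 0.2)

print(round(conviction, 3))  # conviction has risen with each observation
```

Note that the process is symmetrical: evidence that is more likely under the claim’s falsehood pushes the same number back down, which is just the “open to refutation” clause in numerical form.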

I don’t really like to use the word “believe,” even in its more benign forms, such as when someone says, “I believe that’s true (but I’m not certain).”  I prefer to use such terms as “I think,” “I suspect,” “I’m convinced (beyond a reasonable doubt),” and “I wonder.”  When presented with a proposition that I consider highly unlikely, I like to use Carl Sagan’s polite but dubious phrase, “Well…maybe.”

In this, I’m much more in line with Scully than with Mulder.  I’m deeply skeptical of the whole panoply of paranormal perfidy such as that with which Mulder was obsessed, though I am open to being persuaded and convinced.  But I don’t want to believe, whether in the existence of the supernatural, or the rightness of certain political ideas, or in any religion, or in the power of positive thinking, or anything else without adequate support.  Faith, I think, is not a virtue, and I suspect that it never has been.  I’m convinced that doubt, reasonable doubt, is the virtue.  If that means that I’ll go through life never experiencing the untrammeled confidence of the true believer, that soaring, absolute conviction that I am on the side of right, and those who oppose me are not…well, good.  I’m sure non-sanity of that sort has its moments of joy, but I think I’d prefer skydiving or free solo rock climbing.  Those activities would be dangerous to me, but at least they’re unlikely to endanger anyone else.

Belief is dangerous, because when it collides with reality, the believer is quite often not the only casualty of the collision.  Often the results are explosive and cataclysmic.  So I don’t want to believe.  And I really don’t want you to believe, either.


*This is perhaps the least objectionable form of belief, but it can still be problematic, as the personality disorders of some public figures show.

The Irony of Entropy

If you’ve studied or read much physics, or science in general—or, more recently, information theory—you’ve probably come across the subject of entropy.  Entropy is one of the most prominent and respected, if not revered, concepts in all of science.  It’s often roughly characterized as a measure of the “disorder” in a system, but this doesn’t refer to disorder in the sense of “chaos”, where outcomes are so dependent upon initial states, at such high degrees of feedback and internal interaction, that there’s no way to know based on any reasonable amount of information what those specific outcomes will be.  Entropy is, in a sense, just the opposite of that.  A state of maximal entropy is a state where the outcome is always—or near enough that it doesn’t matter—going to be pretty much the same as it is now.  A cliché way of demonstrating this is to compare a well-shuffled deck of cards to one with the cards “in order”.  Each possible shuffled configuration is unique, but for practical purposes nearly all of them are indistinguishable, and there are vastly greater numbers of ways for a deck to be “out of order” than “in order”.

Let’s quickly do that math.  The number of orders into which a deck of fifty-two cards can be randomly shuffled is 52 × 51 × 50 × … × 3 × 2 × 1, traditionally notated as 52!  It’s a big number.  How big?

80658175170943878571660636856403766975289505440883277824000000000000.

To quote Stephen Fry on QI, “If every star in our galaxy had a trillion planets, and each planet had a trillion people living on it, and each person had a trillion packs of cards, which they somehow managed to shuffle simultaneously at 1000 times per second, and had done this since the Big Bang, they would only just, in 2012, be starting to get repeat shuffles.”  Now, how many ways are there to arrange a deck with each suit grouped together and its cards in increasing order (Ace to King), allowing the four suit blocks to come in any order?  If my math is correct, there are only 4! ways to do that, which is 4 × 3 × 2 × 1, or 24.  To call that a tiny fraction of the above number is a supreme understatement.  This comparison should give you an idea of just how potent the tendencies are with which we’re dealing.
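If you’d like to check these figures for yourself, Python’s arbitrary-precision integers make it nearly a one-liner:

```python
import math

# Number of distinct orderings of a 52-card deck.
shuffles = math.factorial(52)
print(shuffles)  # a 68-digit number

# With each suit sorted internally, only the order of the four
# suit blocks is free to vary: 4! = 24 arrangements.
ordered = math.factorial(4)
print(ordered)

# The "ordered" arrangements are an almost unimaginably small fraction.
print(ordered / shuffles)
```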

You could describe entropy as a state of “useless” energy.  Entropy is, famously, the subject of the Second Law of Thermodynamics, and that law states that, in any closed system, entropy always tends to stay the same or increase, usually the latter.  (The First Law is the one that says that in a closed system total energy is constant.)

When energy is “partitioned”—say you have one part of a room that’s hot and another part that’s cold—there’s generally some way to harness that energy’s tendency to equilibrate, and to get work done that’s useful for creatures like us.  Entropy is the measure of how far that energy has come toward a state of equilibrium, in which there’s no useful difference between one part of the room and the other.
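That idea has a classic quantitative form: the Carnot limit, which says that even an ideal engine operating between a hot and a cold reservoir can convert at most a fraction 1 − T_cold/T_hot of the heat flowing through it into useful work (temperatures in kelvin).  A small sketch, with purely illustrative temperatures:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work by any engine
    operating between two reservoirs (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# A 100 degree C "hot side" against a 20 degree C room (illustrative only).
eta = carnot_efficiency(373.15, 293.15)
print(f"{eta:.1%}")  # about a fifth of the heat can become work, at best

# As the gradient disappears, so does the useful work: maximal entropy.
print(carnot_efficiency(293.15, 293.15))  # prints 0.0
```

The second call is the “no useful difference” case from the paragraph above: equal temperatures, zero extractable work.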

This draws attention to the irony of entropy.  The tendency of systems to become more entropic drives the various chemical and physical processes on which life depends.  Energy tends to flow down gradients until there’s no energy gradient left, and it’s this very tendency that creates the processes that life uses to make local order.  But that local order can only be accomplished by allowing, and even encouraging, the entropy of the overall world to increase, often leading to a more rapid general increase than would have happened otherwise.  Think of burning gasoline to make a car go.  You achieve useful movement that can accomplish many desired tasks, but in the process, you burn fuel into smaller, simpler, less organized, higher-entropy states that, left alone, it would not have reached for a far longer time.  The very processes that sustain life—that are life—can only occur by harnessing and accelerating the increase of net entropy in the world around them.

Although it seems like the principle best embodied in Yeats’s “The Second Coming,” wherein he states, “Things fall apart; the centre cannot hold; / Mere anarchy is loosed upon the world,” entropy is held in highest regard—or at least unsurpassed respect—by physicists.  Sir Arthur Eddington famously pointed out that, if your ideas seem to contradict most understood laws of physics, or seem to go against experimental evidence, it’s not necessarily disreputable to maintain them, but if your ideas contradict the Second Law of Thermodynamics, you’re simply out of luck.  And Einstein said of the laws of thermodynamics, and of entropy in particular, “It is the only physical theory of universal content, which I am convinced, that within the framework of applicability of its basic concepts will never be overthrown.”

The reason the Second Law of Thermodynamics is so indisputable is that, at root, it owes its character to basic mathematics, to probability and statistics.  As I demonstrated in the playing card example, there are vastly more ways for things to be “disordered”—more arrangements of reality that are indistinguishable one from another—than there are configurations that contain gradients or differences that can give rise to patterns and “useful” information.  WAAAAAAY more.

The Second Law isn’t at its heart so much a law of physics as it is a mathematical theorem, and mathematical theorems don’t change.  You don’t need to retest them, because logic demands that, once proven, they remain correct.  We know that, in a flat plane, the squares of the lengths of the two shorter sides of a right triangle add up to the square of the length of the longest side.  (You can prove this for yourself relatively easily; it’s worth your time, if you’re so inclined.)  We know that the square root of two is an irrational number (one that cannot be expressed as a ratio of any two whole numbers, no matter how large).  We know that there are infinitely many prime numbers, and that the infinity of the “real” numbers is a much larger infinity than the one that describes the integers.  These facts have been proven mathematically, and we need no longer doubt them, for the very logic that makes doubt meaningful sustains them.  It’s been a few thousand years since most of these facts were first demonstrated, and no one has needed to update those theorems (though they might put them in other forms).  Once a theorem is done, it’s done.  You’re free to try to disprove any of the facts above, but I would literally bet my life that you will fail.
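For anyone who wants to try one of these, the irrationality of the square root of two is probably the quickest proof to reproduce; here, in compact form, is the standard argument by contradiction (not my own, of course; it goes back to the ancient Greeks):

```latex
% Proof that the square root of 2 is irrational, by contradiction.
\begin{aligned}
&\text{Suppose } \sqrt{2} = p/q \text{ in lowest terms, with } p, q \text{ whole numbers.} \\
&\text{Squaring: } p^2 = 2q^2, \text{ so } p^2 \text{ is even, hence } p \text{ is even; write } p = 2k. \\
&\text{Then } 4k^2 = 2q^2, \text{ so } q^2 = 2k^2, \text{ and } q \text{ is even as well.} \\
&\text{Both being even contradicts ``lowest terms,'' so no such fraction } p/q \text{ exists.}
\end{aligned}
```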

The Second Law of Thermodynamics has a similar character, because it’s just a statement of the number of ways “things” can be ordered in indistinguishable ways compared to the number of ways they can be ordered in ways that either carry useful information or can be harnessed to be otherwise useful in supporting—or in being—lifeforms such as we.  Entropy isn’t the antithesis of life, for without its tendency to increase, life could neither exist nor sustain itself.  But its nature demands that, in the long run, all relative order will come to an end, in the so-called “heat death” of the universe.

Of course, entropy is probabilistic in character, so given a universe-sized collection of random elementary particles, if you wait long enough, they will come together in some way that would be a recognizable universe to us.  Likewise, if you shuffle a deck of cards often enough, you will occasionally shuffle them into a state of ordered suits, and if you play your same numbers in the Powerball lottery often enough, for long enough, you will eventually win.
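For the curious, the odds in that last example are easy to compute.  The sketch below assumes the current Powerball format of five numbers drawn from 69 plus one “Powerball” from 26; the rules have changed before and may change again:

```python
import math

# Jackpot odds under the 5-of-69 plus 1-of-26 Powerball format
# (an assumption about current rules; they do change over time).
combinations = math.comb(69, 5) * 26
print(combinations)  # 292201338: about 1 in 292 million per ticket

# At, say, two tickets a week, the average wait for a jackpot is:
tickets_per_year = 2 * 52
print(combinations / tickets_per_year)  # roughly 2.8 million years
```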

Want my advice?  Don’t hold your breath.

Playing with space-time blocks

According to General Relativity, our experience of space and time is a bit like seeing shadows in a higher-order, four-dimensional space-time.  This is probably not news to many of you; the basics of Relativity have become almost common knowledge, which is no doubt a good thing.  But many people may not realize that the tenets of General Relativity and Special Relativity, with their abolition of simultaneity or of any privileged point of view in space-time, also imply that the entire past, and the entire future, of every point in space and every moment in time, already or still exist, permanently.  I’m not going to get too much into the hows of this—I refer you to, and heartily recommend, Brian Greene’s The Fabric of the Cosmos, which has an excellent explication of this notion.

The upshot of this principle is that, in a very real sense, our past is never gone, but is still there, just where it was when we lived it.  Similarly, the future is also already in existence (applying time-specific terms to such things is a little iffy, but we use the words we have, I suppose, even though we must accept them as metaphors).  In this sense, a human life is not an isolated, ever-changing pattern in some greater, flowing stream so much as a pre-existing rope of pattern in a higher-dimensional block of space-time, like a vein of gold running through a fissure in a rock formation.  Its beginning is as permanent as its end.

We know that General Relativity cannot be absolutely and completely correct—its mathematics breaks down at singularities such as those in the center of black holes, for instance.  But within its bailiwick, it seems to be spectacularly accurate, so it’s not unreasonable to conclude that it’s accurate in the above description of a human life—indeed, of all events in the universe.

But what does this mean for us?  How does it impact the fact that we experience our lives as though the sands of the future are flowing through the narrow aperture of the present to fall into the receiving chamber of the past?  How does General Relativity interact with consciousness?  We seem to experience the present moment only as an epiphenomenon of the way fundamental principles translate themselves into chemistry and biology as measured along some fourth-dimensional axis.  We can’t decide to reel ourselves backward and reexperience the past, or fast-forward into the future, even though it seems that our existence has much in common with the permanently-encoded data on a digital video file.  We cannot choose to rewind our lives any more than can the characters within a movie we are watching.

Similarly, according to this implication of General Relativity, we could not, even in principle, have lived our past differently.  Were we to rewind and then replay events, they would work out exactly as they had before, just as a movie follows the same course no matter how many times you watch it.  The characters in a movie might learn later in the film that they had made some tragic error, yet when you rewind the show, they revert to their previous selves, ignorant of what they are always ignorant of at that point in time, subject to the same story arc, unable to change anything that they did before.  Likewise, it’s conceivable that, when our lives end—when we reach the point where our pattern decomposes, diffuses, and fades—we may go back to the start and reexperience life again from the beginning.  (This depends heavily on what the nature of consciousness is).  Indeed, we may be constantly reexperiencing it, infinitely many times.

Though this seems to be a kind of immortality, it’s not a particularly rewarding one, as we wouldn’t gain anything no matter how many times we replayed our lives.  For those of us with regrets it would be a mixed blessing, at best.  For those who have endured lives of terrible suffering, it seems almost too much to bear.  But, of course, reality isn’t optional.  It is what it is, and there is no complaint department.

Ah, but here’s the rub.  We know, as I said, that General Relativity cannot be quite right; crucially, it does not allow for the implications of the Uncertainty Principle, that apparently inescapable fact at the bedrock of Quantum Mechanics.  Quantum Mechanics is, if anything, even more strongly supported by experiment and observation than is General Relativity; I’m aware of no serious physicist who doesn’t think that General Relativity will have to be quantized before it can ever be complete.

But of course, as the name implies, the Uncertainty Principle says that things are—at the fundamental level—uncertain.  How this comes about is the subject of much debate, with the two main views being the “interaction is everything; the wave-function just collapses, probabilities turn into actualities, and there’s no point in asking how” stance that is the Copenhagen Interpretation, and the Many Worlds Interpretation, originated by Hugh Everett, in which, at every instance where more than one possible outcome of a quantum interaction exists, the universe splits into appropriately weighted numbers of alternate versions, in each of which one of the possible outcomes occurs.  It’s hard to say which of these is right, or if both are wrong—though David Deutsch does a convincing job of describing how, among other things, quantum interference and superposition imply the many-worlds hypothesis (see his books The Fabric of Reality and The Beginning of Infinity).

But what does the Everettian picture imply for our higher-dimensional block space-time that is at once all of space and time, already and permanently existing?  Are there separate, divergent blocks for every possible quantum divergence?  Or does the space-time block just have a much higher dimensionality than merely four, instantiating not just one but every possible form of space-time at once?

If this is the case, why do we conscious beings each seem to experience only one path through space-time?  Countless quantum events are happening within and around us with every passing Planck time (about 10⁻⁴³ seconds).  The vast majority of these events wouldn’t make any noticeable difference to our experiences of our lives, but a small minority of them would.

This is the new thought that occurred to me today.  It’s thoroughly and entirely speculative, and I make no claims about its veracity, but it’s interesting.  What if, whenever we die, we start over again, as if running the DVD of our lives from the beginning yet again, but with this important difference:  Each time it’s rerun, we follow a different course among the functionally limitless possible paths that split off at each quantum event?  Even though most of these alterations would surely lead to lives indistinguishable one from another, everything that is possible in such a multiverse is, somewhere (so to speak) instantiated.  Reversion to the mean being what it is, this notion would be hopeful for those who have suffered terribly in a given life, but rather worrisome for those who’ve had lives of exceptional happiness.  At the very least, it implies that there would be no sense in which a person is trapped in the inevitable outcome of a given life.  You can’t decide to behave differently next time around, but you can at least hope that you might (while reminding yourself that you may do even worse).

Of course, all this is beyond even science fiction—well, the earlier parts aren’t, just the notions of a person’s consciousness reexperiencing life, either the same or different, over again.  But it was and is an interesting thought to have on a lazy, early Sunday afternoon in the spring of the year, and I thought I would share it with you.

We shouldn’t assume that we know other people’s motives and character based on limited data (and it’s almost always limited)

I have a long and very important letter to write today (I haven’t been this nervous about writing something since college), so I’m going to keep this relatively short, but I did want to write something, at least.  It’s on a subject that troubles me quite a bit: the apparent tendency—at least on social media—for people to act as if they were telepathic or clairvoyant regarding other people’s motives and thoughts.

It happens so easily, and probably without much thought (probably without much ill-intent).  We see a post or declaration, or a political or social statement, and we infer from it all sorts of things about the source’s character, intentions, and morality.  It’s remarkable that we imagine we’re so good at such interpretations, since most of us very rarely have any idea what our own motivations and deeper thoughts are.  It’s apparently true that often we can recognize by facial expression and body language how our friends and colleagues are feeling more clearly than they recognize it themselves, but this is broad and crude.  Recognizing that someone is sad or angry before they realize it themselves doesn’t give us any reason to think we know why someone is sad or angry.

Yet if a person posts a meme supportive of the Second Amendment—or conversely, one supportive of stricter gun control—those who see this meme often seem to draw far-reaching conclusions, straw-manning the person and their supposed motivations.  The sharer must be a right-wing, racist, homophobic, misogynistic, anti-government bigot, say; or alternatively, they must be a “regressive leftist,” communist, SJW, crusading vegan, who wants to emasculate all men.

How many people in the world would meet those descriptions accurately?  There are probably a few—there are a lot of people in the world, after all, and the Gaussian is broad.  But surely, most people don’t honestly fit into any such broad stereotypes.

Of course, maybe I’m making my own error by guessing that people perform such acts of unwarranted attribution based on limited statements and data.  Maybe I’m straw-manning the people online.  Certainly, there are many to whom I’m being unjust—or would be if I were thinking of them.  But mostly, I’m thinking of the people who respond to trolling and counter-trolling, and the ones who take part in internet-based debates that rapidly, or immediately, degenerate into name-calling matches of which most six-year-olds would be ashamed.

I wonder how people can feel comfortable engaging in such interactions on a regular basis.  Perhaps the anonymity, or pseudo-anonymity, of the online world helps people let slip their baser natures more easily.  We are free from the subtle cues of body language and expression that, as I stated above, give us a sense of how our interlocutor feels.  Also, the nearly-automatic echo-chamber effect of social media tends to reinforce our sense of identity as a member of a particular group, and that leads us to be more inclined to react to perceived outsiders as enemies—this is probably both defensive and a matter of “virtue-signaling,” or what would probably be better understood as tribe-signaling.  We are declaring to those in our tribe that we are members in good standing, and thus should remain welcome.

A similar phenomenon might be behind why a lot of people, many of whom don’t honestly subscribe to the tenets of their stated religion, continue to go to church (or mosque, or synagogue, or whatever) on a regular basis.  They demonstrate not their actual beliefs, but that they are committed members of the tribe.

This is, I suppose, often relatively harmless.  But it is anathema to honest discourse.  And it’s only through honest discourse (as far as I can see) that we can come to an ever-improving model of the world, to come nearer to truth and understanding.  We can see how tribalism and partisanship, a reflexive judgmentalism and name-calling, have poisoned much of our political system, creating deadlocks even in a government currently dominated by a single political party.  Nothing gets done—or at least very little does—when those involved are just trying to demonstrate their “virtue” by assailing those on the other side.  At least, it seems like that’s what they’re doing.  Maybe I’m misjudging.

I don’t know what the fix for this tribalism is; it seems to be something innate in the human character.  But it’s surely not the only thing, or we would never have created modern civilization.  Perhaps a place to start, a small step, would be for us to try to curtail our instinct to lead with, or to respond with, accusation and insult.  If we think we know someone else’s motives, we should stop and think again before believing ourselves.  If we want to bring a point of criticism to their attention, instead of reflexively spewing, “It’s gross!  It’s racist!” we might start by saying, “I don’t know what your intentions are here, but when you say something like that, it comes across—to me at least—as racist.  Is that what you wanted?”

I don’t know if that will work better or not, but I’d love to see the experiment tried on a large scale.  In the meantime, remember, just because you infer something doesn’t mean that it was actually implied.