A TIME TO LOOK BACK, A TIME TO LOOK FORWARD

By Paul M. Lewis

Fifty years ago this month, I had just left the monastery where I’d lived as a monk for the previous seven years. I was twenty-one years old, struggling to find myself in a world that was as totally unfamiliar to me as the inside of a silent monastery is to most people who have never lived in one. This “outside world,” as we called it and as I even then thought of it, was loud and overbearing, seemingly both uncontrolled and uncontrollable. If I had landed on an alien moon, or a planet somewhere on the far-off edges of the galaxy, I am not sure I would have found it all that much less strange or intimidating. To me, this new world was exotic, incomprehensible, and in conflict with everything I had come to think of and rely on as familiar and stabilizing.

It had been my choice to leave. I knew I could no longer remain locked behind monastic gates, not with the kind of desires I had. I was a young gay man whose hormones were roiling and boiling, but as a monk, I had kept my vows, holding those sometimes almost overwhelming impulses under a kind of control (the “white-knuckle” kind, as people in AA say), and had refrained from all carnal contact with other monks. Much later, I learned that many other young monastics had not, but I took the vows I had pronounced as sacred promises and followed them to the letter. My plan, as bizarre as it may seem to anyone now, was to leave, begin dating girls, which would magically cure me of otherwise unwanted desires, and then eventually rejoin the monastery after I had become “normal” again. There is hardly any need to say that this did not happen, could not have happened, and for that I am now more grateful than I can probably ever express.

It’s not much of an exaggeration to say that during this first year on the outside, that first summer in particular, I was in a nearly constant state of trauma. I flew home—for the first time on my own—from Washington, DC, where the monastery was located, to upstate New York, where my mother and brother lived. But I stayed there for only a few weeks, as I had applied for and been awarded an NDEA (National Defense Education Act) scholarship as a future teacher of French. The eight-week immersion course (all day, every day, only French to be spoken) was located at the University of Missouri at Columbia. Twenty-five people from all over the United States had been accepted and formed our group. The fact that probably two-thirds were women, while not exactly a surprise, nonetheless came as a kind of existential shock to me. Up until that point, I had never in my life spent so much time with young women my own age, and I found it both terrifying and enlightening. It was the beginning of a long learning curve for me, during which I slowly came to realize, and to appreciate enormously, that the female sensibility was different from that of men, and that women have marvelous, even almost magical insights to offer.

Even so, dating women—as I had promised myself I would do—was not easy. I had no notion of what to expect, nor of what they might expect from me, or how to respond if, in fact, they wanted something I could not provide. I dated Martha first, and found myself liking her very much, though only as a friend, even going so far as to visit her in her family home on Cape Cod. Later on, I dated Bea, perhaps because she looked kind of boyish, but I found her unsettlingly aggressive, almost to the point of making me want to flee. And when it became clear that I was supposed to be kissing her, but did not, she said something that, even now, all these years later, I cringe to remember: “What? Do I have spinach in my teeth, or something?” As much as it is a useless and futile exercise to regret anything in life, I have to say that I am nevertheless extremely sorry for what I put them through. I know I did my best, but they were both fine women, good human beings, and they deserved better than I was able to give. No doubt they have gone on to have happy and fulfilled lives; or so I hope, anyway.

At the same time, I was struggling at least as much with my faith. More and more, I began to realize that I could no longer believe in a rigid, overly doctrinaire, and uncompassionate Church, one that had once been the mainstay of not just my spiritual life, but of my psychological and emotional life as well. This bedrock of what I had felt to be my identity was rapidly beginning to shatter. Everything I knew or was familiar with had begun to flow away, until soon it became a river in flood stage, a torrent that carried off whatever had previously seemed solid and stable. I was drifting with nothing to cling to. I did not want to confide in my mother, as she had troubles enough of her own, mourning the passing of her husband, my father. And my brother was a young straight man who spent his time in the local bars with his factory-worker buddies. I felt I hardly knew him.

But as difficult as all this was, and as lost as I felt, I also realized at some level that it was the beginning of a new and exciting life, something I had never before envisioned for myself. The particular Catholic religious order I had been a member of was made up of teaching brothers. As such, while a monk, I’d also been a student at the Catholic University of America in Washington, DC. Upon leaving, I had one year left to go before getting my bachelor’s degree in French Literature, and I did so at the State University of New York at Albany.

I could not rely on my mother for money, as she had none, and so while finishing my last year at university, I also worked every night, and all day Saturday. The job I got was in a local reform school for teenage boys, working in the recreation department. Obviously, institutional settings somehow attracted me, even if this one represented what might be thought of as a darker version of the monastic environment. But regimens, schedules, and organized, bureaucratic settings, even institutional food and fixed, predictable mealtimes, clearly defined my comfort zone. And somehow, I instinctively knew how to empathize and interact with boys who felt bereft and alone, even if they did put on a tough and sometimes off-putting front.

That first year on the outside is one I will never forget. It taught me that I could face what had once seemed frightening and even unbearable to me with a degree of courage and resilience. That said, it was still a long time before I began to feel even minimally adequate, the beginnings of an ability to take care of myself in a world that often felt alien, hostile, and simply inexplicable. Sometimes the smallest task would throw me, a thing that I knew I should know how to do, but did not. The first time I had to make a doctor’s appointment, for example, I remember thinking: “How exactly do you do that?” Until I made myself take this on, I had no clue that it was as simple as calling and scheduling one at a convenient time. That was the degree of my inexperience in the world. Virtually every day was an occasion to learn something new, to be frightened and utterly perplexed, and then slowly to come to see how I was supposed to conduct myself. I didn’t always like what I saw, or even understand it, but ultimately I decided that this was how to make my way toward a hoped-for adulthood, a sense of maturity, from the Latin maturus—as I knew—meaning “ripe.”

The curious thing is that I feel as though I am still learning, all these years later. When does one reach maturity? When are we fully ready to adequately face the unknown, the continuing, ever-changing challenges of life? Perhaps only when we leave this world. As the ripened fruit falls, so ends our struggle to grapple with life. Everything that I have faced and found, the summonses, the dares, the provocations, as well as the prizes, the great rewards that come to fill our hearts and minds, all have been worth the effort. This is the comfort that comes with seeing things from an older perspective.

So, I have learned something in these fifty years. And if it is not as much as I could or should have, at least I do know this: Nothing in life goes to waste. Everything we experience contributes to who we are, to our understanding of our rightful and fitting place in a sometimes wild and unpredictable, but always—ironically—a perfect, and beautifully ordered world.

PLEASE HEAD TOWARD THE EXIT IN AN ORDERLY FASHION

By Paul M. Lewis

The Brexit vote this past week came as a great shock to almost everybody, even to those who supported Britain leaving the European Union. And the fact that the decision to leave won by more than a million votes was perhaps even more surprising. The British bookies, too, lost their shirts, since they had set their odds on the assumption that the UK would remain part of the EU. What happened? Why would so many people want the United Kingdom to part company with the union of European states it had, if slowly and somewhat reluctantly, joined over forty years ago?

There are many answers to that question, as pundits have been reporting for some time now. Top among them is that many British voters, especially the English (as opposed to the Scots, the Northern Irish, and some of the Welsh), felt as though they were somehow losing their country to immigration. Within that context, many feared specifically for their jobs, in particular those that newcomers might qualify for if they did not come with a great deal of education or experience. Additionally, there has long simmered a feeling among many that the Englishness of England was becoming a thing of the past. That may in fact be true, if things are viewed in the short term. For the past several hundred years, England has been more or less white, Christian, and of course Anglo-Saxon. It’s worth remembering, though, that those early Germanic settlers were not always there. According to most accounts, the Anglo-Saxons began arriving in the 5th century. They did not come all at once, instead arriving incrementally over two hundred years or so, while slowly intermingling with the original Celtic inhabitants and the remnants of the Romans who had settled there.

The Celtic languages had previously been spoken for centuries, with Latin coming to replace them as the language of business and culture around the middle of the first century of the Common Era (CE). Later, the Germanic dialects of the Angles, the Saxons, and the Jutes—grouped together and coming to be known as Old English—began to meld with, and finally replace, both Celtic and Latin; the one exception being that Latin continued for many hundreds of years as the language of the church and of education. French, too, could be added as an influence, after the Norman Conquest of 1066.

The point here is not to attempt, in so short a space, a history of the English people, but merely to point out the multicultural and multilinguistic heritage of England. It wasn’t until the 8th century, for example, that the famous historian Bede wrote his Historia Ecclesiastica Gentis Anglorum (Ecclesiastical History of the English People), a time when one could say that England was just becoming English, and so needed a history of its own to explain itself. Bede finished his great work in 731 CE, some 1,285 years ago. On a planet that is four and a half billion years old, and within the context of modern humans having evolved some one hundred and fifty thousand years ago, it’s not unreasonable to think of this as a relatively short period of time. Indeed, humans have been living and interbreeding among tribes and races ever since the beginning.

Given this longer historical framework, it’s a fair question to ask: What exactly is meant when people say that they want to keep England English? Or keep America American, for that matter? No one needs a lesson on the multicultural, multiethnic, multireligious, and multilinguistic heritage of the United States. Even the Native Peoples of this continent have probably been here for only 10,000 to 20,000 years, depending on which archeologist you believe. A long time in terms of human memory, to be sure, but not so long from other perspectives. Who, therefore, owns a country and its heritage? And what is a country, in the end, but a more or less arbitrary system of geopolitical borders? Granted, within those borders there is a shared history (for however long, or short, it may be), often a shared language, and, to an extent anyway, shared religio-cultural values. But there is nothing to say that these borders, or these shared elements of human culture, are forever immutable. That’s not meant to imply that people can’t take a kind of pride in their shared history, only that they should remember at the same time that the narrative chronicle of any country is always a relatively brief one. Countries, whole empires, that once considered themselves solid and unchanging have come and gone, and today we dig up from the dust artifacts that once belonged to glorious nations no longer in existence. Nor should we forget that, not so far back, we all came from the same roots.

Britain has made its choice to leave the EU, even as some call for a re-vote, a new referendum, now that the sober light of day is starting to reveal the magnitude of what has been done. I do not believe that this will happen. The die has been cast, and the United Kingdom—or some form of it, if Scotland and Northern Ireland eventually choose to opt out—will have to make the best of things. Indeed, there is chaos enough already in attempting to make sense of the consequences of the vote and to figure out how to disengage from the European Union without too much more damage being done. Further uncertainty and chaos, in the form of a new campaign for and against another vote on Brexit, is not needed. What is best now is to move toward the exit in an orderly fashion, while preserving as much economic, political, and social stability as possible.

But neither does this mean that the enormity of the decision shouldn’t be studied in depth. It should, in fact, be dissected as cleanly and as clearly as possible, so as to understand both how and why it came about, and what it means in terms of how the British people now think of themselves. Other countries too ought to investigate the whys and wherefores of the vote, in order to understand how similar trends, feelings, and beliefs play out among them, and what that may portend.

Surely, the European Union itself, as a political entity, is not without some culpability. It is all too easy to find fault with the supposedly ignorant (as some would have it) voters in Britain who chose to leave the union. But there is little doubt that the bureaucracy of the EU is itself partially to blame, as it has become an unresponsive and inflexible monolith. As a result, many people—not just the British—believe they have had no real representation in Brussels. Americans in particular ought to remember what happens when a group suffers under the onerous and unfeeling mandate of a government that levies taxation without at the same time providing for equal and fair representation.

That said, I continue to believe that the Brexit was a grave mistake. The flaws of a system can surely be overcome, if there is enough political will to do so. The ideal of a common union of nations is a grand one, especially on a continent that has been the genesis of two utterly devastating world wars. What is needed now is not the resurgence of more and more nationalism, not walls, literal or metaphorical, but a wider, a more inclusive, a more open and welcoming embrace of humanity. In that sense, we can all learn from this serious mistake made so recently by the United Kingdom. And in the process, with luck and a good deal of work, perhaps we can also help our British cousins mitigate, or even begin to reverse, some of the more deleterious effects of so short-sighted a decision.

COSMIC MYSTERIES AND OUR NEED TO KNOW

By Paul M. Lewis

Watching Stephen Hawking’s “Genius” series on PBS recently has reminded me what fascinating topics theoretical physicists study. They specialize in asking such big questions as “Where did the universe come from?” and “Is there a center to the universe?” And while it’s true that there has always been a degree of contention in regard to how these questions are answered, there is at least general consensus on the Big Bang itself, that is, the very beginning of the universe. That term may be a bit misleading, however, in that physicists do not believe it to have been an actual explosion. In fact, the term Big Bang was coined as a kind of put-down of the theory by an early doubter. Instead of an explosion, it was probably more of an expansion, followed almost immediately by what physicists call “inflation,” a period in which the infant universe ballooned outward in all directions at an almost inconceivable rate. And the universe continues to expand even now, 13.7 billion years after that initial event. No less a figure than Einstein himself long doubted the idea of an expanding universe, but even he finally came to accept it, due to the patient observations of another renowned scientist, the great astronomer Edwin Hubble.

How did the Big Bang come about in the first place? Where was it located? And doesn’t it make sense to think of it as having somehow occurred in what might be thought of as the center of the universe? These are all legitimate questions to ask. The answer to the first, that of how the Big Bang came about, is very simple: no one knows. In that sense, it remains, at least for now and in the absence of further scientific breakthroughs, more or less a philosophical or a theological question, although naturally scientists do continue to explore it. Regarding the query about the Big Bang having occurred at the center of the universe, the problem is one of logic. To think in locational terms assumes there was some “place” to be. However, there could have been no place for the universe to begin in until there actually was a universe. In other words, how could there have been a physical place before there was such a thing as space to be in? Another way to think of it is that everywhere is the center of the universe: the Big Bang happened not at some point in space, but to space itself.

Before the Big Bang, nothing at all existed. It’s extremely difficult for us to conceive of nothingness. Language itself begins to break down, but it’s clear that nothing cannot be “a thing.” The definition of nothing is “no thing,” a complete non-existence of whatever can be perceived by our senses. How can we imagine what this might be like? Some might suggest we can envision it in terms of outer space being a vacuum, that is, of its “containing nothing,” again, as if nothing could somehow be contained. But even that is not the case, since physicists now understand that space is actually permeated by Dark Matter. And as much as Dark Matter is imperceptible to our senses, it is known to comprise some 80% of all of the matter in the universe. Normal matter that can be seen (i.e., asteroids, comets, stars, planets, galaxies, cosmic gas, as well as you and I and all the creatures of the earth and of any other planets) therefore accounts for only about a fifth of the matter in the universe.

Theoretical physics routinely deals with imponderables. It works at the edges, at the border between science and philosophy/theology, between what can be known empirically and what can be inferred, or imagined, or intuited. Take another question that physicists are currently studying, that of the multiverse. The idea is that there may be many universes aside from the one we live in. Some even suggest that evidence points to there possibly being an infinite number of these universes, all existing in parallel form. In part, this originates from studies done by the Austrian physicist Erwin Schrödinger. Schrödinger is one of the founding fathers of Quantum Mechanics, which studies the mysterious workings of the micro world of atoms and subatomic particles. He posited that a quantum state is the sum, or the “superposition,” of all possible states, hypothesizing in his famous “cat experiment” that an imagined feline in a box could be both dead and alive, and that we simply point to one or the other state as a kind of convenience, a sort of bookkeeping device, only knowing which it is when the box is opened and the cat is observed. Additionally, according to another famous student of the field, Werner Heisenberg, there are strict limits on what can be known about quantum particles, which in some sense exist in multiple locations simultaneously. His Uncertainty Principle holds that the more precisely a particle’s position is measured, the less precisely its momentum (and hence its speed) can be known, and vice versa. Some subatomic particles even appear to spring spontaneously, if fleetingly, into existence from nothing. All this happens at the tiniest—the quantum—level.
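For reference, Heisenberg’s principle has a precise mathematical form. The standard textbook statement relates the uncertainty in a particle’s position to the uncertainty in its momentum:

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\]

Here \(\Delta x\) is the uncertainty in position, \(\Delta p\) the uncertainty in momentum, and \(\hbar\) the reduced Planck constant; squeezing either uncertainty smaller necessarily makes the other grow.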

String Theory, on the other hand, grapples with the workings of gravity, the force that governs the vastness of the universe, and may ultimately help explain both Dark Matter and Dark Energy (the latter being the mysterious force that is thought to drive the expansion of the universe). The holy grail of modern physics is to come up with a theory that would adequately explain the universe using both the laws of Quantum Mechanics and those of Einstein’s General Theory of Relativity, which deals with gravity and the macro universe. So far, unfortunately, no physicist, genius or otherwise, has been able to formulate such a unified theory, popularly referred to as a “theory of everything.”

As for the multiverse, speculation on that question has not yet risen to the level of an actual theory. In fact, it is useful to remember exactly what is meant, in scientific terms, by the word theory. What it is not, though many non-scientists believe otherwise, since this is how the word is used in everyday speech, is a kind of guess—as in, “your theory (of whatever) is as good as mine.” Instead, scientifically, a theory is a system of ideas meant to explain something, based on principles independent of the thing being explained. Thus, we speak of the Theory of Evolution—which is not a guess at all, but a hypothesis that has been tested and retested over the years and has held up beyond any reasonable doubt. This is also the case with Quantum Mechanics, whereas String Theory (admittedly a confusing use of the word) has not yet been fully accepted by the scientific community as a whole.

So, we see from merely a short sketch that there are myriad puzzles, inconsistencies, and mysteries in the universe. Any number of others could be added, such as the inexplicable nature of Black Holes, and other singularities like the Big Bang itself. How the two are alike, or not alike, is as yet unknown. And what happens to Space-Time when it enters a Black Hole, if even light itself cannot escape that immense gravitational pull? Does intelligent life exist on other planets? How did self-reflective consciousness come about? And what exactly is antimatter, which was created at the time of the Big Bang? In principle, when antimatter comes into contact with matter, the two annihilate each other. So, how do we exist? One possible explanation is that, in the early universe, there was one extra particle of matter for every billion matter-antimatter pairs. And is this a matter of luck, or something more mysterious, more mindful?

Which ultimately brings us to the question of God, or if you prefer, some ultimately unknowable Universal Intelligence. How does he—or she, or it—fit into the picture? Does he exist? My own theory, to use the everyday vernacular form of the word, is that he does, and the way toward understanding him lies within, in private, not out there in the practices of organized religion. As Einstein once famously said: “Teachers of religion must have the stature to give up the doctrine of a personal God, that is, give up the source of fear and hope which in the past placed such vast power in the hands of priests.”

To be sure, science can help point the way, by examining the mysteries of the universe that we somehow have an innate longing to comprehend. Even if we never get there by using the scientific method, or generally through the normal processes of the human mind, at least we know we are trying to elucidate these ultimate questions. And if, as I believe, there is a Mystery Beyond All Mystery, one we will never fully plumb with our ordinary minds, then I should think such a Divine Being would really be pleased with the efforts of his clever, curious and ever-striving creatures.

WHAT DOES IT TAKE TO MAKE A COUNTRY GREAT?

By Paul M. Lewis

A number of people have recommended to me an article by the brilliant, conservative-leaning intellectual (and graduate of Oxford and Harvard) Andrew Sullivan, published in a recent edition of New York magazine and entitled “Democracies End When They Are Too Democratic.” Its subtitle goes on to say, “And right now, America is a breeding ground for tyranny.” In it, Sullivan makes a convincing case for the notion that over time democracies become almost too democratic, what he calls hyper-democratic, and as such they tend to implode. Within that context, he quotes Plato, who tells us that “tyranny is probably established out of no other regime than democracy.”

Although Sullivan still maintains that democracies are wonderful places to live, he says—no doubt rightly—that nothing lasts forever. Indeed, the excesses of democracy are all too often seen in the passions and the tyranny of the mob. The Founding Fathers did what they could to temper this, but over time such protections have eroded. As an example, just look at the untrammeled chaos, the blind furor of the zealots in the current primary season. Sullivan refers to this as “last stage political democracy.”

The excesses of social media, seen on Facebook and Twitter and elsewhere, are further examples of unregulated democracy. If it were not so, why would China and other repressive regimes (North Korea also springs to mind) want to limit, or even forbid, access to them? The web itself has virtually no monitors, no elite experts who can serve as intellectually legitimate analysts to correct errors, or to call a lie a lie. Either that, or there are so many claiming to be experts that, in the end, no one knows who is legitimately so and who is not; there is no longer anyone to moderate people asserting themselves or their pet ideas, or to say, “No, what you are claiming is misleading, untrue, even immoral.” Hyper-democracy, in other words, seems to bring us to the point of what might be called hyper-equality, wherein the thoughts, feelings, and opinions of each person are sacrosanct (we are all equal, after all) and automatically asserted to be on the same level as those of everyone else, no matter how unskilled or inexpert they may be. Where, then, are judgment, circumspection, logic, and prudence, let alone wisdom? As a result, we get a presumptive Republican nominee for the highest office in the land in Donald Trump, the very epitome of uncouth, uncultured, uneducated, even unprincipled self-aggrandizement. In other words, the brashest, to say nothing of the richest, gets to speak the loudest and rises to be the leader of the pack. As Orwell wrote so presciently back in 1945, speaking, ironically, about communism: “All animals are equal, but some are more equal than others.”

As brilliant an essayist as Sullivan is, and as thorough and insightful an analysis as the article provides (I highly recommend that it be read in its entirety), it seems to me that virtually any political system can ultimately devolve into tyranny, and that democracy is no more susceptible to doing so than any other. I suppose it could even be asked: how many other forms of government are there, aside from democracy itself and tyranny? Just look at two of the other largest and most powerful countries of the world, Russia and China. Nobody would accuse either of them of ever having been hyper-democratic, as much as Russia may have made a few tentative steps toward democracy once communism fell. There is little doubt today that each is caught up in the throes of an increasingly repressive dictatorship. Indonesia can be cited as another example of a country that went through the horrors of the tyrannical Suharto regime, only to emerge briefly and hopefully into the light of democracy, having elected Joko Widodo (aka, Jokowi) in 2014; sadly, however, he now appears to be leading his country back towards a form of hyper-religious rigidity, if not outright dictatorship. Virtually all of the promising Arab Spring movements toward democracy, too, have surrendered to dictatorship and tyranny. Gen. el-Sisi in Egypt, as just the latest example, has taken away most of the rights of civil society that hopeful democrats had, not so long ago, thought to be within their grasp. And look what happened in Libya once the hated dictator fell, with help from the democratic west. Can it be said that the tyranny of a dictator was any worse than the tyranny of warring clans, or the horror of an emerging ISIS? The point once again is that these, and many other countries that could be cited, collapsed into oppression and subjugation, not out of a context of hyper-democracy, but out of either the chaos of their own recent history, or a long-standing predilection toward autocratic rule.

My fear is that people generally—no matter what form of government they live under—have a built-in penchant, even a longing, for a “big daddy” who will take control, rule their lives, and tell them what to do and when to do it. All too often, we want to be relieved of the burden of having to think, analyze, and make difficult decisions on our own. This may be especially so when the world becomes even more complex and confusing than it normally is, or when outside factors over which most of us truly have little or no control, things such as the globalization of the world economy and the terrible effects of the ever-increasing warming of the globe, come into play. When this happens, people become desperate for plain, simple answers, ones they either do not want to parse out for themselves, or feel incapable of grappling with. They want relief from the burden of living in a more or less constant state of questioning, uncertainty, and unpredictability. When such times come about, the Trumps of the world rush in to offer surety, decisiveness, and an ability to get things done now, not after endless dithering and debate, while democracy makes its slow, messy, erratic, moody, and unpredictable way forward. The supporters of Donald Trump, like those of Xi in China, or Putin in Russia, or Jokowi in Indonesia, or Erdogan in Turkey—many others could be added—want certainty in an uncertain world, and are all too willing to go along with the scapegoating of disempowered minorities by way of easy explanation.

As simple as it sounds, it takes a lot to live with ambiguity. It takes a kind of centeredness within oneself, a sureness of who one is, and a belief that this identity will not change, no matter what happens out there in a disordered and topsy-turvy world. But that is not easy. Many of us (myself included, I admit) are not all that comfortable with change; we find it unsettling, disconcerting, and unnerving. But the world is, by its very nature, variable, fluctuating, inconsistent, an unpredictable place in which to live.

Still, while all of this may certainly be true, it does not relieve each of us of the responsibility of facing the world head on, whether shivering in our boots or cursing with all our might against the vicissitudes of ill-starred fate. Donald Trump, with his simplistic promises of making America great again, and his pointing a finger at whoever his latest scapegoat may be—criminal illegal aliens stealing American jobs, or terrorist Muslims hiding behind every bush, ready to pounce on an innocent and unsuspecting populace—will not be able to rescue us, no matter how much anyone may want him to.

Democracy, even with all of its flaws and failings, and its all too human tendency toward chaotic imperfection, is still always better than dictatorial tyranny. And if, as Sullivan notes, hyper-democracy can be a gateway to autocratic totalitarianism, then so be it. If this is the case, it’s up to each of us to prevent that from happening. Who else is there to do it? If we can learn to be more comfortable with ambiguity, and take on a little more responsibility for informing ourselves and making things right that have gone wrong, then maybe we don’t need someone out there to do that for us.

Maybe America already is great, not because Donald Trump asserts that he can make it so, but because we, the people—you and I—are capable of taking on the task of responsible self-government. In the end, it’s up to us to make some mature decisions and not opt for the easy fantasy of an imperious and domineering generalissimo riding in to deliver a hoped-for, if ever elusive, rescue. It’s our choice and, with hard work and determination, we really are capable of making democracy work for all of us, no matter what late stage our political life may find itself in.

THE BENEFITS OF MEMORIZATION: OR HOW BEST TO GET A POEM

By Paul M. Lewis

I know of no better way of understanding a poem—I mean, of really getting it—than to memorize it. Yes, of course, just reading a piece of poetry is always good; and in rereading it several times one can certainly begin to comprehend at a deeper level what a particular piece, especially a complex and complicated one, is all about. But if you want to make a poem completely yours, learn it by heart.

This was something I first discovered while memorizing some of the sonnets of William Shakespeare. It all started more or less on a lark. I was spending a lot of time on various workout machines at the gym, treadmills mostly, and it soon enough became clear to me that this did not provide much mental stimulation. So, rather than stare at the inanity of the TV screen in front of me (thankfully, the sound is always turned off), and more or less by way of self-defense, I took to memorizing a few of my favorite poems. It was mostly a way of keeping my mind active and interested, present, you might say. I began with a few by Gerard Manley Hopkins, and eventually I moved on to Shakespeare.

The first time I read one of Shakespeare’s sonnets, however, I admit I had to wonder a little what exactly it was about. I’m not a Shakespearean scholar, only an interested amateur, one who likes to go to his plays and listen to the sonorousness of that glorious language. That said, it’s not just sound that’s important; after all, the language also means something. Take his sonnet number five, as an example. In it, we read, “Were not summer’s distillation left/A liquid prisoner pent in walls of glass.” Now, what in the world does that mean? As I practiced and learned the poem by heart, it became clearer that Shakespeare was talking about perfume made from flowers and stored in a glass vial. Then, going on to the last two lines of the same sonnet, the traditional rhyming couplet, he writes: “For flowers distilled, though they with winter meet,/Leese but their show; their substance still lives sweet.” Again, these are words not necessarily immediately understandable to our modern ear. But with some practice, it soon enough became clear that Shakespeare was talking about the stored-up scent of flowers (i.e., perfume), and that though the flowers may lose their outward beauty, the preserved scent still gives great pleasure any time of the year, even in winter.

Of course, if you’re not particularly drawn to poetry in the first place, to the unique and exquisite way it can condense and refine language, creating its own phantasmagoric world, then I suppose a legitimate enough question is, why bother at all? Why put the effort and the mental energy into memorizing something that may not appeal? I get that, and have no argument against it. But still, if you consider for a moment just how magnificent the language itself can be, how the compactness of its meaning is so striking, so astounding, how the rhythm, the sheer vibratory energy of the poem can be so surprising, so breathtaking, so extraordinary, then you may come to a deeper and greater appreciation of what it is.

I have always felt that language is a powerful tool; that its sound, its throbbing vibrato, the pulsation of it, has the ability to make changes in the world. I’m not necessarily talking about changes “out there,” making things appear or disappear, for example (although, who knows, maybe someone with a profound enough ability to concentrate can make things happen that ordinary mortals cannot?). But at very least, what I am talking about is the ability it has to make changes in our own consciousness, that is, to lift one’s thoughts from the mundane and the everyday to the greater heights of the ethereal and the otherworldly. Shakespeare himself seems to suggest this in another sonnet, the famous number 29. Here, he begins with a long list of things that have put him (the speaker) into “disgrace with fortune and men’s eyes.” In that list, we come across such items as wishing that he were “…like to one more rich in hope,/Featured like him, like him with friends possessed,/Desiring this man’s art and that man’s scope,/With what I most enjoy contented least.”

Now, it can be said, as lovely as the language is here, it is nonetheless about a kind of depressed state of being; and therefore it might be thought of as not particularly uplifting. However, as so often happens in the structure of these lovely sonnets, beginning with the ninth line, things take a turn: “Yet, in these thoughts, myself almost despising,/Haply I think on thee,” and then his state does change. But who is this “thee” that Shakespeare is speaking of, by the way? Many scholars believe it references the beloved youth, the young man to whom the first 126 sonnets are addressed. No one knows who this was, or even if it was an actual young man whom Shakespeare loved, or a compilation of people, or even a symbol of something else. And because this part of it is less than certain, it clears the way for each of us to insert our own “thee” into that space. Whether that turns out to be a person, an ideal, a hope for the future, a wish for greater things to come, or even—if you prefer—some spiritual being, who may help us be better than we think we’re capable of, all that can be left to us.

The important point is that, with mere words—albeit powerful ones—there actually is a way of uplifting one’s own consciousness. Indeed, there may be no better way of demonstrating this than by quoting verbatim here the rest of this lovely poem and letting it speak its overwhelming beauty directly:

“…then my state,

Like to the lark at break of day arising

From sullen earth sings hymns at heaven’s gate;

         For thy sweet love remembered such wealth brings

         That then I scorn to change my state with kings.”

There are other poems, too, that uplift and that change how we think, how we see the world. William Butler Yeats does it all the time. In “The Lake Isle of Innisfree” we read, “I will arise and go now, and go to Innisfree.” What can Innisfree refer to except that inner space wherein we feel ourselves to be liberated (“in-is-free”)? Or Gerard Manley Hopkins, who in his “Pied Beauty” speaks, although perhaps less directly and more figuratively, of all things spotted and mixed: “Glory be to God for dappled things,/For skies of couple-colour as a brinded cow;/For rose-moles all in stipple upon trout that swim.” He ends with this laudatory attribution: “He fathers-forth whose beauty is past change:/Praise him.”

Coming back full circle to where I began, as lovely as it is simply to read any of this, the memorization of it somehow serves to incorporate the language into our psychic DNA. It takes the immense beauty of the words, and of how the words work for and with one another, and the meaning, and all that is beyond mere meaning, and instills and integrates it into the very elemental fabric of our being. In this way, we too arise and go to Innisfree, to this place far beyond the intellectual, beyond the ken of everyday understanding, and we assimilate it into the fiber of who we are. As Yeats says in the same poem, speaking of such a spot:

“And I shall have some peace there, for peace comes dropping slow,

Dropping from the veils of the morning to where the cricket sings;

There midnight’s all a glimmer, and noon a purple glow,

And evening full of the linnet’s wings.”

Who would not want to live in such a place? And is it really possible to do so? To be sure, the world out there has its grandeur and allure, though who does not also see its terrible ugliness, as well? But the deep world of poetry, learned by heart, made one’s own and fully taken into one’s own private inner sanctum, such that one is not merely saying the words but living them, experiencing them in the fullness of their totality, transforms us in a way that art, at its highest and very best, as well as beauty, and truth, and love, and even spirituality, has always been meant to do.

WHAT DOES MONEY SAY TO US?

By Paul M. Lewis

Not surprisingly, the decision to remove Andrew Jackson from the face of the new $20 bill has been controversial. There are those who continue to adulate Jackson. And although as a young man he could be rowdy, self-willed and quick to anger—he killed a man in a duel to defend his wife’s honor—he was also brave, self-made, and he championed everyday people, defending them, as a lawyer in court, against the elites of the day. He had an abiding hatred for the British, whom he fought against as a young teenager during the Revolutionary War, and by whom he was captured. While in captivity, an English officer ordered him to polish his boots; Jackson refused, and the soldier slashed the left side of Jackson’s face with a sword, leaving lifelong scars. Later, as an officer himself during the War of 1812, Jackson is reported to have fought bravely and was loved by his men.

That is one side of Jackson’s personality. The other, darker side is that he owned almost 150 slaves, whom he sometimes treated with extreme cruelty, and that he had no love for American Indians. While president of the United States, he became famous, or infamous, for his terrible treatment of the Cherokee people. The Cherokee had lived for centuries in the southeastern portion of the United States, occupying much of what is now the state of Georgia. Although the history is a complex one, and the Cherokee were themselves undermined to an extent by their own political infighting, they were driven off their ancestral land, in no small part due to Jackson’s efforts, and forced to march a thousand miles west to live on the southern Great Plains. This was an utterly alien land to them, where they had to make a home among other Indians whom many of the Cherokee themselves looked upon as “uncivilized.” As many as 4,000 died along the way on this exhausting march, and many more died after arriving in so-called Indian Territory, from the disastrous effects of so onerous and punishing a journey. It has long been referred to as “The Trail of Tears.”

Again and again during the course of his presidency, Jackson proved his utter disdain for Indian peoples, in spite of the fact that he and his wife adopted an Indian child. As such, many American Indians today, perhaps the Cherokee in particular, detest his memory. They have long loathed the fact that the face of this man, who so tragically used and abused their ancestors, was on the front of one of the most commonly used bills in US currency. In the April 24, 2016 edition of the Los Angeles Times, Becky Hobbs, a contemporary member of the Cherokee Nation, says of her elders that they “wouldn’t even touch a $20 bill because they so despised Andrew Jackson.” To add insult to injury, the calamity of removal, as it was called, befell the Cherokee in large part because white men wanted what had been Cherokee land, so that they could use their black slaves to clear the land and plant cotton. And this in spite of the fact that the Cherokee had made many accommodations to white civilization and were convinced that their future, such as it was, lay in cooperation with, not opposition to, the Americans. Indeed, when forced off their land, they took the US government to the Supreme Court and won a judgment against the administration, which Jackson proceeded to ignore.

All this raises a number of questions related to the topic of who should be on the face of a country’s banknotes; what message ought to be put front and center about a nation? Take the European euro, as an example. Maybe by way of not offending anyone in so multinational, multicultural, and multilingual a political association of states as modern-day Europe represents, no one individual appears on the euro. Instead, each of the seven bills (€5, 10, 20, 50, 100, 200, and 500) features representations of generalized and stylized “European architectural monuments” on the obverse, and—tellingly, or maybe hopefully—bridges on the reverse. In China, not surprisingly, Chairman Mao’s face appears on many of the banknotes of the renminbi, along with occasional pictures of various Han Chinese faces and depictions of other nationalities to be found within modern-day China. Renminbi, after all, means “the people’s currency.” The Russian ruble mostly shows famous monuments, such as St. Basil’s Cathedral and the Moscow Kremlin, as well as depictions of towns famous in Russian history and culture. The South African rand, again not surprisingly, depicts Nelson Mandela on the obverse of most bills, along with an assortment of animals native to the region, such as the lion and the water buffalo, on the reverse. But American bills have traditionally been mostly about men—white men specifically—from our storied past. Thus, Andrew Jackson on the face of the $20 bill. Countries, in other words, tend to place their heroes front and center, at least as long as the powers-that-be can agree that they are heroes (e.g., Vladimir Ilyich Lenin was dropped from the Russian ruble in 1992).

It’s perhaps an understatement to say that money means many things in the life, history, culture, and politics of a nation. Who, or what, appears on it is also fraught with meaning. In the form of bills or coins, money is used by every citizen of a country, and in the case of large and influential countries—none more so than the United States—by those living outside of the country as well. It is handled by virtually every adult, and many children, every day, often multiple times within a twenty-four-hour period. As such, its look and feel sometimes may hardly register on the consciousness of those who use it. And yet, there is little doubt that most Americans can tell you who is on the one dollar bill, the five, the ten, and, case in point, the twenty. Maybe especially the twenty, since almost everyone uses ATMs these days, and they dispense only bills of that denomination. But what of the vaunted melting pot of the country? If only white men are depicted on currency, how does that in any way represent American diversity? Andrew Jackson’s picture has appeared on the $20 since 1928. Where are the women; where are black people, Latinos, Asians; and where is the depiction of the American Indian? Even the iconic “Indian head nickel” (a coin, not a banknote) is no longer issued by the US Mint, and hasn’t been since 1938.

But that is about to change. The US Department of the Treasury has decided to remove Andrew Jackson from the obverse side of the $20 bill, putting him on the back instead, and replacing him with Harriet Tubman, an escaped slave, conductor on the Underground Railroad, and rescuer of countless slaves in the process—in other words, a true American hero. Treasury Secretary Jack Lew, who spearheaded the effort, has said that the design will be released in 2020, although it is not clear how long after that the bill itself will come into use. Still, this is a huge change, and a major step forward, for a country whose idolization of all things white and male has been endemic.

When it does come to be, how will a black woman feel when she goes to her local ATM and sees a twenty dollar bill with the face of Harriet Tubman on it? How will Becky Hobbs, the Cherokee woman, feel when she no longer has to view Jackson’s despised face, at least on the front of the twenty? Will it actually make any difference to either of them, or to anyone else? I’m guessing that it will, since symbols, which register both consciously and unconsciously, really do mean something to people. When all you see around you in terms of the literal wealth of the nation are pictures of white men, what message does that send? It says that they have the power, the influence, the authority; it says they have mastery and control over others.

None of this is meant to suggest that all white people, men or women, have influence and authority. Just ask Donald Trump’s backers, or even Bernie Sanders’s, how much in control they feel. Still, white people are, at least for now, the majority in this country. But that too is changing fast. Whites currently represent about 62% of the US population. It is projected that they will lose that majority status within the next 30 years, and white children will be a minority by 2020. Here in California, whites are already a minority, at about 38% of the population, while Hispanic peoples are at 39%. Isn’t it, then, about time for somebody other than a white man to be represented on the face of US currency?

Trump has, of course, already declared himself against the idea of having an ex-slave black woman on the face of the $20 bill, claiming that it’s just another example of liberal overreach and political correctness. But that is what we have come to expect from the Donald. To him, political correctness is just another term for whatever he happens to be against.

The real question is why a country would not want to put its best face forward on the very thing that, literally, touches every citizen of that country (and which each of those citizens touches). Putting Harriet Tubman and others like her who have overcome monumental adversity and helped their fellow citizens in the process on the face of American currency is the right thing to do. They are among the best the country has produced, and they represent the immense richness of our social, cultural and racial heritage. For my money, it’s time we left more dubious and questionable historical figures behind and picked people whom all of us can actually look up to.

HIGHWAY OR TRAIN, INDIVIDUAL OR GROUP: WHAT’S THE AMERICAN WAY?

By Paul M. Lewis

Not long ago, my partner and I were driving to Northern Arizona from our home in Southern California. We go each month to visit my partner’s mother, who is in hospice care at a nursing home there. It’s usually at least an eight-and-a-half-hour drive each way, longer if somebody texting, chatting on a cell phone, or otherwise distracted has caused an accident.

For the most part, we try to work it out so as to avoid the worst of rush hour traffic on the freeways, leaving early enough to give ourselves some breathing room. We also tend to take the northern route, heading up Interstate 15 to Barstow, and then taking I-40 east until it meets Arizona state route 89. Taking the I-10 east instead might cut off a few miles, but the 40 is so much more beautiful. It goes through the magnificent Mojave Desert National Preserve in California, and once you’re in Arizona you pass through an enchanting forest of juniper trees.

When there’s a problem on the roads, it’s always in the LA megalopolis. For us, getting to Barstow entails taking the 405 to the 22 to the 55 to the 91 to the 15. Anyone who drives the LA freeways knows what I’m talking about, and for those unfamiliar with these routes, it’s maybe enough to say that they can be torturous. One of the worst places is the intersection of the 91 and the 15, near the Inland Empire town of Corona. That’s because so many people have moved to the southern part of Riverside County, where housing at least once was a lot cheaper, in search of the American dream: a house with 3 or 4 bedrooms, living room, dining room and family room, plus a yard out back with grass where the kids can play and the dog can romp. If you’re lucky, or rich enough, maybe you even have a swimming pool, to boot.

For years, that intersection narrowed down to one lane, and traffic backed up accordingly. On a dark winter’s morning, driving east on the 91 and approaching the 15, you could see a gargantuan necklace of headlights, as cars awaited their turn to get onto the westbound 91. Nowadays, Caltrans (the California Department of Transportation) is in the midst of a monster construction project there, involving a multiple lane overpass.

Which is what got me to thinking. The last time we came through there, we were on our way home, and so it was the middle of the afternoon. The behemoth hulk of the half-built overpass was plainly visible, hanging in midair, as workers and machines scrambled over the area following their appointed tasks, ones not necessarily apparent to us passersby. Still, progress was clearly being made, or I guess that’s what it’s called. At least, you could see that more of the road had been completed than when we started our regular treks to Arizona, something like eight months ago.

And no doubt, the folks who use those freeways every day, commuting back and forth to jobs nearer the coast from communities like Riverside, or Lake Elsinore, or Murrieta, or even as far south as Temecula, will be overjoyed once the work is done. My guess is that things will be better for them, at least for maybe a year or two, until the traffic catches up with the improvements—as it always does—and we’re back once again looking at what will by then be a double, or even a triple, necklace of headlights.

The Caltrans budget for the current 2015-16 year is 10.5 billion dollars, an 11.9% increase over that of the previous year. Even though this represents less than 10% of the state’s overall budget of about $113 billion, it is still a lot of money, though some might say even that’s not enough. After all, without our freeways, how would people get to work, how would goods and services be moved, how would anyone get anywhere, for any reason? But remember, too, that the ten-billion-plus dollars spent by the state on Caltrans this year represents only a tiny fraction of the amount spent over the years on building this kind of infrastructure. In other words, that ten billion is the cost of maintenance, and of some isolated construction projects, on a system that basically already exists.

What has occurred to me many times, as we drive through that interchange between the 91 and the 15—or any other you may care to name—is this: why did we never invest the same monumental sums of money in rail connections? In my freeway-smog-addled mind’s eye, I imagine my partner and me, for example, sitting comfortably in a bullet train, heading east out of downtown Los Angeles straight to Phoenix. Then, after a short wait in a beautifully appointed train station there, we would take another line north from Phoenix to Prescott (our final destination); or, if it had to go to Flagstaff first, then from there on a smaller branch line down to Prescott. This same kind of convenient train travel could of course be reproduced in all fifty states. But that is not what we have. What trains are available are hardly convenient. Years ago, we took a train trip from Los Angeles to Seattle. It was supposed to leave LA at noon and arrive in Seattle at 8:00 PM the following evening. Instead, it left at 4:00 PM and arrived at 3:00 in the morning two days later. Does that instill confidence in getting from place to place on time, to say nothing of comfortably? This huge delay happened mostly because there is, for the most part, only one train track between these two major west coast cities, and freight trains often take the right-of-way. It’s not supposed to be like that, but the freight carriers far prefer paying the relative pittance of a fine to giving way to a passenger train and thus slowing down their own operations.

If the government—and of course the people who elect their representatives—had made train travel a priority, we could have made that same journey in a matter of hours, not days, just as Europeans do on their trains, or the Japanese, or nowadays even the Chinese. The trip between Paris and Marseille, for example—a journey of approximately 775 kilometers, or some 480 miles—takes about 3 hours and 40 minutes on the TGV (train à grande vitesse, France’s version of the bullet train). You leave from central Paris and arrive in central Marseille. No need to bother with highways, airports, parking, or sitting in traffic. You can read, chat with your fellow passengers, or just sit and look out the window. And all this for about 25 euros, just over 28 US dollars at the current exchange rate. Is that what it actually costs to operate these state-of-the-art trains? Probably not, but the government is willing to subsidize the cost, and so are the French people. By contrast, the distance between Los Angeles and Phoenix is about 365 miles. The Amtrak ticket costs $100 more than the ticket between Paris and Marseille, and it is estimated that the trip will take over 10 hours. In other words, it would cost more than four times as much, and take nearly three times as long, for my partner and me to go roughly three-quarters of the distance.
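For anyone who wants to check those ratios, here is a minimal sketch using only the rough figures cited above (the fares, times, and distances are the essay’s own numbers, not timetable data):

```python
# Quick check of the Paris-Marseille vs. LA-Phoenix comparison;
# all inputs are the essay's own rough figures.
tgv_fare = 28.0                # Paris-Marseille fare, in US dollars
tgv_hours = 3 + 40 / 60        # about 3 hours 40 minutes
amtrak_fare = tgv_fare + 100   # "costs $100 more"
amtrak_hours = 10.0            # "over 10 hours"

print(round(amtrak_fare / tgv_fare, 1))    # 4.6  -> over four times the fare
print(round(amtrak_hours / tgv_hours, 1))  # 2.7  -> nearly three times as long
print(round(365 / 480, 2))                 # 0.76 -> about three-quarters the distance
```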

Why would we ever do that? Indeed, why would anyone take a train in the United States, when travel by car is so much faster, cheaper, and more convenient? The answer, obviously, is almost no one. But what lies behind these questions may be more interesting. One estimate of the cost of building the interstate system is that it took approximately $1 million for every mile of highway built. Using that estimate, and multiplying it by the almost 48,000 miles of interstate highways we have in this country, we come to a total of approximately $48,000,000,000. To put it in words, because most of us are not used to seeing that many zeros after any number, that is forty-eight billion dollars. Naturally, the money was spent to build these roads over the course of many decades, and the true cost, adjusted for inflation and for generations of rebuilding, was far higher—by some estimates, roughly half a trillion in today’s dollars. By way of comparison, the entire US GDP, the Gross Domestic Product (i.e., the value of all goods and services produced in the country in a given year), is projected to be just under $18 trillion for 2016.

I learned a long time ago, working for many years at universities, that budgeting is always a matter of deciding on priorities. When my boss told me I could not hire an adviser I thought we needed, and I later learned that another office had been allowed to, it was clear that the other office stood higher in the hierarchy of what was considered important at the university. Each of us does the same thing with our own household budgets. New car? Well, maybe not this year. Maybe it’s best to get the roof fixed, or pay down that outrageous credit card bill.

Although national budgets are admittedly far more complex, the basic principle is the same when it comes to countries. Money is ultimately put where you, the taxpayer (via your representatives), want it to go. And Americans want their cars, and their highways. We want to be able to go out our front door, jump into our automobile, and hit the open road. Or that’s the fantasy, at least. We’re rugged individualists; we want independence, free choice; we want to go where we want, when we want, and to be able to stop whenever it’s convenient. Leave the trains—those giant conveyor belts of people—to the socialists in Europe, or the communists in China. So don’t look for a diminishing of car travel any time soon in this country. California has been attempting to build a bullet train between LA and San Francisco for several years now, but with all of the court challenges against it, the project has just barely begun. And even if and when it is completed, it will be required to run without state subsidy.

In the end, we get what we pay for. Americans have always wanted what we think of as our freedom of movement: the car in the garage, ready to whisk us off whenever we choose, whether to work, or to school, or to an enchanting land of adventure. But along with this come packed freeways, bumper-to-bumper traffic, huge costs, and polluted skies. If that is what we want, then that’s what we’ve got. And if anybody prefers a nice train ride—swift, clean, reliable, and cheap—well, they’d just better take a trip to Paris to find it.

SOLITUDE AND COMMUNITY: CAN WE HONOR BOTH?

By Paul M. Lewis

Nicholas Dames’ article “The New Fiction of Solitude,” in the April 2016 edition of The Atlantic, explores the modern novel by contrasting it with an older version of fiction, one exemplified first by Cervantes in Don Quixote. That earlier view, amplified all the more by the great nineteenth and twentieth century masters, saw fiction as essentially a way of identifying with the other. Its goal was to provide a space whereby we could step into the lives of someone so different, so removed, that we would otherwise never have encountered such a person in life. Who could imagine, for example, that they could come to know anyone as strange as Quasimodo, or even Jean Valjean (to conflate two of Victor Hugo’s most famous works), or Don Quixote, to bring us back once again to Cervantes? Or how could most of us have traveled with the deviant Humbert Humbert other than in Vladimir Nabokov’s Lolita? Yet meet them we do, and in so doing, we come to understand at some deeper level what it is like to be them.

In the postmodern novel, however, this empathic “expansion of the moral imagination,” as Dames puts it, is not the goal. Instead, contemporary novelists, who eschew older forms of writing, concentrate not so much on our ability to pass outside the boundaries of our own skin as on the need to understand and anchor the concept of the self. In a world where we are incessantly interconnected electronically, they seem to be asking, how are we to know who we really are? There isn’t so much a need to understand and feel with another as there is to delve into and inhabit our own ego identity, which we are in danger of losing, or have already lost. A term that has come into use for this type of writing is “autofiction.” Dames defines it as “denoting a genre that refuses to distinguish between fiction and truth, imagination and reality, by merging the forms of autobiography and the novel.” The goal—if that is not too atavistic a term to use in this context—seems to be to reveal, even to revel in, one’s isolation, one’s aloneness, one’s inability to know, or be known by, another. Each of us exists in our own solitude, and that solitary state is essentially unbridgeable, except—and here is the irony—by the very revelation of the singularity of our individuality. Otherwise, if that were not possible, why write at all? The writer’s separateness can, in some way, teach the rest of us how “to soothe our isolation,” though we incongruously still need the hermitic distinctiveness of our solitary selves in order to understand, and even to appreciate, the individuality of our own humanity.

All this may come across as overly highbrow, as some sort of precious or recherché affectation, almost a kind of faux exploration of life in the twenty-first century. For the most part, those of us who still read at all tend to do so for the traditional reason, that is, in the hope of getting to know the other. Even President Obama noted this, as reported in the same Atlantic article. Harkening back to that older view of the meaning of fiction, he said that what he had learned from novels was “the notion that it’s possible to connect with some[one] else even though they’re very different from you.” He went on to lament the decline of fiction reading in our culture, which he believed pointed to a concomitant loss of empathy in the country and the world.

Still, might all this business about the meaning of literature be just so much highfalutin claptrap, a thing dreamed up by critics so as to show off a fancy vocabulary or, more nefariously, by publishers in order to sell books? I think not. The basic notions of identity, of isolation, and of empathy really are important to each of us, whether we think about them in conscious ways or not. Of course, no one necessarily has to read a novel, of whatever genre or era, in order to feel for another, or to realize their own essential aloneness. These existential states of being come of their own accord in the process of living: in the misery of a bereft childhood, or the toxic stew of an inherited chemical imbalance; or they invite themselves into our psyches by way of the blunt-force trauma that everyday life can sometimes bring with it. In other words, living can be its own kind of suffering. As Gerard Manley Hopkins, the great nineteenth century poet, put it, “This in drudgery, day-labouring-out life’s age.”

A question that each of us ultimately faces in life, whether head-on or more obliquely, is how to overcome our essential aloneness. How do we reach out beyond our “bone house,” to quote Hopkins again, that is, beyond the awful—and awe-filled—barrier that is the end of our own skin, and in some way connect with another? Love, of course, is the simple answer. But how successful are any of us at that? How many times do we stumble, fall, and go crashing to the ground in our hasty, or confining, or clinging attempts to reach out lovingly? And if love demands a certain kind of selflessness, an overcoming of the all too self-centered ego, how often are we able to achieve that?

Literature, in all of its varieties, can teach us something about these fundamental questions and help the reader, or the watcher/listener if we are talking about drama, attempt the frightening leap across that impassable barrier, out into the abyss, in the hope of grabbing hold of some other frightened leaper. In this sense, the conflict between traditional and postmodern writing may be only an apparent one. In the case of the former, the traditional role of literature, the identity of the leaper is assumed (that is, it’s ourselves), and the reader can then empathize with the character “out there.” In terms of the latter, the postmodern vision, the assumption that we don’t know who we are may simply be the next logical step in the evolution of that outreach. Literary self-exposure is another way of looking into the mirror and saying to ourselves: yes, that’s me and not another; this is my hyper-personal expression of the utter uniqueness that is my individuality. It’s what makes each of us human, or at least what contributes to our understanding, our belief, that we are all different in ways that cannot ever fully be explained or communicated. If love is to be the answer to how we span the unbridgeable gap, it must assume two (at least two) individuals; otherwise, there is no abyss to be bridged at all. Both love and literature demand separateness. Postmodern writing merely emphasizes the “I,” while traditional literature highlights the “he, she, or they” in the equation.

The answer to the question of whether or not we can honor both solitude and community is that one needs the other. The relentless modern attempt to reach out electronically, to text and to tweet, or to have FaceTime, may be emblematic of overwrought and overworked lives. Even so, it is after all a kind of reaching out. It’s true that we don’t have to read postmodern novels to understand we are alone; nor do we have to plow through Cervantes, or Hugo, or Tolstoy, or Faulkner to put ourselves in someone else’s skin. But it can’t hurt. That’s another way of saying that literature benefits us, that it reflects and explains the parts of ourselves that all too often escape us, as we go about the quotidian business of living. It reveals a deeper level of our being that slips and slides among the shadows and hides from the harsh, revelatory light of day. It grabs at the core of who we are, even when we don’t know—at least consciously—who that is, and flings the pieces of that identity, fragmentary as they may be, across the unbreachable chasm that stands between us.

We may be utterly alone in that no one will ever be fully capable of plumbing the profundity of our innermost being. Maybe we can’t do that even for ourselves. But we live with the hope, even the promise, of connecting with another and, in the end, that may be enough. This is what excellent writing can do, and why storytelling, in whatever form—which is what fiction is about, after all—will always be with us.

THE QUESTION OF IDENTITY: WHO ARE WE, AFTER ALL?

By Paul M. Lewis

Whom do we identify with? That’s a basic question all of us may want to spend some time thinking about. It might seem at first to be of relatively small importance, too abstract to mean much in the real world. But it turns out that the answer influences a great deal about how we live our everyday lives.

Let me start off with an example from my own life. When I was young, I thought of myself as a good Catholic boy. At least, that is what I strove to be, possibly more so even than many of my classmates at St. Patrick’s Grammar School (yes, in those days, they were thought of as schools where grammar was taught, meaning not just how best to construct a sentence but, more widely, how to comport oneself in the world, how to construct a life). At St. Patrick’s, there were good boys and bad boys, the latter (mostly Italian—no one said Italian-Americans in those days) being those who flouted the rules and wore their hair in a style the nuns most definitely disapproved of, called a DA, or duck’s ass. They were the rebels, the tough guys, the non-conformists, the group I didn’t belong to (as much as I may have secretly wanted to be one of them).

Instead, I hung out with those who were less outwardly rebellious. But even these boys swore, spent a lot of time talking about sex, and generally didn’t take religion all that seriously. I tried to identify with them, but somehow it never came off very naturally for me. Inwardly, I disapproved of (could it be said that I feared?) their language, their topics of conversation, and their general indifference to religious teachings. I suppose some might have thought I was a bit of a pill. The one saving grace I probably had was that, even at a young age, I instinctively knew enough about how to get along with people for them to accept me as one of their own. But, unbeknownst to them, I would often sneak off and kneel in prayer in the darkened interior of St. Patrick’s Church, or attend Friday night Benediction (a traditional Catholic devotional service). No wonder, then, that at age fourteen I decided to enter a monastery.

Even there, however, I found boys who did not quite live up to my standards, which were very high! Yet people still appeared to like me because I was by nature a peacemaker and someone who tried to see the best in others, while openly criticizing no one. A big part of my not criticizing others stemmed from the awful realization that I knew I was far from the idealized self I imagined I should be. How could I blame others for not being somehow better, when the very faults I recognized in them I also saw all too clearly in myself—in fact, far worse ones? There were things the Church said not to do which I did, and many others which, while I might not have done them, I earnestly wanted to. And if I wanted it so much, wasn’t that tantamount to actually doing it? In short, the standards I believed the Church established for me, and those that I freely embraced on my own, were mountains so high I could never hope to fully scale them. In that sense, I consistently set up my own failure.

And so, my principal focus of identification in those years was with an idealized Church, one that I believed would allow me to lead a life I felt I was supposed to lead. It was a kind of umbilical cord that provided an association, a connection with an entity that I felt to be greater than myself, and which at the same time gave me a kind of scaffolding upon which to construct a life that I otherwise felt to be constantly on the verge of collapsing disastrously out of all control.

It worked, too, at least for a while, even if not completely, because I often felt I fell short of the high standards I had created for myself. As such, and in keeping with Catholic teaching, I thought of myself as a sinner. Still, the superstructure did provide me with a consistent foundation upon which I endeavored to build something. Until, of course, it didn’t. The first problem with what might be called the “idealized external” is that it is, by definition, outside of oneself; the second is that it, too, eventually shows itself to be less than perfect. Even I could see that the luster had begun to fade, that the Church was showing a darker, seedier, more squalid side. After all, it was made up of people, and people are far from perfect. Aside from being sometimes good and helpful and even loving, they—we, all of us—are also more than capable of selfishness, cruelty, prejudice, cynicism, arrogance, egotism, deceitfulness, anger, even violence. And the list could, of course, go on.

What I am saying is that any organization, any human group, no matter how good its intentions (in particular, its initial intentions, until time and usage begin to break them down), is so flawed that we ought to think long and hard about fully identifying with it. And not just religious organizations; other groups could certainly be included as well, such as political parties, philanthropies, environmental groups, sports teams, cultural associations, and organizations affiliated with labor, the military, etc.

In fact, the core of the problem comes exactly down to the question of the depth of one’s identification with the external. My childhood relationship with the Catholic Church, and with the particular monastic tradition I belonged to, was so all engulfing as to obscure everything else. I took it to be all there was, and when I eventually began to realize that life was writ far larger than that, more complex, messier, dirtier, more intent, more insistent on its own needs than anything I had previously thought possible, then I saw that this first object of my identification could no longer contain everything that I was.

But what could? That is the very question I have struggled with for many years. It is a question all of us must face. What I have always looked for is a wider, a deeper, more all-inclusive connectivity. Ultimately, I came to believe that this was my own relationship with my self; or, I should say, with my Self, the capitalized “s” indicative of some part of my being (and not just mine, of course, but everyone’s), beyond mere ego identity, that both includes all the things of everyday concern and, at the same time, goes beyond that.

I take great comfort in a particular passage from one of my favorite scriptures, the Bhagavad-Gita. If ever there has been a more insightful statement on identification, in the largest sense of that term, essentially on who we are, then I don’t know what it might be. Speaking of union with Brahma (the Creative Principle of the Godhead), Krishna says: “He so vowed, so blended, sees the Life-Soul resident in all things living, and all living things in the Life-Soul contained…Who dwell in all that lives and cleaves to Me in all, if a man sees everywhere—taught by his own similitude—one Life, one Essence, in the evil and the good, hold him a yogi, yea, well perfected!”

Taught by our own similitude—that’s a very interesting phrase. The language may sound a bit obscure, but put more simply, what it means is that we see in others exactly what is already within us, namely both evil and good; actually, more to the point, some messy, chaotic intermingling of the two. That is what human beings look like, at least on the outside. Within, who knows? Perhaps something bigger, more perfect, something that connects with all of life, and at the same time transcends it. Maybe this is what it means to realize who we truly are. And, if so, that’s what I want to identify with.

WHAT DOES IT MEAN TO DIE A NATURAL DEATH?

By Paul M. Lewis

How best to care for elderly relatives is an issue many people are struggling with these days. It’s a subject close to home for my partner and me, as well, given that his mother has been in hospice care for the past six months. In addition, we know at least half a dozen others, good friends, who are struggling in their own ways with taking care of elderly parents, whether they live close by or at a greater distance. We are, ourselves, some 500 miles away from my partner’s mother and make the nine-hour trip there at least once a month. Another friend undertakes a seven-hour drive to see his mother every week, arranging to work a full-time schedule in four days and compacting Mom’s care, plus the 14-hour round trip, into his Friday-to-Sunday weekends. Yet another has his 93-year-old mother living in his home, with him as 24-hour-a-day caregiver. And one other close friend is overseeing the care of both of his parents simultaneously, one of whom is in a skilled nursing facility, while the other still lives, at least technically, on her own, but needs almost constant care. Additionally, there are still others who have it much worse: those who have to combine eldercare with raising small children, for example, or those who are struggling with their own physical ailments while attempting to deal with the illnesses of aged parents.

In one sense, this is not entirely new. To an extent, families have always dealt with taking care of the elderly. At one point in our history, it was not at all uncommon for grandma or granddad to live in the same home with a grown daughter or son and their family. People simply contrived to take care of the older person as he or she got sicker and closer to death. What has changed, however, and changed dramatically in the last few decades, is the length of time that people have been living. Not so long ago—certainly within my lifetime and in the lifetimes of many of my contemporaries—common diseases would have caused the death of many an elderly relative. In my own family, both of my grandfathers had died before I was born, and neither of my grandmothers lived much beyond her mid-70’s. During the 1950’s and 60’s, when they died, that was relatively common, and simply seen as part of the rhythm of life coming to its expected end. I am not suggesting that the loss of a loved one was any easier, or less traumatic, in those years. The point is only that it often happened earlier in that person’s lifespan, and consequently in the lives of their offspring and caregivers.

Today, diseases and other ailments that only a few decades ago might well have carried off an individual are now regularly treated by modern medicine in such a way as to prolong the lives of those suffering from them. I am speaking of afflictions such as heart disease, stroke, pneumonia, even some forms of cancer, to say nothing of the treatment of those seriously injured in devastating accidents that at one time might very well have brought about death. Again, I want to make it clear that I’m not at all suggesting this is bad. Of course, we all want those whom we love to go on living. What I am saying is that the longer a person lives, especially into what we now think of as extreme old age, that is, the nineties and beyond, the more difficult it becomes not only for them, but for those to whom it falls to care for them, particularly as their quality of life becomes more and more compromised. And the burden of this care can be a heavy one: physically, financially, emotionally, and simply in terms of time and energy.

Ultimately, the larger and more overriding question may be this: What does it mean to die a natural death? Many people have decided that they do not wish to live on life support and have issued what is commonly referred to as a DNR—a Do Not Resuscitate order. Both my partner and I have done so, as has his mother. Even so, the question is not as clear-cut as it may at first seem. There are endless gradations involved, gray areas, in-between places where it falls to the person acting for another to decide if “this is really it.” If an elderly mother, for example, has a stroke, who is to say whether she can come back from it and regain much of her strength and mobility? Or if a father in his 80’s has an abdominal aneurysm, should he be operated on in order to remove the risk of its rupturing? Of course he should, many of us would say. And yet, this was exactly the case for a good friend of mine. As it turned out, his daughters decided for him, since his mind was already somewhat compromised and he had difficulty fully understanding the ramifications of decisions. Yet, after the operation, he slipped more and more into a world inaccessible to anyone, and lingered for another year in that twilight state. This is not to blame his daughters, who did what they thought right, but was it what my friend really wanted?

At what point do we decide, either for ourselves or for those we are looking after, that no more medical help ought to be given, other than palliative, non-curative care? And what of people who have decided that the time has come, choose hospice care, and yet somehow still cling to life, in essence forgetting that they may have made such a decision? And if they made that decision while in sound mind, but now appear to no longer be capable of making fully informed, rational judgments, what then? What are we to do if, having made one decision, they change their mind again, back and forth sometimes even from day to day, or from week to week? These are questions that cry out for answers that we do not always have at the ready.

Could we even say that the very notion of a natural death has been so changed by the advances of modern medicine that we no longer exactly understand what we mean by it? I can offer myself as an example. Nine years ago, after having had a second heart attack, I underwent angioplasty. The doctors miraculously inserted two stents into the arteries of my heart, and I seem to be fine today. If they had not done so, there is every possibility that I might well have died long ago of a heart attack, as my mother did in 1970, at age 50, well before such things as stents were even dreamed of. It could be said she died a natural death. Or did she? What of the fact that she smoked for most of her life, that she worried constantly about everyone, her children in particular, and that she worked hard in a factory for much of her adult life? Didn’t all this contribute to her early demise, and if so, how “natural” is that?

Still bigger, in a sense more global, questions could be asked. What about poverty and its consequences, such as lack of access to medical care, overcrowded living conditions and the susceptibility to infectious diseases they bring, and the inability to buy healthy food or to have clean water to drink? Even lack of education can affect a person’s lifespan: women, for example, tend to have fewer babies the more education they receive. Is it natural to die while having an eighth or ninth child?

And while this may seem to have led us relatively far afield from the topic of eldercare, what I am suggesting is that it all contributes to our understanding of the overarching question of what it means to die a natural death. Indeed, in the world of the 21st century, it is more of a conundrum than ever. Do not resuscitate, yes, of course! Few of us would wish to linger on life support, while living essentially in a coma (although even here there are exceptions, as many of us may remember from the Terri Schiavo case).

All too often, the choices are not cut-and-dried. It is difficult enough for each of us to make choices when it comes to our own lives. Do we opt for chemotherapy, for example, if diagnosed with cancer, given its terrible side effects and the likelihood, or not, of its working? And things are even less clear when we need to make such decisions for someone else. Should we have told the emergency room doctor to do everything possible for Mom or Dad after that stroke? Is their current quality of life enough to have justified that decision, even though a DNR was on record? Add to this the fact that such decisions must often be made on the spot, amid the terrible haze of emotional trauma, when our own judgment may not be as clear and dispassionate as we might otherwise wish.

There are few clear paths through the maze of such questions. It may be that the best any of us can wish for in taking care of others is to follow our hearts, with the hope of an informed intellect and, with luck, perhaps even some clarity and wisdom. We all wish that, when the time comes to shuffle off “this mortal coil,” as they used to say in my Catholic youth, we may not linger, and instead exit with a measure of grace and dignity. Yet, no one is assured of what might be called a clean and clear-cut ending. Do we get the death we deserve, or the one that we need? Should it be conscious; or do we hope for a silent slipping away while asleep?

Maybe the best preparation for a natural death is for us to not be so concerned about it at all. In Hindu thought, there exists the notion of God’s “Lila,” the idea that all of creation, including life and death, is part of the divine play, with Spirit being the only true Reality. There is comfort in this view, and perhaps even great wisdom. As Krishna says in the Bhagavad-Gita: “Mourn not for those that live, nor those that die. Nor I, nor thou, nor any one of these ever was not, nor ever will not be, forever and forever more.” And if that is the case, then, in the end, maybe death itself ought not to matter so much.