Michael Toscano

Michael Toscano is a film school dropout who writes from Brooklyn. He is balding rapidly.

From the Archives: On the Meaning of Baseball (and a Suggestion)

This piece comes from our archives.

When I am trying to recall my childhood, the best I can often call forth is a phantasmagorical flash of images and feelings; for instance, a friend whose play habits I have forgotten or a name without a face—not nearly tidy enough for scrapbook presentation. Amongst the confusion, there are a few vivid, traceable “story lines,” if you will, by which I can mark my life’s progression.

One example that comes to mind is my relationship with Rich, my best friend since I was three. Another is my exploration of books and all the richness I have found there. Indeed, there are other threads equally important to the overall weave and design of my personal history, influences that give it form and worth. But with October winding down, amidst the kaleidoscope of autumnal color, one storyline, which comes accompanied by the “crack!” of a bat and the “pop!” of a mitt, gains a particular poignancy: my love for baseball.

As a lifelong New Yorker born in the 1980s, I grew up amongst baseball chatter and rivalry. From one house to the next, loyalty was divided between the Yankees and Mets. My house belonged to the Yankees. With stars like Darryl Strawberry, Doc Gooden, and Keith Hernandez, and having won the World Series in 1986, the Mets were the better team of the eighties. But I inherited my father’s love, became a Yankees fan, and learned to revere the history of the team. Dave Winfield, Ron Guidry, and Don Mattingly were my players of choice.

As I grew older, what I loved so much on television I took to the schoolyard, and began playing catch with my father, two older brothers, and the other neighborhood boys. And soon, I was playing in the local Pee Wee league, helping my team to the championship round in five consecutive seasons. Unfortunately, we won only the first of five. Eventually, as the skills of the other boys developed, mine stalled, and I became a mere spectator. While my playing days were over, I still found baseball exhilarating as an onlooker. And I watched year in and year out, following my team down the valleys and up the peaks, devastated with every playoff loss and elated with every October victory, devoted to my team and to my sport without demur.

Recently, a friend, peering over my shoulder while I read baseball box scores, asked me with sincerity (a rare and disarming quality), “Why do you like baseball, anyway?” Truly, baseball, like many things personally enjoyed, is a delight to some and a frivolity to others. As with anything really loved, my love for baseball has been challenged before.

It is a challenge, honestly, with which I rarely engage because cynics—of sports, most especially—very seldom ask genuine questions, usually making proclamations through their “questions.” The conversation inevitably devolves into an each-to-his-own pact of non-aggression. They go their way; I go mine. It is not a conversation worth having often. So when I detected the sincerity in my friend’s voice, I realized that he was not showing disdain for my sport, but that he was actually asking a rather thoughtful question: “What is the meaning of baseball?”

Very quickly, I recognized that I did not really know the answer. I had never considered baseball in that manner. I had always unquestioningly enjoyed it, as I suspect is the case with all baseball enthusiasts. But that is to be expected, maybe even hoped for. As C.S. Lewis, the Oxford don and author, explains, enjoyment, the disposition at the heart of sport, is vastly different from contemplation, the disposition at the heart of paradigmatic construction. Contemplating baseball requires objective detachment from it—an outside view, if you will. On the other hand, enjoyment of baseball requires an unquestioning subjective engagement: humility before the sport—being inside of it.

Enjoyment and contemplation are two differing consciousnesses. One sees from without, one from within. And fans learn from the earliest age to see sports from within. Still, I thought to contemplate the question and seek an answer.

Admittedly, when measured with a detached rationality, baseball appears absurd. The principal action of the game revolves around a man who, from a raised bump of dirt, throws a stitched leather ball across a 216 square inch pentagon which lies 60’6” away. An opponent stands beside the pentagon and tries to swat the stitched leather ball with a narrow wooden stick. Many more precisely measured and equally ambiguous actions ensue as a result of this repeated event. While these are famously difficult feats, they appear entirely arbitrary. At first glance, baseball gives the impression of meaninglessness, and, especially in light of their enthusiasm, its fans appear irrational.

Yet, a cardinal virtue of any good sport is its ability to test the physical limits and discipline of an athlete. And, although the rules and actions of baseball may appear arbitrary, they certainly excel in testing an athlete’s physical endurance, agility, speed, and strength. To this end they have been constructed and diligently upheld; so they are not entirely random. As each game generally spans over three hours, and each player has limited opportunities to contribute to the contest’s outcome, an athlete must prove his patience and focus as well. For example, on offense the position players (players who both play the field and hit) are likely to have only four at bats per nine innings. On defense, a position player can go an entire game without fielding a single ball—a rather frequent occurrence. The athlete must remain vigilant, as a result, so as not to be caught unaware. However, the greatest test for a ballplayer is one of will, for even the most excellent hitters succeed, on average, only three times out of every ten at bats. Failure is at the very heart of baseball and success can only be had in spite of it.

Americans love excellence, even of the purely physical sort. It satisfies our meritocratic predisposition. On that level, one can understand an American’s appreciation for a baseball player or team. But can appreciation for excellence explain the intensity of the fan’s communion with his team? Can it explain the fan’s tears over loss, adulation from victory, or even prayers for a player’s well-being? Can it explain his devotion during years or even decades of competitive futility? No, it certainly cannot. Again, the fan is either entirely irrational, or there is something more to baseball. I suggest that there is something more, which can only be found in the experience of the fans, a testimony that cannot be discounted. To see it, we must move a little further in.

Simply put, baseball has a visible level—a physical dimension viewable by all, even cynics—and an invisible level—a metaphorical dimension experienced by fans only (often on the subconscious level), which is unknown to cynics, especially empirical statisticians. If baseball is a body, the rules are the bones and flesh, and story is the blood. Only together does the game have its fullness, and that fullness can only be found in fandom.

So what does the fan see, exactly? While it’s different for each fan, it certainly contains nostalgia, as I have recounted from my own life. More importantly, the fan sees a microcosm of the human story. In my experience, I of course appreciated Don Mattingly for his offensive and defensive prowess. Nevertheless, he became my favorite player during years when his back was balky and his numbers declined. During those seasons I appreciated his perseverance, humility, and sacrifice, and sympathized with a career that became increasingly demoralizing. Don Mattingly became my favorite player because I empathized with his personhood. This is common: in one breath fans will praise a player statistically, in the next in universally human terms.

In addition, the drama of baseball is entirely unscripted, which makes its structure both analogous to life and more theatrical than a stage play, film, or television show. In this way, a baseball game reflects spontaneous human achievement, action, and emotion: a physical dramatization and symbolization of everyday living. Furthermore, the story of a particular game, season, franchise, or player establishes the meaning, and thus the dramatic content, of any given physical action. For example, a home run hit in an April contest, while physically impressive, is quickly forgotten. On the other hand, in 2003, Aaron Boone’s home run to defeat the Red Sox and send the Yankees to the World Series was a narrative masterpiece, complex and deep enough to stir true euphoria and genuine devastation. As a result, the moment is memorialized in both infinite honor and infamy. This demonstrates an indelible fact: no matter how impressive the physical act is, its meaning is understood, and its memorability determined, by its place in the story of the game.

On a subtler level, baseball is symbolic of an overarching metaphor that mirrors human existence at its most primal: that life can only be lived in the face of certain death. A baseball contest progresses by outs, failures, if you will, not by time. Generally speaking, a standard ballgame is complete only after both teams in the contest record twenty-seven outs. In the bleak world of baseball, both teams fail. The victor is merely the team that has accumulated the most runs in spite of its own demise. Concomitantly, the hitting side, dubiously named “the offense,” is postured from the start in a defensive position, and their task is to temporarily stave off the efforts of the pitching side to retire them. Ultimately, the world of baseball is a fallen one in which even the victors inevitably perish. Much like life, victory in baseball is achieved in the face of a harsh fatalism. Ballplayers are actors in a passion play and the fans are the beneficiaries of their willingness to demonstrate the human struggle.

If victory means failure, and winning the World Series is so supremely difficult, why play? Here I can only make a suggestion: for good reason baseball first took root amongst rural Americans, a people famous for their protestantismus. Imported from England, baseball became a reflection of the Americans that claimed it. Like all peoples, the people of rural America were aware of death’s certainty, yet they still hoped that on the other side promises would be fulfilled and dreams come true. Baseball, their game of choice, offers players and fans a mirrored anticipation. Fielding a baseball team is like taking Pascal’s wager: when a team wins the World Series, all of their hopes and dreams for the season have been realized. Wouldn’t it be worth it to dedicate oneself to that cause even if humiliation were assured and victory uncertain?

Kuleshov’s Effect: The Man behind Soviet Montage

It was in 1918 that Lev Kuleshov—film theorist, father of the Soviet Montage school of cinema, director of The Extraordinary Adventures of Mr. West in the Land of the Bolsheviks (1924), political partisan, teacher—ventured a hypothesis. The hypothesis: the dramatic effect of a film was found not in the content of its shots but rather in the edits that join them together.

Kuleshov put his hypothesis to the test. Taking an expressionless close-up of the actor Ivan Mozzhukhin peering into the camera—presumably, because footage of the original experiment has been lost—he broke it into three parts. Then he intercut each practically identical segment with three other shots—a bowl of steaming soup, an attractive young woman, and a child lying dead in a coffin. When he showed the segments to audiences and polled their reactions, they swore that Mozzhukhin’s expression had changed from piece to piece. When staring at the soup, Mozzhukhin was hungry; at the young woman, lustful; at the child, mournful.

“The discovery stunned me,” Kuleshov wrote, “so convinced was I of the enormous power of montage.”

The Power of Montage

And his amazement was catching. Soviet greats of the silent era, such as Vsevolod Pudovkin, Aleksandr Dovzhenko, Dziga Vertov, and Sergei M. Eisenstein, were likewise stunned. It’s only a slight exaggeration to say that the Soviet silent era, its “golden era,” was the flowering of a shared fascination with Kuleshov’s discovery.

Kuleshov experimented further. In Art of the Cinema, Kuleshov’s first book, he tells of several other tests, including the creation of a single woman from a hodgepodge of different women. He writes, “By montage alone we were able to depict the girl, just as in nature, because we shot the lips of one woman, the legs of another, the back of a third, and the eyes of a fourth.” It was “a totally new person.” And even though she was a composite, according to Kuleshov, she retained “the complete reality of the material.”

Kuleshov tended to exaggerate the implications of these constructs: “it was not important how the shots were taken, but how these shots were assembled.” Alfred Hitchcock, decades later and worlds away, called it “pure cinema”: when the montage gives rise to meanings that exist nowhere to the eye, but only in the mind. This interplay between montage, perception, and meaning has come to be known as the “Kuleshov Effect.”

What Kuleshov actually discovered and what he thought he discovered are not necessarily one and the same. How successful would the Mozzhukhin experiment have been had he not appeared “neutral” but enraged, elated, or better yet, jogging in place? These montage composites are limited by plausibility (a woman has lips, legs, a back, and eyes) and the content of the shots themselves (neutral stares versus, say, directed actions like ‘appear as if you are frightened’).

Yet Kuleshov was right to emphasize the power that editing has over motion pictures, even to the point of bending the inner “reality” of shots. What stunned Kuleshov was the incredible flexibility of the medium, and, with that in mind, the power it granted him to provide moving pictures with new contextual meanings. Such authority over meaning strikes us as obvious today, but at the time the “photographic” image was held to be a totally faithful, “concrete,” inviolably “true” artifact, free of the shortcomings of subjectivity. This turned out to be false; or, rather, true in a limited sense. What Kuleshov was witnessing was the dissolution of a paradigm—which no doubt felt like the melting away of the thing itself.

Thinking about Film

Film historian Ronald Levaco called Kuleshov the “first aesthetic theorist of cinema,” a deserved appellation. Yet it’s an oddity of Soviet film that such a theorist, doing strange experiments in an editing room, could have so great an effect. Indeed, without the total destruction of Russian cinema by a chain of sweeping social disasters—World War I, the October Revolution, the Civil War, Bolshevik rule, and the mass starvation of (conservative estimate) 5 million people—it would have been impossible. The Civil War devastated Russia’s cultural centers, and studio owners fled the Bolsheviks with their cameras and film stock in tow. The instruments of film, incredibly expensive, difficult to operate, and very hard to replace, were gone. “Moscow had 143 theatres operating before World War I,” historian Peter Kenez recounts, “but in the autumn of 1921 not a single one remained in operation.”

In 1920, Kuleshov joined Moscow’s All-Union Institute of Cinematography, established in 1919, as an instructor. The Institute was the world’s first film school—a film school without film stock. While Hollywood, during this time, was making film, the Soviets and Kuleshov were thinking about film; and thus developed a thickly theoretical and experimental approach to filmmaking, as was readily apparent as soon as film equipment became available to them. (One wonders, incidentally, what kind of cinema Hollywood would have made had it at least once been destroyed.)

Films without Film

Kuleshov’s workshops are legendary. His students, known as the Kuleshov Group, included Pudovkin; Eisenstein studied under him for three months, but was inspired—“influenced” is a better word—by Kuleshov for a lifetime, sometimes as a rival, later as a dear friend.

Kuleshov would direct his students in mock shoots of “films without film,” drilling them over and over again through a series of taxing acting exercises. He would position the actors before empty cameras and they would act out the scenario, pretending that their performances were being recorded, in preparation for the time when the Soviet Union once again had film.

Re-Centering Cinema

Meanwhile, Kuleshov continued his editing experiments—having no film with which to shoot did not mean there was no film with which to play. Using whatever films they could get their hands on—films left behind from the days of Tsar Nicholas II, foreign films allowed entrance into Soviet territory under Lenin’s New Economic Policy, and others which circulated illegally throughout the Soviet Union—Kuleshov and his cohort would break them down and reassemble them in a variety of configurations. They were especially captivated by the innovative (and supremely racist) films of D.W. Griffith, the American director behind The Birth of a Nation and Intolerance. Kuleshov studied Intolerance obsessively, swapping its parts, changing its order, and re-constituting its meaning and themes (a task made easier by its silence). Like a cinematic Doctor Frankenstein, Kuleshov poked and prodded, re-arranged and re-animated, then did it all again. Break down, then reassemble; break down, then reassemble. It was this breathless experimentalism that yielded the Kuleshov Effect.

Kuleshov became convinced of the vaporousness of the shot and, naturally, of the inconsequence of the director as a cinematographer. In such a psychological setting, the center of Soviet film shifted from the camera to the editing table; from “production” to “post-production.” From there it unleashed its “golden age.” For all of its deformities—or because of them—Soviet montage remains one of the truly lasting and perpetually fascinating movements in film.

The Hammer, the Sickle, and the Editing Table

So what of Kuleshov the filmmaker? The Extraordinary Adventures of Mr. West in the Land of the Bolsheviks (1924), his most famous film, was a financial success and one of the first feature length films in the Soviet Union. It’s a key point in cinema history, but not a great work of art. His best film was Po Zakonu (1926), based on a short story by Jack London; it’s very good, but, again, not great. The students of his cinematics—Pudovkin, Dovzhenko, Vertov, and especially Eisenstein—made far more compelling things of his theories and principles than he ever did. Yet his theories are implied in almost every aspect of their work. This makes him a minor filmmaker but a major figure in the history of film.

Soviet Montage had a short-lived but high glory, lasting from 1924 to 1930, brought to an end by the artistic exhaustion of the theory, squabbles among the movement’s major figures, the rise of sound, and the disapproval of Stalin. Eisenstein, the greatest and most natural filmmaker of the group, explored both the roots and frontiers of Kuleshov’s doctrine—“the content of the shots in itself is not so important as is the joining of two shots of different content and the method of their connection and their alteration”—at once realizing its greatest triumphs while stretching it far beyond the sinews of plausibility. Strike! (1925) and The Battleship Potemkin (1925) are undoubtedly two of the greatest and most unique—because stylistically unrepeatable—films ever made. Eisenstein both fulfilled and exhausted Soviet Montage.

By 1930, the “Golden Age” which Kuleshov fathered came to an end. Soviet Montage had marched across the globe, deposited its greatest cinematic achievements, theories, and inventions into the tool chest of—something called—“global cinema” (a prestige genre cooked up by Hollywood and its California-based film schools), and was spent.

Back in the Soviet Union, Stalin and his party apparatchiks turned against the movement; and the names of its practitioners, especially Eisenstein’s, became bywords among political aspirants, encapsulating all that was wrong with the movement’s misguided adherents: decadent artists lost in sterile theory, too effete to portray the strengths and vigors appropriate to “the people’s” cinema.

It was in 1935, though, that Soviet Montage officially died. Under the motto “For a Great Cinema Art,” the All-Union Creative Conference of Cinematographic Workers convened from January 8 through 13, with Stalin himself present on the final day to distribute awards for cinematic achievement. Stalin orchestrated the proceedings to denigrate Soviet Montage and to elevate Socialist Realism in its stead as the single aesthetic of “Great Cinema Art.”

For five days, Kuleshov’s theories were officially disavowed, but Eisenstein, who had become most popularly associated with the movement, was the named target. Director Leonid Trauberg criticized Eisenstein (and through him the movement) for making “stupid poetry”; director Sergei Yutkevich read aloud a letter from George Sand to Flaubert—which accused Flaubert of too much intellectual study—and pointed its finger at Eisenstein: “You are a fool who roots around in his straw and eats his gold.” Even Dovzhenko, the coward—himself a Soviet Montage filmmaker, feeling the heat—took a turn: “If I knew as much as he does I would literally die.” He then threatened Eisenstein, whose lack of production—he had not completed a film in six years—noticeably displeased Stalin: “If you fail to make a film within twelve months at the latest, I beg you never to make one at all. We will have no need of it and neither will you.”

Only one man, during these five tense days, came to Eisenstein’s defense: his mentor, rival, and friend, Kuleshov. “You have talked about him here with very warm, tearful smiles as if he were a corpse which you are burying ahead of time.” Kuleshov slapped back: “I must say to him, to one who is very much alive, and to one whom I love and value greatly: Dear Sergei Mikhailovich, no one ever bursts from too much knowledge but from too much envy.” Then he took his leave: “That is all I have to say.”

Kuleshov defended the man who explored his theories the most ingeniously and made them known to the world—but the movement he had founded was dead.  

In the late 1920s, several years prior to Eisenstein’s public flaying, Kuleshov had found himself, like Eisenstein, the object of intense scrutiny and renunciation by party wannabes. Four of his adoring students, one of whom was Pudovkin, came to his defense:

Some of us who had worked in the Kuleshov Group are regarded as having “outstripped” our teacher. It is a shallow observation…

We make films—Kuleshov made cinematography.

Seven Minutes that Shook the World

“Of all the arts…cinema is the most important.” —V. I. Lenin

We are told that they are the seven most important minutes in film, and at the time of this writing they are ninety years old. These minutes comprise a scene hailed by critics—both of its era and beyond—as the first true moment of filmic genius, one which, by virtue of its “dialectical” approach, grandeur of spirit, and prowess of production, finally overcame—or better yet, overthrew—the encumbering concreteness of the celluloid image.

To the director himself—and a good many officials in the Communist Party, pragmatic and serious men as they were—they held even greater promise: they were a key discovery, a new agitational tool, an advance in propaganda that, if scientifically applied, struck directly at the psyche, shocking it awake from its proletarianized slumber; a technique to readjust the very rhythms of thought, re-pacing them to match the clashing thrusts of history’s theses, antitheses, and syntheses; and the final stage of the revolution, the revolution of consciousness. They embodied a method to alter the minds of the people of the new Soviet Union, stirring them to “dialectics.”

The tool was an editing theory known as “Soviet Montage,” the genius was Sergei Eisenstein, the seven minutes were his famed depiction of the 1905 massacre in Odessa (which he set for dramatic purposes on the Odessa Steps), and the film was The Battleship Potemkin, released in 1925, at the tail end of the silent era and eight years after the October Revolution. We’ve been studying these seven minutes ever since.

“И вдруг” (or as the intertitle translates it, “Suddenly”)

Crash cut! Extreme close up! Extreme close up! Extreme close up! Extreme close up!

A woman’s face blurred

Head jerks violently

Teeth gnash

Pain—death?

Four shots; four cuts; one composition; for a duration of one second.

—Cut—

Medium shot. North. Camera faces the Potemkin Stairs. Panic: the people of Odessa flee downwards and towards the frame. Something looms behind them. What moves in from the north?

A woman center frame, black dress, holds a white umbrella which obscures her face. She plunges forward, her umbrella engulfing the frame in white. Four-second shot.

—Cut—

Medium shot. Many tones—grey, white, black. The camera’s line of sight jumps northwest 45 degrees (to the left) of center stair. The crowd surges.

In the background, men and women fly south on a dirt path that runs parallel to the stair; in the foreground, on the stair itself, dark silhouettes stream past the lens. Mid-ground, a legless man, the focus of the shot, spins on his palms to see the threat behind, then turns to bolt on his hands. Six-second shot.

—Cut—

Long shot. Establishing shot—or at least what we would normally call one. The perspective has jumped from looking upstairs to down. The camera is positioned high (on a crane, most likely), angled tightly downwards to show the full length and breadth of the Potemkin Stairs.

The enemy enters bottom frame: the tsar’s soldiers in imperial white, rifles ready. Four-second shot.

—Cut—

Medium shot. We are no longer viewing the whole of the action, but are again looking upstairs, back among the people. Now, northeast, 45 degrees right of center, a woman has fallen. No one stops to lift her. She clambers to her feet and continues her escape. Three-second shot.

—Cut—

Long shot. New perspective. The whole stair is visible, from the bottom landing to the top, all 192 steps. Majestic, aggressive, imposing, unnecessary—imperial.

A guess: 150 or 175—maybe 200—men and women, old and young, bourgeois and proletarian, in a full run before the advancing soldiers. (Amazingly, according to reports, no one was injured during the filming.) Seven-second shot.

—Cut—

Montage! A signature move of Eisenstein’s: show one full action from multiple angles, parts out of sequence, without care for continuity or exactness of performance.

Death!

One death; four shots; four versions.

Extreme close up! A man falls, collapses to his knees. Three-second shot.

Close up! Back on his feet, he falls face forward, plunging towards the camera. One second.

Flash!—extreme close up! He tumbles sideways. One half-second.

Medium shot. On his feet once more, he tumbles sideways, hits the ground and dies. Three seconds.

—Cut—

The Math and Music of Montage

So what have we seen, and why have we seen it? Thirty-six seconds in and we’ve discovered contrasting tones, irregular lengths, leaps in perspective, and adventurous editing—in, out, up, down, straight, askew, whole, part, long, brief, briefer still. Montage.

The scene at first appears chaotic, but Eisenstein is in complete control. The improvisational opening sets the stage for an ever-more-complex sequence in which Eisenstein rounds these wild elements into an orderly, almost mathematical, organization, creating a rhythm of edits. By scene’s end, Eisenstein makes 159 splices (by my count), with some shots as brief as six frames (less than four-tenths of a second at the 16 frames per second at which he shot). As the scene grows steadily in scale and ambition, its ballooning scope morphs into a sustained and driving tempo, which gives way to a surging pace of edits and actions that fall on increasingly hard beats. Allegro, vivace, allegrissimo, prestissimo!

Soldiers march—fire!

Odessans fall

Soldiers march—fire!

Odessans fall

Marks of Marxism

We never get a good look at the tsar’s troops. They are faceless and ruthless. By contrast, as the montage unfolds, Eisenstein introduces us to some of the personalities in the fleeing crowd—a mother, for instance, who holds her dead son in her arms, counter-marches up the stairs, against the scattering tide, towards the tsarist soldiers who greet her with a volley of rifle fire. This is also the famous stroller-careening-down-the-steps scene, one of the most iconic moments in film, reproduced by Brian De Palma so memorably, if cartoonishly, in The Untouchables.

Eisenstein whips the scene into a furious gallop. As the images grow more violent—blood pours from a woman’s belly; a child is trampled in the stampede—the shots decrease in duration. And as they decrease in duration they grow in number—and as they grow in number their frames become tighter and their depths shallower (with a few exceptions), showing less and less in each image. Thus, each interacting part becomes by sequence’s end an isolated reality, even as they remain locked in conflict.

This is a creature of Eisenstein’s Marxism. Soldiers. Crowd. Their interests are so divided that they cease to appear in the same shots. We no longer see the soldiers firing on the fleeing people. They are broken apart into non-overlapping images—the soldiers firing, cut to the crowd bleeding. This is an opposition so deep, so endless it cannot be depicted; it can only be realized by a division in the image—by an edit, or, as Eisenstein would have it, a montage.

“Montage is Conflict”

Eisenstein was always and ever a partisan of montage. His theoretical understanding of what it was, however, underwent several major revisions during his career. In the years directly following the release of The Battleship Potemkin, Eisenstein was called on to explain or defend (depending on the audience) the theories which drove his film. He was now a global star—a prized guest in the avant-garde circles of the European continent and an honored invitee to the homes of Hollywood greats such as Walt Disney and Charlie Chaplin, both of whom regarded Eisenstein as a personal friend. It is said that Chaplin counted The Battleship Potemkin as the greatest film ever made. He was not alone. Eisenstein was clearly a master. The world knew him as a genius.

From 1925 to 1929, the years that saw the release of Potemkin and its thematic sequel, October: Ten Days that Shook the World, Eisenstein took to explaining film form as “conflict” itself. “Montage is conflict,” he writes in “The Cinematographic Principle and the Ideogram” (1929). And film is, he writes, “first and foremost, montage.” He asks, “By what, then, is montage characterized and, consequently, its cell—the shot?” He answers:

By collision. By the conflict of two pieces in opposition to each other.

He repeats:

By conflict. By collision.

And this conflict goes all the way down. A six-second shot is in conflict with a five-second shot. A character angled to the right is in conflict with a character angled left. Down is in conflict with up. A long shot with a short. A fast with a slow. A dark-toned with a light-toned. A close up with a medium. And so on. It’s, well, silly.

But to be fair, these crudities were shared by subtler thinkers of a similar hue, such as Viktor Shklovsky, and were far more sophisticated in their hands. To be fairer still, Eisenstein eventually left this notion behind. And Eisenstein’s temporary focus on oppositions and differences was fruitful. It gave him an eye for stark compositional contrasts and made for elaborate (and yet controlled) configurations. The soldiers marching down the steps, for instance, should be intercut with a woman trudging up the steps. Not only is the woman defying the soldiers, but, in this approach, up is also defying down.

Intellectual Montage: Editing to Reveal Symbols and Themes

The Odessa Steps scene is a rhythmic elaboration on one theme, violence, which takes the very form of montage (as Eisenstein believed at this point). At the end of the seven minutes, the relations of the shots are no longer perceived by the eye, but are pieced together by the mind. The images cease to be the constituting elements of the story; instead, the edits, which place the shots in conflict, narrate the images themselves.

Eisenstein believed that this link between the edit, the mind, and meaning could be exploited. The longer the mind thinks along with his edits and the greater the distance the mind and montage travel together—over sharp contrasts in composition, time, space, and theme—the more the mind becomes attuned to conceptualizing what’s not ‘there.’ He theorized that his montage could direct the “vibrations” of the psyche to spring, if only momentarily, into dialectical consciousness, because it had been trained by the montage to perceive dialectical conflict. This trained perception, if brought to climax, would drive the psyche to new heights of conceptual vision. (And also because, to be honest, he saw his fellow Russians—or really, the peasants and the proletariat—as mere matter to be re-programmed by his own hand and the hand of the party.)

In the scene in question, his attempt to stir such consciousness came via a clever editing trick. He filmed three stone statues of lions—one sleeping, one waking, one rising—and then edited the shots together, producing a fluid stop-motion animation of a stone lion roused and at the ready. The animation is the capping image: after the slaughter, after the battleship’s massive guns have fired on the city in retaliation, and after he has spent seven minutes building the rhythm, contrasting compositions, and driving the scene to a point—one, two, three, the lion rises!

The audience is supposed to recognize this as an apt symbol of the rise of Russia in revolt against the tsar, avenging those massacred at Odessa. It alludes to the 1905 Revolution, the foreshock of 1917. Eisenstein called this type of signification “intellectual montage,” when the edit produces a symbol unexplained by the story, but the mind, stirred by the rhythmic modulations of the scene, leaps to understanding. Such a leap, Eisenstein mused, would herald nothing less than a “revolution in the general history of culture.”

The meaning of the sequence, however, historian Oksana Bulgakowa tells us, was lost on Eisenstein’s audience. Or at least it was lost on the proletariat, certainly the peasant, perhaps even the avant-garde (although no such admission was forthcoming).

A far more effective example of Eisenstein’s forays into intellectual montage is found in his first feature, Strike! (1925), a film about a labor strike, sparked by a worker’s suicide, that concludes with a long battle between factory workers and policemen. Eisenstein crosscuts footage of a bull being slaughtered, blood gushing from its neck, heaving on the floor, with shots of the police mowing down strikers with machine guns. The bull twitches, dying, and so do the workers. The symbol needs no interpretation: the workers have been sacrificed to the gods of capital.

A Beauty of Power

Eisenstein thought these symbolic interjections the purest of agitation and his crowning propagandistic achievement (despite their failure with proletarian and peasant audiences). Yet they were something still more: they were Art. To wrench a symbol from the “concreteness” of photographic images, for Eisenstein, was to join the ranks of the great modernist movements of the day—especially futurism, cubism, and suprematism—which were doing their own hard work of liberation, on canvas, in verse, and with whatever material sculptors could get their hands on. But don’t imagine that his dedications to Art and agitation pulled Eisenstein’s work in opposing directions. He was a constructivist through and through who saw it as his duty to use his immense talent for the purposes of the state. Eisenstein claimed, “For art is always conflict: (1) according to its social mission, (2) according to its nature, (3) according to its methodology.”

Eventually, despite his obvious dedication to Bolshevism, Eisenstein’s cinematics were singled out, and loudly, publicly renounced by the party. Given his evident loyalty, it is unclear why; but such are the mysteries of Soviet Communism. And yet, until the end Eisenstein remained a believer. And if his cinema is beautiful, it reflects the beauty he found in the regime he supported. Eisenstein’s role in the revolution was to make a beauty of power. Using the first mass art, film, he told the story of the new reality, created an iconography of the new man, and heralded the dawning of the new age. Over 1,300 furious shots later—almost twice the number in American films made at the time—it appeared to many that he had succeeded. David O. Selznick, producer of Gone with the Wind and then an associate producer at MGM, upon seeing The Armoured Cruiser Potemkin, as he knew the film, circulated this memo:

“It was my privilege…to be present at two private screenings of what is unquestionably one of the greatest motion pictures ever made…The film is a superb piece of craftsmanship. It possesses a technique entirely new to the screen, and I therefore suggest that it might be very advantageous to have the organization view it in the same way that a group of artists might view or study a Rubens or a Raphael.”

Rubens? Raphael? Overblown, but Selznick rightly identifies the spirit of the film and the moment: what Potemkin offered (or seemed to offer) was newness. And while it no longer offers that, it offers something quite like it: other possibilities. Just ask Hitchcock, Coppola, and Scorsese, who drew and draw heavily on the Soviets.

But also consult with another man: “A genius doesn’t adapt his treatment to the taste of tyrants!” That’s gulag prisoner K-123, from famous dissident Aleksandr Solzhenitsyn’s One Day in the Life of Ivan Denisovich (1962), decrying Eisenstein. Perhaps he’s the one who deserves our ear. After all, our study of these seven minutes cannot separate Eisenstein’s art from the regime he served.

The Purest of Lines: Isao Takahata’s Final Bow

In November 2013, after fourteen years between films and at age 78, the brilliant animated feature film director Isao Takahata released Kaguya-hime no Monogatari in his native Japan. Under the title The Tale of the Princess Kaguya, the film is currently enjoying a critically celebrated American release. And deservedly so: it is one of the most arresting animated films ever made.

The basic story is familiar to all Japanese, as its source is The Tale of the Bamboo Cutter, a 10th-century folktale (some say late 9th) and the earliest known piece of Japanese literature. In the original, a miraculous child, Princess Kaguya, is sent from Heaven to be raised by a humble, old bamboo cutter and his humble, old wife. It is a rather straightforward morality tale. The beautiful princess’s greatest happiness comes from loyalty to her elderly, earthly parents—she artfully eschews many a well-concocted marital advance until her mother and father pass away. Princess Kaguya’s father tells her from his deathbed, “Thank you, my daughter, for all the happiness that you have brought us.” And, after his death, rather than fly into the arms of her waiting beloved, the princess stoically returns to the moon, wherefrom she descended many years before. “I am ready.” It is the type of story indigenous to the Japanese mind and spirit. Takahata retains the story’s 10th-century setting, but then angles it to reflect some of the more difficult parts of late-20th and early-21st century Japanese life.

Blessed are the Poor

Since his directorial debut in the early 1960s, Takahata has distinguished himself as a powerful dramatist and a visual adventurer who enjoys thematic sleight of hand. It’s not a Hollywood “twist” that he works; rather, it is the careful, Yasujirō Ozu-like revelation of the sub-themes as a film’s innermost drama. His celebrated Grave of the Fireflies (1988), the story of a Japanese boy and his little sister orphaned during World War II, for example, seems at first to be about the brutality of the American firebombing of Japan—which the film depicts unflinchingly. But its overarching drama centers on a boy’s ego and lack of perseverance. The boy pridefully refuses to accept the help of a grouchy, begrudging, mean-spirited aunt. Instead he chooses to fend for himself and strikes out on an ‘adventure,’ little sister in tow. The sister starves to death on the outskirts of town; he dies alone in a railway station. Most viewers judge the aunt for her hardheartedness; Takahata judges the boy’s hardheadedness.

The Tale of the Princess Kaguya is similarly structured. It remains, at heart, a story about family loyalty, and at first appears to be a rather typical tale of a young fairy princess growing, albeit awkwardly, into her exalted life. Takahata, however, announces his real theme when, after discovering the miraculous child in a bamboo grove, the bamboo cutter pronounces to his wife, “It’s me Heaven’s blessed.”

Takahata deftly draws the audience’s attention away from lingering over this moment, all the while unfolding its significance. Eventually, the bamboo cutter leads Princess Kaguya to the capital city. Using gold that Heaven sent to her, the bamboo cutter contracts a madam to give her a contorting education in aristocratic manners. The princess’s face is caked in white makeup, her eyebrows removed, her teeth blackened, all in preparation for marriage to a feudal lord. For this is the “happiness” of a princess, he declares. The bamboo cutter’s shaping and marketing of his daughter nearly lands him a seat at the emperor’s palace, where she is desired as a concubine. Her beauty is his ticket up the social ladder.

I’ve sketched the bamboo cutter a little starkly; he’s not a true film villain. He may be greedy, but his real menace is a lack of self-knowledge. He cannot see that the princess is suffering at his hand. When he realizes his failure at the end of the film, he repents—too late, however, for her happiness. But here is the key point for Takahata: Princess Kaguya was sent from heaven to the very place she was always meant to be, born into the very life she was always meant to have. And this is what no one expects, the bamboo cutter most especially. When he proclaimed, “It’s me Heaven’s blessed,” his mind raced with all the ways that Heaven lifted him from his life as a “hillbilly.” He doesn’t realize that the princess, too, had been blessed—blessed to be his daughter, to live as a part of a humble family, very far from the trappings of wealth and power. They were meant to live a rural life, or as she sings throughout the film, among the “birds, bugs, beasts, grass, trees, flowers.” Instead, she has been deprived of her birthright and forced to live as a fake, fake, fake, as she mourns towards the film’s climax. But, because the bamboo cutter subconsciously hated his own life, he could not conceive of it as blessed, and thus drove his daughter to misfortune. This is Takahata’s emerging sub-theme.

Critiquing the Miracle

It’s telling that what Takahata judges most severely in his film are the parts unique to his 21st century adaptation. These new elements—the bamboo cutter’s renunciation of his peasantry, his relocation to the capital city, and Kaguya’s education as a key to social elevation—have some populist angst to them. But they also evoke a bundle of tightly bound cultural images associated with the much-heralded “Japanese Miracle,” the rapid industrialization and modernization of post-war Japan and the incredible economic boom it generated. Takahata appears to be criticizing the very heart of modern Japanese culture: the rapid refashioning of the Japanese from a rural people to an urban workforce and the re-built post-war education system designed to fit them to their new economic roles.

Despite its tenth-century setting, The Tale of the Princess Kaguya is a film made with eyes firmly fixed on the troubles of contemporary Japanese life. It is suffused with anxiety over the hollowing of modern Japan—its demographic tailspin, and the deep slide of the Japanese Miracle into the dreary doldrums of the Lost Two Decades (twenty years of economic stagnation on the heels of astounding national economic performance). Large numbers of young Japanese, like the children of many industrialized nations, are putting off family life and childbearing, floating free of all other civic and social attachments—except for attachment to work. (This is the much-mocked Japanese loyalty to the firm.) Why, then, are Japanese parents pushing their children so strongly into an education and economic system that appears to be rending them apart? Takahata’s bamboo cutter suggests self-loathing and greed at the root of Japanese modernity.

Visual Shapeshifting

Takahata is not just a critic of Japanese modernity and a masterful storyteller; he’s also one of the great visual innovators in the history of animation, thanks in part to the good collective work of the world-famous Studio Ghibli. Takahata founded Ghibli with his one-time student and long-time collaborator, the great Hayao Miyazaki. The two briefly experimented with digital techniques but eventually scrapped the effort, choosing to remain partisans of traditionally drawn illustrations. Miyazaki responded to the digital encroachment by maturing as a painterly illustrator; Takahata returned to the spirited days of a youthful sketch artist. His great skill is design.

Lurking behind Takahata’s turn (or return) to sketch-like illustrations is a rejection of the technological smoothness and denseness of the computer generated image. With his last two films, My Neighbors the Yamadas and The Tale of the Princess Kaguya, Takahata has foregone the fuller imagery of his earlier Ghibli films, instead working in a granular, imperfect style that hews close to the sketch pad. The animation of My Neighbors the Yamadas has the flow and quality of a highly expressive doodle: pencil-based, sparsely colored, and more reminiscent of a Sunday comic strip than a feature film. The Tale of the Princess Kaguya is similarly minimalist, but features much thicker, charcoal-based line work and nearly impressionistic uses of watercolor. The effect is beautiful and wild, almost primal. At times the film’s images seem to emerge coevally with the story, as if a youthful savant were sketching the tale as it unfolded in his mind. This minimalist foundation allows Takahata to fluidly illustrate up or down, so to speak, either by increasing the detail and crispness of the line work or descending into murkier, rougher strokes depending on the emotional needs of the story. In one scene, as Princess Kaguya flees from her mansion, she morphs into a red, black, and white blur—the animation is stunning.

But despite the apparent spontaneity and fluidity of the images in The Tale of the Princess Kaguya, they are also efficient, expressive, and elegant—the marks of a director in complete control of the line work in his film. No more than needed; lines few, perfect, beautiful; a fertile image. Like a Bashō poem (Barnhill translation).

I don’t want to push the comparison very far. Takahata is no Bashō. But the line and color work of his late films emanate from the same quintessentially Japanese spirit (of which the 17th-century Bashō is a key voice): a haiku-like ordering of distilled images, cut down to their most needed parts, evoking a deeper, higher, and fuller order of things, even when apparently wild.

Losing Happiness

Takahata’s aesthetic response to the digital is of a piece with the larger ethos of Ghibli. Both Takahata and Miyazaki are fierce critics of Japan’s monetizing and technologizing of human life. Miyazaki wants his films to spur children to new hope and action in the daunting face of modern Japanese life, to figure out “how to start,” as he puts it in Turning Point, a collection of Miyazaki’s writings from 1997 to 2008. In contrast, Takahata’s films pitilessly follow the darker logic of Japanese modernity. The beauty of a child spoiled by avarice is his image of modern Japan.

By the end of The Tale of the Princess Kaguya, the bamboo cutter repents and gravely regrets the tortuous refashioning of his fairy daughter into an urbane, cool cosmopolite. But when Heaven descends to re-collect her—in contrast to the stoic ascent of the original folktale—Takahata’s Kaguya leaves Earth with deep, heartrending regret. She has been betrayed by the subconscious ambitions of her father.

Just prior to her heavenly ascent, knowing the hour is near, Takahata’s Kaguya returns alone to the village where the bamboo cutter first found her. I would have been happy here, she says. When she sees an impoverished peasant bowl maker, whom she loved as a girl, she tells him, I would have been happy with you. And that’s what Takahata’s oeuvre is fundamentally about: the loss of happiness once found among the “birds, bugs, beasts, grass, trees, flowers.”

Bearing New Images

Hayao Miyazaki’s films are some of the most charming in the short history of cinema. The heroes of his animated tales—Pazu in Castle in the Sky, Ashitaka in Princess Mononoke, and Princess Nausicaä in Nausicaä of the Valley of the Wind—adventure through richly imagined worlds, amongst spirits and gods, where the stakes are very high: love is tested by hate, good by evil, life by death. And despite their many obstacles and inner temptations, Miyazaki’s heroes choose rightly: Pazu embraces love, Ashitaka stands with good, and Nausicaä dies for life.

Turning Point is the second of a pair of memoirs by Miyazaki; its predecessor is Starting Point: 1979-1996.

Through their triumphs, Miyazaki’s heroes are liberated. They soar—literally. All but two of his ten written and directed films feature an extended scene in which our heroes take flight, weave through the clouds, and ride the wind across a lush acrylic sky. One feels free when watching his films.

But Miyazaki himself is weighed down, overburdened and in despair. That’s the principal lesson of Turning Point: 1997-2008, a collection of translated interviews, public statements, essays, and panels on which Miyazaki sat, compiled from the years he directed Princess Mononoke, Spirited Away, and Howl’s Moving Castle. The book offers “lessons” rather than “themes,” because there is a distinct lack of about-ness to it. Miyazaki’s public voice, chronologically ordered, roams to and fro and very far from his films. Even at events convened to explore them, he moves quickly to other subjects, often timely matters of public concern, and those subjects slide to talk of Japan’s cultural troubles. Eleven years of free-roaming conversation will reveal the bent of a man’s thoughts; the conversations collected in Turning Point capture Miyazaki deflecting talk of his films and hurrying to speak of Japan’s imminent demise.

Dimming the Children

While the troubles named are many, for Miyazaki, the greatest threat to Japan’s future is the systemic dulling of Japanese children. Their fate, Miyazaki avers, is to be made into “normal, boring adults.” But he means far more than “boring.” He fears Japanese children are dimmed by a culture of overconsumption, overprotection, utilitarian education, careerism, techno-industrialism, and a secularism that is swallowing Japan’s native animism. The play, imagination, and moral formation of children have been ceded to the sedations of digital gadgetry and—surprisingly, in light of his own profession—animation. Japanese children are formed in the womb of technology, says Miyazaki, and raised on the easy pleasures of Japanese comics and animated films:

“I’m part of a subculture that makes animated films. What we have done is to narrow down the world of children and fill that narrow sliver with this subculture. The television stays on all day long even in rural homes [just as in urban homes]. People’s lives have become filled with this subculture.”

“This,” he adds, “is the source of the downfall of a people.” This is not hyperbole; seventy pages earlier, Miyazaki lamented that manga—Japanese comic books and graphic novels—have become the “common denominator” of the Japanese people. This is “the peril faced by Japanese culture.”

Downfall at the hands of cartoons and television seems alarmist, curmudgeonly, and, in light of Miyazaki’s own profession, self-centered. However strange this may be, on one point he is surely, but sadly, right: Japan is in peril. Indeed, it’s dying. In its 2013 survey of the sexual habits of Japanese, the Japan Family Planning Association (JFPA) discovered that a catastrophic number of Japanese teens and young adults, aged 16-24, have lost the desire for sex. A quarter of Japanese young men were “not interested in or despised sexual contact”; 45 percent of women reported the same. Not surprisingly, in 2012, fewer Japanese babies were born than in any other recorded year. The consequences are clear: JFPA director Dr. Kunio Kitamura warns that Japan will “perish into extinction.”

A Crisis of Desire

Japan’s crisis is perplexing. Countless explanations have been put forward—mostly economic, many social, none of which fully satisfies. All agree, however, that “hopelessness” has extinguished the erotic drive of many young Japanese. The root of this hopelessness, observers frequently assert, is ongoing economic stagnation. Miyazaki does not address the erotic conditions of young Japanese directly in Turning Point, but he concerns himself with Japan’s interior life time and again, especially with how the imagery of Japanese pop culture and technological gadgets affects desire and disaffects Japanese from the world and one another. Where Miyazaki parts from his contemporaries is that his exploration does not give economics pride of place, even though he treats it with great seriousness. For Miyazaki, despite the 25 years of economic decline, the “critical” issue Japanese face is their passionate need for the synthetic and digital. This, he says, “is a bigger problem than all the economic chaos.”

According to Miyazaki, Japanese children lack a taste for the real and human and know little of the natural beauties of the world. He said:

“This was fated from the time when television, or manga, or video games, or even photo print clubs came to fill in something children had lost [a less restricted life, close to nature in the pre-war economy] and became more exciting than reality.”

The desires of many—if not most or even all—Japanese children, Miyazaki believes, have been hollowed, stretched, inflated for the false, and, thus, deflated for the true. The beauty of woman for man and man for woman, especially, has been supplanted by the cartoonish, pornographic, robotic, and monstrous. This is what he meant when he called animated films “the source of the downfall of a people.”

According to Web Japan, a Japanese information website, manga accounts for “36% of the volume of all books and magazines sold in Japan,” an astronomically high share of the market. And its stylings have stretched far into other media, deeply influencing even the content of novels, making manga, as Miyazaki put it, “no longer a subculture,” but, rather, “the originator of culture.” In Japan, manga imagery is ubiquitous—advertising, television, social media, toys, public festivals, conventions, and social groups—and travels with you via smartphones, tablets, handheld videogames, and countless other portable gadgets. “Everything,” Miyazaki said, “has become insubstantial and mangalike.”

Hoping for Reality

Miyazaki recounts this to his own great shame. An extraordinary 96 percent of Japanese have seen one of his films. His fear is that he is greatly responsible for the fantasia of Japanese life. His hope, on the other hand, is that his films illuminate what others make dark. Miyazaki’s ambition is to make realist films that urge children toward reality. He, for instance, described his film Spirited Away—about a ten-year-old Japanese girl, whose quest is to land a job from a witch who owns a bathhouse for tired spirits—as an attempt to “trace the reality in which ten-year-old girls live.” But with all the frog people, sentient soot, and ten-foot babies, what could such a reality be?

Much as it appears. Miyazaki is an animistic pagan living in a world of spiritual essences—beasts, insects, plants, trees, rocks, and the wind are ensouled—where the forest groves and deep waters are home to the gods. At least, that’s how he styles himself.

His animism may explain the content of his films, but not necessarily his approach to film craft. His criticisms of Japanese culture and the manga industry offer a better starting point. The largest problem facing the manga industry, in Miyazaki’s view, is that the people running it are anime fanatics, known in Japan as otaku. These “sickly otaku types,” as he called them, were reared on manga and Japanimation and developed an inordinate desire for them—their shape, scale, motion, symbols, and narrative tropes. Such children, “locked in [manga’s] own enclosed world,” became illustrators themselves, reinforcing the enclosure. With their arrival in the industry, characters became boxier, eyes ballooned, and, to be frank, breasts grew larger. The expressiveness of the manga industry was further attenuated, in a cycle that cheapens and thins the general taste of Japanese society. These otaku, “raised amidst the clamor,” Miyazaki said, “probably can’t be the flag bearers for new images.”

Bearing New Images

To bear “new images,” to make films that liberate, the filmmaker must himself be liberated, free of the customs of the genre. That’s why Miyazaki frequently stresses that he does not “watch film at all” and describes his own career as an ongoing effort to escape the yoke of his great forebear, Osamu Tezuka, the father of manga, creator of Astro Boy, and Miyazaki’s greatest influence. That’s also why he strongly urges that, if an illustrator is to spur audiences to seek and love the world, he must himself be filled with its riches. That is, he must gain an intelligent understanding of it by cultivating “a constant interest in customs, history, architecture, and all sorts of things.” Otherwise, he “can’t direct.” And if he doesn’t have time to study, he must “look carefully at what is right in front of [him].” If he fails to do so, no matter what he makes, “it turns out to be a film we’ve seen somewhere, or something we’ve seen in manga.”

Unlike the work of most of today’s major directors, there is nothing counterfeit about Miyazaki’s films. They have an inner clarity and beauty that few others achieve. Yet they are frequently wrapped in mystery, ambiguity, and confusion—and purposely so. Miyazaki not only fills his films with the treasures of intellectual study, he also refuses to over-clarify them. As he said of his epic Princess Mononoke, “I made this film fully realizing that it was complex…If one depicts the world so that it can be figured out or understood, the world becomes small and shabby.” To truly mirror the world, he makes his films difficult to understand, because “there are so many things in the world that we don’t understand.” If a film answers all the questions it raises, after all, one need not search beyond it. Miyazaki’s films stretch into the world and require things of it for their completion—say, a conversation with friends, a hike through the woods, a fear of the gods, or even a childlike innocence.

In Turning Point, the burden of synthesis lies with the reader. For that reason, despite its evident charms, I cannot recommend it, except to would-be biographers or critics. As much as one can glean from the book about Miyazaki’s craft (and much more can be mined from it than I have), his thoughts move most naturally towards Japan. That, the book makes clear, is where his heart is—and not with his films. This, paradoxically, is one of the keys to their greatness. His films are not about his ego or even the art of film. They reach for far higher things.

What becomes of Japan, only time will tell. But I hope along with Miyazaki that his films do spur Japanese children to seek a better, realer happiness. As Miyazaki said, “All I can do is to create films that help children feel glad they have been born.” That is all his films ever do. And that is no small thing.

 

The Saturnine Age and the Modern Genius

When a modern person thinks of artistic genius, he imagines an individual. Some have quantified genius by standardized exams – for example, the I.Q. test – but most know a genius by his work. The Brothers Karamazov is proof that Fyodor Dostoyevsky was a genius. Be it Shakespeare, Mozart, or Michelangelo, the man of genius is epoch-making because his work acutely affects history and seems to redefine our basic categories of human potential.

Yet in our common imagination, the artistic genius is not only an individual of excellent output, but an individual of a certain disposition. The man of genius is exceptional in intelligence, originality, and creativity. While free from all that restrains the average person, he bears the greatest burden of all: the burden of being himself.

What the modern person misses, however, is that this particular sort of genius is but a newborn – and not just a newborn, but a bastard. The modern artist-genius, and the entire modern notion of art, was engineered in the cultural and philosophical laboratory of the Renaissance. The Renaissance assault on millennia-old beliefs about genius gave birth to the modern ideas of both Art and artist. History has forgotten what truly made the Renaissance radical—its rewriting of the classical world.

In the classical world, the genius was not a man at all. It was a god.

For both the Greeks and Romans, the genius was a personal attendant god, like a guardian angel. Of the yoked partners, the human was the lesser, and was charged with pleasing its superior genius. If pleased, like the angeli of the later Christian tradition, the genius would impart to man the wisdom of the Demiurge—the source of all being, according to Plato. Because the Demiurge was ultimate and transcendent, communication between God and man required an intermediary: the genius. In the classical period, wisdom descended from on high, and the genius was the tongue of proclamation. Ultimately, the genius was the source of man’s insight into the created order and the Creator.

Like all things born in Antiquity, the ubiquitous genii were subservient to a strict hierarchy. In the Ptolemaic cosmology – from the very earth, to the upper air, through the aether, the seven heavenly spheres, the fixed stars, and eventually to the border of the Primum Mobile (the Sphere closest to God) – the genii, like Jacob’s ladder, whispered man’s petitions upwards and the revelatory wisdom of God down the great chain of being.

Indeed, unlike the vacuous, empty expanse of Copernicus’ later model, Ptolemy’s geocentric universe was a palace teeming with life. And presiding from his throne was the greatest of all the heavenly spheres: the chief Genius, Jupiter. Known by the late-Roman Stoics as the Progenitor-Genitrix, Jupiter was the demiurgic source of all creation and Lord of all lesser genii. In The Discarded Image, C.S. Lewis, scholar of Medieval and Renaissance literature, describes the reign of Jupiter:

The Character [that Jupiter] produces in men would now be very imperfectly expressed by the word “jovial,” and is not very easy to grasp; it is no longer, like the saturnine character, one of our archetypes. We may say it is Kingly; but we must think of a King at peace, enthroned, taking his leisure, serene. The Jovial character is cheerful, festive, yet temperate, tranquil, magnanimous. When his planet dominates we may expect halcyon days and prosperity…He is the best planet, and is called the Greater Fortune, Fortuna Major.

But as we can see, the temperament of the modern man of genius stands in stark contradistinction to the genius of myth.

And so we come through history to the penultimate chapter of the mythological genius: the Christian Middle Ages. To the medieval Christian thinker, the assumption that all things had been created good, and that evil was merely a distortion of the good, became axiomatic. No thing was worthless. If something were “baptized” (in other words, Christianized), it would reclaim its full and rightful goodness.

As a result, the genii and their progenitor, Jupiter, were subsumed into the greater Christian schema. While Jupiter officially lost its godhood, it remained King of the heavens and became an “influence” that promoted festivity, magnanimity, and prosperity in man. Similarly, the genii became either angels or demons, messengers of divine truth or malefic falsehood. And yet, the jovial genius, much like our melancholy genius, was deeply entrenched in the medieval imagination and retained a central position in Christian literature, as exemplified by Dante’s The Divine Comedy.

Finally we have reached the Renaissance, the sepulcher of the mythological genius and the womb of the man of genius. But before we go further, we must investigate an essential gap in the telling: the nature of Man and Art in the genial universe.

From the late 14th to the early 17th century, the humanists alleged that Late Antiquity and the Christian Middle Ages were false historical developments, long divergences from the true history found in a continuum of humanity between Athens and Florence (the center of humanist thought). Their movement was called the Renaissance, meaning the “rebirth” of Man.

In spite of this humanist contention, Florence’s claimed anthropological commonality with Athens seems a stretch. Plato and Aristotle, the preeminent Athenian thinkers, both postulated that man had an unchangeable essence defined by the creator of Nature, the Demiurge. And in the hierarchy of the created order, Man stood low. Departing radically from Athenian thought, the humanists asserted that Man was ruled neither by essence nor by hierarchy. Rather, to the humanist, Man was the very apex of being, free to choose his own nature. For example, Pico della Mirandola, an influential 15th-century Florentine, held that Man’s soul had the chameleon-like ability to change into whatever he chose. One wonders why the humanists so forcefully claimed continuity with Athens.

Whatever the intention, there was a devil-may-care revisionism surrounding the humanist movement. After all, one does not alter canonized history and long-held beliefs about the eternal nature of Man without hubris. And just as they changed these, they began to change the nature of art as well, laying the foundation for what we know today.

In the bedrock of the fine arts lies a noble lie, born of the falsified history of Renaissance humanism: that Art, by which the humanists meant the “fine arts” (painting, literature, poetry, architecture, dance, music, sculpture, and theatre), has existed as a distinct category since time immemorial. But once again, true history contradicts the humanists.

From the ancient Greeks to Medieval Man, spanning well over 2,000 years, the West divided the hierarchy of being into two distinct categories: Nature, God’s ex nihilo creation, and the arts, meaning the artificial, any human product. The arts were themselves divided into two distinct categories—a “syllabus,” as Lewis said, that by the time of the Middle Ages “was regarded as immutable…By long prescription, [it] had achieved a status not unlike that of nature herself.” The first category was the superior art of the mind and the work of the cleric, the revered seven Liberal Arts: Grammar, Dialectic, Rhetoric, Arithmetic, Music (theory, not practice), Geometry, and Astronomy. The second was the lesser art of the body and the work of the layman, the mechanical arts—painting, farming, sculpture, shoemaking, and all else. Larry Shiner, the author of The Invention of Art, writes frankly: “The ancient Greeks, who had precise definitions for so many things, had no word for what we call fine art.” Neither, for that matter, did Rome, nor the Middle Ages. Fine art is a recent category, midwifed by the Renaissance humanists, reared by Enlightenment philosophers, and now, in the 21st century, grown old beyond its years and forgetful of its own nature.

John Milton, perhaps the last true medievalist, published Paradise Lost in 1667, at the denouement of the Renaissance. In his infamous harangue, Satan, Milton’s extraordinary protagonist, proclaimed that it was “Better to reign in hell, than serve in heav’n.” Because of our democratic temperament, the modern person naturally sympathizes with Satan’s quest for autonomy from God. To Milton, by contrast, Satan’s radical selfdom was the very root of evil: pride. In accord with Christian orthodoxy, Milton saw pride as the unwillingness to accept one’s God-ordained nature. Paradise Lost, while not solely allegorical, was partly a lamentation for the collapse of the medieval order at the hands of the humanists, whom, like Satan, Milton saw as unwilling to accept their position in the cosmic hierarchy.

On that account, we must complete the testimony of the genius. In more ways than I can possibly detail, the humanist artist embodied Miltonic pride. But the most damning evidence was their unprecedented proclamation that they, mere men, were geniuses. In 1533, Agrippa of Nettesheim, a German humanist, explained in his De occulta philosophia that rather than wait for the genius’ descent, one could use one’s reason to bypass the hierarchy and seize the knowledge of the genius by one’s own power. Coeval with this new doctrine was a drive by artists to be understood as inspired, creative, and autonomous individuals. Until then, by contrast, inspiration had been the gift of the genii; creation was solely the power of the God who made Nature from nothing (the artist, both liberal and mechanical, was said merely to imitate the natural order); and autonomy, as I described above, was thought the devil’s desire. And finally, just as they chose genius as their true nature, the artists also chose a new planetary master. They replaced Jupiter, Fortuna Major, with Saturn, the planet that produced melancholy in men—Infortuna Major, the Great Infortune. Like Prometheus, the artists had stolen the powers of the gods, Jupiter and genii both. Eventually, the mythological connotation of the artist-genius was forgotten, and genius became the innate excellence of the individual.

Today, because the genius is within Man, art has become self-expression, not, as it once was, an appreciation of God and his Nature. When a person looks at the ceiling of the Sistine Chapel, he or she does not do so in order to see God communing with Adam; rather, it is to see Michelangelo. Originality, because the power of creativity now belongs to the fine artist, has become idolized as the criterion for judging a piece’s worth. As any artist will tell you, this, more than any other art convention, is the most crippling.

And ever since the Renaissance, Saturn has cast a melancholy shadow over the mind of the artist, turning art into an agonizing vocation. Because of the pride of the humanists, the sins of the fathers have been visited upon the son: the art world. Unfortunately, contemporary artists, who know nothing of the arrogance of their predecessors, are forced to carry a torch which, if not returned to the hearth, will continue to burn those who carry it.

The Myth of Fairy Inferiority

If someone wrote a constitution outlining the powers allotted to the contemporary storyteller, one of its central clauses might read like this: “So-called ‘myths,’ an antiquated genre cobbled together from perpetuated falsity, if written at all, must be written solely to amuse the children in the nursery.”

But even without such a rigid document, most of us have tacitly consented to this convention. Lemuel K. Washburn, the early-twentieth-century author, atheist, and ardent proponent of such thinking, once derisively asked:

Where are the sons of gods that loved the daughters of men?
Where are the nymphs, the goddesses of the winds and waters?
Where are the gnomes that lived inside the earth?
Where are the goblins that used to play tricks on mortals?
Where are the fairies that could blight or bless the human heart?
Where are the ghosts that haunted this globe?
Where are the witches that flew in and out of the homes of men?
Where is the devil that once roamed over the earth?
Where are they? Gone with the ignorance that believed in them.

These creatures, and this “ignorance,” persist, but they are in disrepair, and too small to matter. We’ve stored them in the playpen, and as any parent will tell you, no object goes unmarked if left with even the best-behaved child. When we in the West wrote off our oldest and most treasured narratives as (at best) merely for children, or (at worst) merely for historians, we created a great discontinuity in our story tradition – and we rarely stop to ask, “Is this where myths truly belong?”

Fortunately, the West is not the only culture with a tradition of storytellers. The Japanese, for example, have a strong narrative continuity, unaffected by the literary developments of western modernity – and their myths are not exclusively for children. Hayao Miyazaki, the most famous Japanese director of what we Americans would call “children’s movies,” recently released Ponyo, an excellent myth (a feature-length cartoon) that defies many of our western story conventions. In light of its success, as well as that of Miyazaki’s entire catalogue (Spirited Away is considered one of the greatest films in Japanese history), we must wonder whether our unspoken laws are false. Ponyo, Miyazaki’s reimagining of Hans Christian Andersen’s “The Little Mermaid,” offers us a glimpse of how we in the West might restore myth to its proper place.

The film is the story of a love so powerful and important that it could literally touch off the Apocalypse. Sosuke, a small boy who lives with his parents on a sparsely inhabited island, one day comes across Ponyo, a magical fish washed ashore. Sosuke quickly takes a liking to Ponyo and promises to take care of her forever. This innocent promise has profound repercussions because Ponyo, equally innocent and equally smitten, chooses to accept Sosuke’s offer. By sheer strength of will, she defies her fish nature and turns into a human so that she can love Sosuke forever, a person she is not made to love. If Ponyo’s choice and her love remain unfulfilled, if Sosuke’s promise is false and he refuses to love her, then the natural order cannot sustain her unnatural decision and will be destroyed. The fate of everything (at least everything that concerns Man) depends upon Sosuke’s willingness to fulfill that promise. In Ponyo, quite a lot depends on love.

The plot uses many archetypes with which westerners are familiar, especially the forbidden love motif. So what sets Ponyo and Miyazaki’s other films apart from western stories? There are too many answers to that question for this short article; however, I believe the most noteworthy distinction is that Miyazaki’s tales show the unparalleled power of mythology. This, I believe, is crucial to understand and emulate if there is to be any restoration of the western myth.

Why is the power of Miyazaki’s mythology significant for western myths? Put simply: because of the decreasing power of the western fairy (by which I mean the mythical creature generally). This diminution (in quality, not frequency) is a relatively recent development in western storytelling. In fact, J.R.R. Tolkien himself, a master of myth and expert on mythology, claims that the devolution of fairies originates with Shakespeare and his contemporaries.

In A Midsummer Night’s Dream, for example, the fairies are minuscule, too swift to be seen, and out only at night. Their worst deed is to steal a baby and leave behind a double; their best is to bless the marriage bed. In short, Shakespeare’s fairies are barely consequential. Since the Elizabethan era, western literature has continued to diminish the power of the fairy, leaving fairies without innate supremacy over humans and with a decreasing effect on the human story, ultimately resulting in characters like David the Gnome and the Smurfs.

This contrasts with a much longer, and far older, tradition of western myth, in which the mythical element is of such great stature that it is incomprehensible and uncontainable. For instance, in Greek mythology, Zeus and the other Olympians, themselves of such remarkable rank that they could alter the affairs of men on a whim, succeeded the reign of the Titans, who bore monikers like Cronos (Saturn), Ouranos (Sky), and Gaia (Earth). Imagine the effect of such imposing characters on the Greeks – it was Man, young and old, who was diminutive.

Similarly, in Beowulf, a tale told hundreds of years later and many leagues away, when our heroic King faces the dragon, he is doomed; and when he falls, his country plunges into a dark age. The mythic element was perilous when met, even if it was good-natured. These tales served an existential purpose: they placed man on the periphery of meaning and stature, and the fairy far closer to the center.

Clearly, a drastic shift in the nature of fairies occurred in the West – but why? Because there was a drastic shift in the self-image of Western Man. Tolkien proposes that the diminishment of western fairies began with European exploration. He says, “It seems to become fashionable soon after the great voyages had begun to make the world seem too narrow to hold both men and elves; when the magic of Hy Breasail…had become the mere Brazils.” This hierarchical reorganization continued into the Enlightenment, during which Europe elevated Man and his reason to supremacy and rid itself of “superstition.” In such a climate of anthropocentrism, powerful fairies became increasingly unacceptable, and therefore less interesting, and were eventually marginalized by the advent of so-called Realism, a European fascination with the mundane and a rejection of the fantastic.

The western fairy is now too impotent to matter, and has become a character in a new myth: the myth of human supremacy. Adults no longer have anything to fear from fairies – and therefore have nothing to learn from them. For example, in 1904, J.M. Barrie premiered Peter Pan, or The Boy Who Wouldn’t Grow Up, a play that presents mythic creatures in a manner highly indicative of the modern West. In Barrie’s tale, a fairy’s existence is so beholden to humans that the mere disbelief of a child is potent enough to destroy it, and a child’s clap, which signifies belief, is strong enough to resurrect it. Certainly, Peter Pan has many virtues (its criticism of “adulthood,” for one) and is a modern classic for good reason. But Peter Pan is just that: truly modern. In it, the fairy is entirely subjected to human power. Today, in films like FernGully, Shrek, and Hellboy II, fairies are dominated by humans, and the new myth of human superiority and fairy inferiority is perpetuated.

Fortunately, the West is not entirely lost. In the mid-twentieth century, authors like J.R.R. Tolkien, C.S. Lewis, and Madeleine L’Engle began retelling the old story in competition with the new, and made fairies – like Aslan – powerful enough to teach adults and children alike. We can hope their work will continue with a new generation of authors. In the meantime, Miyazaki’s fairies are potent enough to remind us of something missing from our own tradition – and maybe even from ourselves.