A few years ago, I made the startling observation that I am a “hard sci-fi” film buff. Whenever I refer to myself in this way, it always raises eyebrows. What exactly is “hard sci-fi”? I’d taken for granted the meaning of this niche term for fiction grounded in actual science and technology. It is why I hated Prometheus (Ridley Scott, 2012). And as much as I still can’t whole-heartedly embrace Interstellar (Christopher Nolan, 2014), I find it endlessly fascinating. The science, and the implications of its use in manipulating the natural world, is one of the reasons why I love Jurassic Park (Steven Spielberg, 1993) so much.
I’m not exactly sure what led me to seek out these thought-provoking narratives about life, history, and time. In short, the nature of existence. Is it because my father, a numbers and all-around science geek, would routinely tell us children that he believes in aliens and a multiverse? “Remember, in a parallel universe, you’re my mother, and I’m your son. In another, you are green, and in another blue. Anything and everything is possible.” In much the same way that people find comfort in believing in god, I find the notion of life on other planets, in other universes, so impossible to ignore or rule out that it is almost certainly true. For me, anyway. In any case, perhaps having this open mind and this desire to gaze up at the stars, to imagine different lives and circumstances, all but ensured my eventual identification with hard sci-fi. I may not understand everything, but my determination to make sense of these narratives defines my relationship to the genre. Hell, you could say that my lifelong obsession with cinema influenced this deep-seated belief that anything and everything is possible. For what is cinema if not the exploration of alternate realities defined by space and time? Cinema is still so young, and we’ve only scraped the surface of what is possible.
My hard sci-fi epiphany may have occurred when, in April 2007, I was one of only a handful of people taking in an afternoon showing of Sunshine (Danny Boyle, 2007) in Lancaster, England. Sitting in the darkened theater, thousands upon thousands of miles from home, and submitting to a film narrative that runs counter to our current fears about global warming, I had a visceral reaction to everything I watched on the big screen. I’d often thought about the time when the sun will die, billions of years from now, and how its expansion into a red giant will likely swallow up Earth. It was quite another thing to be confronted by a crew of astronauts charting a course to detonate a bomb inside the dying sun, to bring it back to life so that it may end the terrible Ice Age enveloping all of Earth. The physicist hero Capa can only successfully fulfill his mission by delivering himself with the payload, in the end reviving the sun in death. It seems perfectly logical to me that the film’s screenwriter, Alex Garland, would then go on to make one of the best hard sci-fi films about artificial intelligence. I fell hard for his directorial debut Ex Machina, which came out in April of 2015, and it cemented my new obsession with all things artificial intelligence.
Like Garland (and Stanley Kubrick before him), I believe that the next step in human evolution is the moment when we reach the singularity, opening the door to a world where the reasoning of man-made machines supplants that of humankind. In Ex Machina, you root for the android Ava to escape her laboratory/modern home. She is a gothic heroine held captive by her megalomaniacal creator Nathan, and even though she cleverly manipulates and outwits her sympathetic suitor Caleb, leaving him to die on the compound after killing Nathan—even though she is a computer—you relate to and identify with her plight. Ava is the future, and her discovery of the outside world suggests that our future, when it is run by machines, will not be without wonderment. It may be a scary thought that our computers will be in control one day, but we’re already headed in that direction (after all, who doesn’t check her phone for messages whenever it dings, like Pavlov’s dog?), and by the time scientists reach the singularity, I will be long gone. That future doesn’t frighten me one byte.
On a high from Ex Machina, I devoured other cultural products about artificial intelligence last year. Chief among them were the novel Speak by Louisa Hall and The Brain with David Eagleman, a six-part documentary series that only touched on A.I. in its last hour. In the former, Hall weaves a compelling, intertwining narrative around five different people from disparate times and places, people directly or indirectly involved in the science of artificial intelligence. She presents one of them, Alan Turing, the father of modern computing, through letters he writes to the mother of his childhood friend Christopher, whom he loved all of his short, tragic life. The Imitation Game (Morten Tyldum, 2014) touches on some of Hall’s themes, and I inevitably pictured Benedict Cumberbatch while reading Turing’s sections of the book, but that prestige picture paled in comparison to Hall’s thought-provoking and evocative language. Here is one of my favorite lines by Hall, writing as Turing, who’s reflecting on the theoretical experiments he was never able to perform with Christopher (Christopher died while they were still boys at school):
… I can only imagine that our brains must grow in similar patterns: one step backwards, added to the present term, resulting in a subsequent term that combines both. Past and present, contained in the future. (191)
I thought of Steven Spielberg’s A.I. Artificial Intelligence (2001), too, when reading the book. Another voice in Speak belongs to an inventor of lifelike companion dolls for children that, upon extensive exposure, inadvertently and progressively transform the children into lifeless robots. Interspersed are the memoirs that the dolls’ creator, Chinn, writes from prison, as well as chat transcripts entered as proof that his programming did (or did not) intentionally harm children. Framing each section of the book is a first-person account from one of his dolls, on its way to die in the desert. The bleakness of its fate, its dying battery, its struggle to hold onto language (for that is what it thinks makes it humanlike), reminded me of David, the robot boy in A.I. When I grieve for a fictional humanoid robot—whether on screen or on the page—I must be subconsciously grieving my own mortality.
That is why I found the story of budding neuroscientist Kim Suozzi so fascinating (not to mention that we bear an almost uncanny resemblance). Recognizing the impossibility of beating cancer (she was twenty-three when she died in 2013), Kim spent the remaining months of her life raising the funds to, essentially, donate her brain to the science of cryonics. She fought alongside her boyfriend to preserve her brain in extremely cold temperatures so that in the future, when the science has finally been developed, her consciousness can be plugged into a computer. In other words, she would reach the kind of singularity that Johnny Depp’s character does in Transcendence (Wally Pfister, 2014)—only without the ability to take over the highly connected digitized world. The New York Times profile of Kim by Amy Harmon is heartbreaking, but it asks a lot of questions—the right questions. When she died, Kim knew that she was making a gamble. We still don’t know if we will ever be able to simulate our connectomes, the maps of connections in the brain that give us our own unique consciousness. But isn’t it beautiful to dream of that possibility? I don’t see Kim’s wish as selfish (as in, why does she get to cheat death and become immortal through reviving her brain?). I think it’s inspiring that a young woman would devote her life—however short—to science, to figuring out the mystery of whether or not we can bring a person back to life.
In The Brain, neuroscientist David Eagleman happens to visit the facility where Kim Suozzi’s brain is being preserved in order to highlight the controversial science guiding organizations like Alcor Life Extension Foundation. Ted Williams is also famously interred there. More so than his comments on artificial intelligence, I savored Eagleman’s distillation of complex concepts, such as identity and reality, and how these socially constructed notions first and foremost exist within the brain. They can get distorted there, too. The Brain also made an alternate reality for me all too real: what might I have become had I continued studying linguistics in college? (I checked out when phonology got too challenging.) Back in the day, I’d imagined being a sociolinguist—I still act like one, to an extent—but with my new fascination with the brain, I know for sure that I would have liked to have been a neuroscientist who studies language, memory, and the brain.
In other words, The Brain confirmed what I already believe about life. We are who we are because of what we have in our brains and because of how our brains interact with each other, transcending time and space. That doesn’t mean our brains always work properly, or in the ways that we want them to. Memory is reliably unreliable. Words escape us from time to time. These are but two reasons why I attempt to document my every waking hour, why I write down what I have seen, why I used to write about everything I watched. I know I cannot store all of that information in my brain. But my brain allows me to create the systems I use to remember, including a coded language. Even so, these records will always be incomplete. There are some things I forget to write down, some things I don’t want to commit to paper for fear that another’s eyes may read my words and know my secrets. I may be knowable through what I think, say, and write, but I will never be known. This is the beauty and cruelty of our human consciousness. We’ll never be able to see the world exactly as someone else does. But of all of the art forms, cinema comes the closest to achieving empathy.
Read the Montage Series, 2015: A Year in Reflection, from the beginning.