One of the people responsible for putting a man on the moon died last week at the age of 95. This is, in 2014, a common and not terribly newsworthy occurrence: the generation of men and women whose industry kept a dozen men safe from vacuum, radiation, and temperatures ranging from scalding to freezing is now succumbing to the mediocre ravages of time. The youngest living astronaut to have walked on the moon itself is older than Hitler’s invasion of Austria.

But John C. Houbolt deserves our attention, for at least a moment, because his contribution was important enough that it changed the direction of the US space program. As NASA tried to figure out, in the early 1960s, how it was going to meet President Kennedy’s goal of landing a man on the moon before 1970, Wernher von Braun was pretty sure he already knew the answers: he had, after all, been thinking about this stuff for some time.

The problem was that von Braun, whom the Americans had seconded with his enthusiastic consent at the end of World War II, didn’t want to build machines just to land a man on the moon. He wanted rockets that could also help the United States build space stations, and eventually put a man on Mars.

Social physics is an emerging (and ominous-sounding) discipline that wants to “connect the dots” of our data—but, ideally, as a force for good.

In the summer of 2013, one of the wunderkind companies of the 2008 green energy euphoria went belly-up. Better Place, formerly Project Better Place, was an effort by Israeli entrepreneur Shai Agassi to revolutionize the concept of electric cars by, essentially, taking the part of the consumer experience everyone hates about the mobile phone provider (selling people a barely subsidized piece of consumer electronics for the dubious privilege of being locked into a multi-year contract) and combining it with the second-largest purchase most households make.

To his credit, Agassi was legitimately trying to think about the problem of accelerating the adoption of electric cars in a new way. To his discredit, that seems to be about the only nice thing anyone has left to say about him. Fast Company has a pretty clinical post-mortem of Better Place, and it’s kind of a buffet of vignettes of what happens when a firehose of money gets pointed at people who don’t have the skills to know what to do with it, or even the skill to recognize what they don’t know. (The point where Agassi divorces his wife and starts bringing his new girlfriend to meetings is a particularly nice touch.)

Would Aristotle be good at Twitter? This has been on my mind lately. As the latest round of acrimonious social media debates has popped up in the form of “Cancel Colbert” and the resignation of Mozilla CEO Brendan Eich, people have again taken to arguing vociferously on Twitter, doing their darnedest to convince others of their rightness. So I wonder, would the person who tried to set out in Rhetoric how persuasion works be good at arguing with people in 140-character snippets?

The accepted wisdom is that those who are good at argumentation in other venues are also good at it on Twitter. A lot of the time, that seems quite true: it’s why writers, essayists, and annoying pedants have taken to the service so happily. But as I watch the kinds of people who seem forced to endure arguments with others—namely, women, people of colour, and other activists—I get the sense that the rules of rhetoric laid down by folk like Aristotle are especially unsuited to Twitter. More interestingly, though, maybe watching people on Twitter invent new rhetorical tactics suggests that what’s wrong with online discourse isn’t that it is hampered by constraint, but that there isn’t enough of it.

Of the many reasons Star Trek: The Next Generation appealed to my nerdy teenaged mind, the holodeck was perhaps the most significant. Granted, like most adolescents, my imaginings about what I’d do with technology that allowed you to create, and then step into, believable fictional worlds tended to focus on sexual adventures with the crew of the Enterprise. But at the same time, it also spoke to a mind equally formed by the worlds of literature and video games: here, in the guise of a distraction, was a way to enter a place and then return from it, changed and renewed.

Re-creating the world is among the most basic of human desires; fiction of all kinds remakes the world, ever so slightly changed. That habit is about to enter its next, holodeck-like phase, however, thanks to two technologies that are soon to go mainstream: 3D scanning, which can map three-dimensional spaces, and virtual reality, which can then aurally and visually immerse you in them.

The body is a text. To communicate with another human being is to consider them as a book. Unable to see into their souls, we encounter others as collections of signs: a smirk or a crinkle around the eyes, a hand placed on a cheek—words upon words as we try, in futility, to express to one another what we think and feel. The soul may be irreducible, but to be human is to reduce it nonetheless.

How, then, should we feel now that the text of the body has become machine-readable?

People in my social media feeds have been pretty excited about “this amazing new app that lets you read a novel in 90 minutes” the past little while. They’re talking about the new speed reading app Spritz, which flashes text at anywhere from 250 to 1000 words a minute, and uses a novel technique to focus your eyes in one place as it does so. Using it is a thoroughly discombobulating, fascinating experience—try it here. The best description I’ve heard is that it’s less like reading and more like snorting words as if they were coke.

That it was so hotly discussed amongst my friends, mostly literature academics and writers, feels like a sign that we were clearly drawn to and then put off by the idea—not incensed so much as sympathetic to the need to read more, but baffled as to why anyone would want to read that fast. That ambivalence arose, I’d guess, because with Spritz, we’ve happened upon the Soylent of reading.

The Internet has been a boon for losers—or perhaps, I should say, it has been a boon for me. Prior to the web, being “in the know” was as impossible a goal for me as climbing Everest or talking to a girl I liked without sweating profusely. Today, though, with an endless library of culture at my fingertips, suddenly that obscure Algerian film, new bar, or buzzy novel is easy to find, even for oddballs like me. The obstacle of discovery has, at last, been lifted. But, because nothing good can last, am I to understand that after all that, I have to now be worried about the rise of the secret Internet?

By that term, I don’t mean the dark or deep web—places like the Silk Road that are invisible to Google and, thus, most web users. No, the secret Internet is instead something hidden in plain sight: it’s the surprising return of the email newsletter as the way to stay on top of what people are talking about. It’s the proliferation of private mailing lists, an old standby that seemed to be approaching irrelevance until suddenly, it wasn’t. And it’s the rise of communities obscured from public view, almost like secret societies.

Is cultural imperialism the wrong phrase to describe Facebook’s purchase of WhatsApp? On the surface, it certainly sounds absurd. WhatsApp was, after all, yet another Silicon Valley company that was simply swallowed up by the Valley company par excellence. Typical Palo Alto incest, sure, but imperialism?

Yet, it was the term that leapt to mind as soon as I heard the news. Like millions of people, my family, a diaspora scattered across the globe, uses WhatsApp to stay in touch despite the immense geographic distances that separate us. Now that a company like Facebook owns it, it feels a bit like your favourite band selling out. Even if, in truth, there never really was any pure state to begin with, it still feels odd that a thing that was an intimate part of your life has now been sucked up into the contemporary emblem of the evil empire.

What do we actually want social media to do? It’s a testament to how deeply Facebook et al. have penetrated our lives that the question itself sounds strange. To ask “What’s the hot new thing in social media?” sounds reasonable. But less than a decade into its existence, “What do we want social media to do?” already sounds like a question asked by an alien.

It sounds odd, in part, because it presumes we have (or ever even had) a choice. But I ask now because the tea leaves of the Internet seem to at least temporarily be resolving into something clear: we’re about to embark on the next phase of social media. Facebook, Twitter, and others are not only becoming more like each other, but also more similar to the media they sought to replace. Dedicated apps are about to be the new normal, as the many functions of social networks splinter off into smaller chunks. New forms are emerging, from ephemeral messaging to apps just for two. But hovering above it all, another question: what does a world in which social relations are structured by the vision of Silicon Valley actually look like?

A few days ago, for no more than half a second, I thought I could Google smells. I don’t mean that I thought I could simply look them up—rather, for the tiniest fraction of time, I thought I could type the name of a smell into my computer and then I would be able to smell it.

Alas, no sooner had I thought that Smell-o-Google™ was real than I realized it was just, yet again, my mind playing tricks on me. I’m used to it, though. I often feel as if I can swipe through shelves of CDs, scribble across a PDF, or pinch-to-zoom on the scene I see through the bus window. Somehow, the differing characteristics of digital and physical things are constantly blurring for me, as the fictional and the real also continually blend in my addled brain. I don’t quite know why I’m so susceptible to such synesthesia-like confusion. Perhaps a life so dominated by the page and screen alike cannot help but result in a constant sense that reality is just the one text of fiction we convince ourselves is true.

Throw out your TV. A new age is upon us, and its name is 4K. Yes, it’s true, millions of people did just buy new high-definition TVs, but their time is already past. 4K, a new standard that offers a picture four times as sharp as lowly “HD,” and is richer and more vibrant, too, is the undisputed way of the future.

Such is the rhetoric of those pushing 4K, anyway. At the Consumer Electronics Show in Las Vegas earlier this January, Sony, Samsung, Netflix, and others could talk of little else than how 4K is the next step in gadgetry. Press releases praise the “immersive and dynamic entertainment experiences” possible, and a Sony exec even claimed it enabled new artistic possibilities.

I was skeptical, but having had the chance to see 4K, I can confirm: it truly is stunning. Though I was expecting the marketing hype to be just that, its clarity is astounding, almost absurdly so. It isn’t so much that it “looks real” as the fine detail and depth seem hyper-real—as if there is somehow more in the landscapes or artfully arranged bowls of fruit on the screen than in the objects or places they depict. It’s remarkable.

And yet, my skepticism lingers.

American telecom giant AT&T proposed something this week that is almost certainly dead on arrival: the reincarnation of Ma Bell thought it would be just swell if advertisers and other businesses could pay for the wireless data its customers use for certain apps. Want to watch streaming video on your phone, but don’t want to pay for the digital mileage? No worries there, NBC would be happy to pay for your viewing time—provided, one imagines, they get at least some access to the treasure trove of information on the average person’s smartphone.

Even if it didn’t raise privacy concerns, AT&T’s idea is likely to be torpedoed by the US Federal Communications Commission for the good enough reason that it would put a massive hole in the already leaky concept of “net neutrality,” the idea that the digital domain works best when its biggest players can’t buy their way to static dominance.

Whatever the future of AT&T’s proposal (and there may be a version that’s less offensive to American regulators out there), it illustrates one of the fundamental differences between the digital world we’ve built since the 1990s and the analog systems that preceded it: classic TV and radio were all-you-can-eat affairs, because the channel was always open and nobody was metering how much you watched. This was partly a function of technology, but also a function of business model: viewers didn’t have to pay, because advertisers already were. AT&T isn’t inventing some novel evil here.

Unless you were staring patiently at YouTube, you could have mistaken the buzzing of BigDog Reflexes for anything. A lawnmower appealing to a higher power, maybe. This was 2009: it was perfectly reasonable to be somewhere else while your clip loaded—do you, buffering circle—which made the eventual reveal of the buzzing’s source, a 240-pound robotic pack mule, all the more alarming. You had to go outside. Have a good sit.

BigDog does this thing with its legs. When toppled, it staggers—splays, really. Hydraulics struggle to regain balance. The buzzing whips up. BigDog comes alive in a way few things with legs ever do, and I am licking my lips.

Don’t read the comments! So goes the ubiquitous online exhortation warning readers away from the bile in the boxes below, now so common that it has its own Twitter account.

If only that view weren’t so mistaken. Lost in the broad strokes of that puritan refrain is that the space under a news story or blog post can be awful or it can be brilliant, a seething mass of hate and idiocy or a veritable kaleidoscope of crackling ideas and debate. For me and fortunate others, the comment section has so often been the latter of those binaries—a place that feels more like home than chaos. In conflating the good and the bad, the pernicious phrase “Don’t read the comments” erases this crucial distinction and more.

Three intriguing transportation technologies have floated past in the fog of headlines this month. First, South Korea unveiled an electric bus that is powered by a magnetic coil buried in the road. Second, nominal plans for a Jetsons-looking pod transit system for Tel Aviv apparently got closer to reality. And of course, on Monday, Elon Musk unveiled his notion of the Hyperloop.

I’ve listed these ideas both in ascending order of awesomeness and descending order of likelihood that you’ll ever actually ride one. That is, in 2050, I suspect a great many of us could be riding on electrically charged buses in areas where people are too precious to allow overhead wires (the trolleybus is a thing, look it up), but I suspect almost nobody will be riding elevated podcars on overhead rails or, alas, lining up to be shot at intercontinental speeds through an almost-airless, windowless coffin tube.

This summer, Guillermo del Toro’s Pacific Rim continued the illustrious Hollywood tradition of movie computer interfaces that look totally awesome and that no sane person would want to use. I don’t know about you, but if I were in the middle of helping building-sized robots fight monsters—and who knows, with climate change continuing apace, anything’s possible—I’d probably want to do something a little more precise than make what may or may not be circles in the air.

For all the silliness of Minority Report-style interfaces, though, their ubiquity in film makes sense: They are, after all, more visually arresting than, say, someone banging away at a keyboard. Yet, now that similar interfaces have started to infiltrate the real world—first with Microsoft’s Kinect, and most recently with the newly released Leap Motion—it’s also becoming clear there’s more to our affinity for these new modes of interaction than our appreciation for the whims of Hollywood’s VFX artists. Instead, the excitement over motion control seems to be about getting to “touch” the things behind the screen—as if what we really want is to break the barrier between the digital and the physical.

Sylvia broke up with me a few months after we’d both started university, over the phone, with a simple, curt phrase: “He kissed me and I kissed him back.” A couple weeks later, feeling like I should try to move on, I threw away the tiny figurine of Winnie the Pooh’s Tigger she’d given to me as a gift. I now can’t for the life of me remember why she had given me that or why it was important.

That was the last time I’d be involved with a woman without the Internet being involved somehow, with its record of emails or social media posts or digital photographs—maybe something that would explain why a statue of a cartoon tiger was significant. Gaps like that in my memory have now turned me into something of an obsessive documenter. From pictures of meals I’ve cooked to email conversations from a decade ago, I hoard digital markers of memory. But when you can record and save everything, you’re also confronted with a difficult question: what do you need to remember and what do you need to forget?

That can’t be right, can it? It says here that I’ve played the mobile game Real Racing 3 for 40 hours. That’s a full work-week I’ve spent racing virtual cars. Staring at the figure, it is difficult not to recall The Simpsons’ Comic Book Guy who, only at the moment before being hit by a nuclear missile, realizes there may have been more productive ways to spend his life.

Nevertheless, my guilt at that statistic and my experience playing the game seem to be two separate things. Driving a digital Porsche 911 GT3 endlessly around the same tracks has a strange hypnotic effect. The curves of the courses start to become imprinted on the mind so that, spinning through them, one feels a bit like a child being read a favourite bedtime story for the hundredth time: the familiarity is the point.

Wandering around London’s Tate Modern gallery a few years ago, I found myself starting to grow bored—until I saw Bruce Nauman’s “Double No.” The installation was two screens of looping video in which a jester jumps up and down while saying “no” over and over again. It had a weird effect: first you smirk at the frustrated figure, then you start to be a bit put off by its looping, and then you finally start to feel disturbed, as the image of a peevish, childish clown starts to remind you of every selfish, angry adult you’ve ever known.

It’s the looping that made it “art,” of course. But now that looping video has become so common in gifs and Vines, it seems worth thinking about what looping does to our experience of video, and whether or not Instagram’s decision to have its new video feature not loop might be an inadvertent stroke of genius.
