People in my social media feeds have been pretty excited about “this amazing new app that lets you read a novel in 90 minutes” the past little while. They’re talking about the new speed reading app Spritz, which flashes text at anywhere from 250 to 1000 words a minute, and uses a novel technique to focus your eyes in one place as it does so. Using it is a thoroughly discombobulating, fascinating experience—try it here. The best description I’ve heard is that it’s less like reading and more like snorting words as if they were coke.
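For what it's worth, the "novel in 90 minutes" claim is just arithmetic. A quick back-of-the-envelope sketch (assuming a typical novel runs about 90,000 words, which is my figure, not Spritz's):

```python
# Back-of-the-envelope check on the "novel in 90 minutes" claim.
# The ~90,000-word novel length is an assumption, not a Spritz figure.

NOVEL_WORDS = 90_000  # rough length of an average novel

def minutes_to_read(words: int, wpm: int) -> float:
    """Minutes needed to read `words` at a pace of `wpm` words per minute."""
    return words / wpm

# At Spritz's top speed of 1,000 wpm, the claim checks out:
print(minutes_to_read(NOVEL_WORDS, 1000))  # 90.0 minutes
# At the low end of 250 wpm, the same novel takes six hours:
print(minutes_to_read(NOVEL_WORDS, 250))   # 360.0 minutes
```

In other words, the pitch only holds at the app's maximum setting; at an ordinary reading pace, the same book is still an evening-filling commitment.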
That it was so hotly discussed amongst my friends, mostly literature academics and writers, feels like a sign that we were drawn to and then put off by the idea in equal measure—not incensed so much as sympathetic to the need to read more, yet baffled as to why anyone would want to read that fast. That ambivalence arose, I’d guess, because with Spritz, we’ve happened upon the Soylent of reading.
The Internet has been a boon for losers—or perhaps, I should say, it has been a boon for me. Prior to the web, being “in the know” was as impossible a goal for me as climbing Everest or talking to a girl I liked without sweating profusely. Today, though, with an endless library of culture at my fingertips, suddenly that obscure Algerian film, new bar, or buzzy novel is easy to find, even for oddballs like me. The obstacle of discovery has, at last, been lifted. But, because nothing good can last, am I to understand that after all that, I have to now be worried about the rise of the secret Internet?
By that term, I don’t mean the dark or deep web—places like the Silk Road that are invisible to Google and, thus, most web users. No, the secret Internet is instead something hidden in plain sight: it’s the surprising return of the email newsletter as the way to stay on top of what people are talking about. It’s the proliferation of private mailing lists, an old standby that seemed to be approaching irrelevance until suddenly, it wasn’t. And it’s the rise of communities obscured from public view, almost like secret societies.
Is cultural imperialism the wrong phrase to describe Facebook’s purchase of WhatsApp? On the surface, it certainly sounds absurd. WhatsApp was, after all, yet another Silicon Valley company that was simply swallowed up by the Valley company par excellence. Typical Palo Alto incest, sure, but imperialism?
Yet, it was the term that leapt to mind as soon as I heard the news. Like millions of people, my family, a diaspora scattered across the globe, uses WhatsApp to stay in touch despite the immense geographic distances that separate us. Now that a company like Facebook owns it, it feels a bit like your favourite band selling out. Even if, in truth, there never really was any pure state to begin with, it still feels odd that a thing that was an intimate part of your life has now been sucked up into the contemporary emblem of the evil empire.
What do we actually want social media to do? It’s a testament to how deeply Facebook et al. have penetrated our lives that the question itself sounds strange. To ask “What’s the hot new thing in social media?” sounds reasonable. But less than a decade into its existence, “What do we want social media to do?” already sounds like a question asked by an alien.
It sounds odd, in part, because it presumes we have (or ever even had) a choice. But I ask now because the tea leaves of the Internet seem to at least temporarily be resolving into something clear: we’re about to embark on the next phase of social media. Facebook, Twitter, and others are not only becoming more like each other, but also more similar to the media they sought to replace. Dedicated apps are about to be the new normal, as the many functions of social networks splinter off into smaller chunks. New forms are emerging, from ephemeral messaging to apps just for two. But hovering above it all, another question: what does a world in which social relations are structured by the vision of Silicon Valley actually look like?
A few days ago, for no more than half a second, I thought I could Google smells. I don’t mean that I thought I could simply look them up—rather, for the tiniest fraction of time, I thought I could type the name of a smell into my computer and then I would be able to smell it.
Alas, no sooner had I thought that Smell-o-Google™ was real than I realized it was just, yet again, my mind playing tricks on me. I’m used to it, though. I often feel as if I can swipe through shelves of CDs, scribble across a PDF, or pinch-to-zoom on the scene I see through the bus window. Somehow, the differing characteristics of digital and physical things are constantly blurring for me, as the fictional and the real also continually blend in my addled brain. I don’t quite know why I’m so susceptible to such synesthesia-like confusion. Perhaps a life so dominated by the page and screen alike cannot help but result in a constant sense that reality is just the one text of fiction we convince ourselves is true.
Throw out your TV. A new age is upon us, and its name is 4K. Yes, it’s true, millions of people did just buy new high-definition TVs, but their time is already past. 4K, a new standard that offers a picture four times as sharp as lowly “HD,” and is richer and more vibrant, too, is the undisputed way of the future.
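For what it's worth, the "four times" in that pitch is literal pixel math: consumer 4K (UHD) is 3840×2160 against full HD's 1920×1080, so the pixel count quadruples even though each dimension only doubles.

```python
# Pixel arithmetic behind the "four times as sharp as HD" pitch.
HD = (1920, 1080)      # "full HD" / 1080p
UHD_4K = (3840, 2160)  # consumer 4K (UHD)

def pixel_count(resolution):
    """Total pixels for a (width, height) resolution."""
    width, height = resolution
    return width * height

# 4.0: four times the pixels, though the linear resolution
# only doubles in each dimension.
print(pixel_count(UHD_4K) / pixel_count(HD))
```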
Such is the rhetoric of those pushing 4K, anyway. At the Consumer Electronics Show in Las Vegas this past January, Sony, Samsung, Netflix, and others could talk of little else than how 4K is the next step in gadgetry. Press releases praise the “immersive and dynamic entertainment experiences” possible, and a Sony exec even claimed it enabled new artistic possibilities.
I was skeptical, but having had the chance to see 4K, I can confirm: it truly is stunning. I expected the marketing hype to be just that, but its clarity is astounding, almost absurdly so. It isn’t so much that it “looks real” as that the fine detail and depth seem hyper-real—as if there is somehow more in the landscapes or artfully arranged bowls of fruit on the screen than in the objects or places they depict. It’s remarkable.
And yet, my skepticism lingers.
American telecom giant AT&T proposed something this week that is almost certainly dead on arrival: the reincarnation of Ma Bell thought it would be just swell if advertisers and other businesses could pay for the wireless data its customers use for certain apps. Want to watch streaming video on your phone, but don’t want to pay for the digital mileage? No worries there, NBC would be happy to pay for your viewing time—provided, one imagines, they get at least some access to the treasure trove of information on the average person’s smartphone.
Even if it didn’t raise privacy concerns, AT&T’s idea is likely to be torpedoed by the US Federal Communications Commission for the perfectly good reason that it would punch a massive hole in the already leaky concept of “net neutrality,” the idea that the digital domain works best when its biggest players can’t buy their way to static dominance.
Whatever the future of AT&T’s proposal (and there may be a version that’s less offensive to American regulators out there), it illustrates one of the fundamental differences between the digital world we’ve built since the 1990s and the analog systems that preceded it: classic TV or radio was an all-you-can-eat affair, because the channel was always open and nobody was metering how much you watched. This was partly a function of technology, but also of business model: TV viewers didn’t have to pay, because advertisers already were. AT&T isn’t inventing some novel evil here.
Unless you were staring patiently at YouTube, you could have mistaken the buzzing of “BigDog Reflexes” for almost anything. A lawnmower appealing to a higher power, maybe. This was 2009: it was perfectly reasonable to be somewhere else while your clip loaded—do you, buffering circle—which made the eventual reveal of the buzzing’s source, a 240-pound robotic pack mule, all the more alarming. You had to go outside. Have a good sit.
BigDog does this thing with its legs. When toppled, it staggers—splays, really. Hydraulics struggle to regain balance. The buzzing whips up. BigDog comes alive in a way few things with legs ever do, and I am licking my lips.
Don’t read the comments! So goes the ubiquitous online exhortation warning readers away from the bile in the boxes below, now so common that it has its own Twitter account.
If only that view weren’t so mistaken. Lost in the broad strokes of that puritan refrain is that the space under a news story or blog post can be awful or it can be brilliant, a seething mass of hate and idiocy or a veritable kaleidoscope of crackling ideas and debate. For me and fortunate others, the comment section has so often been the latter of those two poles—a place that feels more like home than chaos. In conflating the good and the bad, that pernicious phrase, “Don’t read the comments,” erases this crucial aspect and more.
Three intriguing transportation technologies have floated past in the fog of headlines this month. First, South Korea unveiled an electric bus that is powered by a magnetic coil buried in the road. Second, nominal plans for a Jetsons-looking pod transit system for Tel Aviv apparently got closer to reality. And of course, on Monday, Elon Musk unveiled his notion of the Hyperloop.
I’ve listed these ideas both in ascending order of awesomeness and descending order of likelihood that you’ll ever actually ride one. That is, in 2050, I suspect a great many of us could be riding on electrically charged buses in areas where people are too precious to allow overhead wires (the trolleybus is a thing, look it up), but I suspect almost nobody will be riding elevated podcars on overhead rails or, alas, lining up to be shot at intercontinental speeds through an almost-airless, windowless coffin tube.
This summer, Guillermo Del Toro’s Pacific Rim continued the illustrious Hollywood tradition of movie computer interfaces that look totally awesome and that no sane person would want to use. I don’t know about you, but if I were in the middle of helping building-sized robots fight monsters—and who knows, with climate change continuing apace, anything’s possible—I’d probably want to do something a little more precise than make what may or may not be circles in the air.
For all the silliness of Minority Report-style interfaces, though, their ubiquity in film makes sense: They are, after all, more visually arresting than, say, someone banging away at a keyboard. Yet, now that similar interfaces have started to infiltrate the real world—first with Microsoft’s Kinect, and most recently with the newly released Leap Motion—it’s also becoming clear there’s more to our affinity for these new modes of interaction than our appreciation for the whims of Hollywood’s VFX artists. Instead, the excitement over motion control seems to be about getting to “touch” the things behind the screen—as if what we really want is to break the barrier between the digital and the physical.
Sylvia broke up with me a few months after we’d both started university, over the phone, with a simple, curt phrase: “He kissed me and I kissed him back.” A couple weeks later, feeling like I should try to move on, I threw away the tiny figurine of Winnie the Pooh’s Tigger she’d given to me as a gift. I now can’t for the life of me remember why she had given me that or why it was important.
That was the last time I’d be involved with a woman without the Internet being involved somehow, with its record of emails or social media posts or digital photographs—maybe something that would explain why a statue of a cartoon tiger was significant. Gaps like that in my memory have now turned me into something of an obsessive documenter. From pictures of meals I’ve cooked to email conversations from a decade ago, I hoard digital markers of memory. But when you can record and save everything, you’re also confronted with a difficult question: what do you need to remember and what do you need to forget?
That can’t be right, can it? It says here that I’ve played the mobile game Real Racing 3 for 40 hours. That’s a full work-week I’ve spent racing virtual cars. Staring at the figure, it is difficult not to recall The Simpsons’ Comic Book Guy who, moments before being hit by a nuclear missile, realizes there may have been more productive ways to spend his life.
Nevertheless, my guilt at that statistic and my experience playing the game seem to be two separate things. Driving a digital Porsche 911 GT3 endlessly around the same tracks has a strange hypnotic effect. The curves of the courses start to become imprinted on the mind so that, spinning through them, one feels a bit like a child being read a favourite bedtime story for the hundredth time: the familiarity is the point.
Wandering around London’s Tate Modern gallery a few years ago, I found myself growing bored—until I saw Bruce Nauman’s “Double No.” The installation was two screens of looping video in which a jester jumps up and down while saying “no” over and over again. It had a weird effect: first you smirk at the frustrated figure, then you start to be a bit put off by its looping, and then you finally start to feel disturbed, as the image of a peevish, childish clown starts to remind you of every selfish, angry adult you’ve ever known.
It’s the looping that made it “art,” of course. But now that looping video has become so common in gifs and Vines, it seems worth thinking about what looping does to our experience of video, and whether or not Instagram’s decision to have its new video feature not loop might be an inadvertent stroke of genius.
The way many feel about books, I feel about video games. We tend to think that it’s games that are the menace to books, but they’re actually in the same boat: just as some argue video- and web-based forms of culture threaten to supplant the sustained attention of reading, quick and often superficial handheld games threaten the more traditional long-form gaming I was raised on. Basically, you damn kids get your Angry Birds off my pixelated, Mario-filled lawn.
So you might say that I welcomed the less-than-stellar debut of the new, much-ballyhooed game console Ouya with the kind of relish with which a certain kind of bibliophile might greet the shutdown of the Internet. Which is to say, it’s a perspective that’s as wrong as it is stupid, but as examples of ugly schadenfreude go, it’s one both the gaming and book worlds should pay attention to.
Ian Brown thinks the glut of digital photographs is destroying the mindfulness with which we capture the world. This year, the acclaimed Globe and Mail writer was an adjudicator for the 2013 Banff Mountain Film and Book Festival photography competition. As he related in a feature for the Globe, for the first time, Brown and other jury members couldn’t pick a winner, or even a runner-up, as not one of the entries even “managed to tell the simplest of stories.”
Brown theorizes as to why this is happening. One proposal: like addicts, we turn to Instagram et al. and simply shoot rather than confront “the uncomfortable difficulty of actually seeing,” instead craving “the instant gratification and collective approval that the Internet deals out to us.” Another idea: “as we live less and less physically and more and more virtually, we take pictures as substitutes for the real.”
Though cyberwar and cybercrime may seem like recent developments, they have been major concerns for governments around the world since the early ’70s. What started with annoying chain e-mails that touted get-rich-quick schemes and better sex has evolved into international breaches of security and impressive feats of cyber-stealing. To mark today’s publication of Black Code: Inside the Battle for Cyberspace, and our interview with its author Ronald Deibert, we assembled this history of cyber-shenanigans.
When you’re an awkward person, social situations require strategy. One of mine: reading lots online so that I can contribute to conversations, or maybe even offer up an interesting anecdote. The trouble, though: given the vast, overwhelming morass of things to read online, how do we know what’s good?
That question has plagued us since Internet media first became popular, and the progression of answers over time is like a series of photos of the ways in which our relationship to the web has changed. First came the search era, in which the Internet was an open treasure trove of information to be actively delved into by the brave and skilled. Then, it was all about aggregation, in which algorithms and sites like the Huffington Post did the sifting for us. Next came the social phase, where the filtering was left to the wisdom and whims of our friends. Now, however, it seems we are finally entering the next stage—and it looks a lot like the revamped Digg, and a newer platform called Medium.
A year ago, Paul Miller believed he was being corrupted by the Internet. But as it turns out, his enemy isn’t technology; it’s William Wordsworth.
In 2012, Miller, a writer for tech site The Verge, embarked on a stunt almost perfectly suited for the times: for one year, he would remain “off the internet.” This week, he returned with a long, intriguing post in which he reflected on his time offline, but you need only read the first line to gather the gist of what follows: “I was wrong,” says Miller.
Marketing long ago ceased to be about shilling the virtues of a product. It is now about conjuring an ethos, then associating a product with that idea. If you want to get a sense of how to do that spectacularly wrong, you need look no further than Microsoft.
Witness its latest abomination, a Windows Phone ad in which a wedding party descends into chaos after iPhone and Android users exchange barbs. Amidst the ensuing food-fight madness, two attractive wait-staff—the reception was in the church, apparently?—comment on how the fight is futile because… Windows Phone! It’s a thing that exists! And may or may not have features you want! We don’t really know, because they don’t say. As a friend on Twitter pointed out, even their legal disclaimers are weird: “Do not attempt” appears beneath the nuptial brawl. Uh, thanks Microsoft, I’ll keep that in mind.