My favourite theories of technology are the ones that argue we create tech in order to become cyborgs—that through everything from the hammer to the smartphone, we express our fantasies of becoming more than ourselves.

It’s the kind of explanation I find deeply satisfying because it gets at the unconscious desire that drives our relationship to these tools. Particularly fascinating, to me, is what social media says about our desires to both speak and write our identities in the spaces beyond our bodies. In the creation of a sphere in which we can publicize and externalize our thoughts, there is a profound something being said about our wanting to inscribe our words into a visible place and have the world see them.

The downside, however, is the twisted vortex of narcissism that is both Mayor Rob Ford and his legions of critics.

If you look for it, you’ll find that certain conservative dailies on this planet have made a small cottage industry of republishing reports that caged animals are, in fact, healthier than free-range livestock. Indeed, this argument comes up again and again and again in some places. As I write this, someone out there probably wants to have a detailed argument about poultry mortality, but suffice it to say that, whatever the other merits are, letting birds enjoy wide open spaces does, in fact, run the risk of them sharing those spaces with something that thinks chickens are tasty.

As unappealing as it is, this is part of the reason why the industry has mostly moved indoors: aside from the fact that the birds are going to be killed and eaten, it’s pretty safe for them.

Few things remind me more of what my friends and I don’t have in common than when, amidst talk of books or TV shows over drinks, discussion turns to baby names. Specifically, it’s the point at which they start to go on about how their grandmothers’ names are making a comeback that I remember how different we are. After all, I’m pretty sure my grandmas’ names—Iqbal and Sundar—won’t be making an appearance on Today’s Parent’s top 100 baby names for 2014.

These kinds of gaps feel increasingly frequent. In part, it seems the cosmopolitanism of an urbanizing world makes common ground a little harder to find—closer together yet further apart, and so on. More than that, though, the “nichification” of culture—how the massive increase in access to culture makes it easy to sink into a narrow slice of it—means that, even amongst friends, shared tastes can be tricky. If it was once rare to like a TV show, author, or band your friends had never heard of, it almost feels like it’s become de rigueur these days.

The garden was supposed to be my reprieve. Just over two years ago, parental illness, a graduate degree that has been “almost done” for years, and big-city rent all conspired to, at the decidedly not-young age of 35, send me back home to live with my parents. Though I can hardly call their comfortable suburban home, their generosity, or their affable personalities hardships, I was still searching for an upside—and, for a time anyway, I found relief from the pace of my digital life in the cool, black earth of the family vegetable plot.

I spent much of the summer of 2012 doing what some people call “reconnecting with nature.” I plunged my hands daily into dirt, tending to a garden of tomatoes and chillies, gourds and bitter melon, mint and strawberries. I would marvel at the array of blooms that grew, each arriving in its own time in the annual cycle, adding its distinct hue and tenor to the yard. Of particular note was a delphinium with two different-coloured sets of flowers, one mauve, the other an almost phosphorescent blue. The latter made things especially strange, as the only phosphorescent blue to which I was usually exposed was that of the screen—a glowing object in front of which I spent so much time that I worried the hue of it and my skin were becoming one and the same. Without quite realizing it had happened, there I was, caught in the middle of a cliché I’d resisted for years: giving up technology for the purity of “nature.”

It’s been an uneven few weeks for tech policy in the United States. The Federal Communications Commission opened the door to “fast lanes” on the Internet, widely seen as a hole below the waterline for the venerable SS Net Neutrality. More recently, the US Supreme Court declared on Wednesday that Aereo, a company whose entire business model amounted to being too cute by half, was in fact too cute by half. Aereo had been leaning on a (strong-ish) argument that streaming TV over the Internet kept it within the letter of US intellectual property law, even if nobody really believed it was operating anywhere but on the fringes of the law’s spirit.

So the Supreme Court slapped Aereo down, and a cursory Googling will call up any number of arguments about why this decision was terrible and the Supreme Court has crippled the United States’ Internet economy once again, just as the courts did in 2001 with the Napster ruling and the Court itself did with 2003’s Eldred v. Ashcroft. And yet, the Internet persists. The incorrigible degenerates among you probably still know how to find free music, movies, television, and (ahem) books, too.

It turns out that the Internet has a pretty strong record of keeping the firehose of goodies open for people who desire it—much stronger than the record of people trying to restrict it.

You could fill an Olympic-size swimming pool with the post-2008 financial filings of every energy company or researcher promising the next big thing in clean energy. Throw in the failed solar panel manufacturers (wiped out by cheap Chinese competition), the ethanol-mongers, and the stop-start-stop progress on electric cars, and it’s been a field where the good news is found almost solely in the exceptions.

So reports, then, that a team of British researchers think they’ve cracked the problem of storing hydrogen for cars (and, one presumes, other things that consume energy) need to be treated skeptically. We have heard this story before. But it’s worth paying attention to, if only because their storage medium of choice—ammonia—is already manufactured on a globally significant scale.

If you’ve ever been brave enough to delve into the roiling human cesspool that is Reddit, you might be familiar with the phenomenon of “Ask Reddit,” a section of the site where someone can pose a question and have the legions of “redditors” respond.

Questions can be about pedestrian matters, such as tips for getting a job, or they can be oddball and fun, such as asking people to re-imagine world history as a film. The most fascinating, though, are those that ask for personal anecdotes, especially the lurid ones: your most embarrassing sexual experience, or why a revelation from a significant other made you break up with them on the spot.

Scrolling through these stories, you get lulled into a certain credulousness. “Isn’t the world a funny place,” you think to yourself after reading yet another tale of parents walking in on teens mid-coitus, never stopping to wonder at the veracity of anything you’re reading. It happens all the time, in those “inspiring” posts that your friends and family share on Facebook, the ones that bring a tear to your eye as you thumb through your feeds. One forgets: these online fora in which people gather—not just Reddit, but Twitter, Facebook, and the others—have a strange tendency to meld news and anecdote, fact and fiction, all within the same space. Far more than TV or print, the web conflates my experience of fantasy and reality, and I frequently find myself asking: wait, is any of this actually true?

If you had to pick a symbolic figurehead for the digital future, you have some good choices: Mark Zuckerberg, of course; Elon Musk, perhaps; hell, maybe Tavi Gevinson. But, in light of some recent trends, I’d suggest that when it comes to the digital future, we line up behind another great thinker: George Costanza. Because, for all the buzzword-filled talk of “social,” “disintermediation,” or “curation,” only our dear, bumbling Costanza is familiar enough with the idea that is truly shaping the digital world: shrinkage.

To shrink and condense information is the new tenor of the digital age. From its inception, the Internet as both technology and concept has been inextricably tied to “more,” but that overwhelming, throbbing mass of information now requires an inversion. Put it however you want—less is more, small is the new big, the trickling stream rather than the rushing torrent—but shrinking the tsunami of information into manageable chunks is clearly the new way forward.

It was a throwaway joke, but when comic Demetri Martin made fun of digital cameras because they allow us to “reminisce instantly,” it stuck with me. It seems the historical distance needed to become nostalgic about something has shrunk dramatically. “Aww, remember MySpace or iPods?” we say to ourselves, as if it were an age ago—a phenomenon surely compounded because the things we miss can seem so ephemeral: ubiquitous one year, invisible the next.

Maybe that’s why I was so pleased when I heard the desktop computer may be making a comeback—as if I were a book lover hearing that, no, the novel is not in fact dead. The whir and hum of those beige and black boxes was an intimate part of my youth, and a symbol of the future, too.

Laptop sales surpassed desktop sales in 2008, and by 2017 tablets are expected to have surpassed both. But as computing shifts to mobile, and our sense of what computing is shifts to the cloud, I wonder if there still isn’t a place for the big clunky box that sits under your desk—and, with it, an ongoing tactile relationship to technology.

Having racked up more than 700,000 miles of driving on the streets of Mountain View, California, Google is taking the next steps toward a future in which we don’t drive our own cars: namely, building the prototypes. The first problem for Google? The prototypes look super dorky. The spinning LIDAR dome on the top of the cars is practically begging for a propeller beanie, and the outline of a pocket protector on the front hood wouldn’t be out of place.

Well, so what? They’re not dorky, they’re adorkable. And in any case, the point of self-driving cars, if there’s a telos to them at all, isn’t to give us all self-driving Hummers and Escalades with spinning rims. The point is to make us rethink the nature of car ownership altogether.

Like a reliable, rusty old factory machine, the Apple rumour mill recently sprang loudly to life in anticipation of the company’s rumoured purchase of Beats, news to which Apple fans largely responded with a resounding, “Ew, why?” Even though it’s pretty clear that if Apple does in fact buy Beats, it would be to acquire cachet, talent, and a promising music business, iPhone users still felt the whole thing was a bit off-putting: “Our beloved company is seeking help from a brand we associate with urban kids? Oh God.”

It was not a difficult code to crack. The Awl’s John Herrman cleverly suggested we just call it “Apple Privilege”: a tongue-in-cheek way of getting at the fact that Apple seems to be held up as a model of purity, and anything that “taints” it—you know, the masses, the lower-income, those pesky coloureds—is awful.

If this were some random occurrence, that would be one thing—but it’s a pattern now. It was only in 2012, after all, that many iPhone users worried that Instagram would be sullied when people without little apples on the backs of their devices could join in on the filtered fun. Couple that with the noise around the Beats story, and the fact that, at least in North America, Android is much bigger with visible minorities, and a question needs both asking and answering: Is Apple “white”?

The good news is that, in space at least, the Russians and Americans are getting along pretty well. As astronaut Mike Coats tells the Houston Chronicle, “Astronaut to cosmonaut, scientist to scientist, engineer to engineer, we’ve had a wonderful working relationship with the Russians.” It’s the concerns of us mere mortals on the ground, in places such as Ukraine, that are mucking things up.

The state of US-Russian relations matters, because the Russians provide the only bus up to the International Space Station: that massive, $150 billion lab built largely with American cash and more than a little Russian competence that’s finally starting to pay off after decades of work—and up to which the Russians are threatening to stop ferrying Americans after 2020.

That may not even end up being the biggest problem, though: Russia is also objecting to US requests to keep the ISS running at all after 2020.

Social media is like a cultural oscilloscope. Use something like Twitter, and it’s like reading in real-time the pulse of a crowd as it ebbs, flows, and surges. It is, in many ways, akin to scholar Walter Ong’s description of oral cultures: immediate, more communal, and prone to what Ong calls “agonism”—a kind of deliberate contentiousness to stand out in memory.

Twitter, composed as it is of typed words, isn’t actually oral, of course. But thinking of the service as oral-like helps explain a great deal about it—not least of which is why it is, at the same time, both “dying” and more vibrant than ever.

It was quite a thing to witness Glenn Greenwald, one of the journalists responsible for breaking stories based on the leaked Edward Snowden files, debate former NSA and CIA head General Michael Hayden. In these two individuals and the much-hyped Munk Debate over US government surveillance, you had the perfect crystallization of the contemporary clash between the establishment and the fourth estate. But as I watched them heatedly spar about whether the surveillance state was a legitimate defence of a free society, one thing about the event stood out more than any other: just how well-dressed the crowd was.

For an event held in a venue mostly designed for hosting symphonies, the fashion on display wasn’t exactly surprising. That didn’t make it any less weird, though. Think about it: after an early cocktail or glass of wine, rows upon rows of people in tailored suits and neatly cropped dresses took their seats and settled in for the night’s entertainment—you know, an argument about the historically unprecedented intrusion of the surveillance state—only to then retreat to a nearby bar and slip into a martini. Debates are theatre, sure, but it was discomfiting to say the least.

It was impossible to be confronted with the strangeness of it and not wonder: for whom exactly do these debates over Internet freedom and privacy take place?

One of the people responsible for putting a man on the moon died last week at the age of 95. This is, in 2014, a common and not terribly newsworthy occurrence: the generation of men and women whose industry kept a dozen men safe from vacuum, radiation, and temperatures ranging from scalding to freezing is now succumbing to the mediocre ravages of time. The youngest living astronaut to have walked on the moon itself is older than Hitler’s invasion of Austria.

But John C. Houbolt deserves our attention, for at least a moment, because his contribution was important enough that it changed the direction of the US space program. As NASA tried to figure out, in the early 1960s, how it was going to meet President Kennedy’s goal of landing a man on the moon before 1970, Wernher von Braun was pretty sure he already knew the answers: he had, after all, been thinking about this stuff for some time.

The problem was that von Braun, whom the Americans had seconded with his enthusiastic consent at the end of World War II, didn’t want to build machines just to land a man on the moon. He wanted rockets that could also help the United States build space stations, and eventually put a man on Mars.

[Image: Connecting the dots in A Beautiful Mind]

Social physics is an emerging (and ominous-sounding) discipline that wants to “connect the dots” of our data—but, ideally, as a force for good.

In the summer of 2013, one of the wunderkind companies of the 2008 green energy euphoria went belly-up. Better Place, formerly Project Better Place, was an effort by Israeli entrepreneur Shai Agassi to revolutionize the concept of electric cars by, essentially, taking the part of the consumer experience everyone hates about the mobile phone provider (selling people a barely subsidized piece of consumer electronics for the dubious privilege of being locked into a multi-year contract) and combining it with the second-largest purchase most households make.

To his credit, Agassi was legitimately trying to think about the problem of accelerating the adoption of electric cars in a new way. To his discredit, that seems to be about the only nice thing anyone has left to say about him. Fast Company has a pretty clinical post-mortem of Better Place, and it’s kind of a buffet of vignettes of what happens when a firehose of money gets pointed at people who don’t have the skills to know what to do with it, or even the skill to recognize what they don’t know. (The point where Agassi divorces his wife and starts bringing his new girlfriend to meetings is a particularly nice touch.)

Would Aristotle be good at Twitter? This has been on my mind lately. As the latest round of acrimonious social media debates has popped up, in the form of “Cancel Colbert” and the resignation of Mozilla CEO Brendan Eich, people have again taken to arguing vociferously on Twitter, doing their darnedest to convince others of their rightness. So I wonder, would the person who tried to set out in Rhetoric how persuasion works be good at arguing with people in 140-character snippets?

The accepted wisdom is that those who are good at argumentation in other venues are also good at it on Twitter. A lot of the time, that seems quite true: it’s why writers, essayists, and annoying pedants have taken to the service so happily. But as I watch the kinds of people who seem forced to endure arguments with others—namely, women, people of colour, and other activists—I get the sense that the rules of rhetoric laid down by folk like Aristotle are especially unsuited to Twitter. More interestingly, though, maybe watching people on Twitter invent new rhetorical tactics suggests that what’s wrong with online discourse isn’t that it is hampered by constraint, but that there isn’t enough of it.

Of the many reasons Star Trek: The Next Generation appealed to my nerdy teenaged mind, the holodeck was perhaps the most significant. Granted, like most adolescents, my imaginings about what I’d do with technology that allowed you to create, and then step into, believable fictional worlds tended to focus on sexual adventures with the crew of the Enterprise. But at the same time, it also spoke to a mind equally formed by the worlds of literature and video games: here, in the guise of a distraction, was a way to enter a place and then return from it, changed and renewed.

Re-creating the world is among the most basic of human desires; fiction of all kinds remakes the world, ever so slightly changed. That habit is about to enter its next, holodeck-like phase, however, thanks to two technologies that are soon to go mainstream: 3D scanning, which can map three-dimensional spaces, and virtual reality, which can then aurally and visually immerse you in them.

The body is a text. To communicate with another human being is to consider them as a book. Unable to see into their souls, we encounter others as collections of signs: a smirk or a crinkle around the eyes, a hand placed on a cheek—words upon words as we try, in futility, to express to one another what we think and feel. The soul may be irreducible, but to be human is to reduce it nonetheless.

How, then, should we feel now that the text of the body has become machine-readable?
