Tech Culture Archive

The year 2029 must have seemed a long way off when Masamune Shirow first published his cyberpunk vision, Ghost in the Shell, in 1989. The 28 years since have seen multiple animated adaptations of his work (two films and a TV series). Despite the story’s age and the inevitable changes that come with any adaptation, Shirow’s core technological vision has remained remarkably consistent since the first manga.

Ghost in the Shell (1989)

The franchise provides plenty of food for thought relating to cybernetic augmentation, artificial intelligence, and fundamental questions on the nature of humanity and sentience. In this post, I’ll take a close look at one particularly eye-catching concept — cybernetic prosthetic hands that are shown driving computers with blazing speed. Even in our current era of Bluetooth and optical communication, this concept somehow still earns a prominent place in the contemporary canon.

Beyond the Home Row

Keyboards, whether QWERTY or Dvorak, split-ergonomic or Maltron, all have ten keys on the home row. Debates about comparative speed aside, they all have approximately the same number of total


I recently watched 2010: The Year We Make Contact and noted how the set was crammed with old-school keyboards. It occurred to me that if the film were re-made today, each of those conventional-looking keyboards would be replaced by a futuristic virtual keyboard on a screen, wall, or table surface (the control center in the film Oblivion comes to mind).

We have the technology today to make that glass-based reality, but how many of us actually use soft keyboards for serious work on our computers? The first accessory I bought for my first iPad was a Bluetooth keyboard. The first accessory I bought for my new iPad mini 2 only a few months ago was also a Bluetooth keyboard.

In my experience working with professional software developers at a prominent startup, the trend is quite literally the opposite of a move toward soft keyboards. Many developers I know are seeking out mechanical-switch keyboards with greater “clickiness”, not less, such as the Max Keyboard Blackbird Tenkeyless model (I’m told the Cherry MX Blue version is the clickiest). They appreciate the more active typing experience, and


Tech news site The Information is reporting that Google is developing three successors to Google Glass, to be released in 2016.

Notably, one of those three models is Glass-less, so to speak. It will drop the screen in favor of exclusively audio output. Moreover, the audio will be delivered by bone conduction, as was the case with the original Google Glass.

Although Google continues to develop a second-generation screened version for the enterprise market, the audio-centric version, presumably targeting the sports market, may overcome the primary obstacle that blocked Google Glass from more widespread adoption. Specifically, a small and unobtrusive bone conduction audio device may allow for a connected experience without putting barriers between people who are face-to-face.

With all wearable devices, I worry about signaling to other people that I’m distracted or otherwise not paying full attention. I regularly found myself checking alerts on my Pebble Watch, only to realize it looked as though I were bored with the conversation or had somewhere else to be. This is less of a problem with the Apple Watch, in my experience, because it handles multiple missed alerts more


I recently started experimenting with text-to-speech (TTS) and automatic speech recognition (ASR) functionality after participating in a hackathon in San Francisco several weeks ago co-sponsored by AT&T. Further emboldened by a re-watching of 2001: A Space Odyssey, I set out to make a program that I could drive with some simple voice commands: “tell me the weather,” “what’s the market doing?” etc. Unsurprisingly, my results were mixed — the voice recording component sometimes truncated phrases, the web-based ASR took 1-2 seconds to process, and the results were sometimes wrong in unexpected ways (perhaps attributable more to my microphone than to AT&T’s API). I was sure that someone a little more experienced could really knock speech functionality out of the park.
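For the curious, the command-matching half of that experiment is simple to sketch. Below is a minimal, hypothetical dispatcher that maps a recognized transcript to an action; the phrases and handler names are my own illustrations, and the actual ASR step (recording audio and sending it to a web API) is assumed to have already produced the text. The loose matching is one way to cope with the truncated phrases I kept getting back.

```python
# Minimal voice-command dispatcher: maps an ASR transcript to an action.
# The commands and handlers are illustrative placeholders; real transcripts
# would come from an upstream speech API.

def get_weather():
    return "weather: sunny, 65F"  # stand-in for a real weather lookup

def get_market():
    return "market: S&P 500 up 0.4%"  # stand-in for a real quote feed

COMMANDS = {
    "tell me the weather": get_weather,
    "what's the market doing": get_market,
}

def dispatch(transcript):
    """Normalize the transcript and run the best-matching command."""
    text = transcript.lower().strip().rstrip("?.!")
    for phrase, handler in COMMANDS.items():
        # Substring matching in either direction tolerates a recognizer
        # that truncates or pads the spoken phrase.
        if phrase in text or text in phrase:
            return handler()
    return "sorry, I didn't catch that"

if __name__ == "__main__":
    print(dispatch("Tell me the weather."))
    print(dispatch("open the pod bay doors"))
```

Even this toy version makes the failure mode obvious: everything hinges on how forgiving the matcher is when the transcript comes back mangled.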

Consequently, I was excited to play with an Xbox 360 while staying with a friend for a few days recently. It’s an old platform at this point, and I’m eager to see the Xbox One, but I was surprised how limited and clunky the console’s speech capabilities were. Firstly, even in a quiet living room we found ourselves practically yelling at the console — “Xbox, Netflix!” We’d laugh at how many tries it took to recognize our command.

Secondly, the


Wired recently pondered whether wearable technologies are only for the privileged. Ultimately the article was just a report on the high price of wearable fitness technologies and Google Glass, stopping short of exploring the very interesting question in its title, “Are Wearable Technologies Just a Luxury for the Upper-Class?” A look back at the history of innovative and disruptive technologies suggests not: these technologies will eventually reach the mass market, but only if they solve problems familiar to those lower on the socioeconomic ladder. Once-expensive revolutionary technologies can achieve economies of scale and move down-market only if they solve commonplace problems better than existing products; those that solve niche problems remain expensive, and more often than not disappear altogether.

The Cellular Phone — Problem Solver

Cell phones are everywhere, and in developing countries they are more prevalent than landlines for a variety of reasons, including the lack of wired infrastructure and more flexible payment plans, on top of all the benefits of mobility. In fact, per-capita penetration of cellular phones in the developing world is fast approaching that of the developed world.

Adoption of Mobile Phones

In addition to the many fine movies available to stream on Amazon Prime, there is also Dredd, the 2012 installment in the Judge Dredd franchise. For a film set in the 22nd century, the technologies portrayed are, for the most part, remarkably mundane. While lavishing production resources on flashy explosions and slow-motion sequences, the film’s high technology is more or less limited to one large computer monitor dressed up in futuristic garb, showing 3D floorplans, windowed CCTV security feeds, the classic Windows starfield screen saver, and a terminal window — pretty much a regular episode of 24.

The most adventurous piece of technology portrayed is the set of bionic eyes possessed by the “clan techie.” It’s never made clear what they actually do other than serve as eye candy (pun intended), because the character still uses a computer monitor and displays no special abilities. Their only function is to look pretty – Tron-blue in color with bladed, adjustable iris diaphragms – and move the plot forward in an almost negligible way (and even then without any reference to actual functionality). In fact, there’s even a


Google has revealed through regulatory filings that Google Glass will incorporate bone conduction audio output, in what may be the biggest mass-market development for the technology ever, but it will not be the first. Bone conduction audio has existed in some form for almost 100 years and has a long history of commercialization in both the medical and consumer audio markets, but to date it remains a niche technology with adoption concentrated in narrow domains.

Has the time for bone conduction audio for the mass market finally arrived? Probably not yet, but we’re close.

The Origins of Bone Conduction Audio

The general public’s unfamiliarity with bone conduction audio belies its age and the advanced state of the technology’s development. Historically, bone conduction audio has been widely employed as a hearing aid technology – by transmitting sound through the skull, audio signals may bypass a defective middle ear, allowing many hearing-impaired or otherwise deaf individuals to hear, even where traditional amplification-style hearing aids are ineffective. A bone conduction hearing aid was first described by Hugo Gernsback in 1923, though the first relevant patent I have located was issued in 1941. Since that time, the technology has improved with lighter


Elon Musk’s Tesla announced today that it would be discontinuing its entry-level Model S vehicle due to poor sales. Apparently only 4% of customers opted for the least-expensive 40 kWh version, which affords the shortest driving range (160 miles between charges). Tesla will generously fill all undelivered 40 kWh orders with a 60 kWh battery pack, but customers shouldn’t expect to take a leisurely 170-mile drive, because the battery will be software-governed to perform like a 40 kWh battery. A software upgrade will be available to unleash the full 60 kWh capacity, but it will run a cool $10,000, which is the difference in model price anyway.
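The economics of that governor are easy to model. The sketch below is purely illustrative (not Tesla’s actual firmware logic, obviously), assuming range scales linearly with usable energy at the implied 4 miles per kWh:

```python
# Hypothetical model of a software-governed battery: a 60 kWh pack sold
# (and capped) as a 40 kWh product. Illustrative only, not real firmware.

MILES_PER_KWH = 160 / 40  # implied efficiency: 160-mile range on 40 kWh

class GovernedBattery:
    def __init__(self, physical_kwh, governed_kwh):
        self.physical_kwh = physical_kwh
        # Usable capacity is capped in software, never above the physical pack.
        self.usable_kwh = min(governed_kwh, physical_kwh)

    def range_miles(self):
        return self.usable_kwh * MILES_PER_KWH

    def unlock(self):
        """The $10,000 software upgrade: lift the cap to full capacity."""
        self.usable_kwh = self.physical_kwh

pack = GovernedBattery(physical_kwh=60, governed_kwh=40)
print(pack.range_miles())  # 160.0 -- governed: behaves like a 40 kWh pack
pack.unlock()
print(pack.range_miles())  # 240.0 -- the full 60 kWh, post-upgrade
```

The hardware never changes; only the `min()` cap does — which is exactly what makes the arrangement feel so strange.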

Welcome to a world in which DRM and OEM-imposed limitations bleed into every corner of our lives. Tesla doesn’t owe anyone more than exactly what the contract stipulated, and the concept of OEM control is not entirely new (e.g., your cable box since 1980), but it is troubling to see a company artificially diminish the capabilities of its product. The motivation is not safety or product quality, but simply an exacting accounting of dollars paid per value received.

Surely there would be some upset 60 kWh buyers if


Present is precedent when it comes to envisioning our digital future in big-screen sci-fi and futuristic action films. Guided by an imperative that props must be believable to audiences in the context of today’s culture and technology, writers and prop designers are much too conservative in envisioning a future that will be radically different from today’s. Their conceptions are oftentimes visually radical, yet functionally anachronistic – a phone is still something you hold to your head, information still arrives through visors and displays that look like flashier versions of today’s screens, and so on.

Familiarity is essential to some extent. It’s like a Broadway actor flamboyantly acting out emotions with exaggerated facial gestures and body language to convey what would be a subtle expression in real life. Similarly, the retro-future technologies featured in so many sci-fi films give the audience a necessary hook to accept a futuristic vision or identify the role of a prop without stumbling over it. The technique is successful as a narrative device, but it inhibits our ability to collectively imagine the wild possibilities before us – possibilities that will trend toward the magical while being increasingly invisible (thus not making for good film).

Twenty years ago


The era of integrated computing is upon us, or at least starting in earnest. I look out upon the landscape of upcoming commercial peripherals, computing devices, and experimental tech and see many fascinating technologies that will change the ways we interact with computers, our environments, and each other. To name a few: credible mass-market gesture controllers like the Myo Armband or the Leap Motion Controller; a competitive market for wearable computers like smart watches, heads-up displays (Glass), and intelligent garments; and radical new neural-interface technologies that are allowing paralyzed individuals to interact with their worlds in previously unimaginable ways, be it through an artificial hand, leg, or eye.

I love simple solutions, and I am not overly eager to ditch my computer mouse, which serves me well (currently a Logitech M705), but in the very near future mice will command a decreasing share of our computing attention as more intuitive, natural interfaces and semi-autonomous solutions take their place. Someday the mouse may be as much a memory as the hand-crank car window.

The purpose of this blog is not to discredit the value of today’s or yesterday’s technology, but simply to share some