The era of integrated computing is upon us, or at least beginning in earnest. Looking out at the landscape of upcoming commercial peripherals, computing devices, and experimental tech, I see many fascinating technologies that will change the ways we interact with computers, our environments, and each other. To name a few: credible mass-market gesture controllers like the Myo armband and the Leap Motion Controller; a competitive market for wearable computers such as smartwatches, heads-up displays (Glass), and intelligent garments; and radical new neural-interface technologies that are allowing paralyzed individuals to interact with their worlds in previously unimaginable ways, whether through an artificial hand, leg, or eye.
I love simple solutions, and I am not overly eager to ditch my computer mouse, which serves me well (currently a Logitech M705). But in the very near future, mice will command a decreasing share of our computing attention as more intuitive and natural interfaces, along with semi-autonomous solutions, take their place. One day the mouse may be as much a memory as the hand-crank car window.
The purpose of this blog is not to discredit the value of today’s or yesterday’s technology, but simply to share some excitement and raise questions about what might come next in digital control, interaction, and integration. Enjoy.
As touch devices become more commonplace, a big question is how we will interact with our devices in the future (something your site’s title suggests). The current “picture under glass” interaction is severely limited by the fact that it (a) provides no tactile feedback and (b) isn’t pressure sensitive. Paintbrushes, chisels, mice (such as the Logitech M705), and the Wacom Bamboo are all examples of tools that *improve* the precision of our hands. Each falls somewhere on a two-dimensional matrix, with one axis being precision and the other ease of use. Touch screens are easy but not very precise. Keyboards are precise but not very easy (it’s amazing how many people still hunt-and-peck to type). I hope the next generation of devices will be both precise and easy to use. Like a good artist with an arsenal of various paintbrushes, I hope the technology of our future will give us a range of input models, each suited to a particular level of precision and ease.
Well said. I often reflect on future interface design in a utilitarian context – get information in, commands out, etc. – but you raise the interesting matter of precision, and of the essential nature of tactile feedback and interaction… both of which lend themselves to creative uses, as one would use a paintbrush, a chisel, or a Wacom. For all of my ravings about neural interfaces, it will be a long time before they are precise enough to render a deeply personal artistic vision beyond abstraction.