We live in an age when the lines between the virtual and the physical are beginning to blur. The great divide – one side symbols, the other physics – is rapidly shrinking, and most of us alive today will see these two dimensions merge into one. Sensor technology is the bridge between these two dimensions of reality. Sensors take in information from the real world and convert it into symbolic representations that can then be processed in the virtual realm.
There are many types of sensors: optical, acoustic, thermal, chemical, inertial, and more.
Three complementary technologies are spurring the revolution in sensor technology right now. The first is the availability of the kinds of Big Data analytics tools that are critical for making sense of the huge volumes of data sensors generate. The other two forces driving sensor development today are the paths by which computing power is now inching out into the physical world: on the backs of humans and of machines.
Sensors Piggybacking on Humans
The human path to sensor ubiquity started with the smartphone market nearly a decade ago, and is now building additional momentum with the coming wearable computing revolution.
When computing power was still anchored to our desktops, there just wasn’t much need for environmental information – we knew where the device was and what was around it. Most of the input requirements centered on humans, so keyboards, mice, microphones, and web cameras were pretty much all we needed. With mobile computing, location could no longer be assumed, and environmental information started to get interesting. GPS chips provided locational context to help localize search results, and embedded gyroscopes, accelerometers, and compasses gave us valuable information about our orientation and movement in 3D space. Suddenly, we humans started relying on mobile devices to get from point A to point B.
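To get a feel for how accessible these sensors have become, here is a minimal browser-side sketch in TypeScript. The Geolocation API and the deviceorientation event are standard web APIs; the console logging is just illustrative, and real apps would need to handle permissions and unsupported devices.

```ts
// Minimal sketch: reading location and orientation from a mobile browser.
// navigator.geolocation and DeviceOrientationEvent are standard web APIs.

// Location: latitude/longitude from GPS (or a network-based fallback).
navigator.geolocation.getCurrentPosition(
  (pos) => {
    console.log(`lat: ${pos.coords.latitude}, lon: ${pos.coords.longitude}`);
  },
  (err) => console.error(`geolocation error: ${err.message}`)
);

// Orientation: alpha/beta/gamma angles from the gyroscope and compass.
window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  console.log(
    `alpha: ${event.alpha}, beta: ${event.beta}, gamma: ${event.gamma}`
  );
});
```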
In a way, we are now closing in on one of our earliest pop-culture visions for sensor technology, the tricorder from Star Trek.
Sensors Piggybacking on Machines
The machine path to sensor ubiquity is actually made up of two technological trends: robotics and the Internet of Things.
The Internet of Things allows device intelligence to live in the cloud, where it is constantly improving, and it allows devices to communicate and coordinate with one another. But one of the more intriguing aspects of Internet-connected devices is their ability to collect massive amounts of data from embedded sensors.
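As a rough sketch of what that data collection looks like in practice, the snippet below shows a device periodically publishing a sensor reading over MQTT, a protocol widely used for this purpose, via the open-source mqtt.js client. The broker URL, topic name, device ID, and readTemperature stub are all hypothetical placeholders, not any particular product's setup.

```ts
import mqtt from "mqtt";

// Hypothetical broker and topic; readTemperature() stands in for a real
// sensor driver. The mqtt.js connect/publish API itself is real.
const client = mqtt.connect("mqtt://broker.example.com");

function readTemperature(): number {
  return 21.5; // stub: replace with an actual hardware read
}

client.on("connect", () => {
  // Publish a timestamped reading every 10 seconds.
  setInterval(() => {
    const payload = JSON.stringify({
      deviceId: "lab-sensor-01",
      tempC: readTemperature(),
      ts: Date.now(),
    });
    client.publish("sensors/lab/temperature", payload);
  }, 10_000);
});
```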
Researchers at MIT’s Media Lab are experimenting with mapping sensor data onto the Unity 3D game engine for easy manipulation and exploration. You can play with a real-time model of temperature, humidity, and noise levels inside the actual Media Lab building, or of a 250-acre wetlands restoration project outside Plymouth, Massachusetts.
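The core idea – pushing live sensor readings into a game engine's scene graph – is simple to sketch. The following is a generic three.js analogue, not the Media Lab's actual Unity implementation: one cube per sensor, colored by its latest temperature reading. The Reading type and the 0–40 °C color mapping are assumptions for illustration.

```ts
import * as THREE from "three";

const scene = new THREE.Scene();

type Reading = { x: number; y: number; tempC: number };

// Place a cube at the sensor's position, colored blue (cold) to red (hot).
function addSensorCube({ x, y, tempC }: Reading): void {
  const clamped = Math.min(Math.max(tempC / 40, 0), 1);
  const material = new THREE.MeshBasicMaterial({
    color: new THREE.Color().setHSL((1 - clamped) * 0.66, 1, 0.5),
  });
  const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);
  cube.position.set(x, 0, y);
  scene.add(cube);
}

addSensorCube({ x: 0, y: 0, tempC: 22.3 });
addSensorCube({ x: 2, y: 1, tempC: 31.8 });
```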
A Hybrid World
What is perhaps even more intriguing, however, is that this bridge between the physical and the virtual works in both directions.
Just as we will use sensors to project our physical reality into digital realms, so too will the digital realm use sensors to project itself into physical reality. Right now that might not mean much, but in the coming era of artificial intelligence, the ability to extend digital reality into physical reality will have huge implications for society and the planet. Sensors are the bridge in that hybrid world.
Tricorder image modified with text, from original image by Mike Seyfang.
Star Trek image modified with text, from original image on Wikipedia.
I love the wording of sensors piggybacking on humans. It is true, though, that we as individuals can help gather information that benefits the community. Maybe the simplest example is how individual rides logged on Waze using GPS help build an overview of the general traffic situation.
That’s a great example, Ann.