Everyone is familiar with a hunch, an intuition, or a
perception. Each is intangible and involuntary, but
most people rely on them every day. Cole Sear’s
intuition is live and in color. His sixth sense
breaks down the veil that seems to exist between the
natural and the spiritual realm and it terrifies
him.
‘SixthSense’ is a wearable gestural interface
that augments the physical world around us with
digital information and lets us use natural hand
gestures to interact with that information.
We've evolved over millions of years to sense the
world around us. When we encounter something,
someone or some place, we use our five natural
senses to perceive information about it; that
information helps us make decisions and choose the
right actions to take. But arguably the most useful
information that can help us make the right decision
is not naturally perceivable with our five senses,
namely the data, information, and knowledge that
mankind has accumulated about everything, which is
increasingly available online. Although the
miniaturization of computing devices allows us to
carry computers in our pockets, keeping us
continually connected to the digital world, there is
no link between our digital devices and our
interactions with the physical world. Information is
traditionally confined to paper or to a digital
screen. SixthSense bridges this gap, bringing
intangible, digital information out into the
tangible world, and allowing us to interact with
this information via natural hand gestures.
‘SixthSense’ frees information from its confines by
seamlessly integrating it with reality, and thus
making the entire world your computer.
The SixthSense prototype consists of a pocket
projector, a mirror, and a camera. The hardware
components are coupled in a pendant-like mobile
wearable device. Both the projector and the camera
are connected to the mobile computing device in the
user’s pocket. The projector projects visual
information enabling surfaces, walls and physical
objects around us to be used as interfaces, while
the camera recognizes and tracks the user’s hand
gestures and physical objects using computer-vision
based techniques. The software program processes the
video stream data captured by the camera and tracks
the locations of the colored markers (visual
tracking fiducials) at the tip of the user’s fingers
using simple computer-vision techniques. The
movements and arrangements of these fiducials are
interpreted into gestures that act as interaction
instructions for the projected application
interfaces. The maximum number of tracked fingers is
constrained only by the number of unique fiducials;
thus SixthSense also supports multi-touch and
multi-user interaction.