In Minority Report, Tom Cruise scans a glass wall for a crime yet to happen, using a special glove to interact with a pulled-up screen that projects data and video of a future event.
The fictional part of the 2002 movie was the future-seeing psychics; the glass digital screens and gestural interfaces of 2054 are now a reality.
Microsoft's HoloLens uses Gaze, Gesture and Voice to interact in a real-time setting: you target what you look at, air-tap objects (no glove needed), and issue commands to Windows 10 using your voice. These are the three key input elements of the headgear.
In this case, a headgear and a powerful OS in the form of Windows 10 are needed to experience live 3D motion in real time within a mapped-out room. When PC.com was invited to attend the Build Conference in San Francisco last year, the media was given a demo on a work-in-progress unit of the HoloLens. It impressed with the untethered capability of the device and how users can place holographic objects in live settings.
How It Interacts
Gaze on HoloLens works very naturally: at any time, the system knows where your head is in space and what is directly in front of you. From this, it understands what you are paying attention to at any given moment, and establishes what both gesture and voice should be targeting.
As an example, in Airquarium, a finalist in a contest Microsoft organised, one of the core ideas is that you can look at any animal and issue a tap to find out more about it. It is gaze that tells the system which creature to present facts and stats about when you issue the command.
The Microsoft team also studied other VR gear on the market, which limited mobility, and realised users need to move around; with the HoloLens you can, and your gaze moves with you. The easiest way to think about gaze is as a raycast from the device: the system determines which object (real-world, as represented in the spatial-mapping mesh, or holographic) that ray intersects. Once you're gazing at the animal and want to find out more, you tell the HoloLens to take an action.
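The gaze-as-raycast idea can be sketched in a few lines. This is not the HoloLens SDK (which exposes gaze through its own APIs); it is a minimal illustration, assuming objects are represented as simple sphere colliders, of how the nearest object hit by the head ray becomes the gaze target:

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    name: str
    center: tuple   # (x, y, z) in metres
    radius: float

def ray_sphere_distance(origin, direction, sphere):
    """Distance along the ray to the sphere, or None if the ray misses."""
    ox, oy, oz = (origin[i] - sphere.center[i] for i in range(3))
    # Quadratic coefficients for |origin + t*direction - center|^2 = r^2
    a = sum(d * d for d in direction)
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - sphere.radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearest intersection
    return t if t >= 0 else None

def gaze_target(head_pos, gaze_dir, objects):
    """Pick the closest object the gaze ray intersects, if any."""
    hits = [(d, o.name) for o in objects
            if (d := ray_sphere_distance(head_pos, gaze_dir, o)) is not None]
    return min(hits)[1] if hits else None

# A shark straight ahead and a turtle off to the side:
scene = [Sphere("shark", (0.0, 0.0, 3.0), 0.5),
         Sphere("turtle", (2.0, 0.0, 3.0), 0.5)]
print(gaze_target((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene))  # -> shark
```

The real system raycasts against the full spatial-mapping mesh rather than idealised spheres, but the principle is the same: head position and orientation define a ray, and the first surface it hits is what you are "looking at".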
Gesture is the way to take a basic action on HoloLens: simply raise your hand with your index finger up, then tap down with that finger. You target (with gaze) and act (with gesture).
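Because the air-tap carries no position of its own, gaze and gesture naturally pair up: the tap simply acts on whatever gaze is currently targeting. A hypothetical sketch (the class and method names here are illustrative, not the HoloLens API):

```python
class InteractionManager:
    """Toy model of target-with-gaze, act-with-gesture."""

    def __init__(self):
        self.gaze_target = None   # updated every frame by the gaze raycast
        self.handlers = {}        # object name -> callback to run on air-tap

    def on_gaze(self, target):
        # Called each frame with whatever the gaze ray currently hits (or None).
        self.gaze_target = target

    def on_air_tap(self):
        # The gesture itself says only "act"; gaze supplies the target.
        handler = self.handlers.get(self.gaze_target)
        if handler:
            handler(self.gaze_target)

mgr = InteractionManager()
mgr.handlers["shark"] = lambda name: print(f"Showing facts about the {name}")
mgr.on_gaze("shark")   # gaze ray currently hits the shark
mgr.on_air_tap()       # prints: Showing facts about the shark
```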
To drive deeper interactions, voice will be the best tool:
- Because HoloLens is a full Windows 10 device, the complete Windows 10 speech engine is available to developers.
- Because the device is head mounted, the location of the wearer's mouth is known, which made it possible to build microphone arrays into the device that produce a very high-quality audio signal for speech recognition.
- Because gaze is present, you have better user context than voice-driven applications can attain today. It is now possible to understand the object or area of interest that a voice command is intended to target. Since the device provides this context, the user doesn't need to preface each command with what they're looking at, allowing deeper, easier voice-driven interactions than have been possible to date.
In a project on galaxy exploration, users could establish context (a planet or other feature) with gaze, then use voice to drive what they want to have happen. The user doesn't need to say which world they are targeting, because gaze is already telling the system. In this way, gaze combines seamlessly with voice commands to let you explore the universe in a way you've never been able to before.
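The voice-plus-gaze idea reduces to attaching the current gaze target to each recognised phrase, so the user can say "zoom in" without naming the planet. A hedged sketch (the command phrases and function are invented for illustration; the real speech engine is Windows 10's):

```python
def resolve_command(utterance, gaze_target):
    """Map a recognised phrase plus the gaze context to a concrete action.

    Returns an "action:target" string, or None if the phrase is unknown
    or nothing is being gazed at.
    """
    commands = {"zoom in": "zoom", "show facts": "facts", "go there": "travel"}
    action = commands.get(utterance.lower())
    if action is None or gaze_target is None:
        return None
    return f"{action}:{gaze_target}"

print(resolve_command("Zoom in", "Saturn"))   # -> zoom:Saturn
print(resolve_command("Go there", "Saturn"))  # -> travel:Saturn
```

Without gaze, the user would have to say "zoom in on Saturn" every time; with it, the short phrase alone is unambiguous.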
Microsoft has announced that the Development Edition will ship in the first quarter of 2016, with a price tag of US$3,000 per unit and a limit of two per developer (available in the US and Canada). The community is anxious to see how the tool and its applications change the way humans interact with computers.