OSVR Software Version 0.6 Released

The Sensics OSVR team is glad to announce the release of OSVR software version 0.6, which includes dozens of new features and updates. Thanks to all of the OSVR contributors!

Below is a description of the major additions to OSVR. You can read the full release notes here.

Major Features in OSVR-Core

Optical video-based (“positional”) tracking

The positional tracking feature uses IR LEDs embedded in the OSVR HDK, together with the 100 Hz IR camera included with the HDK, to provide real-time position (XYZ) and orientation tracking of the HMD.

The LEDs flash in a known pattern, which the camera detects. By comparing the locations of the detected LEDs with their known physical positions on the device, the software determines the position and orientation (pose) of the HMD.

The software currently looks for two targets (LED patterns): one on the front of the HDK and one on the back. Additional targets can be added, so other devices with known IR LED patterns can also be tracked in the same space.

 It is also possible to assign different flashing patterns to multiple HDK units, thus allowing multiple HDK units to be tracked with the same camera. This is useful for multi-user play. Changing the IR codes on the HDK requires re-programming the IR LED board.

Sensics is working with select equipment developers to adapt the IR LED board and pattern to the specific shape of an object (e.g. a glove or gaming weapon) so that the object can also be tracked with the OSVR camera.

The tracking code is included with the OSVR-Core source code and is installed as an optional component while we optimize its performance. It will become the standard tracker once camera distortion correction, LED position optimization, and sensor fusion with IMU data have been implemented.

The image below shows a built-in debugging window that displays the camera image overlaid with detected beacon locations (in red; a tag of -1 means the beacon has not been visible long enough to be identified) and reprojected 3D LED positions (in green, even for beacons not seen in the image). RANSAC-based pose estimation from OpenCV provides the tracking.
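
To illustrate the idea behind that pose-estimation step, the sketch below feeds a set of known beacon positions and their 2D detections to OpenCV's RANSAC-based PnP solver. The beacon coordinates, image points, and camera intrinsics are made-up placeholders, not the actual HDK calibration, and this is not the OSVR tracker code itself.

```cpp
// Illustrative sketch: recover a pose from identified IR beacons using
// OpenCV's RANSAC-based PnP solver. All numbers below are placeholders,
// not real OSVR HDK calibration data.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Known 3D positions of identified beacons in the target's own frame
    // (values are made up, units arbitrary).
    std::vector<cv::Point3f> beaconModel = {
        {-40.f, 20.f, 0.f},  {0.f, 25.f, 5.f},  {40.f, 20.f, 0.f},
        {-40.f, -20.f, 0.f}, {0.f, -25.f, 5.f}, {40.f, -20.f, 0.f}};

    // Corresponding 2D detections in the camera image (pixels, made up).
    std::vector<cv::Point2f> detections = {
        {280.f, 215.f}, {320.f, 210.f}, {360.f, 216.f},
        {281.f, 265.f}, {321.f, 270.f}, {361.f, 264.f}};

    // Pinhole camera intrinsics (placeholder focal length and center).
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 700, 0, 320,
                                                        0, 700, 240,
                                                        0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);

    // Estimate rotation (Rodrigues vector) and translation of the target
    // relative to the camera; RANSAC rejects outlier correspondences.
    cv::Mat rvec, tvec;
    cv::solvePnPRansac(beaconModel, detections, cameraMatrix, distCoeffs,
                       rvec, tvec);

    std::cout << "rotation (Rodrigues): " << rvec << "\n"
              << "translation: " << tvec << std::endl;
    return 0;
}
```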

Centralized Display Interface

The OSVR-Core API now includes methods to retrieve the output of a computational model of the display. Previously, applications or game engine integrations were responsible for parsing display description JSON data and computing transformations themselves. This centralized system allows for improvements in the display model without requiring changes in applications, and also reduces the amount of code required in each application or game engine integration.

The conceptual model is “viewer-eye-surface” (also known as “viewer-screen”), rather than a “conventional camera”, as this suits virtual reality better.[1] However, it has been implemented to be usable in engines (such as Unity) that are camera-based, as the distinction is primarily conceptual.

As a demonstration of this API, a fairly minimal OpenGL sample (using SDL2 to open the window) is now included with OSVR-Core.
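
As a rough illustration of the pattern, the sketch below walks the viewer/eye/surface hierarchy through the ClientKit display C API rather than parsing the display descriptor itself. The function names follow our reading of DisplayC.h in this release; treat the exact signatures as approximate, and see the bundled OpenGL sample for authoritative usage (the application identifier below is arbitrary).

```cpp
// Sketch: querying the centralized display model from a native client
// instead of parsing the display-descriptor JSON in the application.
// Function names follow our reading of ClientKit's DisplayC.h; verify
// against the shipped headers and the bundled OpenGL/SDL2 sample.
#include <osvr/ClientKit/ContextC.h>
#include <osvr/ClientKit/DisplayC.h>
#include <cstdio>

int main() {
    OSVR_ClientContext ctx = osvrClientInit("com.example.DisplayQuery", 0);

    OSVR_DisplayConfig display = nullptr;
    if (osvrClientGetDisplay(ctx, &display) != OSVR_RETURN_SUCCESS) {
        std::fprintf(stderr, "Could not get display config\n");
        return 1;
    }
    // The display model may need a few updates before it is fully usable.
    while (osvrClientCheckDisplayStartup(display) != OSVR_RETURN_SUCCESS) {
        osvrClientUpdate(ctx);
    }

    // Walk the viewer -> eye -> surface hierarchy that replaces the
    // "conventional camera" model.
    OSVR_ViewerCount viewers = 0;
    osvrClientGetNumViewers(display, &viewers);
    for (OSVR_ViewerCount v = 0; v < viewers; ++v) {
        OSVR_EyeCount eyes = 0;
        osvrClientGetNumEyesForViewer(display, v, &eyes);
        for (OSVR_EyeCount e = 0; e < eyes; ++e) {
            OSVR_SurfaceCount surfaces = 0;
            osvrClientGetNumSurfacesForViewerEye(display, v, e, &surfaces);
            std::printf("viewer %u, eye %u: %u surface(s)\n", unsigned(v),
                        unsigned(e), unsigned(surfaces));
            // Per-surface projection matrices and viewports are retrieved
            // through similar getters; see DisplayC.h and the sample.
        }
    }

    osvrClientFreeDisplay(display);
    osvrClientShutdown(ctx);
    return 0;
}
```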

Render Manager

The Sensics/OSVR Render Manager provides optimal low-latency rendering on any OSVR-supported device. Render Manager currently provides an enhanced experience with NVIDIA's GameWorks VR technology on Windows. Support for additional vendors (e.g. AMD, Intel) is in progress, and we are also exploring options for working with graphics vendors in mobile environments.

 Unlike most of the OSVR platform, the Render Manager is not open-sourced at this point. The main reason is that the NVIDIA API was provided to Sensics under NDA and thus we cannot expose it at this time.

Key features enabled by the Render Manager:

  • DirectMode: Enables an application to treat VR headsets as head-mounted displays that are accessible only to VR applications, bypassing the rendering delays typical of Windows displays. DirectMode supports both Direct3D and OpenGL applications.
  • Front-Buffer Rendering: Renders directly to the front buffer to reduce latency.
  • Asynchronous Time Warp: Reduces latency by making just-in-time adjustments to the rendered image based on the latest head orientation after scene rendering but before sending the pixels to the display.  This is implemented in the OpenGL rendering pathway (including DirectMode) and hooks are in place to implement it in Direct3D.  It includes texture overfill on all borders for both eyes and supports all translations and rotations, given an approximate depth to apply to objects in the image.

Coming very soon:

  • Distortion Correction: Handling the per-color distortion found in some HMDs requires post-rendering distortion.  The same buffer-overfill rendering used in Asynchronous Time Warp will provide additional image regions for rendering.
  • High-Priority Rendering: Increasing the priority of the rendering thread associated with the final pixel scan-out ensures that every frame is displayed on time.
  • Time Tracking: Telling the application what time the future frame will be displayed lets it render the appropriate scene.  This also enables the Render Manager to do predictive tracking when producing the rendering transformations and asynchronous time warp.  The system also reports the time taken by previous rendering cycles, letting the application know when to simplify the scene to maintain an optimal update rate.
  • Unity Low-level Native Plugin Interface: A Rendering Plugin will soon enable Render Manager’s features in Unity, and enable it to work with Unity’s multithreaded rendering.

 Render Manager is currently available only for OSVR running on Windows.

Several example programs and configuration files are provided and open-sourced for OpenGL (fixed-pipeline and shader versions; callback-based and client-defined-buffer rendering) and Direct3D11 (callback-based and client-defined-buffer rendering; library-defined and client-defined devices). Also included is a program with adjustable rendering latency that can be used to test the effectiveness of asynchronous time warp and predictive tracking as application performance changes.

Predictive Tracking

Predictive tracking reduces the perceived latency between motion and rendering by estimating the head pose at a future point in time. At present, OSVR predictive tracking uses the angular velocity of the head to estimate orientation 16 ms (one frame at 60 FPS) into the future.

 Angular velocity is available as part of the orientation report from the OSVR HDK. For other HMDs that do not provide angular velocity, it can be estimated using finite differencing of successive angular position reports.
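
The prediction itself is a small piece of quaternion math: integrate the angular velocity over the prediction interval and compose the resulting incremental rotation with the current orientation. The self-contained sketch below illustrates the idea; it is not the OSVR-Core implementation, and the frame in which the angular velocity is applied is an assumption noted in the comments.

```cpp
// Illustrative sketch of angular-velocity-based orientation prediction
// (not the actual OSVR-Core implementation).
#include <array>
#include <cmath>
#include <cstdio>

struct Quat { double w, x, y, z; }; // unit quaternion: w + xi + yj + zk

// Hamilton product a * b.
Quat multiply(const Quat &a, const Quat &b) {
    return {a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w};
}

// Predict orientation dt seconds ahead, given angular velocity in rad/s.
Quat predict(const Quat &current, const std::array<double, 3> &omega,
             double dt) {
    const double norm = std::sqrt(omega[0] * omega[0] + omega[1] * omega[1] +
                                  omega[2] * omega[2]);
    if (norm * dt < 1e-9) {
        return current; // negligible rotation over this interval
    }
    const double half = 0.5 * norm * dt;       // half the rotation angle
    const double s = std::sin(half) / norm;    // scales omega to the axis term
    const Quat delta{std::cos(half), omega[0] * s, omega[1] * s, omega[2] * s};
    // Assumes omega is expressed in the world/room frame; use
    // multiply(current, delta) instead if it is reported in the body frame.
    return multiply(delta, current);
}

int main() {
    Quat head{1.0, 0.0, 0.0, 0.0};              // identity: looking straight
    std::array<double, 3> omega{0.0, 1.0, 0.0}; // 1 rad/s of yaw
    Quat p = predict(head, omega, 0.016);       // ~one 60 FPS frame ahead
    std::printf("predicted: w=%.4f x=%.4f y=%.4f z=%.4f\n", p.w, p.x, p.y, p.z);
    return 0;
}
```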

Profiling tools

Built on ETW (Event Tracing for Windows), the OSVR performance profiler helps optimize application performance by identifying bottlenecks throughout the entire software stack.

 Event Tracing for Windows (ETW) is an efficient kernel-level tracing facility that lets you log kernel or application-defined events to a log file and then interactively inspect and visualize them with a graphical tool. As the name suggests, ETW is available only for the Windows platform. However, OSVR-Core’s tracing instrumentation and custom events use an internal, cross-platform OSVR tracing API for portability.

Currently the default libraries have tracing turned off to minimize any possible performance impact. However, the “tracing” directory contains tracing-enabled binaries along with instructions on how to swap them in to use the tracing features. See this slide deck for a brief introduction to this powerful tool: http://osvr.github.io/presentations/20150901-Intro-ETW-OSVR/

JointClientKit

The default OSVR configuration has the client and server run as two separate processes. Amongst other advantages, this keeps the device servicing code out of the “hot path” of the render loop and allows multiple clients to communicate with the same server.

 In some cases, it may be useful to run the server and client in a single process, with their main loops sharing a single thread. Examples where this might be useful include automated tests, special-purpose apps, or apps on platforms that do not support interprocess communication or multiple threads (in which case no async plugins can be used either). The new JointClientKit library was added to allow those special use cases: it provides methods to manually configure a server that will run synchronously with the client context.

 Note that the recommended usage of OSVR has not changed: you should still use ClientKit and a separate server process in nearly all cases. Other ways of simplifying the user experience, including hidden/launch-on-demand server processes, are under investigation.
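
A minimal sketch of that joint usage is shown below. The option and initialization calls reflect our reading of the JointClientKit C header; the exact function names, as well as the application identifier and fixed loop count, should be treated as approximate and verified against JointClientKitC.h.

```cpp
// Sketch: running server and client joined in one process, sharing a thread.
// Function names reflect our reading of JointClientKit's C API
// (JointClientKitC.h); treat them as approximate and check the shipped header.
#include <osvr/ClientKit/ContextC.h>
#include <osvr/JointClientKit/JointClientKitC.h>

int main() {
    // Build up a server configuration programmatically instead of loading
    // a config file in a separate server process.
    OSVR_JointClientOpts opts = osvrJointClientCreateOptions();
    osvrJointClientOptionsAutoloadPlugins(opts);
    osvrJointClientOptionsTriggerHardwareDetect(opts);

    // The returned context behaves like a normal ClientKit context, but
    // each osvrClientUpdate() also services the embedded server.
    OSVR_ClientContext ctx = osvrJointClientInit("com.example.JointDemo", opts);
    if (!ctx) {
        return 1;
    }
    for (int i = 0; i < 1000; ++i) {
        osvrClientUpdate(ctx); // runs client and server work on this thread
    }
    osvrClientShutdown(ctx);
    return 0;
}
```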

New Android Capabilities

A new device plugin has been written to support Android orientation sensors. This plugin exposes an orientation interface to the OSVR server running on an Android device. This is available here: https://github.com/OSVR/OSVR-Android-Plugins
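
Clients consume that orientation data through the usual ClientKit path and callback pattern, on Android as on the desktop. Below is a minimal sketch assuming the standard ClientKit C API; the application identifier and fixed update loop are arbitrary.

```cpp
// Sketch: consuming an orientation interface (such as the one exposed by the
// Android orientation plugin) via ClientKit's C API. The application
// identifier and fixed loop count are arbitrary choices.
#include <osvr/ClientKit/ContextC.h>
#include <osvr/ClientKit/InterfaceC.h>
#include <osvr/ClientKit/InterfaceCallbackC.h>
#include <cstdio>

void onOrientation(void * /*userdata*/, const OSVR_TimeValue * /*when*/,
                   const OSVR_OrientationReport *report) {
    // Orientation arrives as a unit quaternion.
    std::printf("orientation: w=%.3f x=%.3f y=%.3f z=%.3f\n",
                osvrQuatGetW(&report->rotation),
                osvrQuatGetX(&report->rotation),
                osvrQuatGetY(&report->rotation),
                osvrQuatGetZ(&report->rotation));
}

int main() {
    OSVR_ClientContext ctx = osvrClientInit("com.example.HeadOrientation", 0);

    // "/me/head" is the standard semantic path for head orientation.
    OSVR_ClientInterface head = nullptr;
    osvrClientGetInterface(ctx, "/me/head", &head);
    osvrRegisterOrientationCallback(head, &onOrientation, nullptr);

    for (int i = 0; i < 1000; ++i) {
        osvrClientUpdate(ctx); // pumps reports and fires callbacks
    }
    osvrClientShutdown(ctx);
    return 0;
}
```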

A new Android OpenGL ES 2.0 sample demonstrates basic OSVR usage on Android in C++. You can find this sample here: https://github.com/OSVR/OSVR-Android-Samples

An early version of an Android app has been written that launches the OSVR server on a device to run in the background. This eliminates the need to root the phone, which existed in previous OSVR/Android versions. You can find this code here: https://github.com/OSVR/OSVR-AndroidServerLauncher

The Unity Palace demo for Android (https://github.com/OSVR/OSVR-Unity-Palace-Demo/releases/download/v0.1.1-android/OSVR-Palace-Android-0.1.1.zip) can now work with the device's internal orientation sensors as well as with external sensors.

Engine Integrations

OSVR continues to expand the range of engines with which it integrates. These currently include:

  • Unity
  • Unreal
  • Monogame
  • Valve OpenVR (in beta): https://github.com/OSVR/SteamVR-OSVR

Below are details on new integrations as well as improvements to existing integrations:

Language Bindings

The .NET language bindings for OSVR have been updated to support new interface types for eye tracking. This includes 2D and 3D eye tracking, direction, location (2D), and blink interface types.

New Unity capabilities

Unity adapters for the eye tracking interface types have been added, as well as prefabs and utility behaviors that make it easier to incorporate eye tracking functionality into Unity scenes.

The optional distortion shader has been completely reworked to be more efficient and to provide a better experience with the Unity 4.6 free edition.

The OSVR-Unity plugin now retrieves the output of the computational display model from the OSVR-Core API. This eliminates the need to parse JSON display descriptor data in Unity, which allows for improvements in the display model without having to rebuild a game. The “VRDisplayTracked” prefab has been improved to create a stereo display at runtime based on the configured number of viewers and eyes.

 Coming soon: Distortion Mesh support. Mesh-based distortion uses a pre-computed mesh rather than a shader to do the heavy lifting for distortion. There is a working example in a branch of OSVR-Unity.

 Contributions wanted:

  • UI to display hardware specs, display parameters, and performance statistics.

New Plugins and Interfaces

Gesture interface

The gesture interface brings new functionality that allows OSVR to support devices that detect body gestures, including movements of the hands, head, and other parts of the body. This provides a way to integrate devices such as the Leap Motion®[2], Nod Labs Ring, Microsoft® Kinect®[3], Thalmic Labs Myo[4] armband, and many others. Developers can combine the gesture interface with other interfaces to provide meaningful information such as the orientation, position, acceleration, and/or velocity of the user's body parts.

A new API has been added on the plugin and client sides to report and retrieve gestures for a device. The gesture API provides a list of preset gestures while staying flexible enough to allow custom gestures to be used.

We added a simulation plugin, com_osvr_example_Gesture (see the description of simulation plugins below), that uses the gesture interface to feed a list of gestures, and we also created a sample client application to visually output the gestures received from the plugins. These tools are useful when developing new plugins or client apps.

Using the new interface, we are working on releasing a plugin for Nod Ring that will expose a gesture interface as well as existing interfaces.
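
For illustration only, a client-side sketch in the spirit of the other ClientKit interfaces might look like the following. The callback name, report type, and device path here are assumptions rather than confirmed API; consult the 0.6 ClientKit headers and the sample gesture client for the actual gesture API.

```cpp
// Hypothetical sketch of a gesture client. The callback name, report type,
// and device path are assumptions modeled on the other ClientKit interfaces;
// check the 0.6 headers and the sample gesture client for the real API.
#include <osvr/ClientKit/ContextC.h>
#include <osvr/ClientKit/InterfaceC.h>
#include <osvr/ClientKit/InterfaceCallbackC.h>
#include <cstdio>

void onGesture(void *userdata, const OSVR_TimeValue * /*when*/,
               const OSVR_GestureReport * /*report*/) {
    // Per the interface description, a report identifies which preset or
    // custom gesture was recognized; the exact field names are not shown
    // here because they are not confirmed.
    int *count = static_cast<int *>(userdata);
    std::printf("gesture report #%d received\n", ++(*count));
}

int main() {
    OSVR_ClientContext ctx = osvrClientInit("com.example.GestureDemo", 0);

    // Hypothetical path; assumes the com_osvr_example_Gesture simulation
    // plugin (or another gesture device) is configured on the server.
    OSVR_ClientInterface gestures = nullptr;
    osvrClientGetInterface(ctx, "/com_osvr_example_Gesture/Gesture/gesture",
                           &gestures);

    int count = 0;
    osvrRegisterGestureCallback(gestures, &onGesture, &count);

    for (int i = 0; i < 1000; ++i) {
        osvrClientUpdate(ctx);
    }
    osvrClientShutdown(ctx);
    return 0;
}
```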

Locomotion interface

The locomotion interface adds an API to support a class of devices known as omnidirectional treadmills (ODTs), which allow walking and running on a motion platform and convert this movement into navigation input in a virtual environment. Examples of devices that could use the locomotion interface include the Virtuix Omni, Cyberith Virtualizer, and Infinadeck. These devices are particularly useful for first-person shooter (FPS) games, and by combining the locomotion interface with a tracker, additional features such as body orientation and jump/crouch sensing could be added.

The API allows the ODTs to report the following data (on a 2D plane):

  • User’s navigational velocity
  • User’s navigational position
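
A hypothetical client-side sketch, modeled on the callback pattern of the other ClientKit interfaces, is shown below. The callback name, report type, field names, and device path are all assumptions; check the 0.6 ClientKit headers for the actual locomotion API.

```cpp
// Hypothetical sketch of consuming locomotion (ODT) data. The callback name,
// report type, field names, and path are assumptions modeled on the other
// ClientKit interfaces; check the 0.6 headers for the real locomotion API.
#include <osvr/ClientKit/ContextC.h>
#include <osvr/ClientKit/InterfaceC.h>
#include <osvr/ClientKit/InterfaceCallbackC.h>
#include <cstdio>

// Assumed report layout: a 2D velocity vector on the walking plane.
void onNaviVelocity(void * /*userdata*/, const OSVR_TimeValue * /*when*/,
                    const OSVR_NaviVelocityReport *report) {
    std::printf("navigation velocity: x=%.2f y=%.2f\n",
                report->state.data[0], report->state.data[1]);
}

int main() {
    OSVR_ClientContext ctx = osvrClientInit("com.example.TreadmillDemo", 0);

    // Hypothetical semantic path for a treadmill-style device.
    OSVR_ClientInterface locomotion = nullptr;
    osvrClientGetInterface(ctx, "/me/feet/locomotion", &locomotion);
    osvrRegisterNaviVelocityCallback(locomotion, &onNaviVelocity, nullptr);

    for (int i = 0; i < 1000; ++i) {
        osvrClientUpdate(ctx);
    }
    osvrClientShutdown(ctx);
    return 0;
}
```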

EyeTracker interface

The EyeTracker interface provides an API to report detailed information about the movement of one or both eyes.

 This includes support for reporting:

  • 3D gaze direction – a unit directional vector in 3D
  • 2D gaze location – a position within a 2D region
  • Detection of blink events

Eye trackers are effective tools for interacting inside a VR environment, providing an intuitive way to make selections, move objects, and so on. The data reported from these devices can be analyzed for human-behavior studies, marketing research, and other research topics, as well as used in gaming applications. Eye tracking can also be used to make virtual reality rendering more accurate by customizing it for the location of the pupil every frame.

A .NET binding for EyeTracker (described above) allows easy integration of the eye tracking data into Unity.
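
In native code, gaze data can be consumed through the same ClientKit callback pattern. The sketch below assumes a direction-style callback matching the other ClientKit interfaces and an example device path; both the exact report field names and the path are assumptions to verify against the 0.6 headers and your server configuration.

```cpp
// Sketch: receiving 3D gaze-direction reports. The report field name and the
// semantic path are assumptions; verify against the 0.6 ClientKit headers
// and your server configuration.
#include <osvr/ClientKit/ContextC.h>
#include <osvr/ClientKit/InterfaceC.h>
#include <osvr/ClientKit/InterfaceCallbackC.h>
#include <cstdio>

void onGazeDirection(void * /*userdata*/, const OSVR_TimeValue * /*when*/,
                     const OSVR_DirectionReport *report) {
    // Unit vector pointing where the eye is looking (field name assumed).
    std::printf("gaze direction: %.3f %.3f %.3f\n",
                report->direction.data[0], report->direction.data[1],
                report->direction.data[2]);
}

int main() {
    OSVR_ClientContext ctx = osvrClientInit("com.example.GazeDemo", 0);

    // Example semantic path; actual paths depend on the server config.
    OSVR_ClientInterface gaze = nullptr;
    osvrClientGetInterface(ctx, "/me/eyes/left", &gaze);
    osvrRegisterDirectionCallback(gaze, &onGazeDirection, nullptr);

    for (int i = 0; i < 1000; ++i) {
        osvrClientUpdate(ctx);
    }
    osvrClientShutdown(ctx);
    return 0;
}
```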

SMI Eye Tracker plugin

In collaboration with SensoMotoric Instruments GmbH (SMI), we are releasing a new plugin for SMI trackers. For instance, the plugin supports the SMI Upgrade Package for the Oculus Rift DK2. It uses the SMI SDK to provide real-time streaming of eye and gaze data and reports it via the EyeTracker interface.

The SMI plugin also provides an OSVR Imaging interface to stream the eye tracker images.

The plugin is available at https://github.com/OSVR/OSVR-SMI

Simulation plugins

Along with the newly added interfaces (eye tracker, gesture, locomotion), we provide simulation plugins that serve as examples of how to use a given interface. Their purpose is to emulate a certain type of device (joystick, eye tracker, head tracker, etc.) connected to the OSVR server and to feed simulated data to the client. These plugins were added as a development tool so that developers can easily run tests without needing to attach multiple devices to the computer. We will be expanding the available simulation plugins to cover every type of interface. Simulation plugins are available in OSVR-Core and can be modified for a specific purpose.

In Closing…

As always, the OSVR team, with the support of the community, is continuously adding smaller features and addressing known issues. You can see all of these on GitHub, such as in this link for OSVR-Core.

Interested in contributing? Start here: http://osvr.github.io/contributing/

Thank you for being part of OSVR!

[1] Kreylos, 2012, “Standard camera model considered harmful.” <http://doc-ok.org/?p=27>

[2] Leap Motion is a trademark of Leap Motion, Inc.

[3] Microsoft and Kinect are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

[4] ThalmicLabs™ and Myo™ are trademarks owned by Thalmic Labs Inc.

[5] Logbar Inc. Ring is a trademark of Logbar Inc.
