
Can Unity’s UI System be Used in Virtual Reality Applications?

Unity can be used for virtual reality app development
The Unity platform is a popular choice for game development. We recently posted an article about the best 3D game development books, and Unity 3D is one of the most popular choices among game developers.
If you are a beginner, you may also want to start with beginner C# books before jumping into game programming on this platform.

Unity’s UI system is a great tool for creating user interfaces, but can it be used for developing VR applications? Yes, it can.

Two input schemes are most commonly used in VR applications: a gaze pointer, which is driven by head orientation, and a second pointer that acts like a conventional mouse pointer moving across a UI panel floating in space in the virtual world.

The Unity GUI System Basics

Unity’s UI system includes:

  • EventSystem – the core through which all events flow
  • InputModules – the main source of events, managed by the EventSystem
  • RayCasters – responsible for determining which interactive components the pointer or touch is over
  • Graphic components – Button, Toggle, Slider and the like

In a mouse- or touch-driven application, the user points at or touches elements on the screen. That screen position lies in the app’s viewport and therefore defines a ray into the world. The InputModule is responsible for choosing the closest result from all of the ray intersections found by the ray casters in the scene.
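
To make that flow concrete, here is a minimal sketch, not taken from Unity’s UI source and with a made-up class name, that converts the mouse position into a world-space ray and keeps the closest physics hit, which is essentially what the ray casters and the InputModule do between them:

    using UnityEngine;

    // Minimal sketch: turn the current mouse position into a ray and pick
    // the closest physics hit, the same idea the UI ray casters and the
    // InputModule apply to screen-space pointers.
    public class ClosestHitExample : MonoBehaviour
    {
        void Update()
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

            RaycastHit[] hits = Physics.RaycastAll(ray, 100f);
            if (hits.Length == 0)
                return;

            // Choose the intersection closest to the camera.
            RaycastHit closest = hits[0];
            foreach (RaycastHit hit in hits)
            {
                if (hit.distance < closest.distance)
                    closest = hit;
            }

            Debug.Log("Pointer is over: " + closest.collider.name);
        }
    }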

Why Doesn’t This Work in VR?

Since there is no physical screen in VR, there is no surface for a pointer to move across. One option is to base GUI control on a virtual screen, a UI panel hovering in world space that the pointer can traverse; in that scheme the pointer is driven by mouse movement rather than head movement. Unity’s world-space UI system, however, still assumes clicks and touches that originate from the camera, even though the UI itself is in world space.

The gaze pointer is the other common tool: it stays at the centre of the user’s field of view and is controlled by head movement. It should work as a ray cast from between the eyes, rather than from the camera origin the way Unity’s UI system works. And if a tracked input device is connected, a pointer held in your hand could be an equally valid point of origin.
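
A gaze ray, by contrast, can be built directly from a head transform instead of a screen position. The sketch below is illustrative only; centreEyeAnchor stands in for whatever tracked head transform your rig provides:

    using UnityEngine;

    // Minimal sketch of a gaze ray: it starts at the head position and points
    // along the head's forward direction, independent of any screen.
    public class GazeRayExample : MonoBehaviour
    {
        // Assigned in the inspector, e.g. the rig's centre-eye transform.
        public Transform centreEyeAnchor;

        void Update()
        {
            Ray gazeRay = new Ray(centreEyeAnchor.position, centreEyeAnchor.forward);

            RaycastHit hit;
            if (Physics.Raycast(gazeRay, out hit, 100f))
            {
                Debug.Log("Gazing at: " + hit.collider.name);
            }
        }
    }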

Also, Unity’s UI system is oriented around screen positions, which do not map directly onto the VR world.

How Can Unity Be Configured For VR Then?

Having said that, it is not too hard to modify the UI system so that events work in world space rather than with screen positions. The raycasting code can work with rays directly and then convert the results back to screen positions for compatibility with the rest of the system.

A Sample Project Using Unity with VR

In the project you will find two scenes in the Scenes directory: Pointers and VRPointers. Pointers contains Unity UI canvases and a regular camera; VRPointers contains the same content, but with an OVRCameraRig and the UI already set up for VR.

Open the Pointers scene and run it in the editor. It works like a regular non-VR application: you can move the mouse, drag the sliders and click the check boxes.

To convert this scene to VR:

  • Leave Play Mode before continuing.
  • Replace the Camera with an OVRCameraRig: delete the Camera from the scene and drop in the OVRCameraRig prefab wherever it gives a good vantage point on the UI.
  • Select the EventSystem in the Hierarchy view, remove the StandaloneInputModule component, which is used for normal mouse input, and add the new OVRInputModule, which handles ray-based pointing through its Ray Transform property.
  • Drag the CentreEyeAnchor from the OVRCameraRig onto the Ray Transform slot, so that your head direction drives the pointer.
  • Add the OVRGamepadController component to the EventSystem object for gamepad support.
  • Add a gaze pointer by dropping the GazePointerRing prefab from the Assets->Prefabs directory into the scene. The OVRInputModule will find it and move it around.
  • Drag the OVRCameraRig object onto the gaze pointer’s CameraRig slot so that it is aware of the rig.
  • Convert the world-space Canvas objects with a few small changes. (At this point you should be able to run the scene, use the gaze pointer and the spacebar to control GUI elements, and configure a gamepad button as the gaze-click event.)
  • Add an OVRPhysicsRaycaster component to the OVRCameraRig and set its EventMask property to the “Gazable” layer.
  • Add a world-space pointer to the scene; it will be activated when you look at the Canvas with the gaze pointer. Instantiate the CanvasPointer prefab as a child of the Canvas, or replace it with any 2D or 3D pointer representation. Drag the added pointer onto the Pointer reference slot on the OVRRaycaster, and add an OVRMousePointer component to the Canvas object so the pointer can be moved around the canvas.

Run the scene and you can interact with the UI and physics elements using the gaze pointer, and even control the virtual mouse pointer on the Canvas just by looking at it.

How Does It Work?

The key new components are extensions of core Unity UI classes, most notably OVRInputModule, which contains the pointer interaction and GUI element navigation code. Much of that code sits in private functions of StandaloneInputModule, so the options are to branch Unity’s UI system with your own version, to ask Unity to make those functions protected, or to inherit from a base class further up the chain and copy the code you need from StandaloneInputModule into your own class. The last approach is used here: OVRInputModule.cs inherits from PointerInputModule, with StandaloneInputModule code embedded in it in two places.
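
As a rough sketch of that inheritance approach, a custom module can derive from PointerInputModule and override Process(), which is where the logic copied from StandaloneInputModule would live. The class below is a simplified stand-in, not the real OVRInputModule:

    using UnityEngine;
    using UnityEngine.EventSystems;

    // Simplified stand-in for an input module that derives from
    // PointerInputModule (further up the chain than StandaloneInputModule)
    // and supplies its own Process() logic.
    public class MinimalRayInputModule : PointerInputModule
    {
        // A real VR module would raycast along a transform such as the
        // centre-eye anchor instead of a screen position.
        public Transform rayTransform;

        public override void Process()
        {
            // Build (or fetch) the pointer event for this frame.
            PointerEventData pointerData;
            GetPointerData(kMouseLeftId, out pointerData, true);
            pointerData.Reset();

            // Ask every raycaster in the scene for intersections and keep
            // the first valid result as the current raycast.
            eventSystem.RaycastAll(pointerData, m_RaycastResultCache);
            pointerData.pointerCurrentRaycast = FindFirstRaycast(m_RaycastResultCache);
            m_RaycastResultCache.Clear();

            // ProcessMove forwards pointer enter/exit to the UI elements.
            ProcessMove(pointerData);
        }
    }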

Processing Gaze, World Space Pointers and Rays

GetGazePointerData() and GetCanvasPointerData() are two new functions that handle pointer input state and call out to OVRRaycaster/OVRPhysicsRaycaster to look for GUI and physics intersections. PointerEventData has been subclassed into a new OVRRayPointerEventData type with an extra member, public Ray worldSpaceRay. These objects are treated like any other kind of PointerEventData; ray caster objects that are aware of the new member use it for world-space ray intersections.
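
A minimal sketch of that kind of event data extension is shown below; the class mirrors the idea of OVRRayPointerEventData but uses an illustrative name:

    using UnityEngine;
    using UnityEngine.EventSystems;

    // Sketch of a PointerEventData subclass that carries a world-space ray,
    // so ray-aware ray casters can use the ray instead of a screen position.
    public class RayPointerEventData : PointerEventData
    {
        public Ray worldSpaceRay;

        public RayPointerEventData(EventSystem eventSystem) : base(eventSystem)
        {
        }
    }

    // A ray caster that knows about the new member can then do something like:
    //
    //     RayPointerEventData rayData = eventData as RayPointerEventData;
    //     Ray ray = (rayData != null)
    //         ? rayData.worldSpaceRay
    //         : eventCamera.ScreenPointToRay(eventData.position);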

Pointing With The Help of World-Space Pointers

A public OVRRaycaster activeGraphicRaycaster member keeps track of the “active” raycaster, and various schemes can be used to decide when a raycaster becomes active; here it is activated when the gaze pointer enters it. The active OVRRaycaster matters because it is the one used to detect intersections between world-space pointers and GUI elements.
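
One possible activation scheme, sketched here with illustrative names only, is simply to record whichever canvas raycaster the gaze pointer most recently entered:

    using UnityEngine.UI;

    // Illustrative tracker for the "active" graphic raycaster: whichever
    // canvas the gaze pointer last entered wins, and leaving clears it.
    public class ActiveRaycasterTracker
    {
        public GraphicRaycaster activeGraphicRaycaster;

        public void OnGazeEnteredCanvas(GraphicRaycaster raycaster)
        {
            activeGraphicRaycaster = raycaster;
        }

        public void OnGazeLeftCanvas(GraphicRaycaster raycaster)
        {
            if (activeGraphicRaycaster == raycaster)
                activeGraphicRaycaster = null;
        }
    }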

The OVRInputModule hides from the rest of the GUI system the fact that it is dealing with VR pointers; the existing logic continues to rely on the screen position stored in pointer events. With the above strategy, world-space positions are converted to screen positions with the VR camera taken into account.

Because of this conversion, the rest of the Unity UI system can work with PointerEventData objects without knowing their point of origin: a mouse pointer, a gaze pointer or any other source.
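
A minimal sketch of that conversion, assuming you already have the VR event camera and a world-space hit point from a ray cast, could look like this:

    using UnityEngine;

    // Sketch: convert a world-space hit point into the screen-space position
    // that the rest of the UI event pipeline expects in PointerEventData.
    public static class PointerPositionUtil
    {
        public static Vector2 WorldHitToScreenPosition(Camera eventCamera, Vector3 worldHitPoint)
        {
            Vector3 screenPoint = eventCamera.WorldToScreenPoint(worldHitPoint);
            return new Vector2(screenPoint.x, screenPoint.y);
        }
    }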

The above changes are enough to make gaze pointing work in virtual reality, but a visual representation of the gaze is important too. OVRGazePointer is the component responsible for this, and OVRInputModule keeps it in the right position and orientation.
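
The idea behind keeping such a reticle positioned and oriented can be sketched as follows; GazeReticle and its fields are illustrative names, not the real OVRGazePointer:

    using UnityEngine;

    // Illustrative reticle: sit at the gaze hit point (or a default distance
    // along the gaze ray) and align its forward axis with the gaze direction.
    public class GazeReticle : MonoBehaviour
    {
        public Transform centreEyeAnchor;
        public float defaultDistance = 3f;

        void LateUpdate()
        {
            Ray gaze = new Ray(centreEyeAnchor.position, centreEyeAnchor.forward);

            RaycastHit hit;
            float distance = Physics.Raycast(gaze, out hit, 100f) ? hit.distance : defaultDistance;

            transform.position = gaze.GetPoint(distance);
            transform.rotation = Quaternion.LookRotation(gaze.direction);
        }
    }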

Conclusion

A few classes are enough to extend Unity’s UI system to work in VR, and there are many more ways to extend how Unity interacts with UIs in VR; for example, the gaze pointer could be replaced with a tracked-controller pointer.

As Unity brings in more VR-specific features, some of these changes may become redundant. But for now, this is enough to get you started.

Chirag Leuva has more than 6 years of experience in mobile game development, with a focus on iPhone, Android and Unity game development. He is currently the CEO of Yudiz Solutions, a Unity 3D game development company.
