Disclaimer 1: A similar reflection should be valid for other HMDs (Head Mounted Displays).
Disclaimer 2: This is a personal essay, based on my personal experience. Take the following words with a grain of salt.

The buzz about VR (Virtual Reality) is far from over. And just as with regular stereo 3D movie pipelines, we want to start working in VR as early in the production process as possible.

That doesn’t necessarily mean sculpting, animating, and grease-penciling all in VR. But we should at least have a quick way to preview our work in VR – before, after, and during every single one of those steps.

At some point we may want special interactions when exploring the scene in VR, but for the scope of this post I will stick to a way to toggle in and out of VR mode.

That raises the question: how do we “see” in VR? Basically, we need two “renders” of the scene, each with its own projection matrix and a modelview matrix that reflects the head tracking and the in-Blender camera transformations.

These renders should be updated as often as possible (e.g., at 75 Hz), even if nothing changes in your Blender scene (since the head can always move around). Just to be clear, by render here I mean the same real-time render we see in the viewport.
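
To make this concrete, here is a minimal sketch of how the per-eye matrices could be composed with Blender’s mathutils. This is my illustration, not existing addon code: the head pose and per-eye projections are assumed to come from the HMD SDK wrapper, and the fixed IPD value is a placeholder.

    from mathutils import Matrix

    # Placeholder interpupillary distance in meters; the real value
    # would come from the HMD SDK's user profile.
    IPD = 0.064

    def eye_matrices(camera_world, head_pose, proj_left, proj_right):
        """Return a (projection, modelview) pair per eye.

        camera_world: the in-Blender camera world matrix
        head_pose:    4x4 matrix built from the HMD tracking data
        proj_*:       per-eye projection matrices reported by the SDK
        """
        pairs = []
        for sign, projection in ((-1.0, proj_left), (+1.0, proj_right)):
            # Offset each eye by half the IPD from the tracked head.
            eye_offset = Matrix.Translation((sign * IPD / 2.0, 0.0, 0.0))
            modelview = (camera_world * head_pose * eye_offset).inverted()
            pairs.append((projection, modelview))
        return pairs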

There are different ways of accomplishing this, but I would like to see an addon approach, to make it as flexible as possible to adapt to new upcoming HMDs.

At this very moment, some of this is doable with the “Virtual Reality Viewport Addon”. I’m using a third-party Python wrapper of the Oculus SDK (generated partly with ctypesgen) that uses ctypes to access the library directly. Some details of this approach (a rough sketch of the ctypes usage follows the list):

  • The libraries used are from SDK 0.5 (while Oculus is about to release SDK 0.7)
  • The wrapper was generated by someone else; I have yet to learn how to re-create it
  • Direct Mode is not implemented – basically I’m turning on multiview with side-by-side output, grabbing the viewport from the screen, and applying a GLSL shader to it manually
  • The wrapper is not fully used; the projection matrix and the barrel distortion shaders are poorly done on the addon side
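
For illustration, this is roughly what the ctypes side looks like. Take the details with a grain of salt: the library filenames depend on how the SDK 0.5 libraries were built, and the ovr_Initialize() prototype is my reading of the 0.5 C API headers, not a verified signature.

    import ctypes
    import sys

    # Load the SDK 0.5 shared library for the current platform
    # (filenames are illustrative).
    if sys.platform == "win32":
        libovr = ctypes.CDLL("libovr.dll")
    elif sys.platform == "darwin":
        libovr = ctypes.CDLL("libovr.dylib")
    else:
        libovr = ctypes.CDLL("libovr.so")

    # ovr_Initialize() returns an ovrBool (a byte); NULL keeps the
    # default initialization parameters.
    libovr.ovr_Initialize.restype = ctypes.c_ubyte
    libovr.ovr_Initialize.argtypes = [ctypes.c_void_p]
    if not libovr.ovr_Initialize(None):
        raise RuntimeError("could not initialize the Oculus runtime")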
Virtual Reality Viewport Addon in action – sample scene from Creature Factory 2 by Andy Goralczyk

Not supporting Direct Mode (nor the latest Direct Driver Mode) seems to be a major drawback of this approach (Extended Mode is deprecated in the latest SDKs). The positive points are: it is cross-platform, non-intrusive, and (potentially) HMD agnostic.

The opposite approach would be to integrate the Oculus SDK directly into Blender. We could create the FBOs, gather the tracking data from the Oculus, force a drawing update every frame (~75 Hz), and send the frame to the Oculus via Direct Mode. The downsides of this solution:

  • License issue – dynamic linking may solve that
  • Maintenance burden – if this doesn’t make it into master, the branch has to be kept up to date with the latest Blender developments
  • Platform specific – hard to prevent, since the Oculus SDK is currently Windows-only
  • HMD specific – this solution is tailored towards the Oculus only
  • The one upside: performance would be as good as you could get

All things considered, this is not a bad solution, and it may be the easiest one to implement. In fact, once we go this way, the same solution could be implemented in the Blender Game Engine.

That said, I would like to see a compromise: a solution that could eventually be expanded to different HMDs and other OSs (Operating Systems). Thus the ideal scenario would be to implement it as an addon. I like the idea of using ctypes with the Oculus SDK, but we would still need the following changes in Blender:

  • Off-screen rendering exposed to the Python API
  • Additions to the OpenGL Wrapper (bgl)
  • BGE access to the off-screen rendering functionality

The OpenGL Wrapper change should be straightforward – I’ve done this a few times myself. The main off-screen rendering change may be self-contained enough to be incorporated into Blender without much hassle. The function should receive a projection matrix and a modelview matrix as input, as well as the resolution and the FBO bind id. A hypothetical sketch of that call follows.
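
To give an idea, here is what the Python side of that function could look like. None of these names exist in Blender; they only illustrate the inputs listed above.

    import bpy
    from mathutils import Matrix

    def draw_eye_offscreen(context, hmd, eye, width, height, fbo_bind_id):
        """Hypothetical per-eye draw call; no such Blender API exists yet."""
        projection = Matrix(hmd.projection_matrix(eye))  # from the SDK wrapper
        modelview = Matrix(hmd.modelview_matrix(eye))    # camera * head tracking
        # Proposed entry point: draw the viewport off-screen into the
        # caller-provided FBO, using the caller-provided matrices.
        context.space_data.draw_offscreen(projection, modelview,
                                          width, height, fbo_bind_id)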

The BGE change would be a nice addition and would illustrate the strength of this approach. Given that the heavy lifting is still done in C, Python shouldn’t affect the performance much, and this could work in a game environment as well. The other advantage is that multiple versions of the SDK can be kept around, in order to keep supporting OSX and Linux until a new cross-platform SDK ships.
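
For instance, the addon could ship one wrapper per SDK release and pick the right library at runtime; the mapping below is made up for illustration.

    import ctypes
    import sys

    # Hypothetical mapping: newer runtime on Windows, the last
    # cross-platform release (0.5) everywhere else.
    SDK_LIBRARIES = {
        "win32": "libovr_0.7.dll",
        "darwin": "libovr_0.5.dylib",
        "linux": "libovr_0.5.so",
    }

    def load_oculus_library():
        key = "linux" if sys.platform.startswith("linux") else sys.platform
        return ctypes.CDLL(SDK_LIBRARIES[key])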

That’s pretty much it. If you have any related reflections, please share them in the comments below.

Dalai Felinto
Rio de Janeiro, September 18th, 2015


Thanks to Blend4Web, there is a very straightforward way of embedding a 3D model from Blender into Facebook. I couldn’t let this pass, so I gave it a go today. If you want to check out the “app”, it is publicly available here (if the app fails to load, disable Ghostery or similar active addons).

The trickiest part for me was to enable https on my server (which is required by Facebook). Apart from that, everything was pretty straightforward (the navigation in the app is the default built-in viewer from Blend4Web).

I would like to thank Cícero Moraes and his team for sharing the 3D model with me, and for allowing me to re-share it via Facebook.

Technical sheet:

Saint Anthony, forensic facial reconstruction in 3D by Cícero Moraes, Paulo Miamoto PhD, and team.

This project was part of the activities of Ebrafol (Equipe Brasileira de Antropologia Forense e Odontologia Legal, the Brazilian team of forensic anthropology and legal odontology), made possible thanks to a plethora of open source tools.

The final digital file was made with Blender 3D, and is shown here exported via Blend4Web.