Disclaimer 1: A similar reflection should be valid for other HMDs (Head-Mounted Displays).
Disclaimer 2: This is a personal essay, based on my personal experience. Take the following words with a grain of salt.


The buzz about VR (Virtual Reality) is far from over. And just as with regular stereo 3D movie pipelines, we want to work in VR as soon as possible.

That doesn’t necessarily mean sculpting, animating, and grease-penciling all in VR. But we should at least have a quick way to preview our work in VR – before, after, and during every single one of those steps.

At some point we may want special interactions when exploring the scene in VR, but for the scope of this post I will stick to exploring a way to toggle in and out of VR mode.

That raises the question: how do we “see” in VR? Basically we need two “renders” of the scene, each with a unique projection matrix and a modelview matrix that reflects the head tracking and the in-Blender camera transformations.

These should be updated as often as possible (e.g., at 75Hz), even if nothing changes in your Blender scene (since the head can always move around). Just to be clear, by render here I mean the same real-time render we see in the viewport.
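To make the idea concrete, here is a minimal, self-contained sketch of how the two per-eye modelview matrices could be composed. Everything here (the function names, the 0.064 m IPD default) is an illustrative assumption, not the actual addon or Oculus SDK code:

```python
# Sketch: deriving the two per-eye modelview matrices from the camera
# transform and the head tracking data. All names and the IPD value are
# illustrative assumptions.

IPD = 0.064  # interpupillary distance in meters (assumed default)

def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as nested lists (row-major)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def eye_modelview(camera_inverse, head_tracking, eye):
    """Combine camera, head tracking, and a per-eye offset.

    camera_inverse: 4x4 inverse of the Blender camera world matrix
    head_tracking:  4x4 matrix built from the HMD orientation/position
    eye:            -1 for the left eye, +1 for the right eye
    """
    eye_offset = translation(-eye * IPD / 2.0, 0.0, 0.0)
    return mat_mul(eye_offset, mat_mul(head_tracking, camera_inverse))
```

Each frame (ideally at the display refresh rate) both matrices would be rebuilt from fresh tracking data and handed to two viewport renders, one per eye.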

There are different ways of accomplishing this, but I would like to see an addon approach, to make it as flexible as possible to adapt to new upcoming HMDs.

At this very moment, some of this is doable with the “Virtual Reality Viewport Addon”. I’m using a third-party Python wrapper of the Oculus SDK (generated partly with ctypesgen) that uses ctypes to access the library directly. Some details of this approach:

  • The libraries used are from SDK 0.5 (while Oculus is soon releasing SDK 0.7)
  • The wrapper was generated by someone else; I’m yet to learn how to re-create it
  • Direct Mode is not implemented – basically I’m turning multiview on with side-by-side, grabbing the viewport from the screen, and applying a GLSL shader on it manually
  • The wrapper is not being fully used; the projection matrix and the barrel distortion shaders are poorly done on the addon end
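For the curious, the ctypes approach boils down to mirroring the SDK’s C structs in Python and loading the shared library at runtime. The sketch below shows the pattern only – the struct layout and the library path are hypothetical stand-ins, not the real Oculus SDK 0.5 definitions:

```python
import ctypes

# Sketch of what a hand-written ctypes binding could look like.
# The struct layout and library name below are illustrative guesses,
# NOT the actual Oculus SDK API.

class Quatf(ctypes.Structure):
    _fields_ = [("x", ctypes.c_float), ("y", ctypes.c_float),
                ("z", ctypes.c_float), ("w", ctypes.c_float)]

class Vector3f(ctypes.Structure):
    _fields_ = [("x", ctypes.c_float), ("y", ctypes.c_float),
                ("z", ctypes.c_float)]

class Posef(ctypes.Structure):
    """Head pose: orientation quaternion + position, mirroring a C struct."""
    _fields_ = [("Orientation", Quatf), ("Position", Vector3f)]

def load_sdk(path="libOVR.so"):  # path/name is platform-dependent
    """Load the SDK shared library; return None if it is not installed."""
    try:
        return ctypes.CDLL(path)
    except OSError:
        return None
```

Once loaded, the addon would declare argument and return types on the library’s functions and call them directly from Python, no compiled extension module required.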

Virtual Reality Viewport Addon in action – sample scene from Creature Factory 2 by Andy Goralczyk

Not supporting Direct Mode (nor the latest Direct Driver Mode) seems to be a major drawback of this approach (Extended Mode is deprecated in the latest SDKs). The positive points: it is cross-platform, non-intrusive, and (potentially) HMD-agnostic.

The opposite approach would be to integrate the Oculus SDK directly into Blender. We could create the FBOs, gather the tracking data from the Oculus, force a drawing update every time (~75Hz), and send the frame to the Oculus via Direct Mode. The downsides of this solution:

  • License issue – dynamic linking may solve that
  • Maintenance burden – if this doesn’t make it into master, the branch has to be kept up to date with the latest Blender developments
  • Platform specific – which is hard to prevent, since the Oculus SDK is Windows-only
  • HMD specific – this solution is tailored towards the Oculus only

On the plus side, performance would be as good as you could get.

All things considered, this is not a bad solution, and it may be the easiest one to implement. In fact, once we go this way, the same solution could be implemented in the Blender Game Engine.

That said, I would like to see a compromise: a solution that could eventually be expanded to different HMDs and other OSs (Operating Systems). Thus the ideal scenario would be to implement it as an addon. I like the idea of using ctypes with the Oculus SDK, but we would still need the following changes in Blender:

  • A change in the OpenGL Wrapper (bgl) to expose the FBO-related functions
  • A main off-screen rendering function for the viewport, taking externally supplied matrices
  • The same off-screen rendering support in the BGE

The OpenGL Wrapper change should be straightforward – I’ve done this a few times myself. The main off-screen rendering change may be self-contained enough to be incorporated into Blender without much hassle. The function should receive a projection matrix and a modelview matrix as input, as well as the resolution and the FBO bind id.
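As an illustration of the projection-matrix input: an HMD eye typically needs an asymmetric (off-axis) frustum built from the FOV half-angle tangents the SDK reports. Below is a minimal sketch of that construction; the function name is made up and the tangents would come from the SDK per eye, not be hard-coded:

```python
# Sketch: building the off-axis projection matrix that the off-screen
# render function would receive. A real addon would read the four
# tangent values from the HMD SDK for each eye.

def hmd_projection(tan_left, tan_right, tan_up, tan_down, near, far):
    """OpenGL-style projection from FOV half-angle tangents (row-major)."""
    sx = 2.0 / (tan_right + tan_left)
    sy = 2.0 / (tan_up + tan_down)
    ox = (tan_right - tan_left) / (tan_right + tan_left)  # horizontal offset
    oy = (tan_up - tan_down) / (tan_up + tan_down)        # vertical offset
    return [
        [sx,  0.0, ox,  0.0],
        [0.0, sy,  oy,  0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]
```

For a symmetric FOV the offset terms vanish and this reduces to a regular perspective matrix; each eye of an HMD gets its own asymmetric set of tangents.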

The BGE change would be a nice addition and would illustrate the strength of this approach. Given that the heavy lifting is still being done in C, Python shouldn’t affect the performance much, and this could work in a game environment as well. The other advantage is that multiple versions of the SDK can be kept around, in order to keep supporting OSX and Linux until a new cross-platform SDK ships.
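One way to picture the “multiple SDK versions” idea is a small per-platform backend registry inside the addon. The backend module names and the version mapping below are hypothetical placeholders, not real modules:

```python
import sys

# Sketch: an addon keeping several SDK backends side by side and picking
# one per platform. Backend names and versions are hypothetical.

BACKENDS = {
    # platform prefix -> (backend module name, SDK version it wraps)
    "win":    ("oculus_sdk_07", "0.7"),  # latest SDK, Windows-only
    "darwin": ("oculus_sdk_05", "0.5"),  # last SDK with OSX support
    "linux":  ("oculus_sdk_05", "0.5"),  # last SDK with Linux support
}

def pick_backend(platform=None):
    """Return (module_name, sdk_version) for the current platform."""
    platform = platform or sys.platform
    for prefix, backend in BACKENDS.items():
        if platform.startswith(prefix):
            return backend
    raise RuntimeError("No HMD backend available for %s" % platform)
```

The addon would then import the chosen module lazily, so a missing SDK on one platform never breaks the others.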

That’s pretty much it. If you have any related reflections, please share them in the comments below.

Dalai Felinto
Rio de Janeiro, September 18th, 2015

20 Thoughts on “Blender and Oculus, food for thought”

  1. CubedParadox on September 18, 2015 at 4:39 pm said:

    Have you considered using a 3rd-party API like OSVR, Vrui, or Valve’s OpenVR? If you could get Oculus Direct Mode working through one of those toolkits, you would have the added bonus of automatically supporting all the other major VR headsets and displays as well.

    • dfelinto on September 18, 2015 at 7:29 pm said:

      Hi, those should also be possible, but even they could be implemented as addons instead of a low-level integration.

      In fact, at IEEEVR I saw a presentation from OSVR and talked with their developers. They actually suggested a ctypes integration in a similar fashion as the easiest approach for Blender support.

      I’m all in favour of an open source library that supports all hardware. But I think it’s still a bit early in the game for us to bet our chips in that direction. So for now I prefer to have a more flexible low-level API that can be used by different HMD plugins, instead of investing time in supporting whichever open SDK we guess will take over the market.

  2. Pingback: Code: Blender and Oculus, food for thought

  3. Hi,
    this is good food for thought.
    In case you are around, would you pass by Brussels on 6–7 November during the Libre Virtual Reality Meeting we are organising?
    Even participating in a chat session could be cool…
    The program is being finalised now and we are constructing the workshop schedule… http://lvrm.f-lat.org

  4. As a user and developer of VR applications and software, I would recommend implementing OpenHMD, which is an HMD-agnostic library.

    It’s a compact C library (with Python, Perl, and C++ bindings available) under the Boost license (which is compatible with Blender’s license).
    The project is quite active, has multiple contributors (including me), and is even starting to work with HMD developers for official support.

    I would be up for helping implement OpenHMD in Blender and providing support where needed.

    • dfelinto on September 21, 2015 at 2:55 pm said:

      Hi Joey,

      I like the idea of supporting an open source library. But there are multiple libraries out there, and I don’t want to wait until one stands on top of the others. Instead I would be happier to have a system that different libraries can be plugged into, thus the idea of working on an addon instead of a low-level C integration.

      Thus it would be nice to support OpenHMD in the addon as well. My original idea of the addon was to support multiple HMDs, I just happened to start from Oculus. So it would be great to see what would be needed to support other backend solutions, in order to make sure all the required changes in Blender are done at once.

      Now a few questions:
      * Does it have an official Python library?
      * Does OpenHMD support Oculus DK2 tracking?
      * Does it support Direct Mode?
      * Does it help with setting up the projection matrix and handling an eventual barrel shader?

      Cheers,
      Dalai

      • OpenHMD has Python bindings created by one of our contributors; these bindings are actively updated.

        Oculus DK2 is supported, and experimental support for positional tracking is being worked on.

        Direct Mode is not something we currently do, since we do not do any direct communication with GL/DX yet, but users of the library have found methods to do this using SDL2, which will be included in an example file.
        This is something we are looking at.

        OpenHMD delivers the projection matrix for you, and though we have barrel shaders for the DK1 and DK2 in GLSL, you have to apply them yourself for now.

        As previously mentioned, we are an active group and are working with other HMD developers to deliver support for their devices.
        We hope to have a handful of devices ready in a couple of months.

  5. Benoit Bolsee on September 25, 2015 at 3:10 pm said:

    Hi Dalai,

    By coincidence, I have started to work on a project that requires the same sort of functionality in the BGE.
    A method to send ImageRender to an FBO is just what I need. My first idea was to embed the creation of the FBO entirely in the ImageRender object, but a low-level API will do just as well and will be more generic. We can join forces on this; I haven’t done any coding yet.

    But there is another requirement: it should be possible to run the BGE on a headless computer. The main BGE loop should not render anything, or should render to a small off-screen buffer. All user interactions will come through Python with VRPN, and all graphic output will be done through ImageRender and an FBO.

    Disabling the window manager in the BGE seems a bit of a challenge. Any thoughts on how this can be achieved in Linux?

    • dfelinto on September 25, 2015 at 10:06 pm said:

      Hey Benoit,

      Nice. Let me guess, the decklink branch? I was about to write to you, since I suspected you have done something along those lines already 🙂

      I indeed think it’s more flexible if we leave the FBO creation to Python, so I would prefer to leave the FBO creation/deletion outside the core code.

      The BGE would be the final task I was planning to tackle, but we’ll see. We can at least agree on the design together. I just committed my “framebuffer” branch to the Blender git repository, so we can collaborate there. So far there is one commit, exposing the FBO related functions I needed here. Feel free to expose others there. I will leave my wip code for the Blender viewport offscreen rendering on github (framebuffer-dev) until it’s ready for the framebuffer branch.

      So far I tested the BGL framebuffer creation with Blender and it didn’t work entirely (I’m still investigating). It seems that glClear() is the only thing affecting the FBO; I can’t draw anything there. But I will see if I can build a simple test in the BGE of drawing my FBO on top of the viewport. This should help test whether the FBO is being rendered correctly once we have the upcoming ImageRender function.

      > But there is another requirement: it should be possible to run the BGE on a headless computer.

      You can look at how Blender implements the --background option and have the same for the Blenderplayer. It would be the most elegant solution in terms of implementation.

      • dfelinto on September 25, 2015 at 11:15 pm said:

        Funny enough I just managed to get a generic FBO drawing working in Blender with BGL (not the real viewport data yet), but the BGE test (which helped see what I was doing wrong in Blender) is the one not working now. The file is here if you want to take a look at it:
        http://www.dalaifelinto.com/ftp/tmp/bge_fbo.blend

      • Benoit Bolsee on September 28, 2015 at 3:16 pm said:

        Great, I am downloading the code and looking into it.

        Re the headless requirement, the --background option in Blender simply skips the creation of the Ghost window manager. That is not so easy to do in the BGE, as the window manager is at the heart of the main loop and the creation of the OGL context.
        I found instead this simple solution that works for a nVidia card in a headless Linux:
        https://wiki.archlinux.org/index.php/NVIDIA#Overriding_monitor_detection
        It’s good enough for the customer and doesn’t require changing the BGE.

        • Benoit Bolsee on September 28, 2015 at 3:40 pm said:

          Looks like a bug here:
          glGetIntegerv(GL_ACTIVE_TEXTURE, act_tex)

          glBindTexture(GL_TEXTURE_2D, act_tex[0])

          while it should instead be:
          glGetIntegerv(GL_TEXTURE_BINDING_2D, act_tex)

          glBindTexture(GL_TEXTURE_2D, act_tex[0])

          • dfelinto on September 29, 2015 at 3:18 am said:

            Nice catch 🙂

            I just updated the file: http://www.dalaifelinto.com/ftp/tmp/bge_fbo.blend

            It is now using Blender GPU wrappers to handle FBO setup, binding, unbinding and delete. (get the latest changes from the framebuffer branch)

            The missing bits are:

            * The gpu module is not available in the Blenderplayer, only in Blender/embedded BGE

            * The self._draw_a_quad() should be replaced by a call to the ImageRenderOffscreen function you wanted to make.

            Note that gpu.offscreen_object_create(width, height) will return an object type, which should be the one used by the ImageRenderOffscreen(offscreen_object, projection_matrix, modelview_matrix);

            (at least that’s my suggestion)

            I’m currently banging my head to find a way to pass the gpu.OffscreenObject as a parameter to an Operator. But for the BGE function it should be easier to use it.

            (nice that you found a solution for the headless issue)

            Technically, with this GPUOffscreenObject approach we don’t even need to expose the FBO functions to bgl. But let’s leave this decision for later.

          • dfelinto on September 29, 2015 at 6:01 pm said:

            Alright, the framebuffer branch has now all that I need to use it for my addon (blender only, not bge).

            (I will record a video later, once I have the Oculus part working and plugged on the addon)

            I also believe that I introduced something potentially undesired for the bge. You won’t be able to expose the gpu module as it is, since it’s relying on some deep-blender calls.

            Maybe those calls could be externalized and we could have stubs for the Blenderplayer.

  6. Alfonso on December 5, 2015 at 5:34 pm said:

    Hi Dalai,

    I only have a DK1 Dev kit to test my models in BGE. This addon is very cool, and I wonder if it could be used for DK1 or only a DK2 will be supported.

    Thanks.

    • dfelinto on December 7, 2015 at 4:07 pm said:

      Hi Alfonso. It works in both DK1 and DK2. I have a semi-working version for the BGE as well (the addon is for Blender viewport) I should be able to share in the coming days.

      • Alfonso on January 21, 2016 at 9:05 am said:

        I have tested the viewport addon, and it is awesome. Everything works fine with the instructions you give in the Readme file. I am very interested in your progress with the BGE and eager to run tests.

        Best Regards.

  7. Patrick Depoix on November 7, 2016 at 8:17 pm said:

    Hi Dalai,

    You remember me from Google+: Spirou4D (Blenderartist.com)…
    On YouTube I asked you some questions about the Blender VR addon.
    Here is a news you could like:
    http://www.archdaily.com/798970/trimble-sketchup-viewer-allows-you-to-manipulate-hologram-models-in-the-real-world?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ArchDaily+(ArchDaily)

    byebye
    Spirou4D
