Have you ever found yourself needing to change a .blend file remotely and VNC / Remote Desktop is not working?
In my case I finished rendering the left eye of a video, and wanted to do the same for the right one. I did it in parts due to the memory limit of the rendering station. And VNC is not working because … no idea. But it’s Friday and I won’t have physical access to the remote computer until next week.
If you read my blog you will know that I’m repeating myself. I can’t help stressing this enough though.
Part of the challenge of stereo movie making is to work in 3D as early as possible in your pipeline. This is the main reason the Multi-View implementation ranges from the 3D viewport all the way to the sequencer.
VR (Virtual Reality) movie making is no different. Even more so, if we consider the uniqueness of the immersive experience.
So what if … What if we could preview our work in VR since the first stroke of the storyboard?
Here I’m demoing the Oculus Addon I’ve been developing as part of an ongoing research at a virtual reality lab in Rio (Visgraf/IMPA).
Notice that I’m not even drawing in VR. I’m merely experiencing the work done by Daniel “Pepeland” Lara in his demo file.
The applications of this addon are various, but it mainly focuses on supporting HMDs (head-mounted displays) in the Blender viewport.
At the moment the support is restricted to Oculus (Rift and DK2), and it excels on Windows, since the fastest direct mode is only supported on Oculus’s latest (Windows-only) SDK.
Disclaimer 1: A similar reflection should be valid for other HMDs (Head Mounted Displays).
Disclaimer 2: This is a personal essay, based on my personal experience. Take the following words with a grain of salt.
The buzz about VR (Virtual Reality) is far from over. And just as with regular stereo 3D movie pipelines, we want to work in VR as soon as possible.
That doesn’t necessarily mean sculpting, animating, and grease-penciling all in VR. But we should at least have a quick way to preview our work in VR – before, after, and during every single one of those steps.
At some point we may want special interactions when exploring the scene in VR, but for the scope of this post I will stick to exploring a way to toggle in and out of VR mode.
That raises the question: how do we “see” in VR? Basically we need two “renders” of the scene, each one with a unique projection matrix and a modelview matrix that reflects the head tracking and the in-Blender camera transformations.
These should be updated as often as possible (e.g., at 75Hz), even if nothing changes in your Blender scene (since the head can always move around). Just to be clear, by render here I mean the same real-time render we see in the viewport.
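To make that concrete, here is a rough sketch (mine, not the addon’s code) of how the two modelview matrices could be composed with Blender’s mathutils; the projection matrices would come from the HMD SDK, which knows the lens parameters.

```python
# A rough sketch with placeholder values: in a real setup the head pose and
# interocular distance come from the HMD SDK at display rate (~75Hz).
# Matrix multiplication uses '*' as in Blender 2.7x; newer Blender uses '@'.
from mathutils import Matrix, Quaternion, Vector

def eye_modelview(camera_world, head_rot, head_pos, eye_offset):
    """Camera transform combined with the tracked head pose and the per-eye
    offset, inverted to give that eye's modelview matrix."""
    head = Matrix.Translation(head_pos) * head_rot.to_matrix().to_4x4()
    eye = Matrix.Translation(eye_offset)
    return (camera_world * head * eye).inverted()

head_rot = Quaternion((1.0, 0.0, 0.0, 0.0))   # identity pose (placeholder)
head_pos = Vector((0.0, 0.0, 0.0))
ipd = 0.064                                   # 64mm interocular (placeholder)
left = eye_modelview(Matrix.Identity(4), head_rot, head_pos, Vector((-ipd / 2, 0.0, 0.0)))
right = eye_modelview(Matrix.Identity(4), head_rot, head_pos, Vector((ipd / 2, 0.0, 0.0)))
```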
There are different ways of accomplishing this, but I would like to see an addon approach, to make it as flexible as possible to adapt to new upcoming HMDs.
At this very moment, some of this is doable with the “Virtual Reality Viewport Addon”. I’m using a third-party Python wrapper of the Oculus SDK (generated partly with ctypesgen) that uses ctypes to access the library directly (a minimal sketch of that idea follows the list below). Some details of this approach:
The libraries used are from SDK 0.5 (while Oculus is about to release SDK 0.7)
The wrapper was generated by someone else; I have yet to learn how to re-create it
Direct mode is not implemented – basically I’m turning multiview on with side-by-side, screen grabbing the viewport, and applying a GLSL shader on it manually
The wrapper is not being fully used; the projection matrix and the barrel distortion shaders are poorly done on the addon end
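For reference, the ctypes idea boils down to something like the sketch below; the library path and the function name are illustrative placeholders, not the actual Oculus SDK symbols.

```python
# Illustrative only: load a vendor runtime with ctypes and call into it.
# 'libOVR.so' and 'placeholder_init' are made-up names, not real SDK symbols.
import ctypes

sdk = ctypes.CDLL("libOVR.so")            # path to the HMD runtime library
sdk.placeholder_init.restype = ctypes.c_int
if sdk.placeholder_init() != 0:
    raise RuntimeError("HMD runtime failed to initialize")
```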
Not supporting Direct Mode (nor the latest Direct Driver Mode) seems to be a major drawback of this approach (Extended mode is deprecated in the latest SDKs). The positive points are: cross-platform, non-intrusiveness, (potentially) HMD agnostic.
The opposite approach would be to integrate the Oculus SDK directly into Blender. We could create the FBOs, gather the tracking data from the Oculus, force a drawing update every frame (~75Hz), and send the frame to the Oculus via Direct Mode. The downsides of this solution:
Maintenance burden: If this doesn’t make it into master, the branch has to be kept up to date with the latest Blender developments
Platform specific – hard to avoid, since the latest Oculus SDK is Windows-only
HMD specific – this solution is tailored towards Oculus only
On the plus side: performance as good as you could get
All considered, this is not a bad solution, and it may be the easiest one to implement. In fact, once we go this way, the same solution could be implemented in the Blender Game Engine.
That said, I would like to see a compromise. A solution that could eventually be expanded to different HMDs, and other OSs (Operating Systems). Thus the ideal scenario would be to implement it as an addon. I like the idea of using ctypes with the Oculus SDK, but we would still need a few changes in Blender.
The OpenGL Wrapper change should be straightforward – I’ve done this a few times myself. The main off-screen rendering change may be self-contained enough to be incorporated in Blender without much hassle. The function should receive a projection matrix and a modelview matrix as input, as well as the resolution and the FBO bind id.
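Purely as an illustration of the shape of that function – nothing with this name exists in Blender – the Python side could look like this:

```python
# Hypothetical signature only; it just spells out the inputs discussed above
# (one call per eye, per frame).
from mathutils import Matrix

def viewport_offscreen_draw(fbo_bind_id, width, height, projection, modelview):
    """Draw the current viewport into the caller-provided FBO, using the given
    projection and modelview matrices instead of the viewport's own."""
    raise NotImplementedError("illustrative signature only")
```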
The BGE change would be a nice addition and would illustrate the strength of this approach. Given that the heavy lifting is still done in C, Python shouldn’t affect the performance much and could work in a game environment as well. The other advantage is that multiple versions of the SDK can be kept, in order to keep maintaining OSX and Linux until a new cross-platform SDK is shipped.
That’s pretty much it; if you have any related reflections please share them in the comments below.
Rio de Janeiro, September 18th, 2015
Thanks to Blend4Web, there is a very straightforward way of embedding a 3D model from Blender into Facebook. I couldn’t let this pass, and I gave it a go today. If you want to check out the “app”, it is publicly available here (if the app fails to load, disable Ghostery or similar active addons).
The trickiest part for me was to enable https in my server (which is required by Facebook). Apart from that, everything was pretty straightforward (the navigation in the app is the default built-in viewer from Blend4Web).
I would like to thank Cícero Moraes and his team, for sharing the 3D model with me, and allowing me to re-share it via Facebook.
Saint Anthony, forensic facial reconstruction in 3D by Cícero Moraes, Paulo Miamoto, PhD, and team.
This project was part of Ebrafol (Equipe Brasileira de Antropologia Forense e Odontologia Legal) activities, made possible thanks to a plethora of open source tools.
The final digital file was made with Blender 3D, and is shown here exported via Blend4Web.
If you follow my work (aka my annual blog update 😉 ) you know I’m a great enthusiast of anything slightly resembling sci-fi, geek, gadget things. And sometimes I’m lucky enough to team up with amazing people in order to put those toys to some good use.
In this video I showcase the Planovision system – a ‘3D Table’ composed of a head-tracking device, a 3D projector, and 3D glasses. I’m currently working towards integrating the Planovision with an authoring tool in order to build real demos, and help the project kick off.
After the initial integration with Blender via the Blender Game Engine (after all, we don’t want just to see the 3D models, but to interact with them), today I got the system to work with BlenderVR, to help with integrating different inputs (head-tracker, 3D mouse, Leap Motion, …). I have been helping with the development of BlenderVR since last October, and we only recently released its 1.0 version. BlenderVR is a well-behaved guinea pig, I must say.
The Planovision has been developed under the guidance of professor Luiz Velho, director of Visgraf/IMPA in Rio de Janeiro, Brazil.
BlenderVR is an open source virtual-reality framework built on top of the Blender Game Engine. BlenderVR was created and is developed by LIMSI/CNRS in Orsay, France, and is aimed at Oculus, CAVEs, and video walls, among other VR display types.
There is something tricky about stereoscopic panoramas. You can’t just render a pair of panoramas and expect them to work. The image would work great for the virtual objects in front of you, but the stereo eyes would be swapped when you look behind you.
How do we solve that? Do you remember the 3D Fulldome Teaser? Well, the technique is exactly the same one. We start by determining an interocular distance and a convergence distance based on the stereo depth we want to convey. From there the software (Cycles) rotates a ‘virtual’ stereo camera pair for each pixel to be rendered, so that both cameras’ rays converge at the specified distance.
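For the curious, here is a minimal sketch of the idea in plain Python (this is not the Cycles code): for each panorama column we offset the two eyes sideways and aim both at the convergence point of that column.

```python
# Minimal sketch: per panorama column (longitude), place the two eyes on
# either side of the centre and aim both at the column's convergence point.
import math

def eye_for_column(longitude, interocular=0.065, convergence=2.0, eye='L'):
    """Return (eye_position, view_direction) for one panorama column.
    longitude is in radians; the rig sits at the origin, on the XY plane."""
    dx, dy = math.cos(longitude), math.sin(longitude)   # central view direction
    sx, sy = -dy, dx                                    # sideways axis
    half = interocular / 2.0 * (-1.0 if eye == 'L' else 1.0)
    pos = (half * sx, half * sy, 0.0)
    target = (convergence * dx, convergence * dy, 0.0)  # shared convergence point
    vec = (target[0] - pos[0], target[1] - pos[1], 0.0)
    norm = math.hypot(vec[0], vec[1])
    return pos, (vec[0] / norm, vec[1] / norm, 0.0)
```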
Oculus barrel correction screen shader applied to a view inside the panorama
This may sound complicated, but it’s all done under the hood. If you want to read more about this technique I recommend this paper from Paul Bourke on Synthetic stereoscopic panoramic images. The paper is from 2006 so there is nothing new under the Sun.
If you have an Oculus DK2 or similar device, you can grab the final image below to play with. I used Whirligig to visualize the stereo panorama, but there are other alternatives out there.
Top-Bottom Spherical Stereo Equirectangular Panorama – click to save the original image
This image was generated with a spin-off branch of multiview named Multiview Spherical Stereo. I’m still looking for an industry-standard name for this method. But in the meanwhile that name is growing on me.
I would also like to remark on the relevance of open projects such as Gooseberry. The always warm-welcoming Gooseberry team just released their benchmark file, which I ended up using for those tests. Being able to take a production-quality shot and run whatever multi-vr-pano-full-thing you may think of on it is priceless.
If you want to try to render your own Spherical Stereo Panoramas, I built the patch for the three main platforms.
* Don’t get frustrated if the links are dead. As soon as this feature is officially supported by Blender I will remove them. So if that’s the case, get a new Blender.
How to render in three steps
Enable ‘Views’ in the Render Layer panel
Change camera to panorama
Panorama type to Equirectangular
And leave ‘Spherical Stereo’ marked (it’s on by default at the moment). Remember to post in the comments the work you did with it!
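If you prefer to do the same from Python, here is a hedged sketch; the property names are the ones exposed by official Blender builds that later shipped this feature, so they may differ slightly in the experimental branch.

```python
# Hedged sketch: property names as exposed in official Blender releases that
# shipped Multi-View and Spherical Stereo; the experimental branch may differ.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.use_multiview = True            # step 1: enable 'Views'

cam = scene.camera.data
cam.type = 'PANO'                            # step 2: panorama camera
cam.cycles.panorama_type = 'EQUIRECTANGULAR' # step 3: equirectangular
cam.stereo.use_spherical_stereo = True       # step 4: spherical stereo
```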
Last and perhaps least is the small demo video above. The experience of seeing a 3D set doesn’t translate well to video. But I can guarantee you that the overall impression from the Gooseberry team was super positive.
Also, this particular feature was the exact reason I was moved towards implementing multiview in Blender. All I wanted was to be able to render stereo content for fulldomes with Blender. In order to do that, I had to design a proper 3D stereoscopic pipeline for it.
What started as a personal project in 2013 ended up being embraced by the Blender Foundation in 2014, which supported me for a 2-month work period at the Blender Institute via the Development Fund. And now in 2015, so close to the Multiview completion, we finally get the icing on the cake.
Support the Gooseberry project by signing up in the Blender Cloud [link]
Support further Blender Development by joining the Development Fund [link]
* Time traveller from the future, hi! If the branch doesn’t exist anymore, it means that the work was merged into master.
Thanks! This is not mine, though. Oculus is one of the supported platforms of the Blender-VR project, to be presented at IEEE VR 2015 next week.
If you are interested in interactive virtual reality and need an open source solution for your CAVE, multiple Oculus setup, or video wall, give Blender-VR a visit. I’m participating in the development of a framework built on top of the Blender Game Engine.
Also if Oculus feels like sending me my own Oculus, I wouldn’t mind. If you do, though, consider sending one to the Blender Foundation as well. I will feel bad when I take the device away from them next week.
Have a good one,
Due to the long review process the patch is not yet in Blender. That said, since there were enough people interested in this feature, I just updated the links above with a more recent build (on top of the current Blender 2.76 RC3).
The build now also supports regular perspective cameras. This is required for cubemap VR renders. For this I also recommend an addon that I was commissioned to build, which renders (or simply sets up) cubemap renders [link].
Note: remember to change your camera pivot to center.
Baking is a popular ‘technique’ to flatten your shading work into easy-to-use images (textures) that can be applied to your 3D models without any concern about lighting calculations. This can help game development, online visualization, 3D printing, archviz animations, and many other fields.
The maps above illustrate Ambient Occlusion and Combined baking. Ambient Occlusion can be used to light the game scene, while Combined emulates what you get out of a full render of your object, which can be used in shadeless engines.
The character baked here is Koro from the Caminandes project. Koro was kindly made available as CC-BY, so while I take no credit for the making of it, I did enjoy supporting their project and using Koro in my tests. Koro and all the other production files from Caminandes Gran Dillama are part of the uber cool USB customized card you can buy to learn the nitty-gritty of their production, and to help support the project and the Blender Foundation.
Open Shading Language
Open Shading Language (OSL) is a shading language created and maintained by Sony Pictures Imageworks and already used by them in many blockbusters (Amazing Spider-Man, MIB III, Smurfs 2, …). It’s a great contribution from Sony to the industry, given that it was released under a permissive license, free to be implemented and expanded by any interested party.
Blender was the first 3d package outside of Sony to officially support OSL, and since November 2012 we can use OSL in a “Script Node” to create custom shaders. Blender uses OSL via Cycles. The “Script Node” was implemented by Brecht, Lukas, Thomas and … me (:
Thus, with baking support in Cycles we get for “free” a way to store the shaders designed with it. In the following example you see the Node Cell Noise sample script from OpenShading.com. So even if your game engine has never heard of OSL, you can still benefit from it to make your textures and materials look more robust. How cool is that?
Open Shading Language Baking
I Want to Try It
There are no official builds of this feature yet. However, if you are familiar with git and building Blender, you can get it from my github repository: clone the bake-cycles branch of the blender-git repository. Once you build it, you need to UV-unwrap the object you want to bake, select it, and run a small bake script.
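A minimal sketch of such a script follows, using the bake operator as it later shipped in official Blender releases; the exact API in the experimental bake-cycles branch may have differed.

```python
# Hedged sketch, not the original script from the post: it uses the Cycles
# bake operator as found in later official Blender releases.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
obj = bpy.context.active_object

# Cycles bakes into the image assigned to the active Image Texture node of
# each material, so create an image and assign it first.
img = bpy.data.images.new("bake_result", width=1024, height=1024)
for slot in obj.material_slots:
    mat = slot.material
    mat.use_nodes = True
    node = mat.node_tree.nodes.new('ShaderNodeTexImage')
    node.image = img
    mat.node_tree.nodes.active = node

bpy.ops.object.bake(type='COMBINED')

img.filepath_raw = "//bake_result.png"
img.file_format = 'PNG'
img.save()
```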
If you can’t build your own Blender get a build on GraphicAll.org. You can also follow my Blender Foundation Weekly Report to learn about the progress of this feature and to be informed on when the work will be ready and merged upstream in the official Blender repository.
There is still more work ahead for this project. Cycles Baking is actually a small part of a big planned baking refactor in Blender, which includes Baking Maps and Cage support. We only decided on Cycles baking as a starting point because the idea was to use Cycles to validate the proposed refactor of the internal baking API.
That means Cycles Baking may or may not hit Blender on its own any time soon. There are bugs to be fixed and loose ends to be tied, so it’s not as if I’m anxiously wondering when this will land anyway (;
I would like to take the opportunity to thank Brecht van Lommel for all the help along this project, and the Blender Foundation for the work opportunity. I’m glad to be involved in a high impact project such as the Blender development.
Last but not least. If you work professionally with Blender and can benefit from features like this, consider donating to the Blender Foundation via the Development Fund page.
Final render image. Light and materials: Dalai Felinto (me); post-processing: Bruno Nitzke; modelling: multiple authors
Last month I attended a Cycles workshop during BlenderPRO in Palmas (Brazil). I went to BlenderPRO initially to give a talk on game development with Blender and a Python workshop on Blender addon development. Luckily for me, the conference went beyond my expectations and I had a great time attending some of the other talks and workshops as well. One workshop I particularly enjoyed was “Cycles for Architecture Rendering” by Bruno Nitzke.
The workshop started with a pre-modelled scene, with a camera already staged (assets from the SketchUp 3D Warehouse and Blend Swap). From there we started talking about lighting and different lens/camera settings. For interior scenes, Bruno starts with an HDR map for indirect ambient lighting. We then set up the HDR, a Sun light, and a Point light for the side lamp, and the lighting was pretty much done.
From there we covered architectural material settings in Cycles, with a lot of Mix Shader nodes, good UV-mapped textures, and some procedural textures to add extra perceived randomness (e.g., in the Barcelona chair). It was a 4-hour workshop, so he couldn’t cover everything he wanted. To compensate, I used the Color Management settings in Blender to bring my image closer to the final look I envisioned: Film ‘Kodak Ektachrome 320T’, exposure 0.755, gamma 1.55. By the end of it, my raw image looked like this:
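For reference, the exposure and gamma can also be set from Python as below; whether the film look itself is available depends on the OCIO configuration of your build.

```python
import bpy

scene = bpy.context.scene
scene.view_settings.exposure = 0.755
scene.view_settings.gamma = 1.55
# The film emulation was picked in the Color Management panel; if your OCIO
# config ships that look, it can be set with:
# scene.view_settings.look = 'Kodak Ektachrome 320T'
```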
After the workshop, Bruno asked for my rendered image so he could show me the post-processing his clients are used to (and pleased by). You can check the node setup; it’s nothing very fancy, but it’s enough to give the image that extra punch. The final image you can check in the post banner 😉
Composite Nodes by Bruno Nitzke – click to enlarge
As someone who has touched Cycles code here and there, it’s nice to be able to use the tool and reconnect with my architect side. New readers of this blog can check my 2007 renders with Blender, SketchUp, VRay, … at the very end of my portfolio.
A big thank you to Bruno for being so clear in his instructions and for his commitment to the class. And to the organizers of BlenderPRO for putting together such a memorable event.
It is worth mentioning that this is pure stabilization, based on keeping one point steady and the angle between the two points constant across the footage.
For more advanced tracking a more robust system would be needed (e.g., selecting four floor points and a horizon point to keep the footage always facing up and in the same direction, or a damping system to allow some rotation, …). But basically the client was happy with the solution, and thus so were we.
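Here is a minimal sketch of that two-point idea (not the addon’s actual code): per frame, translate so the first tracked point stays put and rotate so the angle of the segment between the two points matches the reference frame.

```python
# Minimal sketch of two-point stabilization. p1/p2 are the tracked points
# of each frame; the reference is taken from the first frame.
import math

def stabilize_frame(p1, p2, ref_p1, ref_angle):
    """Return (dx, dy, rotation) that pins p1 to its reference position and
    cancels the rotation of the p1->p2 segment relative to the reference."""
    angle = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    return (ref_p1[0] - p1[0], ref_p1[1] - p1[1], ref_angle - angle)

frames = [((100.0, 80.0), (220.0, 90.0)),
          ((103.0, 78.0), (221.0, 95.0))]
ref_p1, ref_p2 = frames[0]
ref_angle = math.atan2(ref_p2[1] - ref_p1[1], ref_p2[0] - ref_p1[0])
for p1, p2 in frames:
    print(stabilize_frame(p1, p2, ref_p1, ref_angle))
```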
Here is a video showing how to use the tool (@6:48 shows before/after)
Maybe in the future, with some further interest and funding this can be expanded to a more complete solution. Meanwhile if someone wants to expand the solution, you are welcome to contribute on github 😉
Addon implementation based on the original work developed last year on Visgraf/IMPA by a different project/team (D. Felinto, A. Zang, and L. Velho): [link].