If you follow my work (aka my annual blog update 😉 ) you know I’m a great enthusiast of anything slightly resembling sci-fi, geeky, gadgety things. And sometimes I’m lucky enough to team up with amazing people in order to put those toys to some good use.
In this video I showcase the Planovision system – a ‘3D Table’ composed of a head-tracking device, a 3D projector, and 3D glasses. I’m currently working on integrating the Planovision with an authoring tool in order to build real demos and help the project kick off.
After the initial integration with Blender via the Blender Game Engine (after all, we don’t want just to see the 3D models, but to interact with them), today I got the system to work with BlenderVR, to ease the integration with different inputs (head-tracker, 3D mouse, Leap Motion, …). I’ve been helping with the development of BlenderVR since last October, and we only recently released its 1.0 version. BlenderVR is a well-behaved guinea pig, I must say.
The Planovision has been developed under the guidance of professor Luiz Velho, director of Visgraf/IMPA in Rio de Janeiro, Brazil.
BlenderVR is an open source virtual-reality framework built on top of the Blender Game Engine. BlenderVR was created and is developed by LIMSI/CNRS in Orsay, France, and is aimed at the Oculus, CAVEs, and video walls, among other VR display types.
There is something tricky about stereo panoramas. You can’t just render a pair of panoramas and expect them to work. The image would look right for the virtual objects in front of you, but the stereo eyes would be swapped when you look behind you.
How to solve that? Do you remember the 3D Fulldome Teaser? Well, the technique is exactly the same one. We start by determining an interocular distance and a convergence distance based on the stereo depth we want to convey. From there the software (Cycles) will rotate a ‘virtual’ stereo camera pair for each pixel to be rendered, so that both cameras’ rays converge at the specified distance.
Oculus barrel correction screen shader applied to a view inside the panorama
This may sound complicated, but it’s all done under the hood. If you want to read more about this technique I recommend this paper from Paul Bourke on Synthetic stereoscopic panoramic images. The paper is from 2006 so there is nothing new under the Sun.
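The per-pixel camera placement is easy to sketch in 2D (top view). The function below is an illustrative toy, not Cycles’ actual code, and the default interocular and convergence values are just example numbers: for each rendered longitude, each eye is offset half the interocular distance sideways and toed in so its ray passes through the convergence point.

```python
import math

def eye_ray(longitude, eye, interocular=0.065, convergence=1.95):
    """Top-down 2D sketch: position and ray direction of one eye
    for the pixel at the given longitude (radians)."""
    # central view direction for this pixel
    view = (math.cos(longitude), math.sin(longitude))
    # 'right' vector, perpendicular to the view direction
    side = (math.sin(longitude), -math.cos(longitude))
    sign = 1.0 if eye == 'right' else -1.0
    half = sign * interocular / 2.0
    pos = (half * side[0], half * side[1])
    # toe-in: aim the ray at the convergence point on the central axis
    target = (convergence * view[0], convergence * view[1])
    d = (target[0] - pos[0], target[1] - pos[1])
    length = math.hypot(d[0], d[1])
    return pos, (d[0] / length, d[1] / length)
```

Because the pair is rebuilt for every pixel, the eyes effectively spin along with the view direction, which is what prevents the swapped-stereo problem behind you.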
If you have an Oculus DK2 or similar device, you can grab the final image below to play with. I used Whirligig to visualize the stereo panorama, but there are other alternatives out there.
Top-Bottom Spherical Stereo Equirectangular Panorama – click to save the original image
This image was generated with a spin-off branch of multiview named Multiview Spherical Stereo. I’m still looking for an industry-standard name for this method, but in the meanwhile that name is growing on me.
I would also like to remark on the relevance of open projects such as Gooseberry. The always warm-welcoming Gooseberry team just released their benchmark file, which I ended up using for these tests. Being able to get a production-quality shot and run whatever multi-vr-pano-full-thing you may think of on it is priceless.
If you want to try to render your own Spherical Stereo Panoramas, I built the patch for the three main platforms.
* Don’t get frustrated if the links are dead. As soon as this feature is officially supported by Blender I will remove them. So if that’s the case, get a new Blender.
How to render in three steps
1. Enable ‘Views’ in the Render Layers panel
2. Change the camera type to Panorama
3. Set the panorama type to Equirectangular

And leave ‘Spherical Stereo’ checked (it’s on by default at the moment). Remember to post in the comments the work you did with it!
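For script lovers, the same three steps can be done from the Python console. This is a sketch assuming the patched build exposes the same properties that later shipped with Blender; the property names below come from the current bpy API and were not verified against the patch itself:

```python
import bpy

scene = bpy.context.scene
camera = scene.camera.data

# step 1: enable multiview ('Views' in the Render Layers panel)
scene.render.use_multiview = True
scene.render.views_format = 'STEREO_3D'

# steps 2 and 3: panorama camera with an equirectangular projection
camera.type = 'PANO'
camera.cycles.panorama_type = 'EQUIRECTANGULAR'

# spherical stereo, plus the stereo-depth parameters discussed above
camera.stereo.use_spherical_stereo = True
camera.stereo.interocular_distance = 0.065
camera.stereo.convergence_distance = 1.95
```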
Last and perhaps least is the small demo video above. The experience of seeing a 3D set doesn’t translate well to video, but I can guarantee you that the overall impression from the Gooseberry team was super positive.
Also, this particular feature was the exact reason I moved towards implementing multiview in Blender. All I wanted was to be able to render stereo content for fulldomes with Blender. In order to do that, I had to design a proper stereoscopic 3D pipeline for it.
What started as a personal project in 2013 ended up being embraced by the Blender Foundation in 2014, which supported me for a 2-month work period at the Blender Institute via the Development Fund. And now in 2015, so close to the Multiview completion, we finally get the icing on the cake.
Support the Gooseberry project by signing up in the Blender Cloud [link]
Support further Blender Development by joining the Development Fund [link]
* Time traveller from the future, hi! If the branch doesn’t exist anymore, it means that the work was merged into master.
Thanks! This is not mine, though. The Oculus is one of the supported platforms of the Blender-VR project, to be presented at IEEE VR 2015 next week.
If you are interested in interactive virtual reality and need an open source solution for your CAVE, multiple Oculus setup, or video wall, give Blender-VR a visit. I’m participating in the development of a framework built on top of the Blender Game Engine.
Also if Oculus feels like sending me my own Oculus, I wouldn’t mind. If you do, though, consider sending one to the Blender Foundation as well. I will feel bad when I take the device away from them next week.
Have a good one,
Due to the long review process the patch is not yet in Blender. That said, since enough people were interested in this feature, I just updated the links above with a more recent build (on top of current Blender 2.75a).
Baking is a popular technique to flatten your shading work into easy-to-use images (textures) that can be applied to your 3D models without any lighting-computation concerns. This can help game development, online visualization, 3D printing, archviz animations, and many other fields.
The maps above illustrate Ambient Occlusion and Combined baking. Ambient Occlusion can be used to light the game scene, while Combined emulates what you get out of a full render of your object, which can be used in shadeless engines.
The character baked here is Koro from the Caminandes project. Koro was gently made available as CC-BY, so while I take no credit for the making of it, I did enjoy supporting their project and using Koro in my tests. Koro and all the other production files from Caminandes Gran Dillama are part of the uber-cool customized USB card you can buy to learn the nitty-gritty of their production, and to help support the project and the Blender Foundation.
Open Shading Language
Open Shading Language (OSL) is a shading language created and maintained by Sony Pictures Imageworks and already used by them in many blockbusters (The Amazing Spider-Man, MIB III, Smurfs 2, …). It’s a great contribution from Sony to the industry, given that it was released under a permissive license, free to be implemented and expanded by any interested party.
Blender was the first 3D package outside of Sony to officially support OSL, and since November 2012 we have been able to use OSL in a “Script Node” to create custom shaders. Blender uses OSL via Cycles. The “Script Node” was implemented by Brecht, Lukas, Thomas and … me (:
Thus, with baking support in Cycles we get for “free” a way to store the shaders designed with it. In the following example you see the Node Cell Noise sample script from OpenShading.com. So even if your game engine has never heard of OSL, you can still benefit from it to make your textures and materials look more robust. How cool is that?
Open Shading Language Baking
I Want to Try It
There are no official builds of this feature yet. However, if you are familiar with git and with building Blender, you can get it from my GitHub repository: clone the bake-cycles branch of the blender-git repository. Once you build it, UV-unwrap the object you want to bake, select it, and run a small bake script:
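A minimal version of such a script might look like the following. This is a sketch based on the bake operator as it later landed in Blender; the exact parameter names in the bake-cycles branch may differ, and the output path is just an example:

```python
import bpy

# with the UV-unwrapped object selected and active:
bpy.ops.object.bake(
    type='COMBINED',           # or 'AO' for an Ambient Occlusion map
    filepath='/tmp/baked.png', # example output path
    width=512, height=512,     # resolution of the baked image
    margin=16,                 # pixels of padding around UV islands
    save_mode='EXTERNAL',      # write the result straight to filepath
)
```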
If you can’t build your own Blender, get a build on GraphicAll.org. You can also follow my Blender Foundation weekly reports to learn about the progress of this feature and to know when the work is ready and merged upstream into the official Blender repository.
There is still more work ahead for this project. Cycles baking is actually a small part of a big planned baking refactor in Blender, which includes bake maps and cage support. We only decided to make Cycles baking the starting point because the idea was to use Cycles to validate the proposed refactor of the internal baking API.
That means Cycles baking may or may not hit Blender on its own any time soon. There are bugs to be fixed and loose ends to be tied, so it’s not as if I’m anxiously wondering when this will land anyway (;
I would like to take the opportunity to thank Brecht van Lommel for all the help along this project, and the Blender Foundation for the work opportunity. I’m glad to be involved in a high-impact project such as Blender development.
Last but not least: if you work professionally with Blender and can benefit from features like this, consider donating to the Blender Foundation via the Development Fund page.
Final render. Lighting and materials: Dalai Felinto (me); post-processing: Bruno Nitzke; modelling: multiple authors
Last month I attended a Cycles workshop during the BlenderPRO in Palmas (Brazil). I went to the BlenderPRO initially to give a talk on game development with Blender and a Python workshop on Blender add-on development. Luckily for me, the conference went beyond my expectations and I also had a great time attending some of the other talks and workshops. One workshop I particularly enjoyed was “Cycles for Architecture Rendering” by Bruno Nitzke.
The workshop started with a pre-modelled scene, with a camera already staged (assets from the SketchUp 3D Warehouse and Blend Swap). From there we started talking about lighting and different lens/camera settings. For interior scenes, Bruno starts with an HDR map for indirect ambient lighting. We then set up the HDR, a Sun light, and a Point light for the side lamp, and the lighting was pretty much done.
From there we covered architectural material settings in Cycles, with a lot of Mix Shader nodes, good UV-mapped textures, and some procedural textures to add extra perceived randomness (e.g., in the Barcelona chair). It was a 4-hour workshop, so he couldn’t cover everything he wanted. To compensate, I used Blender’s Color Management to bring my image closer to the final look I envisioned, with the Kodak Ektachrome 320T film look, exposure 0.755, and gamma 1.55. By the end of it, my raw image looked like this:
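For reference, those color-management settings can also be set via Python. A sketch, assuming the film-emulation ‘Look’ name below matches the one in your OpenColorIO configuration (look names depend on the config Blender ships with):

```python
import bpy

view = bpy.context.scene.view_settings
view.look = 'Kodak Ektachrome 320T'  # film-emulation look (name from the OCIO config)
view.exposure = 0.755
view.gamma = 1.55
```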
After the workshop, Bruno asked for my rendered image so he could show me the post-processing his clients are used to (and pleased by). You can check the node setup; it’s nothing very fancy, but enough to give the image that extra punch. The final image you can check in the post banner 😉
Composite Nodes by Bruno Nitzke – click to enlarge
As someone who has touched Cycles code here and there, it’s nice to be able to use the tool and reconnect with my architect side. New readers of this blog can check, at the very end of my portfolio, my 2007 renders with Blender, SketchUp, V-Ray, …
A big thank you to Bruno for being so clear in his instructions and so committed to the class, and to the organizers of the BlenderPRO for putting together such a memorable event.
Worth mentioning: this is pure stabilization, based on keeping one point steady and the angle between the two points constant across the footage.
For more advanced tracking a more robust system would be needed (e.g., selecting four floor points and a horizon point to keep the footage always upright and facing the same direction, or some damping system to allow a bit of rotation, …). But the client was happy with the solution, and thus so were we.
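The core of the two-point approach fits in a few lines. A plain-math sketch (not the add-on’s actual code): pin the first tracked point to its reference position and rotate around it so the angle to the second point stays constant.

```python
import math

def two_point_stabilizer(p1, p2, ref1, ref2):
    """Build a transform mapping this frame's tracked points (p1, p2)
    onto the reference frame's points: p1 is pinned to ref1 and the
    p1->p2 direction is rotated to match ref1->ref2."""
    angle = (math.atan2(ref2[1] - ref1[1], ref2[0] - ref1[0])
             - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    cos_a, sin_a = math.cos(angle), math.sin(angle)

    def apply(point):
        # rotate around p1, then translate p1 onto ref1
        dx, dy = point[0] - p1[0], point[1] - p1[1]
        return (ref1[0] + dx * cos_a - dy * sin_a,
                ref1[1] + dx * sin_a + dy * cos_a)

    return apply
```

Applying this transform to every frame keeps the first point steady and the two-point angle fixed; anything beyond a rigid translation-plus-rotation (scale, perspective) is out of its reach, which is why the four-floor-points idea would need a richer model.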
Here is a video showing how to use the tool (@6:48 shows the before/after):
Maybe in the future, with further interest and funding, this can be expanded into a more complete solution. Meanwhile, if someone wants to expand the solution, you are welcome to contribute on GitHub 😉
Addon implementation based on the original work developed last year on Visgraf/IMPA by a different project/team (D. Felinto, A. Zang, and L. Velho): [link].
Hello there. After nearly two months, I’m back: I left Canada and I’m in Brazil (more specifically Rio de Janeiro, for the time being). Today I went hiking. Some tourists may know “Pão de Açúcar” (Sugar Loaf), one of the famous sightseeing attractions here.
Not only can you take the as-seen-in-a-007-movie cable cars to the top, you can also hike there. It’s a lovely view throughout the entire circuit. You do need someone experienced to walk you through it, since some of the passages require climbing gear.
A mosaic made with Blender – just so we stay on the blog’s topic.
We start by covering the basics of Blender for newcomers, followed by a simple game project from start to end. That should give the reader a perspective on the components of the Blender game engine that are important for making your own projects.
The following chapters are self-contained, each one with its own approach. Most of them have tutorials and short projects focused on the presented tools. Finally, chapter 10 gives the reader a perspective on real-world usage of the Blender game engine, presenting 10 projects from different artists around the world, explained by the authors themselves.
What version of Blender does this book cover?
The book covers Blender 2.66a fully.
Will there be an ebook?
Yes. So far it’s online for Kindle (and in the USA), but other options and regions should be available shortly. For the Kindle version, get it here.
About the authors:
Mike Pan is a CG generalist who started using Blender 10 years ago, before it was open sourced. Mike’s interest in Blender includes everything from special effects to compositing, and from real-time graphics to scripting. He has given talks at the Blender Conference in Amsterdam, hosted workshops at the View Conference in Turin and Blender Workshop in Vancouver, and conducted a three-day Blender course in Kerala, India. Mike is currently the lead programmer for a two-year project at Harvard Medical School to develop a biomolecular visualization software using Blender. Before that, he worked at the University of British Columbia with Dalai on a marine ecosystem visualization project. Mike lives in the always-raining Vancouver, Canada. You can find him at mikepan.com.
Dalai Felinto, who is currently living in Vancouver, Canada, was born in sunny Rio de Janeiro, Brazil. He has been using Blender since beginning his undergraduate studies in Architecture and Urban Planning in 2003. His participation in the Blender Community includes papers, workshops, and talks presented at events such as BlenderPRO in Brazil, Che Blender in Argentina, Blender Conference in Amsterdam, View Conference in Turin, BlenderVen in Venezuela, and Blender Workshop in Canada. He has contributed patches and code to Blender since version 2.47. Dalai uses Blender and the game engine in his work as a science communicator at the University of British Columbia, Canada. However, his day job doesn’t stop him from doing freelance Blender projects around the world. His latest works have been for Italy, England, and the Netherlands. Dalai’s dream career move is to land a job in the movie industry, working at Pixar. Follow his adventures at dalaifelinto.com.
Bonus Question: How did this book help the game engine?
One of the reasons it took us almost three years to complete this project was to make sure it would cover all the latest game engine developments.
It wasn’t an easy call, because we knew the delay could harm the book’s sales. However, my first concern was making sure we were proud of the engine we were writing about. And even though we both make a living out of working with Blender and the game engine, we knew there was room for some pressing improvements.
We finally settled on 2.66a. And before handing the manuscript to the publisher, we made sure everything was updated during the author review phase. From there on, people can follow the release notes of new Blender versions and they will be fine.
The book’s companion files (over a hundred) also helped to test the BGE itself. Just to illustrate: we were short on time prior to the 2.66 release and couldn’t dedicate time to test the official release (somehow, finishing a book takes time (: ). We then worked closely with other developers to make 2.66a stable as far as animation, UV materials, and multi-platform support go. The result? Between 2.66 and 2.66a alone, 15 bugs were fixed by the BGE developers (some by myself directly, others by fellow programmers). And all the book files work in 2.66a as originally conceived, on all major platforms (Linux, Mac, and Windows).
We hope you enjoy the book!
Feel free to drop us a line with any feedback or comments you may have.
Not only is it beautiful, it’s one of the few places in the world where you can bike around, enjoy the view, and do some coding without any safety worries. I will certainly miss that.
Now, what interesting timing. In Brazil, riots are taking place for reasons I not only vouch for, but would love to take part in. If your local news is not covering them, do check the links below for more information.
As a curious contrast, the last riots in Vancouver were thanks to … the Stanley Cup (a hockey competition). Were they protesting the abusive spending on stadium renewals, the privatization of public space, the lack of counterpart for the population? No, my friends, the “protests” were due to the poor performance of the Canucks in the cup. What?!? Yup, go figure.
For a “tinkering” developer there is no satisfaction like trying your code in production and having it work out of the box. I’ve been coding the 3-D stereo support for the multiview branch with no stereoscopic display, so today was the first time I could see it in action… and it works 😉
I tested top-bottom, side-by-side, and interlaced (windowed and fullscreen). For the interlaced windowed mode, the “swap left-right images” option is particularly important.
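Row-interlaced stereo is simple to reason about: even scanlines go to one eye, odd ones to the other. A toy sketch (lists of rows standing in for an image, not Blender’s actual compositing code) showing what the swap option does:

```python
def interlace(left_rows, right_rows, swap=False):
    """Merge two equal-height 'images' into a row-interlaced frame.
    With swap=True the eyes trade scanlines, which compensates for a
    window whose top edge sits on an odd screen row."""
    first, second = (right_rows, left_rows) if swap else (left_rows, right_rows)
    return [first[i] if i % 2 == 0 else second[i]
            for i in range(len(first))]
```

This is why swapping matters in windowed mode: drag the window down by one pixel and every even window row lands on an odd screen row, feeding each eye the other eye’s image until you toggle the swap.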
The one thing I didn’t test is the pageflip functionality, since I came to the realization that my laptop doesn’t support 120Hz displays. I heard it’s working, though, so I’m at ease.
I would like to express my gratitude to Dr. Maria Lantin, director of the Stereoscopic 3D Centre at Emily Carr University of Art + Design, for so kindly opening the doors and letting me play with their “toys”. The same goes for her lab research team, in particular Alan Goldman, Denise Quesnel, and Sean Arden, for taking the time to show me their latest (extremely cool) projects and to help set me up with the 3-D display and projector. And last but not least, to my friend Dr. Barry Po for connecting me with them (thanks Barry!).
For my own records: with the BenQ projector they are using a 3D-XL 3D Projector Adapter from Optoma to convert side-by-side or top-bottom inputs to time-sequential format.
And if you like the Llama, make sure to check the short movie Caminandes.