Are you stuck on Microsoft Visual Studio, with Windows 10 Anniversary Edition, and missing Qt Creator features such as rename refactoring?


Just like in Qt Creator, I need it, and I want it now!

I found an interesting free extension for MSVC named “Visual C++ Refactoring”. You can get it here.


Easy, right? However, if you get an error message because your .NET Framework version is different, hear me out:

Read More →

Match made in e-heaven

Story originally published on July 28th, 2016.

Meet e-interiores. This Brazilian interior design e-commerce startup transformed its creation process into something entirely new. This tale will show you how Blender made this possible, and how far we got.

We developed a new platform based on a semi-vanilla Blender, Fluid Designer, and our own pipelines. Thanks to the results we accomplished, e-interiores was able to consolidate a partnership with the giant Tok&Stok, providing a complete room design in 72 hours.

Read More →


Sometimes when working with architecture visualization we want a material to be seamlessly repeatable, with its size set from the material's real-world dimensions.

For instance, let’s say we have a photo of a wood texture which corresponds to 2.0 x 0.1 meters.

If we want to re-use this texture for different objects we can’t rely on UV Coordinates to guarantee the correct real world dimensions.

So, how to do it?

To get this properly rendered you can use a node group that I prepared just for that:

  • Download this [sample .blend]
  • Import and add the “Architecture Coordinates” node group to your material
  • Link it to a Mapping node, with Scale: 2.0 (X) 0.1 (Y)
  • Link the Mapping node to your Image Texture node

Optionally you can change the Location (X, Y) and Rotation (Z) of the Mapping node.

Note that for this to work the object scale should be (1, 1, 1).
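If you prefer to do the same steps from a script, here is a rough bpy sketch (Blender 2.77 API). The file path and the material/node names are placeholders — adjust them to your own scene:

import bpy

# Append the "Architecture Coordinates" node group from the sample file
# ("//sample.blend" is a placeholder path; point it to wherever you saved it)
with bpy.data.libraries.load("//sample.blend", link=False) as (data_from, data_to):
    data_to.node_groups = ["Architecture Coordinates"]

mat = bpy.data.materials["Wood"]            # your material (must use nodes)
nodes, links = mat.node_tree.nodes, mat.node_tree.links

# Add the node group and a Mapping node, scaled as in the steps above
group = nodes.new("ShaderNodeGroup")
group.node_tree = bpy.data.node_groups["Architecture Coordinates"]

mapping = nodes.new("ShaderNodeMapping")
mapping.scale = (2.0, 0.1, 1.0)             # X and Y as in the steps above, Z unused

# Architecture Coordinates -> Mapping -> Image Texture
image = nodes["Image Texture"]              # assumes the texture node already exists
links.new(group.outputs[0], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], image.inputs["Vector"])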

Incorrect Textures 🙁

Correct Textures 🙂

Sample File Explained

Note: the sample file requires you to allow running Python scripts, so the drivers work


This file has a cube object whose mesh is controlled by hooks, and the hooks are driven by custom properties of the “Origin” empty. This way you can play with different values without changing the object scale (which would affect the final result).

The test image has a 2 x 1 aspect ratio. If we pretend it was originally a 4.0 x 2.0m texture the whole image will be seen when the width and height of the cube are 4 and 2 respectively.

The Architecture Coordinates node group takes the Object coordinates and transforms them based on the face normal (i.e., whether the face is facing the X, Y or Z axis).
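If you are curious, here is the gist of that transform written as plain Python — just an illustration of the idea, not something you run inside the material:

# For each shading point: keep the two object-space coordinates that lie in
# the plane of the face, chosen by the dominant axis of the face normal.
def architecture_coords(co, normal):
    x, y, z = co
    nx, ny, nz = abs(normal[0]), abs(normal[1]), abs(normal[2])
    if nx >= ny and nx >= nz:      # face points along X: use (Y, Z)
        return (y, z)
    if ny >= nx and ny >= nz:      # face points along Y: use (X, Z)
        return (x, z)
    return (x, y)                  # face points along Z: use (X, Y)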


Ta-da! The texture is properly set up regardless of the face direction.

I hope you find this useful. If you have a different solution for this problem please let me know. Maybe this is something Cycles should have by default?

Note: this file was developed for Blender 2.77; it may not work in other versions.

SPOILER ALERT: The conclusions I reached here are wrong. Big time wrong. According to the internet there is either no convergence distance (it is infinite) or it is a 1.3 m value. That said, carry on if you want to read me babble on about math …

The following text is rather dull and technical. I'm basically dissecting the projection matrix I get from the Oculus SDK in order to guess which convergence distance is being used. Making a long story short, I found that for my setup, with an eye separation (interocular distance) of 6.5 cm, the convergence distance is 4 m.

This is twice as much as the classic “rule of thumb” of a convergence distance of 30× the interocular distance.
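For the curious, the estimate boils down to reading the asymmetry of the frustum. Here is a small sketch of that arithmetic, assuming the usual off-axis stereo convention (sign conventions differ between SDKs, so treat it as an approximation):

# For an off-axis stereo projection matrix P, the horizontal terms relate to
# the zero-parallax (convergence) plane roughly as:
#   convergence = (eye_separation / 2) * P[0][0] / |P[0][2]|
def convergence_distance(P, eye_separation=0.065):
    if P[0][2] == 0.0:
        return float('inf')        # symmetric frustum: parallel cameras, convergence at infinity
    return (eye_separation / 2.0) * P[0][0] / abs(P[0][2])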

Read More →

What if you want to copy and paste text back and forth between Blender and your operating system? Blender has limited clipboard integration when it comes to Font objects, and unfortunately none of the workarounds was satisfying for my picky taste.

So, what do you do when you are your own boss and want to use this nonexistent functionality in Blender? Well, you just stop doing everything else and hack the hell out of Blender's code 🙂

Blender Copy and Paste

Back in Blender 2.49 (around 2009) we could copy/paste the text from either the system clipboard or the internal (per object) text buffer. The reason behind this design was to allow for copy/paste of special formatting (e.g., bold, underline, …) when using it in Blender.

Seven years later, in the latest Blender (2.76), this functionality (system clipboard) is not even available or exposed to the user. To fix this I unified the old system clipboard and internal text buffer functionalities. Thus, if you copy text from a font object it will be available in the system clipboard, and if the text was previously created within Blender, you will also get its original formatting.

Oh, did I mention it supports funky unicode characters? 😉

The patch is still under development and waiting for peer review, but it should be ready to be merged into master soon.

Update: The patch was committed, and it will be part of the upcoming Blender 2.77.


Have you ever found yourself needing to change a .blend file remotely and VNC / Remote Desktop is not working?

In my case I finished rendering the left eye of a video, and wanted to do the same for the right one. I did it in parts due to the memory limit of the rendering station. And VNC is not working because … no idea. But it’s Friday and I won’t have physical access to the remote computer until next week.

Blender Interactive Console to the rescue!

$ blender -b MYFILE.blend --python-console
>>> import bpy
>>> bpy.context.scene.render.views['left'].use = False
>>> bpy.context.scene.render.views['right'].use = True
>>> bpy.ops.wm.save_mainfile()

Now all you need to do is resume your tmux session and kick off the render once again. For other Blender command-line options try blender --help.
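For instance, the render itself can be kicked off in the background from the same shell:

$ blender -b MYFILE.blend -a                 # render the animation with the file's settings
$ blender -b MYFILE.blend -s 1 -e 250 -a     # or set the frame range first (argument order matters)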

This post is obviously based on real events! Have a nice weekend 😉

If you read my blog you will know that I'm repeating myself. I can't stress this enough, though.

Part of the challenge of stereo movie making is to work in 3D as early as possible in your pipeline. This is the main reason the Multi-View implementation ranges from the 3D viewport all the way to the sequencer.

Grease Pencil and Oculus with Blender

VR (Virtual Reality) movie making is no different. Even more so, if we consider the uniqueness of the immersive experience.

So what if … What if we could preview our work in VR from the first stroke of the storyboard?

Here I'm demoing the Oculus Addon I've been developing as part of ongoing research at a virtual reality lab in Rio (Visgraf/IMPA).

Notice that I'm not even drawing in VR. I'm merely experiencing the work done by Daniel “Pepeland” Lara in his demo file.

The applications of this addon are various, but it mainly focuses on supporting HMDs (head-mounted displays) in the Blender viewport.

At the moment the support is restricted to Oculus (Rift and DK2), and it works best on Windows, since the fastest direct mode is only supported by Oculus's latest (Windows-only) SDK.


Disclaimer 1: A similar reflection should be valid for other HMDs (Head-Mounted Displays).
Disclaimer 2: This is a personal essay, based on my personal experience. Take the following words with a grain of salt.


The buzz about VR (Virtual Reality) is far from over. And just as with regular stereo 3D movie pipelines, we want to work in VR as soon as possible.

That doesn't necessarily mean sculpting, animating, and grease-penciling all in VR. But we should at least have a quick way to preview our work in VR – before, after, and during every single one of those steps.

At some point we may want special interactions when exploring the scene in VR, but for the scope of this post I will stick to exploring a way to toggle in and out of VR mode.

That raises the question, how do we “see” in VR? Basically we need two “renders” of the scene, each one with a unique projection matrix and a modelview matrix that reflects the head tracking and the in-Blender camera transformations.

These should be updated as often as possible (e.g., 75Hz), even if nothing changes in your Blender scene (since the head can always move around). Just to be clear, by render here I mean the same real-time render we see in the viewport.
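In pseudo-code, the loop we are after looks roughly like this (the hmd calls and helper names here are made up purely to illustrate the flow, they are not real Blender or Oculus SDK calls):

# Conceptual sketch only -- 'hmd', 'camera_matrix_for' and 'draw_viewport'
# are placeholders, not existing APIs.
while True:                                      # ideally at the HMD refresh rate (~75Hz)
    pose = hmd.get_head_pose()                   # head tracking, even if the scene is static
    for eye in ('left', 'right'):
        projection = hmd.projection_matrix(eye)
        modelview = camera_matrix_for(eye, pose) # in-Blender camera combined with the tracked head
        draw_viewport(projection, modelview)     # the same real-time viewport render
    hmd.submit_frame()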

There are different ways of accomplishing this, but I would like to see an addon approach, to make it as flexible as possible and easy to adapt for new upcoming HMDs.

At this very moment, some of this is doable with the “Virtual Reality Viewport Addon”. I’m using a 3rd-party Python wrapper of the Oculus SDK (generated partly with ctypesgen) that uses ctypes to access the library directly. Some details of this approach:

  • The libraries used are from SDK 0.5 (while Oculus is soon releasing SDK 0.7)
  • The wrapper was generated by someone else; I have yet to learn how to re-create it
  • Direct mode is not implemented – basically I’m turning multiview on with side-by-side, screen grabbing the viewport, and applying a GLSL shader on it manually
  • The wrapper is not being fully used; the projection matrix and the barrel distortion shaders are poorly done on the addon end
Virtual Reality Viewport Addon in action – sample scene from Creature Factory 2 by Andy Goralczyk

Not supporting Direct Mode (nor the latest Direct Driver Mode) seems to be a major drawback of this approach (Extended mode is deprecated in the latest SDKs). The positive points are: it is cross-platform, non-intrusive, and (potentially) HMD agnostic.

The opposite approach would be to integrate the Oculus SDK directly into Blender. We could create the FBOs, gather the tracking data from the Oculus, force a drawing update every cycle (~75Hz), and send the frame to the Oculus via Direct Mode. The downsides of this solution:

  • License issue – dynamic linking may solve that
  • Maintenance burden: if this doesn't make it into master, the branch has to be kept up to date with the latest Blender developments
  • Platform specific – which is hard to prevent since the Oculus SDK is Windows-only
  • HMD specific – this solution is tailored towards Oculus only
  • On the plus side: performance as good as you could get

All considered, this is not a bad solution, and it may be the easiest one to implement. In fact, once we go this way, the same solution could be implemented in the Blender Game Engine.

That said, I would like to see a compromise: a solution that could eventually be expanded to different HMDs and other OSs (Operating Systems). Thus the ideal scenario would be to implement it as an addon. I like the idea of using ctypes with the Oculus SDK, but we would still need the following changes in Blender:

The OpenGL Wrapper change should be straightforward – I’ve done this a few times myself. The main off-screen rendering change may be self-contained enough to be incorporated in Blender without much hassle. The function should receive a projection matrix and a modelview matrix as input, as well as the resolution and the FBO bind id.
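As a sketch, the Python-facing side of that off-screen rendering function could look something like this (a hypothetical signature, not an existing Blender API):

# Hypothetical signature -- the name and parameters are illustrative only.
def draw_viewport_offscreen(projection_matrix, modelview_matrix,
                            width, height, fbo_bind_id):
    """Render the 3D viewport into the given FBO, using the supplied matrices
    instead of the viewport's own camera and perspective."""
    ...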

The BGE change would be a nice addition and would illustrate the strength of this approach. Given that the heavy lifting is still being done in C, Python shouldn't affect the performance much and could work in a game environment as well. The other advantage is that multiple versions of the SDK can be kept, in order to keep supporting OSX and Linux until a new cross-platform SDK is shipped.

That's pretty much it. If you have any related thoughts please share them in the comments below.

Dalai Felinto
Rio de Janeiro, September 18th, 2015


Thanks to Blend4Web, there is a very straightforward way of embedding a 3D model from Blender into Facebook. I couldn’t let this pass, and I gave it a go today. If you want to check out the “app”, it is publicly available here (if the app fails to load, disable Ghostery or similar active addons).

The trickiest part for me was to enable HTTPS on my server (which is required by Facebook). Apart from that, everything was pretty straightforward (the navigation in the app is the default built-in viewer from Blend4Web).

I would like to thank Cícero Moraes and his team, for sharing the 3D model with me, and allowing me to re-share it via Facebook.

Technical sheet:

Saint Anthony, facial forensic reconstruction in 3D by Cícero Moraes, Paulo Miamoto, PhD, and team.

This project was part of Ebrafol (Equipe Brasileira de Antropologia Forense e Odontologia Legal) activities, made possible thanks to a plethora of open source tools.

The final digital file was made with Blender 3D, and is shown here exported via Blend4Web.

If you follow my work (aka my annual blog update 😉) you know I'm a great enthusiast of anything even slightly resembling sci-fi, geeky, gadgety things. And sometimes I'm lucky enough to team up with amazing people in order to put those toys to some good use.

In this video I showcase the Planovision system – a ‘3D Table’ composed of a head-tracking device, a 3D projector, and 3D glasses. I'm currently working towards integrating the Planovision with an authoring tool in order to build real demos, and help the project kick off.

After the initial integration with Blender via the Blender Game Engine (after all, we don't want just to see the 3D models, but to interact with them), today I got the system to work with BlenderVR, to help the integration with different inputs (head-tracker, 3D mouse, Leap Motion, …). I've been helping the development of BlenderVR since last October, and we only recently released its 1.0 version. BlenderVR is a well-behaved guinea pig, I must say.

The Planovision has been developed under the guidance of professor Luiz Velho, director of Visgraf/IMPA in Rio de Janeiro, Brazil.

BlenderVR is an open source virtual-reality framework built on top of the Blender Game Engine. BlenderVR was created and is developed by LIMSI/CNRS in Orsay, France, and is aimed at Oculus, CAVE, and video walls, among other VR display types.


If you want to learn more about the Blender Game Engine, don’t forget to check the book Game Development with Blender, written by Mike Pan and yours truly.

I guess this is one of those times when the line between work and play gets really blurry. I hope it stays this way for a long time 😉