Disclaimer: This tutorial is tailored towards the cluster system I have access to. The output from the commands is unique to each system, and the same goes for the cluster commands. Also, since OpenCL is still not fully supported in Blender, this guide focuses on CUDA rendering only. Anyway, use it wisely, and refer to your own cluster documentation for their equivalent of qsub, interactive testing, nvidia-smi, … Feel free to drop a line if this helps you or if you have any complementing information.

Introduction

Blender has no command-line way to set the render device Cycles should use. That means you may not be able to take full advantage of your GPU renderfarm out of the box.

This is not a problem if you use the same computer to save the file and to render it. However, if you need to render your file on a remote computer (in this case a renderfarm), you may not have access to the same hardware available on the target machine. So you can’t rely on the ‘render device’ settings that are saved with your .blend.

Luckily we can use Python to set the proper rendering settings before producing your image.

Getting the list of GPUs

The ssh terminal I use for login doesn’t have access to any GPU, thus it doesn’t work for testing. Taking advantage of the qsub system available in the cluster, the first step is to ask for an interactive session to get a list of the available GPUs. In an ssh session in your cluster do:

$ qsub -q interactive -I -V -l walltime=01:00:00,nodes=1:gpus=3
qsub: waiting for job 125229.parallel-admin to start
qsub: job 125229.parallel-admin ready

At this point we get the shell prompt back and we can ask Blender what devices it can see.
First copy the code below and paste it into a new file called available_devices.py:

import bpy, _cycles
print(_cycles.available_devices())
# intentionally invalid: the resulting error lists the valid internal names
bpy.context.user_preferences.system.compute_device = 'BLABLABLA'

To use this script in Blender, run:
$ blender -b -P available_devices.py

Here this will return the following list: (‘Intel Xeon CPU’, ‘Tesla M2070’, ‘Tesla M2070’, ‘Tesla M2070’, ‘Tesla M2070 (3x)’)

And also the following intentional error:
TypeError: bpy_struct: item.attr = val: enum “BLABLABLA” not found in (‘CUDA_0’, ‘CUDA_1’, ‘CUDA_2’, ‘CUDA_MULTI_2’)

This “error” is there only to tell us the internal name of the CUDA device we want to use. In my case I want to render with the ‘Tesla M2070 (3x)’, so I should set system.compute_device to ‘CUDA_MULTI_2’ (the error list and the output list follow the same order, so the correspondence is easy to read off).

Rendering

Now to create the actual setup script. Paste the following code in a new file called cuda_setup.py (replace CUDA_MULTI_2 with the device you want to use on your system):

import bpy
# enable CUDA and pick the device reported by available_devices.py
bpy.context.user_preferences.system.compute_device_type = 'CUDA'
bpy.context.user_preferences.system.compute_device = 'CUDA_MULTI_2'

And to use this script in Blender and test render your file:

$ blender -b Nave.blend  -o //nave_###.png -P ~/cuda_setup.py -f 1

Remember, the order of the arguments matters. Any argument added after -f will be parsed only after the render is over. For the complete list of available arguments visit the Blender Wiki.

For the final render you need to use the above command (including the -P cuda_setup.py part) as part of a qsub job dispatching file. Since a Cycles instance only sees the devices of the node it is running on, you need to split your job into batches, so that an instance of Blender runs on every node. A proper dispatching setup is outside the scope of this guide, but the sketch below shows the general idea.
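Just to illustrate the batching idea, here is a minimal, untested sketch that splits a frame range over the nodes and prints one Blender command line per batch – the frame count and node count are made up, and you would still wrap each line in your own qsub job file:

# sketch: split frames 1..240 across 4 nodes, one Blender instance per node
# (hypothetical values – adapt the .blend, paths and counts to your setup)
total_frames = 240
nodes = 4
chunk = (total_frames + nodes - 1) // nodes

for i in range(nodes):
    start = i * chunk + 1
    end = min((i + 1) * chunk, total_frames)
    print("blender -b Nave.blend -o //nave_###.png "
          "-P ~/cuda_setup.py -s %d -e %d -a" % (start, end))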

Just to be Sure

In case you think the GPUs may not be in use, you can do the following. First ssh into the interactive node you were using for the test. Next use the NVIDIA SMI program:

$ nvidia-smi

[nvidia-smi output: all GPUs under load]

As you can see there is no GPU power being spared here. Sorry, but you can’t mine bitcoins here 😉

Wrapping Up

This tutorial is intended for my own records, but also to help someone stuck in the same situation – though you will likely have to adjust something to your own setup. I tested this with Blender 2.65a running on CentOS on a cluster based on the HP ProLiant SL390 server architecture.

Happy 2013!

 

Hello all,
I’m pleased to announce that the latest version of Blender 3D is out. This is the collaborative effort of a team of developers, which I’m proudly a part of. The 2.65 edition is particularly relevant for the dome community due to complete support for equidistant fisheye lens rendering.

Equidistant fisheye 180°, used for fulldomes. Image by Adriano Oliveira

That includes a series of fixes since the last release, the most noticeable being the Equidistant Fisheye Lens fix, as I mentioned in an earlier post. This release benefits not only Fulldome artists, but also anyone willing to experiment with the Equisolid Fisheye lens. The image below is what you see from within the working viewport.

And how simple is it to use? For those familiar with the builtin Cycles render engine this is as easy as it gets. For the Equidistant fisheye lens all you need to do is set the render dimensions to square (e.g., 1024 x 1024) and set the field of view angle (usually 180°). For the Equisolid fisheye lens all you need to do is set the lens size and one of the sensor dimensions. The other sensor dimension is taken from the render aspect ratio. A minimal Python setup for both modes is sketched after the image below.

Equisolid fisheye, 3d viewport preview. Image by Pataz Studio – www.patazstudio.com
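For reference, here is a minimal sketch of the same setup done from Python. The property names are the ones I remember from the 2.65 Cycles integration (camera type 'PANO', the cycles panorama settings), so double check them against your build:

import bpy
from math import radians

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
cam = scene.camera.data

# Equidistant fisheye: square render dimensions plus a field of view angle
scene.render.resolution_x = 1024
scene.render.resolution_y = 1024
cam.type = 'PANO'
cam.cycles.panorama_type = 'FISHEYE_EQUIDISTANT'
cam.cycles.fisheye_fov = radians(180.0)

# Equisolid fisheye: lens size plus one sensor dimension;
# the other dimension follows the render aspect ratio
#cam.cycles.panorama_type = 'FISHEYE_EQUISOLID'
#cam.cycles.fisheye_lens = 10.5
#cam.sensor_width = 32.0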

For the complete release information, please visit the official Blender 2.65 Release Log.

For the Fisheye Lens specific info, check:

Blender is under 60MB to download, free, and more than capable of handling small to medium size productions. Go get it! I hope it can help more fulldome productions in the future.

Enjoy it,
Dalai

I’m just back from the Siggraph Asia 2012. I was impressed by the people I met, the talks and courses I attended, and why not, the places I visited. Singapore is a very interesting city for a tourist. Among the people I met, a particular meeting was long overdue. I finally had a chance to meet Paul Bourke personally.

We collaborated (meaning he helped me ;)) on the fisheye implementation for the Blender Game Engine back in 2009. Since then there isn’t a fisheye-related question that I don’t bounce off him first. So, in between talks he kindly shared his thoughts on stereoscopic rendering for domes. It took me a week to work around the problem, but here you can see the first real renders in a patched Blender with Cycles.

The formula is very simple; it’s just one of those problems that is really hard to debug (at least when you don’t have a dome with 3d projectors at hand). Thankfully on my flight back I had the peace of mind to wrap that up. A rough sketch of the idea follows below.

3D Model The White Room courtesy of Jay-Artist, shared on blendswap.com
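I won’t spoil the exact patch here, but the usual trick for omni-directional stereo is to offset each eye sideways relative to the viewing direction, instead of using a single fixed camera offset. A rough, hypothetical sketch of that idea (an illustration only, not the code that went into the patch):

from mathutils import Vector

def eye_offset(view_dir, interocular=0.065, eye='LEFT'):
    # Shift the ray origin sideways, per viewing direction, so the stereo
    # separation follows the gaze around the dome.
    up = Vector((0.0, 0.0, 1.0))
    side = view_dir.cross(up).normalized()
    half = 0.5 * interocular
    return side * (-half if eye == 'LEFT' else half)

# e.g. a ray looking along +Y shifts the left eye towards -X
print(eye_offset(Vector((0.0, 1.0, 0.0))))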

As a teaser, this is all you get for now. More on that later 😉

I’m flying tomorrow to my second Siggraph. My first one was last year here in Vancouver. Now it’s time to check on the Asian conference. Funny thing, my main motivation to fly that far is to present a poster that is part of the panorama rendering project I’ve been blogging about. But now that I decided to attend, I came to realize how fantastic the courses and talks are. And we even have a Blender Birds of a Feather there, it will be really nice 🙂

For the record, and to help myself keep track of my plans, this is what got me pumped up to not miss a moment of Siggraph. Mouth-watering time:

Birds of a Feather (basically a show and tell)

* Blender Foundation – Artist Showcase (I will probably be showing some of my projects there)
* Pipeline & Tools – coordinated by two LucasFilm employees

Courses:

* Pre-visualisation: Assisting Filmmakers in Realizing their Vision
* Method of Induction of Basic and Complex Emotions in Video Games and Virtual Environments
* Constructing a Stereo Pipeline from Scratch: Lessons Learned from Disney’s “The Secret of the Wings”
* Story Structure for Programmers, Game Designers and Artists
(note: this workshop is presented by Craig Caldwell, professor at the University of Utah, a great seasoned animator among other things, whom I had a chance to watch and chat with at the View Conference (when I went to Turin to give a BGE workshop :))

Special Sessions:

* Technical Challenges of Assassin’s Creed III: Real-Time Water Simulation and High-Quality Linear Gameplay Sequences
* The Future of Technology Innovation at Lucasfilm: Crossover between Games and Film [cancelled]
* The Visual Effects of ‘The Dark Knight Rises’

Uff … those will be busy days!

And as far as my computer goes, things are still in bad shape. It took me 30 hours to back up the HD in a vain hope of recovering something. And when I was finally about to burn a new DVD with PartedMagic, the discs I have around turned out not to be compatible with my DVD drive. So I’m going to forget about this for 2 weeks and hope for a realization that I don’t need all the lost data 😉

In case Alice’s rabbit is right, it’s always good to keep your eyes on the current time. I was never a fan of wrist watches and I do use Blender in fullscreen (alt+F11). So what to do?

You are right, an addon to show the current time in the Info header. Download the file here, or copy and paste the code below. Have fun.

bl_info = {
    "name": "Clock",
    "author": "Dalai Felinto (dfelinto)",
    "version": (1,0),
    "blender": (2, 6, 6),
    "location": "Info header",
    "description": "Shows the current time",
    "warning": "",
    "wiki_url": "",
    "tracker_url": "",
    "category": "Useless"}

import bpy
import time

def header_info(self, context):
    t = time.localtime()
    self.layout.label("%02d:%02d:%02d" % (t.tm_hour, t.tm_min, t.tm_sec))

@bpy.app.handlers.persistent
def clock_hack(context):
    # hack to update UI
    for area in bpy.context.screen.areas:
        if area.type == 'INFO':
            area.tag_redraw()

def register():
    bpy.app.handlers.scene_update_pre.append(clock_hack)
    bpy.types.INFO_HT_header.append(header_info)

def unregister():
    bpy.app.handlers.scene_update_pre.remove(clock_hack)
    bpy.types.INFO_HT_header.remove(header_info)

if __name__ == "__main__":
    register()

This is a casual recording of the Map Range Node commit (available in Blender svn snapshots or in the future Blender 2.65).

We present different uses for this new node, including atmospheric effects and background blending composite techniques, and a neat post-production stereoscopic 3d depth-based system (demonstrated in Tears of Steel). Also, we show the real time of the commit, and the feature available on another computer – we hope this encourages more people to make their own Blender builds.
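As a taste of what the node does, here is a small, hypothetical compositor setup in Python that remaps a Z-depth pass into the 0–1 range (the 0–50 depth range is an assumption for the example, and socket names may vary slightly between Blender versions):

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# read the render result (the Z pass must be enabled on the render layer)
layers = tree.nodes.new(type='CompositorNodeRLayers')
composite = tree.nodes.new(type='CompositorNodeComposite')

# remap depth values from 0..50 Blender units into the displayable 0..1 range
map_range = tree.nodes.new(type='CompositorNodeMapRange')
map_range.inputs['From Min'].default_value = 0.0
map_range.inputs['From Max'].default_value = 50.0
map_range.inputs['To Min'].default_value = 0.0
map_range.inputs['To Max'].default_value = 1.0

tree.links.new(layers.outputs['Z'], map_range.inputs['Value'])
tree.links.new(map_range.outputs['Value'], composite.inputs['Image'])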

 

This new Blender feature was elaborated during the days prior to the BlenderPRO 2012, in Brasília.
In this video you see in order of appearance:

Flag of Brazil – http://en.wikipedia.org/wiki/Flag_of_Brazil
BlenderPRO 2012 – http://www.blender.pro.br/2012 – @BlenderProBr
Dalai Felinto – www.dalaifelinto.com – @dfelinto
Roberto Roch – thisroomthatikeep.blogspot.com.br – @roberto_roch
Sebastian König – www.3dzentrale.com – @s_koenig
Daniel Salazar – www.patazstudio.com – @ZanQdo

I would like to thank the Fundação Banco do Brasil for believing in and supporting the importance of Blender as a digital, artistic and social inclusion tool. They are a major partner (aka sponsor) of the BlenderPRO this year. Thanks to their substantial support we were able to gather so many Blender professionals in the same place, which, as I hope this video demonstrates, is the necessary fuel for collaborative development.

I also would like to personally thank this year’s BlenderPRO organization team. The event hasn’t even started yet, but it’s already been a very interesting exchange of experiences, ideas, …

Today was a meditative day to once again celebrate the passage of our Sun across the same exactly-ish point in the sky. Yup, I’m getting old, but hey, don’t we all? As my self-birthday gift I decided to face the hills from home to work with my laptop in the backpack (a big thing for me, I would always take the bus when I needed the laptop, which means almost every day).

Equirectangular Color Map Twisted

To make the self-celebration even more complete, I decided to not only work on my physical health, but also to give my mind some food for thought. In the past week, professor Adriano Oliveira was troubleshooting the Cycles fisheye camera for his fulldome audiovisual work. He noticed that the equidistant lens was far from the expected result (for example, compare it with the BGE fisheye lens) and that even the equisolid lens (which is great for simulating real fisheye lenses, like Nikon, Canon, …) wasn’t producing a perfect fisheye.

Equirectangular Color Map

We had been debating that, and in the end he found some nice online references with different formulas per lens type. So today I thought it was a good time to get down to the code. What you see next is the comparison between the wrong and the corrected equidistant fisheyes, the equirectangular test map I used (also known as the Blender UV Color Grid ;)) and something I’m passionate about now: using drawing software to do math. You can delete things, move them around, color them .. it’s perfect 😉
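For those curious about the formulas involved, the textbook radial projections that distinguish the two lens types look roughly like this (a plain illustration, not the actual Cycles kernel code):

from math import sin

def fisheye_radius(theta, f, model='EQUIDISTANT'):
    # Distance from the image center for a ray at angle theta (radians)
    # from the optical axis, for a lens of focal length f.
    if model == 'EQUIDISTANT':
        return f * theta                      # r = f * theta
    elif model == 'EQUISOLID':
        return 2.0 * f * sin(theta / 2.0)     # r = 2 f sin(theta / 2)
    raise ValueError("unknown lens model: %s" % model)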

 

 

So have fun with the fixed “fulldome” mode. Remember, the mode can go beyond 360 degrees if you want to make some really cool images 😉

Saving trees and abusing my tablet 😉

Now, something nice … I met prof. Adriano last year at the BlenderPRO (the Brazilian Blender Conference). He may have exaggerated, but he told me that the main reason he went to the BlenderPRO was to meet me. It seems that it definitely paid off. I’m not saying I wouldn’t have fixed this bug if someone else had reported it. But it’s much more interesting to work with someone you have met.

And why am I saying that? Well, next week we have a new edition of the BlenderPRO. So if you can make it to Brasília, don’t think twice. It’s going to be an amazing event, and I hope to see you there.

And happy birthday to me 🙂

Sometimes I have this coding itch that doesn’t let me do anything until I’m done with developing an idea. I actually thought that I wouldn’t bother implementing the mockups I did earlier today. But on my way home I realized the implementation was actually trivial. With no further delays, the next screen is not a mockup, but a real screenshot from a patched Blender:

You are right Momo, groovy!

The complete patch is in the Blender Tracker, but the essence of the code is:

//given xco, yco and percentage (a factor between 0.0 and 1.0)
int barsize = 60;
// draw in black first
glColor3ub(0, 0, 0);
glBegin(GL_QUADS);
    glVertex2f(xco + 1 + barsize, yco + 10);
    glVertex2f(xco + (1 - percentage) * barsize - 1, yco + 10);
    glVertex2f(xco + 1 + (1 - percentage) * barsize - 1, yco - 1);
    glVertex2f(xco + barsize, yco - 1);
glEnd();

Any comments and feedback are still appreciated.
Time to sleep 😉

I’m reading a very interesting book on data visualization for science (Visual Strategies by Felice C. Frankel & Angela H. DePace). The book was highlighted in last month’s Nature magazine, and is indeed highly recommended for scientists or science communicators like myself ;).

Today Mitchell asked me to look at a patch he was reviewing and adding his own changes to. The patch by Angus Hollands re-organizes the interface for debugging. Well, I couldn’t help giving a try at representing the numbers in a more direct way. I don’t know how hard it would be to implement those ideas; here are mockups only for the sake of my creative exercise. Thoughts?

Original proposal by Angus Hollands (agoose77) with changes from Mitchell Stokes (Moguri)

Why not visualize the percentage?

A more radical approach

Some time ago Paul Bourke sent me some images he captured with the Red Scarlet and a 4.5mm lens. The result is really impressive. He can get a recording in crystal clear 4K at 30fps. Below you can see one of his images:

Red Scarlet sample photo – credits Paul Bourke + synthetic elements by yours truly

Wait, what is Suzanne doing there?

Ok, that’s not really his original capture. I wanted to explore how it would be to insert virtual elements into a fisheye image. It shouldn’t be much different than integrating synthetic elements into a panorama (the topic of some previous blog entries and a paper waiting for approval 😉 – more on that later this year). And as it turned out, it’s ‘straightforward’-ish enough.

First take a look at the original image:

Red Scarlet sample photo – credits Paul Bourke

This is a cropped image, expanded vertically to fill the 180° FOV (field of view). This arrangement of camera+lens+4K gives you neither a full frame fisheye nor a circular fisheye. As a curiosity, the Red Scarlet can capture a complete 180° fisheye circle if the photo is taken in 5K. However you can’t get a 30fps movie capture at that resolution.

In order to use the IBL Toolkit for the scene reconstruction I first generated a full panorama (360×180) out of the original fisheye photo. I used the open source tool Hugin for that.

Be aware that Hugin has a bug in the calculation of the focal length multiplier for equisolid fisheye lenses (basically it’s using the full frame fisheye calculation for all its fisheye modes). Actually, if you know someone involved in the Hugin/Panotools project, please send her/him this patch. As far as I can tell the fix is along these lines. I couldn’t manage to compile Hugin though, so I don’t feel like sending an untested patch to their tracker.

Back on topic … this is the image I got from Hugin (using 4.5 as the lens and 2.35 as the scale factor for equisolid – 2.35 was eyeballed because I couldn’t find the sensor size of the 4K capture for the Red Scarlet on the internet, and remember, once they fix the software the input values would have to be different):

360×180 fullpanorama
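By the way, if you do know the sensor dimensions, the focal length multiplier Hugin asks for is just the full frame diagonal over the sensor diagonal. A tiny sketch, with made-up numbers since I couldn’t find the real ones:

from math import sqrt

def focal_length_multiplier(sensor_w_mm, sensor_h_mm):
    # Crop factor relative to a full frame 36 x 24 mm sensor
    full_frame_diag = sqrt(36.0 ** 2 + 24.0 ** 2)   # ~43.27 mm
    return full_frame_diag / sqrt(sensor_w_mm ** 2 + sensor_h_mm ** 2)

# made-up example values, NOT the actual Red Scarlet 4K sensor size
print(focal_length_multiplier(18.4, 9.7))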

 

Once I got the full panorama, the rest was a piece of cake. This scene is perfect for the IBL Toolkit (the square in the front plane is practically screaming “Calibrate with me !!11!!”).

Blender IBL Toolkit in Action

And a render from a different angle (where glitches are expected). I used the Project UV option of the IBL Toolkit to project the corresponding UVs from the panorama onto the subdivided meshes.

Extra ‘render’ – more of a behind-the-scenes shot, really

 

Final considerations:

  • I really wish Blender had a shadow-only shader to help integrate support meshes, synthetic elements and a background plate.
  • I’m pretty sure Blender Institute crew worked nicely around that for the Tears of Steel project. I’m still waiting for them to finish the movie and release the files though.
  • The lighting is indeed bad here because the original plate was an LDR, not an HDR, so I didn’t have the lighting of the scene (and didn’t want to bother recreating it – thus you see no shadows on the support elements of the original scene).
  • If I had the HDR I would use Luxrender (AR Luxrender actually) for the render 🙂
  • IBL Toolkit should be called Pano something instead, anyways 😉
  • I forgot to say that the final render was only possible due to the Fisheye Lens in Cycles, a patch that I wrote on top of Brecht’s original full panorama code and that is already in trunk (it will be available in Blender 2.64).
  • In fact I’m sure I could have fisheye implemented as an input option for the IBL Toolkit (removing the need for Hugin). That would help to output the content in exactly the same position as the original camera (if you put them side-by-side you can see they have a slightly different orientation).

I’m planning to present a complete framework for working with panoramas and virtual elements at the Blender Conference this year. Even though this is based on my work with Aldo Zang (using Luxrender, not Blender) I think it can help to inspire possible solutions for Blender. So fingers crossed for the presentation to be accepted, and I hope we can make it interesting. The original paper (submitted to CLEI 2012) goes by the name:

Production framework for full panoramic scenes with photo-realistic augmented reality

 

So stay tuned, (and enjoy the Summer, Vancouver is finally sunny o/)
Dalai

(thanks Paul Bourke for authorizing the re-use of his image; consider giving his website a visit, it’s one of those corners of the internet that will keep you busy for a long time)