This post is my presentation for the “Geometry Processing” course I’m attending at IMPA, the National Institute for Pure and Applied Mathematics in Rio de Janeiro, Brazil.

The course is taught by the professors:

  • Luiz Henrique de Figueiredo
  • Luiz Velho

What’s up doc?

The proposed problem was to create an algorithm that subdivides a triangular mesh inscribed in a sphere (e.g. an icosahedron) multiple times, so that the mesh approximates the complete sphere. We need to handle two different mesh structures:

  • STL-like files – where triangles are listed by the coordinates of their vertices.
  • OFF-like files – where triangles are listed by the indices of their vertices, and the vertices are listed by their coordinates.
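To make the difference concrete, here is a hypothetical two-triangle mesh expressed in both styles (the coordinates and names are illustrative only, not taken from the script):

```python
# STL-like: each face repeats the full coordinates of its vertices.
stl_faces = [
    ((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
    ((1.0, 0.0, 0.0), (0.0, 0.0, -1.0), (0.0, 1.0, 0.0)),
]

# OFF-like: vertices are listed once; faces refer to them by index.
off_verts = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0),
             (0.0, 1.0, 0.0), (0.0, 0.0, -1.0)]
off_faces = [(0, 1, 2), (1, 3, 2)]

# The OFF form can always be expanded back into the STL form:
expanded = [tuple(off_verts[i] for i in face) for face in off_faces]
assert expanded == stl_faces
```

Notice that in the OFF form the shared vertices appear only once, which is exactly what lets neighbouring triangles agree on vertex (and later edge) indices.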

I’m using only .ply files for both approaches. If you choose -m stl and the faces are indexed, they are internally converted to the storage format the STL method expects.

The deliverable of the project is the following program/script. It requires Python to be installed on the computer.

How to run it

1.1) Download the final script here –
1.2) Download a sample ply file: icosahedron.ply.

2) Unzip and run the command:
./ icosahedron.ply -l 5 -m ply -o icosahedron_5.ply
* on MS Windows you need to run it from the Python console or set .py files to run automatically with the available Python installation.
* if you want to force the program to use the STL method, pass -m stl in the arguments.

3) The output file is ready. You can check the result in Blender or MeshLab.
STL output file | PLY output file

For the rest of this document I explain only the ‘off’ (ply, actually) approach. The ‘stl’ one is more of a warm-up, since it’s no challenge compared to what comes next. Both algorithms can be found in the complete script above.

Result and performance

Result rendered in Blender, levels 0, 1, 2, 3 and 4

Performance Comparison

Subdivision | Python (s) | PyPy (s)
level 1     | 0.0001     | 0.0002
level 2     | 0.0003     | 0.0005
level 3     | 0.0011     | 0.0019
level 4     | 0.0046     | 0.0074
level 5     | 0.0186     | 0.0881
level 6     | 0.0746     | 0.2395
level 7     | 0.3336     | 0.2790
level 8     | 1.5101     | 0.8054
level 9     | 6.6167     | 3.0026
level 10    | out of memory | 11.3281

Without any changes in the code, the execution with PyPy ran roughly twice as fast at the deeper subdivision levels (at the shallow levels the JIT warm-up actually makes it slower). It still doesn’t compare to native/compiled code, but these are satisfactory results given the simplicity of the implementation and the flexibility of a scripting language.


The core of the code is the subdivision algorithm. I decided to look for a solution where each face could be treated as independently as possible. That means no pointers to neighbour triangles and no need to navigate the triangles in a particular fancy order. The key idea I pursued is to create unique indices in a ‘universal’ way: a system that neighbour triangles, unaware of each other, could use and still arrive at the same values.

This may not be the most efficient algorithm, but it’s very simple to implement (the Python code ended up very compact).

Code overview

def main():
    # 1) parse args
    input, output, method, level = parse_args()

    # 2) import ply
    raw_verts, raw_faces = import_ply(input)
    # 3) convert to internal structure
    verts, faces = convert(raw_verts, raw_faces, method)
    # 4) subdivide
    verts, faces = subdivide(verts, faces, level, method)

    # 5) export ply
    export_ply(output, verts, faces)

Apart from the subdivision there are other small steps needed: (1) parse the arguments (level, method, file, …); (2) import the ply file; (3) convert the raw data into an internal structure (more on that soon); (4) subdivide; (5) dump the new data into another ply file.

The conversion (3) rearranges the faces to store not only the vertex indices but also the indices of their edges. A simple piece of code creates a key for a pair of vertex indices that is always the same regardless of their order:

key = "%020d%020d" % (min(v1,v2), max(v1,v2))


If the key already exists, we get the id from a dictionary. Otherwise we create a new id and assign it to the key.
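The get-or-create lookup can be sketched as follows (edge_ids and the function name are illustrative; the actual script may organize this differently):

```python
# Dictionary mapping the order-independent key to a sequential edge id.
edge_ids = {}

def edge_id(v1, v2):
    """Return a stable id for the undirected edge (v1, v2)."""
    key = "%020d%020d" % (min(v1, v2), max(v1, v2))
    if key not in edge_ids:
        edge_ids[key] = len(edge_ids)   # allocate the next free id
    return edge_ids[key]

# Both triangles sharing an edge get the same id, whatever the order:
assert edge_id(7, 3) == edge_id(3, 7)
assert edge_id(7, 3) != edge_id(7, 4)
```

This is what allows two triangles that never “see” each other to agree on a shared edge index.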

Subdivision code explanation

* take a look at the code at the end of this page, or via the download link at the top.

First I allocate the new vertices in the verts array. We are going to create one vertex per edge, so we know the size of the new array from the beginning. With the array pre-allocated it’s easy to tell whether a specific vertex has already been created (by the other triangle that shares the edge) or still needs to be initialized in the list.

verts.extend([None] * old_edges)

Next I create a unique vertex index that can be derived from both triangles that share the edge where the vertex is generated. Taking advantage of the fact that the number of new vertices equals the number of edges in the previous iteration, I use the edge index to directly assign a value to the vertex. old_verts works as an offset for the new vertex indices.

old_verts = len(verts)  # taken before the verts list is extended

# new vertex indices
v4 = old_verts + ea
v5 = old_verts + eb
v6 = old_verts + ec

Now that we know the exact id of the vertex, we can check whether it was already created. There is no risk of an out-of-range index, since the verts array was already expanded to accommodate all the new vertices (filled with None objects).

# add new vertex only if non-existent
if verts[v4] is None:
    verts[v4] = (verts[v2] + verts[v3]).normalize()
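The snippet above assumes the vertices support + and .normalize(). A minimal sketch of such a vector class (the actual class in the script may well differ) could look like this:

```python
import math

class Vec3:
    """Minimal 3D vector: just enough for midpoint + reprojection."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __add__(self, other):
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def normalize(self):
        # project the point back onto the unit sphere
        length = math.sqrt(self.x**2 + self.y**2 + self.z**2)
        return Vec3(self.x / length, self.y / length, self.z / length)

# midpoint of two points of the unit sphere, pushed back to the surface:
v = (Vec3(1, 0, 0) + Vec3(0, 1, 0)).normalize()
```

Summing the two endpoint vectors and normalizing is what keeps every new vertex on the sphere, which is the whole point of the subdivision.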

Of the new edges, 6 are external and come from the existing edges. It’s crucial to find a unique index for them as well. My first attempt was to use an orientation system, always naming the edges clockwise. That turned out not to be reliable, though; I found it impossible to implement correctly. The final solution was to name an edge after its opposite vertex. For example, in the initial triangle vertex 1 faces edge 1, vertex 2 faces edge 2 and vertex 3 faces edge 3.

Also part of the challenge is to split each edge into two new edges with universal indices. I used an arbitrary comparison (vertex a > vertex b) to determine which edge inherits the original index of the edge, and which one is allocated past the initial edge offset (old_edges):

# new external edges
e1 = ea + (old_edges if v2 > v3 else 0)
e4 = ea + (old_edges if v2 < v3 else 0)

The internal edges (edges 7, 8 and 9) are brand new, and all they need is to take an index from the edge pile and increment the counter. This is the Python equivalent of C’s e7 = edge_inc++. edge_inc is initialized with an offset to avoid overwriting existing edge indices.

# new internal edges
e7 = edge_inc; edge_inc += 1

Finally it’s time to create the faces following the original scheme.

new vertices and edges

new_faces.append(( v1, v6, v5, e7, e5, e3))
new_faces.append(( v6, v2, v4, e1, e8, e6))
new_faces.append(( v5, v4, v3, e4, e2, e9))
new_faces.append(( v6, v4, v5, e9, e7, e8))

Subdivision code-snippet

The complete code of the subdivision can be seen here:

def subdivide_ply(verts, faces, level):
    for i in range(level):

        old_verts = len(verts)
        old_edges = len(faces) * 3 // 2   # 3 edges per face, each shared by 2
        edge_inc = old_edges * 2

        verts.extend([None] * old_edges)
        new_faces = []

        for v1, v2, v3, ea, eb, ec in faces:
            # new vertex indices
            v4 = old_verts + ea
            v5 = old_verts + eb
            v6 = old_verts + ec
            # add a new vertex only if it doesn't exist yet
            if verts[v4] is None:
                verts[v4] = (verts[v2] + verts[v3]).normalize()

            if verts[v5] is None:
                verts[v5] = (verts[v3] + verts[v1]).normalize()

            if verts[v6] is None:
                verts[v6] = (verts[v1] + verts[v2]).normalize()

            # note: the following could be 'optimized' to run each
            # comparison only once; the gain is about 3%
            # new external edges
            e1 = ea + (old_edges if v2 > v3 else 0)
            e4 = ea + (old_edges if v2 < v3 else 0)

            e2 = eb + (old_edges if v3 > v1 else 0)
            e5 = eb + (old_edges if v3 < v1 else 0)

            e3 = ec + (old_edges if v1 > v2 else 0)
            e6 = ec + (old_edges if v1 < v2 else 0)
            # new internal edges
            e7 = edge_inc; edge_inc += 1
            e8 = edge_inc; edge_inc += 1
            e9 = edge_inc; edge_inc += 1

            new_faces.append((v1, v6, v5, e7, e5, e3))
            new_faces.append((v6, v2, v4, e1, e8, e6))
            new_faces.append((v5, v4, v3, e4, e2, e9))
            new_faces.append((v6, v4, v5, e9, e7, e8))

        faces = new_faces

    return verts, faces
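The sizes involved can be sanity-checked with Euler’s formula: each level multiplies the face count by 4, quadruples the edge count (2 halves per old edge plus 3 internal edges per old face), and adds one new vertex per old edge. A standalone sketch for the icosahedron:

```python
# On a closed mesh, V - E + F = 2 must hold at every subdivision level.
V, F = 12, 20            # the icosahedron we start from
E = F * 3 // 2           # 3 edges per face, each shared by 2 faces

for level in range(1, 6):
    V, E, F = V + E, E * 4, F * 4
    assert V - E + F == 2, "mesh stopped being closed"

print(V, E, F)           # sizes after 5 subdivision levels
```

This also explains the out-of-memory failure around level 10 in the benchmark: the face count grows by a factor of 4 per level.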

I hope the explanation was clear.
It’s always fun to reinvent the wheel 😉


What do you do when you have 2 idle projectors by your computer? The answer is obviously a high definition projection area to be filled with lo.v.e. (lots of valuable experiments).

Two short throw projectors in one seamless desktop

I’ve been following the work of Vision3D since 2009. This lab in Montreal is specialized in computer vision (fundamental and applied research on the three-dimensional aspects of computer vision). Led by Sébastien Roy, they have been producing (and sharing!) work on calibration of projection surfaces (e.g. domes o/), multiple projector systems, and content toolsets.

lt-align manual calibration process

The Vision3D lab’s main tool in that area is Light Twist. This tool was presented at LGM 2009 with a live showcase of the system in a cylinder. Last week I tried to get Light Twist going with a multi-projector system (aiming to use this for a dome later on), but so far I’m stuck at the playback of content (and I suspect the calibration stage is wrong). Anyway, Light Twist will be the topic of another post, once I get it up and running.

Plugin enabled – video in the middle of the screens, desktop working normally

Since 2009 the Light Twist project shifted its focus from labs to end users. In 2011 they finally presented two new projects, lt-align and lt-compiz-plugin. lt-align is a tool to quickly calibrate the screen alignment, and it is very easy to use.

The Compiz plugin requires some fooling around with Ubuntu settings, but once things are in place it works like a charm. I have yet to make it work with Unity, so I can have real fullscreen across the desktops.

Recording of the alignment process and video playback

Elephants Dream – Stitched Edition 😉

Note: there is an extra package you need in order to compile the lt-compiz-plugin: sudo apt-get install compiz-plugins-main-dev. I didn’t have to restart compiz with ccp to make it work. I also changed the shortcuts to start the plugin, because Alt+F* were taken by other OS commands.

Time to make it real and project in a large wall

In this picture you can see Djalma Lucio, the sysadmin who oversees all the computer installations at Visgraf/IMPA. A great professional and a very funny guy to work with: think of someone who actually enjoys opening a xorg.conf file. On the right you can also see Aldo Zang. Check out his ARLuxRender project – a plugin system for LuxRender “which allows to render scenes with mixtures of real and virtual objects directly, without post-processing”.

I hope to post more in the coming months about domes, projections, a special video project … 😉 I went on a 3-month leave from my work at UBC to join the research lab at Visgraf/IMPA, under the coordination of prof. Luiz Velho. This is only my second week, but it has already been a great experience. And above all, it’s nice to be back home (Rio de Janeiro, Brazil).

Happy Twisting,

What happens when an image fails to load in your system? It goes without saying that we need to find a non-intrusive way to analyze it.

It happened to me today. An image downloaded from the internet was failing to load in my project (a virtual art gallery for domes, more on that once it’s out). The internal framework involves copying the image to the project folder and opening it with Video Texture. For those unfamiliar with the Blender Game Engine, this is a Python module to dynamically load and swap in-game textures.

In my tests all the images I tried were working. No exception. But of course it took only a test run with the client to get a crash 😉 One single image was enough to make me pull my hair out.

Our beloved open-source image editor GIMP opens the image with no problems. In fact, if I open it and save it again, the result loads in my project with no problems. So what’s wrong? And why can’t GIMP warn me about this problematic file?

Looking for ‘file inspectors’ I ran into this Binary File Inspector from Microsoft. It didn’t take more than a glance to spot the problem:

CMYK … Bingo! Opening the image on a machine with Photoshop confirmed this was the issue.
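Along the same lines, a stdlib-only Python sketch can spot such files without any external inspector, by reading the JPEG “start of frame” marker, which records the number of colour components (4 for CMYK/YCCK, 3 for plain YCbCr). This is just an illustration I put together, not part of the original workflow:

```python
import struct

def jpeg_components(data):
    """Return the colour component count of a JPEG byte string
    (4 usually means CMYK/YCCK), or None if no SOF marker is found."""
    i = 2                                   # skip the 0xFFD8 SOI marker
    while i + 4 <= len(data):
        marker, size = struct.unpack(">HH", data[i:i + 4])
        # SOFn markers are 0xFFC0-0xFFCF, except DHT (C4), JPG (C8), DAC (CC)
        if 0xFFC0 <= marker <= 0xFFCF and marker not in (0xFFC4, 0xFFC8, 0xFFCC):
            return data[i + 9]              # component count, 9 bytes into the segment
        i += 2 + size                       # jump to the next marker segment
    return None

# minimal hand-built JPEG header with a 4-component (CMYK-style) SOF0
fake = (b"\xff\xd8"                                          # SOI
        + b"\xff\xe0" + struct.pack(">H", 16) + b"\x00" * 14 # APP0 filler
        + b"\xff\xc0" + struct.pack(">H", 20)                # SOF0
        + b"\x08\x00\x10\x00\x10\x04" + b"\x00" * 12)
assert jpeg_components(fake) == 4
```

On a real file you would call jpeg_components(open(path, "rb").read()) and be suspicious of anything that returns 4.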

For the adventure seekers out there, remember: open-source tools are great. Yet you should not be afraid of stepping out of your comfort zone once in a while.

Who could guess that Microsoft would be the cavalry making up for the lack of CMYK (and feedback) support in GIMP 😉 (or the lack of CMYK support in Video Texture, or me being short on tools for image forensics, …).

Have a great day,

credits: Dalai Felinto, Mike Pan (Blender) and Sherman Lai (post processing)

Dr. Pauly’s TED Talk presentation on the ocean’s shifting baseline is now available. The key idea is that we need to stick to a baseline in order to develop a more reliable sense of the changes that are happening.

But what happens when we can’t see the baseline? In that case the use of simulations – films and images – can be of great help. In the final slide of his presentation, Dr. Pauly showed an image suggesting a simulated ocean in 2010. You can see it at ~8:12.

This is not one of my favourite works, but it’s an important one. This image was based on a still from the first Blender animation I worked on, back in early 2009: The Life in The Chesapeake Bay. It’s nice to look back and see how many chances I’ve had to improve my work since.

Working with science communication is a thrill, and having this work recognized really makes my day. Note that this image is not being used merely to illustrate a particular ocean scenario. The image is there to make a point: to reinforce the role of art in the understanding of our lives.

. . .

And yes, it’s always great to spread Blender around the world, even when people are unaware of it (I was going to take a screenshot of the Blender file but I can’t find it – it took TED way too long to make the stream available online 😉).


A belated thank you to Villy Christensen, Sherman Lai and Mike Pan for the opportunity of doing the original project together. And to York University and the unexpected strike in late 2008 😉 God and his crooked lines, go figure.

Welcome, I’m your Oracle. What would you like to know?

Those promising first lines announce what is to come. A virtual avatar who will walk you through a journey of knowledge and discovery.

This was our first take, a prototype if you will, at creating ways to communicate global data on the ocean’s possible futures and the scientific models underneath the predictions. It is part of the NF-UBC Nereus Program.

If you want to learn more about the science behind the project, and its whereabouts, check the official press release:


Read More →

It’s common during a production to keep your own copy of Blender in your repository. It can be a stable Blender that your production relies on, a snapshot of the current svn or, also common, a patched Blender prepared specially for your project.

In projects using the Blender Game Engine this is even more crucial: the blenderplayer should be kept as part of your project’s deployment process.

I recently started to love and hate the use of svn:externals with svn exports. This is a handy (and sloooooow) setup that lets you create a release folder with files gathered from all over your svn production repository. You work as usual in your production folders, and when you want to send a snapshot to a client you simply do an svn export of the ‘release folder’ (a folder smartly arranged to contain only the production files you need, kept in sync with the production folder, plus some extra files used only for release, e.g. icons, readme.txt, runme.bat, …).

The problem I just ran into is that not all Blender files are automatically added to your svn repository. As tricky as it sounds, some files (i.e. the .so files from the 2.62/python/lib/python3.2/lib-dynload folder) are not added automatically when you do svn add (either from the command line or from TortoiseSVN). This concerns Linux and Mac users.

Read More →

My love for photomatching goes a long way.
Back in 2007 I did this project using the fantastic SketchUp Photo Match:

I used 20 photographs, a blueprint of a cross section and a blueprint of the original floor design. I was then hired to do an AutoCAD drawing of the façade, to be used in a study on the preservation and historical registration of the building.

Since then I have realized that Blender was very far from catching up with tools designed with architects in mind.
Today I ran into a Blender add-on that may help reduce this gap.

BLAM is a Blender Calibration Tool that you can find here:

My original plan for tonight (coding support for green-magenta anaglyph glasses in the Blender Game Engine :)) clearly would have to wait. It’s time to test the tool!

I was following the steps of the video tutorial – BLAM Video Tutorial.
If you want to try it yourself, this is the picture I used:
University of Seattle
It’s a picture from the University of Seattle. I traveled to Seattle last year and really enjoyed the university campus (and the Battlestar Galactica exhibition at the Space Needle alone made the trip worthwhile).


  1. adding axes is nice and intuitive, but it would be nice to tweak the curves for fine-tuning while watching the 3D change (as a live ‘estimate camera focal length and orientation’ mode)
  2. an option to automatically add the picture as the background image would be nice.

One of the remarkable events of last year was BlenderPRO 2011.

It’s hard to describe all the emotions of this fifth edition of this gathering of friends, a space for building and consolidating a community that keeps growing and makes me proud to be part of it.

The videos of the presentations in the main hall are finally available
(congratulations to Fernando Avena for recording and editing the videos)

I just rewatched my presentation and it brought back good memories. If you’re interested in learning more about using Python with Blender, the video is here:

If you’re short on time, I recommend at least watching Teisson Fróes’s presentation. Meeting him this past year was a wonderful surprise.
If you don’t know his work, prepare to drool and check out the OVNI VFX site

Um grande abraço,

The fifth edition of BlenderPRO is coming up.

I’ll contribute with a talk, a workshop on Python for production in Blender, and possibly help with Ton’s workshop.

This year will be epic: the audience, the city, the organization crew … I can’t wait to arrive in Salvador.

Blender Pro 2011 from Blender Pro Salvador on Vimeo.

So if you haven’t registered yet, hurry up and plan ahead: November 9–12 in Salvador, Bahia

Dalai Felinto