Hello there,

I’m currently trying to fix Blender’s text vertical alignment feature. I originally implemented this feature in Blender for the e-interiores project, and what I had was perfect for my needs at the time. However, let’s look at one of the trickiest alignment modes: centered alignment.

The expected result would be for the text center to sit roughly in the middle of the lowercase text, give or take, yet it’s way off. What happens is that the code takes the line size into account, making sure the text baseline is located midway into the line height. As an example, let’s see how Google Slides handles vertical alignment:

I changed the text background color, so it is more obvious what is going on. Basically we have a “box” the text fits in, and then this box is vertically centered. That raises the question: how do we determine this box’s dimensions? As it turns out, these are called ascent and descent.

Ascent: The recommended distance from the baseline to the line above.
Descent: The recommended distance from the baseline to the line below.

In this example, using the Indie Flower font, we have a descent of about 20% and an ascent of the remaining 80%. According to FontForge this is indeed correct – there I get 819 and 205 as their values (205 / (819 + 205) ≈ 0.20):

However, and here is the caveat, with freetype2 I’m getting 994 and (-)500 (67% and 33%). And why does it matter? Well, Blender uses freetype2 for its internal font management, so it matters big time.

I even put together a simple freetype2 example to test it in isolation from Blender, and the result is the same.

/**
 * Example of FreeType2 library usage.
 *
 * How to build:
 * gcc example.c -o example -I/usr/include/freetype2 -lfreetype
 */


#include <stdio.h>
#include <stdlib.h>

#include <ft2build.h>
#include FT_FREETYPE_H

int main(int argc, char** argv) {
  FT_Library library;
  FT_Face face;

  /* Error handling is non-existent in this example. */
  FT_Error error;

  char* filename;

  if (argc != 2) {
    fprintf(stderr, "usage: %s font\n", argv[0]);
    exit(1);
  }

  /* First argument. */
  filename = argv[1];

  /* Initialize library. */
  error = FT_Init_FreeType(&library);

  /* Create face object. */
  error = FT_New_Face(library, filename, 0, &face);

  const short em_size = face->ascender - face->descender;
  const float descender = -(float)face->descender / em_size;
  const float ascender = 1.0f - descender;

  printf("Descender: %d (%4.2f)\n"
         "Ascender: %d (%4.2f)\n",
         face->descender,
         descender,
         face->ascender, ascender);

  FT_Done_Face(face);
  FT_Done_FreeType(library);

  return 0;
}

For the record, the final alignment change is something like:

  /* Offset from baseline. */
  float y_offset = descent - (em_size * 0.5f);

But I still need to figure out the correct way to get the ascent/descent from freetype2.
 
My best guess at the moment is that face->ascender is not the ascent but something else altogether.

Wish me luck!

Update: I decided to contact the freetype developers; I will post an update here once I hear back from them.

Update 2: I got a reply from FontForge developer Khaled Hosny.

As it turned out, FontForge was using hardcoded values (80%) when opening ttf files, and the proper font ascent can be found in a different tab (Font Info → OS/2 → Metrics) instead of (Font Info → General).
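
If you want to poke at where the different numbers come from, a quick way to dump a font’s metric tables outside Blender is a small Python sketch with fontTools (this is not what Blender uses internally, just a convenient inspector; the font filename is only an example):

from fontTools.ttLib import TTFont

font = TTFont("IndieFlower.ttf")   # any .ttf you want to inspect
upm = font["head"].unitsPerEm

hhea = font["hhea"]   # horizontal header metrics (typically what FreeType reports)
os2 = font["OS/2"]    # the typographic metrics FontForge shows under OS/2

print("units per em:", upm)
print("hhea ascent/descent:      %d / %d" % (hhea.ascent, hhea.descent))
print("OS/2 typo ascent/descent: %d / %d" % (os2.sTypoAscender, os2.sTypoDescender))
print("OS/2 win ascent/descent:  %d / %d" % (os2.usWinAscent, os2.usWinDescent))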

The baffling part of all this is that Google Slides does seem to be using the same 80% value for its internal padding. I may as well hardcode it in Blender and move on. The results may be good enough (definitely better than what I get from using the real ascent/descent).

To be continued…

It is with a heavy heart that I have to announce the demise of my external Mac keyboard :/

I was calling an online store to complain about my air conditioner purchase (summer is upon us already) and ended up dropping some chocolate ice cream on top of the keyboard. I could salvage what was left of the ice cream; however, I can no longer type 7UJM on the keyboard.

I may or may not use some of those keys in 99% of my passwords, so it was time to retire my old keyboard. Truth be told, I had it for way too long. I fell in love with it back during the Cosmic Sensation project. Considering that this was in 2010, and the keyboard was super bendy, it was about time.

So why talk about it? Mourning aside, typing on a laptop with no external keyboard is tricky. The touchpad tends to get in the way more often than not, especially in my case, since for some reason my touchpad is particularly bumpy.

Read More →

Are you stuck on Microsoft Visual Studio with Windows 10 Anniversary Edition, and missing Qt Creator features such as rename refactoring?


Just like in Qt Creator, I need it, and I want it now!

I found an interesting free extension for MSVC named “Visual C++ Refactoring”. You can get it here.


Easy, right? However, if you got an error message because your .NET Framework has a different version, hear me out:

Read More →

Match made in e-heaven

Story originally published on blender.org on July 28th, 2016.

Meet e-interiores. This Brazilian interior design e-commerce startup completely transformed its creation process. This tale will show you how Blender made this possible, and how far we got.

We developed a new platform based on a semi-vanilla Blender, Fluid Designer, and our own pipelines. Thanks to the results we accomplished, e-interiores was able to consolidate a partnership with the giant Tok&Stok, providing the complete design of a room in 72 hours.

Read More →

Introduction

Sometimes when working with architecture visualization we want a material to be seamlessly repeatable and to set its size based on the material’s real-world dimensions.

For instance, let’s say we have a photo of a wood texture which corresponds to 2.0 x 0.1 meters.

If we want to reuse this texture on different objects, we can’t rely on UV coordinates to guarantee the correct real-world dimensions.

So, how to do it?

To get this properly rendered you can use a node group that I prepared just for that (a scripted version of these steps is sketched below):

  • Download this [sample .blend]
  • Import and add the “Architecture Coordinates” node group to your material
  • Link it to a Mapping node, with Scale: 2.0 (X) 0.1 (Y)
  • Link the Mapping node to your Image Texture node

Optionally you can change the Location (X, Y) and Rotation (Z) of the Mapping node.

Note: for this to work the object scale should be (1, 1, 1).
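
If you prefer to wire this up from the Python console, a minimal bpy sketch of the steps above could look like the following. It assumes the 2.7x API (where the Mapping node’s scale is a node property), that the material already uses nodes, and that the “Architecture Coordinates” node group was already imported; the material and image names are placeholders:

import bpy

mat = bpy.data.materials["MyWoodMaterial"]   # placeholder material name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

# Add the imported "Architecture Coordinates" node group.
group = nodes.new("ShaderNodeGroup")
group.node_tree = bpy.data.node_groups["Architecture Coordinates"]

# Mapping node with the texture's real-world size (the 2.0 x 0.1 m wood example).
mapping = nodes.new("ShaderNodeMapping")
mapping.scale = (2.0, 0.1, 1.0)

# Image Texture node.
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//wood.jpg")   # placeholder image path

# Wire: node group -> Mapping -> Image Texture
# (assumes the group's first output is its coordinate vector).
links.new(group.outputs[0], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], tex.inputs["Vector"])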

Incorrect Textures 🙁

Correct Textures 🙂

Sample File Explained

Note: the sample file requires you to allow running Python scripts (auto-run) so the drivers work.


This file has a cube object whose mesh is controlled by hooks, and the hooks are driven by custom properties of the “Origin” empty. This way you can play with different values without changing the object scale (which would affect the final result).
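
For reference, this is roughly how a driver can read a custom property from the “Origin” empty and use it to place a hook target. This only illustrates the mechanism, not the exact setup in the sample file; the property name “width” and the “HookTarget” object are made up:

import bpy

origin = bpy.data.objects["Origin"]
origin["width"] = 4.0                     # custom property ("width" is illustrative)

hook = bpy.data.objects["HookTarget"]     # hypothetical object the hook follows
fcu = hook.driver_add("location", 0)      # drive its X location
drv = fcu.driver
drv.type = 'SCRIPTED'

var = drv.variables.new()
var.name = "width"
var.type = 'SINGLE_PROP'
var.targets[0].id = origin
var.targets[0].data_path = '["width"]'

drv.expression = "width / 2"              # e.g. place the hook at half the width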

The test image has a 2 x 1 aspect ratio. If we pretend it was originally a 4.0 x 2.0 m texture, the whole image will be seen when the width and height of the cube are 4 and 2, respectively.

The Architecture Coordinates node group takes the Object coordinates and transforms them based on the face normal (i.e., whether the face is facing the X, Y or Z axis).
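
Conceptually, the idea is similar to the plain Python sketch below: pick the two object-space coordinates that lie in the plane of the face, based on the dominant axis of the face normal. The function name and the axis/sign conventions are illustrative, not the node group’s exact internals:

def architecture_coords(co, normal):
    """Pick the two object-space coordinates in the plane of the face."""
    ax, ay, az = abs(normal[0]), abs(normal[1]), abs(normal[2])
    if ax >= ay and ax >= az:
        return co[1], co[2]   # face points along X: use Y/Z
    elif ay >= az:
        return co[0], co[2]   # face points along Y: use X/Z
    else:
        return co[0], co[1]   # face points along Z (floor/ceiling): use X/Y

# The resulting pair then goes through the Mapping node configured with the
# texture's real-world dimensions, as in the steps above.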

 

Ta-da! The texture is properly set up regardless of the face direction.

I hope you find this useful. If you have a different solution for this problem, please let me know. Maybe this is something Cycles should have by default?

Note: this file was developed for Blender 2.77; it may not work in other versions.

SPOILER ALERT: The conclusions I reached here are wrong. Big time wrong. According to the internet, either there is no convergence distance (it is infinite) or it is 1.3 m. That said, carry on if you want to read me babble on about math …

The following text is rather dull and technical. I’m basically dissecting the projection matrix I get from the Oculus SDK in order to guess which convergence distance is being used. To make a long story short, I found that for my setup, with an eye separation (interocular distance) of 6.5 cm, the convergence distance is 4 m.

This is about twice the classic “rule of thumb” of having a convergence distance 30x the interocular distance.
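
For the curious, the back-of-the-envelope check goes something like the sketch below. For an OpenGL-style off-axis projection matrix whose first row is [P00, 0, P02, 0], the frustum asymmetry relates the half eye separation e and the convergence distance c by |P02| / P00 = e / c, hence c = e * P00 / |P02|. The matrix values here are made up to illustrate the calculation; they are not the actual Oculus SDK output:

half_ipd = 0.065 / 2.0   # half the interocular distance, in meters

# Hypothetical projection matrix terms for one eye:
P00 = 0.92               # 2n / (r - l)
P02 = 0.0075             # (r + l) / (r - l), the frustum asymmetry

# For an off-axis frustum converging at distance c: |P02| / P00 = half_ipd / c
convergence = half_ipd * P00 / abs(P02)
print("Convergence distance: %.2f m" % convergence)   # ~4.0 m with these numbers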

Read More →

What if you want to copy and paste text back and forth between Blender and your operating system? Blender has limited integration when it comes to Font objects, and unfortunately none of the workarounds satisfied my picky taste.

So, what do you do when you are your own boss and want to use this nonexistent functionality in Blender? Well, you just stop doing everything else and hack the hell out of Blender’s code 🙂

Blender Copy and Paste

Back in Blender 2.49 (around 2009) we could copy/paste text from either the system clipboard or the internal (per object) text buffer. The reason behind this design was to allow copy/paste of special formatting (e.g., bold, underline, …) when working within Blender.

Seven years later, in the latest Blender (2.76), this functionality (system clipboard) is not even available or exposed to the user. To fix this I unified the old system clipboard and internal text buffer functionalities. Thus if you copy text from a font object it will be available in the system clipboard, and if the text was previously created within Blender, you will also get its original formatting when pasting it back.

Oh, did I mention it supports funky unicode characters? 😉

The patch is still under development and waiting for peer review, but it should be ready to be merged into master soon.

Update: The patch was committed, and it will be part of the upcoming Blender 2.77.


Have you ever found yourself needing to change a .blend file remotely when VNC / Remote Desktop is not working?

In my case I finished rendering the left eye of a video, and wanted to do the same for the right one. I did it in parts due to the memory limit of the rendering station. And VNC is not working because … no idea. But it’s Friday and I won’t have physical access to the remote computer until next week.

Blender Interactive Console to the rescue!

$ ssh MY_REMOTE_COMPUTER
$ blender -b MYFILE.blend --python-console
(...)
(InteractiveConsole)
>>> import bpy
>>> bpy.context.scene.render.views['left'].use = False
>>> bpy.context.scene.render.views['right'].use = True
>>> bpy.ops.wm.save_mainfile()

Now all you need to do is resume your tmux session and kick off the render once again. For other Blender command-line options, try blender --help.

This post is obviously based on real events! Have a nice weekend 😉