Merry Christmas to all and a present

Hi all,

2008 was both a good and bad year for me. Back in March I started tinkering with volumetrics: a test here, a test there, change a little piece of Blender code, compile, render… hm, and the process starts over again. Then, by April, I had a functional build and the rest is well-known history :)
I have met many Blender users around the world, really good people willing to help with even selfish requests like the BBB DVD, which I enjoy every time volumetrics allow me :)
Then Nature struck my country with the same thing I was trying to simulate: three super-volumetric hurricanes that literally devastated the blockaded Cuba. Imagine the consequences. Thanks to the help of many in the Blender community, my family and I could recover to some extent and have a decent Christmas.

And here is my present for all of you: while I'm adding received-shadows support to sim_physics, I am also implementing a new algorithm for full Multiple Light Scattering calculations in seconds! I just sent the patch to the generous Matt Ebb for review, and for his magical touch and committing :)

So deep thanks to everybody: I wish all of you a merry Christmas and a happy New Year in 2009.

proposal for the sim_physics branch

Multiple Light Scattering (MLS), in contrast to Single Light Scattering (SLS), used to be avoided in CG because it was too CPU-intensive a rendering method.

Indeed, the exact physical equations that govern light diffusion inside a scattering medium (clouds, smoke…) require lots of calculations per sampled voxel, typically involving spherical integrals (summations of properties over the spatial variables defining the directions around a voxel: latitude and longitude), which eat CPU time. Concretely, for each voxel (point) inside a volumetric, one has to take into account both incoming and outgoing light in ALL directions in order to get the right illumination for that voxel. An exact simulation would require an unlimited number of light contributions from a little sphere surrounding each voxel, which is obviously not suited for production-quality animation 😦
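For reference, the spherical integral mentioned above is the in-scattering term of the standard radiative transfer equation (generic notation, not taken from the Blender code):

```latex
% Light scattered into direction \omega at voxel \mathbf{x}:
L_s(\mathbf{x}, \omega) \;=\; \sigma_s(\mathbf{x}) \int_{4\pi} p(\omega, \omega')\, L(\mathbf{x}, \omega')\, \mathrm{d}\omega'
```

where \(\sigma_s\) is the scattering coefficient and \(p\) the phase function. Evaluating this integral at every voxel, with \(L\) itself depending on the same integral at neighbouring voxels, is what makes brute-force MLS so expensive.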

Several approaches have been developed to simplify the equations governing MLS and to provide algorithms that are alternatives to brute-force calculation; in one word, optimized ones:

1) The SLS approximation, which neglects the light directions that are not those of the main lights (that is to say, it neglects the light scattered by other voxels in the neighbourhood). Indeed, diffusion is most visible inside the cones of the main lights of the scene. This works fine to some extent, but nature is not so simple. That's why, when you look at a cloudy sky, you can see that the edges of the clouds are brighter than the background: water droplets in the cloud re-emit some of the sunlight in all directions, out to the boundary of the cloud. As light travels farther inside the cloud than at the border it is absorbed more, so the parts of the cloud nearer to the sun than the sides can paradoxically seem darker, while the sides look bright because of the diffused light just described.

In many media (dense volumetrics) the contributions from directions other than those of the main lights are simply insignificant, so there we can get by with SLS. In other media (clouds, smoke), however, those contributions can be very important; that's where we should go for MLS.

2) Monte Carlo simulation for path tracing inside volumetrics, random-walk simulations and the like: the spherical integrals are performed over random directions (the more there are, the better; otherwise the picture is dotty, like a raytraced picture that has not converged enough).

3) Others only take into account the boundary of the volumetric object (e.g. the sides of the clouds), neglecting the light coming from deep inside the cloud (as most of it gets absorbed).

4) Here I have implemented a different approach: since MLS is actually a light diffusion process inside participating media, I can approximate it with a full semi-Lagrangian diffusion simulation of the light inside the volumetric.

That way we could achieve several points:

a) Simulation is performed over the entire volume and baked, so it is pre-calculated for each sample voxel in a preprocessing step.

b) It takes contributions from ALL directions into account at no extra cost.

c) It is relatively fast to calculate and only depends on the cache volume resolution (for most scenes it only takes a few seconds!).

d) Highly controllable by the user.

e) Physically based diffusion simulation.

Matt Ebb has previously implemented the Light Cache in sim_physics, so the extension to include MLS is rather simple: the Light Cache is the input/output of the simulator, and it only requires 3 parameters to control the MLS (though some presets could be added):

a) light diffusion factor

b) time steps for the simulation

c) simulation steps

The MLS simulation is performed very fast compared with previous methods and the results are very promising 🙂
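To make the idea concrete, here is a minimal sketch of such a diffusion pass over a baked light cache. This is illustrative Python with hypothetical names, not the actual sim_physics patch (which is C code inside Blender); the three parameters mirror the controls listed above.

```python
def diffuse_light_cache(cache, diffusion_factor, time_step, steps):
    """Approximate multiple scattering by diffusing a baked light cache.

    `cache` is a 3D nested list [x][y][z] of per-voxel light values.
    Parameter names are hypothetical, mirroring the three MLS controls.
    """
    nx, ny, nz = len(cache), len(cache[0]), len(cache[0][0])
    k = diffusion_factor * time_step  # keep 6*k < 1 for a stable explicit step

    def sample(c, x, y, z):
        # Clamp at the boundary so light does not leak out of the volume
        x = min(max(x, 0), nx - 1)
        y = min(max(y, 0), ny - 1)
        z = min(max(z, 0), nz - 1)
        return c[x][y][z]

    c = [[[v for v in row] for row in plane] for plane in cache]
    for _ in range(steps):
        nxt = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
        for x in range(nx):
            for y in range(ny):
                for z in range(nz):
                    # Explicit Euler step of the diffusion equation,
                    # c += k * Laplacian(c), with a 6-neighbour stencil
                    lap = (sample(c, x + 1, y, z) + sample(c, x - 1, y, z) +
                           sample(c, x, y + 1, z) + sample(c, x, y - 1, z) +
                           sample(c, x, y, z + 1) + sample(c, x, y, z - 1) -
                           6.0 * c[x][y][z])
                    nxt[x][y][z] = c[x][y][z] + k * lap
        c = nxt
    return c
```

Because the cost is a fixed number of sweeps over the cache, it depends only on the cache resolution and the step count, which is why the overhead stays in the range of seconds.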

Again, images speak louder than words (the tiny squares are compression artifacts from the blog; the original pictures are clean).

Fig 1: Tiny volumetric Suzanne with Single Scattering (SLS)


Fig 2: Tiny volumetric Suzanne with Multiple Scattering (MLS) 10 simulation steps (2s in MLS sim)


Fig 3: Tiny volumetric Suzanne with Multiple Scattering (MLS) 20 simulation steps (5s in MLS sim)


Fig 4: Dense volumetric Suzanne with SLS


Fig 5: Dense volumetric Suzanne with MLS (2 seconds in MLS simulation)


Fig 6: SLS volumetric sky – full scene antialiasing – render time : 17 min 48 s


Fig 7: MLS volumetric sky – full scene antialiasing – 18 min 5 s


Fig 8: SLS volumetric sky – full scene antialiasing – 17 min 35 s


Fig 9: MLS volumetric sky – full scene antialiasing – 17 min 45 s (so only 10s in MLS simulation) (here in high-definition)


MLS is a common effect in nature; by simulating it we can greatly enhance realism in volumetric rendering and also increase the range of effects that can be achieved within the sim_physics volumetric framework. The time overhead is very small compared to other MLS calculation methods, and the results more than compensate for those few extra seconds. With this, Blender will have cutting-edge Multiple Light Scattering render capabilities 🙂




This is a friend of Farsthary: many of you are eagerly waiting for a real working build on your platform, and as Farsthary cannot get connected this weekend…

Our super cool Matt Ebb reviewed and committed Farsthary's voxeldata and ERT two days ago.

So just compile and run the latest sim_physics revision. Here are the current builds (thank you graphicall people! :):

* MacOSX 10.5 Intel: build + simulator (thank you Jens!)

* Windows:  build + simulator (thank you Afalldorf!)

The ERT slider was renamed “Depth Cutoff” (1 = max speed-up for dense material, but with some accuracy loss).

Here is a blend with the right settings for voxeldata rendering:

Just launch it (naturally, after loading “smoke.bin” in the texture panel (F6 key)) and observe/tweak the settings…

(for best quality, decrease the “step size” at the top of the “Volume” panel)

Keep in mind that:

– The resolution setting in the simulator is just the dimension of the core: you have to add the boundaries (1 cell wide on each side; they contain the initial speeds), so if the resolution is 62 (the standard one), the resolution for voxeldata (“smoke.bin”) in the Blender texture panel is 64…

– The voxeldata currently requires an external texturing reference: just put an Empty object inside the cube, go to the “Map Input” panel, click “Object” and enter the name of the Empty…

– The ascending axis for the smoke (convection) is currently the -Y axis of the Empty, and the Empty's size is that of the voxel data: if you want the smoke to fill the whole cube, just make the Empty's axes as big as the cube. (Just look at the blend for the right position of the Empty, which must be in an appropriate corner for complete filling.)

– Here are the default settings for smoke simulation (resolution = 62 by default):

* beware of the frame number: about 1 MB per frame

* time step resolution: 0.1

* diffusion factor: 0 (increases the speed of dilution of the smoke)

* viscosity: 0 (increases the resistance of the smoke to changing its state of motion; inertia)

* buoyancy: 10 (increases the upward acceleration of the smoke: hot gases are less dense and so rise quickly)

* vorticity confinement factor: 1 (increases the turbulence)

* vorticity confinement amplification factor: 0 (a recursion factor for turbulence)

* temperature simulation: no (writes an additional bin file for voxeldata with these values)

* …..: no if you only want one density voxel texture..
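As a side note on the 62-vs-64 resolution point above, here is a hypothetical sketch of reading such a voxel grid. It assumes the file stores densities as raw little-endian 32-bit floats (an assumption about the format, not a documented spec of “smoke.bin”):

```python
import struct

def read_voxel_grid(path, core_resolution=62):
    """Hypothetical reader for a raw float voxel file.

    Assumes raw little-endian 32-bit floats. The simulator's 'resolution'
    is only the core; one boundary cell on each side makes the full grid,
    so a core of 62 gives a 64^3 file, matching the texture-panel value.
    """
    full = core_resolution + 2          # add 1 boundary cell per side
    count = full ** 3
    with open(path, "rb") as f:
        data = struct.unpack("<%df" % count, f.read(4 * count))
    return data, full
```

So the value you type into the Blender texture panel is always the simulator resolution plus two.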



Cutting down render times…

First the bad news: I have noticed that sim_physics volumetrics perform a full integration regardless of the optical thickness of the object. Here is an example of the current drawback: given a cube of very dense smoke aligned to the camera, let's assume it takes a render time T. Now if you scale the edges perpendicular to the viewer by 10, the render will take 10*T with no significant difference from the previous render!


That could be perfect for truly realistic renderers where every detail matters, but it also eats a lot of render time; that's why every volumetric render engine out there implements an optimization called Early Ray Termination (ERT).


This algorithm basically stops integrating along a ray when the total opacity is near its maximum value, since the voxel samples behind that point will not contribute to the image pixel at all. In my thesis build, since I accumulate alpha values from the beginning, I make the raymarch (integration) along the ray stop as soon as the accumulated alpha reaches 1.0 (full opacity). As a result, render times were incredibly shortened for very dense volumetrics (dense smoke, rocks and other special effects).

However, my build lacks optimization for empty volumes, contrary to Matt Ebb's (the “sim_physics” branch), which shines there. Consequently, I have taken some time to mix the strengths of both builds, and now for the good news 🙂

Since the sim_physics implementation is based on physical terms, the most direct mapping to final opacity seems to be the total transmittance, so I only need to check it as an additional condition for staying in the volumetric integration loop. That's basically the modification :) Simple but powerful, it allows a new range of effects for sim_physics and at the same time cuts down render times for thick volumetrics.
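The termination test can be sketched like this (illustrative Python, not the C patch; since the Thickness control ranges 0..3.0 while transmittance never exceeds 1, I read “accumulated transmittance” as the accumulated optical depth along the ray, which is an assumption on my part):

```python
import math

def raymarch(densities, step_size, extinction=1.0, thickness=0.0):
    """Sketch of early ray termination during volume integration.

    `thickness` mimics the new Thickness control (0..3.0): 0 performs the
    full integration, any other value stops the march once the accumulated
    optical depth along the ray reaches that value.
    """
    tau = 0.0          # accumulated optical depth along the ray
    radiance = 0.0     # light reaching the camera (emission-only model)
    samples = 0
    for d in densities:
        dtau = extinction * d * step_size
        # Beer-Lambert: this sample's light, attenuated by what is in front
        radiance += math.exp(-tau) * (1.0 - math.exp(-dtau))
        tau += dtau
        samples += 1
        if thickness > 0.0 and tau >= thickness:
            break  # samples behind this point cannot change the pixel visibly
    return radiance, samples
```

For a ray of 100 dense samples the full march visits all 100, while thickness=2.0 bails out after about 20, and the two results differ by only exp(-2) ≈ 0.14 of the pixel value.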

Here I show you some results :


No ERT: render time = 21s 93


ERT: 9s 39

The new parameter added is Thickness, ranging from 0 to 3.0. 0 performs the full volumetric integration for every ray, as sim_physics does now, while any other value stops the integration when the accumulated transmittance reaches the value specified by the user.


More examples:


No ERT: 21s 38


ERT, thickness=0.1 : 16s 29


ERT, thickness=0.2 : 14s 38

With big volumes and denser smoke the render time gained is more noticeable 🙂


No ERT: 1s 52                ERT, thickness=0.2 : 0s 52

Of course, the final quality is up to the user, and does not depend only on the thickness parameter, because there will always be a need for either very transparent volumetrics or very opaque ones.

Volumetrics with lots of empty areas will gain little from ERT, but they are already optimized thanks to your build, so now we can have the best of both builds!

Also, a new range of volumetric effects that sim_physics has lacked is now possible, since it was focused on thinner smoke; take a look:


No ERT: 5s 25


ERT, thickness=0.1 : 4s 47


ERT,thickness=0.3: 3s 80                           ERT, thickness=0.6 : 2s 86

Now a desired effect is achieved: the volumetric blobs seem dense enough to totally occlude anything behind them, and that is under the user's control. Previously, with combinations of density scale, absorption, emission and so on, this effect was very difficult if not impossible to achieve; volumetrics kept looking too transparent, and that was due to the full ray integration.

Here is a very interesting effect with little render time 🙂


ERT, thickness=2.0: 2s 59

And finally, some screen captures…





Sim_physics VoxelData texture

Hi all!

Thanks to help from Matt Ebb I have implemented a new texture type for the sim_physics volumetric branch: VoxelData. It does what its name says 🙂 it handles voxel datasets, as my previous build did but more easily, so it is now possible to mix volumetrics from several sources: textures, point density and so on, as Matt suggested.

I have also adjusted the simulator source to ask for user input, reducing the need to recompile the code for minor simulation changes.

Here is a tutorial

So, grab the simulator (source) and (Windows executable), and I hope you will like this:
