I've reopened my old fractal landscape scene and run different fractals on it, changed the lighting, etc., which is now much more fun with the current Arnold GPU.
I've added more renders to the gallery page.
I've picked up an idea from my social media stream: rendering a bubble soup. For this example, I used the idea from Entagma's tutorial, using a FLIP simulation for the bubble-soup movement.
The FLIP simulation is quite simple and would need some fixes to get the UV distortion fully correct, but I focused on the rendering part. The idea is straightforward: use fluid dynamics to distort the UVs on a sphere, then use a texture mapped through those distorted UVs to drive the thickness of a thin-film shader. I used the built-in thin-film feature of the regular Arnold Standard Surface shader. This makes the setup super simple; all you need is spheres with distorted UVs.
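As a rough illustration of the idea (not the actual Entagma setup), here is a minimal Python/numpy sketch that distorts a UV grid by integrating it along a velocity field; the swirl field below is made up, standing in for velocities from a FLIP sim.

# Sketch: distort a UV grid along a velocity field. The swirl field is a
# made-up stand-in for velocities coming from a FLIP simulation.
import numpy as np

res = 256
u, v = np.meshgrid(np.linspace(0.0, 1.0, res), np.linspace(0.0, 1.0, res))
uv = np.stack([u, v], axis=-1)       # (res, res, 2) UV grid

def velocity(uv):
    # Made-up swirl around the UV-space center.
    x = uv[..., 0] - 0.5
    y = uv[..., 1] - 0.5
    return np.stack([-y, x], axis=-1)

dt = 0.05
for _ in range(20):                  # accumulate distortion over 20 steps
    uv = (uv + dt * velocity(uv)) % 1.0

# 'uv' now holds swirled coordinates; look up a thickness texture through
# them to drive the thin-film shader, as described above.
print(uv.shape)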
The rendering from Arnold GPU is a little slow for GPU rendering, but Arnold CPU is extremely fast: the render time for an HD frame was 12 seconds on my super-slow Xeon CPU. It's by far the fastest thin-film rendering on the CPU I've seen so far.
Raw rendering. I only needed an AA sample value of 1. Below is the transmission albedo; it has quite a graphic look on its own.
It almost looks like NASA's infrared images of Jupiter.
For the next iteration, I will create this type of animation / noise structure within the shader only. It should be easy to recreate the dynamics with noise fields.
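A minimal sketch of that idea: take the 2D gradient of a scalar noise and rotate it 90 degrees, which gives a divergence-free "curl noise" flow you can advect the UVs with, no simulation required. The random grid below is just a stand-in for a proper noise function.

# Sketch: derive a divergence-free flow from scalar noise, as a stand-in
# for the FLIP velocities. Pure numpy, illustration only.
import numpy as np

rng = np.random.default_rng(7)
noise = rng.random((64, 64))             # stand-in for a proper noise function

def curl_field(n):
    # 2D "curl" of a scalar field: rotate its gradient by 90 degrees,
    # which makes the resulting flow divergence-free by construction.
    gy, gx = np.gradient(n)
    return np.stack([gy, -gx], axis=-1)

flow = curl_field(noise)
print(flow.shape)                        # (64, 64, 2) velocity field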
The following Standard Surface shader settings were used to create a soap bubble.
Base: 0
Specular: 1
Specular Color: 1 1 1
Specular Roughness: 0
Specular IOR: 1.0
Transmission: 1
Transmission Color: 1 1 1
Coat: 1
Coat IOR: 1.5
Thin Film: IOR 1.4
Thin Film Thickness: 500 [nm]
The thickness value is replaced by the thickness attribute multiplied by 500.
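For reference, a hedged Python sketch of applying these values to an HtoA Standard Surface VOP; the node path is hypothetical, and the parameter names follow Arnold's standard_surface convention, so check the actual VOP if they differ in your build.

# Sketch: apply the soap-bubble settings to an Arnold standard_surface
# VOP in Houdini. The node path is hypothetical; parameter names follow
# Arnold's standard_surface convention and may differ per HtoA build.
import hou

shader = hou.node("/mat/bubble/standard_surface1")   # hypothetical path
settings = {
    "base": 0.0,
    "specular": 1.0,
    "specular_roughness": 0.0,
    "specular_IOR": 1.0,
    "transmission": 1.0,
    "coat": 1.0,
    "coat_IOR": 1.5,
    "thin_film_IOR": 1.4,
    "thin_film_thickness": 500.0,   # nm; driven per point by attribute * 500
}
for name, value in settings.items():
    parm = shader.parm(name)
    if parm is not None:            # skip names that differ in this build
        parm.set(value)

# Color parameters are tuples:
shader.parmTuple("specular_color").set((1.0, 1.0, 1.0))
shader.parmTuple("transmission_color").set((1.0, 1.0, 1.0))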
For testing different workflows, I've created USD and Alembic files of a simple debris simulation. You can download the scene files and caches here:
https://www.dropbox.com/sh/hewpzan9hiahbl3/AADqEBR7HdfTWv-XzN4VO0Tla?dl=0
I added an example GafferHQ setup as well.
I made a two-minute video on how to create AOV passes with Arnold in Solaris. I am using the Arnold ROP node to handle the variable names and light path expressions for my own passes in LOPs.
Here is a two-minute video on how to create light-group passes within LOPs (Solaris) in Houdini 19.
Since Render Var nodes are hooked up in a chain, you need to be careful: wrong parameters etc. can trigger crashes and wrong updates in render engines.
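Here's a hedged Python sketch of how such a chain of Render Var LOPs could be wired up for per-light-group AOVs. The node-type and parameter names ('rendervar', 'primpath', 'sourceName', 'sourceType') are assumptions from the Houdini 19 Render Var LOP and may differ in your build; the upstream node and the group labels are made up.

# Sketch: build a chain of Render Var LOPs, one per light group.
# Node-type/parm names are assumptions from the H19 Render Var LOP;
# the upstream node and group labels are hypothetical.
import hou

stage = hou.node("/stage")
prev = stage.node("cameras1")            # hypothetical upstream LOP

for group in ("key", "fill", "rim"):     # labels set on each light's 'aov' parm
    var = stage.createNode("rendervar", "rendervar_" + group)
    var.setFirstInput(prev)
    var.parm("primpath").set("/Render/Products/Vars/" + group)
    var.parm("sourceName").set("C.*<L.'%s'>" % group)  # Arnold light-group LPE
    var.parm("sourceType").set("lpe")
    prev = var                           # chain them, as noted above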
Here’s a simple kick trick to get a list of AOVs and LPEs. The -laovs flag lists all the AOVs in the loaded scene, but if you give kick no input, you’ll get a list of all built-in AOVs defined by Arnold. This only works when you have the Arnold SDK installed.
For example, on Windows, run:
kick -laovs -i NUL
On Linux or macOS, run:
kick -laovs -i /dev/null
Available aovs:
Type:     Name:                  LPE:
--------------------------------------------------------------
VECTOR2   motionvector           (~)
RGBA      RGBA                   C.*
VECTOR    N                      (~)
FLOAT     Z                      (~)
RGB       direct                 C[DSV]L
RGB       indirect               C[DSV][DSVOB].*
VECTOR    Pref                   (~)
RGB       albedo                 C[DSV]A
RGB       emission               C[LO]
RGB       diffuse_direct         C<RD>L
RGB       background             CB
RGB       denoise_albedo         ((C<TD>A)|(CVA)|(C<RD>A))
RGB       sss_albedo             C<TD>A
RGB       specular_albedo        C<RS[^'coat''sheen']>A
RGB       diffuse                C<RD>.*
FLOAT     cputime                (~)
RGB       diffuse_indirect       C<RD>[DSVOB].*
RGB       sss_indirect           C<TD>[DSVOB].*
RGB       diffuse_albedo         C<RD>A
RGBA      shadow_matte
FLOAT     volume_Z               (~)
RGB       specular               C<RS[^'coat''sheen']>.*
RGB       coat_direct            C<RS'coat'>L
RGB       specular_direct        C<RS[^'coat''sheen']>L
RGB       specular_indirect      C<RS[^'coat''sheen']>[DSVOB].*
RGB       volume_direct          CVL
RGB       coat                   C<RS'coat'>.*
RGB       coat_indirect          C<RS'coat'>[DSVOB].*
RGB       coat_albedo            C<RS'coat'>A
RGB       sheen                  C<RS'sheen'>.*
RGB       transmission           C<TS>.*
RGB       transmission_direct    C<TS>L
RGB       transmission_indirect  C<TS>[DSVOB].*
VECTOR2   AA_offset              (~)
RGB       transmission_albedo    C<TS>A
VECTOR    P                      (~)
RGB       sheen_direct           C<RS'sheen'>L
RGB       volume                 CV.*
RGB       sheen_indirect         C<RS'sheen'>[DSVOB].*
NODE      shader                 (~)
RGB       sheen_albedo           C<RS'sheen'>A
RGB       sss                    C<TD>.*
RGB       sss_direct             C<TD>L
RGB       volume_indirect        CV[DSVOB].*
RGB       volume_albedo          CVA
FLOAT     A                      (~)
FLOAT     ZBack                  (~)
RGB       opacity                (~)
RGB       volume_opacity         (~)
FLOAT     raycount               (~)
UINT      ID                     (~)
NODE      object                 (~)
FLOAT     AA_inv_density         (~)
RGBA      RGBA_denoise           (~)
--------------------------------------------------------------
(~) No opacity blending
This is a quick overview of current render engines, for Houdini and in general, in terms of motion graphics and VFX usage.
There are many render engines out there; each one is unique and uses different methods to solve the problem. I am looking into Arnold, RenderMan, V-Ray, Octane and Redshift. For comparison, I added the Indigo Renderer.
There are different ways to render a scene, each with benefits and shortcomings. Let's start with the most common one.
image by Glare Technology
To be precise: backward path tracing. In backward ray tracing, an eye ray is created at the eye; it passes through the view plane and on into the world. The first object the eye ray hits is the object that will be visible at that point of the view plane. The ray tracer then lets that ray bounce around the scene, works out the exact coloring and shading of that point on the view plane, and displays it on the corresponding pixel of the monitor. That is the classical approach, which all of the render engines use as standard.
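To make that loop concrete, here is a heavily simplified, self-contained Python sketch of backward path tracing: grayscale, diffuse-only, with a made-up two-sphere scene (one diffuse object, one light). Real engines add BSDFs, importance sampling and much more; this is only the skeleton of the algorithm.

# Minimal backward path tracing sketch: eye rays bounce until they hit a
# light or escape. Grayscale, diffuse-only; the scene is made up.
import math, random

SCENE = [
    {"center": (0.0, 0.0, -3.0), "radius": 1.0, "albedo": 0.8, "emission": 0.0},
    {"center": (0.0, 2.5, -3.0), "radius": 0.5, "albedo": 0.0, "emission": 8.0},
]

def hit_sphere(sph, origin, direction):
    # Distance to the sphere along a normalized ray, or None.
    oc = [o - c for o, c in zip(origin, sph["center"])]
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - sph["radius"] ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0):
    if depth > 4:
        return 0.0                       # give up: path carried no light
    nearest = None
    for sph in SCENE:
        t = hit_sphere(sph, origin, direction)
        if t and (nearest is None or t < nearest[0]):
            nearest = (t, sph)
    if nearest is None:
        return 0.0                       # ray escaped into a black background
    t, sph = nearest
    if sph["emission"] > 0.0:
        return sph["emission"]           # the path found a light source
    hit = [o + t * d for o, d in zip(origin, direction)]
    n = [(h - c) / sph["radius"] for h, c in zip(hit, sph["center"])]
    # Diffuse bounce: random direction in the hemisphere around the normal.
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        if 0.0 < sum(x * x for x in d) <= 1.0:
            break
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    if sum(a * b for a, b in zip(d, n)) < 0.0:
        d = [-x for x in d]              # flip into the upper hemisphere
    return sph["albedo"] * trace(hit, d, depth + 1)

# One pixel's value: average many eye rays through that point of the view plane.
pixel = sum(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)) for _ in range(256)) / 256
print(pixel)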
Metropolis Light Transport (MLT)
Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing. This procedure has the advantage that once a path from light to eye has been found, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. Metropolis is often used in bidirectional mode (BDMLT).
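The underlying Metropolis idea, stripped of all rendering, fits in a few lines of Python: mutate the current sample slightly and accept the mutation with probability proportional to the brightness ratio. The 1D target function below is made up; in MLT the sample is a whole light path and f is that path's contribution to the image.

# Metropolis sampling in 1D, the core idea behind MLT: mutate the current
# sample and accept mutations proportionally to the brightness ratio.
# In MLT the "sample" is a full light path; here it's just a number.
import random

def f(x):
    # Made-up target: a narrow bright peak, like a hard-to-find caustic.
    return 1.0 if 0.40 < x < 0.42 else 0.01

x, samples = random.random(), []
for _ in range(100_000):
    y = (x + random.gauss(0.0, 0.05)) % 1.0   # small mutation of the "path"
    if random.random() < min(1.0, f(y) / f(x)):
        x = y                                  # accept: explore nearby paths
    samples.append(x)

# Most samples cluster in the bright region once it has been found:
print(sum(0.40 < s < 0.42 for s in samples) / len(samples))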
Path Guiding
A mix between path tracing and MLT: an unbiased technique for intelligent light-path construction in path-tracing algorithms. Indirect guiding improves indirect lighting by sampling from the better-lit or more important areas of the scene. The goal is to let path-tracing algorithms iteratively "learn" how to construct high-energy light paths.
Link to the latest SIGGRAPH paper.
Bidirectional Path Tracing (BDPT)
Regular backward path tracing has a hard time in indoor scenes with small light sources, because it takes lots of rays and bounces to find a tiny light in a room, just to see whether an object is lit by it.
With bidirectional path tracing, rays are fired from both the camera and the light sources. They are then joined together to create many complete light paths.
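As a toy illustration of that joining step, here is a Python sketch that connects every vertex of an eye subpath to every vertex of a light subpath. The vertices and the always-true visibility test are made up; a real implementation weights each connection by the BSDFs and multiple importance sampling.

# Sketch of the BDPT connection step, assuming an eye subpath and a light
# subpath have already been traced (lists of vertices with throughputs).
# Geometry and visibility are stubbed; illustration only.

def connect_subpaths(eye_path, light_path, visible):
    # Join every eye vertex to every light vertex into full paths.
    total = 0.0
    for ev in eye_path:
        for lv in light_path:
            if visible(ev["pos"], lv["pos"]):
                total += ev["throughput"] * lv["throughput"]
    return total

# Tiny usage example with made-up vertices and everything visible:
eye = [{"pos": (0, 0, 1), "throughput": 1.0},
       {"pos": (0, 1, 2), "throughput": 0.5}]
light = [{"pos": (3, 2, 1), "throughput": 2.0}]
print(connect_subpaths(eye, light, lambda a, b: True))  # -> 3.0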
image by Silverwing
Unlike most renderers, which work with RGB colours, spectral renderers use spectral colour throughout, from the physically-based sky model to the reflective and refractive properties of materials. The material models are completely based on the laws of physics. This makes it possible to render transparent materials like glass and water at the highest degree of realism. Spectral renderers are also good at simulating different media and atmospheric effects, like underwater or Earth's air atmosphere.
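A toy Python sketch of the difference: an RGB renderer multiplies three numbers per bounce, while a spectral renderer multiplies whole sampled spectra and only converts to RGB at the very end. The spectra below are invented for illustration.

# Toy spectral reflection: multiply light and material spectra per
# wavelength, instead of per RGB channel. All spectra here are made up.
wavelengths = list(range(400, 701, 20))            # nm, visible range

light = [1.0 for _ in wavelengths]                 # flat white illuminant
# A material whose reflectance peaks in the green part of the spectrum:
material = [max(0.0, 1.0 - abs(w - 550) / 120.0) for w in wavelengths]

reflected = [l * m for l, m in zip(light, material)]
for w, r in zip(wavelengths, reflected):
    print("%d nm: %.2f" % (w, r))
# An RGB renderer collapses such curves to 3 numbers up front, losing
# effects like dispersion that depend on per-wavelength behavior.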
What "biased render engine" actually means is pre-computing a lot of information before sending out rays from the camera. In simpler words, it uses optimization algorithms to greatly speed up the render time, but in doing so it is not strictly modeling the physics of light; it produces an approximation.
Here is an example of what spectral rendering is able to do:
Unlike other rendering systems which rely on so-called practical models based on approximations, Indigo’s sun and sky system is derived directly from physical principles. Using Rayleigh/Mie scattering and data sourced from NASA, Indigo’s atmospheric simulation is highly accurate. It’s stored with full spectral information, and allows fast rendering and real-time changes of sun position.
Some examples of atmosphere simulations by Indigo forum user Yonosoy.
Here are scene files with 3 frames of USD caches of a water simulation, free to use for any purpose. It's set up with lights and shaders for the Arnold renderer in GafferHQ. I will update the Dropbox folder in the future with setups for other render engines like Cycles X, Karma, Omniverse or Unreal 5.
I am using a self-made velocity blur node inside Gaffer to get motion blur on a changing point count.
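The principle behind velocity blur, sketched below: because the point count changes between frames, you can't interpolate positions across frames; instead you extrapolate each point along its own velocity to get the motion-blur samples within the shutter interval. This is just the idea in Python/numpy, not the actual Gaffer node.

# Sketch of velocity blur: extrapolate positions along per-point velocity
# instead of interpolating between frames (impossible when point counts
# change). Not the actual Gaffer node, just the principle.
import numpy as np

P = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5]])   # positions at shutter open
v = np.array([[0.0, 4.8, 0.0], [1.2, 0.0, 0.0]])   # velocity, units/second
fps, shutter = 24.0, 0.5                            # half-frame shutter

# Motion samples the renderer will interpolate between:
P_open = P
P_close = P + v * (shutter / fps)                   # move along velocity
print(P_close)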
Link to the cache and Gaffer files:
https://www.dropbox.com/sh/ifx72nevgkpnh6f/AAA5WpQ87eY2422rJQgYkuPna?dl=0
Here are some USD assets and scenes, free to use for any purpose. They are set up with lights and shaders for the Arnold renderer in GafferHQ. I will update the Dropbox folder in the future with setups for other render engines like Cycles X, Karma, Omniverse or Unreal 5.
https://www.dropbox.com/sh/8ha3xh4t120qku4/AADDipysw9Hp9xLdvGrObKj8a?dl=0
These examples use the Gaffer Instancer and the Arnold fog volume shader, which is quite fast on the GPU.
Models and textures are self-made, and the HDRI is from iHDRI.com.