
Apple M1 chip for VFX production

Article / 20 March 2022



First, a disclaimer: this is my own opinion and only mine.
Many people ask me whether the Apple silicon CPU/GPU is good for CGI or VFX production, and whether it is the future.
First, let's have a look at the concept behind Apple's silicon chip.

MacBooks and MacBook Pros with the M1 chip get 20+ hours of battery life, which is 3-4 times more than Intel laptops or previous MacBooks. Is the M1 so much better than the Intel or AMD chips? Hmm, maybe. Or is the battery so much better? I don't think so, it's still the same type of battery.
What really makes it stand out is the concept of CPU and GPU design.
The M1 CPU has its own memory on the chip package, without the need to move data back and forth to memory on the mainboard, which already saves power. Next, the GPU cores, which also handle the graphics display, are on the same chip. This means data doesn't need to be transferred across the mainboard to a graphics card, which requires a high-frequency bus to be fast. High frequency also means more power and more cooling, so that's the next power saving. The M1 design also has shared memory, meaning the CPU and GPU use the same memory. Unlike on a regular computer, this saves the extra work of shifting data from CPU memory to GPU memory, which again saves power.
Probably the biggest energy saving comes from the different CPU cores. M1 chips have two kinds of CPU cores: regular performance cores like in Intel or AMD chips, and low-power efficiency cores. These simpler and slower cores only need a fraction of the power of a regular full-featured CPU core. And that's the key: 95% of laptop use is writing, office work like Excel sheets, watching movies or surfing the internet, tasks which the low-power cores can easily handle. That's the main reason why the battery lasts so long. If you do raytracing all the time, your battery life will be in a similar range to an Intel-based laptop.

So, the concept of the M1 chips is having multiple processing units for different purposes on one chip. CPU cores, low-power CPU cores, GPU cores and AI cores (Nvidia calls its AI units Tensor cores) all sit on the chip with unified memory. This means the 128GB of RAM, as on the M1 Ultra, can be fully used by the GPU cores for graphics or heavy jobs like raytracing. That's huge! If you want 128GB of graphics card memory with Nvidia or AMD, you need to invest $14k+ just for graphics cards, plus the cost of a PC with a proper mainboard and cooling system. The downside of the concept is that it's not flexible: you can't upgrade the memory or the graphics card. But to be honest, how many times have you been in a store and bought more memory these days?

Overall the concept sounds promising, and I think it's the future for now. The concept is not new: the O2 workstation from Silicon Graphics used the same idea, and so did the Nintendo 64 video game console. It didn't turn out to be successful back then. At the time, the speed improvements of the Intel chips were so huge that they outran any benefits of the O2 design.




In their current state, the M1 chips hold up pretty well against Intel and AMD CPUs and beat anything in the same price class. Of course, you can outrun the M1 chips with a high-end desktop and water cooling if you want to do rendering, but at what cost? Also, the M1 chip still can't compete with a fully beefed-up Nvidia 3090 GPU in terms of rendering speed. The M1 is designed as an all-purpose workhorse, and an extremely fast one, and it has a big memory advantage. We will see the full advantage of the M1 chip once software developers start using the Metal API and harvest the combined power of the CPU, GPU and AI cores.
Video processing, for example, is unmatched; it profits from the shared memory, with no need to shuffle the insane amount of data of a 4K or 8K video between memory types. At the moment, the M1 chip is a powerful alternative to Intel's CPU dominance, and competition is good for the consumer.
As a workstation computer, Apple silicon is a worthy alternative. Even more so once software gets converted to native M1 applications and takes full advantage of the system. Rumour has it that Intel and AMD are working on similar architectures. The future looks bright!

The M1, M1 Pro and M1 Max share the same architecture, just with different amounts of cores and memory. The M1 Ultra is basically two M1 Max chips glued together, and the data exchange between them is so fast that it works as one chip.

Apple silicon is also attractive as a server or render-farm system. The low power usage and small form factor are efficient: you don't need big motherboards with fast bus lanes, which drive up the cost of cooling and power, a huge factor in this area.
The question remains: what is the future? Will the default PC chips stay dominant, or will ARM-architecture chip designs like the Apple M1 take over? From a technical standpoint, the ARM architecture is the most efficient way to run a computer system. Having the right CPU/GPU unit for the right job is power efficient and cheaper. The downside is that it's very complicated to write complex software for this kind of system. If AMD comes out with a 512-core Threadripper, it would not be very power efficient, but it would be extremely easy to develop software for, and that may triumph over power efficiency. Nevertheless, for mobile or small devices the ARM architecture will be the standard, just because the power saving is so impactful.

It will probably rule desktop machines too, because having a large number of full CPU cores has its physical limits. Many full CPU cores need a large mainboard, a lot of electrical power and a lot of cooling. We have also nearly reached the physical limit of how far we can shrink a CPU, around 5nm (nanometers). Much smaller is impossible, because the electrons start to interfere with each other and you no longer get the correct information out.
On the other side, new programming languages and AI are constantly improving, which helps with software development for abstract parallel processing. I think ARM is the future, and it will be interesting to see what kind of ARM systems the competition (Intel, Nvidia, AMD) brings us.




Alien World test

Work In Progress / 14 February 2022

Testing new 3D fractals in Houdini and the Arnold renderer.


Relighting an old shot

General / 05 February 2022

I've reopened my old fractal landscape scene, run different fractals on it, changed the lighting, etc., which is now much more fun with the current Arnold GPU.

I've added more renders to the gallery page.

Bubble Soup Rendering with Arnold

General / 26 December 2021

I've picked up an idea from my social media stream: rendering bubble soup. For this example, I used the idea from Entagma's tutorial, using a FLIP simulation for the bubble soup movement.



The FLIP simulation is quite simple and still needs fixes to get the UV distortion correct, but I focused on the rendering part. The idea is simple: use fluid dynamics to distort the UVs on a sphere, and use a texture mapped through those UVs to drive the thickness of a thin-film shader. I used the default thin-film feature of the regular Arnold Standard Surface shader. This makes the setup super simple; all you need is spheres with distorted UVs.

The rendering from Arnold GPU is a little slow for GPU rendering, but Arnold CPU is extremely fast: the render time for HD was 12 seconds on a super-slow Xeon CPU. It's by far the fastest thin-film rendering on the CPU I've seen so far.

Raw rendering. I only need an AA sample setting of 1. Below is the transmission albedo, which has quite a graphic look on its own.

 


It almost looks like NASA's infrared images of Jupiter.

For the next iteration I will create this type of animation / noise structure within the shader only. It should be easy to re-create the dynamics with noise fields.

The following Standard Surface shader settings were used to create a soap bubble; a scripted version of the same setup follows the list.

Base: 0

Specular: 1

Specular Color: 1 1 1

Specular Roughness: 0

Specular IOR: 1.0

Transmission: 1

Transmission Color: 1 1 1

Coat: 1

Coat IOR: 1.5

Thin Film: IOR 1.4

Thin Film Thickness: 500 [nm] (the thickness value is replaced by the thickness attribute multiplied by 500)
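If you prefer to script it, here is a minimal sketch of the same shader built with the Arnold SDK Python bindings (the arnold module that ships with kick). The AiNode() call uses the Arnold 6-style signature, and the per-point thickness attribute is only mentioned in a comment; treat this as a sketch of the settings above, not my exact scene setup.

    # Minimal sketch: the soap-bubble standard_surface set up through the Arnold
    # Python API. Parameter names follow the Standard Surface documentation;
    # newer SDKs may want a universe argument in AiNode().
    from arnold import *

    AiBegin()

    bubble = AiNode("standard_surface", "bubble_shader")
    AiNodeSetFlt(bubble, "base", 0.0)
    AiNodeSetFlt(bubble, "specular", 1.0)
    AiNodeSetRGB(bubble, "specular_color", 1.0, 1.0, 1.0)
    AiNodeSetFlt(bubble, "specular_roughness", 0.0)
    AiNodeSetFlt(bubble, "specular_IOR", 1.0)
    AiNodeSetFlt(bubble, "transmission", 1.0)
    AiNodeSetRGB(bubble, "transmission_color", 1.0, 1.0, 1.0)
    AiNodeSetFlt(bubble, "coat", 1.0)
    AiNodeSetFlt(bubble, "coat_IOR", 1.5)
    AiNodeSetFlt(bubble, "thin_film_IOR", 1.4)
    # In the actual setup this constant is replaced by the per-point thickness
    # attribute (driven by the distorted UVs) multiplied by 500 nm.
    AiNodeSetFlt(bubble, "thin_film_thickness", 500.0)

    # ... assign the shader to the bubble spheres and write out the scene here ...

    AiEnd()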

Debris USD files

General / 11 December 2021


For testing different workflows, I've created USD and Alembic files of a simple debris simulation. You can download the scene files and caches here:
https://www.dropbox.com/sh/hewpzan9hiahbl3/AADqEBR7HdfTWv-XzN4VO0Tla?dl=0


I added an example of a GafferHQ setup as well.


Basic AOV and light group passes with Arnold and Houdini 19 Solaris

General / 06 December 2021

I made a 2-minute video on how to create AOV passes with Arnold in Solaris. I am using the Arnold ROP node to handle the variable names and light path expressions (LPEs) for my own passes in LOPs.


Here is a 2-minute video on how to create light group passes within LOPs (Solaris) in Houdini 19.

Since renderVar nodes are hooked up in a chain, you need to be careful: wrong parameters etc. can trigger crashes and wrong updates in the render engines.
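For reference, a light group pass is just a render var whose LPE picks out a named group of lights. Assuming the lights' aov parameter has been set to "key" and "fill" (names made up for this example), the render vars would carry expressions like these, in the same Type / Name / LPE notation as the kick listing further down:

    RGB      key_light_group             C.*<L.'key'>
    RGB      fill_light_group            C.*<L.'fill'>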

Here's a simple kick trick to get a list of AOVs and LPEs. The -laovs flag lists all the AOVs in the loaded scene, but if you give kick no input, you'll get a list of all built-in AOVs defined by Arnold. This only works when you have the Arnold SDK installed.

For example, on Windows, run:

kick -laovs -i Nul

On Linux or macOS, run:

kick -laovs -i /dev/null

Available aovs:
    Type:    Name:                        LPE:
    --------------------------------------------------------------
    VECTOR2  motionvector (~)
    RGBA     RGBA                         C.*
    VECTOR   N (~)
    FLOAT    Z (~)
    RGB      direct                       C[DSV]L
    RGB      indirect                     C[DSV][DSVOB].*
    VECTOR   Pref (~)
    RGB      albedo                       C[DSV]A
    RGB      emission                     C[LO]
    RGB      diffuse_direct               C<RD>L
    RGB      background                   CB
    RGB      denoise_albedo               ((C<TD>A)|(CVA)|(C<RD>A))
    RGB      sss_albedo                   C<TD>A
    RGB      specular_albedo              C<RS[^'coat''sheen']>A
    RGB      diffuse                      C<RD>.*
    FLOAT    cputime (~)
    RGB      diffuse_indirect             C<RD>[DSVOB].*
    RGB      sss_indirect                 C<TD>[DSVOB].*
    RGB      diffuse_albedo               C<RD>A
    RGBA     shadow_matte
    FLOAT    volume_Z (~)
    RGB      specular                     C<RS[^'coat''sheen']>.*
    RGB      coat_direct                  C<RS'coat'>L
    RGB      specular_direct              C<RS[^'coat''sheen']>L
    RGB      specular_indirect            C<RS[^'coat''sheen']>[DSVOB].*
    RGB      volume_direct                CVL
    RGB      coat                         C<RS'coat'>.*
    RGB      coat_indirect                C<RS'coat'>[DSVOB].*
    RGB      coat_albedo                  C<RS'coat'>A
    RGB      sheen                        C<RS'sheen'>.*
    RGB      transmission                 C<TS>.*
    RGB      transmission_direct          C<TS>L
    RGB      transmission_indirect        C<TS>[DSVOB].*
    VECTOR2  AA_offset (~)
    RGB      transmission_albedo          C<TS>A
    VECTOR   P (~)
    RGB      sheen_direct                 C<RS'sheen'>L
    RGB      volume                       CV.*
    RGB      sheen_indirect               C<RS'sheen'>[DSVOB].*
    NODE     shader (~)
    RGB      sheen_albedo                 C<RS'sheen'>A
    RGB      sss                          C<TD>.*
    RGB      sss_direct                   C<TD>L
    RGB      volume_indirect              CV[DSVOB].*
    RGB      volume_albedo                CVA
    FLOAT    A (~)
    FLOAT    ZBack (~)
    RGB      opacity (~)
    RGB      volume_opacity (~)
    FLOAT    raycount (~)
    UINT     ID (~)
    NODE     object (~)
    FLOAT    AA_inv_density (~)
    RGBA     RGBA_denoise (~)
    --------------------------------------------------------------
    (~) No opacity blending


Render technique differences

General / 29 November 2021

This is a quick overview of current render engines for Houdini, and in general, in terms of motion graphics and VFX usage.

There are different render engines out there; each one is unique and uses different methods to solve the problem. I am looking at Arnold, RenderMan, V-Ray, Octane and Redshift. For comparison, I added the Indigo Renderer engine.

There are different ways to render a scene, each with benefits and shortcomings. Let's start with the most common one.

image by Glare Technology

Pathtracing (PT)

To be precise, backward path tracing. In backward ray tracing, an eye ray is created at the eye; it passes through the view plane and on into the world. The first object the eye ray hits is the object that will be visible at that point of the view plane. After the ray tracer allows that ray to bounce around, it figures out the exact coloring and shading of that point on the view plane and displays it on the corresponding pixel of the monitor. That's the classical way, which all of these render engines use as their standard.
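To make the idea concrete, here is a schematic (and heavily simplified) backward path tracer in Python. The scene, its intersect() method and the material helpers are hypothetical placeholders, and real renderers add importance sampling, next-event estimation, Russian roulette and much more.

    # Schematic backward path tracing: shoot a ray from the eye, let it bounce
    # around the scene, and accumulate the light it picks up along the way.
    MAX_BOUNCES = 5

    def trace_eye_ray(scene, ray):
        radiance = 0.0      # light arriving along the eye ray (per channel in a real renderer)
        throughput = 1.0    # how much of that light survives all bounces so far

        for bounce in range(MAX_BOUNCES):
            hit = scene.intersect(ray)            # first surface the eye ray sees
            if hit is None:
                radiance += throughput * scene.background(ray)
                break

            radiance += throughput * hit.material.emission

            # pick a new direction and continue the path
            new_dir, brdf, pdf = hit.material.sample_brdf(hit, ray.direction)
            if pdf == 0.0:
                break
            throughput *= brdf * abs(hit.normal.dot(new_dir)) / pdf
            ray = ray.spawn(hit.position, new_dir)

        return radiance

    # A pixel's colour is the average of many such eye rays fired through it.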


Metropolis light transport (MLT)

This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from the light to the eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing. Metropolis light transport is often used in bidirectional mode (BDMLT).


Path Guiding

A mix between path tracing and MLT: an unbiased technique for intelligent light-path construction in path-tracing algorithms. Indirect guiding improves indirect lighting by sampling from the better-lit or more important areas of the scene. The goal is to allow path-tracing algorithms to iteratively "learn" how to construct high-energy light paths.

Link to the latest SIGGRAPH paper.

Bidirectional Pathtracing (BDPT)

Regular backward path tracing has a hard time in indoor scenes with small light sources, because it takes lots of rays and bounces to find a tiny light in a room, just to see if an object is lit by that light.

With bidirectional path tracing, rays are fired from both the camera and the light sources. They are then joined together to create many complete light paths.

Spectral rendering

image by Silverwing

Unlike most renderers, which work with RGB colours, spectral renderers use spectral colour throughout, from the physically based sky model to the reflective and refractive properties of materials. The material models are completely based on the laws of physics. This makes it possible to render transparent materials like glass and water at the highest degree of realism. Spectral renderers are also pretty good at simulating different participating media and atmospheric effects, like underwater scenes or the Earth's atmosphere.
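A small sketch of what "spectral throughout" means in practice: radiance is carried per wavelength and only converted to XYZ/RGB at the very end of the pipeline. The cie_xyz_bar() colour-matching-function lookup here is a hypothetical placeholder.

    # Sketch: converting a sampled spectrum to CIE XYZ at the end of rendering.
    # An RGB renderer never has this per-wavelength data to begin with.
    import numpy as np

    def spectrum_to_xyz(wavelengths_nm, radiance, cie_xyz_bar):
        xbar, ybar, zbar = cie_xyz_bar(wavelengths_nm)   # hypothetical CMF lookup
        dl = np.gradient(wavelengths_nm)                 # wavelength bin widths
        X = np.sum(radiance * xbar * dl)
        Y = np.sum(radiance * ybar * dl)
        Z = np.sum(radiance * zbar * dl)
        return X, Y, Z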

Biased Rendering

What a biased render engine actually means is pre-computing a lot of information before sending out rays from the camera. In simpler words, it uses optimization algorithms to greatly speed up the render time, but in doing so it is not strictly modeling the physics of light; it is giving an approximation.


Here is an example of what spectral rendering is able to do:

Indigo renderer Planet-scale atmospheric simulation

Unlike other rendering systems which rely on so-called practical models based on approximations, Indigo’s sun and sky system is derived directly from physical principles. Using Rayleigh/Mie scattering and data sourced from NASA, Indigo’s atmospheric simulation is highly accurate. It’s stored with full spectral information, and allows fast rendering and real-time changes of sun position.

Some examples of atmosphere simulations by Indigo forum user Yonosoy.


Bee captured in mid flight

General / 20 November 2021

Some examples from the photoshoot: handheld and shot with a Nikon V1, with its ultrafast autofocus, on the Nikkor 32mm f/1.2 lens. It's so easy with this camera, 2-3 minutes and it was done.



Arnold operators tricks

Making Of / 11 November 2021


If you use Arnold operators, you can change shading and geometry without any delay in IPR rendering. Combine operators with the Houdini node graph and you have a simpler version of Solaris, Katana or Gaffer. Arnold operators have been around for years, from when Katana and Gaffer were still quite new and long before the idea of Solaris.

Operators allow advanced users to override any part of an Arnold scene and modify the Arnold universe at render time. Probably one of the most common use cases is to override parameters (e.g. shaders) inside a procedural (USD, ASS or Alembic). Here is a quick test with a 600GB cache file during Arnold GPU interactive rendering.

As you can see, it's quite fast to update the interactive rendering. You can use the kick command to query the possible target node parameter names for a set_parameter node, for example: kick -info or kick -info polymesh. I will do a more detailed tutorial in future posts.

I've used a set_parameter operator to change the radius of the foam particles with radius *= 1.3, to switch the particle mode to 'disk', 'sphere' or 'quad', or to change the shader with an assignment like shader = 'purple_shader'.
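As a rough sketch, here is how such an override can be expressed through the Arnold Python API; the "*foam*" selection pattern is made up for this example, and in Houdini you would type the same selection and assignment strings into the Arnold operator node instead.

    # Hypothetical sketch: a set_parameter operator that scales the radius of the
    # foam points and switches their mode, built with the Arnold Python API.
    from arnold import *

    AiBegin()

    op = AiNode("set_parameter", "foam_override")
    AiNodeSetStr(op, "selection", "*foam*")          # assumed node-name pattern
    AiNodeSetArray(op, "assignment",
                   AiArray(2, 1, AI_TYPE_STRING,
                           "radius *= 1.3",
                           "mode = 'disk'"))

    # The operator takes effect once it is connected to the options' operator graph.

    AiEnd()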



Here is a more detailed talk about the usage of Arnold operators from Autodesk and The Mill.

video link


Water USD files

General / 11 November 2021

Here are scene files with 3 frames of USD caches for a water simulation, free to use for any purpose. It's set up with lights and shaders for the Arnold renderer in GafferHQ. I will update the Dropbox folder in the future with setups for other render engines like Cycles X, Karma, Omniverse or Unreal 5.

I am using a self-made velocity blur node inside Gaffer to get motion blur on caches with changing point counts.
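The idea behind velocity blur, in a nutshell: when the point count changes between frames, the renderer can't match points across time samples, so the motion samples are rebuilt from each point's velocity attribute instead. A tiny sketch of that idea (plain NumPy, not the actual Gaffer node):

    # Sketch of velocity blur: build shutter-open/close position samples from the
    # per-point velocity attribute instead of matching points between frames.
    import numpy as np

    def velocity_blur_samples(P, v, fps=24.0, shutter=(-0.25, 0.25)):
        # P and v are (N, 3) arrays of positions and velocities (units per second);
        # shutter times are in frames relative to the current frame.
        return [P + v * (t / fps) for t in shutter]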


Link to the cache and Gaffer files:

https://www.dropbox.com/sh/ifx72nevgkpnh6f/AAA5WpQ87eY2422rJQgYkuPna?dl=0