
So what is VFX? I do it for a living, yes, but what is it?

When I am asked what I do for a living, there is a follow-up question so common that I now begin answering it right away. That question is, “Ok, that sounds interesting. So what do you actually do? What is Visual Effects really?”

It’s a fair question, actually, and one whose answer changes as time goes on. If I’m stumped for an answer, I try some of the following.

My staple answer now is,

“I add stuff to video footage that wasn’t there in the first place, or take it away if it wasn’t meant to be there.”

More often than not, the actual answer is,

“I create something with the appearance of having been shot as real life, but which is actually impossible to shoot, be that for practical, artistic or financial reasons.”

Ah, so that will answer it, right? Nope. I find these answers are enough for most people to understand at least vaguely what the end result of my job is. However, some are mad about film, TV dramas and whatnot and really want to show their interest. Again, fair enough. A question you might get is,

“So when you say you add things into video footage or film or whatever, how do you do that?”

That’s the really tricky one to answer, especially as everyone’s preconceptions of media, digital media in particular, are different. There’s the Make Awesome button, right? It’s all done by the computer, right?
However, wonderfully, a lot of people use Photoshop now and kind of get the concept of layering things over each other. Lately, I’ve been explaining with,

“VFX has similar principles to editing photographs, only these photos are on the move. Imagine using Photoshop for moving images, with all the layers and masks moving, the colour corrections animating and so on. I make elements, series of 2D images, that are composited on top of others, like layers are in Photoshop.”
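
For anyone who wants to peek a bit further under the hood, that layering boils down to the classic “over” operation that compositing packages use. A rough sketch in Python, assuming premultiplied RGBA images held as NumPy arrays (not the code any particular package actually runs), looks something like this:

```python
import numpy as np

def over(fg, bg):
    """Composite a premultiplied-alpha foreground over a background.

    fg, bg: float arrays of shape (height, width, 4) holding RGBA,
    with RGB already multiplied by alpha (the usual convention
    for rendered CG elements).
    """
    alpha_fg = fg[..., 3:4]            # foreground coverage
    return fg + bg * (1.0 - alpha_fg)  # classic "A over B"

# A "moving Photoshop": repeat the same layering for every frame.
# frames_fg and frames_bg would be lists of per-frame RGBA arrays:
# comped = [over(f, b) for f, b in zip(frames_fg, frames_bg)]
```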

I do almost exclusively 3D VFX, by which I mean those elements are created in a 3D package, such as Maya, and rendered out as 2D images which, just like photographs, have no physical depth to them. I no longer get bogged down in details when explaining VFX. To begin with, I don’t even mention the many jobs available: compositor, modeller, 3D generalist, render wrangler and so on. I used to say I did 3D animation, but that would lead people down the path of thinking I did Toy Story or was about to reinvent Wallace and Gromit. Another danger with the 3D moniker is the recent resurgence of 3D cinema, which is another kettle of fish altogether.

So there we are. A fairly basic answer which most people understand! Incidentally, I am a 3D generalist, available to hire in London, UK. Check out my work on the home page at https://www.ajcgi.co.uk.

3D Generalist available for hire in London from May onwards. 8 years of experience.

From the start of May onwards I will be available for generalist 3D VFX work in London, UK.
My showreel and examples of work are on the home page at https://www.ajcgi.co.uk

I’ve been beavering away this year at a huge rate of knots, working on such things as adverts, including one for BBC World Service, and documentaries for the BBC and Channel 4. Videos and suchlike to come.

I now have 8 years of experience producing shots across many media and platforms, mainly for dramas and documentaries.
More details on my PDF CV at https://www.ajcgi.co.uk/blog/?page_id=45

Alex
aj@ajcgi.co.uk

How To Build A Planet – My VFX Input

Not so long ago I worked at Lola Post, London, on another documentary hosted by Richard Hammond. Similar to the Journey to the Centre of the Planet and Bottom of the Ocean shows I worked on some time back, this entailed a heck of a lot of VFX.

The concept is that we see the constituent parts of scaled-down planets and the solar system being brought together in a large space over the Nevada desert. In order for Hammond to be able to present things at the necessary altitude, he is up at the top of a 2-mile-high tower, which is obviously not real for various reasons. Nor is the desert much of the time. Or Hammond.

My input on the show was working on dust and sand particle systems across two sequences of shots. I will warn you now that some of this will get technical.

The first sequence shows a large swirling cloud of high-silica sand and iron. This includes a shot which was to become my baby for a month or two. It pulls out from Hammond at the top of the tower, back through the dust cloud swirling around him, then really far back so we see the entire 2km-wide cloud in the context of the landscape around it. The whole shot is 30 seconds long.

The second sequence of shots shows the formation of Jupiter out of a large swirling disc of matter. Jupiter itself attracts dust inwards, which swirls as it approaches.

A few challenges presented themselves quite early on. One was creating particle systems in Softimage’s ICE that behaved correctly, especially when it came to dust orbiting Jupiter while the whole system itself swirls around the protosun. The initial swirling round the protosun was solved using a handy ICE compound that Lola have kicking about on their server. However, if you use that compound twice in an ICE tree it is only evaluated once: it sets the velocity using an execute node, effectively overriding the new velocity value for each particle rather than passing it out so it can be added to the previous velocity.

The solution to this was to break apart the compound. Integrating new nodes, including some from a Move Towards Goal node, meant I was able to make a new compound that I could proudly label Swirl Towards Goal. It sets the goal, then outputs a velocity which can be added to the velocity from the previous swirling compound higher up the tree. It even has sliders for distance falloff, swirl speed and weight.
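
ICE is a node-based system rather than code, but the principle behind Swirl Towards Goal is easy enough to sketch in plain Python. The key point is that the compound outputs a velocity contribution to be added to whatever came before it in the tree, instead of overwriting it. The maths, parameter names and numbers here are purely illustrative:

```python
import numpy as np

def swirl_towards_goal(pos, goal, swirl_speed=1.0, weight=1.0, falloff_dist=100.0):
    """Return a velocity *contribution* that pulls a particle towards a goal
    while swirling around it, rather than setting its velocity outright.

    pos, goal: 3-vectors. The sliders (falloff, speed, weight) mirror the
    compound's controls, but the formula itself is just an illustration.
    """
    to_goal = goal - pos
    dist = np.linalg.norm(to_goal) + 1e-6
    inward = to_goal / dist                        # pull towards the goal
    up = np.array([0.0, 1.0, 0.0])
    tangent = np.cross(up, inward)                 # swirl around the goal
    fade = max(0.0, 1.0 - dist / falloff_dist)     # distance falloff slider
    return weight * fade * (inward + swirl_speed * tangent)

# Additive, so two swirls in the same tree both have an effect:
velocity = np.zeros(3)
pos = np.array([50.0, 0.0, 0.0])
velocity += swirl_towards_goal(pos, goal=np.array([0.0, 0.0, 0.0]))   # protosun
velocity += swirl_towards_goal(pos, goal=np.array([30.0, 0.0, 5.0]))  # Jupiter
```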

The most challenging aspect of this project was actually rendering. The swirling dust in each of my shots is made up of about 4 different clouds of particles. One alone has 60 million particles in it.

Enter Exocortex Fury, the fabled point renderer that was to save our bacon. Aside from one fluffy cloud pass per shot, rendered as a simple Mental Ray job on a separate lower-detail cache, each cloud pass was rendered with Fury. Unlike traditional particle renderers that use the CPU to render, Fury is a point renderer which can take advantage of the raw power of graphics cards. The upside is a far faster render compared to traditional methods, and done correctly it is beautiful. To speed things up further, particles which were offscreen were deleted so Fury wouldn’t consider them at all. The downsides are that it can flicker or buzz if you get the particle replication settings wrong, and that it has no verbose output to tell you how far through rendering it is. Between us dust monkeys, many hours were spent waiting for Fury to do something or crash.
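
The offscreen cull is simple in principle: project each particle through the shot camera and throw away anything that lands outside the frame, plus a little margin. A rough Python sketch of the idea follows; it is not how it was actually wired up in ICE, and the camera matrix and margin value are stand-ins:

```python
import numpy as np

def cull_offscreen(points, view_proj, margin=0.1):
    """Keep only particles whose projection lands inside the frame.

    points: (N, 3) particle positions.
    view_proj: 4x4 combined view-projection matrix for the shot camera.
    margin: extra border in NDC so edge detail isn't clipped too tightly.
    """
    homo = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    clip = homo @ view_proj.T
    w = clip[:, 3:4]
    in_front = w[:, 0] > 0.0                                 # behind camera? cull
    ndc = np.divide(clip[:, :3], w,
                    out=np.zeros_like(clip[:, :3]), where=w != 0)
    lim = 1.0 + margin
    visible = in_front & np.all(np.abs(ndc[:, :2]) <= lim, axis=1)
    return points[visible]
```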

Adding to the complications was the scale of the main scene itself. The tower is rendered in Arnold, a renderer that works best when using one Softimage unit per metre. Unfortunately the huge scene scale caused problems elsewhere. In a couple of shots the camera is so high off the ground that mathematical rounding errors were causing its translation to wobble. Also, as particles, especially Fury-rendered ones, prefer a small scene to a gigantic one for similar mathematical reasons, they weren’t rendering correctly, if at all. The particles were kept in their own scenes for loading speed and memory overhead purposes, but in order to fix these issues the whole system was scaled to 1/5 of the main scene scale and offset so that it sat closer to the scene origin yet would still composite on top of the tower renders perfectly.
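
The wobble is straightforward floating point behaviour: the further a coordinate is from zero, the coarser the steps it can be stored in. A quick Python illustration of both the problem and the scale-and-offset idea, using single precision and made-up numbers standing in for the real shot values:

```python
import numpy as np

# At 32-bit float precision, a point 200,000 units from the origin can only
# be represented in steps of about 1.5 cm, hence visible wobble; close to
# the origin the steps are far finer.
print(np.spacing(np.float32(200_000.0)))   # ~0.015625 units between values
print(np.spacing(np.float32(2_000.0)))     # ~0.000122 units between values

# The fix: keep the particle system near the origin at a smaller scale,
# then transform it back so it lines up with the full-size tower renders.
SCALE = 1.0 / 5.0                           # the 1/5 factor from the shots
OFFSET = np.array([0.0, 0.0, 0.0])          # illustrative offset towards origin

def to_particle_space(p_world):
    return (p_world - OFFSET) * SCALE

def to_world_space(p_particle):
    return p_particle / SCALE + OFFSET
```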

How to Build a Planet is on show in the US on Discovery’s Science channel before being shown in the UK in November.
Discovery Sci – How to Build a Planet

Brand New Showreel!

The work in the following reel was created using Softimage, Terragen, Nuke and PFTrack.
Text in the bottom right shows what I created for each shot.
See PDF for further details.
Download PDF shot breakdown

Edited on 15th Oct – Now updated with work from The Bible Series and How To Build a Planet

CCTV-9 Documentary Channel Ident

Update! The CCTV-9 channel branding, including this ident, recently won a Gold for Best Channel Branding at the PromaxBDA awards in Singapore!

I was called back in to work at Lola in London for this Chinese TV channel ident for CCTV-9 Documentary. Only 2 of us worked on this shot: myself and Tim Zaccheo, head of 3D at Lola.

The ident sees a waterfall coming down the side of a cubic mountain. The camera pulls back down a valley with scenery akin to the Guilin area of China, then out into space to reveal that the Earth is indeed cubic. CCTV have a cubic theme, so this makes sense in context. Thanks to the real-world scale of Terragen and the existing workflow at Lola, Tim was able to come up with a camera move that, once imported into Terragen, matched perfectly with the Softimage scene. The Earth’s textures and even the clouds lined up perfectly in both sections, allowing a seamless blend.

My part in this was embellishing the initially blocked-out Terragen scene with the necessary details to make it look like the Guilin mountains. A challenge there was that Terragen is great for pointy, Alpine-style mountains dusted with snow; that is easy out of the box. Guilin mountains are almost bell-jar in shape, carpeted in trees with rocky cliffs here and there. The valleys between have been eroded away by rivers, leaving behind relatively flat farming land.

The solution to this was a variety of painted map shaders. Although these allow flexibility and great detail when it comes to controlling displacements, they’re best replaced with actual textures where possible, otherwise the rendering gets very intensive. In this case that wasn’t really an option. The painted maps were used to define areas of low and high ground, to define where the river goes, and to control where the farmland appeared.
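
In spirit, each painted map is just a greyscale mask blending one terrain treatment into another. Terragen does this with shader nodes rather than code, but a minimal sketch of the idea in Python, with made-up names and arrays standing in for the real maps, would be:

```python
import numpy as np

def blend_terrain(base_height, hill_height, hill_mask,
                  river_mask, river_depth=5.0):
    """Blend heightfields using painted greyscale masks (0..1 arrays).

    base_height / hill_height: candidate heights per terrain point.
    hill_mask: painted map marking where the bell-shaped hills rise.
    river_mask: painted map marking the river course to carve out.
    All arrays share the same 2D shape; names are illustrative only.
    """
    height = base_height * (1.0 - hill_mask) + hill_height * hill_mask
    height = height - river_mask * river_depth      # carve the river valley
    return height
```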

As there is so much foliage in the area, there needed to be a solution that didn’t rely entirely on populations of tree objects. In come the procedural trees. These are essentially a series of overlaid displacement textures that build up to create the cauliflower-head look of the trees. Similarly, the farmland was achieved using a tiled texture of fields and a few trees distributed along hedgerows. It’s very easy in a procedural program like Terragen to forget that a bitmap texturing approach is still a valid method and often faster.
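
The cauliflower-head look is really just several layers of bumpy noise stacked on top of each other, masked so canopy only appears where the trees are painted in. A rough Python sketch of that layering, using a crude stand-in for Terragen’s own fractal noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def bump_noise(shape, cell):
    """Stand-in for proper fractal noise: random bumps of roughly a given
    size, made by nearest-neighbour upsampling a coarse random grid."""
    coarse = rng.random((shape[0] // cell + 2, shape[1] // cell + 2))
    ys = np.linspace(0, coarse.shape[0] - 1.001, shape[0])
    xs = np.linspace(0, coarse.shape[1] - 1.001, shape[1])
    return coarse[ys.astype(int)][:, xs.astype(int)]

def canopy_displacement(shape, tree_mask, scale=8.0):
    """Layer progressively smaller bumps to get the clumpy canopy look,
    masked so trees only appear where painted. Names are illustrative."""
    disp = np.zeros(shape)
    for cell, amp in [(32, 1.0), (16, 0.5), (8, 0.25)]:   # big clumps to fine detail
        disp += amp * bump_noise(shape, cell)
    return scale * disp * tree_mask
```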

Something that took a while to figure out was the cubic mountain at the start. The cube was initially displaced using a square displacement map with a falloff around the edges, plus an area eroded away at the front. The stony displacements were then layered on top of this, taking the new normals into account rather than throwing everything up vertically as is the default. It was then eroded in various directions using extra displacement maps.
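
The difference that made the cube work is displacing along each point’s current normal instead of straight up the world Y axis. A tiny sketch of the two options, assuming per-point normals are already available after the first displacement pass:

```python
import numpy as np

def displace_vertical(points, heights):
    """Default behaviour: push every point straight up the world Y axis."""
    up = np.array([0.0, 1.0, 0.0])
    return points + heights[:, None] * up

def displace_along_normals(points, normals, heights):
    """Push each point along its own (post-displacement) normal, so rocky
    detail sticks out of the cube's vertical faces instead of sliding up them."""
    return points + heights[:, None] * normals
```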

The waterfall was Tim’s baby, done entirely in Softimage’s ICE using fairly straightforward techniques, but along with some coloured mattes it all came together nicely in the comp.

There’s no sound on the video above by the way. I’ll replace it with one with audio once I’ve located it.