Category Archives: Video

Over the Hills and Far Away… Teletubbies Came to Play. And me. I was there too.


From the summer of 2014 through to the summer of 2015 I was involved in a project on a scale I'd not played a part in before. A new series of Teletubbies was announced as being in the works, and Lola Post, where I was freelancing as a 3D type, had won the contract for all the VFX. All 60 episodes of it.

This amounted to hundreds of shots, a volume ordinarily associated with film projects. Initially I was involved in pre-production, working alongside Pinewood-based prop-makers Propshop and the production company, Darrall MacQueen, laying out designs for the set and other VFX assets. The actors were to be shot on a blue screen, with the set built as a 1:20 scale model. It was our digital set layout which was 3D printed and then dressed by the prop shop staff. This allowed us to use the same 3D data when lining shots up in post-production.

During the shoot I was working out of a hair and make-up room next to stage three at Twickenham Studios, alongside the DIT. This allowed me to continue developing assets for the 3D team back in London, while still being available on set for questions about set extensions, digital assets and so on.

Once the team on set were up to speed and questions of a 3D nature were thin on the ground, I returned to Lola Post in Fitzrovia. There we had set up a dedicated office and team specifically for the Teletubbies. My main responsibility there was to be lead 3D TD. However, I was not the only one. Tiddlytubbies had become such a large part of the show that they had their own section, led by Jonny Grew and Josh George, with much of the animation by Steve White.

In the meantime, I had become what the supervisor, Garret Honn, had described as 'chief landscape gardener'. Every external shot has a set extension. The real scale model is only 4 metres across, representing an 80 metre circle in Teletubbyland. I had come up with a set extension system which was refined as the project went on, and which allowed a few of us to continually churn through the many moving or high-angle shots that required distant hills, grass, clumps of flowers and trees to be seen beyond the edge of the model set. For many shots which were lower or nowhere near the edge of the set, we got away with putting a large panoramic image in the background and sliding it around from shot to shot.

For the sake of generating distant hills with realistic lighting and so on, we'd gone down the route of using Terragen, software I've used many times for external landscapes. However, with its relatively slow render times, it was only truly used for the opening and closing credits, where the light swings round, creating raking shadows. The rest of the time the background is a large cyclorama, rather akin to a zoetrope, constructed out of Terragen renders. This approach kept render times down, something that was very important with such a volume of material to get through.

Naturally enough, Teletubbyland needs more than just grass and hills, so there are trees, flowers, many tufts of grass and so on. The trees are based on illustrations created by an independent illustrator, brought to life through a combination of software: Speedtree, Mudbox and ultimately Softimage. Additionally, we created flowers based on the scale models from Propshop, alongside the stunt ball for Laa Laa, custard bubbles, snowballs, and non-spherical assets such as the windmill. Naturally there was toast. Custard and toast. No wonder this bunch are funny colours.

Once the project had truly gotten underway, I spent roughly half my time answering questions, watching dailies, attending meetings and keeping an eye on the render farm. In that regard it was the most supervisory role I've undertaken. The rest of the time was spent tracking shots, managing who did what and occasionally doing shots myself. Props to the rest of the 3D team for their untiring efforts, especially Olly Nash and Ismini Sigala, who were both in it for the long haul. Between us and Tammy Smith we've tracked more than enough shots for a lifetime, animated many flowers and a lot of spherical objects.

Naturally, there's more to life than the 3D side of VFX. The 2D side was phenomenal in scale. So many blue screen shots, so little time. It all needed keying, roto work, cleanup and the final compositing too. To list everyone here would be crazy, and considering only a handful of people will read down to this paragraph, I'm not going to list them all! Just be aware that for every shot on Teletubbies that you watch with your kids, about five people will have touched it, and most of those will be compositors and roto artists. Thanks to all involved. Your efforts did not go unnoticed!

Teletubbies is currently on air in the UK and is bound to be shown elsewhere soon. Response seems to be positive so far. Due to very strict licensing agreements I can’t currently post videos from the show here, so it’s over to the BBC with you!

Teletubbies page at Cbeebies

How To Build A Planet – My VFX Input

Not so long ago I worked at Lola Post, London, on another documentary hosted by Richard Hammond. Similar to the Journey to The Centre of The Planet and Bottom of The Ocean shows I worked on some time back, this entailed a heck of a lot of VFX.

The concept is that we see the constituent parts of scaled-down planets and the solar system being brought together in a large space over the Nevada desert. In order for Hammond to be able to present things at the necessary altitude, he is up at the top of a 2-mile-high tower, which is obviously not real for various reasons. Nor is the desert much of the time. Or Hammond.

My input on the show was the dust and sand particle systems, spread across two sequences of shots. I will warn you now that some of this will get technical.

The first sequence shows a large swirling cloud of high-silica sand and iron. This includes a shot which was to become my baby for a month or two. It pulls out from Hammond at the top of the tower, back through the dust cloud swirling around him, then really far back so we see the entire 2km wide cloud in the context of the landscape around it. The whole shot is 30 seconds long.

The second sequence of shots shows the formation of Jupiter out of a large swirling disc of matter. Jupiter itself attracts dust inwards, which swirls as it approaches.

A few challenges presented themselves quite early on. One was creating particle systems in Softimage's ICE that behaved correctly, especially when it came to dust orbiting Jupiter as the whole system itself swirls around the protosun. The initial swirling round the protosun was solved using a handy ICE compound that Lola have kicking about on their server, but if you use that compound twice in an ICE tree only one of them has any effect: it sets the velocity directly using an execute node, overriding the value for each particle rather than passing a velocity out so it can be added to the result of the previous compound.

The solution was to break the compound apart. By integrating new nodes, including some borrowed from a Move Towards Goal node, I was able to make a new compound that I could proudly label Swirl Towards Goal. It sets the goal, then outputs a velocity which can be added to the velocity from the previous swirling compound higher up the tree. It even has sliders for distance falloff, swirl speed and weight.
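
For anyone curious what that sort of compound boils down to, here is a minimal Python/NumPy sketch of the idea outside ICE. The function name, the sliders and the small inward pull are illustrative, not the actual compound; the point is that it returns a velocity contribution to be added to the existing velocity, rather than overwriting it.

    import numpy as np

    def swirl_towards_goal(pos, goal, axis=(0.0, 1.0, 0.0),
                           swirl_speed=1.0, falloff_dist=100.0, weight=1.0):
        # Returns a velocity *contribution* meant to be added to whatever
        # velocity the particle already has, not to replace it.
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)

        to_goal = np.asarray(goal, dtype=float) - np.asarray(pos, dtype=float)
        dist = np.linalg.norm(to_goal)
        if dist < 1e-6:
            return np.zeros(3)

        # Tangential (swirl) direction: perpendicular to the axis and the
        # direction towards the goal, so the particle orbits rather than
        # flying straight in.
        tangent = np.cross(axis, to_goal / dist)
        t_len = np.linalg.norm(tangent)
        if t_len < 1e-6:          # particle sits on the rotation axis
            return np.zeros(3)
        tangent /= t_len

        # Fade the effect out with distance from the goal.
        falloff = max(0.0, 1.0 - dist / falloff_dist)

        inward = to_goal / dist
        return weight * falloff * swirl_speed * (tangent + 0.25 * inward)

    # Per particle, per frame, the result gets *added*:
    #     velocity += swirl_towards_goal(position, jupiter_centre) * dt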

The most challenging aspect of this project was actually rendering. The swirling dust in each of my shots is made up of about 4 different clouds of particles. One alone has 60 million particles in it.

Enter Exocortex Fury, the fabled point renderer that was to save our bacon. Aside from one fluffy cloud pass per shot, rendered as a simple Mental Ray job on a separate, lower-detail cache, each cloud pass was rendered with Fury. Unlike traditional particle renderers, which rely on the CPU, Fury is a point renderer that can take advantage of the raw power of graphics cards. The upside is a far faster render compared to traditional methods, and done correctly it is beautiful. To speed things up further, particles which were offscreen were deleted so Fury wouldn't consider them at all. The downsides are that it can flicker or buzz if you get the particle replication settings wrong, and it has no verbose output to tell you quite how far it is through rendering. Between us dust monkeys, many hours were spent waiting for Fury to do something or crash.
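
The offscreen cull is simple in principle: if a particle never projects into frame, delete it before the renderer ever sees it. A rough sketch of that test, assuming we have the camera's view-projection matrix to hand; this is illustrative, not the production setup.

    import numpy as np

    def cull_offscreen(points, view_proj, margin=0.1):
        # points:    (N, 3) world-space particle positions
        # view_proj: 4x4 camera view-projection matrix
        # margin:    extra border in NDC units so edge or motion-blurred
        #            particles survive the cull
        homo = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
        clip = homo @ view_proj.T                                   # project to clip space
        w = clip[:, 3]
        safe_w = np.where(np.abs(w) < 1e-8, 1e-8, w)
        ndc = clip[:, :2] / safe_w[:, None]                         # normalised device coords

        limit = 1.0 + margin
        visible = (
            (np.abs(ndc[:, 0]) <= limit)
            & (np.abs(ndc[:, 1]) <= limit)
            & (w > 0.0)                                             # in front of the camera
        )
        return points[visible]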

Adding to the complications was the scale of the main scene itself. The tower is rendered in Arnold, a renderer that works best when using one Softimage unit per metre. Unfortunately the huge scene scale caused problems elsewhere. In a couple of shots the camera is so high off the ground that floating-point rounding errors were causing the translation to wobble. Particles, especially Fury-rendered ones, also prefer a small scene to a gigantic one for similar mathematical reasons, and weren't rendering correctly, if at all. The particles were already in their own scenes for loading speed and memory overhead purposes, but to fix these issues the whole system was scaled to 1/5 of the main scene scale and offset so that it sat closer to the scene origin, yet would still composite on top of the tower renders perfectly.
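
The reasoning behind that trick is that a uniform scale and offset applied to everything, camera included, leaves the rendered image unchanged while keeping the numbers small enough for floating-point comfort. A toy illustration with made-up values:

    import numpy as np

    SCALE = 0.2                                    # 1/5 of the main scene scale
    OFFSET = np.array([-5000.0, 0.0, -2000.0])     # made-up shift towards the origin

    def to_render_space(world_positions):
        # Applied to every particle position in the cache.
        return (world_positions + OFFSET) * SCALE

    def camera_to_render_space(camera_position):
        # The camera must get the identical transform (rotation untouched),
        # otherwise the pass won't line up over the full-scale tower renders.
        return (camera_position + OFFSET) * SCALE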

How to Build a Planet is on show in the US on Discovery's Science channel before being shown in the UK in November.
Discovery Sci – How to Build a Planet

South Bank Show Trailer

A few months back I worked on a trailer for the South Bank Show, featuring Melvyn Bragg walking through the Leake St tunnel under Waterloo station. Bragg was shot on a greenscreen, with the environment being recreated in Softimage by myself and fellow freelancer Rasik Gorecha.

The obvious question there is why? Why can’t Mr. Bragg just go into the tunnel and we shoot it there, huh? Well, there are a few obvious answers to that. The tunnel, itself a road with access to a car wash half way down, is dank, contains certain undesirable types Mr. Bragg would probably best steer clear of, and is continually in flux thanks to it being one of the few areas in London where it is legal to graffiti. It’s also not the most comfortable of places to sit around in for long hours on a shoot. The other reason is that lots of the graffiti was to be replaced with animated posters and artwork featuring well known faces from the arts. That process is a lot easier if created digitally and lit using indirect lighting solutions.

My input on this was twofold. Firstly, I set up the lighting in Arnold. After an hour or so of experimenting, the solution found was to place shadow-casting point lights in the ceiling under about half of the strip light fittings, plus a spot light at either end of the tunnel. Additional fill lights were used to brighten up the nearest walls. The lights in the walls toward the back of the tunnel are merely textured models and not actual lights.

One of the things with a global illumination solution like Arnold is that it can lead to fizzing. One way of lighting this tunnel would have been area lights. That plan was ditched extraordinarily fast as it led to lots of noise; besides, the modelled light fittings themselves act as bounce cards, essentially negating the need for area lights at all.

Rasik had the majority of the modelling done by the time I joined the project but was yet to embark on the cables. Whilst he set up the initial texturing, I became cable monkey. I modelled cables and brackets, trays for them to run along, pipes and all sorts. It took a few days of continually modelling cables before I'd finished them. Simple stuff, but it really added to the believability.

South Bank Show Trailer

The top of the two images above shows the model with finished textures, and below that is the finished lighting.

The final trailer is not as it appeared on Sky, for two reasons. They added their own logo at the end, naturally enough, and, bizarrely, they own full copyright of the sound, so mine's a silent movie. Add your own ragtime soundtrack as appropriate.

The Bible Series – VFX

Recently in America, The History Channel broadcast The Bible Series, knocking American Idol into the weeds for ratings. The real reason, of course, to celebrate this fact is that I worked on the VFX, along with many others hired by or working at Lola Post, London.

There were hundreds of shots. As the series covers many well-known events that are either epic in scale or miraculous in nature, it’s hard to cut corners with this kind of content.

One of the advantages of VFX is the ability to extend sets or create new ones. The most used model shared amongst the 3D crew was that of Jerusalem. It was originally an off-the-shelf model of a real scale model, intended to be seen from a distance, so it needed to be tweaked and improved upon where appropriate on a shot-by-shot basis. With so many artists having touched the model at one point or other, the lighting setup, materials and textures were improved to the extent that, once composited, the shots really shone. Many of the shots I did for The Bible featured Jerusalem, either as an entirely CG set or as an extension tracked into existing footage.

One story covered in the show is that of Moses parting the Red Sea, with the Israelites being chased by Egyptians through the parted waves. The shot I did for this sequence is a slightly top-down shot, following the fleeing crowds through the freshly created gap in the ocean. To achieve this, I effectively split the 3D ocean into horizontal grids and vertical grids. The horizontal grids were simulated with aaOcean in Softimage. The vertical ones were distorted to represent the sea walls, textured with composited footage of waterfalls running upwards. The join where the two sets of grids met was blended using a matte and Nuke's iDistort node. Softimage's CrowdFX was used for the fleeing crowd. Twirling smoke elements were added once the shot was passed to the comp.
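
At its core, the blend at the join is a matte-weighted mix between what the flat ocean grids produce and what the distorted wall grids produce. The real version was built in the comp with a matte and iDistort; this little NumPy stand-in just shows the underlying idea.

    import numpy as np

    def blend_at_join(ocean_img, wall_img, matte):
        # ocean_img, wall_img: (H, W, 3) renders of the flat and wall grids
        # matte:               (H, W) mask, 1.0 where the wall should win
        m = matte[..., None]             # broadcast the matte over the colour channels
        return m * wall_img + (1.0 - m) * ocean_img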

An advantage of Softimage's ICE simulation system is that making a convincing cloud or mist is a fairly straightforward procedure. I was tasked with creating a storm over Jericho, a swirling mass of cloud and debris that had to look huge and imposing whilst looking down through the eye of the storm.
With clouds, water and many other fluids, scale can be half the battle. A large wave only looks large if surrounded by smaller ones, and a cloud only looks like a huge ominous mass if seen as a collection of smaller masses, but go too small and the effect is lost entirely. In the case of the cloud, if too many small details were apparent it very quickly seemed fluffy. Cute a storm is not. Once the cloud's scale was correct, there was the issue of it having to spin, distort and generally seem organic. Handily, ICE has a node for rotating clouds around points in space, so that solved that one. The distortion was shape animation applied to a lattice attached to the cloud.
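
That "rotate around a point" behaviour boils down to handing each particle a tangential velocity, v = omega x (p - centre). A minimal sketch of that maths, not the ICE node itself:

    import numpy as np

    def vortex_velocity(positions, centre, axis=(0.0, 1.0, 0.0), angular_speed=0.2):
        # positions: (N, 3) particle positions
        # Returns (N, 3) velocities that spin the cloud about 'centre'
        # around 'axis' at 'angular_speed' radians per second.
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)
        omega = angular_speed * axis
        r = np.asarray(positions, dtype=float) - np.asarray(centre, dtype=float)
        return np.cross(omega, r)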

The rest of my involvement on The Bible was tracking shots in PFTrack and adding in set extensions. Most of the 3d content was rendered using Solid Angle’s Arnold Renderer.

The shots I mention above, along with a few others, are now online in my updated 2013 reel.

For further details on VFX in The Bible, check out FXGuide’s feature on Lola’s work.

Brand New Showreel!

The work in the following reel is created using Softimage, Terragen, Nuke and PFTrack.
Text in the bottom right shows what I created for each shot.
See PDF for further details.
Download PDF shot breakdown

Edited on 15th Oct – Now updated with work from The Bible Series and How To Build a Planet

CCTV-9 Documentary Channel Ident

Update! The CCTV-9 channel branding, including this ident, recently won a Gold for Best Channel Branding at the PromaxBDA awards in Singapore!

I was called back in to work at Lola in London for this Chinese TV channel ident for CCTV-9 Documentary. Only 2 of us worked on this shot: myself and Tim Zaccheo, head of 3D at Lola.

The ident sees a waterfall coming down the side of a cubic mountain. The camera pulls back down a valley with scenery akin to the Guilin area of China, then out into space to reveal that the Earth is indeed cubic. CCTV have a cubic theme, so this makes sense in context. Thanks to the real-world scale of Terragen and the existing workflow at Lola, Tim was able to come up with a camera move that, once imported into Terragen, matched perfectly with the Softimage scene. The Earth's textures and even the clouds lined up perfectly in both sections, allowing a seamless blend.

My part in this was embellishing the initially blocked-out Terragen scene with the necessary details to make it look like the Guilin mountains. A challenge there was that Terragen is great for pointy, Alpine-style mountains dusted with snow; that is easy out of the box. Guilin mountains are almost bell-jar shaped, carpeted in trees with rocky cliffs here and there. The valleys between have been eroded away by rivers, leaving behind relatively flat farming land.

The solution to this was a variety of painted map shaders. Although these allow flexibility and great detail when it comes to controlling displacements, they're best replaced with actual textures where possible, or else the rendering gets very intensive. In this case that wasn't really an option. The painted maps were used to define areas of low and high ground, to define where the river goes and to control where the farmland appeared.

As there is so much foliage in the area, there needed to be a solution that didn't rely entirely on populations of tree objects. In come the procedural trees. This is essentially a series of overlaid displacement textures that build up to create the cauliflower-head look of the trees. Similarly, the farming land was achieved using a tiled texture of fields and a few trees distributed along hedgerows. It's very easy in a procedural program like Terragen to forget that a bitmap texturing approach is still a valid method, and often a faster one.

Something that took a while to figure out was the cubic mountain at the start. The cube was initially displaced using a square displacement map with a falloff around the edges, plus an area eroded away at the front. The stony displacements were then layered on top of this, taking the new normals into account rather than throwing everything up vertically, as is the default. It was then eroded in various directions using extra displacement maps.
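
The difference between the default behaviour and "taking the new normals into account" is simply the direction each point gets pushed. A small sketch with illustrative inputs; the heights would come from the stony displacement texture.

    import numpy as np

    def displace_vertically(points, heights, up=(0.0, 1.0, 0.0)):
        # Default behaviour: every point moves along the world up vector,
        # which smears detail across the vertical faces of the cube.
        return points + heights[:, None] * np.asarray(up, dtype=float)

    def displace_along_normals(points, normals, heights):
        # Taking the new normals into account: each point moves out along its
        # own normal, so the sides of the cube pick up proper rocky relief too.
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        return points + heights[:, None] * n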

The waterfall was Tim's baby, done entirely in Softimage's ICE using fairly straightforward techniques, but along with some coloured mattes it all came together nicely in the comp.

There’s no sound on the video above by the way. I’ll replace it with one with audio once I’ve located it.

Mankind – The Story of All of Us

For the past few months I’ve been working at Lola Post, London, on Mankind, soon to be shown on the History channel both here in the UK and the USA.

I worked on quite a few sequences, 30 shots in total. Most of these involved creating projectiles of differing sorts, predominantly arrows: people firing arrows, being shot by arrows, and avoiding arrows while simultaneously cheating the whole archer deal by using guns. All arrows in the sequence above are CG.

As with many documentaries, many shots on Mankind were illustrative map shots, presented as full-scale Earth scenes, and as full CG shots they were subject to much change. Luckily, the flexibility of CGI makes it easy to work outside the boundaries of reality and to change one's mind.

A few of the shots I worked on involved creating digital sets. Firstly, I created an aqueduct for a sequence of shots featuring Caesar. This was a case of tracking the shots, matching on-set details and extending upwards.

The trickiest shot was a bullet-time shot, first in the sequence above, showing an Irish navvy unwittingly getting a little too close to a tunnel blast in the Appalachians. The original footage was green screen, with the actor effectively sitting on a green pole and the camera moving around him. This introduced a wobble, but was significantly easier and cheaper than a timeslice rig. As the footage was ramped up and down as well as being slow-mo, getting rid of the wobble was a high priority, and after many tests it was eventually solved with simple yet nifty 3D camera trickery.

To smooth out the wobble, I followed a suggestion from Lola's MD, Grahame. Having tracked the raw footage in PFTrack, I projected that original footage through the tracked camera in Softimage onto a card positioned where the actor should be. That way the actor stayed in the same place in 3D space whilst I moved my new 3D camera around him.
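
In other words, the wobbly tracked camera only ever acts as a projector; a cleaner camera does the rendering. One simple way to derive such a cleaner move is to filter the tracked translation, for example with a moving average, as sketched below. The window size and the setup are illustrative, not the actual scene.

    import numpy as np

    def smooth_camera_path(translations, window=9):
        # translations: (F, 3) per-frame camera positions from the track.
        # Returns an (F, 3) smoothed path suitable for the new render camera.
        # 'window' should be odd so the output stays aligned with the input.
        kernel = np.ones(window) / window
        pad = window // 2
        padded = np.pad(translations, ((pad, pad), (0, 0)), mode="edge")
        smoothed = np.empty_like(translations, dtype=float)
        for axis in range(3):
            smoothed[:, axis] = np.convolve(padded[:, axis], kernel, mode="valid")
        return smoothed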

The entire environment in that shot is a 3D set I threw together out of multiple particle instances of the same handful of rock models.

Most of the other shots were relatively straightforward, the exception being another bullet-time shot, this one actually showing one of the first bullets ever fired! The footage for the start of the shot was different to that of the end, so although the start had lots of people thrusting spears and poles in a smoky landscape, the end was completely clear of people and smoke, plus the target dummy was way too near. To solve this I made a new 3D gun, texturing it with various camera-projected textures from the original footage, then made a new background out of a humongous PSD stitched together from footage and photos. In the end none of the original footage is being used as footage, more as texturing inspiration! It's a really long shot, so I split it in the sequence above.

All the work I did on this show, bar the Earth-scale shots, was rendered using Arnold. It has the advantage over Mental Ray of being a fast way of getting realistic lighting, complete with indirect light bouncing. The quality is superb. To me, Mental Ray is much more flexible, but Arnold trumps it for speed between initial light placement and realistic render. I'm very glad I've forced myself to learn it.

A few of the aforementioned Earth-scale map shots are shown below.

Orbit shots

On the Recent Work page, and indeed right here, is a video of a few of the shots I worked on for Orbit: Earth's Extraordinary Journey.

The first and last shots featured are both from the same 'journey' setup that was used for many other shots too. The setup comprised many different elements on their own passes, each fed into its own part of a Nuke composition. As the project progressed, both the 3D scene and the Nuke script needed subtle reworking.

The second shot is a pair of emFluid particle systems, whereas the third is a simple enough ICE simulation in Softimage. The particles in those two shots were rendered with beta versions of Exocortex's Fury rendering system, which loads the particles onto the graphics card and renders them in OpenGL. Without Fury the second shot would have been particularly time-consuming to render. It contains millions of particles and took many, many hours to cache out.

How will the World End?

Tidal Wave Greets Liberty

Every now and then a show comes along where I get told something a little unbelievable such as, “We have a high budget, almost limitless.” In this instance it was, “This show is presented by Samuel L Jackson!”

“Errr… come again?”

Well, it turned out to be true. The show itself is a series of scientifically backed explanations of how America, sorry, the world may end. VFX-wise this mostly involves large explosions, landslides, tidal waves and the like. It's all slickly presented by Samuel L Jackson, who seems to be in a bunker, the dampness of which puts me right off hiding there from the impact of the apocalypse.

My input was to work on the tidal wavey goodness. This was done using the aaOcean suite of Softimage plugins, plus a few in-house ICE nodes at Lola. That, and many, many passes.

http://bit.ly/pw0FCG