
Gold Rushing

For the last nine months I’ve been working for Fluid Pictures on graphics for Raw TV’s Gold Rush, a documentary series for Discovery Channel.

There seem to be two responses to the above: one is to ask what the show entails, the other is to ask why on Earth it needs visual effects. I tend to think of the Gold Rush work as 3D graphics rather than VFX, as it clearly isn’t aiming for a photoreal aesthetic. Unfortunately it’s this that keeps the show off my reel, as the two styles clash. Luckily for me, Fluid have their own reel of the graphics we’ve made!

I worked on many of these shots, bar the ones at 00:11, 00:17, 00:20, 00:26, 00:31, 00:42, 00:50, 00:53, 00:59 and 1:03.


I’ve personally been involved with seasons 10-12, and even did a few shots on season 9. Each season is around 24 episodes long, with earlier episodes being broadcast as the later ones are edited. This time constraint, with a delivery every week, means the graphics can’t be too fancy, or they would continually fall well short of the client’s ideal.

With that said, we used Houdini, software usually deployed as a straight VFX tool, as our 3D package of choice. Its node-based methodology, excellent terrain tools and fairly logical workflow all worked in our favour. It was the first show I’ve worked on where locations could easily be referenced directly from GIS data. We made tools for drawing out rivers, the cuts in the ground, tree distributions and so on, so that we could concentrate on the shot content. Much of the time the graphics explain how things work or the challenge of moving material from one location to another, but mechanical things also break, especially those with moving parts, so many of the graphics are there to show the problem and how it was solved.
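To give a flavour of what referencing locations from GIS data can involve, here’s a hypothetical Python sketch, nothing to do with the actual Fluid Pictures tools, that converts lat/long points into metre offsets from a chosen scene origin; the kind of anchoring you’d do before drawing rivers and cuts on top. All coordinates below are made up.

```python
# Convert lat/long coordinates (e.g. of a wash plant and a cut) into local
# metre offsets from a chosen scene origin, using a simple equirectangular
# approximation. Purely illustrative; the points are invented.
import math

EARTH_RADIUS_M = 6_371_000.0

def latlong_to_local(lat, lon, origin_lat, origin_lon):
    """Return (x, z) metres east/north of the scene origin."""
    lat_r, lon_r = math.radians(lat), math.radians(lon)
    olat_r, olon_r = math.radians(origin_lat), math.radians(origin_lon)
    x = (lon_r - olon_r) * math.cos(olat_r) * EARTH_RADIUS_M  # east
    z = (lat_r - olat_r) * EARTH_RADIUS_M                     # north
    return x, z

# Hypothetical points of interest around a claim
origin = (63.9930, -139.4210)           # chosen scene origin
wash_plant = (63.9941, -139.4175)
cut = (63.9915, -139.4262)

for name, point in [("wash plant", wash_plant), ("cut", cut)]:
    x, z = latlong_to_local(*point, *origin)
    print(f"{name}: {x:7.1f} m east, {z:7.1f} m north of origin")
```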

Oddly, there are still technical challenges on this show: Houdini FX was genuinely necessary for smoke, and although we had simpler Houdini Core solutions for dirt, water and conveyors, we sometimes needed to show those things going wrong dynamically or illustrate a point in close-up.

The show has improved my Redshift 3D skills and made me learn about HDAs, rigging excavators, POP fluids, and machining engine parts using VDBs. That being said, three series is enough for me now. I’m moving on to pastures new with actual VFX work on actual plates.

Gold Rush, at times in its life the most watched show in America on a Friday night, is available on the Discovery network of channels and Discovery+ in the UK.

Availability update

Edited October 2021

I am currently on a lengthy contract that theoretically runs until the new year, but don’t let that stop you getting in touch using the details below!

For the last couple of years I’ve been concentrating on Houdini-related work, so if that’s what you’re after we’ll get along even better. I can work remotely on your machine, and I have local Houdini Indie and Redshift 3D licenses.

Prior to that I have an additional 13 years’ experience with other 3D software, including some Maya and a whole lot of Softimage – remember that?

Check out my CV and reel on this site, then get in touch via aj@ajcgi.co.uk! Or for a quick response, 07816 292534 is the number. If I don’t answer, please leave a voicemail. Don’t be shy.

Further Info About The Reel

Please note: I live a commutable distance from London, but due to current Covid-19 restrictions, I won’t be travelling into the city for quite some time.

2020 3D VFX REEL

PDF Shot Breakdown of 2020 3D VFX Reel

After a few more years of pootling about working in London, and two years of Houdini work, it’s about time I updated my VFX reel! The previous one missed out many projects, so it’s fitting that this one is practically a 2019 reel, especially considering the months spent during lockdown working on a personal project or two.

This particular reel contains some of the more effective and impactful shots I worked on for The Planets and the second series of Britannia, both at Lola Post Production in London’s West End. The Planets mainly involved decorative spheres in space, with a strong style that leans heavily on NASA’s archives: inky blacks and no stars. See my preview post from a while back for details!

Towards the end of The Planets I moved on to Britannia Season 2 as the CG shots were ramping up considerably. See my previous blog post for more details on that!

As an aside, from someone who has switched their 3D software to Houdini: if you’re considering learning it, don’t be daunted! Start with the simpler stuff. At Lola I was given a Houdini Core license, which is what studios use for the day-to-day 3D tasks: working on shots, bringing in assets others have made, creating shaders, doing layout work and so on. If you can handle that, the FX stuff becomes a lot easier to get your head around, because you are already thinking in Houdini. Looking back, especially now I have my own Houdini license at home, if I had dived into the FX end of things first it would have put me off the software. I can now easily work around problems with wrangles and write my own nodes in VEX; I understand the logic with which the transforms are put together, the reason why global transforms are hardly ever differentiated from local, and so on. If I had had to learn that AND how to make a custom destruction sequence, I’d be a full-time bowl carver by now.
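As a tiny example of that local-versus-world transform point, here’s a minimal sketch that only runs inside Houdini’s Python shell (the node names are arbitrary, and this is just the stock hou module, not any studio tooling):

```python
# Runs in Houdini's Python shell only (the hou module exists nowhere else).
# Two throwaway nulls demonstrate the difference between a node's local
# transform and its accumulated world transform.
import hou

obj = hou.node("/obj")
parent = obj.createNode("null", "parent_null")
child = obj.createNode("null", "child_null")
child.setFirstInput(parent)             # object-level parenting

parent.parmTuple("t").set((5, 0, 0))    # move the parent 5 units in X
child.parmTuple("t").set((2, 0, 0))     # move the child 2 units in its local space

local = child.localTransform()          # relative to the parent
world = child.worldTransform()          # accumulated through the hierarchy

print(local.extractTranslates())        # -> (2.0, 0.0, 0.0)
print(world.extractTranslates())        # -> (7.0, 0.0, 0.0)
```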

Enjoy the reel! And the drum and bass. Apologies, I needed something with pace and no lace. Feel free to use that last sentence in a conversation today.

Britannia Series 2 VFX

A breakdown of some of the VFX work by Lola Post

A while back at Lola Post, as things were winding down on the BBC’s The Planets, I was handed a few things to work on for Sky/Amazon’s Britannia.

Initially this was a case of doing a spot of modelling. Barracks and cranes were needed to pad out the layout of an outdoor set. Aulus’ house had no roof in reality, then had my CG one, then a burnt version after some hooligans set fire to it with firebombs. Those firebombs also needed creating as visual effects.

Two of the sets, one for the location known as Isca and one for Oppida (an old word for a settlement), were scanned using Lidar. Once that data had been wrangled into something usable, it was handed over to us and used, alongside many photo references taken on set, to aid in all the modelling and set extension work that needed doing.

As there were many shots in both locations, there was a lot of tracking work to be done. I’m a firm believer in not over-engineering things: whenever I could, I pinned stills into the plate using Nuke and then passed the whole lot on to the compositors. Being an atmospheric kind of show, though, this wasn’t always sufficient, as said compositors usually had extra elements to add and a camera track was handy. Most of the shots tracked fine once we’d figured out the lens info. Even when there are loads of moving people in shot, there’s often enough between foreground and background for PFTrack to grab hold of.

Isca had its own challenge. Being a hill fort inspired by some very old principles indeed, the actual set was tiny compared to the one that needed to be seen in wider shots. I was tasked with adding details to help sell its scale and believability. A quick fence-creation setup in Houdini allowed me to draw in fences around the various huts dotted about. A series of particle distributions was used to scatter rocks, piles of logs and grasses around. Water troughs, buckets and other accoutrements were hand-placed onto the set.
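The fence tool itself was a Houdini setup, but the core of it is easy to sketch. Here’s a hypothetical, standalone Python version of the central step, with a made-up hut outline: resample a hand-drawn closed curve into evenly spaced fence-post positions.

```python
# Resample a hand-drawn (closed) polyline around a hut into evenly spaced
# fence-post positions. In production this was a Houdini setup; the outline
# here is invented.
import math

def resample_closed_curve(points, spacing):
    """Return evenly spaced (x, z) positions along a closed polyline."""
    edges, total = [], 0.0
    for i, a in enumerate(points):
        b = points[(i + 1) % len(points)]      # wrap round to close the loop
        length = math.dist(a, b)
        edges.append((a, b, length))
        total += length

    count = max(3, int(total // spacing))      # number of posts
    posts, target, walked = [], 0.0, 0.0
    for a, b, length in edges:
        while target <= walked + length and len(posts) < count:
            t = (target - walked) / length     # parametric position on this edge
            posts.append((a[0] + (b[0] - a[0]) * t,
                          a[1] + (b[1] - a[1]) * t))
            target += total / count
        walked += length
    return posts

hut_outline = [(0, 0), (6, 0), (8, 4), (4, 7), (-1, 5)]   # metres, hypothetical
for p in resample_closed_curve(hut_outline, spacing=1.5):
    print(f"post at x={p[0]:5.2f}, z={p[1]:5.2f}")
```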

On the subject of technical work and Houdini, this was the second project where a large chunk of my work was done in Houdini, even the layout and some of the modelling. This allowed me to continue learning an enigmatic piece of software at a generalist level, only opening Houdini FX right at the end to set up a rigid body system. The project I’m on now (a story for another day) has seen me create particle systems, pyro smoke and even water, all while fitting that in around what many would consider the staple tasks of a 3D generalist, in one 3D package.

I even had an opportunity (much in the same way as my Latin homework at school was a ‘learning experience’, according to our teacher) to learn the basics of crowd setup in Houdini. When Aulus’ army arrive at Isca, and indeed while they are en route, they didn’t have the decency to be real people. Luckily for me, much of the legwork of rigging soldiers and sourcing motion capture data had already been done, but setting up new shots based on others necessitated pulling things apart to understand how they worked, then making new setups from scratch. Much of VFX work is exactly this, plus asking colleagues how to do things. Asking questions isn’t a weakness. Pretending to know everything is.

2017 Showreel

After many years of work I’ve finally built up enough new shots to replace much of my old reel. It served me well, bringing in many projects, and indeed some of the better shots still remain, but now with spangly new work alongside!

My contribution to each shot is shown briefly in the bottom left of the screen, with a much more detailed explanation written shot by shot in the PDF breakdown.

In the past few years I’ve been fortunate enough to work on some very interesting projects that have been subject to watertight NDAs. Now that they’ve been broadcast and the dust has settled, it’s a real bonus for me to finally be able to share some of these with you.

The MARS series and Teletubbies were two such projects. MARS took seven months of my time and, if I recall correctly, Teletubbies was significantly longer. That left two large projects missing from my reel, and consequently any updates to it felt kinda pointless, as I’d only be adding one or two shots and labelling it a new reel. The thing with working in TV or film is that not all the shots I work on are actually showreel-worthy. Many are similar to each other or to shots I’ve made previously, or they may be created using other people’s systems, to the point that putting them in a reel of my own work would feel disingenuous.

This reel has been a long time coming, so I hope you enjoy it!

P & O Commercial at Seed Animation

My last project before Christmas was at Seed Animation, where I’d begun the year working on the surreal and hilarious Halwani Chicken commercial. This time around I was doing similar work on a P&O commercial, the style of which had to match a genuine stop-motion animation. As the client wished to see colour progressively spread through the world of our travelling couple, the decision was made to do this advert in 3D.

My input was as a layout guy and TD type. Layout is essentially placing things on screen in such a way that the eye feels comfortable when looking at the shots and is led to the correct area of interest. The TD (Technical Direction) input was mainly the unrolling paper effects seen throughout the animation.

Each strip of rolling paper was actually one long grid, flat by default, with a controller null at one end. A spiral curve was parented to that null and the grid deformed by that curve. As the null moved the spiral along the grid, an expression offset the position of the grid so that it stayed put and the end unrolled. The grid was then extruded to give it thickness in its unrolled position, with the new polygons kept in a separate cluster so they remained white, giving the impression of cut paper. As the operator stack remained live, the whole thing could be rolled back up, placed to suit the layout, then unrolled and deformed by a lattice to hug the surface it sat on. Once animated, the whole thing was stepped onto twos so it mimicked the 12fps stop-motion standard.
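For the curious, here’s a toy Python illustration of just the counter-offset logic; the real thing was a live Softimage operator stack and an expression, not a script, and the numbers here are arbitrary. As the null advances the spiral along the grid, the grid is offset back by the same amount, so the unrolled end never moves, and the result is then held on twos.

```python
# Toy version of the unroll counter-offset plus the step onto twos
# (12fps within a 24fps timeline). Purely illustrative values.

ROLL_SPEED = 0.5   # units the spiral null travels per frame

def unroll_amount(frame):
    """How far the spiral null has travelled along the grid by this frame."""
    return ROLL_SPEED * frame

def grid_offset(frame):
    """Counter-offset applied to the grid so its unrolled end stays put:
    the grid is pushed back by exactly the distance the null has advanced."""
    return -unroll_amount(frame)

def on_twos(frame):
    """Hold every value for two frames to mimic stop-motion shot on twos."""
    return frame - (frame % 2)

for frame in range(0, 9):
    held = on_twos(frame)
    print(f"frame {frame:2d}: null at {unroll_amount(held):4.1f}, "
          f"grid offset {grid_offset(held):5.1f}, paper end stays at 0.0")
```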

Other inputs from me included the wobbly cellophane-like wake at the bottom of the ferry and the stones building up the castle in the background.
As with the Halwani commercial earlier in 2016, the whole project was rendered using Redshift 3D out of Softimage. Render times from Redshift still blow me away. It’s such a boon for small studios.

Swisscom Commercial

A while back I had my first stint of working at Glassworks London.

After assisting on a PS4 ad, tweaking a few shots to help another 3D guy with his workload, I moved onto this advert for Swisscom.
Layout is a stage that many of us do as part of shot creation. It’s similar to photographic composition in that elements in a scene must fit together on screen to draw the viewer’s attention to the right things, give scale to a shot, or perhaps lend a sense of drama or relaxation. In this case the skier has to look fast, so the piste has to be described on the mountainside in a way that suggests quick downhill progress in each shot.

We placed lots of fences in such a way that, when someone else came along with a working system for simulating the wobble of said fences, they would already be there and the layout wouldn’t have to be reworked. That assumption is almost always wrong, as the layout tends to be adjusted according to the client’s needs: if they feel the background isn’t working, perhaps the matte painting will need changing and the piste now runs into a mountainside. Looking at the final cut for the first time recently, I noticed this had indeed happened and the fences had been adjusted accordingly.
All in all the piste appears consistent in width and our skier makes it down to the finish line in double-quick time!

To me this is quite a clever little advert, something that Glassworks seem to specialise in.

Halwani Chicken Commercial

Recently I was fortunate enough to work with the guys at Seed Animation in Soho, London. As soon as I sat with owner Neil Kidney and watched the initial storyboarded animatic for what was to be a minute-long Egyptian commercial for fast-food chicken giants Halwani, I knew it was going to be an interesting month or so. Every shot was packed with details, loads of characters, and environments that at first glance all seemed to be different. With the addition of an Arabic song and fast cuts of shots that seemed to include a concentration camp and a swimming pool of frying oil, this project became something I doubt I’ll forget in a hurry.

My involvement was as one of two TD and lighting types, picking up from where someone else had left off, a position that can be a little tricky. Everybody approaches technical setups differently, so some adjustments were necessary. Animators were brought in to animate the chickens, and others were off site modelling and setting up the fluid simulation for the swimming pool of boiling oil.

As this was a Softimage project, much of the technical side of the animation was created using ICE. My first task to conquer was the external landscape setup and layout, while my partner in TD crime, Ogi (Ognjen Vukovic), was busying himself with initial lighting setups and a feather system based on ICE strands.

The landscape setup was similar for every external shot. There is a large grid from which another, higher-resolution mesh is generated. That mesh carries weight maps which drive the distributions of grasses, stones, paths and rocks, all of which are instanced using scatter tools in ICE. The trees are a simple underlying mesh with a point cloud of instanced leaves at the top. Bizarrely enough, I was initially using a feather system, FC Feathers, for the leaves, as it gave me great control over the overall flow, but that was junked in favour of a random distribution, except on one of the designs, the pine tree, where it still works well.
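Here’s a standalone Python sketch of the weight-map idea, detached from ICE and with a made-up map: rejection-sample scatter points so that the painted weight decides how likely each region is to receive an instance.

```python
# Weight-map-driven scattering in miniature: a "map" (here just a function
# standing in for a painted weight map) decides how likely each region is
# to receive a grass/stone/rock instance.
import random

random.seed(7)

def grass_weight(x, z):
    """Stand-in for a painted weight map: more grass towards +x, none on a path."""
    if 4.0 < x < 5.0:          # a bare path running through the field
        return 0.0
    return min(1.0, max(0.0, x / 10.0))

def scatter(weight_fn, count, size=10.0):
    """Rejection-sample points on a size x size patch according to the map."""
    points = []
    while len(points) < count:
        x, z = random.uniform(0, size), random.uniform(0, size)
        if random.random() < weight_fn(x, z):   # keep with probability = weight
            points.append((x, z))
    return points

for x, z in scatter(grass_weight, count=5):
    print(f"grass instance at x={x:4.1f}, z={z:4.1f}")
```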

Once we’d blocked out all the initial layouts, we started to combine every shot into something that could be lit nicely and render quickly. Each animated chicken was cached out from an animation scene using the Alembic .abc format, then imported into Softimage using a Python script Ogi had written that applied the animation from the Alembic cache onto the feathered chicken.

With the feathers in place, the grass, rocks, trees, flowers, distant hills and the myriad fences and buildings were beginning to add up: a challenge for rendering anywhere, let alone at Seed, a small studio with only a few full-time staff and a proportionally sized render farm. The solution to this challenge was the truly remarkable Redshift 3D renderer, which renders on the GPU using Nvidia CUDA-compatible cards. It’s fast. No. It’s really fast. With all the aforementioned details in shot, render times ranged from about 6 to 10 minutes per frame for most shots, including the time taken to send the scene to Redshift. That’s with reasonable sample settings, sometimes volumetric lighting, and at Full HD. We had a handful of PCs, mostly with two GTX 980 cards fitted, though others had Quadros inside. Consider that… the power of thousands of pounds’ worth of CPU rendering hardware in a pair of gaming cards!

The only limit we found with such complicated scenes was RAM. Redshift renders using the graphics card’s RAM rather than your PC’s, which is a major limitation if you only have a 4GB card, for example. With so many geometry instances, feathers and other models in our scenes, though, it was actually system RAM that became the limiting factor, and with it scene extraction time, as the PCs were paging to the hard drive. The solution was to cache the animated characters out of the assembly scenes as Redshift proxy files, then read them back into a new scene and render from there.

Technicalities aside, lighting and set dressing were wonderfully straightforward and a joyous thing indeed. I had actually used Redshift before, at Glassworks, just around the corner from Seed, but this was the first time I was lighting such complicated scenes with it. I recently returned to a studio where they were rendering using V-Ray and my old buddy Mental Ray. The latter in particular felt archaic, much more so than it ever had. I guess I held on to that one for so long because of its tight integration with Softimage.

We’re all very pleased with the results on this ad. It was a brilliant team of exceptional talent. The animation especially helps, adding to the madness of such a quirky piece! Altogether now! Bwaaa! Cluck! Cla cla cluck!
Apparently an English dub is in the works…

Over the Hills and Far Away… Teletubbies Came to Play. And me. I was there too.


From the summer of 2014 through till the summer of 2015 I was involved in a project the scale of which I’d not played a part in before. A new series of Teletubbies was announced as being in the works, and Lola Post, where I was freelancing as a 3D type, had won the contract for all the VFX. All 60 episodes of it.

This amounted to hundreds of shots, a volume ordinarily associated with film projects. Initially I was involved in the pre-production, working alongside Pinewood-based prop-makers Propshop and the production company, Darrall MacQueen, laying out designs for the set and other VFX assets. The actors were to be shot on a blue screen, with the set built as a 1:20 scale model. It was our digital set layout which was 3D printed and then dressed by the Propshop staff. This allowed us to use the same 3D data when lining shots up in post-production.

During the shoot I was working out of a hair and make-up room next to stage three at Twickenham Studios, alongside the DIT. This allowed me to continue developing assets for the 3D team back in London, while still being available on set for questions about set extensions, digital assets and so on.

Once the team on set were up to speed and questions of a 3D nature were thin on the ground, I returned to Lola Post in Fitzrovia. There we had set up a dedicated office and team specifically for the Teletubbies. My main responsibility there was to be lead 3D TD, though I was not the only one: the Tiddlytubbies had become such a large part of the show that they had their own section, led by Jonny Grew and Josh George, with much of the animation by Steve White.

In the meantime, I had become what the supervisor, Garret Honn, described as ‘chief landscape gardener’. Every external shot has a set extension: the real scale model is only 4 metres across, representing an 80-metre circle in Teletubbyland. I came up with a set extension system which was refined as the project went on, and which allowed a few of us to continually churn through the many moving or high-angle shots that required distant hills, grass, clumps of flowers and trees to be seen beyond the edge of the model set. For many shots which were lower or nowhere near the edge of the set, we got away with putting a large panoramic image in the background and sliding it around from shot to shot.

For the sake of generating distant hills with realistic lighting and so on, we’d gone down the route of using Terragen, software I’ve used many times for external landscapes. However, with its relatively slow render times, it was only truly used for the opening and closing credits, where the light swings round, creating raking shadows. The rest of the time the background is a large cyclorama, rather akin to a zoetrope, constructed out of Terragen renders. This approach kept render times down, something that was very important with such a volume of material to get through.

Naturally enough, Teletubbyland needs more than just grass and hills, so there are trees, flowers, many tufts of grass and so on. The trees are based on illustrations created by an independent illustrator, brought to life through a combination of software: SpeedTree, Mudbox and ultimately Softimage. Additionally, we created flowers based on the scale models from Propshop, alongside the stunt ball for Laa Laa, custard bubbles and snowballs, as well as non-spherical assets such as the windmill. Naturally there was toast. Custard and toast. No wonder this bunch are funny colours.

Once the project had truly gotten underway I spent roughly half my time answering questions, watching dailies, attending meetings and keeping an eye on the render farm. In that regard it was the most technical role I’ve undertaken. The rest of the time was spent tracking shots, managing who did what and occasionally doing shots myself. Props to the rest of the 3D team for their untiring efforts, especially Olly Nash and Ismini Sigala who were both in it for the long haul. Between us and Tammy Smith we’ve tracked more than enough shots for a lifetime, animated many flowers and a lot of spherical objects.

Naturally, there’s more to life than the 3D side of VFX. The 2D side was phenomenal in scale. So many blue screen shots, so little time. It all needed keying, roto work, cleanup and the final compositing too. To list everyone here would be crazy, and considering only a handful of people will read down to this paragraph, I’m not going to! Just be aware that for every shot of Teletubbies you watch with your kids, about five people will have touched it, and most of those will be compositors and roto artists. Thanks to all involved. Your efforts did not go unnoticed!

Teletubbies is currently on air in the UK and is bound to be shown elsewhere soon. Response seems to be positive so far. Due to very strict licensing agreements I can’t currently post videos from the show here, so it’s over to the BBC with you!

Teletubbies page at Cbeebies

How To Build A Planet – My VFX Input

Not so long ago I worked at Lola Post, London, on another documentary hosted by Richard Hammond. Similar to the Journey to the Centre of the Planet and Bottom of the Ocean shows I worked on some time back, this entailed a heck of a lot of VFX.

The concept is that we see the constituent parts of scaled-down planets and the solar system being brought together in a large space over the Nevada desert. In order for Hammond to be able to present things at the necessary altitude, he is up at the top of a two-mile-high tower, which is obviously not real, for various reasons. Nor is the desert much of the time. Or Hammond.

My input on the show was working on dust and sand particle systems across two sequences of shots. I will warn you now that some of this will get technical.

The first sequence shows a large swirling cloud of high-silica sand and iron. This includes a shot which was to become my baby for a month or two: it pulls out from Hammond at the top of the tower, back through the dust cloud swirling around him, then really far back so we see the entire 2km-wide cloud in the context of the landscape around it. The whole shot is 30 seconds long.

The second sequence of shots shows the formation of Jupiter out of a large swirling disc of matter. Jupiter itself attracts dust inwards, which swirls as it approaches.

A few challenges presented themselves quite early on. One was creating particle systems in Softimage’s ICE that behaved correctly, especially when it came to dust orbiting Jupiter while the whole system itself swirls around the protosun. The initial swirling round the protosun was solved using a handy ICE compound that Lola have kicking about on their server, but if you use that compound twice in an ICE tree it is effectively only evaluated once: it sets the velocity using an execute node, overriding the new velocity value for each particle rather than passing it out so it can be added to the previous velocity.

The solution to this was to break apart the compound. Integrating new nodes, including some out of a Move Towards Goal node, meant that I was able to make a new compound that I could proudly label Swirl Towards Goal. It sets the goal, then outputs a velocity which can be added to the velocity from the previous swirling compound higher up the tree. It even has sliders for distance falloff, swirl speed, and weight.
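For anyone curious what that boils down to outside of ICE, here’s a rough pure-Python sketch of the same idea, not the actual compound, and with made-up blend factors: compute a swirl velocity around a goal point, scale it by distance falloff, swirl speed and weight, and add it to the particle’s existing velocity rather than overwriting it.

```python
# Sketch of a "swirl towards goal" contribution: an extra velocity that
# orbits a goal point with a distance falloff, ADDED to the existing
# velocity instead of overwriting it (the problem with the original setup).
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def swirl_towards_goal(pos, vel, goal, swirl_speed=2.0,
                       falloff_dist=50.0, weight=1.0, up=(0.0, 1.0, 0.0)):
    to_goal = tuple(g - p for g, p in zip(goal, pos))
    dist = math.sqrt(sum(c * c for c in to_goal)) or 1e-9
    to_goal_n = tuple(c / dist for c in to_goal)
    tangent = cross(up, to_goal_n)                 # orbit direction around the goal
    falloff = max(0.0, 1.0 - dist / falloff_dist)  # fade out with distance
    extra = tuple((0.8 * t + 0.2 * g) * swirl_speed * falloff * weight
                  for t, g in zip(tangent, to_goal_n))
    return tuple(v + e for v, e in zip(vel, extra))   # add, don't overwrite

# One particle already swirling round the protosun picks up Jupiter's pull:
print(swirl_towards_goal(pos=(30.0, 0.0, 0.0), vel=(0.0, 0.0, 1.5),
                         goal=(0.0, 0.0, 0.0)))
```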

The most challenging aspect of this project was actually the rendering. The swirling dust in each of my shots is made up of about four different clouds of particles; one alone has 60 million particles in it.

Enter Exocortex Fury, the fabled point renderer that was to save our bacon. Aside from one fluffy cloud pass per shot, rendered as a simple Mental Ray job on a separate lower-detail cache, each cloud pass was rendered with Fury. Unlike traditional particle renderers that render on the CPU, Fury is a point renderer that can take advantage of the raw power of graphics cards. The upside is a far faster render compared to traditional methods, and done correctly it is beautiful. To speed things up further, particles which were offscreen were deleted so Fury wouldn’t consider them at all. The downsides are that it can flicker or buzz if you get the particle replication settings wrong, and it has no verbose output to tell you quite how far it is through rendering. Between us dust monkeys, many hours were spent waiting for Fury to do something or crash.
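The offscreen cull is simple enough to show in a few lines. This is a toy Python version, not the ICE setup we actually used: project each particle through an idealised pinhole camera and drop anything outside the frame (with a small safety margin) or behind the camera.

```python
# Toy offscreen cull: keep only particles whose projected position lands
# inside the frame, so the point renderer never has to consider the rest.
# Camera model and numbers are arbitrary.
import math

def cull_offscreen(particles, fov_deg=45.0, aspect=16/9, margin=0.05):
    """Camera sits at the origin looking down -Z; a small margin avoids
    popping right at the frame edge."""
    tan_half = math.tan(math.radians(fov_deg) / 2.0)
    kept = []
    for x, y, z in particles:
        if z >= -1e-6:                       # behind (or on) the camera plane
            continue
        sx = x / (-z * tan_half * aspect)    # normalised screen x in [-1, 1]
        sy = y / (-z * tan_half)             # normalised screen y in [-1, 1]
        if abs(sx) <= 1.0 + margin and abs(sy) <= 1.0 + margin:
            kept.append((x, y, z))
    return kept

cloud = [(0.0, 0.0, -10.0), (50.0, 0.0, -10.0), (1.0, 2.0, 5.0)]
print(cull_offscreen(cloud))   # only the first particle survives
```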

Adding to the complications was the scale of the main scene itself. The tower is rendered in Arnold, a renderer that works best when using one Softimage unit per metre. Unfortunately the huge scene scale caused problems elsewhere. In a couple of shots the camera is so high off the ground that mathematical rounding errors were causing its translation to wobble. Also, as particles, especially Fury-rendered ones, prefer a small scene to a gigantic one for similar mathematical reasons, they weren’t rendering correctly, if at all. The particles were in their own scenes for loading speed and memory reasons anyway, but to fix these issues the whole system was scaled to 1/5 of the main scene scale and offset so that it sat closer to the scene origin, yet would still composite perfectly on top of the tower renders.
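If you want to see why the rescale helps, here’s a quick Python demonstration (using NumPy, with made-up distances rather than the production scene’s): single-precision floats get coarser the further you are from the origin, which is the kind of quantisation that shows up as wobble, and shrinking and re-centring the particle system brings the numbers back into a comfortable range.

```python
# Single-precision floats, which most DCC apps use for positions, have a
# larger smallest-representable step the further you get from zero, and
# those errors compound through the transform maths. Values are arbitrary.
import numpy as np

for label, value in [
    ("1 unit from the origin",               1.0),
    ("camera two miles up (~3,200 units)",   3200.0),
    ("far corner of the desert (500,000)",   500_000.0),
]:
    step = np.spacing(np.float32(value))   # smallest representable increment
    print(f"{label:38s}: smallest float32 step = {step:.2e}")

# Roughly the fix from the paragraph above: scale the particle system to 1/5
# and shift it towards the origin, then line the renders back up over the
# full-scale tower in comp. The re-centring offset here is invented.
SCALE = 1.0 / 5.0
recentre = np.array([-12_000.0, 0.0, -8_000.0])
far_particle = np.array([12_500.0, 3_200.0, 8_100.0])
print((far_particle + recentre) * SCALE)   # now comfortably close to the origin
```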

How to Build a Planet is showing in the US on Discovery’s Science Channel before being shown in the UK in November.
Discovery Sci – How to Build a Planet