As of June 2020 I am available for 3D generalist VFX remote working opportunities!
For the last couple of years I’ve been concentrating on Houdini related work, so if that’s what you’re after we’ll get along even better. I can work remotely on your machine, plus have local Houdini Indie and Redshift 3D licenses.
Prior to that I have an additional 13 years' experience with other 3D software, including some Maya and a whole lot of Softimage – remember that?
After a few more years of pootling about working in London and two years of Houdini work, it's about time I updated my VFX reel! The previous one missed out many projects, so it's fitting that this one is practically a 2019 reel, especially considering the months spent during lockdown working on a personal project or two.
This particular reel contains some of the more effective and impactful shots I worked on for The Planets and the second series of Britannia, both at Lola Post Production in London’s West End. The Planets mainly involved decorative spheres in space, with a strong style that leans heavily on NASA’s archives with inky blacks and no stars. See my preview post from a while back for details!
As an aside here, as someone who has switched 3D software to Houdini: if you're considering learning Houdini, don't be daunted! Start with the simpler stuff. At Lola, I was given a Houdini Core license, the kind used in studios for the day-to-day 3D tasks: working on shots, bringing in assets others have made, creating shaders, doing layout work and so on. If you can handle that, the FX stuff becomes a lot easier to get your head around, because you are already thinking in Houdini. Looking back, especially now I have my own Houdini license at home, if I had dived into the FX end of things first it would have put me off the software. I can now easily work around problems with wrangles, writing my own nodes in VEX; I understand the logic with which the transforms are put together, and why global transforms are hardly ever differentiated from local ones. If I had to learn all that AND how to make a custom destruction sequence, I'd be a full-time bowl carver by now.
Enjoy the reel! And the drum and bass. Apologies, I needed something with pace and no lace. Feel free to use that last sentence in a conversation today.
A while back, working at Lola Post, as things were winding down on BBC The Planets, I was handed a few things to work on for Sky/Amazon’s Britannia.
Initially this was a case of doing a spot of modelling. Barracks and cranes were needed to pad out the layout of an outdoor set. Aulus’ house had no roof in reality, then had my CG one, then a burnt version as some hooligans set fire to it using firebombs. Those also needed making as visual effects.
Two of the sets, one for the location known as Isca, and one for Oppida (an old name for settlement) were scanned using Lidar. Once that had been wrangled into something usable, it was handed over to us and used alongside many photo references taken on set to aid in all the modelling and set extension work that needed doing.
As there were many shots in both locations, there was a lot of tracking work to be done. I'm a firm believer in not over-engineering things: whenever I could, I pinned stills into the plate using Nuke and then passed the whole lot on to compositors. However, this being an atmospheric kinda show, that wasn't always sufficient, as said compositors usually had extra elements to add and a camera track was handy. Most of the shots tracked fine once we'd figured out the lens info. Even when there are loads of moving people in shot, there's often enough detail between foreground and background for PFTrack to grab hold of.
Isca had its own challenge. Being a hill fort inspired by some very old principles indeed, the actual set was tiny compared to the one that needed to be seen in wider shots. I was tasked with adding in details to aid its scale and believability. A quick fence-creation setup in Houdini allowed me to draw in fences around the various huts dotted about. A series of particle distributions was used to scatter rocks, piles of logs and grasses around. Water troughs, buckets and other accoutrements were hand-placed onto the set.
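For anyone curious how little a fence tool like that actually needs: the core is resampling a hand-drawn curve at post spacing and copying a post model onto each sample (Houdini's Resample and Copy to Points nodes do exactly this). Here's a minimal pure-Python sketch of the resampling step, written for illustration rather than lifted from the production setup:

```python
import math

def resample_polyline(points, spacing):
    """Evenly spaced samples along a polyline -- the Resample SOP idea.

    A fence tool then copies a post model onto each returned point."""
    samples = [points[0]]
    dist_to_next = spacing
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        t = 0.0
        # Walk along this segment, dropping a sample every `spacing` units.
        while seg_len - t >= dist_to_next:
            t += dist_to_next
            f = t / seg_len
            samples.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
            dist_to_next = spacing
        # Carry the leftover distance into the next segment.
        dist_to_next -= seg_len - t
    return samples

# A 10 m straight fence with posts every 2 m: 6 posts, both ends included.
posts = resample_polyline([(0.0, 0.0), (10.0, 0.0)], 2.0)
```

In Houdini the resulting points would feed a Copy to Points SOP carrying the post model; orienting the posts along the curve is one small extra step.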
On the subject of technical work and Houdini, this was the second project where a large chunk of my work was done in Houdini, even layout and some of the modelling. This allowed me to continue learning an enigmatic piece of software at a generalist level, only opening Houdini FX right at the end to set up a rigid body system. The project I'm on now (a story for another day) has seen me create particle systems, pyro smoke and even water, all the while fitting that in around what many would consider the staple tasks of a 3D generalist, in one 3D package.
I even had an opportunity (much in the same way as my Latin homework at school was a 'learning experience', according to our teacher) to learn the basics of crowd setup in Houdini. When Aulus' army arrives at Isca, and indeed while en route, the soldiers didn't have the decency to be real people. Luckily for me, much of the legwork of rigging soldiers and sourcing motion capture data had already been done, but setting up new shots based on others necessitated pulling things apart to understand how they worked, then making new setups from scratch. Much of VFX work is this, plus asking colleagues how to do things. Asking questions isn't a weakness. Pretending to know everything is.
After many years of work I’ve finally built up enough new shots to replace much of my old reel. It served me well, bringing in many projects, and indeed some of the better shots still remain, but now with spangly new work alongside!
My contribution to each shot is shown briefly in the bottom left of the screen, with a much more detailed explanation written shot by shot in the PDF breakdown.
In the past few years I’ve been fortunate enough to work on some very interesting projects that have been subject to watertight NDAs. Now that they’ve been broadcast and the dust has settled, it’s a real bonus for me to finally be able to share some of these with you.
The MARS series and Teletubbies were two such projects. MARS was seven months of my time and, if I recall correctly, Teletubbies was significantly longer. This left two large projects missing from my reel, and consequently any updates to it felt kinda pointless, as I'd only be adding one or two shots and labelling it a new reel. The thing with working in TV or film is that not all the shots I work on are actually showreel-worthy. Many are similar to each other or to shots I've made previously, or they may be created using other people's systems, to the point that putting them in a reel of my own work feels disingenuous.
This reel has been a long time coming, so I hope you enjoy it!
My last project before Christmas was at Seed Animation, where I'd begun the year working on the surreal and hilarious Halwani Chicken commercial. This time around I was doing similar work on a P&O commercial, the style of which was to match a genuine stop-motion animation. As the client wished to see colour progressively spread through the world of our travelling couple, the choice was made to do this advert in 3D.
My input was as a layout guy and TD type. Layout is essentially placing things on screen in such a way that the eye feels comfortable when looking at the shots and is led to the correct area of interest. The TD (Technical Direction) input was mainly the unrolling paper effects seen throughout the animation.
Each strip of rolling paper was actually one long grid, flat by default, with a controller null at one end. A spiral curve was parented to that null and the grid deformed by that curve. As the null moved the spiral along the grid, an expression offset the position of the grid so it stayed put and the end unrolled. The grid was then extruded to give it thickness in its unrolled position, the new polygons kept in a separate cluster so they remained white, giving the impression of cut paper. As the operator stack remained live, the whole thing could be rolled back up, placed to suit the layout, then unrolled and deformed by a lattice to hug the surface it’s sat on. Having been animated, the whole thing was then converted to 2s so it mimicked the stop-motion 12fps standard.
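To make that concrete, here's a simplified numerical version of the unroll. It's my own illustration rather than the production expression: it rolls the strip onto a fixed-radius cylinder instead of a true spiral, and animating `front` from 0 to the strip length plays the unroll.

```python
import math

def roll_strip(length, n, front, radius):
    """Sample positions along a paper strip of the given length.

    Points with arclength s <= front lie flat on the ground; the rest
    wraps onto a circle of the given radius standing at x = front.
    (The production rig deformed the grid along a spiral curve with an
    expression offsetting the grid; a fixed radius keeps the maths
    short while behaving the same at the roll front.)"""
    pts = []
    for i in range(n):
        s = length * i / (n - 1)
        if s <= front:
            pts.append((s, 0.0))                 # unrolled, flat part
        else:
            a = (s - front) / radius             # arc angle past the front
            pts.append((front + radius * math.sin(a),
                        radius * (1.0 - math.cos(a))))
    return pts
```

Because position is continuous and tangent at `s = front`, the flat part stays put while the end curls, matching the "grid stays put and the end unrolls" behaviour; stepping `front` on 2s would mimic the 12 fps stop-motion look.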
Other inputs from me included the wobbly cellophane-like wake at the bottom of the ferry and the stones building up the castle in the background.
As with the Halwani commercial earlier in 2016, the whole project was rendered using Redshift 3D out of Softimage. Render times from Redshift still blow me away. It’s such a boon for small studios.
After assisting on a PS4 ad, tweaking a few shots to help another 3D guy with his workload, I moved onto this advert for Swisscom.
Layout is a stage that many of us do as part of shot creation. It's similar to photographic composition, in that elements in a scene must fit together on screen to draw the viewer's attention to the right things, give scale to a shot, or perhaps lend a sense of drama or relaxation. In this case the skier has to look fast, so the piste has to be described on the mountainside in a way that suggests quick downhill progress in each shot.
We placed lots of fences ahead of time so that, when someone else came along with a working system for simulating their wobble, they would already be in place and the layout wouldn't need revisiting. Pre-empting layout like this almost always goes wrong, as layouts tend to be adjusted to the client's needs: if they feel the background isn't working, perhaps the matte painting will need changing and the piste now runs into a mountainside. Watching the final cut for the first time recently, I noticed this had indeed happened and the fences had been adjusted accordingly.
All in all the piste appears consistent in width and our skier makes it down to the finish line in double-quick time!
To me this is quite a clever little advert, something that Glassworks seem to specialise in.
Recently I was fortunate enough to work with the guys at Seed Animation in Soho, London. As soon as I sat with owner Neil Kidney and watched the initial storyboarded animatic for what was to be a minute long Egyptian commercial for fast food chicken giants Halwani, I knew it was going to be an interesting month or so. Every shot was packed with details, loads of characters, and environments that at first glance seemed to all be different. With the addition of an Arabic song and fast cuts of shots that seemed to include a concentration camp and a swimming pool of frying oil, this project became something I doubt I’ll forget in a hurry.
My involvement was as one of two TD and lighting types, picking up from where someone else had left off, a position that can be a little tricky. Everybody approaches technical setups differently, so some adjustments were necessary. Animators were brought in to animate chickens, and others were off site modelling and setting up the fluid simulation for the swimming pool of boiling oil.
As this was a Softimage project, much of the technical side of this animation was created using ICE. My first task to conquer was the external landscape setup and layout, while my partner in TD crime, Ogi (Ognjen Vukovic) was busying himself with initial lighting setups and a feather system based on ICE strands.
The landscape setup was similar for every external shot. There is a large grid from which another higher-res mesh is generated. That mesh carries weight maps which drive the distributions of grasses, stones, paths, and rocks, all of which are instantiated using scatter tools in ICE. The trees are a simple underlying mesh with a pointcloud of instanced leaves at the top. Bizarrely enough, I was initially using a feather system, FC Feathers, for the leaves, as it gave me great control over the overall flow, but that was junked in favour of a random distribution, except on one of the designs, the pine tree, where it still works well.
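A weight-map-driven scatter like that is essentially rejection sampling: generate a candidate point, look up the painted weight underneath it, and keep the candidate with that probability. A small Python sketch of the idea (the map and function names are made up for illustration):

```python
import random

def scatter_by_weight(weights, count, seed=0):
    """Scatter points over the unit square with density driven by a
    weight map (rows of 0..1 values), via rejection sampling."""
    rng = random.Random(seed)
    rows, cols = len(weights), len(weights[0])
    pts = []
    while len(pts) < count:
        x, y = rng.random(), rng.random()
        # Look up the painted weight under the candidate point.
        w = weights[min(int(y * rows), rows - 1)][min(int(x * cols), cols - 1)]
        if rng.random() < w:          # keep in proportion to painted weight
            pts.append((x, y))
    return pts

# Grass painted only on the right half of a 2x2 map: every kept
# point lands at x >= 0.5.
pts = scatter_by_weight([[0.0, 1.0], [0.0, 1.0]], 200)
```

In ICE, each kept point would then instance a grass or rock model; separate maps per element type keep the paths free of stones, and so on.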
Once we’d blocked out all the initial layouts, we started to combine every shot into something that could be lit nicely and render quickly. Each animated chicken was cached out from an animation scene using the Alembic .abc format, then imported into Softimage using a Python script Ogi had written that applied the animation from the Alembic cache onto the feathered chicken.
With the feathers in place, the grass, rocks, trees, flowers, distant hills and the myriad fences and buildings were beginning to add up, a challenge for rendering anywhere, let alone Seed, a small studio with only a few full-time staff and a proportionally sized render farm. The solution to this challenge was the truly remarkable Redshift 3D renderer. This uses GPU rendering with Nvidia CUDA-compatible cards. It's fast. No. It's really fast. With all the aforementioned details in shot, render times ranged from about 6 to 10 minutes per frame for most shots, including the time taken to send the scene to Redshift. That's with reasonable sample settings, sometimes volumetric lighting, and at Full HD. We had a handful of PCs, mostly with two GTX 980 cards fitted, though others had Quadros inside. Consider that… the power of thousands of pounds' worth of CPU rendering hardware in a pair of gaming cards!
The only limit we found with such complicated scenes was RAM. Redshift renders using the graphics card's RAM, not your PC's system RAM, which is a major limitation if you only have a 4GB card, for example. With so many geometry instances, feathers and other models in our scenes, though, it was actually system RAM that became the limiting factor, and thereby scene extraction time too, as the PCs were paging to the hard drive. The solution was to cache the animated characters out of the assembly scenes as Redshift proxy caches, then read them back into a new scene and render from there.
Technicalities aside, lighting and set dressing was wonderfully straightforward and a joyous thing indeed. I had actually used Redshift before at Glassworks, just around the corner from Seed, but this was the first time I was lighting such complicated scenes with it. I recently returned to a studio where they were rendering using V-Ray and my old buddy Mental Ray. The latter in particular felt archaic, much more so than it ever has. I guess I held on to that one so long because of its tight integration with Softimage.
We’re all very pleased with the results on this ad. It was a brilliant team of exceptional talent. The animation especially helps, adding to the madness of such a quirky piece! Altogether now! Bwaaa! Cluck! Cla cla cluck!
Apparently an English dub is in the works…
Not so long ago I worked at Lola Post, London, on another documentary hosted by Richard Hammond. Similar to the Journey to The Centre of The Planet and Bottom of The Ocean shows I worked on some time back, this entailed a heck of a lot of VFX.
The concept is that we see the constituent parts of scaled-down planets and the solar system being brought together in a large space over the Nevada desert. In order for Hammond to be able to present things at the necessary altitude, he is up at the top of a 2 mile high tower, which is obviously not real for various reasons. Nor is the desert much of the time. Or Hammond.
My input on the show was working on dust and sand particle systems, across two sequences of shots. I will warn you now that some of this will get technical.
The first sequence shows a large swirling cloud of high-silica sand and iron. This includes a shot which was to become my baby for a month or two. It pulls out from Hammond at the top of the tower, back through the dust cloud swirling around him, then really far back so we see the entire 2km wide cloud in the context of the landscape around it. The whole shot is 30 seconds long.
The second sequence of shots shows the formation of Jupiter out of a large swirling disc of matter. Jupiter itself attracts dust inwards, which swirls as it approaches.
A few challenges presented themselves quite early on. One was creating particle systems in Softimage's ICE that behaved correctly, especially when it came to dust orbiting Jupiter while the whole system itself swirls around the protosun. The initial swirling round the protosun was solved using a handy ICE compound that Lola have kicking about on their server, but use that compound twice in an ICE tree and only one instance takes effect: it sets the velocity using an execute node, overwriting each particle's velocity, rather than outputting a value that can be added to the previous velocity.
The solution to this was to break apart the compound. Integrating new nodes, including some out of a Move Towards Goal node, meant that I was able to make a new compound that I could proudly label Swirl Towards Goal. It sets the goal, then outputs a velocity which can be added to the velocity from the previous swirling compound higher up the tree. It even has sliders for distance falloff, swirl speed, and weight.
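In spirit, the compound does something like the following 2D sketch. The maths and slider names here are my reconstruction from the description above, not Lola's actual node: it computes a tangential swirl plus a gentle pull towards the goal, fades both by distance, and returns the sum added to the incoming velocity rather than overwriting it.

```python
import math

def swirl_towards_goal(pos, vel, goal, swirl_speed=1.0, weight=0.5,
                       falloff=10.0):
    """Return vel plus a swirl-towards-goal contribution (2D).

    Crucially this *adds* to the incoming velocity, so two swirl
    nodes can coexist in one tree -- the fix described above."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    fade = max(0.0, 1.0 - dist / falloff)     # distance falloff slider
    tx, ty = -dy / dist, dx / dist            # tangent: inward dir rotated 90
    return (vel[0] + fade * (swirl_speed * tx + weight * dx / dist),
            vel[1] + fade * (swirl_speed * ty + weight * dy / dist))
```

Jupiter's local swirl and the protosun's larger one then simply sum per particle, each with its own speed, weight, and falloff settings.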
The most challenging aspect of this project was actually rendering. The swirling dust in each of my shots is made up of about 4 different clouds of particles. One alone has 60 million particles in it.
Enter Exocortex Fury, the fabled point renderer that was to save our bacon. Aside from one fluffy cloud pass per shot, rendered as a simple Mental Ray job on a separate lower detail cache, each cloud pass was rendered with Fury. Unlike traditional particle renderers that use CPU to render, Fury is a point renderer which can take advantage of the raw power of graphics cards. The upside is a far faster render compared to traditional methods, and done correctly it is beautiful. To speed things up further, particles which were offscreen were deleted so Fury wouldn’t consider them at all. Downsides are that it can flicker or buzz if you get the particle replication settings wrong and it has no verbose output to tell you quite how far it is through rendering. Between us dust monkeys many hours were spent waiting for Fury to do something or crash.
Adding to the complications was the scale of the main scene itself. The tower is rendered in Arnold, a renderer that works best when using one Softimage unit per metre. Unfortunately, the huge scene scale caused problems elsewhere. In a couple of shots the camera is so high off the ground that mathematical rounding errors were causing the translation to wobble. Also, as particles, especially Fury-rendered ones, prefer a small scene to a gigantic one for similar mathematical reasons, they weren't rendering correctly, if at all. The particles were in their own scenes for loading speed and memory overhead purposes, but to fix these issues the whole system was built at 1/5 of the main scene scale and offset so that it sat closer to the scene origin, yet still composited perfectly on top of the tower renders.
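The wobble and the particle misbehaviour both come down to 32-bit float precision: the further a coordinate sits from the origin, the coarser the representable steps become. A quick Python illustration (my own numbers, not the show's):

```python
import struct

def to_f32(x):
    """Round-trip a float through 32-bit storage, the precision most
    renderers and particle caches keep per position component."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Geometry 100,000 units from the origin: a 0.001-unit nudge is
# smaller than the gap between representable float32 values, so it
# vanishes -- on a camera this shows up as frame-to-frame wobble.
far = to_f32(100000.0)
print(to_f32(far + 0.001) == far)     # True: the nudge is lost

# The same nudge near the origin survives easily, which is why the
# particle system was shrunk and offset back towards the origin.
near = to_f32(100.0)
print(to_f32(near + 0.001) == near)   # False: the nudge survives
```

Shrinking the whole system and pulling it towards the origin keeps every coordinate in the range where float32 still has sub-millimetre steps to spare.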
A few months back I worked on a trailer for the South Bank Show, featuring Melvyn Bragg walking through the Leake St tunnel under Waterloo station. Bragg was shot on a greenscreen, with the environment being recreated in Softimage by myself and fellow freelancer Rasik Gorecha.
The obvious question there is why? Why can’t Mr. Bragg just go into the tunnel and we shoot it there, huh? Well, there are a few obvious answers to that. The tunnel, itself a road with access to a car wash half way down, is dank, contains certain undesirable types Mr. Bragg would probably best steer clear of, and is continually in flux thanks to it being one of the few areas in London where it is legal to graffiti. It’s also not the most comfortable of places to sit around in for long hours on a shoot. The other reason is that lots of the graffiti was to be replaced with animated posters and artwork featuring well known faces from the arts. That process is a lot easier if created digitally and lit using indirect lighting solutions.
My input on this was twofold. Firstly I set up the lighting in Arnold. After an hour or so of experimenting, the solution found was to place shadow casting point lights in the ceiling under about half of the strip light fittings, plus a spot light at either end of the tunnel. Additional fill lights were used to brighten up the nearest walls. The lights in the walls toward the back of the tunnel are merely textured models and not actual lights.
One of the things with a global illumination renderer like Arnold is that it can lead to fizzing. One option for lighting this tunnel would have been area lights. That plan was ditched extraordinarily fast, as it led to lots of noise; besides, the modelled lights themselves act as bounce cards, essentially negating the need for area lights at all.
Rasik had the majority of the modelling done by the time I joined in the project but was yet to embark on cables. Whilst he set up initial texturing, I became cable monkey. I modelled cables and brackets, trays for them to run along, pipes and all sorts. It took a few days of continually modelling cables before I’d finished them. Simple stuff but it really added to the believability.
The upper of the two images above shows the model with finished textures; below it is the finished lighting.
The final trailer is not as it appeared on Sky, for two reasons. They added their own logo at the end, naturally enough, and, bizarrely, they own full copyright of the sound, so mine's a silent movie. Add your own ragtime soundtrack as appropriate.
Recently in America, The History Channel broadcast The Bible Series, knocking American Idol into the weeds for ratings. The real reason to celebrate this, of course, is that I worked on VFX for it, along with many others hired by or working at Lola Post, London.
There were hundreds of shots. As the series covers many well-known events that are either epic in scale or miraculous in nature, it’s hard to cut corners with this kind of content.
One of the advantages of VFX is the ability to extend sets or create new ones. The most used model shared amongst the 3D crew was that of Jerusalem. It was originally an off-the-shelf model of a real scale model, intended to be seen from a distance, so it needed to be tweaked and improved upon where appropriate on a shot-by-shot basis. With so many artists having touched the model at one point or other, the lighting setup, materials and textures got improved to the extent that, once composited, the shots really shone out. Many of the shots I did for The Bible featured Jerusalem, either as an entirely CG set or as an extension tracked into existing footage.
One story that is covered in the show is that of Moses parting The Red Sea, with the Israelites being chased by Egyptians through the parted waves. The shot I did for this sequence is a slightly top-down shot, following the fleeing crowds through the freshly created gap in the ocean. To achieve this, I effectively split the 3D ocean into horizontal grids and vertical grids. The horizontal grids were simulated with aaOcean in Softimage. The vertical ones were distorted to represent the sea walls, textured with composited footage of waterfalls running upwards. The join where the two sets of grids met was blended using a matte and Nuke's iDistort node. Softimage's CrowdFX was used for the fleeing crowd. Twirling smoke elements were added once passed to the comp.
An advantage of Softimage’s ICE simulation system is that making a convincing cloud or mist is a fairly straight forward procedure. I was tasked with creating a storm over Jericho, a swirling mass of cloud and debris that had to look huge and imposing whilst looking down through the eye of the storm. With clouds, water, and many other fluids, scale can be half the battle. A large wave only looks large if surrounded by smaller ones, a cloud only looks like a huge ominous mass if seen as a collection of smaller masses, but go too small and the effect is lost entirely. In the case of the cloud, if too many small details were apparent it very quickly seemed fluffy. Cute a storm is not. Once the cloud’s scale was correct, there was the issue of it having to spin, distort and generally seem organic. Handily ICE has a node for rotating clouds around points in space so that solved that one. The distortion was shape animation applied to a lattice attached to the cloud.
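That rotate-around-a-point node boils down to a per-particle pivot rotation. A 2D Python sketch of the idea (my own illustration, not the ICE node itself), where scaling the angle by a distance falloff gives the differential spin that keeps a storm looking organic rather than like a rigid disc:

```python
import math

def storm_rotate(p, pivot, degrees, falloff=1000.0):
    """Rotate a point about a pivot in the ground plane, with the spin
    fading by distance so the cloud shears rather than turning whole."""
    dx, dy = p[0] - pivot[0], p[1] - pivot[1]
    dist = math.hypot(dx, dy)
    # Full rotation at the eye, tailing off to none at `falloff` distance.
    a = math.radians(degrees) * max(0.0, 1.0 - dist / falloff)
    return (pivot[0] + dx * math.cos(a) - dy * math.sin(a),
            pivot[1] + dx * math.sin(a) + dy * math.cos(a))
```

Applied per frame to every particle in the cloud, inner masses lap the outer ones, which reads as swirl; the lattice shape animation mentioned above then handles the larger distortion.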
The rest of my involvement on The Bible was tracking shots in PFTrack and adding in set extensions. Most of the 3D content was rendered using Solid Angle's Arnold Renderer.