After a few more years of pootling about working in London and 2 years of Houdini work, it’s about time I updated my VFX reel! The previous one missed many projects out, so it’s fitting that this one is practically a 2019 one, especially considering the months spent during lockdown working on a personal project or two.
This particular reel contains some of the more effective and impactful shots I worked on for The Planets and the second series of Britannia, both at Lola Post Production in London’s West End. The Planets mainly involved decorative spheres in space, with a strong style leaning heavily on NASA’s archives: inky blacks and no stars. See my preview post from a while back for details!
As an aside here, as someone who has switched 3D software to Houdini: if you’re considering learning Houdini, don’t be daunted! Start with the simpler stuff. At Lola I was given a Houdini Core license, the kind studios use for day-to-day 3D tasks: working on shots, bringing in assets others have made, creating shaders, doing layout work and so on. If you can handle that, the FX stuff becomes a lot easier to get your head around because you are already thinking in Houdini. Looking back, especially now I have my own Houdini license at home, if I had dived into the FX end of things first it would have put me off the software. I can now easily work around problems with wrangles, write my own nodes in VEX, and I understand the logic with which the transforms are put together, why global transforms are hardly ever differentiated from local ones, and so on. If I had had to learn all that AND how to make a custom destruction sequence, I’d be a full-time bowl carver by now.
Enjoy the reel! And the drum and bass. Apologies, I needed something with pace and no lace. Feel free to use that last sentence in a conversation today.
It’s not often I find the time to write something for my blogs these days. Even this news is 2 months old at time of writing. Back in September, the first episode of The Alienist, a show I spent many months doing modelling and texturing work on at Peerless, won the 2018 Emmy for Outstanding Special Visual Effects in a Supporting Role!
Needless to say it has gone on the CV. My colleague and good friend Rasik hopped over to LA (bravely I might add) and picked up this little lady for us. Here I am holding what has to be the most obvious hiding-in-plain-sight potential murder weapon I’ve ever held. Those lightning wings are spiky. There’s a sentence I never thought I’d write. Anyway! Onwards and upwards!
The Alienist is now available on Netflix in the UK.
After many years of work I’ve finally built up enough new shots to replace much of my old reel. It served me well, bringing in many projects, and indeed some of the better shots still remain, but now with spangly new work alongside!
My contribution to each shot is shown briefly in the bottom left of the screen, with a much more detailed explanation written shot by shot in the PDF breakdown.
In the past few years I’ve been fortunate enough to work on some very interesting projects that have been subject to watertight NDAs. Now that they’ve been broadcast and the dust has settled, it’s a real bonus for me to finally be able to share some of these with you.
The MARS series and Teletubbies were two such projects. MARS was seven months of my time and, if I recall correctly, Teletubbies was significantly longer. This left two large projects missing from my reel, and consequently any updates to it felt kinda pointless, as I’d only be adding one or two shots and labelling it a new reel. The thing with working in TV or film is that not all the shots I work on are actually showreel-worthy. Many are similar to each other or to shots I’ve made previously, or they may be created using other people’s systems, to the point that putting them in a reel of my own work feels disingenuous.
This reel has been a long time coming, so I hope you enjoy it!
Recently I was fortunate enough to work with the guys at Seed Animation in Soho, London. As soon as I sat with owner Neil Kidney and watched the initial storyboarded animatic for what was to be a minute-long Egyptian commercial for fast-food chicken giants Halwani, I knew it was going to be an interesting month or so. Every shot was packed with details, loads of characters, and environments that at first glance all seemed to be different. With the addition of an Arabic song and fast cuts of shots that seemed to include a concentration camp and a swimming pool of frying oil, this project became something I doubt I’ll forget in a hurry.
My involvement was as one of two TD and lighting types, picking up from where someone else had left off, a position that can be a little tricky. Everybody approaches technical setups differently, so some adjustments were necessary. Animators were brought in to animate chickens, and others were off site modelling and setting up the fluid simulation for the swimming pool of boiling oil.
As this was a Softimage project, much of the technical side of this animation was created using ICE. My first task to conquer was the external landscape setup and layout, while my partner in TD crime, Ogi (Ognjen Vukovic), was busying himself with initial lighting setups and a feather system based on ICE strands.
The landscape setup was similar for every external shot. There is a large grid from which another, higher-res mesh is generated. That mesh has weight maps on it which drive the distributions of grasses, stones, paths, and rocks, all of which are instantiated using scatter tools in ICE. The trees are a simple underlying mesh with a pointcloud of instanced leaves at the top. Bizarrely enough, I was initially using a feather system, FC Feathers, for the leaves as it gave me great control over the overall flow, but that was junked in favour of a random distribution, bar one of the designs, the pine tree, where it still works well.
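For anyone curious what “weight maps driving distributions” boils down to, here’s a minimal sketch in plain Python rather than ICE (all names are hypothetical, purely to illustrate the idea): each candidate point on the mesh is kept or discarded with a probability given by the painted weight value at that point.

```python
import random

def scatter(points, weights, density=1.0, seed=42):
    """Keep each candidate point with probability weight * density.

    points  : list of (x, y) positions sampled across the mesh
    weights : per-point values in 0..1, as painted in a weight map
    """
    rng = random.Random(seed)
    kept = []
    for p, w in zip(points, weights):
        # A weight of 0 grows nothing here; a weight of 1 always scatters.
        if rng.random() < w * density:
            kept.append(p)
    return kept

# Ten candidate points along a strip: bare ground on the left half
# (weight 0), full grass coverage on the right half (weight 1).
candidates = [(x * 0.1, 0.0) for x in range(10)]
grass = scatter(candidates, weights=[0.0] * 5 + [1.0] * 5)
```

In ICE the same logic is a pointcloud emission filtered by the weight map, with instances copied onto the surviving points.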
Once we’d blocked out all the initial layouts, we started to combine every shot into something that could be lit nicely and render quickly. Each animated chicken was cached out from an animation scene using the Alembic .abc format, then imported into Softimage using a Python script Ogi had written that applied the animation from the Alembic cache onto the feathered chicken.
With the feathers in place, the grass, rocks, trees, flowers, distant hills and the myriad fences and buildings were beginning to add up: a challenge for rendering anywhere, let alone at Seed, a small studio with only a few full-time staff and a proportionally sized render farm. The solution to this challenge was the truly remarkable Redshift 3D renderer, which renders on the GPU using Nvidia CUDA-compatible cards. It’s fast. No. It’s really fast. With all the aforementioned details in shot, render times ranged from about 6 to 10 minutes per frame for most shots, including the time taken to send the scene to Redshift. That’s with reasonable sample settings, sometimes volumetric lighting, and at full HD. We had a handful of PCs, mostly with two GTX 980 cards fitted, though others had Quadros inside. Consider that… the power of thousands of pounds’ worth of CPU rendering hardware in a pair of gaming cards!
The only limit we found with such complicated scenes was RAM. Redshift renders using the graphics card’s RAM, not just your PC’s, which is a major limitation if you only have a 4GB card, for example. With so many geometry instances, feathers and other models in our scenes, though, it was actually system RAM which was the limiting issue, and thereby scene extraction time too, as the PCs were paging to the hard drive. The solution was to cache out the animated characters from the assembly scenes to Redshift proxy caches, then read them back into a new scene and render from there.
Technicalities aside, lighting and set dressing were wonderfully straightforward and a joyous thing indeed. I had actually used Redshift before, at Glassworks just around the corner from Seed, but this was the first time I was lighting such complicated scenes with it. I recently returned to a studio where they were rendering using V-Ray and my old buddy Mental Ray. The latter in particular felt archaic, much more so than it ever has before. I guess I held on to it so long because of its tight integration with Softimage.
We’re all very pleased with the results on this ad. It was a brilliant team of exceptional talent. The animation especially helps, adding to the madness of such a quirky piece! Altogether now! Bwaaa! Cluck! Cla cla cluck!
Apparently an English dub is in the works…
It’s now a decade since I first cut my teeth doing VFX on music videos. Lots has changed, technology has marched on at a huge pace, and yet the fundamental way of approaching a shot is almost the same.
Simple solutions are often the most effective ones, in particular those you know and can trust. For me this has meant finding appropriate methods for a particular time and situation and sticking with them for similar projects in the future. Consequently, alongside my extensive Softimage, Terragen and PFTrack experience, my VFX fingers have touched Adobe products, GIMP, Deep Exploration, SpeedTree, Global Mapper, Inkscape, Combustion, Nuke, Maya, Max, and Cinema 4D.
As a generalist with such a broad background skillset, I found myself recently in an unusual position; that of a 3D lead artist on a 60 episode long TV series. All in all I spent a year working alongside a team of staff from both the production and post production side of things. I was even on set for a stint, something I hadn’t done for many years. Rather irritatingly, the whole thing is under wraps so I can’t say a word about that directly until it’s broadcast.
In the past 10 years I’ve learnt more than I could possibly have imagined when I left college. Here are a few things I’d like to pass on to those entering the brave new (actually quite old) world of VFX. They’re based on my experience, so might not match the opinions of others.
Firstly and most importantly, listen to those telling you not to be sedentary. Stand up often and walk around. Consider a standing desk. Exercise regularly. You need it. Yes, you do. Fresh air too, and daylight; by daylight I mean directly from the Sun, not a simulation bulb. Plus, if you work from home, which you may well do at some point, human contact is essential. You need those breaks from the screen to be a human being rather than a ‘zombie’, as I’ve heard execs refer to VFX artists.
On a similar note, burning the candle at both ends does nobody any good. Try to avoid long hours, even if you are enjoying a project. Past a certain point in the day, I find the work I am doing is deteriorating in quality and my brain is no longer functioning at its best. On that note, drink plenty of water. Lots of offices are air-conditioned and will dry you out very fast. If you must work extra time, try to wangle a weekend, especially if you’re a freelancer. You’ll get paid an extra day and will have the benefit of further sleep. Some of my best work has been done on a Saturday.
Don’t be ashamed to take shortcuts or cheat. The whole of VFX is a cheat, a lie. It’s OK to use stock libraries for footage, elements, sound, textures and even models. Quality varies, so do your research, but the time you save will save money in the end too. For an HD project, consider rendering out elements at 720p, then upscaling in the comp. 720p has fewer than a million pixels; 1080p has over two million. Render times are much lower and many people cannot tell the difference in image quality. There are rare exceptions to this, but I’ve even passed SD anamorphic widescreen renders of skies and the like to be composited before now and nobody’s noticed or cared. If an element is matching something soft in the background footage or is out of focus anyway, it just doesn’t matter.
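The pixel maths behind that claim is easy to check for yourself:

```python
# Pixel counts per frame: 720p is under a million, 1080p over two million.
p720 = 1280 * 720     # 921,600 pixels
p1080 = 1920 * 1080   # 2,073,600 pixels

# Full HD means rendering 2.25x the pixels of 720p, every single frame.
ratio = p1080 / p720
```

Multiply that saving across a few thousand frames of a TV episode and it adds up fast.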
Keep curious. Ask questions of those around you, whether they’re older or younger, wiser or greener. Everybody knows something the person next to them doesn’t, and in this profession that’s especially true. Whether you are self-taught or degree educated, you cannot possibly know all there is to know about the huge amount of software and associated techniques. Remember what I wrote earlier about simple solutions? Those more experienced around you will quite possibly know them, so just ask. Don’t waste four hours struggling to do something that could be done in one hour using a technique they know.
VFX isn’t all about big budget movies and long form TV shows. Consider using your skills elsewhere. There’s a huge amount of corporate and educational work out there. I did quite a long stint of work on illustrative animations for educational websites and kids TV. As another example, did you know there’s 3D warehouse simulation software, requiring many real-time 3D models? Now you do.
Finally, if you’re a freelancer, get used to this question: “So what are you working on at the moment?”
My answer is currently, “Nothing,” so feel free to get in touch!
If you have no money, don’t, but do read this: https://www.ajcgi.co.uk/blog/?p=855
These past few months I’ve been beavering away at Lola Post on two series of shows, creating VFX of a weathery, Earth-scale nature for Britain’s Most Extreme Weather, and shots of all scales for series 3 of How The Universe Works.
Ordinarily I’d put together blog posts before a show goes to air, but in the case of Britain’s Most Extreme Weather it slipped from my mind as soon as I rocked back onto How The Universe Works. Much of my weathery input was particle systems and strands, either using existing setups from previous shows or creating new ones as appropriate. A particular favourite of mine was a system showing the movement of air around cyclones and anticyclones: a strand system that rotates particles around many points, allowing them to move fluidly from one direction to another as air does, all wrapped around a lovely spherical Earth.
How The Universe Works is a series I’ve been on for many, many months now. I first started on it in November, I think. The first episode, all about our Sun, is to be shown on 10th July on Science in the USA.
For that show I took Lola’s existing Sun cutaway setup, introducing a more boiling lava-like feel through judicious use of animated fractals and grads.
Overall I’ve worked on 8 episodes with a handful of shots in each show. After all that dedication to spheres in space I am now supervising the VFX on one of the last shows for this series!
More geeky details and videos for both shows to come!
When I am asked what I do for a living, there is a follow-up question that is so common I begin to answer it right away now. That question is, “Ok, that sounds interesting. So what do you actually do? What is Visual Effects really?”
It’s a fair question actually, and one whose answer changes as time goes on. If I’m stumped for an answer, I try some of the following.
My staple answer now is,
“I add stuff to video footage that wasn’t there in the first place, or take it away if it wasn’t meant to be there.”
More often than not, the actual answer is,
“I create something with the appearance of having been shot as real life, but which is actually impossible to shoot, be that for practical, artistic or financial reasons.”
Ah, so that will answer it, right? Nope. I find these answers are enough for most people to understand at least vaguely what the end result of my job is. However, some are mad about film, TV dramas and whatnot and really want to show their interest. Again, fair enough. A question you might get is,
“So when you say you add things into video footage or film or whatever, how do you do that?”
That’s the really tricky one to answer, particularly as everyone’s preconceptions of media, especially digital media, are different. There’s the Make Awesome button, right? It’s all done by the computer, right?
However, wonderfully, a lot of people use Photoshop now and kind of get the concept of layering things over each other. Lately, I’ve been explaining with,
“VFX has similar principles to editing photographs, only these photos are on the move. Imagine using Photoshop for moving images, with all the layers and masks moving, the colour corrections animating and so on. I make elements, series of 2D images, that are composited on top of others, like layers are in Photoshop.”
I do almost exclusively 3D VFX, by which I mean those elements are created in a 3D package, such as Maya, then rendered out as 2D images which, just like photographs, have no physical depth to them. I no longer get bogged down in details when explaining VFX. To begin with, I don’t even mention the many jobs available: compositor, modeller, 3D generalist, render wrangler and so on. I used to say I did 3D animation, but that would lead people down the path of thinking I did Toy Story or was about to reinvent Wallace and Gromit. Another danger with the 3D moniker is the recent resurgence of 3D cinema, which is another kettle of fish altogether.
So there we are. A fairly basic answer which most people understand! Incidentally, I am a 3D generalist, available to hire in London, UK. Check out my work on the home page at https://www.ajcgi.co.uk.
The British cut is different to the US one. The cut shown on Sci had to be edited to allow for the ad breaks. So, if you like your Hammond unsullied, this is the showing for you! Additionally, this being the UK, Hammond appears in the title of his own show. The international cuts often drop his name so as to make them more marketable in countries where he is little known.
The second episode is likely to be broadcast a week or so later, but is yet to be confirmed, I think.
Recently I’ve been retraining in Maya and giving myself extra alone time with the Arnold renderer from Solid Angle.
I decided to use this not only as an opportunity to find out how my Softimage lighting and rendering skills translate to Maya, but also to show how basic compositing is something every 3D artist should embrace if they don’t already.
One thing which has surprised me again and again is how little grounding students and graduates of 3D courses are given in what goes into their image and why it’s beneficial to use the compositing process as part of their workflow. Some students are even penalised for not showing their raw, unenhanced render, having points deducted for daring to composite. To give a parallel: to me this is like a film photography student handing in negatives and no prints. The job is half done.
This won’t be a tutorial, more a pointer in the right direction for those who are starting out.
The example I use, a still life of a bowl of fruit, is a model from the very first lighting challenge hosted over at CGTalk. The files and others are downloadable at 3dRender. The model’s pretty old now so it’s not especially high detail but is still sufficient to show you what I intend to.
After a bit of setup in Maya and throwing on some pretty rough textures, here’s the beauty straight out of Arnold:
It’s lit with three lights: a cool exterior light, a warmer interior light, and a fill for the shadow in the middle. On their own, the images appear like this:
These images can be added together in any compositing software and will give exactly the same result as the beauty above: any given pixel of the beauty is exactly the same colour as the sum of the corresponding pixels of these three images.
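If you want to see that additivity concretely, here’s a minimal sketch using numpy arrays as stand-ins for the renders (illustrative only, not my actual comp script):

```python
import numpy as np

# Tiny 2x2 RGB stand-ins for the three per-light renders (linear floats).
rng = np.random.default_rng(0)
exterior = rng.random((2, 2, 3))
interior = rng.random((2, 2, 3))
fill = rng.random((2, 2, 3))

# In a linear workflow the beauty is simply the per-pixel sum of the
# light passes: a Merge set to "plus" in Nuke does exactly this.
beauty = exterior + interior + fill
```

The key point is that this only holds in linear colour space, before any gamma or grade is applied.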
Each of these images is itself a composite image. Arnold, Mental Ray, Vray and other renderers consider many different material properties when returning the final colour for a particular pixel. Each property can be saved out as an image itself and added together to form the final image. In the case of the beauty itself, these are the images that I’ve rendered out of Arnold:
Again, added together, these form the same image as the beauty above perfectly.
(A side note here: a few component images, including reflections, were missed out of this contact sheet as they are entirely black. As none of the materials are reflective in the traditional sense, the reflection image is returned as black, whereas the direct specular contains highlights that mimic reflections. Arnold is peculiar in that it can consider reflections in two ways and transparency in two ways, depending on what you’re trying to achieve.)
So what am I getting at here?
Here’s the beauty again:
Now here is a warm, evening setup:
And finally, a night lighting setup:
All three use the same component images, composited together in different ways: for example, tinting the lights, changing their intensity by blending the images with varying opacity, or even desaturating the key light to achieve a moonlit interior effect. On the night lighting I’ve changed the apple, which was too bright and waxy, using a matte together with the specular and SSS channels from the fill light. I could perhaps have re-rendered the 3D, but a tweak in Nuke was a lot more efficient.
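Those comp-side relights all boil down to very simple per-pixel operations. A rough numpy sketch of the three tricks mentioned above (hypothetical helper names, not my actual Nuke setup):

```python
import numpy as np

def tint(img, colour):
    """Multiply a linear light pass by an RGB colour to re-tint the light."""
    return img * np.asarray(colour)

def dim(img, intensity):
    """Scale a pass up or down to change the light's apparent intensity."""
    return img * intensity

def desaturate(img, amount=1.0):
    """Blend a pass towards its luminance for a cooler, moonlit feel."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights
    grey = np.repeat(luma[..., None], 3, axis=-1)
    return img * (1.0 - amount) + grey * amount

# Evening look: warm, tinted key. Night look: desaturated, darkened key.
key = np.ones((2, 2, 3)) * 0.5
evening_key = tint(key, (1.0, 0.8, 0.6))
night_key = dim(desaturate(key), 0.3)
```

Because every pass is still additive, these adjusted versions just get summed back together in place of the originals.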
The compositing process, even at this basic level, allows for flexibility from the get-go. Where clients are concerned, flexibility is key. When running work past a client it’s inevitable that changes will be requested, and often they are something subtle that can be achieved in the composite. If you try to achieve that using only 3D solutions, the render times will get long, especially when working on TV or film. Ordinarily I work alongside compositors and it’s up to them to do compositing tweaks whilst I work on a new shot or on more substantial alterations to a current one.
Similarly, when first lighting a shot, working with many rendered channels, including additional ones of your own creation, is a rapid method of figuring out whether your setup is indeed heading in the right direction. Using the same component images for multiple looks is a time saver too.
One thing to bear in mind is once you know which channels are likely to be needed, it’s time to stop rendering the others as these can fill up hard drives quite nicely.
In short, stop tweaking your 3D scenes ASAP. Render out your initial lighting setup and see how much can be done in the comp. It isn’t cheating; it’s part of the process. It allows you to render the shot out, pass it on, and start a new one. Ultimately it will help your relationship with compositors, who like to know what’s going into your image and what they need to add, plus [perhaps I shouldn’t say this, but here goes] it will make you more employable.