The cost of making a live-action film rarely has much to do with camera equipment; it's the logistics and cost of locations and sets that add up. Is CGI animation the solution for the cash-strapped indie filmmaker?
Let's be clear at the outset that the thing which prevents most people from making good microbudget movies is not, below a certain threshold, money. It's talent. Many – even most – pocket-money shorts are damned by their mediocre writing or editing long before money comes into the picture, and neither of those things is particularly limited by finance. Even camera equipment is so good for so little money that it's barely a barrier to entry, though lighting setups and – crucially – something interesting to shoot remain big issues. Locations and people, especially large numbers of costumed, made-up extras, are easily the biggest and most expensive problems facing short filmmakers.
So, let's assume that our hypothetical filmmakers are all future Academy Award winners in the fields of writing, directing, acting and editing. What limits the scope of their creativity is access to locations. Even totally mundane things such as a shopping mall present huge problems: to shoot a big, semi-public area in a way that's palatable to mainstream cinema audiences will almost inevitably require closing it to the public at huge expense, populating it with extras and lighting it. And it's a shopping mall. Nobody's impressed that a film has a scene in a shopping mall. It's utterly unspectacular, and yet it costs a fortune.
On the buses
The other example that comes to mind is public transport. Want to shoot on a bus? Even if you can deal with natural light, which is tough on a moving vehicle, you still need a bus, a driver, insurance and extras – costs that probably push five figures before anyone has even stepped on set. There is, however, another approach. Matt Cerini's short film Dear Alice is set on a bus, but the bus itself probably didn't cost Cerini a cent, because Dear Alice is an entirely computer-generated short – and a very pretty one, made in what we can cautiously describe as the Pixar style. Plenty of similar work turns up on the other end of a YouTube search for "CG short", and much of it, like Cerini's, is very accomplished.
Entirely computer-generated movies are commonplace in 2018, but they have never been a solution for cash-strapped conventional filmmakers. There are many stages in the creation of a watchable computer-generated narrative, from modelling and rigging to texturing and lighting, each of which has only a fairly loose link to any of the real-world crafts that get people interested in film and TV work in the first place. From the perspective of a live-action filmmaker, Maya is expensive and Blender is a nightmare of usability problems, and both demand years of full-time training and experience to create anything watchable.
To some extent, that's always going to be the case, but advancing technology has the potential to help in at least some ways. Two of the biggest challenges are modelling humans and animating them. Technologies such as photogrammetry, which can capture both shape and some aspects of surface texture, might help with the creation of models. Motion capture might help with the animation.
Specialist skills
Right now, in August 2018, neither of those things is really ready to be a solution for indie filmmaking. There's still too much specialist work involved. Even if a scan can create a workable shape and texture for a human face (which it probably can, just about), there's still the issue of rigging and simulating the bone beneath the flesh, which is essential for realistic results. Motion capture can work wonders, as it did for The Last of Us, but that kind of result requires some of the very best technology currently available, and it's probably as expensive as shooting live action if you're not also creating assets for a computer game.
What we're looking at here are attempts to create completely realistic humans, and that's a dangerous game to play. Get it wrong and the results can easily chart a course along the very deepest crevasses of the uncanny valley. The 2017 computer game Mass Effect: Andromeda was pilloried for its – let's be nice – extremely uneven hand-done facial animation, which was probably made even less acceptable by the fact that the models themselves were at least somewhat realistic, especially after the eyes were relit in early patches.
It's far from impossible that the sort of technology required to easily shoot watchable (if not photorealistic) CG movies will become available at indie-filmmaker prices. Andromeda uses the Frostbite game engine to render those attractive visuals, though there are alternatives, particularly Unreal Engine. The publishers of Unreal seem aware of the technology's potential for producing narrative fiction. The process will always involve some specialist skills, but so does the carpentry required to build conventional sets, and putting together a spaceship in Unreal Engine 4 is certainly a cheaper, more accessible way to learn than doing it on the 007 Stage at Pinewood. The output of Unreal Engine 4 looks, for many purposes, as good as a mid-90s offline-rendered movie, especially when it's freed of the requirement to create a 360-degree game environment. It's also royalty-free until a project (assuming it's a game) makes its first $3,000 – an amount of money that most indie filmmakers have never seen all in one place.
So, when cell phone motion capture becomes reliable, the future looks – well – interesting. Right now, the body of work is mainly sci-fi and fantasy action, because that's what typifies video games, but those are also the genres that demand the most from their environments. Anyway, let's not bet that a real-time engine like Unreal will never be used to create a feature film, because there will soon be very little reason not to do just that.