Making a Level Sequence act like a shot

unholy_union.png

Last time I discussed our under-construction Unreal pipeline, I talked about how to make Unreal behave like a renderer plus an offline compositor. Our tests were very successful, but left a significant issue unaddressed--how do you set up compositions separately for different shots in a sequence?

Since then, I’ve come to think that this is a more general question, not one tied to compositing alone. The question is: how do you make a level sequence behave like a shot?

Shots in a conventional, DCC-centric pipeline correspond to scenes. A Maya scene contains all the animation data for a shot, and also either references or contains all the other data the shot requires: meshes, rigs, etc. Tools like USD exist to compose shot data in more sophisticated ways, but are usually also set up to include whatever data is necessary to represent a given shot. Regardless of what other data files they may reference, looking at a Maya scene or a USD stage for a given shot shows you whatever that shot is supposed to be (or at least a representation of it that corresponds to a particular stage of the pipeline).

This isn’t the way Unreal works--level sequences don’t include all the data necessary to display a shot, just the animation data. You can make an actor “spawnable,” tying it to the level sequence, but this isn’t a satisfactory replacement for a “shot,” because spawnable actors don’t maintain relationships to each other: they lose “attachment” (Unreal’s version of parenting), for instance, and the relationships between Composure elements.

In a lot of ways, this isn’t surprising--Unreal, after all, is a game engine, and the scenario Sequencer (and its progenitor, Matinee) was originally designed for is pretty different from the one we’re putting it to now. Looking at Level Sequences as a tool for in-game cinematics, for instance, their design makes perfect sense: You have a level. The level has actors in it. You need to be able to control those actors in a time-based way, and maybe add a few things on a per-shot basis like cameras or effects. You wouldn’t expect the scene and the relationship between actors in it to change completely from shot to shot. That’s certainly the impression one would get from Epic’s Sequencer demos and training materials--they tend to focus on designing camera cuts around existing animated material, all in the same level.

There are several reasons why this doesn’t work for us. We’re relying on complex Composure composites with attached lighting, and those composites and lighting may need to be entirely different from shot to shot. We need to be able to animate elements of the shot (like cameras) in ad-hoc hierarchies. Because we are relying on a combination of traditional assets and illustrations projected on shot-specific projection geometry, environmental elements themselves may differ significantly from shot to shot (not to mention the bit where you sometimes cut to a completely different location, which of course, in narrative film, is something you do all the time). And finally, it’s not the way I want to think about shots in the context of animated filmmaking. Shooting a “set” and “actors” like a live action film is certainly one way to approach CG animation, but animation is not live action, and I don’t want to be forced to work in that way. The bespoke creation of content tailored to a specific shot is one of animation’s greatest artistic advantages.

In other words, what we need is not the ability to spawn an actor in a given sequence--we need to “spawn” an entire level. Thankfully, Unreal lets us do this!

Note that the contents of the outliner here are changing completely from shot to shot. The key here is the “level visibility track.” This allows us to have per-shot sublevels that just turn themselves on and off as needed--the level will default to off, but be turned on when a sequence whose Level Visibility Track references it is playing. Notably, when a sublevel is turned off it isn’t just not being drawn--it has actually been “streamed out” and is not evaluating at all, which is very important for performance (otherwise, you might have every shot’s Composure elements trying to render at once!).
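To make the bookkeeping concrete, here’s a plain-Python sketch of what the visibility track is doing conceptually. This is not the Unreal API--the class and level names are hypothetical--it just models the idea that each shot’s sublevel is streamed in only while its section of the master sequence is under the playhead.

```python
from dataclasses import dataclass

@dataclass
class VisibilitySection:
    """One section of a (hypothetical) level visibility track."""
    level_name: str   # the sublevel this section controls
    start: int        # first frame (inclusive)
    end: int          # last frame (exclusive)

def visible_levels(sections, frame):
    """Return the set of sublevels streamed in at a given playhead frame."""
    return {s.level_name for s in sections if s.start <= frame < s.end}

# Three shots, each with its own sublevel, butted together on hard cuts.
track = [
    VisibilitySection("Shot010_Level", 0, 48),
    VisibilitySection("Shot020_Level", 48, 120),
    VisibilitySection("Shot030_Level", 120, 168),
]

print(visible_levels(track, 30))   # {'Shot010_Level'}
print(visible_levels(track, 48))   # {'Shot020_Level'} -- the cut itself
```

Because the sets are disjoint in time, only one shot’s Composure elements ever exist at once--which is exactly the performance property described above.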

What this means is that we can essentially divide the contents of a shot into two categories--the shot’s Level (what is in the shot) and the shot’s Level Sequence (how the entities in the shot move). A Maya scene happens to store both of these elements together, but you could similarly conceive of them as being separate, and indeed we kind of need to think of them as separate in order to move animation data back and forth.
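That split can live entirely in pipeline convention. Here’s a minimal sketch--the naming scheme is an assumption of mine, not anything Unreal enforces--where a “shot” is just a name from which both halves are derived:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Shot:
    """A shot is a pairing of a sublevel and a level sequence by convention."""
    name: str

    def level(self) -> str:
        # What is in the shot (hypothetical naming convention).
        return f"{self.name}_Level"

    def sequence(self) -> str:
        # How the entities in the shot move (hypothetical naming convention).
        return f"{self.name}_Seq"

shot = Shot("Shot010")
print(shot.level())     # Shot010_Level
print(shot.sequence())  # Shot010_Seq
```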

Now, just like my hacky solution to Unreal shadow passes, joining the sublevel and Level Sequence conceptually without actually joining them together in the UI is clearly a crime against the laws of god and man. Keeping the Level Sequence and its corresponding sublevel together ends up being the responsibility of the artist working on the edit in Unreal--if you open up the Level Sequence and the shot’s level is not a sublevel of whatever level you happen to be looking at, all your shot’s actual data will be missing and all your tracks will turn error-message red as your sequence stares at you, its cruel creator, in wordless horror. But when you’re taking a game engine and turning it to an entirely different purpose than the one it was designed for, you necessarily have to accept the occasional breach of medical ethics.
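This is the kind of thing a small sanity-check tool could catch before the tracks go red. A sketch, under the assumption that you can enumerate the loaded sublevels and that a shot-to-sublevel mapping exists somewhere in the pipeline (both hypothetical here):

```python
def missing_sublevels(shot_levels, loaded_sublevels):
    """Return the shots whose required sublevel is not currently loaded.

    shot_levels: mapping of level-sequence name -> required sublevel name
    loaded_sublevels: set of sublevel names present in the open level
    """
    return {shot: level
            for shot, level in shot_levels.items()
            if level not in loaded_sublevels}

shot_levels = {
    "Shot010_Seq": "Shot010_Level",
    "Shot020_Seq": "Shot020_Level",
}
loaded = {"Shot010_Level", "PersistentLevel"}

# Shot020's sublevel isn't loaded -- opening its sequence here would
# leave every track in error-message red.
print(missing_sublevels(shot_levels, loaded))  # {'Shot020_Seq': 'Shot020_Level'}
```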

One open question about this approach is performance--I’m not sure whether hiding and unhiding sublevels has a significant performance cost, because the tests I’ve done so far have used relatively simple scenes. As work on our short progresses we’ll find out whether streaming in a more complex set causes a hitch on the camera cut. One thing that works in our favor here is that our needs for per-shot data aren’t that large--we’re not going to be doing very geometry-heavy sets for this project in any case, and a lot of the projects I’d like to use this approach for will rely heavily on projected illustrations on simple geometry.