Questions to film TD guys!

Hey!

About

I’ve only got experience in the game industry, but I’d like to get a grasp of how the film industry works as well!
To get a better understanding, I’m actually working on a pretty ambitious home project where I’ll be rendering a CG sci-fi battle scene. My script and storyboards are done and I’m about to enter the pre-vis stage.
Now, I’ve done quite a lot of research on how the film industry works, all the departments involved, and everything there is to know about upstream / downstream.

Problem

Flexibility is my main goal: if a problem occurs, I don’t want it to screw up the pipeline.
So let’s start with assets.
An asset can be a character, tank, weapon, prop, environment etc.
Each asset is composed of a model scene, a rig scene with a proxy mesh, and a shader scene from look development.
Now, my idea was to have a master asset file that references the model scene, rig scene and shader scene, with everything hooked together. When the master scene is published, it’s placed inside a “published” folder with all the references collapsed / imported. Republishing an asset would replace that same file. This should be familiar to most of the guys in the film industry.
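Roughly, the publish step I have in mind would look something like this in Maya Python (the paths, file type and folder layout are just placeholders):

```python
# Sketch of the publish step (hypothetical paths / naming): open the master
# scene, import all references so the file is self-contained, then save a
# flattened copy into the "published" folder, replacing the previous publish.
import os
import maya.cmds as cmds

def publish_asset(master_scene, published_dir):
    cmds.file(master_scene, open=True, force=True)
    # Import every reference (one level deep, for simplicity) so the
    # published file has no live links left.
    for ref_node in cmds.ls(type='reference'):
        try:
            ref_file = cmds.referenceQuery(ref_node, filename=True)
        except RuntimeError:
            continue  # skip shared / unloaded reference nodes
        cmds.file(ref_file, importReference=True)
    published_path = os.path.join(published_dir, os.path.basename(master_scene))
    cmds.file(rename=published_path)
    cmds.file(save=True, type='mayaAscii')
    return published_path
```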

Then we have the sequences and shots. Each shot is composed of a layout scene, animation scene, simulation scene, lighting scene and rendering scene. Those scenes would reference assets from the “published” folder and nowhere else.
My goal here is to have only the animation scene reference a rigged asset; all the other scenes would use the point cache data that animation spits out. The reason for this is that lighting, simulation and rendering aren’t concerned with the rig and its controls.
But there is a conflict with my “master asset” file idea, since it already contains the rig, which I’d like to get rid of for the sim, lighting and rendering scenes, and deleting nodes after referencing the “master asset” is a no-no to me. This introduces a second master asset file (a render master asset) that contains only the geometry, which can receive the point cache data that animation spat out.
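For the handoff itself, I’m picturing something as simple as the sketch below, where animation writes one point cache per published mesh and the rig-free render asset just reads those files back (mesh names and cache paths are placeholders):

```python
# Sketch of animation's point cache output (names / paths are placeholders):
# one cache file per publishable mesh shape, written over the shot range.
import maya.cmds as cmds

def export_point_caches(mesh_shapes, cache_dir, start_frame, end_frame):
    for shape in mesh_shapes:
        # Strip the namespace so the cache name matches the render asset's mesh.
        cache_name = shape.split(':')[-1]
        cmds.cacheFile(fileName=cache_name, directory=cache_dir,
                       points=shape, startTime=start_frame, endTime=end_frame)
```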

This is kinda where I’m getting lost, since I have no experience in the film industry. How do you guys make sure everything is as flexible as possible all the way from modeling to rendering? I’m fine all the way into animation, but from there on I’m not happy with my solution of two master files per asset, where one is rigged for the animation department and the other is the same asset with the rig stripped out to receive the point cache.
Besides this, how do you handle the rendering scene? You most definitely don’t want the render mesh in the viewport. Do you just keep it hidden on a layer? Do you use a script at render time to switch all the low / mid poly references to a high poly version? Or do you use Mental Ray render proxy scenes that you export out to .mi? (Best for memory, since Maya doesn’t load the mesh; only Mental Ray does.)

How could we introduce the new Maya asset system into this? Hiding, black-boxing and locking geometry and unnecessary nodes in the asset container seems like a smart thing to do, but can you still read / write a point cache from it?

I’d like to hear how this could be managed in a simple and effective way.

This subject has seen very little discussion, and even though I would love to shed some light on it, as far as the workflow here at Framestore goes, telling you would mean a breach of contract.

I’m assuming this is why you aren’t getting that many replies, even though a lot of us know some thoroughly tested and standard ways of doing things.

=/

Yeah, there is like nothing out there about the VFX pipeline. Sure, you can find information about each department and its role, but the actual DCC workflow stuff is completely hidden from the public, even though almost all the VFX studios work in similar ways.

Let’s take another approach then. Instead of you talking about how it’s done at your company, I’ll do my best to present a pipeline / workflow with my limited knowledge, with flexibility in mind, from modeling --> rigging --> texturing + lookdev --> animating --> simulating --> effects --> lighting --> rendering --> compositing,
with pictures step by step. You’ll question the decisions that could cause problems in production, and I’ll try to counter / firefight them with a new presentation?
Perhaps we’ll end up with a free vfx pipeline tool? :slight_smile:

There is this old openPipeline thing for Maya, but it’s too basic and only good up to animation. Not good enough for my needs.

Regards!

Let me know if you ever want some extra help with that, Yasin. I wouldn’t mind helping out to learn a few things.

Mattanimation: Thanks, I’ll PM you soon! :slight_smile:

I checked out a whitepaper The Foundry released about KATANA (look-dev / lighting / renderer integration).

The node graph below, describing the workflow in a typical VFX environment, actually shows that
they have Rigging Assets and Look Dev Assets. I guess my two-master-file idea wasn’t that bad after all: one goes to the animation department and the other is used for effects / lighting / rendering with the point / geometry cache from animation.

Now I’ll try to think of a way to introduce the Maya asset container system into this; I really like the black-boxing and locking features it provides. Hopefully I won’t have any trouble with exported Mental Ray assembly files (*.mi) for lazy loading of render geometry at render time, to keep memory usage as low as possible. Hopefully I can also use point cache data with those render assembly files together with Maya assets without any read / write conflicts, but I’m not getting my hopes up yet.

Some studios use build scripts instead of any referencing. You run the script at the end (or beginning) of a production stage, and it assembles all the required data for this stage from previous stages, so you have a clean scene with just the stuff you need to work on.

When you’re done, you save the file to your repository (or server or whatever). The guys working on the next stage run their build script. It grabs all the data it needs and gives them a clean scene with just the stuff they need, and so on.

Pros: no references that can break, and no app-specific workflows in the pipeline - you could keep the entire pipeline and just change the build process to switch to, for example, 3ds Max (keeping the same naming conventions and steps).

Cons: no hard references between assets (don’t mess with the naming!). The build script (and your naming convention) keeps track of references between assets across your pipeline, so it requires a well-thought-out naming convention (which is a MUST for this type of work - in games we can sometimes get away with sloppy naming, not so for pre-rendered). Building can also take some time.

Since a lot of this depends on proper naming and following conventions, QA (and QA scripts) play an important role, especially before handing off assets to the next stage. You want to make sure your files are always a-ok before any other stage uses them.
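A bare-bones sketch of such a build script (the folder layout and the stage-to-inputs mapping are invented for illustration, not any studio’s actual setup):

```python
# Bare-bones build script sketch: everything is driven by naming convention,
# so the folder layout and STAGE_INPUTS mapping here are purely illustrative.
import glob
import os
import maya.cmds as cmds

# Which upstream stages feed each stage -- pure convention, no hard references.
STAGE_INPUTS = {
    'animation': ['layout', 'rigs'],
    'lighting': ['layout', 'caches', 'lookdev'],
}

def build_stage(project_root, shot, stage):
    cmds.file(new=True, force=True)
    for upstream in STAGE_INPUTS[stage]:
        pattern = os.path.join(project_root, shot, upstream, '*_MASTER.ma')
        for path in sorted(glob.glob(pattern)):
            # Import (not reference) so the working scene is self-contained.
            ns = os.path.splitext(os.path.basename(path))[0]
            cmds.file(path, i=True, namespace=ns)
```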

Hmm, seems interesting. So there is a mega importer that each shot uses to set itself up, and the importer knows from the project management database (such as Shotgun) which assets to import into each shot.

So animation, effects, simulation, lighting and rendering in a shot would basically look up the database, import the correct published / committed final assets, and use the layout scene to position them, maybe automating things such as applying point cache data to lighting / rendering assets, black matte shaders for effects and simulation assets, and setting up contribution maps for each light / asset and all the render passes the comp department requires?
Most of this could happen in parallel after animation is done, since effects, sim and lighting can work side by side; as animation gets updated, the point cache data is overwritten and everything downstream updates perfectly.
I’m not that worried about the naming convention; that’s easily fixable by just not relying on artist-editable names :slight_smile:

Seems interesting, and easily portable to other applications if you abstract the DCC script commands into a thin layer and build your own layer on top of it in Python!
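Something like this thin layer is what I mean (the function set is just an example); the pipeline only ever calls these, and a 3ds Max backend would implement the same functions:

```python
# Sketch of a thin DCC abstraction layer (function set is illustrative only).
# Pipeline code imports this module; only this module knows it talks to Maya.
import maya.cmds as cmds

def import_file(path, namespace):
    cmds.file(path, i=True, namespace=namespace)

def set_transform(node, translate, rotate):
    cmds.xform(node, worldSpace=True, translation=translate, rotation=rotate)

def save_scene(path):
    cmds.file(rename=path)
    cmds.file(save=True, type='mayaAscii')
```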

[QUOTE=LoneWolf;14573]
So animation, effects, simulation, lighting and rendering in a shot would basically look up the database, import the correct published / committed final assets, and [/QUOTE]

Yes. Scene building is a perfect solution to the nightmare that is referencing (at least for Maya work). It also allows for some other fun things like pulling specific versions (during a parallel pipeline process) so you can take texturing_003 + lighting_006 instead of the “latest” file in case something is broken in certain versions.
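The version pinning itself can be a few lines; a sketch (with an invented folder layout where each stage folder holds versioned publishes):

```python
# Sketch of version pinning during a build (folder layout is invented):
# pin a known-good publish per stage, or fall back to the latest one.
import os

def resolve_publish(publish_root, shot, stage, version=None):
    stage_dir = os.path.join(publish_root, shot, stage)
    versions = sorted(os.listdir(stage_dir))  # e.g. ['lighting_005', ...]
    name = version if version else versions[-1]
    return os.path.join(stage_dir, name)

# Example: take texturing_003 but the latest lighting.
# tex = resolve_publish(root, 'sh010', 'texturing', 'texturing_003')
# lit = resolve_publish(root, 'sh010', 'lighting')
```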

We are building something similar here, all managed at the data level to enforce the strict naming conventions, asset tracking, version control, etc…

Cool!

What happens in a parallel pipeline process when, halfway through the shot, the director asks for a few more props to be added, which means those props need to be propagated downstream to animation, sim, effects, lighting and rendering even if they are halfway done?

So my guess is the layout artist goes and adds them to the Layout scene, places them in the correct spot and masters / publishes a new version of it.

Then I’m guessing that each department would need to run another “build” to get those newly placed props. I’m also guessing that in an ideal world that “build” should be smart enough to notice and say: “3 more props have been added, let’s bring them into the latest Animation, Sim, Effects, Lighting and Rendering scenes” and place them at the positions the Layout has, just so everyone can keep their work that is halfway done. Am I correct?
I’m guessing this is partly managed through the project management system, where each shot would have an asset list? But for the actual positioning in the world, the Layout would have to be queried by the Animation, Sim, Effects, Lighting and Rendering scenes, right?
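In code, I imagine that smarter incremental “build” looking roughly like this (the asset list, layout query and published root transform are all guesses on my part):

```python
# Sketch of an incremental rebuild (asset list / naming are assumptions):
# bring in only the props that are missing, positioned from the layout,
# and leave the artist's in-progress work untouched.
import maya.cmds as cmds

def update_shot(required_assets, layout_positions, publish_path_for):
    existing = set(cmds.namespaceInfo(listOnlyNamespaces=True) or [])
    for asset in required_assets:
        if asset in existing:
            continue  # already in the scene, don't touch it
        cmds.file(publish_path_for(asset), i=True, namespace=asset)
        translate, rotate = layout_positions[asset]
        # Assumes each asset exposes a 'root' transform under its namespace.
        cmds.xform('%s:root' % asset, worldSpace=True,
                   translation=translate, rotation=rotate)
```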

Another thing to consider is updates to the camera animation, which also need to propagate downstream through almost every department, as everything might need to be re-rendered. In one of their effects tutorials I remember Blur having a tool that would notify the artist if a newer camera was available.

Cool, GeminiPrevails!

Here is what I have so far; I did it quickly in Nuke. I’ve also done some testing, and I managed without any problems at all to get geometry caching to work together with Maya asset containers :slight_smile: This means black-boxing / locking / hiding / unpublishing unwanted nodes is possible, including on geometry that needs to output caches. The only requirement is that the asset container has a published root transform! ^^
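The setup boils down to something like this (asset and node names are placeholders):

```python
# Sketch of the container setup described above (names are placeholders):
# wrap an asset's nodes in an asset container and publish its root transform
# so caching and downstream tools keep a legal entry point into the black box.
import maya.cmds as cmds

def containerize(asset_name, nodes, root_transform):
    container = cmds.container(name=asset_name + '_AST', addNode=nodes)
    # Publish the root so it stays accessible once the container is locked.
    cmds.container(container, edit=True,
                   publishAsRoot=(root_transform, True))
    cmds.setAttr(container + '.blackBox', True)
    return container
```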

Now I’ll try some more stuff regarding Mental Ray proxy meshes / assemblies, so I don’t have to let Maya load huge meshes in the viewport or in the renderer itself. I only have 4 GB of memory and I don’t want Maya to eat half of it :slight_smile:

I really like the build script idea, so I’m probably gonna stick to that when it comes to applying the correct assets for each department. An example would be the layout scene below: it contains only the proxy meshes / positions from the pre-viz stage. What I’ll let the script do is query the positions of those proxies and replace them with the real Master Rig Assets in the animation scene. The rest of the stages will use Alembic caches, so everything will automatically land at the correct positions :slight_smile:
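That layout --> animation step would boil down to something like this (the *_PROXY naming and the published “root” transform are my own conventions):

```python
# Sketch of the layout -> animation build step (naming is my own convention):
# read each proxy's world matrix from the layout scene, then reference the
# real rig publish into a fresh scene and snap it into place.
import maya.cmds as cmds

def build_animation_scene(layout_scene, rig_publish_for):
    cmds.file(layout_scene, open=True, force=True)
    placements = []
    for proxy in cmds.ls('*_PROXY', transforms=True):
        asset = proxy.replace('_PROXY', '')
        matrix = cmds.xform(proxy, query=True, worldSpace=True, matrix=True)
        placements.append((asset, matrix))
    cmds.file(new=True, force=True)
    for asset, matrix in placements:
        cmds.file(rig_publish_for(asset), reference=True, namespace=asset)
        cmds.xform('%s:root' % asset, worldSpace=True, matrix=matrix)
```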

If you can see any concerns, feel free to tell me about them; I’m always open to ideas and improvements!

While it’s true that scene building is used in some pipelines instead of referencing, in my experience it tends to become more of a hassle. Referencing, if managed well, is essential for a good pipe.
Obviously, versioning and support for current and latest files are very important to make it robust.
Often, notification systems are implemented to let artists know they are working with an out-of-date version.
This kind of system basically tries to strike a balance between a push and a pull pipeline (which has forever been a topic of heated debate).
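Such a notification check can stay lightweight; here’s a sketch (the _v### version naming is an assumption):

```python
# Sketch of an out-of-date warning (the _v### naming is an assumption):
# compare each reference's file against the newest publish in its folder.
import os
import re
import maya.cmds as cmds

def warn_about_stale_references():
    for ref_node in cmds.ls(type='reference'):
        try:
            path = cmds.referenceQuery(ref_node, filename=True,
                                       withoutCopyNumber=True)
        except RuntimeError:
            continue  # skip shared / unloaded reference nodes
        folder, name = os.path.split(path)
        if not re.search(r'_v\d+', name):
            continue  # unversioned file, nothing to compare against
        latest = max(f for f in os.listdir(folder) if re.search(r'_v\d+', f))
        if name != latest:
            cmds.warning('%s is out of date, latest publish is %s'
                         % (name, latest))
```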

Most studios I worked at had some notion of a scene description file where layout specified the assets that should be in the scene. This, however, often required manual oversight, and I’ve yet to see a system where it wasn’t a point of constant complaint.
Animators would love to drag anything into their scene at will, but the pipeline needs to control the assets so they can be effectively managed, updated and passed downstream.

I will definitely use referencing. I don’t plan to make a huge scene-building script for myself just yet; besides, getting that robust would take some time! I’ll do something in between to get the best of both worlds =}

Here is version 2 with a few more details:

Looks like you’re very close to spot-on! One thing that comes to mind is that you might consider keeping the input and output of each artist (or “pipeline step” in your case) as clean as possible.

Since you’ll be doing all of it by yourself it isn’t as important, but in a team of people, as a group you’ll follow the workflow set by your lead, while as an individual you’ll have your very own sub-approach that most of the time ends up being compromise upon compromise. That’s why it is important for, say, animators to get a solid and common starting point (like a scene builder could provide: rigs along with static elements) and for them all to work towards outputting the same type of data (curves and/or point cache data along with any metadata such as playblasts and version numbers). Then what happens in between isn’t as important!

In Clean
|
v
A mess (a.k.a. “work in progress”)
|
v
Out Clean

Hey Marcuso!

I’m working with this pipeline to see how it holds up as we speak. Designing and theorycrafting on paper is one thing and production is another, right? :slight_smile:
Have to test things out!

So I took my Starcraft 2 Immortal model into the Look-Development stage and here is the turntable I came up with after a day of look-deving:

LookDev output: http://www.youtube.com/watch?v=WPyqA5ZeJ10
Rigging output: http://www.youtube.com/watch?v=kuXoXPxT15w [Old but reassembled into pipeline standards]

Now, the LookDev output is the LookDev_MASTER (green node) in the above image.
If I understand your clean in --> WIP --> clean out process, the LookDev_MASTER file would spit out all the SHADERS and just the SHADERS, right, just like Animation_MASTER only spits out Alembic caches? Whether I do this with scripts or by hand is up to me (compromise :D).
Then the assignment would happen and be saved out in the Shader_MASTER scene (which could be renamed to Surface_MASTER or something) as well.
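For the SHADERS-only output I’m thinking along these lines (the selection and naming details are just my assumptions):

```python
# Sketch of a SHADERS-only export (naming is an assumption): gather the
# shading groups assigned to an asset's meshes and export just those.
import maya.cmds as cmds

def export_shaders(asset_root, shader_file):
    shapes = cmds.listRelatives(asset_root, allDescendents=True,
                                type='mesh') or []
    shading_groups = set(cmds.listConnections(shapes,
                                              type='shadingEngine') or [])
    # noExpand keeps the sets selected as sets instead of their members.
    cmds.select(list(shading_groups), replace=True, noExpand=True)
    cmds.file(shader_file, exportSelected=True, type='mayaAscii', force=True)
```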

I wonder if I need any sort of output from the Lighting scene, like spitting out all the lights and lights only (including IBL) and letting the render scene reassemble them with a build script / process, or whether I should just reference the lighting scene into the render scene. Can’t decide; we’ll see when I get there. It helps to try your own design in production :slight_smile:
But I’m guessing I’ll just spit out the lights, because lighting will also do comp to test their stuff with a lot of render passes that I don’t want to bring over to rendering. Rendering should set up all the rendering-related stuff for final output to the compositors. We’ll see, we’ll see :slight_smile:

So far I’m liking it!

I think your turntable is waaaay too fast, and that you probably should have attended IAA (I think they are called Campus I12 now) instead of Future Games. :wink:

Yeah, I know. It’s 48 frames and it took me 6 hours to render with my crappy dual-core 2.4 GHz processor. I should have playblasted the damn thing, but I didn’t, since it was just “oh, your simple turntable that nothing can go wrong with”, but I was wrong! :frowning:

Maybe I’ll do another one overnight that is slower :stuck_out_tongue:

Off-topic: about Campus I12, I hadn’t heard of them until now :stuck_out_tongue: Just checked them out; seems very nice!
I might apply for Digital Compositing, depending on whether “insert company name” that I applied to can get me an H-1B visa or not. Thanks for the tip! :slight_smile:

Looks really good, Yasin. :slight_smile:

Ty =}

Just came across the making-of for the Warhammer Online cinematic trailer (the second one) by Blur Studio, who actually used Mental Ray for it (they had switched from Brazil back then).
They use V-Ray nowadays though!

It’s two hours long, very interesting, and a great presentation.

For those who are interested here it is:

http://www.gnomonschool.com/events/blurwarhammeronline/blur_warhammer-online.php