Maya/Max GDC08 Presentations at The Area

http://area.autodesk.com/index.php/events/gdc08

There are two relevant technical presentations:

Video 3, from Flagship, shows off a really cool asset management tool. It seems to be very full-featured and automates things very well. It certainly isn’t something every studio needs, but for a studio that relied almost wholly on outsourced art, I am sure it saved loads of time and prevented mistakes.

Video 7 is from Ubisoft, about Assassin’s Creed. It is an excellent presentation, covering several great tips and ideas:
Using Skin Wrap to create LODs
Bone-vert aligner
Briefly describing his ‘smart symmetry’ script
Auto-render and composite
MaxAppLink, a clone of ZAppLink for ZBrush

The presenter, Francois Levesque, also touted the very wise mantra of setting up data and scenes by hand and merging them in. In a quest for scripting coolness, I (and others) often try to do everything via script. It is much faster to set something up by hand once and merge it in than to script the entire process (for example, merging in a scene already set up for rendering with background, lights, etc., instead of building that whole setup via script, which would take a while). The tools presented are wonderful for their simple elegance rather than their technical complexity.
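For example, a single line of MaxScript can replace a whole pile of scene-building code (a hypothetical sketch; the network path here is made up, not from the presentation):

-- Merge a hand-made render setup (lights, camera, background) into the
-- current scene instead of rebuilding it all via script. Path is made up.
mergeMaxFile "\\\\server\\pipeline\\renderSetup.max" #select #mergeDups quiet:true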

Head over to The Area, and check them out.

Just finished watching the Flagship video – interesting stuff. I can definitely see how using a tool like that would be useful…I myself am somewhat guilty of using naming convention shortcuts.

Thanks for the link!

Hi Rob, I’m glad that you liked my presentation :)

I focused on simplicity because I figured that most people walking around Autodesk’s booth wouldn’t be expert MaxScript writers, so I didn’t want to scare artists away. But like you said, there is real value to simple solutions, so I hoped that even experts would find them interesting. It’s too bad I only had 30 minutes though; that didn’t leave any time for details and I actually had to cut a lot of stuff.

Hey Francois,

Great to see you posting here, I really liked the presentation! Some of the things you do with very simple scripts are really impressive and time-saving.
I liked the “standard render turnaround” script so much that I wrote the same thing for Maya today. I was wondering what sort of stuff your MaxScript takes as parameters, and how long it usually took to render all the images and composite them into a final file. Did you then write the file out to a network location or version-control system for reviewing?

My Maya “standardRender” script takes 3 parameters - image height, image width, and “number of rotations” (basically it divides 360 by this number and repositions the camera that many times). Then it just renders each angle to a temporary file, assembles them into a layered texture, and writes the final composition out to disk. It also assembles the whole composite texture at runtime - I originally did it your way (artist setup, always had to be 8 images), but I decided I wanted more flexibility, so now I can pass this value in to assemble a strip of images however long I want.

My script is probably about 50 lines, but that’s because I actually set all the material and render settings via the script (although the lights, cameras and ground plane are merged from a set-up scene). I understand why you did the “load preset” option for the render settings and the material, but how did you ensure that all the artists’ Max files used the same Render Preset .rps file? Did the script just copy it from a network location, or did each artist have to make sure they had the file in the correct place? Was it the same for the “template” Max file with the lights and camera rig?

Sorry about all the questions, I’m just quite excited because I got the whole thing working perfectly in Maya tonight and rendered out a few models; it works really well for keeping consistency without the effort! I’m a Max guy myself anyway, just using Maya more at work now, so I mostly script in Maya these days.

[QUOTE=Francois Levesque;352]Hi Rob, I’m glad that you liked my presentation :)

I focused on simplicity because I figured that most people walking around Autodesk’s booth wouldn’t be expert MaxScript writers, so I didn’t want to scare artists away. But like you said, there is real value to simple solutions, so I hoped that even experts would find them interesting. It’s too bad I only had 30 minutes though; that didn’t leave any time for details and I actually had to cut a lot of stuff.[/QUOTE]

Using the light for a texture blend was gold. Great idea, Francois.

Feel free to elaborate much further on your lecture here! :)

Yea, I have to say I thought that was rather slick as well. I also like the use of the composite material for rendering out the character images.

Also, thanks for posting the link to the videos, Rob. Having attended GDC, I still tried to get as much as I could from the GDC site; however, many presentations were never uploaded. :no:

I love all of these vids, but Francois’ has to be my favorite. Great work on this stuff, a lot of cool insight, and I can’t wait to try coming up with a render setup similar to what Francois described and MoP is working on. The Harmonix AD video, while unrelated, is pretty enjoyable as well.

[QUOTE=Francois Levesque;352]Hi Rob, I’m glad that you liked my presentation :)

I focused on simplicity because I figured that most people walking around Autodesk’s booth wouldn’t be expert MaxScript writers, so I didn’t want to scare artists away. But like you said, there is real value to simple solutions, so I hoped that even experts would find them interesting. It’s too bad I only had 30 minutes though; that didn’t leave any time for details and I actually had to cut a lot of stuff.[/QUOTE]

Well, despite what you had to cut and cater to, it was still one of the more interesting videos by far… I actually ended up leaving most of the Autodesk-sponsored talks I went to (admittedly few), as they were geared towards a much more introductory level, while yours was chock-full of good ideas.

I would really encourage you to explain more about the smart morphing thing. This is something I really want to write myself (unless you could release the code, heh heh), since it’d have lots of uses in skinning as well as morphing. You could start a thread on it or start an article on the Wiki. The rest of the subjects were really cool and interesting ideas, and a good TA could take them and run with them, but the smart symmetry wasn’t really delved into on a technical level at all, so I’d really like to find out more.

[QUOTE=Rob Galanakis;391]

I would really encourage you to explain more about the smart morphing thing. This is something I really want to write myself (unless you could release the code, heh heh), since it’d have lots of uses in skinning as well as morphing. You could start a thread on it or start an article on the Wiki. The rest of the subjects were really cool and interesting ideas, and a good TA could take them and run with them, but the smart symmetry wasn’t really delved into on a technical level at all, so I’d really like to find out more.[/QUOTE]

Couldn’t agree more. These sorts of lectures are why I kick myself over missing events like this.

Sorry about the delay, I wanted to check with Ubisoft’s management before sharing more stuff! Turns out they’re fine with it. :cool:

Yep. Ubisoft’s cinematic team used it for morph targets, FarCry2 artists used it for modeling and I use it myself for skinning. It’s very handy.

I usually start my scripts by figuring out the easiest way to get the result manually. To snap 2 objects manually I’d always start by finding a vertex that is obviously in the same topological position on both meshes, like the tip of their noses. On object 1 that vertex might be connected to a few edges like this:

Edge1 = Up
Edge2 = Down
Edge3 = Left

But on the second model the same vertex might be connected like this:

Edge3 = Up
Edge1 = Down
Edge2 = Left

At this point it is rather easy to figure out which edges are related, even if their vertices don’t share the same index or even if they are not anywhere near each other… I just snap edges based on their angles. The newly snapped vertices are turned into new starting points, and I can keep doing this until everything has been snapped. That’s it. The scale or the position of each object is irrelevant since I only care about topology and edge flow. The script works like that because that’s how I do it myself.

In a script you can find the angle of an edge like this:
normalize(Vertex1.pos - Vertex2.pos)

And you can find if 2 angles are similar by doing a dot product:
(dot Angle1 Angle2)

You should get a number somewhere between -1.0 and 1.0 where:
1 = The angles are identical
-1 = The angles are opposite

That’s pretty much all the math you need. In my script I ask the artist to pick the starting vertices himself, so all I have to do is grab the list of edges that are connected to them, find their angles, and match them to the other object’s based on which angles are closest. There’s also some stuff to work around small differences in topology, but nothing fancy. Vertices that are not connected to the same number of edges on both models are simply skipped, so the script usually ‘wraps’ around them. It works really well right now, but perhaps the system could be improved even more by also using the concept of edge rings & edge loops?
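In MaxScript the matching step looks something like this (a rough sketch only, not my production code; it assumes Editable Poly objects and all the names are made up):

-- Given one matching vertex pair, pair up the edges around it by
-- comparing edge directions with a dot product.
fn edgeDirection obj edgeIdx fromVert =
(
    local edgeVerts = (polyop.getVertsUsingEdge obj edgeIdx) - #{fromVert}
    local toVert = (edgeVerts as array)[1]
    normalize ((polyop.getVert obj toVert) - (polyop.getVert obj fromVert))
)

fn matchEdgesAroundVert objA vertA objB vertB =
(
    local edgesB = (polyop.getEdgesUsingVert objB vertB) as array
    local pairs = #()
    for eA in ((polyop.getEdgesUsingVert objA vertA) as array) do
    (
        local dirA = edgeDirection objA eA vertA
        local best = undefined
        local bestDot = -2.0 -- dot products range from -1 to 1
        for eB in edgesB do
        (
            local d = dot dirA (edgeDirection objB eB vertB)
            if d > bestDot do ( bestDot = d; best = eB )
        )
        append pairs #(eA, best) -- edge on A with its closest-angle edge on B
    )
    pairs
)

The far vertices of each matched edge pair become the new starting points, and the loop repeats until the whole mesh has been snapped.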

I have a confession to make :scared: In my presentation it looks like the ‘smart symmetry’ takes like 5 seconds to run on a rather high-resolution head. It does run that quickly on in-game characters, but on high-res data it takes a minute or two. So the results are real but I shortened the ‘processing time’ for that one; I couldn’t wait that long with only 30 minutes to show everything.

edit: MoP, I’ll answer your questions later after work

[QUOTE=Francois Levesque]I have a confession to make. In my presentation it looks like the ‘smart symmetry’ takes like 5 seconds to run on a rather high-resolution head. It does run that quickly on in-game characters, but on high-res data it takes a minute or two. So the results are real but I shortened the ‘processing time’ for that one; I couldn’t wait that long with only 30 minutes to show everything.[/QUOTE]

Ahah, that’s ok Francois, the actual process is cool enough to overlook that :D. Thanks for giving that bit of info on the subject, as it definitely gets the brain working.

Very cool, Francois. Thanks for the info! I may give it a stab on my own (outside of work), it would be a cool thing to be able to release since it has a multitude of uses.

Great info Francois. Thanks for sharing that. I’ve also thought about ways to compare two meshes in the past, mainly after seeing Gator in action in XSI and wondering how they managed to match two different meshes that didn’t exactly overlap.

Your way of using edges is great, and I hadn’t thought of that. But a different method I thought about would be to do a distance comparison to find the closest verts, but then also a normal comparison. So you would start with any vert on Mesh1 and look for a vert on Mesh2 that is not just the closest, but has the best match of its normal. That way the meshes wouldn’t necessarily need to completely overlap. And to cut down on processing time, so that you don’t have to go through every vert in Mesh2, you could use a Volume Select modifier on Mesh1, set to use a sphere of a certain size for the mesh gizmo, and just keep moving it around to match the position of the vert you’re testing at that time.

Hope that makes sense. There would be the issue of scale with this method, which you mentioned you don’t have to worry about, but I did a quick prototype test in the past and it worked pretty well overall.
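In rough MaxScript the test would look something like this (a hypothetical sketch assuming Editable Mesh objects, brute force without the Volume Select optimization):

-- Find the best match for one vert of meshA: among verts of meshB that
-- are within maxDist, favour the one whose normal agrees the most and
-- whose position is closest. Brute force; the Volume Select trick (or a
-- BSP tree) would shrink the candidate list.
fn bestMatchingVert meshA vertA meshB maxDist =
(
    local pA = getVert meshA vertA
    local nA = getNormal meshA vertA
    local best = undefined
    local bestScore = -3.0 -- actual scores range from -2 to 1
    for i = 1 to meshB.numVerts do
    (
        local d = distance pA (getVert meshB i)
        if d <= maxDist do
        (
            -- normal agreement (-1..1) penalized by normalized distance (0..1)
            local score = (dot nA (getNormal meshB i)) - (d / maxDist)
            if score > bestScore do ( bestScore = score; best = i )
        )
    )
    best -- undefined if nothing was within maxDist
)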

I believe the Skin Utilities used for transferring skin weights from one mesh to another only use a distance comparison, and so you end up with strange weighting at times if the meshes aren’t really set up to match each other as closely as possible. And that’s why the Gator method was so impressive, because it did such a better job at matching dissimilar meshes. So I’m guessing they use something more than distance, possibly even a combination of both of our methods, using edges and normals as well as distance.

[QUOTE=MoP;357]Great to see you posting here, I really liked the presentation! Some of the things you do with very simple scripts are really impressive and time-saving.
I liked the “standard render turnaround” script so much that I wrote the same thing for Maya today. I was wondering what sort of stuff your MaxScript takes as parameters, and how long it usually took to render all the images and composite them into a final file. Did you then write the file out to a network location or version-control system for reviewing?[/QUOTE]
1- There aren’t any parameters since enforcing standard settings is the whole point of that particular script. Of course the .rps is easy to update, but it’s not meant to change during production.
2- It’s rendered by 3dsMax so it’s quite fast unless you go crazy with lights. The composite image takes around 2 seconds I guess, it’s also handled by 3dsMax’s renderer.
3- None of the artists liked the idea of me archiving their renders automatically, so I didn’t do it. My feeling is that they wouldn’t have used the script as often if it did. The file was saved locally and it was up to them to upload an image when the character was finished.

The script always loads files from the network, never copies them. That way I don’t have to worry about people having custom or old versions. Also, our artists only had ‘read’ access to my folders, so they couldn’t change pipeline stuff without telling me about it.
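The skeleton of a script like that is tiny; something along these lines (a simplified sketch with made-up paths and node names, not the actual production script):

-- Load the shared render preset and merge the hand-made rig straight off
-- the network, then render 8 fixed angles. Everything here is made up.
renderPresets.LoadAll 0 "\\\\server\\pipeline\\standard.rps"
mergeMaxFile "\\\\server\\pipeline\\turnaroundRig.max" #mergeDups quiet:true
rig = getNodeByName "TurnaroundRig"
cam = getNodeByName "TurnaroundCam"
for i = 0 to 7 do
(
    rig.rotation = eulerAngles 0 0 (i * 45.0)
    render camera:cam outputFile:("c:\\temp\\turn_" + (i as string) + ".tga") vfb:off
)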

Yep, makes sense, that’s kinda how I’m doing it. My scripts are under version control, so people will always have the latest version if they keep SVN up to date, and they don’t have to copy files around. Plus I can see if anyone commits an altered version.

Seems Max’s compositor is much, much faster than Maya’s… 8 renders in Mental Ray usually take about 3-5 seconds each, but then “convert solid” (rendering a material node to a texture in Maya) takes up to 30 seconds for a 5200x900 image. I would have thought it’d be faster, but then again, it’s Maya :)

Thanks for taking the time to share this stuff! :D

Thanks for the insight, Francois. It’s really interesting to see that these production techniques are so simple when you think about them, or in this case when someone tells you! :laugh:

As they say, “It’s easy once you know the answer!”.

[QUOTE=JHaywood;437]Great info Francois. Thanks for sharing that. I’ve also thought about ways to compare two meshes in the past, mainly after seeing Gator in action in XSI and wondering how they managed to match two different meshes that didn’t exactly overlap.

Your way of using edges is great, and I hadn’t thought of that. But a different method I thought about would be to do a distance comparison to find the closest verts, but then also a normal comparison. So you would start with any vert on Mesh1 and look for a vert on Mesh2 that is not just the closest, but has the best match of its normal. That way the meshes wouldn’t necessarily need to completely overlap. And to cut down on processing time, so that you don’t have to go through every vert in Mesh2, you could use a Volume Select modifier on Mesh1, set to use a sphere of a certain size for the mesh gizmo, and just keep moving it around to match the position of the vert you’re testing at that time.[/QUOTE]
Edges only work when both objects have the same topology, so that can be a problem, but the accuracy is almost flawless. With position it’s the opposite: you can use any meshes, but accuracy won’t be nearly as good. So both solutions are useful, I think; position is certainly better for topologies that are not similar. For different topologies you could also use ray-casting, or even 3dsMax’s ‘projection modifier’, which can project vertex data such as color, alpha and position.

The volume select optimization is cool, that’s exactly the sort of solution I like :) Is it fast? If not, you might want to try using a BSP tree, like Seb’s. I haven’t tried it but it looks pretty good. http://www.subdivme.com/sebscorner/?page_id=10

In Assassin’s Creed our human characters are scaled in-game, but everyone has the same size in 3dsMax, so sharing skinning was relatively easy. There aren’t any monsters or things like that.

You’d probably be best off casting a ray from the vert, getting the face it hits, and then testing the verts on that face and maybe one ‘growth’ of adjacent faces and their verts.
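A hypothetical MaxScript fragment of that idea (assuming Editable Mesh nodes; you may need to flip the ray if the normals point away from the other mesh):

-- Cast a ray from a vert along its normal; the face it hits on the other
-- mesh (plus one growth of neighbours) holds the only verts worth testing.
fn faceUnderVert srcMesh vertIdx targetMesh =
(
    local r = ray (getVert srcMesh vertIdx) (getNormal srcMesh vertIdx)
    local hit = intersectRayEx targetMesh r
    if hit != undefined then getFace targetMesh hit[2] else undefined
)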

Hi all!

I really got inspired by the Assassin’s Creed vid, especially the part with the LOD steps. It inspired me so much that I want to do something similar in Maya. Thanks a lot for the inspiration!

I didn’t get far before I stumbled into my first problem. OK, I can make a blend shape between two LOD1 heads, no problem, but how do I connect LOD2 and LOD3 with LOD1? In Max he used Skin Wrap; is there something similar in Maya? My Maya knowledge is mostly limited to modelling and texturing. I hope that some of you can point me in the right direction to make this wonderful script possible in Maya.

In Maya you would use a Wrap deformer, as it acts the same as the Skin Wrap they were using in Max.