Planet Tech Art
Last update: October 25, 2016 10:59 AM
October 14, 2016

Killzone 2

I got a bogus DMCA notice on this post. Google took it down and now I'm putting it back up. I just finished Killzone 2 and it really is graphically impressive. If you are reading this blog then you are interested in graphics, which means you owe it to yourself to play this game. I think the other levels in the game are actually more impressive than the one in the demo. The level in the demo was pretty geometrically simple: lots of boxy, BSP-brush-looking shapes. The later levels are a lot more complex. In particular, the sand level was very pretty.

Level Construction
There didn't seem to be much in the way of high-poly meshes baked down to normal maps. Most everything was made from texture tiles and heightmap-generated normal maps. Most of the textures are fairly desaturated, to the point of likely being grayscale, with most of the color coming from the lighting and post processing. This is something we did quite a bit in Prey and is something we are trying to change. You may notice the post processing change when you walk through some doorways. The most likely candidates are doors from inside to outside.

Their biggest triumph, I think, is in the fx and atmospherics. There is a ridiculous number of particles. The explosions are some of the best I've seen in a game. There is a lot of dust from bullet impacts, footfalls, wind and explosions. There's smoke coming from explosions, world fires and rocket trails. Each bullet impact also causes a spray of trailed sparks that collide with the world and bounce. Particles are not the only thing contributing. There are also a lot of tricks with sheets and moving textures. For the dust-blowing-in-the-wind effect there is a distinct shell above the ground with a scrolling texture, plus lots of particles. The common trick with sheets is fading them out when they get edge-on and when you get close to them. Add soft z clipping and a flat sheet can look very volumetric. There are also a lot of light shafts done with sheets. One of these situations you can see in the demo. All of this results in a huge amount of overdraw. It has already been pointed out that they are using downsized drawing. This looks to be 1/4 the screen dimensions (1/16 the fill). This is bilinearly upsampled to apply it to the screen, as opposed to using the MSAA trick and drawing straight in. Having the filtering makes it look quite a bit better. It looks like it averages about 10% of the GPU frame time. That would mean they didn't need to sacrifice much to get these kinds of effects.
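To make the sheet trick concrete, here is a minimal sketch in Python of how the edge-on, proximity and soft-z fades might combine. Guerrilla's actual shader is not public, so the constants and the way the factors are combined are my own guesses:

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sheet_fade(view_dir, sheet_normal, dist_to_camera, depth_behind,
               near_fade=50.0, soft_depth=20.0):
    """Combine the three fades described above for one sheet fragment.
    near_fade and soft_depth are hypothetical tuning values."""
    edge = abs(dot(view_dir, sheet_normal))                  # goes to 0 when edge-on
    near = max(0.0, min(1.0, dist_to_camera / near_fade))    # fades out up close
    soft_z = max(0.0, min(1.0, depth_behind / soft_depth))   # fades where it meets geometry
    return edge * near * soft_z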


All the shadows are dynamic shadow maps. Sunlight uses cascaded shadow maps with each level at 512x512. Omni lights use cube shadow maps. They are drawing the back faces to the shadow map to reduce aliasing. Some of the shadow maps can be pretty low resolution. This isn't as bad as Dead Space because they have really nice filtering. This is likely because the RSX has bilinear shadow map PCF for free. I can't tell exactly what the sample pattern is, but it looks to alternate. They have stated there are up to 12 samples per pixel. There is a really large number of lights casting dynamic shadows at a time. Even muzzle flashes cast shadows. Lightning flashes cast shadows. At a distance the shadows fade out, but that distance is pretty far. As to be expected, their triangle counts were evenly split between screen rendering and shadow map rendering, at about 200k-400k. They should be able to get away with a lot more tris than that.


I think this is the first game to really milk deferred lighting for what it's worth. There are a ton of lights. The good guys have something like 3 small lights on each one of them. That doesn't include muzzle flashes. The bad guys are defined by their red glowing eyes. These have a small light attached to them, so the glowing eyes actually light the guys and anything they are close to. In the factory level you can see 230 lights on screen at once. I'm curious whether all of these are drawn or a good fraction is faded out. If none are faded, that is insane: 200 draw calls just for lights, and that doesn't count stencil drawing that can happen before. Their draw counts seem to always be below 1000, so this is not likely the case.

Post processing

A fair amount of their screen post processing is done on SPUs. As far as I know this is a first. The DOF has a variable blur size. This is most easily visible when going back and forth to the menu. There is motion blur on everything, but the blur distance is clamped very small.

Environment maps are used on many surfaces. They are mostly crisp, to show sharp reflections. I didn't see any situation where they were location-based; they are instead tied to the material.

Another neat effect was the water from the little streams. This wasn't actually clipping with the ground or another piece of geometry at all. It is merely part of the ground material, masked to where it should be. The water plane moves up and down by changing which range of a heightmap it is masked to.
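In other words, the mask is just a thresholded heightmap. A minimal sketch of the idea (the softness constant and sample heights are my own assumptions, not Guerrilla's values):

def water_mask(ground_height, water_level, softness=0.05):
    """1.0 where the ground heightmap sits below the water plane, 0.0 above,
    with a soft band at the shoreline. Animating water_level makes the water
    appear to rise and fall without any extra geometry."""
    t = (water_level - ground_height) / softness
    return max(0.0, min(1.0, t))

# a hypothetical stream bed: the mask sweeps across it as the level animates
for level in (0.2, 0.4, 0.6):
    print([round(water_mask(h, level), 2) for h in (0.1, 0.3, 0.5, 0.7)])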

Their profiler says they are spending up to 30% of an SPU on scene portals. I assumed this meant area/portal visibility. In the demo this made sense; after playing the whole game it no longer does. There are many areas in the game that are just not portalable. I'm not sure what it could mean anymore. They could use it as one component of visibility, with the other component not on the SPUs. In that case I am curious what they used for visibility.

The texture memory amount stayed constant. This must mean that they are not doing any texture streaming.

They have the player character cast shadows, but you cannot see his model. I found this kind of strange, especially when you can see the shadows at the feet z-fighting with the ground, but no feet that would have conveniently hidden the problem. It's expensive to get the camera-in-the-head thing to work really well, so I understand why they didn't wish to do it, but personally I would have gone with both or nothing concerning the player's shadow. BTW, why is the player about a foot and a half shorter than everyone else?

For more killzone info:
Deferred lighting
Profiling numbers

It isn't quite to the level of the original prerendered footage but honestly who expected it to be? It is a damn good effort from the folks at Guerrilla. I look forward to their presentation at GDC next week. This is the first year since I've been doing this professionally that I am not going to GDC. I'll have to try and get what I can from the powerpoints and audio recordings. You are all posting your slides right? Wink, wink.

by (Brian Karis) at October 14, 2016 03:30 AM

October 06, 2016

Chronicles of CedricK: escape from Pymel Bay

As anyone can guess, the last 3 years were quite challenging: I was lucky enough to travel and work in Canada, Spain and Scotland.

I didn't manage to update this blog in the meantime, but today's post will be related to Maya development.

Last week Chris Evans wrote a post on skin-weights saving/loading in Maya. I will try to talk here about the parts I find relevant and, lastly, the elements we can improve.

Downloadable files and tools:

After a review of this task and some additional information, you will be able to see some benchmarks using different methods and tools.

The one I wrote for this occasion can be grabbed from my GitHub account:

State of the art and current configuration:

Saving and loading weights is a common task any character TD will face at some point.

Usually in the rigging pipeline:

  • an update can be motivated by a change to the character model,
  • a rig module has changed, or the animation department has requested a special feature.

Those updates mean our rig needs to be rebuilt.

Back in the day, changing the skeleton layout required people to detach their skin deformer, but this is no longer needed.

Another observation is that we are usually interested in updating a whole character's skinCluster or building it from scratch:

  • loading weights on selected points is quite rare (but easy to do).

Practical interaction and scripting access:

From a scripting point of view this task can be done with the skinPercent and skinCluster commands.

# let's try this code in the Python script editor
import maya.cmds as cmds

# the components you wish to act upon
# ('skinCluster1' is a stand-in for your character's skinCluster node)
vertexToWrite = ['bodySuit.vtx[0]']

# when you want to write some point weights:
cmds.skinPercent('skinCluster1', vertexToWrite,
                 transformValue=[('spine1', 0.5),
                                 ('spine2', 0.5)])
# note that you specify as an argument the components you wish to act upon

This example illustrates some of Maya's strong points: its nodal nature, openness and scripting capabilities.

A very interesting design choice was made by the Maya architects:

  • letting users have free access to attributes.

This freedom lets you read/write data at will and connect compatible elements together, and it can all be done without having to select their parent node (see the sketch below).
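A tiny illustration of that freedom (the node names here are hypothetical):

import maya.cmds as cmds

# read and write attributes directly; nothing needs to be selected
cmds.getAttr('pSphere1.translateY')
cmds.setAttr('pSphere1.translateY', 2.0)

# connect two compatible plugs together, again with no selection involved
cmds.connectAttr('locator1.translate', 'pSphere1.translate', force=True)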

In contrast, in 3dsmax the skinOps command is only accessible from the command panel after a skin modifier has been selected, and unfortunately:

  • some exposed functions are broken or don't support undo…

Skincluster node study:

To continue the 3dsmax analogy, an operator which acts on a set of points is called a deformer:

  • it's a specialized node which only updates point attributes (so normals/positions are quite common, but usually not the topology nor the UVs).

In our case, a skinCluster's purpose is to attach a set of points to joints and then define a smooth falloff between regions.
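For reference, the linear blend skinning equation usually looks something like this (in the column-vector convention most papers use):

v' = \sum_{i=1}^{n} w_i \, M_i \, B_i^{-1} \, v

where v is the bind-pose point, w_i its weight for influence i, M_i that influence's current world matrix, and B_i^{-1} the inverse of its bind-time world matrix.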


(Above: the equation as it appears in academic papers. Not very useful or readable, is it?)

In common terms, each influence will move all points rigidly (as if the whole shape were parented to it), and then the per-vertex bone weights define how we mix those results together.


Without any surprise, the main input attribute used to define this behavior is the matrix attribute.

Each influence's worldMatrix feeds this list and tells the deformer when to update our shape's point positions.

In order to parent the shape to a joint, we need to convert points into that joint's space:

  • at bind time, Maya stores your joint's worldInverseMatrix into the corresponding bindPreMatrix attribute (you can verify both below).
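A quick way to inspect this wiring, assuming a scene with a hypothetical node named skinCluster1:

import maya.cmds as cmds

# which joint's worldMatrix feeds each slot of the matrix array?
print(cmds.listConnections('skinCluster1.matrix', connections=True, plugs=True))

# the bind-time inverse world matrix stored for influence 0
print(cmds.getAttr('skinCluster1.bindPreMatrix[0]'))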


Technically, weight data is stored in a compound attribute:

  • each vertex has a sparse array of floats/doubles related to the joints influencing it (see the snippet below),
  • one other fun fact is that you can connect this attribute (Maya being a nodal application, remember?) to get animatable weights (it will slow your rig down tremendously on a complex character, though).
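You can observe the sparseness directly (again with a hypothetical skinCluster1):

import maya.cmds as cmds

# only stored (non-zero) weights have entries; list the influence indices for vertex 0
used = cmds.getAttr('skinCluster1.weightList[0].weights', multiIndices=True)
values = [cmds.getAttr('skinCluster1.weightList[0].weights[%d]' % i) for i in used]
print(list(zip(used, values)))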

This choice was mostly driven by flexibility:

  • when you paint weights or colors on a set of points, having a sparse array attribute saves space and memory.

It's interesting to see that this attribute is inherited from the weightGeometryFilter parent class and as such is shared between all kinds of deformers in Maya (a very useful example of class design and inheritance).


Let's try a simple experiment to illustrate the Linear Blend Skinning algorithm:
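Something along these lines: a sketch that recomputes one deformed point from the formula above. The scene names are hypothetical, and it assumes influence i is plugged into matrix[i] (logical indices can be sparse on real skinClusters):

import maya.cmds as cmds
import maya.api.OpenMaya as om2

def lbs_point(skin_cluster, bind_point, influences):
    """Blend one bind-pose point: p' = sum(w_i * p * bindPreMatrix_i * worldMatrix_i),
    written in Maya's row-vector convention. influences is a list of (joint, weight)."""
    result = om2.MVector(0.0, 0.0, 0.0)
    for i, (joint, weight) in enumerate(influences):
        world = om2.MMatrix(cmds.getAttr(joint + '.worldMatrix[0]'))
        bind_pre = om2.MMatrix(cmds.getAttr('%s.bindPreMatrix[%d]' % (skin_cluster, i)))
        moved = om2.MPoint(bind_point) * (bind_pre * world)  # rigid transform by this joint
        result += om2.MVector(moved) * weight                # weighted mix
    return result

# e.g. a point weighted half/half between two spine joints:
print(lbs_point('skinCluster1', (0.0, 1.0, 0.0), [('spine1', 0.5), ('spine2', 0.5)]))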


Saving weights from the ground up:

It can be tempting to write and read this data to a text/xml/json file when trying to complete this task faster.

It can give reasonable results speed-wise, but falls apart if you try to use it on a feature-film-quality character. (Waiting minutes for skin weight loading/saving, or for a rig build using PyMEL, is not acceptable, by the way.)

To this end, the logical answer is to have a look at the API:

getWeights( const MDagPath  & path,
            const MObject   & components,
            unsigned int      influenceIndex,
            MDoubleArray    & weights ) const

Above: one of the methods of Maya's MFnSkinCluster function set.

On this side we can see that we have more options to extract and write weights:

  • one common point is that we have to provide the shape path and the components we wish to act upon,
  • "component" here refers to a vertex point, curve CV, lattice point, etc…

Years ago, when I was testing the different options to save weights faster, I started by saving individual vertex data to a JSON file.

One thing that struck me was that people are still trying to export skin weights to XML on a per-vertex basis.

The next step was to reverse the logic and save the data from a joint's point of view:

  • each influence carries a list of vertex indices and a matching list of weights.

It showed a great improvement in speed, but in the end the correct answer in my case was to dump the whole shape:

  • instead of filtering point weights below a certain threshold, the API method will expose all values (even zero ones) for all influences.

One interesting side effect is that for a mesh with 12000 vertices (36000 tris) and 170 joints you will get a list of roughly 2 million floats (and no need for influence indices).

It can be limiting on very dense assets, but it has proved to be the fastest method Maya can offer.
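A minimal sketch of that whole-shape dump with the Python API 2.0 (the node and shape names are hypothetical):

import maya.api.OpenMaya as om2
import maya.api.OpenMayaAnim as oma2

def dump_all_weights(skin_cluster, shape):
    """One getWeights call for every vertex and every influence, zeros included."""
    sel = om2.MSelectionList()
    sel.add(skin_cluster)
    sel.add(shape)
    skin_fn = oma2.MFnSkinCluster(sel.getDependNode(0))
    shape_path = sel.getDagPath(1)
    # build a component covering every vertex of the mesh
    comp_fn = om2.MFnSingleIndexedComponent()
    components = comp_fn.create(om2.MFn.kMeshVertComponent)
    comp_fn.addElements(range(om2.MFnMesh(shape_path).numVertices))
    weights, influence_count = skin_fn.getWeights(shape_path, components)
    return weights, influence_count

weights, count = dump_all_weights('skinCluster1', 'bodySuitShape')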


Above: the Sven rig stripped down to only its influence objects and geometries (from an Autodesk open source project).

In Python it makes more sense to save this kind of data set as a whole, in a binary format:


import array

with open(binFile, 'wb') as weightData:
    array.array('d', exposedSkinData).tofile(weightData)  # raw doubles to disk
Here exposedSkinData is the MDoubleArray filled by your getWeights method, and writing it will roughly take …. seconds.

Another alternative, open to MEL scripters and native Maya users, is to take advantage of Maya's nature and leverage its strong points.

It is kind of sad to see former Softimage users bashing MEL nowadays and trying to promote PyMEL as the only correct way to interact with Maya. (Sorry, but ICE and visual programming are neither cool nor new to any user of a nodal application.)

Unfortunately, PyMEL's design, which enforces object-oriented programming:

  • cuts Maya users off from its architecture,
  • uses and exposes API elements to beginners (messing with MObject will have terrible consequences in your UI or scripts… look at MotionBuilder's instability if you want a concrete example of this potential disaster),
  • is slower than both MEL and Maya commands,
  • carries subtle bugs,
  • pollutes the Maya namespace and scripted plug-ins at import time.

end of the rant…

Benchmarking and final thoughts:

So the funny trick to save and load weights is to actually let Maya do all the work:

  • exporting a selected skinCluster node in either binary or ascii format (you need to ensure no connections, history or other accessory elements are included at save time).

Let's call it Maya ascii/binary injection. (There is a really interesting section in the API documentation on the mayaAscii filter with similar concepts.)
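The export half of that trick can be sketched in a few lines (skinCluster1 and the output path are hypothetical; the flags shown are how I would strip history and connections from the export):

import maya.cmds as cmds

cmds.select('skinCluster1', replace=True)
cmds.file('/tmp/bodySuit_skinWeights.mb', force=True, type='mayaBinary',
          exportSelected=True, constructionHistory=False, channels=False,
          constraints=False, expressions=False, shader=False)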

(The following test can be carried out on your side if you download and use the script from:

On my laptop, using Chris's script (after correcting the missing import statements), the Sven export-weights method produces this result in the script editor:

Exported skinWeights for 22 meshes in 1.80200004578 seconds.

The binary extraction method from the script I will share on GitHub:
Processings took 0.82452176412 seconds

The ascii extraction method:
Processings took 2.28121973499 seconds

(Slower, but still reasonable considering the amount of work involved…)

The last method takes advantage of the Alembic cache node:


to save heavy elements into an .abc archive


and lays out additional information into a JSON file. Asset elements are then packaged into a zip file:
Saving 22 elements took 0.473954696144 seconds
Export to D:/ was successful
Geometry SuitShape
Number of vertex 18148
Number of influences 216
Number of weights Samples 3919968
Processings took 0.313700978128 seconds

(SKIPPING 21 other elements)
Processings took 0.516421672774 seconds

That's it for today. I will update the post with techniques related to nodecast / ascii injection in the coming days.

Filed under: Uncategorized

by circerigging at October 06, 2016 08:57 AM

September 30, 2016

September Tool: Ballistic Animation

A lot of times while animating, you need to create simple but convincing physics. This is one of the reasons a bouncing ball is the quintessential animation exercise. This month’s tool is about quickly and easily generating a ballistic arc.

For example, say you’re animating a character throwing a ball. You could animate the ball up to the point where physics should take over, select the frame range to simulate, and then run the tool to generate the animation.

Ballistic Animation
Version: 1
5.0 KiB

Runs very simple gravity physics on the translation of an object, taking into account initial velocity. (This script requires the ml_utilities module.)

Category: Animation Scripts
Date: 1 October, 2016
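For the curious, the core of such a tool is just projectile kinematics. Here is a minimal sketch of the idea in Maya Python (not the actual ml_ballisticAnimation code; the frame rate, units and gravity constant are assumptions):

import maya.cmds as cmds

GRAVITY = -981.0  # cm/s^2, assuming Maya's default centimeter units

def ballistic_arc(obj, start_frame, end_frame, fps=24.0):
    """Key a simple gravity arc on obj.translate, estimating the initial
    velocity from the frame just before start_frame."""
    dt = 1.0 / fps
    p0 = cmds.getAttr(obj + '.translate', time=start_frame)[0]
    prev = cmds.getAttr(obj + '.translate', time=start_frame - 1)[0]
    vel = [(a - b) / dt for a, b in zip(p0, prev)]
    for frame in range(int(start_frame), int(end_frame) + 1):
        t = (frame - start_frame) * dt
        pos = (p0[0] + vel[0] * t,
               p0[1] + vel[1] * t + 0.5 * GRAVITY * t * t,  # gravity only on Y
               p0[2] + vel[2] * t)
        for axis, value in zip('XYZ', pos):
            cmds.setKeyframe(obj, attribute='translate' + axis, time=frame, value=value)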

If this proves useful, I'd definitely like to add more features in the future; in this initial release it's actually quite simple. Hope you enjoy, and thanks to my Patreon supporters for making this possible!

by Morgan Loomis at September 30, 2016 07:52 PM