Planet Tech Art
Last update: April 18, 2014 10:59 PM
April 18, 2014

Roger Roger

If you've been playing with the standaloneRPC server, I've added a new branch to the github project that includes a minimal level of password security. It's still not the kind of security you want if this is to be exposed to the wicked world, but it should suffice to keep you safe from teammates who want to prank you.

Comments / bug reports and pull requests welcome! If you use the github wiki to log an issue or ask a question, it's a good idea to put a comment here to make sure I see it.

by Steve Theodore ( at April 18, 2014 05:17 AM

April 17, 2014

Torus Knots

Made a torus knot from a line and an attrib vop. I was able to make a few different types after I figured out the basics from Wikipedia and a few other sites around the web…
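For reference, the basic (p, q) torus-knot parametrization from Wikipedia can be sketched in Python; the same expressions would drive the point positions inside an attrib VOP. The function name and the values for R, r, p and q below are illustrative, not the actual settings used here.

```python
import math

def torus_knot(t, p=2, q=3, R=1.0, r=0.25):
    """Point on a (p, q) torus knot at parameter t (0..2*pi).

    R is the major radius of the torus, r the minor radius; p and q
    must be coprime for the curve to be a true knot.
    """
    x = (R + r * math.cos(q * t)) * math.cos(p * t)
    y = (R + r * math.cos(q * t)) * math.sin(p * t)
    z = r * math.sin(q * t)
    return (x, y, z)

# Sample the curve along a line's 0..1 parameter, as a VOP would per point:
points = [torus_knot(2 * math.pi * i / 100.0) for i in range(101)]
```

Because t = 0 and t = 2π give the same point, the sampled curve closes on itself, which is what makes it a knot rather than an open coil.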

by Ian at April 17, 2014 11:59 PM

April 16, 2014

A video is worth 30,000 words per second?

    Just popping in to say that if you haven't checked out my Vimeo channel recently...err...there's not much new stuff there, but there's more fun stuff coming (along with new blog posts that I've promised people and myself).  I'm considering doing some Cinder video tutorials if I ever find some free time, not sure about what, maybe covering Kinect, the Intel depth cameras, that sort of thing...not sure really, we'll see.  But anyway, yeah, bookmark my channel, show your friends, loved ones, your inner circle, all that.  If nothing else, it's good for a tiny bit of inspiration...maybe.

>>> Me on Vimeo <<<

by Seth Gibson ( at April 16, 2014 06:01 AM

Results are not the point?

The phrase “results are not the point” often confuses people new to Lean thinking. It confused the shit out of me; I hadn’t really understood it even after my first few books. This is a shame, because it’s such a simple thing.

On Friday night, Danny got really drunk, coded a game, and the game was a hit. Danny did this again the following Friday, with the same results. And once more that third Friday.
Jess codes on sober Saturday nights instead (still drinks on Friday). Jess programs a game, and it runs poorly, crashes often, and isn’t fun. The following Saturday, Jess makes a new game, which runs fast but still isn’t fun and crashes often. That third Saturday, Jess creates a new well-performing, fun game, though it still crashes.
Would you bet on the long-term success of Danny or Jess?

Clearly, the better bet here is Jess. Jess has discovered a process which can be continuously improved. There is good reason to believe Jess will eventually create reliable success. The fact that Danny has been successful three times is basically irrelevant, since Danny’s process is totally haphazard.

This is the idea behind results are not the point. Focusing on the results, and not how those results were achieved, doesn’t improve anything in the long term. The point is to create a repeatable, empirical, continuously improving process. If we can create a reliable, successful process (which here includes culture and practices), we can get reliable, successful results.

by Rob Galanakis at April 16, 2014 02:43 AM

April 15, 2014

Autodesk 3ds Max 2015 released

And with it, the Python API!

Which comes with something really interesting now… PySide 2.1 is built-in! Really cool :)

Anyway, bookmark this URL, you will probably need it during the next year:

by Artur Leao at April 15, 2014 08:42 PM

Rigging Dojo’s Artist in Residence (AIR) : April GDC Wrap up

To subscribe – After you complete payment click “return to Rigging Dojo” and then “register” with  FirstnameLastname format. Hi All, Chad here. Well GDC and several production milestones bumped out …

The post Rigging Dojo’s Artist in Residence (AIR) : April GDC Wrap up appeared first on Rigging Dojo.

by Rigging Dojo at April 15, 2014 02:32 PM

Coding a Maya Production Pipeline with MetaData

Heads up.. I'm going to be doing a presentation at Develop in Brighton this year about how to utilize Red9Meta in a production pipeline, running through some internal examples of the tools and Maya dag structures that we're currently working on at Crytek. 

This will be an overview really: how and why metaData helps not just in constructing complex setups (everything from Exporter, Facial and Rigging pipelines), but also as a light coding API to deal more seamlessly with nodes in Maya.

For all of those doing the Rigging Dojo Character Engineering course, this might be a good chance to catch up.

More details to follow, but if there's anything in particular that you'd like me to include, drop me a mail.



by Mark Jackson ( at April 15, 2014 11:21 AM

Grab 3dsmax viewport with python – Part 2

I finally managed to get the viewport grabbing to work with python only, no hacks using MaxPlus.Core.EvalMAXscript(). More or less :) It’s working for the standard viewport.getViewportDib() equivalent in Maxscript; I still can’t figure out how to grab it directly from the GraphicsWindow (gw.getViewportDib()) to get a clean viewport snapshot without any overlays like gizmos, etc. This method works only in 2015.

Long story short, I’ve updated the YCDIVFX MaxPlus Packages in github and even added a new package called maxhelpers where I will put code that will be reused across all other packages.


Here’s how the code looks:

def ActiveViewport(filename=(MaxPlus.PathManager.GetRenderOutputDir()
                             + r'\default.jpg')):
    """Grabs the viewport to a file on the hard-drive using the default viewport size.

    :param str filename: a valid path to an image file

    :rtype: MaxPlus.Bitmap
    """
    # Create storage
    storage = MaxPlus.Factory.CreateStorage(MaxPlus.BitmapTypes.BMM_TRUE_64)

    # Create BitmapInfo
    bmi = storage.GetBitmapInfo()

    # Set filename
    bmi.SetName(filename)

    # Create bitmap to hold the dib
    bmp = MaxPlus.Factory.CreateBitmap()

    # Viewport Manager
    vm = MaxPlus.ViewportManager
    # Get active viewport
    av = vm.GetActiveViewport()
    # Grab the viewport dib into the bitmap
    av.GetDIB(bmi, bmp)

    # Open bitmap for writing, save it out and close it
    # (mirrors the SDK's Bitmap OpenOutput/Write/Close sequence)
    bmp.OpenOutput(bmi)
    bmp.Write(bmi)
    bmp.Close(bmi)

    return bmp

To grab and display the bitmap in max you just need to do this:

bitmap = ActiveViewport()

Hope this is useful!

by Artur Leao at April 15, 2014 11:10 AM

April 14, 2014

Sweet Sumotori Dreams

I had no idea that the genius behind Sumotori Dreams is still making awesome procedural animation toys.

If you're not familiar with Sumotori Dreams, it's the funniest thing that ever happened to procedural animation. Proof here (loud cackling and some profanity in the audio track; I could not find any video that did not have lots of hilarity and shouting):

If you're at all interested in procedural animation - or have even a tiny sliver of a sense of humor - you should buy the iPhone app, the Android app, or the PC version. This guy deserves our support!

On a related note, if you like this you may find this talk from the developer of Overgrowth interesting as well.

by Steve Theodore ( at April 14, 2014 09:23 PM

Super simple manual flipbook.

Say you want to manually animate a flipbook in matinee.
Here's a very simplified setup for it. The downside is: it has to be one single row, but you can modify the amount of columns as you wish.

There could be fewer instructions, but the idea is to make it easy to instance. In the material instance, you just have to modify the FramesAmount parameter, and you can animate the FrameIndex parameter in matinee with everything working fine.

What's happening there?
You simply tile and shift your texture.


In this case, one tile is a fourth of the texture size. So you divide 1 by the amount of horizontal frames in your texture and multiply only the U of your texture coordinates.
You've got the correct tile size.


To display the correct frame at the right moment, you'll just need to move the texture horizontally.
The amount by which you need to shift the texture to reach the next frame is one nth of your texture, n being the amount of horizontal frames (4 in this case).
The FrameIndex value (animated from matinee) multiplies this to find how many times you need to shift it.

The floor node is there to ensure you only display full frames.
Since the frame index is floored, your index will start at 0; remember this when you control the value in matinee. To animate a four-frame texture, the value will have to interpolate between 0 and 3. (If your texture is set to wrap, a FrameIndex of 4 will get you back to displaying frame 0.)
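The whole material graph boils down to a one-line UV transform. Here is a Python sketch of the same math (the function name is mine, and the modulo stands in for the texture's wrap mode):

```python
import math

def flipbook_u(u, frame_index, frames):
    """Remap a base U coordinate to one frame of a single-row flipbook.

    Mirrors the material: scale U by 1/frames, then shift by
    floor(frame_index) whole tiles; the modulo emulates wrapping.
    """
    tile = 1.0 / frames                     # e.g. 4 frames -> 0.25-wide tiles
    shift = math.floor(frame_index) * tile  # floor -> only full frames shown
    return (u * tile + shift) % 1.0

# Four-frame texture: frame 3 starts at U = 0.75; frame index 4 wraps to 0.
```

Animating frame_index from 0 to 3 in matinee then steps through the four tiles exactly as described above.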

Not super clean or flexible but definitely super simple.

by mkalt0235 ( at April 14, 2014 05:39 PM

Get 3dsMax viewport HWND

In Maxscript! I’ll just leave it here as a reminder and hopefully it will be useful for you too!~

 assembly = dotnet.loadAssembly "Autodesk.Max"
 g = (dotnetClass "Autodesk.Max.GlobalInterface").Instance
 inface = g.CoreInterface
 activeview = inface.ActiveViewExp
 print activeview.Hwnd

by Artur Leao at April 14, 2014 11:37 AM

April 13, 2014

Warning: Garish graphics ahead!

If you're tired of boring old light-grey-on-dark-grey text, you'll be pleased to know that the Maya text widget actually supports a surprising amount of HTML markup. Which means that instead of this:

You can set people's eyeballs on fire like this:

This is a single cmds.text object with its label property set to an HTML string.

It turns out that cmds.text is actually a fairly full-featured HTML4 renderer! That means that you can create pretty complex layouts using many -- though not all -- of the same tools you'd use for laying out a web page. You can style your text with different fonts, sizes, colors, alignments and so on - you can even use CSS style sheets for consistency and flexibility.

More than that, you can also include images, tables and layout divisions, which are great for formatting complex information. No more printing out reports into dull old textScrollFields!

Best of all, it's trivial to do.

All you need to do is set the label property of a cmds.text object to a string of valid HTML. By default your object inherits the standard Maya background and foreground colors, but you can override these in your HTML. You can even compose your text in an HTML editor like DreamWeaver or Expression Blend; that's how I did the example in the graphic above.
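A minimal sketch of the idea (the window, layout and markup here are made up for illustration, and it needs a running Maya session to actually display anything; the import is guarded so the snippet can be read outside Maya):

```python
# Guarded import: maya.cmds only exists inside a Maya session.
try:
    import maya.cmds as cmds
except ImportError:
    cmds = None

# Any reasonable subset of HTML4 works as the label string:
html = (
    '<h3 style="color:#ffaa00">Build Report</h3>'
    '<p>Everything passed except <span style="color:red">3 checks</span>.</p>'
)

if cmds is not None:
    win = cmds.window(title='HTML label demo')
    cmds.columnLayout(adjustableColumn=True)
    cmds.text(label=html)   # the label string is rendered as rich text
    cmds.showWindow(win)
```

The only Maya-specific part is handing the HTML string to the label flag; everything else is ordinary string building, so you can generate reports programmatically and format them however you like.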

There are some limitations you need to be aware of.  The big ones seem to be:

  • HTML/CSS controls for positioning text or divs don't seem to work. Align tags inside a span element do work, but float and positions apparently do not.
  • The renderer won't fetch images or other resources from a URL or relative paths.
  • No JavaScript - so no blinking text or animated gifs. I'm not sure that's a loss.
  • No inputs such as buttons, checkboxes or text fields.
  • Fonts seem to render smaller inside the Maya text than they do in a conventional browser. You can't specify text size in ems or percentages; pixel sizes seem to work fine, however.
  • It looks like text is the only control that supports this styling right now (tested in Maya 2014).
I'd assume that these limitations reflect the behavior of the underlying QWidgets inside of Maya - if anybody has the real dope to supplement my guesswork, please chime in.

In the mean time, here's to the inevitable avalanche of eye-ripping garishness that is sure to result from this revelation. As for me, I'm off to go convert my whole toolset to Comic Sans! 

by Steve Theodore ( at April 13, 2014 12:59 AM

April 12, 2014

The Last of Us: Remastered (PS4) !

My last project, The Last of Us, is getting a Remastered release on PS4 with a special dose of HD All-The-Things!

Check out the info here: Playstation Blog

by Nathan at April 12, 2014 09:21 PM

The “Year of Code” Director is Your Boss

There was some hubbub a few months ago when it was revealed the Executive Director of the UK’s Year of Code initiative can’t code [link]. Not that technical (domain) competency is a sufficient condition for management and leadership, but I’d certainly call it a necessary condition. (I’ll use the word ‘technical’ below to refer to any sort of specialized domain, not just programming.)

Apparently a number of people don’t agree with the idea that competency in a domain is a requirement to manage that domain.* I find this idea infuriating and it can only end poorly.

Perhaps you have a manager who knows little about programming or design or whatever your specialty is, and you consider this person to be the best boss of all time. Great! I’ll call this person Your Boss for convenience. Here’s the problem:

At some point, Your Boss needs to make some contentious decisions. Maybe over your work, maybe over something you’re not directly involved with (I bet Your Boss was hated by a lot of people, too!). Your Boss has literally no ability to help resolve a technical decision. “Wait!” I hear you say. “My Boss is enlightened enough to know that the people closer to the problem should be making the decision!”

But who are those people closer to the problem? Who put them there? Oh, that’s right: Your Boss. But Your Boss has little technical knowledge. How is Your Boss supposed to decide who makes the more technical decisions? Without basic technical ability, Your Boss doesn’t even know what questions to ask. Your Boss can’t even learn; she doesn’t have the technical prerequisites. Instead of being able to provide leadership, Your Boss is left scratching her head. This is not leadership, and this is not management. This is a cancer, and it leaves an organization unable to grow and learn.

It’s no surprise this topic is divisive. When Your Boss places a lot of trust in you, you are autonomous and think of Your Boss as the best boss of all time. But when someone runs up against you and Your Boss, they have no real recourse, because Your Boss trusts you and has no ability to further inspect the situation.

Certainly, superior ability or experience is not a requirement for management over a domain. But I thoroughly believe that not just familiarity, but some actual professional practice, with a domain is a requirement. I hope that if you are someone who believes in the myth of the competent non-technical manager, you’ll rethink your experience and view Your Boss in a more complete light.

* Clearly, at some point, you cannot be experienced in all the domains you manage, and need to trust people. Unfortunately we do this far too soon, and accept a development manager who has not developed, or an engineering manager who has not done much programming. In the case of the Year of Code Director, I think the issue is a non-technical background (in either programming or teaching) and a general lack of experience. If she had proven a wunderkind in her given field (which is, btw, basically PR/marketing/communications), maybe she should be given the benefit of the doubt. There are many examples where talented administrators have moved into new areas and been quite successful. But her appointment, along with most of the rest of the board, is pretty clear cronyism (and when you throw out technical merit and domain experience, you’re left pretty much with cronyism).

by Rob Galanakis at April 12, 2014 07:59 PM

Adobe Photoshop CC- Not exactly an early adopter...

It's a words post! Where is the code? That comes later... for now, words. I'm now an official owner (or at least subscriber) of/to Adobe CC... yeah, I got through my whining stage and now I'm learning to love the cloud, or at least learning how to accept the inevitable.
Long live the cloud! I guess.

But from the Photoshop tool development perspective, I think I actually find it a little more exciting than I'm letting on. You mean *everyone* will be on a standard version? No kidding! What an awesome development! No more hacking in (sometimes) seemingly random version numbers! The ability to assume everything you want to support is supported. Great! Don't have the right version? Update your Photoshop, buddy!

When taken from that perspective I really think that Adobe's decision (aside from the whole aspect of never actually 'owning' the software) is a pretty great one. Maintaining pipelines for multiple versions of Photoshop ceases to be a major problem*, and tool development and distribution becomes, if not simpler, at least a little more direct in execution. 

I'm also taking my first steps into the Photoshop SDK, which is an incredibly powerful and daunting piece of architecture. Not only does it require C++ for creating plug-ins, but it also seems to be halfway between a development framework and a history lesson on ye early days of Photoshop. And the documentation? Reading through it, there seems to be a big Photoshop-SDK-tutorial-shaped hole where the SDK tutorials ought to be.

But, if it were easy, it wouldn't be as fun! Now to work through that hello world tutorial...

* Currently supporting four different versions at work and trying very hard not to.  

by Peter Hanshaw ( at April 12, 2014 03:36 PM

April 09, 2014

Perforce: Setting up Ignore lists

Sometimes your local workspace will have files that you don't want to check in. Examples include Maya swatches files, the Library folder if you are using Unity, and anything called tmp.

You can set up P4 to ignore files and folders using the P4IGNORE environment variable, and a text file in your perforce root called .p4ignore.txt

Here is how you set it up:

An example of the contents of an ignore file. Comments are added using the # sign.
  • Create a file in the root directory of your workspace (e.g. C:/Projects/Perforce/) called .p4ignore.txt
  • Inside this file, define which file types and folders to ignore. For example:
  • The .swatches files generated by Maya.
  • The Library folder generated by Unity.
  • Any other folder that contains source work that should remain local to people's workstations (texture bakes, render data, etc.).
  • Once this file is set up, open the command console (Windows key + R, then type cmd) and run: p4 set P4IGNORE=.p4ignore.txt
Setting the environment variable in a Windows environment.
  • Now Perforce should ignore the file types and folders defined in the text document.
  • The next time an attempt to add one of these files is made, a warning will show up and the files will not be added to your changelist.
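Putting the steps above together, the ignore file itself might look something like this (the patterns are illustrative, adjust them to your own project; lines starting with # are comments):

```shell
# .p4ignore.txt -- example patterns
# Maya swatch files
*.swatches
# Unity's generated Library folder
Library
# anything called tmp
tmp
```

With P4IGNORE pointing at this file, `p4 add` on a matching file reports it as ignored instead of opening it for add.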


by Peter Hanshaw ( at April 09, 2014 06:37 PM

Fixed rotator

What's on today:
  • Fixed rotator.
  • Fixed rotator with steps.
  • Quick look at vector graphics textures in After Effects.

Fixed rotator.

How to rotate a texture by a certain fixed angle?

Just input a constant value in the Time pin of your rotator. The constant value is like saying ‘this is where the rotator would get you in that much time’. (I mean, not technically, since the unit is not seconds, but it’s sort of a way of seeing it.)

What to expect from that value? First, set your rotator speed to 1. Then you have to know that Unreal uses radians as its rotation unit.
180 degrees = π (pi, the maths symbol, equal to 3.14159 and so on).

Therefore, 90 degrees is going to be π / 2.
Good thing to know: you can do maths in input fields. In your constant input, you can simply type in 3.14159/2.
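The degrees-to-radians conversion is easy to sanity-check outside the editor; for example, in Python:

```python
import math

# 90 degrees in radians -- the value you would type into the rotator's
# Time pin (with Speed set to 1): pi / 2, i.e. 3.14159 / 2.
quarter_turn = math.radians(90)
half_turn = math.radians(180)
```

In general, radians = degrees * π / 180, which is exactly what you are doing when you type 3.14159/2 into the input field.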


You could use a fixed rotator to represent a circular gauge for instance. Most likely, the gameplay code is going to provide you with some normalized parameter. (Frankly that’s the best option. Sure modifying the value to fill your needs will add a few instructions but it gives you the flexibility to easily do anything you want with it, rather than having to go bug a coder so he changes the value he's hard coded for you.)

Modify the input to the range you are interested in (say 0 to 90 degrees), plug it into the rotator, and then into a mask that multiplies your gauge.

You’ll notice that I’ve just multiplied my normalised value; since my min is 0, I don’t need to use a lerp which would add more instructions for the same result.

Fixed rotator with steps.

We can go one step further.
In Enslaved, Trip has an EMP gauge which recharges over time and is split into several chunks which appear one at a time.

I used a similar setting, with a rotating mask that multiplied my texture, only this time the rotation value had to be contained until it reached the next step.

In the following example our gauge value is at 0.5, that's half full. The gauge is split into 5 chunks, so each step is 0.2.

We check: how many steps are there in our current value? That's 0.5 / 0.2 = 2.5. We're only interested in full steps, so we floor it.
We've then got two full steps; the step size is 0.2, so that's 0.2 * 2, and our output value is 0.4.
The floor node keeps the value at 0.4 until the next integer is reached. When we get to 3 full steps, the output will suddenly jump to 0.6, and so on.

The input value is divided by your step size, floored and then multiplied by your step size.
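In Python, the same divide/floor/multiply chain looks like this (the function name is mine, not part of the material):

```python
import math

def stepped(value, step):
    """Quantize a 0..1 gauge value down to whole chunks: floor(v / s) * s."""
    return math.floor(value / step) * step

# Half-full gauge split into 5 chunks of 0.2: two full chunks -> 0.4.
# The output stays at 0.4 until value / step crosses the next integer.
```

The comparisons in a shader are subject to the same floating-point caveats as anywhere else, so values sitting exactly on a step boundary can land on either side.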
Credit for this little setup goes to Gavin Costello, currently Lead Programmer at Ninja Theory and full time graphics programming genius.

Vector graphics textures in After Effects.

I've got a thing for After Effects in general, and I find it excellent for motion graphics in particular. It's very flexible; the textures are easily scaled and iterated. Illustrator could do the same, but with After Effects you can also animate everything for previs and/or flipbook generation. (Which is exactly how I worked during Enslaved. My previs and textures were the same assets.)

Here's the way I made the previous gauge for instance, using shape layers:

2 ellipses, 6 rectangles, a subtract merge to cut out the inner circle and the rectangles, another rectangle and an intersect merge to extract the bottom left quarter. And finally a white fill of course.

I like to use expressions even with these sort of very simple shapes. It is a small time saver as you make the texture and might be a massive one along the project as you iterate on your texture.

Right there for instance, I link the anchor point to my rectangle size so the anchor sits at the end of the rectangle rather than in the default centre. Sure, I could have done it by hand, but I find it better when automated. I was a bit lazy in this case so I stopped there, but if I had created this texture for production, I would have also:
  • linked the size of every rectangle to the first rectangle (or even neater, to a slider control)
  • linked the rotation of every rectangle to a slider control (and multiplied it by 2, 3, 4 etc. for each next rectangle)
  • and maybe controlled the radius of each ellipse from a slider parameter too, just so as to modify everything from one single place and not have to open up the content properties

by mkalt0235 ( at April 09, 2014 05:53 PM

The manager’s responsibility to review code

I believe any technical leader has a responsibility to review all the code that goes into a codebase.* I am certainly not the only person to feel this way (Joe Duffy at MSFT and Aras Pranckevičius at Unity have said the same).

Furthermore, I don’t believe the responsibility to review code ends at a certain level. Everyone from an Engineering Manager to the CTO should be reviewing code as well. In my experience, I’m able to do thorough reviews for 5 to 8 people, and more cursory reviews for another 15 to 20.

Code review above a team lead level** is not about micro-management. A manager should never, ever be saying “we do not use lambdas here, use a named function instead.” Instead, try “do you think this would be more readable with a named function instead of a lambda?” Or maybe you say nothing, and observe what happens, and inspect what other code looks like. If lambdas are common in the codebase, either your opinions need more information, or you have done a poor job educating.

Code reviews by managers should be about getting enough information to manage and lead effectively.*** It keeps you up to speed about what is going on, not just in terms of tasks, but in terms of culture. Are people writing spaghetti? Are bad choices challenged? Are hacks put in? Is code documented? Are standard libraries being used? Are the other technical leads reviewing and leading their teams effectively? You can learn an incredible amount through code review, and you need this information to do your job of leadership and management effectively.

*: I believe all programming managers and leaders must be able to program. I find it shameful this needs to be said.

**: It should go without saying, but team leads should be reviewing every checkin on that team.

***: Code reviews are the *genchi genbutsu*, or “go and see,” part of Lean management.

by Rob Galanakis at April 09, 2014 03:16 PM

MaxPlus and PyCharm – Update!

This is an update on my YCDIVFX MaxPlus Packages which removes a dependency on ExternalMaxscriptIDE. Now, thanks to the brilliant work of Christoph Bülter and his SublimeMax package, we can execute our python scripts in 3ds Max directly from PyCharm without too much hassle; it's way easier to set up. Don't be scared by all the bullet points, I've tried to explain it almost click by click.

I basically deleted code from SublimeMax to make it fit the simple requirements of and PyCharm.

This should make the setup for PyCharm and 3dsmax much easier, here’s an update on the step-by-step:

  1. Install PyCharm
  2. Download the YCDIVFX MaxPlus Packages and unzip it to a folder of your choice (ex. C:\YCDIVFX\MaxPlus)
  3. Open PyCharm and open the directory where you unzipped the previous file.
  4. Go to File -> Settings or press Alt+F7 and search for Project Interpreter
  5. Your default project interpreter should be Python 2.7.3 bundled with 3ds Max (C:/Program Files/Autodesk/3ds Max 2014/python/python.exe); if not, don't worry, go to the next step.
  6. Click Configure interpreters and if you don’t have an interpreter set, click the + button and add your Python interpreter (C:/Program Files/Autodesk/3ds Max 2014/python/python.exe)
  7. With the project interpreter selected on the top list view, click on the Paths tab.
  8. Click the + button and add your default 3dsmax 2014 root folder (C:/Program Files/Autodesk/3ds Max 2014) if it doesn’t show up there already.
  9. Go to Project Structure and on the right pane, select the “packages” folder and press “Sources” button.
  10. Press OK

Now let’s setup one configuration and then you can duplicate this one to run other scripts:

  1. Press Run -> Edit Configurations
  2. Fill in with the following values – Script:  C:\YCDIVFX\MaxPlus\MyExamples\ / Script parameters:  -f C:\YCDIVFX\MaxPlus\
  3. Press OK

Now open 3dsmax 2014.

In PyCharm, just select “run main” and press the Run button (the little green play button).

In the 3ds Max Listener you should see this:

hello world

Congratulations, you’ve made it!

You can also run it from the command-line, be sure to check the README file.

Recommended optional installs (distribute, pip, nose and coverage):

  1. In the PyCharm settings, on the Python Interpreters page, you should see a warning to install “distribute”; click on it, and then another for “pip”; install that too.
  2. Now click the Install button and search for “nose”, install “nose” package (Description: nose extends unittest to make testing easier/Author:Jason Pellerin)
  3. Now search and install “coverage” package.

If you are interested in remote debugging, check my other blog post here: Pycharm, 3dsmax, remote debugging love!

by Artur Leao at April 09, 2014 12:45 PM

April 08, 2014

Mighty Morphin Module Manager Made Moreso

I've added a port of the Maya Module Manager I posted a while back to the examples included with the mGui Maya GUI library. This was an exercise to see how much neater and more concise I could make it using the library.

Here's some interesting stats:

The original version was 237 lines of code, not counting the header comments. The mGui version was 178 without the header, so about 25% shorter overall. About 80 lines of purely behind-the-scenes code didn't change between versions, so the real savings is more like 45%. Plus, the original sample included some functions for formLayout wrangling, so the real savings might be a little higher for more old-fashioned code.

Like I said last time, the mGui package is still evolving so it's still very much in a "use at your own risk" state right now... That said, I'd love to get comments, feedback and suggestions.

by Steve Theodore ( at April 08, 2014 10:16 PM

maya he3d

Since I don't get the time to update this blog much anymore, I thought I'd just drop in quickly to recommend a google groups mailing list that I have been contributing to for a while now. If you feel like some question/answer type discussions focused specifically on problem solving in maya this might be the [...]

by david at April 08, 2014 02:18 PM

April 07, 2014

A Better Maya to Unity Workflow

In the past, I’ve implemented tools and fixes to Unity’s import pipeline of native Maya files by manually modifying the FBXMayaExport.mel script. Unfortunately, these types of modifications are difficult to maintain. For one, they need to be redone every time the Unity installation is upgraded. Second, the files are buried in obscure places that require administrative privileges to modify them. It would be ideal if changes could be made in one place, one time, in a way that supported more modular design. So, I finally decided to take the time to sit down and tackle this problem.

Unity’s Export Process

Unity uses the FBX SDK to import complex models. As such, one popular way to get models into projects is to simply export FBX files from some DCC application (e.g., Maya, 3D Studio Max). Although this approach has some advantages, the manual export step means that two versions of the file are kept in separate places. Files can become out of sync, and (depending on your tools) this workflow can introduce the possibility of human error in the configuration of export settings or location of files. Another option is to drop files in the DCC application’s native format (e.g., .ma, .mb, .max) into the project. Depending on the DCC application, the Unity editor spawns a headless child process that executes some script to automatically convert the native file into a temporary FBX that is imported into the project.*

In the case of Maya, there are some template scripts in Unity’s install location (e.g., FBXMayaMain.mel, FBXMayaExport.mel). When your Unity project imports a Maya file, it duplicates these scripts into your project’s temp folder, modifies some file paths inside of them, and then launches Maya as a child process, passing a command line argument that sources the duplicated FBXMayaMain.mel when Maya launches. This script ensures the FBX plug-ins are loaded, and then sources the duplicated FBXMayaExport.mel script, which handles the FBX export.

In the past, this latter approach was less attractive to larger teams. For one, each file had to be reimported by every user, unless the team were using the Unity Asset Server (which cached the imported data in Unity’s native asset format). If you were using something like Perforce, however, each user needed to have Maya installed in order to import the models, and the process could be slow for very large projects with hundreds or more Maya files. The introduction of Unity’s Cache Server, however, makes the direct workflow potentially more appealing. The Cache Server is compatible with any VCS, and can cache the imported data when they are committed, just like the Asset Server workflow.

A Solution

Although in past work, I have often incorporated a combination of direct modifications to the FBXMayaExport.mel template, it can be a hassle to maintain, for the reasons I stated at the start of this post. As such, I had a few design goals for this exercise:

  1. I did not want to have to make any modifications to the template files in the Unity install
  2. I wanted an automated process that could be handled from a studio-wide script
  3. I wanted any users to be able to easily register their own modifications in a modular way

With these considerations in mind, I basically wanted a script that, when imported, would know whether it was being launched in a child process of the Unity editor, and if so, would register callbacks using the messaging system built into Maya’s API.

Determining whether the Maya instance is a child of the Unity editor is as simple as reading the command-line arguments. Specifically, we look for the -script flag and see what the argument’s value is:

import os.path
import sys

try:
    startup_script = sys.argv[sys.argv.index('-script') + 1]
except (ValueError, IndexError):
    startup_script = None

if startup_script is not None:
    directory, script = os.path.split(startup_script)
    is_maya_child_of_unity = (
        os.path.basename(directory) == 'Temp' and
        script == 'FBXMayaMain.mel'
    )
else:
    is_maya_child_of_unity = False

The trick at this point is registering callbacks. The FBXMayaExport.mel script uses the FBXExport command. Why wouldn’t it? Autodesk explicitly says you should, as opposed to using the file command. Unfortunately, the FBXExport command does not presently broadcast messages that you can hook into with the MSceneMessage class.

Knowing that Unity makes copies of its mel scripts in a writable location, however, opens up a really simple possibility. Namely, if we detect that the Maya instance is owned by Unity using the command-line arguments, we can also find the duplicated FBXMayaExport.mel script in the same location as FBXMayaMain.mel. Moreover, because it is writable, we can make a simple modification to use the file command.

# continued from above
import re

if is_maya_child_of_unity:
    path_to_fbx_export_script = os.path.join(
        directory, 'FBXMayaExport.mel'
    )
    with open(path_to_fbx_export_script) as f:
        contents =
    # swap the FBXExport command for the file command, which does
    # broadcast MSceneMessage messages
    contents = re.sub(
        'FBXExport -f ',
        'file -force -type "FBX export" -exportAll ',
        contents
    )
    with open(path_to_fbx_export_script, 'w+') as f:
        f.write(contents)

Now, after this code has executed, if we know that the Maya instance is a child of Unity, we can use MSceneMessage.addCallback() with the MSceneMessage.kBeforeExport message type.
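A minimal sketch of that registration might look like the following. The callback body here is just a placeholder, and the import is guarded so the snippet degrades gracefully outside of Maya:

```python
# Register a pre-export callback once we know we are a child of Unity.
try:
    import maya.OpenMaya as om

    def before_export(client_data):
        # Placeholder: adjust FBX export settings here, before
        # FBXMayaExport.mel performs the actual export.
        pass

    callback_id = om.MSceneMessage.addCallback(
        om.MSceneMessage.kBeforeExport, before_export
    )
except ImportError:
    # Not running inside Maya; nothing to register.
    callback_id = None
```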

If you’re interested in easily dropping a solution like this into your pipeline, I have put up a simple Python package on GitHub. In order to use it, all you have to do is import the unityexport package in your studio-wide script. You can then read its unity_project_version attribute to determine if you are running as a child of the Unity editor (and what version the project is) in order to register callbacks with the MSceneMessage class. As an example, the package also registers a callback to automatically adjust the FBX export settings for blend shapes, as per my previous post.

*I don’t work on Windows, but in the couple of tests I have done, it seems like Unity 4.3.x currently hangs on exit if it has an active Maya child process. There have been some assorted vague reports of similar behavior for Max.

by Adam at April 07, 2014 10:33 PM

April 06, 2014

Why Agile became meaningless

Uncle Bob recently wrote a post about The True Corruption of Agile. I think it will be a defining post for me because, as I’ll explain in my next post, I’m ready to give up on Agile. It has become meaningless due to the corruption Uncle Bob describes, and trying to reclaim Agile isn’t possible.

Imagine the Lean movement without Toyota. Toyota is the guiding force in Lean because Lean grew out of The Toyota Way.* When Lean goes awry, Toyota (the company and its principles, practices, and culture) is there to set things straight.

Toyota can guide Lean because the company has been successful for decades and Toyota attributes its success to the principles and practices known as The Toyota Way. But for many years, Toyota’s success was explained away by anything except the Toyota principles. Finally, all that was left was The Toyota Way. Toyota is the Lean reference implementation.

Agile has no such entity. Instead, we have hundreds of “Agile” shops who attribute success to some (non-)Agile practices. Then, once they’ve evangelized their (non-)Agile stories, reality catches up with them and the success disappears.** But no one hears anything about that failure. The corruption and perversion here is inevitable.

Without a company like Toyota giving birth to Agile and showing others how to do it right, Agile was destined to become what it is now: meaningless and corrupt.

*: The Toyota Way started out as the Toyota Production System. They aren’t technically the same but for the purposes of this post there’s no reason to distinguish.

**: For example, maybe InnoTech decides to use Scrum on a global scale to ship an ambitious product, and talks a lot about how they pulled this off and what benefits it yielded. Years later, velocity is in the toilet because of the endless mountains of technical debt created, and maybe the company has had layoffs. The Scrum transformation will be in a book or on a stage. The layoffs and technical debt will not.

by Rob Galanakis at April 06, 2014 03:54 PM

April 05, 2014

Earth calling maya.standalone!

Somebody was asking about how to control a maya.standalone instance remotely.  In ordinary Maya you could use the commandPort, but the commandPort doesn't exist when running under standalone - apparently it's part of the GUI layer, which is not present in batch mode.

So, I whipped up an uber-simple JSON-RPC-like server to run in a maya standalone and accept remote commands. In response to some queries I've polished it up and put it onto GitHub.

It's an ultra-simple setup. Running the module as a script from mayapy.exe starts a server:
    mayapy.exe   path/to/

To connect to it from another environment, you import the module, format the command you want to send, and shoot it across to the server. Commands return a JSON-encoded dictionary. When you make a successful command, the return object will include a field called 'result' containing a JSON-encoded version of the results:
cmd = CMD('', type='transform')
print send_command(cmd)
>>> {success:True, result:[u'persp', u'top', u'side', u'front']}

For failed queries, the result includes the exception and a traceback string:
cmd = CMD('cmds.fred') # nonexistent command
print send_command(cmd)
>>> {"exception": "",
"traceback": "Traceback (most recent call last)... #SNIP#",
"success": false,
"args": "[]",
"kwargs": "{}",
"cmd_name": "cmds.fred"}
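The server side of this exchange boils down to decoding a JSON payload, dispatching to a callable, and encoding either the result or the failure details. Here's a rough, self-contained sketch of that dispatch logic (not the actual module - note the stand-in registry in place of maya.cmds), producing responses shaped like the examples above:

```python
import json
import traceback

def handle_command(payload, registry):
    """Decode a JSON command, run it, and return a response dict."""
    msg = json.loads(payload)
    name = msg['cmd_name']
    args, kwargs = msg.get('args', []), msg.get('kwargs', {})
    try:
        result = registry[name](*args, **kwargs)
        return {'success': True, 'result': result}
    except Exception as exc:
        # failed queries get the exception, a traceback string, and
        # an echo of what was asked for
        return {
            'success': False,
            'exception': repr(exc),
            'traceback': traceback.format_exc(),
            'args': json.dumps(args),
            'kwargs': json.dumps(kwargs),
            'cmd_name': name,
        }

# A stand-in registry; the real server dispatches to maya.cmds instead.
registry = {'ls': lambda **kw: ['persp', 'top', 'side', 'front']}

ok = handle_command(
    json.dumps({'cmd_name': 'ls', 'kwargs': {'type': 'transform'}}), registry)
bad = handle_command(json.dumps({'cmd_name': 'fred'}), registry)
```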

It's a single file for easy drop-in. Please, please read the notes - the module includes no effort at authentication or security, so it exposes any machine running it to anyone who knows it's there. Don't let a machine running this be visible to the internet!

by Steve Theodore ( at April 05, 2014 06:18 PM

Pipi Object Model pt. 3

In part 2, we further defined our pipeline, its intent and what is required to successfully implement it. Let’s have a look at how tracking fits in with all of this.


In electronics, tracking refers to

The maintenance of a constant difference in frequency between two or more connected circuits or components. – Google “Definition of tracking”

But for our intents and purposes, we may refer to it as the way in which data is inferred by other data.

Let’s take an example.

Bob outputs an obj to John, who turns it into an mb and sends it forward.

To the recipient of John’s output, there is only an mb – the obj is nowhere in sight, yet the obj had a significant impact on the creation of the mb.

Here, tracking means to maintain a link between the obj and the mb so that the recipient of John’s output may later refer back to it.

Why Tracking?

Yes, why bother. The obj is done, mb is what the cool kids are all talking about these days. However, consider this.

You are looking at the output of Mary, the compositor. Mary has produced a sequence of images for review and in the review there is you and there is Mary.

“I want the plane to come in from the left” – you say.

It is not Mary’s responsibility to alter either the animation or the camera, so the request must be passed up-stream to whoever is responsible for and capable of processing this change.

Responding to Change

In part 2, we touched briefly on how to deal with change. We said that in order for change to enter into a graph, a node must be capable of outputting partially finished information.

The term Lean Manufacturing was coined by John Krafcik in 1988, and was later translated into something called Lean Software Development.

In it, there are two principles applicable to our situation.

  • Decide as late as possible
  • Deliver as fast as possible

To us, this means that whoever is responsible for making that plane come in from the left has not yet decided on a side from which the plane is to come in. The data sent was partial and is still being computed.

Thus the decision is made as late as possible, ideally after Mary has had a chance to show her work to you so that you may comment and suggest change beyond her responsibilities.

Decide Late


To decide late implies that changes may alter data as it is being processed, and if there is any methodology in our industry that facilitates this, it is that of proceduralism.

Proceduralism involves working with a description of steps, rather than actually performing them. It means having the output of each step be generated rather than created, which may involve delegating such processes to an external medium, such as our computers.

Let’s take an example.

No doubt the first thing that runs through your mind at this point is that of Houdini and its capabilities in regards to procedural generation of data. (If it isn’t, I envy you. You’ve got some rather delightful experiences ahead of you).


In this example, Bob outputs a tetrahedron to John who rotates it 45° and sends it along to Mary who colours it red.

What all three of them have in common is how they have each described the steps necessary to achieve their processing. Houdini then is the one who actually performs the processing and creates the output.

If there is anything for us to absorb from this example, it is that Bob may alter his output after John and Mary have finished processing, without either John or Mary having to revisit any of their work.
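This idea is small enough to sketch in a few lines of Python (the Node class and the lambdas here are illustrative stand-ins, not anything from Houdini or Pipi): each node stores a description - a function over its inputs - rather than a baked result, so altering Bob’s node re-evaluates John’s and Mary’s for free.

```python
class Node:
    """A node holds a description of its step, not a computed result."""
    def __init__(self, func, *inputs):
        self.func = func
        self.inputs = inputs

    def evaluate(self):
        # Pull fresh values from upstream every time.
        return self.func(*(node.evaluate() for node in self.inputs))

bob = Node(lambda: {'shape': 'tetrahedron'})
john = Node(lambda mesh: dict(mesh, rotation=45), bob)
mary = Node(lambda mesh: dict(mesh, colour='red'), john)

mary.evaluate()  # {'shape': 'tetrahedron', 'rotation': 45, 'colour': 'red'}

# Bob alters his output; John and Mary revisit nothing.
bob.func = lambda: {'shape': 'cube'}
mary.evaluate()  # {'shape': 'cube', 'rotation': 45, 'colour': 'red'}
```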

This is a key factor in our design and governs the majority of choices made for the Pipi Object Model and in fact those of Pipi itself.

Output = Description

When data is described rather than created, we facilitate change.

But how can we apply these same practices to something as abstract and perhaps difficult to grasp as that of the result of another artist?

By now, we have established that to a pipeline in terms of a graph there are two things for us to work towards.

  • Partial outputs
  • Contracts

When output is partial, we can transmit it quickly, and when the transmitter and recipient have both agreed on what data they will each receive – i.e. have signed a contract – the transmission can happen repeatedly without either party having to look for differences across inputs.
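One hedged way to picture such a contract in code: both parties agree up front on the fields an output must carry, and each transmission is validated against that agreement rather than diffed by hand. The field names below are invented for illustration:

```python
# The agreed-upon contract: every transmission must carry these fields.
CONTRACT = frozenset({'shape', 'rotation', 'colour'})

def fulfils_contract(output, contract=CONTRACT):
    """Partial values are fine, as long as the agreed fields are present."""
    return contract <= set(output)

fulfils_contract({'shape': 'cube', 'rotation': 45, 'colour': 'red'})  # True
fulfils_contract({'shape': 'cube'})  # False: breaks the agreement
```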

Let’s transform the illustration from part 1 into something a little more suited to our conversation.

Pipeline Conversion

There. Now we can clearly see that each node is connected via a link and that in some cases, one node has multiple inputs. The plot thickens.

Limits of Proceduralism

The notion of describing a set of steps for the computer to process is great, but is it applicable to everything?

Can we alter the output of Bob without involving John or Mary? Yes. But can a Storyboard department alter its output, and have that change trickle down-stream without influencing any other node?

The Wolfram Computational Engine

When you ask Siri a question, your question is translated into text. The text is then sent to one or more processes, one of which may be the Wolfram Computational Engine [1], [2].

It may be possible to one day have a change in storyboarding trickle down-stream, and witness its repercussions interactively – just as we are with the Tetrahedron outputted by Bob, rotated by John and coloured by Mary.

Until then, let’s locate our limits so that we may work within them.

Within Limitations

Reasonable Bounds

Here is my claim.

Each node within this red rectangle may be condensed into a set of descriptions – just like those illustrated in Houdini above.

You may not believe me, and I don’t blame you. What goes in and what comes out of each node within this illustration varies greatly between one studio and the next.

In many cases, there are no contracts. In others, there are no partial outputs.

Why are development studios different?

You may find it odd that even though talent in our industry never stays in one place for very long, best practices and the general approach to any given task are often not even partially the same across development studios.

This may have something to do with the speed at which technology shifts today. The process employed to produce the latest blockbuster is legacy as soon as the movie hits the theatres.

At this rate, how could we ever expect consistency?

Within Reasonable Bounds

It may be possible one day for full consistency to be achieved between productions; but just as render times have hovered around 8 hours per frame for the past 12 years [1] despite the colossal increase in computing power, so too may requirements be added to any achievable consistency.

Until then, let’s locate our bounds so that we may work within them.

Stay tuned for part 4, thanks and see you in a bit.


by Marcus at April 05, 2014 03:34 PM

ZBrush- Goblin final sculpt and 3d print Prep

In between writing tools and prototyping a pretty sweet new project (shhh!) I've taken the goblin sculpt far enough that I'm willing to dump some dough on a 3d print. Here is the final, posed model, ready for hollowing out and sending off to the magical 3d printing machines.

The entire model, from start to finish, was created in Zbrush. It all started with a sphere, and the primary tools used were the Clay Buildup brush, clip, move and transpose toolset. I can't get enough of the clip brush. Seriously.

Posing anything in ZBrush is a bit of a pain in the ass, but the Topological masking feature of the transpose tools makes it a lot more intuitive (even if the transpose tools themselves are pretty ambiguous in function). But once you get how it's meant to work, it's suddenly one of the most enjoyable 3d tools to use.

The basis for the clothing was created using the panel loops tool, and the belts and buckles were added using a curve brush (which is an absolute bastard to use). Finally the model was combined into a single shell by gradually combining and dynameshing the individual elements.

I'm looking forward to seeing how it turns out as an actual model!


The model is now available on Shapeways!

by Peter Hanshaw ( at April 05, 2014 08:04 AM

Visuals in some great games

I was thinking about the visuals of the best games I’ve recently played. Now, I’m not a PC/console gamer, and I am somewhat biased towards playing Unity-made games. So almost all these examples will be iPad & Unity games; however, even taking my bias into account, I think they are amazing games.

So here’s some list (Unity games):

Monument Valley by ustwo.

DEVICE 6 by Simogo.

Year Walk by Simogo (also for PC).

Gone Home by The Fullbright Company.

Kentucky Route Zero by Cardboard Computer.

The Room by Fireproof Games.

And just to make it slightly less biased, some non-Unity games:

Papers, Please by Lucas Pope.

The Stanley Parable by Galactic Cafe.

Now for the strange part. At work I’m focusing on physically based shading and related things, but take a look at the games above. Five out of eight are not “realistic looking” games at all! Lights, shadows, BRDFs, energy conservation and linear color spaces don’t apply at all to a game like DEVICE 6 or Papers, Please.

But that’s okay. I’m happy that Unity is flexible enough to allow these games, and we’ll certainly keep it that way. I was looking at our game reel from GDC 2014 recently, and my reaction was “whoa, they all look different!”. Which is really, really good.

by at April 05, 2014 07:34 AM

April 04, 2014

Classic (?) CG: Bingo the Clown

From the Classic CG files comes Bingo the Clown. This was originally created to showcase the capabilities of Maya 1.0, back in 1998. It creeped me out then and it creeps me out now.

I've been told, I don't know how correctly, that Chris Landreth - the animator who made this film - was the driving force behind Maya's decision to use Euler angles for everything. I hope that's not true. Having this video and those goddamn Euler angles on your conscience is a lot to answer for.

by Steve Theodore ( at April 04, 2014 05:00 PM

Global Glob

I am cleaning out my drafts and found this two year old post titled “Globals Glob” with no content. The story is worth telling so here we go.

There was a class the EVE client used to control how missiles behaved. We needed to start using it in our tools for authoring missiles with their effects and audio. The class and module were definitely only designed (I use the term loosely) to run with a full client, and not inside of our tools, which are vanilla Python with a handful of modules available.

My solution was the GlobalsGlob base class, which was just a bag of every singleton or piece of data used by the client that was unavailable to our tools. So instead of:


it’d be:


The ClientGlobalsGlob called the service, but FakeGlobalsGlob did nothing. The GlobalsGlob allowed us to use the missile code without having to rewrite it. A rewrite was out of the question, as it had just been rewritten, except using the same code. (sigh)

Unsurprisingly, GlobalsGlob was super-fragile. So we added a test to make sure the interface between the client and fake globs were the same, using the inspect module. This helped, but of course things kept breaking.
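From that description, the shape of the pattern (and the inspect-based interface test) might have looked something like this; all member names here are hypothetical stand-ins, not EVE's actual code:

```python
import inspect

class GlobalsGlob(object):
    """Bag of every singleton/service the missile code expects."""
    def play_audio(self, sound_id):  # hypothetical member
        raise NotImplementedError

class ClientGlobalsGlob(GlobalsGlob):
    def play_audio(self, sound_id):
        pass  # would call the real client audio service here

class FakeGlobalsGlob(GlobalsGlob):
    def play_audio(self, sound_id):
        pass  # the tools have no audio service, so: nothing

def public_interface(cls):
    """The kind of inspect-based check used to keep the globs in sync."""
    return {name for name, member in inspect.getmembers(cls)
            if callable(member) and not name.startswith('_')}

# the test that made GlobalsGlob slightly less fragile
assert public_interface(ClientGlobalsGlob) == public_interface(FakeGlobalsGlob)
```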

This all continued until the inevitable and total breakdown of the code. Trying to use the missile code in tools was abandoned (I think it was, I have no idea what state it’s in). This was okay though, as we weren’t using the missile tools much after those few months. GlobalsGlob served its purpose, but I will never be able to decide if it was a success or failure.

by Rob Galanakis at April 04, 2014 03:45 PM