Planet Tech Art
Last update: April 14, 2014 02:59 AM
April 13, 2014

Warning: Garish graphics ahead!

If you're tired of boring old light-grey-on-dark-grey text, you'll be pleased to know that the Maya text widget actually supports a surprising amount of HTML markup. Which means that instead of this:

You can set people's eyeballs on fire like this:

This is a single cmds.text object with its label property set to an HTML string.

It turns out that cmds.text is actually a fairly full-featured HTML4 renderer! That means you can create pretty complex layouts using many -- though not all -- of the same tools you'd use for laying out a web page. You can style your text with different fonts, sizes, colors, alignments and so on - you can even use CSS style sheets for consistency and flexibility.

More than that, you can also include images, tables and layout divisions, which are great for formatting complex information. No more printing out reports into dull old textScrollFields!

Best of all, it's trivial to do.

All you need to do is set the label property of a cmds.text object to a string of valid HTML. By default your object inherits the standard Maya background and foreground colors, but you can override these in your HTML. You can even compose your text in an HTML editor like Dreamweaver or Expression Blend; that's how I did the example in the graphic above.
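As a minimal sketch (the window and report contents here are made up for illustration, and the cmds calls obviously only run inside Maya), the whole trick is just passing markup as the label:

```python
# Minimal sketch: an HTML string handed straight to cmds.text.
# The markup and names below are illustrative, not from a real tool.
label_html = (
    '<h2 style="color:#ffaa00;">Export Report</h2>'
    '<p>All <b>12</b> assets exported '
    '<span style="color:#44cc44;">OK</span></p>'
)

def show_report(html):
    """Show the HTML string in a plain cmds.text control (Maya only)."""
    import maya.cmds as cmds  # importable only inside Maya
    win = cmds.window(title='HTML Report')
    cmds.columnLayout(adjustableColumn=True)
    cmds.text(label=html)
    cmds.showWindow(win)
```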

There are some limitations you need to be aware of.  The big ones seem to be:

  • HTML/CSS controls for positioning text or divs don't seem to work. Align tags inside a span element do work, but floats and absolute positioning apparently do not.
  • The renderer won't fetch images or other resources from a URL or relative paths.
  • No JavaScript - so no blinking text or animated GIFs. I'm not sure that's a loss.
  • No inputs such as buttons, checkboxes or text fields.
  • Fonts seem to render smaller inside the Maya text control than they do in a conventional browser. You can't specify text size in ems or percentages; pixel sizes seem to work fine, however.
  • It looks like text is the only control that supports this styling right now (tested in Maya 2014).
I'd assume that these limitations reflect the behavior of the underlying QWidgets inside of Maya - if anybody has the real dope to supplement my guesswork, please chime in.

In the meantime, here's to the inevitable avalanche of eye-ripping garishness that is sure to result from this revelation. As for me, I'm off to go convert my whole toolset to Comic Sans!

by Steve Theodore ( at April 13, 2014 12:59 AM

April 12, 2014

The Last of Us: Remastered (PS4) !

My last project, The Last of Us, is getting a Remastered release on PS4 with a special dose of HD All-The-Things!

Check out the info here: Playstation Blog

by Nathan at April 12, 2014 09:21 PM

The “Year of Code” Director is Your Boss

There was some hubbub a few months ago when it was revealed the Executive Director of the UK’s Year of Code initiative can’t code [link]. Not that technical (domain) competency is a sufficient condition for management and leadership, but I’d certainly call it a necessary condition. (I’ll use the word ‘technical’ below to refer to any sort of specialized domain, not just programming.)

Apparently a number of people don’t agree with the idea that competency in a domain is a requirement to manage that domain.* I find this idea infuriating, and it can only end poorly.

Perhaps you have a manager who knows little about programming or design or whatever your specialty is, and you consider this person to be the best boss of all time. Great! I’ll call this person Your Boss for convenience. Here’s the problem:

At some point, Your Boss needs to make some contentious decisions. Maybe over your work, maybe over something you’re not directly involved with (I bet Your Boss was hated by a lot of people, too!). Your Boss has literally no ability to help resolve a technical decision. “Wait!” I hear you say. “My Boss is enlightened enough to know that the people closer to the problem should be making the decision!”

But who are those people closer to the problem? Who put them there? Oh, that’s right: Your Boss. But your boss has little technical knowledge. How is Your Boss supposed to decide who makes the more technical decisions? Without basic technical ability, Your Boss doesn’t even know what questions to ask. Your Boss can’t even learn; she doesn’t have the technical prerequisites. Instead of being able to provide leadership, Your Boss is left scratching her head. This is not leadership, and this is not management. This is a cancer and an organization that is unable to grow and learn.

It’s no surprise this topic is divisive. When Your Boss places a lot of trust in you, you are autonomous and think of Your Boss as the best boss of all time. But when someone runs up against you and Your Boss, they have no real recourse, because Your Boss trusts you and has no ability to further inspect the situation.

Certainly, superior ability or experience is not a requirement for management over a domain. But I thoroughly believe that not just familiarity, but some actual professional practice, with a domain is a requirement. I hope that if you are someone who believes in the myth of the competent non-technical manager, you’ll rethink your experience and view Your Boss in a more complete light.

* Clearly, at some point, you cannot be experienced in all the domains you manage, and need to trust people. Unfortunately we do this far too soon, and accept a development manager who has not developed, or an engineering manager who has not done much programming. In the case of the Year of Code Director, I think the issue is a non-technical background (in either programming or teaching) and a general lack of experience. If she had proven a wunderkind in her given field (which is, btw, basically PR/marketing/communications), maybe she should be given the benefit of the doubt. There are many examples where talented administrators have moved into new areas and been quite successful. But her appointment, along with most of the rest of the board, is pretty clear cronyism (and when you throw out technical merit and domain experience, you’re left pretty much with cronyism).

by Rob Galanakis at April 12, 2014 07:59 PM

Adobe Photoshop CC - Not exactly an early adopter...

It's a words post! Where is the code? That comes later... for now, words. I'm now an official owner (or at least subscriber) of/to Adobe CC... yeah, I got through my whining stage and now I'm learning to love the cloud, or at least learning how to accept the inevitable.
Long live the cloud! I guess.

But from the Photoshop tool development perspective I think I actually find it a little more exciting than I am letting on. You mean *everyone* will be on a standard version? No kidding! What an awesome development! No more hacking in (sometimes) seeming random version numbers! The ability to assume everything you want to support is supported. Great! Don't have the right version? Update your Photoshop buddy!

When taken from that perspective I really think that Adobe's decision (aside from the whole aspect of never actually 'owning' the software) is a pretty great one. Maintaining pipelines for multiple versions of Photoshop ceases to be a major problem*, and tool development and distribution becomes, if not simpler, at least a little more direct in execution. 

I'm also taking my first steps into the Photoshop SDK, which is an incredibly powerful and daunting piece of architecture. Not only does it require C++ for creating plug-ins, but it also seems to be halfway between a development framework and a history lesson on ye early days of Photoshop. And the documentation? Reading through it, there seems to be a big Photoshop-SDK-tutorial-shaped hole where the SDK tutorials ought to be.

But, if it were easy, it wouldn't be as fun! Now to work through that hello world tutorial...

* Currently supporting four different versions at work and trying very hard not to.  

by Peter Hanshaw ( at April 12, 2014 03:36 PM

April 09, 2014

Perforce: Setting up Ignore lists

Sometimes your local workspace will have files that you don't want to check in. Examples include Maya .swatches files, the Library folder if you are using Unity, and anything called tmp.

You can set up P4 to ignore files and folders using the P4IGNORE environment variable, and a text file in your perforce root called .p4ignore.txt

Here is how you set it up:

  • Create a file in the root directory of your workspace (e.g. C:/Projects/Perforce/) called .p4ignore.txt
  • Inside this file, define which files and folders to ignore. Comments can be added using the # sign. Examples of things to ignore:
  • The .swatches files generated by Maya.
  • The Library folder generated by Unity.
  • Any other folder that contains source work that should remain local to people's workstations (texture bakes, render data, etc.).
  • Once this file is set up, open the command console (Windows key + R, then type cmd) and type in: p4 set P4IGNORE=.p4ignore.txt
  • Now Perforce should ignore the file types and folders defined in the text document.
  • The next time an attempt is made to add one of these files, a warning will show up and the files will not be added to your changelist.
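For example, a .p4ignore.txt along these lines (the entries below are illustrative; adjust them to your own project):

```
# Lines starting with # are comments.

# Maya swatch files
*.swatches

# Unity's generated Library folder
Library/

# Anything called tmp, plus local-only source work
tmp/
```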


by Peter Hanshaw ( at April 09, 2014 06:37 PM

Fixed rotator

What's on today:
  • Fixed rotator.
  • Fixed rotator with steps.
  • Quick look at vector graphics textures in After Effects.

Fixed rotator.

How to rotate a texture by a certain fixed angle?

Just input a constant value in the time pin of your rotator. The constant value is like saying ‘this is where the rotator would get you in that much time’. (I mean, not technically since the unit is not seconds but it's sort of a way of seeing it.)

What to expect from that value? First, set your rotator speed to 1. Then you have to know that Unreal uses radians as its rotation unit.
180 degrees = π (pi, the maths symbol equal to 3.14159 and so on).

Therefore, 90 degrees is going to be π / 2.
Good thing to know: you can do maths in input fields. In your constant input, you can simply type in 3.14159/2.
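The conversion is the usual degrees-to-radians one; in Python terms, just to check the numbers:

```python
import math

# 90 degrees converted to radians: the value to type into the rotator's time pin
quarter_turn = math.radians(90)

assert abs(quarter_turn - math.pi / 2) < 1e-9  # pi / 2, i.e. 3.14159... / 2
```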


You could use a fixed rotator to represent a circular gauge for instance. Most likely, the gameplay code is going to provide you with some normalized parameter. (Frankly that’s the best option. Sure modifying the value to fill your needs will add a few instructions but it gives you the flexibility to easily do anything you want with it, rather than having to go bug a coder so he changes the value he's hard coded for you.)

Modify the input to the range you are interested in (say 0 to 90 degrees), plug it into the rotator, and then into a mask that multiplies your gauge.

You’ll notice that I’ve just multiplied my normalised value; since my min is 0, I don’t need to use a lerp which would add more instructions for the same result.

Fixed rotator with steps.

We can go one step further.
In Enslaved, Trip has an EMP gauge which recharges over time and is split into several chunks which appear one at a time.

I used a similar setting, with a rotating mask that multiplied my texture, only this time the rotation value had to be contained until it reached the next step.

In the following example our gauge value is at 0.5, that's half filled. The gauge is split into 5 chunks, so each step is 0.2 .

We check: how many steps are there in our current value? That's 0.5 / 0.2 = 2.5. We're only interested in full steps, so we floor it.
We've then got two full steps; the step size is 0.2, so that's 0.2 * 2, and our output value is 0.4.
The floor node is going to keep the value contained at 0.4 until the next integer is reached. When we get to 3 full steps, the output will suddenly jump to 0.6, and so on.

The input value is divided by your step size, floored and then multiplied by your step size.
Credit for this little setup goes to Gavin Costello, currently Lead Programmer at Ninja Theory and full time graphics programming genius.
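The same divide/floor/multiply chain, sketched in Python to check the numbers:

```python
import math

def stepped(value, step_size):
    """Quantise a normalised gauge value to whole steps: divide, floor, rescale."""
    return math.floor(value / step_size) * step_size

# At 0.5 with steps of 0.2 we have 2 full steps, so the gauge shows 0.4;
# it stays there until the third step completes.
assert abs(stepped(0.5, 0.2) - 0.4) < 1e-9
assert abs(stepped(0.61, 0.2) - 0.6) < 1e-9
```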

Vector graphics textures in After Effects.

 I've got a thing for After Effects in general, and I find it excellent for motion graphics in particular. It's very flexible, and the textures are easily scaled and iterated. Illustrator could do the same, but with After Effects you can also animate everything for previs and/or flipbook generation. (Which is exactly how I worked during Enslaved. My previs and textures were the same assets.)

Here's the way I made the previous gauge for instance, using shape layers:

2 ellipses, 6 rectangles, a subtract merge to cut out the inner circle and the rectangles, another rectangle and an intersect merge to extract the bottom left quarter. And finally a white fill of course.

I like to use expressions even with these sorts of very simple shapes. It is a small time saver as you make the texture, and might be a massive one over the course of the project as you iterate on it.

Right there for instance, I link the anchor point to my rectangle size so the anchor sits at the end of the rectangle rather than in the default centre. Sure, I could have done it by hand, but I find it better when automated. I was a bit lazy in this case so I stopped there, but if I had created this texture for production, I would have also:
  • linked the size of every rectangle to the first rectangle (or even neater, to a slider control)
  • linked the rotation of every rectangle to a slider control (and multiplied it by 2, 3, 4 etc. for each next rectangle)
  • and maybe controlled the radius of each ellipse from a slider parameter too, just so as to modify everything from one single place and not have to open up the content properties

by mkalt0235 ( at April 09, 2014 05:53 PM

The manager’s responsibility to review code

I believe any technical leader has a responsibility to review all the code that goes into a codebase.* I am certainly not the only person to feel this way (Joe Duffy at MSFT and Aras Pranckevičius at Unity have said the same).

Furthermore, I don’t believe the responsibility to review code ends at a certain level. Everyone from an Engineering Manager to the CTO should be reviewing code as well. In my experience, I’m able to do thorough reviews for 5 to 8 people, and more cursory reviews for another 15 to 20.

Code review above a team lead level** is not about micro-management. A manager should never, ever be saying “we do not use lambdas here, use a named function instead.” Instead, try “do you think this would be more readable with a named function instead of a lambda?” Or maybe you say nothing, and observe what happens, and inspect what other code looks like. If lambdas are common in the codebase, either your opinions need more information, or you have done a poor job educating.

Code reviews by managers should be about getting enough information to manage and lead effectively.*** It keeps you up to speed about what is going on, not just in terms of tasks, but in terms of culture. Are people writing spaghetti? Are bad choices challenged? Are hacks put in? Is code documented? Are standard libraries being used? Are the other technical leads reviewing and leading their teams effectively? You can learn an incredible amount through code review, and you need this information to do your job of leadership and management effectively.

*: I believe all programming managers and leaders must be able to program. I find it shameful this needs to be said.

**: It should go without saying, but team leads should be reviewing every checkin on that team.

***: Code reviews are the *genchi genbutsu*, or "go and see", part of Lean management.

by Rob Galanakis at April 09, 2014 03:16 PM

MaxPlus and PyCharm – Update!

This is an update on my YCDIVFX MaxPlus Packages which removes the dependency on ExternalMaxscriptIDE. Now, thanks to the brilliant work of Christoph Bülter and his SublimeMax package, we can execute our Python scripts in 3ds Max directly from PyCharm without too much hassle, and the setup is way easier. Don’t be scared by all the bullet points; I’ve tried to explain it almost click by click.

I basically deleted code from SublimeMax to make it fit to the simple requirements of and PyCharm.

This should make the setup for PyCharm and 3ds Max much easier. Here’s an update on the step-by-step:

  1. Install PyCharm.
  2. Download the YCDIVFX MaxPlus Packages and unzip it to a folder of your choice (ex. C:\YCDIVFX\MaxPlus)
  3. Open PyCharm and open the directory where you unzipped the previous file.
  4. Go to File -> Settings or press Alt+F7 and search for Project Interpreter
  5. Your default project interpreter should be the Python 2.7.3 bundled with 3ds Max (C:/Program Files/Autodesk/3ds Max 2014/python/python.exe); if not, don’t worry, go to the next step.
  6. Click Configure interpreters and if you don’t have an interpreter set, click the + button and add your Python interpreter (C:/Program Files/Autodesk/3ds Max 2014/python/python.exe)
  7. With the project interpreter selected on the top list view, click on the Paths tab.
  8. Click the + button and add your default 3dsmax 2014 root folder (C:/Program Files/Autodesk/3ds Max 2014) if it doesn’t show up there already.
  9. Go to Project Structure and on the right pane, select the “packages” folder and press “Sources” button.
  10. Press OK

Now let’s setup one configuration and then you can duplicate this one to run other scripts:

  1. Press Run -> Edit Configurations
  2. Fill in with the following values – Script:  C:\YCDIVFX\MaxPlus\MyExamples\ / Script parameters:  -f C:\YCDIVFX\MaxPlus\
  3. Press OK

Now open 3dsmax 2014.

In PyCharm, just select “run main” and press the Run button (the little green play button).

In the 3ds Max Listener you should see this:

hello world

Congratulations, you’ve made it!

You can also run it from the command-line, be sure to check the README file.

Recommended optional installs (distribute, pip, nose and coverage):

  1. In PyCharm settings, on the Python Interpreters page, you should see a warning to install “distribute”; click on it, and then another for “pip” — install that too.
  2. Now click the Install button and search for “nose”; install the “nose” package (Description: nose extends unittest to make testing easier / Author: Jason Pellerin).
  3. Now search and install “coverage” package.

If you are interested in remote debugging, check my other blog post here: Pycharm, 3dsmax, remote debugging love!

by Artur Leao at April 09, 2014 12:45 PM

April 08, 2014

Mighty Morphin Module Manager Made Moreso

I've added a port of the Maya Module Manager I posted a while back to the examples included with the mGui Maya GUI library. This was an exercise to see how much neater and more concise I could make it using the library.

Here are some interesting stats:

The original version was 237 lines of code, not counting the header comments. The mGui version was 178 without the header, so about 25% shorter overall. About 80 of those lines are purely behind-the-scenes code which didn't change between versions, so the real savings is more like 45%. Plus, the original sample included some functions for formLayout wrangling, so the real savings might be a little higher compared to more old-fashioned code.

Like I said last time, the mGui package is still evolving so it's still very much in a "use at your own risk" state right now... That said, I'd love to get comments, feedback and suggestions.

by Steve Theodore ( at April 08, 2014 10:16 PM

maya he3d

Since I don't get the time to update this blog much anymore, I thought I'd just drop in quickly to recommend a google groups mailing list that I have been contributing to for a while now. If you feel like some question/answer type discussions focused specifically on problem solving in maya this might be the [...]

by david at April 08, 2014 02:18 PM

April 07, 2014

A Better Maya to Unity Workflow

In the past, I’ve implemented tools and fixes to Unity’s import pipeline of native Maya files by manually modifying the FBXMayaExport.mel script. Unfortunately, these types of modifications are difficult to maintain. For one, they need to be redone every time the Unity installation is upgraded. Second, the files are buried in obscure places that require administrative privileges to modify them. It would be ideal if changes could be made in one place, one time, in a way that supported more modular design. So, I finally decided to take the time to sit down and tackle this problem.

Unity’s Export Process

Unity uses the FBX SDK to import complex models. As such, one popular way to get models into projects is to simply export FBX files from some DCC application (e.g., Maya, 3D Studio Max). Although this approach has some advantages, the manual exportation step means that two versions of the file are kept in separate places. Files can become out of sync, and (depending on your tools) this workflow can introduce the possibility of human error in the configuration of export settings or location of files. Another option is to drop files in the DCC application’s native format (e.g., .ma, .mb, .max) into the project. Depending on the DCC application, the Unity editor spawns a headless child process that executes some script to automatically convert the native file into a temporary FBX that is imported into the project.*

In the case of Maya, there are some template scripts in Unity’s install location (e.g., FBXMayaMain.mel, FBXMayaExport.mel). When your Unity project imports a Maya file, it duplicates these scripts into your project’s temp folder, modifies some file paths inside of them, and then launches Maya as a child process, passing a command line argument that sources the duplicated FBXMayaMain.mel when Maya launches. This script ensures the FBX plug-ins are loaded, and then sources the duplicated FBXMayaExport.mel script, which handles the FBX export.

In the past, this latter approach was less attractive to larger teams. For one, each file had to be reimported by every user, unless the team were using the Unity Asset Server (which cached the imported data in Unity’s native asset format). If you were using something like Perforce, however, each user needed to have Maya installed in order to import the models, and the process could be slow for very large projects with hundreds or more Maya files. The introduction of Unity’s Cache Server, however, makes the direct workflow potentially more appealing. The Cache Server is compatible with any VCS, and can cache the imported data when they are committed, just like the Asset Server workflow.

A Solution

Although in past work I have often made direct modifications to the FBXMayaExport.mel template, that approach can be a hassle to maintain, for the reasons I stated at the start of this post. As such, I had a few design goals for this exercise:

  1. I did not want to have to make any modifications to the template files in the Unity install
  2. I wanted an automated process that could be handled from a studio-wide script
  3. I wanted any users to be able to easily register their own modifications in a modular way

With these considerations in mind, I basically wanted a script that, when imported, would know if it were being launched in a child process of the Unity editor, and if so would register callbacks using the messaging system built into Maya’s API.

Determining whether the Maya instance is a child of the Unity editor is as simple as reading the command-line arguments. Specifically, we look for the -script flag and see what the argument’s value is:

import os.path
import sys

try:
    startup_script = sys.argv[sys.argv.index('-script') + 1]
except Exception:
    startup_script = None

if startup_script is not None:
    directory, script = os.path.split(startup_script)
    is_maya_child_of_unity = (
        os.path.basename(directory) == 'Temp' and
        script == 'FBXMayaMain.mel'
    )
else:
    is_maya_child_of_unity = False

The trick at this point is registering callbacks. The FBXMayaExport.mel script uses the FBXExport command. Why wouldn’t it? Autodesk explicitly says you should, as opposed to using the file command. Unfortunately, the FBXExport command does not presently broadcast messages that you can hook into with the MSceneMessage class.

Knowing that Unity makes copies of its mel scripts in a writable location, however, opens up a really simple possibility. Namely, if we detect that the Maya instance is owned by Unity using the command-line arguments, we can also find the duplicated FBXMayaExport.mel script in the same location as FBXMayaMain.mel. Moreover, because it is writable, we can make a simple modification to use the file command.

# continued from above
import re

if is_maya_child_of_unity:
    path_to_fbx_export_script = os.path.join(
        directory, 'FBXMayaExport.mel'
    )
    with open(path_to_fbx_export_script) as f:
        contents =
    contents = re.sub(
        'FBXExport -f ',
        'file -force -type "FBX export" -exportAll ',
        contents
    )
    with open(path_to_fbx_export_script, 'w+') as f:
        f.write(contents)

Now, after this code has executed, if we know that the Maya instance is a child of Unity, we can use MSceneMessage.addCallback() with the MSceneMessage.kBeforeExport message type.

If you’re interested in easily dropping a solution like this into your pipeline, I have put up a simple Python package on github. In order to use it, all you have to do is import the unityexport package in your studio-wide script. You can then read its unity_project_version attribute to determine if you are running as a child of the Unity editor (and what version the project is) in order to register callbacks with the MSceneMessage class. As an example, the package also registers a callback to automatically adjust the FBX export settings for blend shapes, as per my previous post.

*I don’t work on Windows, but in the couple of tests I have done, it seems like Unity 4.3.x currently hangs on exit if it has an active Maya child process. There have been some assorted vague reports of similar behavior for Max.

by Adam at April 07, 2014 10:33 PM

April 06, 2014

Why Agile became meaningless

Uncle Bob recently wrote a post about The True Corruption of Agile. I think it will be a defining post for me because, as I’ll explain in my next post, I’m ready to give up on Agile. It has become meaningless due to the corruption Uncle Bob describes, and trying to reclaim Agile isn’t possible.

Imagine the Lean movement without Toyota. Toyota is the guiding force in Lean because it grew out the The Toyota Way.* When Lean goes awry, Toyota- the company and its principles, practices, and culture- is there to set things straight.

Toyota can guide Lean because the company has been successful for decades and Toyota attributes its success to the principles and practices known as The Toyota Way. But for many years, Toyota’s success was explained away by anything except the Toyota principles. Finally, all that was left was The Toyota Way. Toyota is the Lean reference implementation.

Agile has no such entity. Instead, we have hundreds of “Agile” shops who attribute success to some (non-)Agile practices. Then, once they’ve evangelized their (non-)Agile stories, reality catches up with them and the success disappears.** But no one hears anything about that failure. The corruption and perversion here is inevitable.

Without a company like Toyota giving birth to Agile and showing others how to do it right, Agile was destined to become what it is now: meaningless and corrupt.

*: The Toyota Way started out as the Toyota Production System. They aren’t technically the same but for the purposes of this post there’s no reason to distinguish.

**: For example, maybe InnoTech decides to use Scrum on a global scale to ship an ambitious product, and talks a lot about how they pulled this off and what benefits it yielded. Years later, velocity is in the toilet because of the endless mountains of technical debt created, and maybe the company has had layoffs. The Scrum transformation will be in a book or on a stage. The layoffs and technical debt will not.

by Rob Galanakis at April 06, 2014 03:54 PM

April 05, 2014

Earth calling maya.standalone!

Somebody was asking about how to control a maya.standalone instance remotely. In ordinary Maya you could use the commandPort, but the commandPort doesn't exist when running under standalone - apparently it's part of the GUI layer, which is not present in batch mode.

So, I whipped up an uber-simple JSON-RPC-like server to run in a maya.standalone and accept remote commands. In response to some queries, I've polished it up and put it on GitHub.

It's an ultra-simple setup. Running the module as a script from mayapy.exe starts a server:
    mayapy.exe   path/to/

To connect to it from another environment, you import the module, format the command you want to send, and shoot it across to the server. Commands return a JSON-encoded dictionary. When you make a successful command, the return object will include a field called 'result' containing a JSON-encoded version of the results:
cmd = CMD('', type='transform')
print send_command(cmd)
>>> {"success": true, "result": ["persp", "top", "side", "front"]}

For failed queries, the result includes the exception and a traceback string:
cmd = CMD('cmds.fred') # nonexistent command
print send_command(cmd)
>>> {"exception": "",
"traceback": "Traceback (most recent call last)... #SNIP#",
"success": false,
"args": "[]",
"kwargs": "{}",
"cmd_name": "cmds.fred"}

It's a single file for an easy drop-in. Please, please read the notes - the module includes no effort at authentication or security, so it exposes any machine running it to anyone who knows it's there. Don't let a machine running this be visible to the internet!

by Steve Theodore ( at April 05, 2014 06:18 PM

Pipi Object Model pt. 3

In part 2, we further defined our pipeline, its intent, and what is required to successfully implement it. Let's have a look at how tracking fits in with all of this.


In electronics, tracking refers to

The maintenance of a constant difference in frequency between two or more connected circuits or components. – Google “Definition of tracking”

But for our intents and purposes, we may refer to it as the way in which data is inferred by other data.

Let's take an example.

Bob outputs an obj to John who turns it into an mb and sends it forward.

To the recipient of John's output, there is only an mb – the obj is nowhere in sight, yet the obj had a significant impact on the creation of the mb.

Here, tracking means to maintain a link between obj and mb so that the recipient of John's output may later refer back to it.
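As a toy sketch (the names and file paths are invented), tracking can be as little as recording, at publish time, which inputs produced each output:

```python
# Toy sketch of tracking: each published output remembers its inputs,
# so anyone downstream can walk the chain back from mb to obj.
provenance = {}

def publish(output_path, *input_paths):
    provenance[output_path] = list(input_paths)

publish('', '')  # John's mb links back to Bob's obj
```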

Why Tracking?

Yes, why bother? The obj is done; mb is what the cool kids are all talking about these days. However, consider this.

You are looking at the output of Mary, the compositor. Mary has produced a sequence of images for review and in the review there is you and there is Mary.

“I want the plane to come in from the left” – you say.

It is not within Mary’s responsibilities to alter either the animation or the camera, so the request must be passed upstream to whomever is responsible for and capable of processing this change.

Responding to Change

In part 2, we touched briefly on how to deal with change. We said that in order for change to enter into a graph, a node must be capable of outputting partially finished information, before it is finished.

Lean Manufacturing was first coined as a term by John Krafcik in 1988, and later translated into something called Lean Software Development.

In it, there are two principles applicable to our situation.

  • Decide as late as possible
  • Deliver as fast as possible

To us, this means that whoever is responsible for making that plane come in from the left has not yet decided on a side from which the plane is to come in. The data sent was partial and is still being computed.

Thus the decision is made as late as possible, ideally after Mary has had a chance to show her work to you so that you may comment and suggest change beyond her responsibilities.

Decide Late


To decide late implies that changes may alter data as it is being processed and if there is any methodology in our industry that facilitates for this it is that of proceduralism.

Proceduralism involves working with a description of steps, rather than actually performing them. It means to have the output of each step be generated rather than created which may involve delegating such processes to an external medium, such as our computers.

Let's take an example.

No doubt the first thing that runs through your mind at this point is that of Houdini and its capabilities in regards to procedural generation of data. (If it isn’t, I envy you. You’ve got some rather delightful experiences ahead of you).


In this example, Bob outputs a tetrahedron to John who rotates it 45° and sends it along to Mary who colours it red.

What all three of them have in common is how they have each described the steps necessary to achieve their processing. Houdini then is the one who actually performs the processing and creates the output.

If there is anything for us to absorb from this example it is that Bob may alter his output after John and Mary have finished processing without either John or Mary having to re-visit any of their work.
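The chain above can be sketched in plain Python (standing in for Houdini; the step functions and field names here are invented for illustration):

```python
# Each artist contributes a *description* of a step; an engine evaluates the
# whole chain on demand, so upstream edits never invalidate downstream work.

def create(shape):                      # Bob's step
    return {"shape": shape, "rotation": 0, "colour": None}

def rotate(geo, degrees):               # John's step
    return dict(geo, rotation=geo["rotation"] + degrees)

def colour(geo, value):                 # Mary's step
    return dict(geo, colour=value)

# The pipeline is data: a recipe, not a result.
recipe = [(create, ("tetrahedron",)), (rotate, (45,)), (colour, ("red",))]

def cook(recipe):
    geo = None
    for func, args in recipe:
        geo = func(geo, *args) if geo is not None else func(*args)
    return geo

# Bob changes his mind; John's and Mary's descriptions are reused untouched.
recipe[0] = (create, ("cube",))
result = cook(recipe)
```

Because the recipe is data, re-cooking after Bob's change is a single call; nothing John or Mary described needs revisiting.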

This is a key factor in our design and governs the majority of choices made for the Pipi Object Model and in fact those of Pipi itself.

Output = Description

When data is described rather than created, we facilitate change.

But how can we apply these same practices to something as abstract and perhaps difficult to grasp as the output of another artist?

By now, we have established that in treating a pipeline as a graph there are two things for us to work towards.

  • Partial outputs
  • Contracts

When output is partial, we can transmit it quickly, and when both the transmitter and recipient have agreed on what data they will each receive – i.e. have signed a contract – the transmission can happen repeatedly without either party having to look for differences across inputs.
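As a rough sketch of what such a contract might look like in code (the field names are invented for illustration, not from Pipi):

```python
# Transmitter and recipient agree on the shape of the data up front, so
# repeated (partial) transmissions need no re-inspection by either party.

CONTRACT = {"path": str, "frame_range": tuple, "complete": bool}

def validate(output, contract=CONTRACT):
    """Reject any output that does not honour the agreed-upon contract."""
    for field, field_type in contract.items():
        if field not in output or not isinstance(output[field], field_type):
            return False
    return True

# A partial output is still a valid transmission -- it simply says so.
partial = {"path": "/renders/shot01", "frame_range": (1, 10), "complete": False}
```

A partial delivery and a final delivery travel through the exact same check; only the `complete` flag differs.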

Let's transform the illustration from part 1 into something a little more suited to our conversation.

Pipeline Conversion

There. Now we can clearly see that each node is connected via a link and that in some cases, one node has multiple inputs. The plot thickens.

Limits of Proceduralism

The notion of describing a set of steps for the computer to process is great, but is it applicable to everything?

Can we alter the output of Bob without involving John or Mary? Yes. But can we have a Storyboard department alter its output, and have that change trickle down-stream without influencing any other node?

The Wolfram Computational Engine

When you ask Siri a question, your question is translated into text. The text is then sent to one or more processes, one of which may be the Wolfram Computational Engine [1], [2].

It may be possible to one day have a change in storyboarding trickle down-stream, and witness its repercussions interactively – just as we are with the Tetrahedron outputted by Bob, rotated by John and coloured by Mary.

Until then, let's locate our limits so that we may work within them.

Within Limitations

Reasonable Bounds

Here is my claim.

Each node within this red rectangle may be condensed into a set of descriptions – just like those illustrated in Houdini above.

You may not believe me, and I don’t blame you. What goes in and what comes out of each node within this illustration varies greatly between one studio and the next.

In many cases, there are no contracts. In others, there are no partial outputs.

Why are development studios different?

You may find it odd that even though talent in our industry never stays in one place for very long, best practices and the general approach to any given task are rarely even partially the same across development studios.

This may have something to do with the speed at which technology shifts today. The process employed to produce the latest blockbuster is legacy as soon as the movie hits the theatres.

At this rate, how could we ever expect consistency?

Within Reasonable Bounds

It may be possible one day for full consistency to be achieved between productions; but just as render-times have hovered around 8 hours per frame for the past 12 years [1] despite the colossal increase in computing power, so too may requirements be added to any achievable consistency.

Until then, let's locate our bounds so that we may work within them.

Stay tuned for part 4, thanks and see you in a bit.


by Marcus at April 05, 2014 03:34 PM

ZBrush- Goblin final sculpt and 3d print Prep

In between writing tools and prototyping a pretty sweet new project (shhh!) I've taken the goblin sculpt far enough that I'm willing to dump some dough on a 3d print. Here is the final, posed model, ready for hollowing out and sending off to the magical 3d printing machines.

The entire model, from start to finish, was created in Zbrush. It all started with a sphere, and the primary tools used were the Clay Buildup brush, clip, move and transpose toolset. I can't get enough of the clip brush. Seriously.

Posing anything in ZBrush is a bit of a pain in the ass, but the Topological masking feature of the transpose tools makes it a lot more intuitive (even if the transpose tools themselves are pretty ambiguous in function). But once you get how it's meant to work, it's suddenly one of the most enjoyable 3d tools to use.

The basis for the clothing was created using the panel loops tool, and the belts and buckles were added using a curve brush (which is an absolute bastard to use). Finally the model was combined into a single shell by gradually combining and dynameshing the individual elements.

I'm looking forward to seeing how it turns out as an actual model!


The model is now available on Shapeways!

by Peter Hanshaw at April 05, 2014 08:04 AM

Visuals in some great games

I was thinking about the visuals of the best games I’ve recently played. Now, I’m not a PC/console gamer, and I am somewhat biased towards playing Unity-made games. So almost all these examples will be iPad & Unity games; however, even taking my bias into account, I think they are amazing games.

So here’s a list (Unity games):

Monument Valley by ustwo.

DEVICE 6 by Simogo.

Year Walk by Simogo (also for PC).

Gone Home by The Fullbright Company.

Kentucky Route Zero by Cardboard Computer.

The Room by Fireproof Games.

And just to make it slightly less biased, some non-Unity games:

Papers, Please by Lucas Pope.

The Stanley Parable by Galactic Cafe.

Now for the strange part. At work I’m working on physically based shading and things now, but take a look at the games above. Five out of eight are not “realistic looking” games at all! Lights, shadows, BRDFs, energy conservation and linear color spaces don’t apply at all to a game like DEVICE 6 or Papers, Please.

But that’s okay. I’m happy that Unity is flexible enough to allow these games, and we’ll certainly keep it that way. I was looking at our game reel from GDC 2014 recently, and my reaction was “whoa, they all look different!”. Which is really, really good.

by at April 05, 2014 07:34 AM

April 04, 2014

Classic (?) CG: Bingo the Clown

From the Classic CG files comes Bingo the Clown. This was originally created to showcase the capabilities of Maya 1.0, back in 1998. It creeped me out then and it creeps me out now.

I've been told, I don't know how correctly, that Chris Landreth - the animator who did this film - was the driving force behind Maya's decision to use Euler angles for everything. I hope that's not true. Having this video and those goddamn Euler angles on your conscience is a lot to answer for.

by Steve Theodore at April 04, 2014 05:00 PM

Global Glob

I am cleaning out my drafts and found this two year old post titled “Globals Glob” with no content. The story is worth telling so here we go.

There was a class the EVE client used to control how missiles behaved. We needed to start using it in our tools for authoring missiles with their effects and audio. The class and module were definitely only designed (I use the term loosely) to run with a full client, and not inside of our tools, which are vanilla Python with a handful of modules available.

My solution was the GlobalsGlob base class, which was just a bag of every singleton or piece of data used by the client that was unavailable to our tools. So instead of:


it’d be:


The ClientGlobalsGlob called the service, but FakeGlobalsGlob did nothing. The GlobalsGlob allowed us to use the missile code without having to rewrite it. A rewrite was out of the question, as it had just been rewritten, except using the same code. (sigh)

Unsurprisingly, GlobalsGlob was super-fragile. So we added a test to make sure the interface between the client and fake globs was the same, using the inspect module. This helped, but of course things kept breaking.
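In rough outline, the pattern and its interface test looked something like this (the method names here are invented for illustration; the real globs wrapped far more than audio):

```python
import inspect

class GlobalsGlob(object):
    """Bag of everything the client code needs but the tools don't have."""
    def play_audio(self, event_id):
        raise NotImplementedError

class ClientGlobalsGlob(GlobalsGlob):
    def play_audio(self, event_id):
        # In the real client this would call the audio service.
        return "played %s" % event_id

class FakeGlobalsGlob(GlobalsGlob):
    def play_audio(self, event_id):
        return None  # tools context: do nothing

def interfaces_match(a, b):
    """The safety net: check both globs expose identical method signatures."""
    for name, method in inspect.getmembers(a, inspect.isfunction):
        other = getattr(b, name, None)
        if other is None or inspect.signature(method) != inspect.signature(other):
            return False
    return True
```

The missile code only ever talks to a GlobalsGlob, so it runs unmodified in either environment; the signature check catches the drift that made the whole thing so fragile.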

This all continued until the inevitable and total breakdown of the code. Trying to use the missile code in tools was abandoned (I think it was, I have no idea what state it’s in). This was okay though, as we weren’t using the missile tools much after those few months. GlobalsGlob served its purpose, but I will never be able to decide if it was a success or failure.

by Rob Galanakis at April 04, 2014 03:45 PM

April 03, 2014

What does your Product Owner own?

In a previous post, I came down hard on Agile leaders that don’t program. Now I’ll turn my sights to another part of the Scrum trinity: the Product Owner. I’ll raise some concerns about what I’ve seen it become in videogames, and offer suggestions for improving how we use the role.

Most product owners I’ve seen in the videogame industry are much closer to project managers than business owners. Their primary job is often the coordination, planning, and prioritization of the cross-team dependencies that the scaled-up nature of game development tends to create. I’ve seen designers and business/marketing in the PO role on occasion. It has sometimes gone very poorly.

I’ve always thought this situation strange, as the PO role most closely aligns with someone from the discipline of game design. We usually don’t have a problem with mapping a Creative Director or other core vision holder to the role of PO. After all, they are the product champion, and marry game design and business sense. A project manager clearly wouldn’t suffice here. But then other POs on the same project are all project managers. What gives?

There are some litmus tests for seeing how product ownership works in your organization:

  • Do people “graduate” from Scrum Master to Product Owner?
  • Do the same people occupy both Scrum Master and Product Owner roles, concurrently or not?
  • Is your product owner leading and championing, or following orders (from above and from the team) and focused on execution (metrics, tracking)?

The skills between product owner and project manager are significantly different. There’s a problem if most people are seen as able to do both, and if POs aren’t coming primarily from design, business, and marketing.

There are lots of reasons things get this way. The important thing is to realize that the term PO isn’t a good fit for what most POs are doing. I see two options.

The first option is to commit to a Chief Product Owner/Area Product Owners structure (described in the footnotes*). Here, product ownership is seen as a distinct set of skills that bridge the business and design/creative side of development. If you have the right people (for example, POs for the overall/creative, technological, visual, and operational parts of the product), this can work quite well. I’d say this is a much better option, but frankly can be difficult or impossible to pull off if you do not have the right people or mindset.

The second option is to commit to having a single Product Owner, and having a project manager (Producer) on each team who is responsible for traditional project management duties and being a proxy for the real PO. They make few decisions of their own, but just act as dutiful middlemen. Usually the Producer will also take the role of Scrum Master, though I think this is a shame as their focus will be on traditional project management. This will make it difficult to make sure your teams are getting an ongoing Lean and Agile education.

Ultimately, the key is to acknowledge how product ownership in your organization works. If how people are fulfilling the role of PO does not seem to align with the literature, change something. You can choose option one, and change your organization to match the literature. Or you can choose option two, to abandon the literature, and find something that will work instead. In either case, do not continue the dissonance.

The core of Lean and Agile is continual improvement. If you are using confusing or inappropriate terms and organizational structures, you sow confusion. If you are confused and without direction, you cannot reliably improve.

*: Scaling Lean and Agile Development is the best book I’ve read about scaling Agile development methodologies. Regarding the role of the product owner, their recommendation is to have a single PO if possible, but to have a single Chief Product Owner and several Area Product Owners if one PO is impractical (which it often is in game development). Importantly, POs are tied to areas of the product, and not to teams (who can and should drift between areas of the product).

by Rob Galanakis at April 03, 2014 03:41 PM

April 02, 2014

Caricature Show 2014 at Disney

Every April 1st, a new Caricature Show opens at the studio. It’s a great opportunity to display the skills of our colleagues in the form of caricatures of people from work. You could see awesome works from Bobby, Dani and Jin.

I wanted to participate this year, and this is one of my entries: my colleague and very good friend Luis Labrador. Click on it to display him in his full glory!



by ike at April 02, 2014 08:22 AM

Maya GUI II: All Your Base Classes Are Belong To Us

In Rescuing Maya GUI From Itself  I talked in some detail about how to use descriptors and metaclasses to create a wrapper for the Maya GUI toolkit that, er, sucks less than the default implementation. I also strove mightily to include a lot of more or less irrelevant references to Thunderbirds. This time out I want to detail what a working implementation of the ideas I sketched out there looks like.

I think this time the irrelevant thematic gloss will come from All Your Base Are Belong To Us jokes. Because (a), we're talking about base classes, (b) what could be more retro and 90's than Maya's GUI system, and (c) For Great Justice, Make All Zig!

I've put my current stab at a comprehensive implementation up on Github, where you can poke at it to your heart's content. The whole project is there, and it's all free to use under the MIT, 'do-what-thou-wilt-but-keep-the-copyright-notice' license. Enjoy! I should warn you, though, that this is still W.I.P. code, and is evolving all the time! Use it at your own risk -- things may change a lot before it's really 'ready'. I expect that over time the code on Github will change in ways that are not reflected here, so if you're reading this post in, say, fall of 2014 you may need to work backwards from later entries to understand differences between the code on Github and the descriptions here.

I'm sharing at an early stage so I can get feedback from you folks in the trenches, and also because the development process includes a lot of teachable moments that are worth sharing. And while this article is of course about Maya GUI, you can extend the same ideas — using metaclasses, descriptors and pythonic wrapper classes — to other areas of the package without too much work.

All Your Properties Are Belong To Our Base Class

With help, Maya GUI can be this good.
What we're shooting for is a library that provides all of Maya's GUI widgets in a clean, pythonic way without making anybody learn too much new stuff. If all goes well, the result is a cleaned up and more efficient version of things most of us already know. You can also treat this as a template for how you might want to wrap other aspects of Maya -- say, rendering or rigging -- in cleaner code.

From last time, we know we can wrap a Maya GUI component in a class which uses descriptors to make conventional property access work. The main thing we're going to deliver in this installment is a slew of classes that have the right property descriptors to replicate the Maya GUI toolkit. We'll be using the metaclass system we showed earlier to populate the classes (if none of this makes sense, you probably want to hop back to last time before following along).

To keep things simple and minimize the boilerplate, we'll want to derive all of our concrete classes -- the widgets and layouts and so on -- from a single base. This helps keep the code simple and ensures that the features we add work the same way for the whole library. We'll add a second class later to handle some details specific to layouts, but that will derive from the base class.

Before we look at the details of the base class, we should think a little more about the properties. In the last installment, we treated all properties the same way - as generic wrappers around maya.cmds. In a production setting, though, we want to distinguish between three types of properties:

    Regular properties

    These are just wrapped accesses to commands, like we demonstrated last week. They use the same ControlProperty class as we used last time to call commands on our GUI widgets.

    Read-only properties.

    A small number of Maya GUI commands are read-only. It would be nice and more pythonic to make sure that these behave appropriately. So, ControlProperty has been tweaked with a flag that allows it to operate as a read-only property; otherwise it's just the same descriptor we talked about last time out.


    Callback properties.

    This one is a bit more involved. I've already complained about the weaknesses of the event model in Maya GUI. Cleaning it up starts with knowing which properties are callback properties and treating them accordingly.

To differentiate between these three types of properties, we need to tweak our old metaclass so that it can distinguish between regular properties, read-only properties, and event properties. Luckily the necessary changes are super simple - basically, we'll take out the hard-coded list of properties we used before and allow every incoming class to declare a list of properties, a list of read-onlies, and a list of callbacks. (if you want to compare, the version from last time is here):
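In outline, the tweak looks something like this -- a standalone sketch with maya.cmds stubbed out, where the class and list names are illustrative stand-ins rather than the library's real API:

```python
class ControlProperty(object):
    """Descriptor standing in for a wrapped maya.cmds query/edit call."""
    def __init__(self, flag, writable=True):
        self.flag, self.writable = flag, writable
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # real version: cmds.control(obj.widget, q=True, **{self.flag: True})
        return obj._state.get(self.flag)
    def __set__(self, obj, value):
        if not self.writable:
            raise AttributeError("%s is read-only" % self.flag)
        # real version: cmds.control(obj.widget, e=True, **{self.flag: value})
        obj._state[self.flag] = value

class ControlMeta(type):
    """Turn each declared flag name into the right kind of descriptor."""
    def __new__(mcs, name, bases, body):
        for flag in body.get('_PROPERTIES', ()):
            body[flag] = ControlProperty(flag)
        for flag in body.get('_READ_ONLY', ()):
            body[flag] = ControlProperty(flag, writable=False)
        # _CALLBACKS entries would get CallbackProperty descriptors instead
        return super(ControlMeta, mcs).__new__(mcs, name, bases, body)

class Button(object, metaclass=ControlMeta):
    _PROPERTIES = ['label', 'width']
    _READ_ONLY = ['exists']
    def __init__(self):
        self._state = {'exists': True}
```

Each incoming class just declares its lists; the metaclass does the per-flag descriptor drudgery once, for every widget type.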

Somebody Set Us Up The Bomb!

Before getting into the nitty-gritty of our overall widget class, I want to make a side note about the special properties used for the callbacks. These CallbackProperty descriptors are slightly different from the familiar ControlProperty. Their job is to de-couple the Maya GUI widget from the commands it fires. They create special delegate objects which will intercept callbacks fired by our GUI objects.

If you have experimented a little with last time's code, you may already have seen that it works just as well for callbacks and commands as for other properties. So you may wonder why we should bother to treat callbacks differently.  What's the point?

There are two main reasons this is a useful complication.

First, and most usefully, event delegates make it easier to add your callbacks after you lay out your GUI, rather than forcing you to interleave your code logic with the process of building forms and layouts. De-coupling the functional code from the graphic display makes for more readable and more maintainable code. It also makes it possible for you to reuse fairly generic layouts with different back ends. In pseudo-code:


ButtonA deletes selected item from ListA
ButtonB renames selected item from ListA
ButtonC adds new item to ListA

as opposed to

I'm going to delete something from the list when it gets made
I'm going to rename something in the list when it gets made
I'm going to add something to the list when it gets made

Keeping the functional bits separate makes it easy to, say, split the purely visual layout details into a separate file, but more importantly makes it clear what's an administrative detail and what's actual functionality.

On a second, more tactical level the proxies also allow you to attach more than one function to a callback. It's pretty common, for example, to want the act of selecting an item in a list to select an object in the Maya scene, but also to enable some relevant controls and maybe check with a database or talk to source control. Using an event proxy lets you handle those different tasks in three separate functions instead of one big monster that mixes up lots of UI feedback and other concerns.

If you're familiar with QT you'll recognize that event delegates are basically QT "Signals".

So that's why the extra complexity is worth it.

The actual workings of the proxy class are documented in the file in the Github project; I'll get back to how those work in a future post. Details aside, the key takeaway for right now is that this setup helps us move towards GUI code that's more declarative. That's the other reason why 'button.label = "Reset"' is better than cmds.button(self.activeButton, e=True, l='Reset') -- it's not just less typing; its real value comes from treating the GUI layout as data rather than code. That means you can concentrate on the actual work of your tools rather than the fiddly details of highlighting buttons or whatever.

Last but not least - by standardizing on the event mechanism we have an easy way to standardize the information that comes with the callback events for very little extra work. So, for example, all of the callbacks include a dictionary of keyword arguments when they fire - and the dictionary includes a reference to the widget that fired the event. That way it's easy to write a generic event handler and not have to manually bind the firing control to a callback function.
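Stripped of the Maya specifics, the delegate idea looks something like this (a hypothetical sketch, not the Github code):

```python
class Event(object):
    """Delegate between a widget and your code, roughly like a Qt Signal."""
    def __init__(self, sender=None):
        self._handlers = []
        self.sender = sender
    def __iadd__(self, handler):
        # attach with `event += handler`; several handlers per event are fine
        self._handlers.append(handler)
        return self
    def __call__(self, *args, **kwargs):
        # every handler automatically learns which control fired
        kwargs['sender'] = self.sender
        return [h(*args, **kwargs) for h in self._handlers]

# Usage: three small handlers instead of one monster callback.
fired = []
click = Event(sender='my_button')
click += lambda **kw: fired.append('select ' + kw['sender'])
click += lambda **kw: fired.append('enable controls')
click()
```

In the real library the CallbackProperty descriptor would hand one of these to the underlying widget's command flag; here the `click()` call stands in for Maya firing the event.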

While we're on the topic of de-coupling: wouldn't it be nice to separate out the details of the visuals ("what color is that button?") from the structure of the forms and layouts? Spoiler alert! This is a topic for a future post -- but the curious might want to check out the Github project.

Think ahead

How the hell are you going to explain THAT to your grandchildren?
The obvious lesson is THINK AHEAD.

So, we've covered our improved property descriptors, and now it's time to set up our base class.

This is a great opportunity to do some plumbing for more efficient coding. However, it's also a temptation -- the desire to sneak everything under the sun into your base classes is a recipe for monster code and untraceable bugs. This design should be as simple as we can make it.

Still, there are a couple of things that it would be nice to put into the base class - they are all very general (as befits base-class functions) and they are all common to any GUI tasks.


In most GUI systems, you can attach any arbitrary data you want to a widget. For example, you might want to have an array of buttons that all do the same thing with slightly different values, say moving an object by different amounts. In Maya you have to encapsulate the data into your command call. With a tag attached to the buttons, on the other hand, you can write a script that just says 'move the target by the amount in this button's tag', which is much easier to maintain and more flexible. And as we just pointed out, the event mechanism always sends a reference to the control which owns an event when it fires, so it's easy to get to the right Tag when you want it.
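Here's a rough, Maya-free illustration of the idea (this Button class is a stand-in for the real wrapper, not Maya's own API):

```python
class Button(object):
    """Minimal stand-in for a widget wrapper that carries a Tag."""
    def __init__(self, key, label, tag=None):
        self.key, self.label, self.Tag = key, label, tag

def move_target(target, sender=None, **_):
    # One generic handler for every button: the distance lives in the Tag.
    target['x'] += sender.Tag
    return target['x']

# An array of buttons that differ only in their Tag data.
buttons = [Button('b%d' % i, 'Move %d' % d, tag=d)
           for i, d in enumerate((1, 5, 10))]

target = {'x': 0}
for b in buttons:
    move_target(target, sender=b)   # in real GUI code the event would do this
```

No per-button command strings to maintain; adding a fourth distance is just adding a Tag.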

A real name

Having explicit names for your pieces is very handy, particularly in large, deeply nested systems like a GUI.

In conventional maya coding the names are critical, since they are your only way of contacting the GUI after it's built. They are also unpredictable, because of Maya's habit of renaming items to give them unique path names. Luckily for us we don't need to rely on the widget names from Maya, since we're managing the GUI items under the hood inside our wrappers. This gets us off the hook for creating and managing variables to capture the results of every GUI command under the sun.

That said, names are still useful in a big complex system. So, to make it really clear how to find one of our wrappers inside a GUI layout it makes sense to ask for an explicit name passed in as the first argument - that way it's totally clear what the control is intended to be. There are, of course, plenty of controls you don't really care about once they're made: help text, spaces, separators and so on. To avoid making users invent names for those guys, we should let users pass in 0 or False or None as a succinct way of saying "I don't care about the name of this thing".

One minor note: I used Key as the name of the property so my IDE did not bug me for shadowing the Python built-in 'id'. Little things matter :)

Speaking of little things: there are some great tools in the Python language to make classes more efficient to work with. The so-called 'magic methods' allow you to customize the behavior of your classes, both to make them feel more Pythonic and to express your intentions more clearly. Here are a couple of the things we can do with the magic methods in our base class:


    Speaking of that pass-in-zero-to-skip-names gimmick, one simple but super-useful thing we can do is to implement the __nonzero__ method. That's what Python calls when you try the familiar
    if something:
    test. In our case, we know that all Maya GUI controls have the exist flag, and therefore all of our GUI classes will too. So, if our __nonzero__ just returns the exist property of our class instances, we can elegantly check for things like dead controls with a simple, pythonic if test.


    __repr__ is what Python calls when you need a printable representation of an object. In our case, we can pass back our underlying Maya GUI object, which is just a GUI path string. This way, you can pass one of our wrapper classes to some other python code that works on GUI objects and it will 'just work' -- this is more or less what PyMel does for nodes, and it's a great help when integrating a new module into an existing codebase. Like PyMel's version there will be some odd corner cases that don't work, but it's a handy convenience most of the time.
    As a minor tweak, the __repr__ is also adjusted to display differently when the GUI widget inside a wrapper class has been deleted. This won't prevent errors if you try to use the widget, but it is a big help in parsing error messages or stack traces.


    The next magic method we want to add is __iter__. It is what Python calls when you try to loop over a list or a tuple.
    Now, a single GUI object obviously is not iterable. A layout like columnLayout, on the other hand, can be iterated since it has child controls. By implementing __iter__ here and then over-riding it when we tackle layouts, we can iterate over both layouts and their children in a single call. This makes it easy to look for layout children :
    for child in mainlayout:
    if child.Key == 'cancel': #.... etc

So with all those methods added the base Control class looks like this (you'll notice that it is inheriting from two classes we have not touched on, Styled and BindableObject. Those don't interact with what we're doing here - they'll come up in a later post. You can pretend it just says 'object'):
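For reference, a heavily condensed, standalone sketch of the magic methods just described might look like this (maya.cmds is faked with a plain flag here, and Styled/BindableObject are left out entirely):

```python
class Control(object):
    def __init__(self, key):
        self.Key = key                      # explicit name; falsy means "don't care"
        self.widget = "window|layout|%s" % (key or "anon")  # stand-in for the Maya path
        self._exists = True

    @property
    def exists(self):
        # real version: cmds.control(self.widget, q=True, exists=True)
        return self._exists

    def __nonzero__(self):                  # Python 2 truth test: `if my_control:`
        return self.exists
    __bool__ = __nonzero__                  # the same hook under Python 3

    def __repr__(self):
        # pass back the GUI path so the wrapper 'just works' in path-based code
        if self.exists:
            return self.widget
        return "<deleted control %r>" % self.Key

    def __iter__(self):
        yield self                          # layouts will override this to yield children

button = Control("cancel")
```

Simple, common functionality and nothing else -- the whole point of a base class.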
Despite my rather verbose way of describing it all, this is not a lot of code. Which is exactly what you want in a base class: simple, common functionality, not rocket science. Hopefully, though, adding those pythonic behaviors will save a lot of wasted verbiage in production work.
If you're reading the code carefully you'll probably spot a little bit of code I haven't described. That's just there to support event proxies -- we'll talk about the details when we get back to the event proxies in the future.

All Your Children Are Belong To Parent Layout

There's one little bit of plumbing in Control that I didn't call out:

That's a way of making sure that we can store references to our control wrappers in our layout wrappers - that is, when you create a wrapped button inside a wrapped columnLayout, the columnLayout has a handle to the wrapper class for the button. Which brings us around neatly to the wrapper class for layouts - called... wait for it... Layout.

Damn, the internet has a lot of time on its hands

To support nesting, we want the Layout wrapper class to be a context manager. The idea is that when you start a Layout, it declares itself the active layout and all GUI controls that get created add themselves to it; when you're done with it, control is returned to whatever Layout was active before. As Doctor Who says of bow ties, "Context Managers are cool."

If you've done a lot of Maya GUI you know it's also nice to have the same functionality for menus as well. So, to avoid repeating ourselves let's start by creating a generic version of Control that works as a context manager so we can get identical functionality in windows, layouts and menus. Then we can inherit it into a wrapper class for layouts and another for windows and voila, they are all context managers:

While we're messing with contexts, this is also a great opportunity to do what PyMel already does and make all layouts automatically manage UI parenting. This gets rid of all those irritating calls to setParent(".."), and lets us write GUI code that looks like real Python and not a plate of spaghetti. Compare this typical cmds example:

To this version using context manager layouts:

That example also includes one other neat way to leverage contexts. If you double-check that Layout definition you'll see that it adds child wrapper objects to its own __dict__. So, in this example, you could get to the last sphere-making button as gui.g_buttons.grid.mk_sphere without having to manually capture the names of the underlying widgets the way the first example must. Since Maya GUI is always a single-rooted hierarchy, if you know the root of a window or panel you can always get to any GUI part by name; this saves a lot of the boring boilerplate you would otherwise need just to keep track of bits and pieces.

There's a little extra magic in there which allows the add method to discriminate between children you care about and those you don't. If your child controls have no key set, they won't be added. On a related note, you can also be tricksy and add a control which is not a direct child of the layout - for example, if you had a layout with a list of widgets in a scrollLayout, you usually don't care about the scrollbar - it's just along for the ride. So you can add the widgets directly to the 'real' parent layout and keep the paths nice and trim. The goal, after all, is to make the gui layout a logical tree you can work with efficiently. There's a practical example of this trick in the file on Github.

Here's a snippet tacked on to the end of that last sample showing how you can use the iterability of the layouts to set properties in bulk. You can see how the work of turning command-style access into property-style access, combined with the extra clarity we get from context managers, really pays off:

One parting note about the naming scheme. It does have one inevitable drawback: the child wrappers must have unique names inside a given context. Not much we can do about that. They can, however, have the same name under different parents - the example above has gui.t_buttons.grid.mk_sphere and gui.g_buttons.grid.mk_sphere. That's a useful thing to exploit if you want to, say, find all of the 'Select' buttons on a form and disable them, or something of that sort.

Make All Zig!

Hopefully, the combination of some syntax sugar in our wrappers and turning layouts into context managers will make Maya GUI layout less of a pain in the butt. However, we still need to actually crank out wrappers for all those scores of classes in the Maya GUI library. Descriptors and metaclasses are powerful tools, but few of us have the intestinal fortitude to plow through the dozens of classes in the Maya GUI library getting every flag and command correct.

In an ideal world we'd have a way of reflecting over some kind of assembly information and extracting all of the maya GUI commands with their flags and options. Of course, in an ideal world we would not have to do this in the first place, since the native GUI system would not be the unfortunate SNES-era mishmash that it is.

Mass production is a pain in the ass.
Luckily, the TA spirit cannot be kept down by adversity. In this case we don't have a nice clean API, but we do have MEL... poor, neglected, wallflower MEL. Well, here's a chance for the wallflower to save the party: MEL's help command can list all of the commands and all of the flags in Maya. So, what we need to do is run through all of the MEL commands in help, find the ones that look like GUI commands, and capture their command-flag combinations as raw material for our metaclass control factory.
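The harvesting step might look something like this. Inside Maya you'd capture the text that MEL's help command prints (via maya.mel.eval or cmds.help); here it's pasted in as a literal, and the sample text is illustrative rather than a verbatim dump from any particular Maya version:

```python
import re

# Illustrative capture of what `help button` prints in MEL;
# in Maya you would fetch this live instead of hard-coding it.
HELP_TEXT = """
Synopsis: button [flags] [String]
Flags:
   -e -edit
   -q -query
  -al -align              String
 -bgc -backgroundColor    Float Float Float
  -en -enable             Boolean
   -l -label              String
"""

# a flag line starts with a short flag, then its long-name twin
FLAG_LINE = re.compile(r'^\s*-(\w+)\s+-(\w+)')

def parse_flags(help_text):
    """Return {short: long} for every flag line in the help output."""
    flags = {}
    for line in help_text.splitlines():
        match = FLAG_LINE.match(line)
        if match:
            flags[match.group(1)] = match.group(2)
    return flags

print(parse_flags(HELP_TEXT)['bgc'])  # backgroundColor
```

Run that over every command name, keep the ones whose flag sets overlap cmds.control or cmds.layout, and you have your raw material.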

See? This was getting all programmery, but now we're back in familiar TA spit-and-baling-wire territory. Comfier?

The actual code to build the wrappers isn't particularly interesting (it's here if you want to see it). In two sentences: use the MEL help * command to find all of the commands in Maya which share flags with cmds.control or cmds.layout. Then collect their flags to make the list of class attributes that the metaclass uses to create property descriptors. The final output will be a big ol' string of class definitions like this:
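One plausible shape for that generation step (a hypothetical template, not the author's actual generator): turn a command name and its flag list into the source text of a wrapper class, ready to dump into a module.

```python
# Template for one generated wrapper class. 'Control', 'CMD' and
# '_ATTRIBS' are hypothetical names for this sketch.
CLASS_TEMPLATE = """\
class {classname}(Control):
    CMD = cmds.{command}
    _ATTRIBS = {attribs}
"""

def generate_class(command, flags):
    """Emit the source for one wrapper class from a cmds command name."""
    return CLASS_TEMPLATE.format(
        classname=command[0].upper() + command[1:],  # pep-8-ish capitalization
        command=command,
        attribs=sorted(flags))

src = generate_class('button', ['label', 'enable', 'backgroundColor'])
print(src)
# class Button(Control):
#     CMD = cmds.button
#     _ATTRIBS = ['backgroundColor', 'enable', 'label']
```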

We generate two files, one for controls and one for layouts (that's an arbitrary design call on my part, you could of course have one file). Now they're just sitting on disk as if we'd written them by hand. We can import our newly generated modules and away we go, with nice pythonic properties and our new functions.

There is one judgement call here that is worth mentioning in passing.

The logic in the helper modules which generate this is all deterministic; it doesn't need human intervention, so it could actually be run at module load time rather than being run and dumped out to a file. For what I want to do, I felt that physical files were a better choice, because they allow the option of hand-tailoring the wrapper classes as the project evolves. Plus, the startup cost of trolling through every MEL command, while it's not very big, is real, and it seems good to avoid it. I've heard enough grumbling over the years about PyMel's startup speed that I thought it wisest to opt for speed and clarity over fewer files on disk.
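The generate-to-disk choice is mechanically simple. A minimal sketch (hypothetical module and class names): write the generated source to a real .py file once, and from then on it imports like any hand-written module, with zero harvesting cost at startup.

```python
import importlib
import os
import sys
import tempfile

# hypothetical generated source; the real files would hold scores of classes
GENERATED_SRC = "class Button(object):\n    label = 'generated'\n"

target_dir = tempfile.mkdtemp()
with open(os.path.join(target_dir, 'gen_controls.py'), 'w') as f:
    f.write(GENERATED_SRC)

# the generated module now imports like any hand-written one
sys.path.insert(0, target_dir)
gen_controls = importlib.import_module('gen_controls')
print(gen_controls.Button.label)  # generated
```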

One nice side effect of generating our wrappers this way: we've added some functionality through our base classes, but fundamentally we've kept the same names and options we already know from plain old maya.cmds. The only changes are the mandatory names and the fact that I've capitalized the class names to make them more PEP-8-friendly.

Hopefully, this keeps the learning curve short for new users. It's hard enough to pick up a new style; making users memorize hundreds of new property names seems like a big tax.

In the version up on Github (and in this example) I opted to use only the long names for the properties. This is definitely a matter of taste; I'm sure that many TAs out there are sufficiently familiar with the old Maya command flags that a howler like cmds.rowLayout(nc=2, cw2=(50,100), ct2=('both', 5), bgc=(.8,.6,.6), cl2=('left', 'right')) makes its meaning clear. For my part, though, the long names clarify the intent of the code enormously in exchange for a fairly small upfront investment in typing.
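To make the readability point concrete, here's a toy translation of a few of those short flags through a short-to-long map (a tiny, hand-picked subset for illustration; in practice the full map would come out of the MEL help harvest described above):

```python
# a small subset of rowLayout's short->long flag names
SHORT_TO_LONG = {
    'nc':  'numberOfColumns',
    'cw2': 'columnWidth2',
    'ct2': 'columnAttach2',
    'bgc': 'backgroundColor',
    'cl2': 'columnAlign2',
}

def expand_flags(**kwargs):
    """Rewrite short Maya flag names as their long equivalents."""
    return dict((SHORT_TO_LONG.get(k, k), v) for k, v in kwargs.items())

long_form = expand_flags(nc=2, cw2=(50, 100), bgc=(.8, .6, .6))
print(sorted(long_form))  # ['backgroundColor', 'columnWidth2', 'numberOfColumns']
```

The long-name version says what each value is for without a trip to the docs; that's the whole argument.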

If you are of the opposite opinion, though, you can call the generate_helpers and generate_controls functions with includeShortNames set to True to make your own wrappers with the short flags too.

What You Say!!!

Now we've got a complete library of all the widgets. You can see the results on GitHub (the base classes are also up there for your perusal). If all you want is to stop writing long commands every time you touch a GUI item, you're done. You can write crisper layout code, tweak your properties, and so on with what we've covered so far. If you're interested in making practical use of this setup -- remember that WIP warning! -- you should read the docs in the module to make sure you know how to hook up callback events. I'll cover that in more detail in the future.

However... simpler syntax is just scratching the surface of what we can get up to now that we have a proper class library for our widgets. Next time out we'll look at the event mechanism in more detail and talk about how to cleanly separate your functional code, GUI layouts, and the display styles of your widgets.

Until next time....

by Steve Theodore at April 02, 2014 03:36 AM

April 01, 2014

Mike Bland’s profound analysis of the Apple SSL bug

Mike Bland has done a series of articles and presentations about the Apple SSL bug over on his blog. To be frank, it’s pretty much the only good coverage of the bug I’ve seen. He’s submitted an article to the ACM; crossing my fingers it gets printed, because it’s an excellent analysis. I’ve been frustrated it hasn’t gotten more traction.

Mike’s take is that this bug could have easily been caught with simple unit testing. It’s sad this has been so overlooked. We gobble up the explanations that do not force us to confront our bad habits. We can point and laugh at the goto fail (“hahaha, never use goto!”), or just shrug at the merge fail (“we’ve all made mistakes merging”). It’s much more difficult to wrestle with the fact that this error- like most- is because we are lazy about de-duplicating code and writing tests.

I have no problem categorically saying anyone who doesn’t blame the SSL bug on lack of automated testing and engineering rigor is making excuses. Not for Apple, but for themselves.

by Rob Galanakis at April 01, 2014 03:58 PM

March 31, 2014

What if Carl Sagan were a hack?

I was watching the first episode of Cosmos, and Neil deGrasse Tyson talked some about how stellar a scientist Carl Sagan was and what an impact Carl had on Neil personally. Carl’s abilities were important for his advocacy, because a) it lent him credibility, and b) it allowed him to engage. He practiced while he advocated. I can’t imagine Carl Sagan achieving the impact and legacy he did by abandoning the lab for the lecture circuit.

What a powerful lesson for those of us that manage people doing work we’ve never done. How can we deeply connect with them?

What a reminder for those of us that have moved into managing and left behind creating. Should our dues, once paid, last forever?

What a feeling for those of us who have moved into management out of expectation. Is it right to tell people what to do, while we have lost enough passion to do it ourselves?

by Rob Galanakis at March 31, 2014 08:05 PM

March 30, 2014

Planet Mars, the simple feed aggregator

In December, I spent some time polishing up a new fork of the old PlanetPlanet Python feed aggregator, named Planet Mars. I chose the name to contrast it with Planet Venus, a successful PlanetPlanet fork. My goal for Planet Mars is to make it as simple as possible, to be the best aggregator for the very common use case. It’s based on my experience running a PlanetPlanet-based river, which has been powered by Planet Mars for the last few months.

The main features are:

  • Jinja2 templating, in addition to the original htmltmpl engine. Old templates still work, new templates can use a better language.
  • Parallelization of feed updating, so things are many times faster.
  • The code is stripped down and much easier to understand and improve.
  • Planet Mars can be installed via pip, so instead of having to place your customizations in its package directory, you can pip install the GitHub repo and import it as a module into your own code (you can see the Planet TechArt repo as an example).
  • Planet Mars is otherwise very similar to PlanetPlanet, making switching to Planet Mars very easy :)

Please take a look and tell me what you think, especially if you are an original Planet user. I’m very open to improvements and suggestions. If there’s any demand, I can put this on PyPI as well.

by Rob Galanakis at March 30, 2014 07:28 PM