Planet Tech Art
Last update: October 31, 2014 04:00 PM
October 31, 2014

Open Dialogue Update One

Here is a video update of progress with Open Dialogue, the open source, Python-based dialogue-tree creation tool-ma-jig.

More details can be found in this other post here.

by mattanimation at October 31, 2014 08:20 AM


October 30, 2014

Rigging Dojo’s (A.I.R) : Halloween Treat no Tricks with Josh Carey

October 31st 2014 Artist in Residence (A.I.R) Live with Josh Carey, Rigging Dojo co-founder and Rigging Supervisor at Reel FX. Friday 2-3pm Central. Josh, taking a break after finishing up “The Book of Life”, is currently teaching on location in Denmark, so we have an earlier time than our other AIR events. Also we want […]

The post Rigging Dojo’s (A.I.R) : Halloween Treat no Tricks with Josh Carey appeared first on Rigging Dojo.

by Rigging Dojo at October 30, 2014 05:52 PM


The Dog Ate My Homework

I had an interesting issue at work the other day. While the details are unit-test specific, I learned a useful general idea that’s worth sharing.

We run all of our various Maya tools through a single build system which runs unit tests and compiles code for our different targets (currently Maya 2011 and 2015). Ordinarily, since I’m very allergic to using binaries when I don’t have to, this multi-Maya setup doesn’t cause us a lot of headaches. I have a little extractor routine which unzips the few binaries we do distribute into the right places, and all the rest of the code is blissfully unaware of which Maya version it’s running in (with the exception of the nasty ls bug I mentioned a few weeks ago).



Last week, however, we added a new tool and accompanying test suite to the toolkit. It works fine in 2015 (where we do all of our actual development right now), but crashes in 2011. After a bit of head-scratching we eventually realized that the cause was absurdly simple: the test uses a saved Maya file so that it can work with known, valid data. Of course the file was saved from Maya 2015, so when the Maya 2011 version of the tests tries to boot it up, it falls over because 2011 won’t read a 2015 file.
Or, as the checkin comment has it, “Doh!”

Test cancelled!


The obvious fix is just to skip the test in Maya 2011 - a test that can never pass is hardly generating much useful information, and the likelihood that our small pool of 2011 customers actually needs this tool is low anyway. Skipping a test is easy enough if you’re running the tests manually in an IDE - but it's a lot more complex if you’ve got a build server that’s trying to auto-detect the tests. Plus, designing a system that makes it too easy to skip tests is a Bad Thing™ - you generally want all of your tests running all the time, since “I’ll re-enable that test after I deal with this problem” is right up there with “the check is in the mail” and “it’s not you, it’s me” in the probity department.
So, the goal is to allow us to conditionally disable tests based on a hard constraint - in this case, when they are running on an inappropriate version of Maya - without compromising the tests as a whole. Secondarily, it would be nice to do this without any kind of central registry file - we’d really like the tests to just run, except when they can’t.



Now, typically a test runner will detect tests by looking for classes that derive from unittest.TestCase. The easiest way to skip a test, therefore, is simply not to define it at all - if the test runner doesn’t see the class when it imports your test modules, you’ll be fine. Note: this strategy won’t work if you have some kind of hand-rolled test harness that finds tests by string-parsing file contents or something like that! However, you probably want to be doing the standard thing anyway... As they say in Python land, “There should be one-- and preferably only one --obvious way to do it.”
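For reference, the "standard thing" is plain unittest discovery. Here's a minimal sketch (the 'tests' directory name is just a placeholder) of the kind of runner that only ever collects TestCase subclasses - which is exactly what makes the trick below work:

import unittest

# standard test discovery: the loader only collects classes derived from
# unittest.TestCase found in the target directory ('tests' is a placeholder;
# discover() needs Python 2.7+)
suite = unittest.defaultTestLoader.discover('tests')
unittest.TextTestRunner(verbosity=2).run(suite)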

In C++ or C# you could do this with a preprocessor directive - a conditional check like #if or #ifdef that runs at compile time to include or exclude certain parts of a file.

In Python we don’t even need that: you can just inline the check in your file and it will execute when the module is imported. Here’s a simple example which conditionally uses the built-in OrderedDict in Python 2.7 and falls back to Raymond Hettinger’s ordereddict module in Python 2.6:

import sys

# version_info is a plain tuple in 2.6, so compare tuples rather than
# relying on the named attributes that only exist in 2.7+
if sys.version_info[:2] >= (2, 7):
    from collections import OrderedDict
else:
    from ordereddict import OrderedDict

(If you are a total #IFDEF addict, there is also the pypredef module. Not my cup of tea, but the author does make some good points about the utility of his approach.)

The inline approach works fine in small amounts, but it’s aesthetically unappealing - it forces a bunch of module-level definitions away from the left margin, visually demoting them from important names to generic code blocks. More importantly, it’s easy to mess up: a misplaced indentation can radically change the contents of your file, and even though I’m a big fan of indentation over curlies, I miss my indents with depressing regularity.

Fortunately, Python has an elegantly succinct way of annotating code for higher-level purposes without messing up the visual cleanliness and logical flow: decorators. Decorators are handy here for two reasons: first off, they express your intent very clearly by telling future readers something unambiguous about the structure of your code. Secondly, they can execute code (even fairly complex code, though frankly it’s a bad idea for what I’m describing here!) without compromising the layout and readability of your module.
The particularly nice thing about decorators here is that the way they work is a natural match for the problem we have.

The substitute teacher

A decorator is just a function (or a callable class) which takes another function or class as an argument. When Python finds a decorated function or class, it calls the decorator function and passes the target - that is, the decorated bit of code - as an argument. Whatever comes out of the decorator function is then swapped in for the original code.
Here’s a simple example, using functions for simplicity:


def decorated(original_func):
    def replacement_func(arg):
        # this function replaces the original;
        # it only knows what the original does
        # because that was passed in when the
        # decorator was called....
        print "calling original"
        result = original_func(arg)
        print "original says : ", result
        return result
    # return our new replacement function,
    # but bind it to the name of the original
    return replacement_func

@decorated
def size(arg):
    return len(arg)

example = size([1, 2, 3])
# calling original
# original says : 3

print example
# 3

The decorator can completely replace the original code if it wants to:

def override(original_func):
    def completely_different():
        return "and now for something completely different"
    return completely_different

@override
def parrot():
    return "I'd like to make a complaint about a parrot"

print parrot()
# and now for something completely different

Or, it could leave it untouched too:

def untouched(original_func):
    return original_func

@untouched
def spam():
    return "spam!"

print spam()
# spam!

The essential thing here is that the decorator is sort of like one of those elves who swap out children for changelings. Officially nothing has changed - the name you defined in the un-decorated code is right there - but under the hood it may be something different.


Mandatory testing

Once you understand the decorator-as-changeling idea, it becomes pretty easy to see how a decorator can swap code based on some condition. You might, for example, try to patch around a function which returns an empty list in Maya 2014 but crashes in Maya 2015:


from maya import cmds

def safe_2015(original_func):
    if '2015' in cmds.about(v=True):
        # wrap it for safety in 2015
        def safe_ls(*args, **kwargs):
            try:
                return original_func(*args, **kwargs)
            except RuntimeError:
                return []
        return safe_ls
    else:
        # send it back unchanged in non-2015
        return original_func

@safe_2015
def do_something():
    # ....
    pass

(Disclaimer: I wouldn’t use this code in practice! It’s a good example of the principle, but not a wise way to patch around the 2015 ls bug).

Returning at long last to the problem of suppressing tests: we just need to harness the power of decorators to replace the class definition of our test classes with something else that won’t get run by our test suite. And, luckily, that’s really easy to do, since we don’t even need to build a replacement - we can just hand back something the test runner will ignore:

from maya import cmds

def Only2015(original):
    if '2015' in cmds.about(v=True):
        return original  # untouched!
    else:
        return object  # the decorated class is now just object

So if you do something like this in your tests:

from unittest import TestCase
from maya import cmds
import maya.standalone

try:
    maya.standalone.initialize()
except:
    pass


@Only2015
class Test2015Only(TestCase):
    def test_its_2015(self):
        assert '2015' in cmds.about(v=True)

class TestOtherVersions(TestCase):
    def test_any_version(self):
        assert '20' in cmds.about(v=True)

As you’d expect, both of these tests will run and pass under a Maya 2015 Python. However, under any other version of Maya the file effectively looks like this:

from unittest import TestCase
from maya import cmds
import maya.standalone

try:
    maya.standalone.initialize()
except:
    pass


# in 2014 or earlier, this TestCase class has been replaced by a plain object
class Test2015Only(object):
    pass


class TestOtherVersions(TestCase):
    def test_any_version(self):
        assert '20' in cmds.about(v=True)

Because Test2015Only is now just a plain object instead of a TestCase, the test runner doesn’t even see it and doesn’t try to run it.
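If you wanted to generalize the trick, the same idea works as a decorator factory that takes the version string as an argument. This is just a hypothetical variation on Only2015 above, not something in our actual toolkit:

from unittest import TestCase
from maya import cmds

def only_in_maya(version):
    # hypothetical generalization of Only2015: keep the TestCase only
    # when the running Maya matches the requested version string
    def decorator(original):
        if version in cmds.about(v=True):
            return original
        return object
    return decorator

@only_in_maya('2015')
class Test2015Features(TestCase):
    def test_something_2015_specific(self):
        assert '2015' in cmds.about(v=True)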

Makeup work

This is a lovely example of why Python can be so much fun. The language has the magical ability to extend itself on the fly - in this case, to change the meaning of whole blocks of otherwise conventional code - but at the same time it offers simple, conservative mechanisms that keep that process from degenerating into mere anarchy (or, worse, into JavaScript).

This particular gimmick was a great way to clean up our messy test set. Predictably, about 30 seconds after I verified that it worked I was already brainstorming all sorts of cool new uses for this tactic.

A few more minutes of reflection, however, brought me to see that this kind of trick should be reserved for special occasions. The ability to swap the contents of a name based on a runtime condition is definitely cool - but it’s hardly a good practice for readability and maintenance down the road. It happens to be a nice fit for this problem because a test is never going to be used by anything other than the test suite. Trying the same thing with, say, a geometry library that gets imported all over the place would be a nightmare to debug.

Magic is wonderful, but best used sparingly.


by Steve Theodore (noreply@blogger.com) at October 30, 2014 04:55 PM


A Lack of Drawing

There has most definitely been a lack of drawings lately. Not so much in the amount I’ve been doing, but more in how many of them I post. I think it’s twofold. For one, I’ve been busy drawing, I just don’t post as much. I’m finding lately that I’m getting even more critical of my drawings, and I still don’t feel like many of them are ready for the public eye. Hence the quick journal flip-throughs. I’m forcing myself to draw, but every drawing doesn’t need to be under the microscope of the Internet. I need to draw to have fun, and the only way I can do that is to be able to fail and not believe it’s being made to get posted.

The second reason, which is probably more of the reason why I haven’t been posting lately, is that I’ve fallen in love with pottery. I’ve been taking an 8-week class near my house as well as practicing on the weekends. At first I thought it was going to be an artistic doodling experiment; it turns out it is much more scientific than I was hoping. I feel like I do so much science at work, I need the other side of my brain to relax. I’ve also found that all the things I like to do are a mix of science and art. Maybe that’s just part of my DNA and I’m forever cursed between the two sides. Either way, I’ve really gotten the pottery bug. I’ve been watching a lot of videos online as well as reading and absorbing as much material as I can. I feel creatively energized again. Anyway, here are a few pictures - a big batch ready for glazing.

by at October 30, 2014 04:27 PM


October 28, 2014

Laziness and cleanliness and MEL, Oh My.

The other day I was following a thread on Tech-Artists which reminded me of one of those little Maya things that doesn't really matter, but which drives me bonkers: busted front ends for Maya plugins.

When a developer makes a plugin for Maya, they can create new MEL commands as well as new nodes. The new commands all use the same basic strategy to parse their incoming arguments: Maya hands them an MArgList object and they have to work out what it means. If the plugin uses an MSyntax and an MArgParser to pull the values out, then the plugin will behave just like the functions in maya.cmds: flags and arguments will be checked the same way we're used to in the rest of Maya Python.
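If you've never written one, here's a rough sketch of what that "correct" pattern looks like in a Python plugin using the old OpenMaya 1.0 API - the command and flag names are made up for illustration:

import maya.OpenMaya as om
import maya.OpenMayaMPx as ompx

class HelloCmd(ompx.MPxCommand):
    # hypothetical command:  hello -name "fred"
    kNameFlag, kNameFlagLong = "-n", "-name"

    @staticmethod
    def syntax_creator():
        syntax = om.MSyntax()
        syntax.addFlag(HelloCmd.kNameFlag, HelloCmd.kNameFlagLong, om.MSyntax.kString)
        return syntax

    def doIt(self, args):
        # because the command publishes an MSyntax, MArgParser does the flag
        # handling, and the command behaves like any other maya.cmds function
        parsed = om.MArgParser(self.syntax(), args)
        name = "world"
        if parsed.isFlagSet(HelloCmd.kNameFlag):
            name = parsed.flagArgumentString(HelloCmd.kNameFlag, 0)
        self.setResult("hello " + name)

def initializePlugin(mobject):
    plugin = ompx.MFnPlugin(mobject)
    plugin.registerCommand("hello", lambda: ompx.asMPxPtr(HelloCmd()), HelloCmd.syntax_creator)

def uninitializePlugin(mobject):
    ompx.MFnPlugin(mobject).deregisterCommand("hello")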

Unfortunately, there's no law that says the plugin has to do it 'correctly'. There are more than a few plugins that don't use the standard MSyntax/MArgParser combo and just pull values out of the argument list directly. The most notorious offender is the FBX plugin, which generates a ton of commands that all fail to use the standard parsing mechanism. And, of course, there are also bits of MEL lying around from other sources that are a bit painful to call from Python. That's why you see tons of hairy beasts like this:

import maya.mel as mel
mel.eval("FBXExportBakeComplexStart -v " + str(start_frames[x]))
mel.eval("FBXExportBakeComplexEnd -v " + str( end_frames[x]))
mel.eval("FBXExport -f \"" + get_export_file(x) + ".fbx\"")

While this is workable, it's fragile: composing strings inline inside a function call is an invitation to bugs like a forgotten escaped quote (tell me you'd notice that last escape in the final line if it was borked!) or a bit of significant whitespace. It's also harder to meta-program anything that's written like this - you can't create a dictionary of options or a variable-length list of arguments when you call the function. Last - but not least, at least not for lousy typists like myself - you can't rely on autocompletion in your IDE to make things quicker and less error-prone.

In cases like this it's handy to be able to fall back on a wrapper that will feed the plugin a correctly formatted MEL-style argument string but which looks and codes like regular Maya Python. Luckily, you can usually rely on the MEL syntax, even when the plugin's argument parsing is as Python-unfriendly as the FBX plugin's. If the MEL version doesn't work either, the whole thing isn't worth rescuing - but if it does, then you can Python-ify the front end with a little bit of Python magic to make sure the arguments are passed correctly.

One thing we can do to make this a simple job is to use what's known as MEL function syntax. This is a little-used MEL feature that lets you call a MEL command more or less like a traditional function, rather than with the shell-style format you usually see. Function syntax uses parentheses and a comma-delimited list of arguments rather than whitespace. It means that these two calls are identical:

spaceLocator -p 1 2 3 -n "fred";
spaceLocator("-p", "1", "2", "3", "-n", "fred");

While you probably don't want to type that second one, it's a lot easier to manage if you're trying to turn a bunch of flags and arguments into a MEL command string.  What we'll be doing is creating a function that generates argument strings in the function syntax style and then passes them to MEL for you, allowing you to use the familiar cmds-style arguments and keywords instead of doing all the string assembly in-line with your other code.

The rest of the relevant MEL syntax rules are pretty simple, with one exception we'll touch on later:


  • Everything is a string!
  • Flags are preceded by a dash
  • Flags come first
  • Non-flag arguments follow flags
  • Multipart values are just a series of single values


That first one may surprise you, but it's true - and in our case it's extremely useful. If you're dubious, though, try this in your MEL listener:

polyCube ("-name", "hello", "-width", "999");

Implementing these rules in a function turns out to be pretty simple.

import maya.mel

def run_mel(cmd, *args, **kwargs):
    # make every value into a tuple or list so we can string them together easily
    unpack = lambda v: v if hasattr(v, '__iter__') else (v,)
    output = []
    for k, v in kwargs.items():
        output.append("-%s" % k)
        # if the flag value is True or False, skip it
        if not v in (True, False):
            output.extend(unpack(v))

    for arg in args:
        output.append(arg)

    quoted = lambda q: '"%s"' % str(q)

    return maya.mel.eval("%s(%s)" % (cmd, ",".join(map(quoted, output))))

This function will correctly format a MEL call in almost all circumstances (see note 1, below, for the exception). For example, the irritating FBX commands above become:

run_mel("FBXExportBakeComplexStart", v = start_frames[x])
run_mel("FBXExportBakeComplexEnd", v = end_frames[x])
run_mel("FBXExport", f = get_export_file(x) + ".fbx")

That's a big improvement over all that string assembly (not least because it pushes all the string nonsense into one place where it's easy to find and fix bugs!). However, it's still a bit ugly. Wouldn't it be cleaner and more readable to nudge these guys another step towards looking like real Python?

Luckily that's quite easy to do. After all, the run_mel("command") part is the same every time except for the command name. So why not make a second function that makes functions with the right command names? This is basically just a tweak on the way decorators work. For example:

def mel_cmd(cmd):
    def wrap(*args, **kwargs):
        return run_mel(cmd, *args, **kwargs)
    return wrap

This takes a MEL command name ("cmd") and makes a new function which calls run_mel with that command. So you can create objects which look and work like Python commands but do all the nasty MEL stuff under the hood, like this:

FBXExport = mel_cmd("FBXExport")    
FBXExportBakeComplexStart = mel_cmd("FBXExportBakeComplexStart")
FBXExportBakeComplexEnd = mel_cmd("FBXExportBakeComplexEnd")

And call them just like real Python:

FBXExport(f = "this_is_a_lot_nicer.fbx")

All this might seem like a bit of extra work - and it is, though it's not much more work than all those laboriously hand-stitched string concatenations you'd have to do otherwise. More importantly, this actually is a case where code cleanliness is next to godliness: keeping rogue MEL from invading your Python code is a big boon to long-term maintenance. String assembly is notoriously bug-prone: it's way too easy to miss a closing quote, or to append something that's not a string and bring the whole rickety edifice crashing down. Moreover, exposing all of that stringy stuff to other code makes it impossible to do clever Python tricks like passing keyword arguments as dictionaries. So in this case, a little upfront work is definitely worth it.
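For instance, once the stringy stuff is hidden inside run_mel, you can build a flag set once as a dictionary and reuse it - the polyCube flags here are just an illustration:

# build the flags once and pass them wherever they're needed --
# something you can't do with hand-assembled command strings
cube_options = {'name': 'hello', 'width': 10, 'height': 10}
run_mel("polyCube", **cube_options)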

Plus, if you're lazy like me you can import these functions in a module and they'll autocomplete. Fat Fingers FTW!

So, if you find this useful, the complete code is up on Github.

Note 1: If you're a real MEL-head you may have noticed one limitation in the run_mel implementation above. MEL allows multi-use flags, for commands like:

ls -type transform -type camera

However the function here doesn't try to format arguments that way - partly because it's a relatively rare feature in MEL, but mostly because it doesn't occur in the places I've needed to wrap MEL commands. It would not be hard to extend the function so you could annotate some flags as being multi-use - if you give it a whirl, let me know and I'll post it for others to see.
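If you do give it a whirl, one hypothetical approach (an untested sketch building on the run_mel above) is a small marker class that tells the wrapper to repeat a flag for each value:

import maya.mel

class MultiUse(tuple):
    # wrap a flag value in MultiUse to repeat the flag for each item,
    # e.g. run_mel_multi("ls", type=MultiUse(("transform", "camera")))
    pass

def run_mel_multi(cmd, *args, **kwargs):
    unpack = lambda v: v if hasattr(v, '__iter__') else (v,)
    quoted = lambda q: '"%s"' % str(q)
    output = []
    for k, v in kwargs.items():
        if isinstance(v, MultiUse):
            # emit the flag once per value: -type transform -type camera
            for item in v:
                output.extend(("-%s" % k, item))
        else:
            output.append("-%s" % k)
            if not v in (True, False):
                output.extend(unpack(v))
    output.extend(args)
    return maya.mel.eval("%s(%s)" % (cmd, ",".join(map(quoted, output))))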

Note 2: The GitHub repo also has another module which uses the same basic idea (but a slightly different code structure) to wrap that stupid FBX plugin.

by Steve Theodore (noreply@blogger.com) at October 28, 2014 07:09 AM


October 27, 2014

If you hear “perception is reality” you’re probably being screwed

I was once told in a performance review that “perception is reality.” I was infuriated, and the words stuck in my mind as the most toxic thing a manager could say to an employee. I have avoided writing about it, but the “This American Life” episode about Carmen Segarra’s recordings at the Fed has inspired me to change my mind. Here’s the relevant section, emphasis mine:

Jake Bernstein: Carmen says this wasn’t an isolated incident. In December– not even two months into her job– a business line specialist came to Carmen and told her that her minutes from a key meeting with Goldman executives were wrong, that people didn’t say some of the things Carmen noted in the minutes. The business line specialists wanted her to change them. Carmen didn’t.

That same day, Carmen was called into the office of a guy named Mike Silva. Silva had worked at the Fed for almost 20 years. He was now the senior Fed official stationed inside Goldman. What Mike Silva said to Carmen made her very uncomfortable. She scribbled notes as he talked to her.

Carmen Segarra: I mean, even looking at my own meeting minutes, I see that the handwriting is like nervous handwriting. It’s like you can tell. He started off by talking about he wanted to give me some mentoring feedback. And then he started talking about the importance of credibility. And he said, you know, credibility at the Fed is about subtleties and about perceptions as opposed to reality.

Well shit, if that doesn’t sound familiar. Here I was, doing work that was by all measures extremely successful, yet pulled into a feedback meeting to be told “perception is reality.”

Let me tell you what “perception is reality” means, and why you should plan on leaving your job the moment you hear it:

The arbitrary opinions of your manager’s manager define your situation. And they don’t like what you’re doing.

Your manager may be well-meaning (mine was, as was Mike Silva), but at the point you get this “perception is reality” bullshit, you can be sure there’s nothing that they are going to do to help you. Someone above them has taken notice, your manager has probably heard hours of complaints, and you can either shut up or get out. Perception isn’t reality everywhere; it is only the mantra in sick organizations totally removed from reality.

by Rob Galanakis at October 27, 2014 12:36 PM


October 25, 2014

Nexus Distributed Rendering plugin

Today is a great day! Finally I can share with you all what I’ve been working on along with my good friends Daniel Santana and Jonathan de Blok! Nexus DBR is a plugin available for 3ds Max users, based on the Nexus Framework.

Nexus DBR plugin allows fast iterations on renders without the hassle of network rendering while using your render farm in an interactive workflow. Some of its main features include:

  • Easy scene file distribution
  • Works with any commercially available renderer
  • Does not block the application so you can work while rendering, no extra overhead

But words are just words! So here’s a video showing it all :) Hope you like it, and if you are interested in knowing more about pricing and licensing, contact us at info@youcandoitvfx.com


by Artur Leao at October 25, 2014 07:03 PM


October 24, 2014

Another one bites the dust. A bunch of #inktober in this one




by at October 24, 2014 11:21 PM


October 23, 2014

Technical debt takes many forms

Most people are familiar with “technical debt” in terms of code or architectural problems that slow down development. There are other forms of technical debt, though, that can be overlooked.

Dead Code: There are endless “dead code as debt” scenarios. You have a “live” function that is only used from dead code, hiding the fact that this function is also dead (this situation is cancerous). Every time you grep, you have to wade through code that is dead. Every time someone stumbles across the dead code, they need to figure out if it’s dead or alive. There’s no reason for any of this (especially not “keep it around as reference”). Dead code is a debt, but it’s also easy to pay back. Remove your dead code.

Unused Repos or Branches: Every time a new person starts, they will ponder what code is dead and what is alive. This pondering includes code, issues, and documentation. It is sloppy and unnecessary. Put unused repositories in cold storage. Delete stale branches.

Large Backlog: The larger the backlog, the worse the experience of using it. It’s harder to find, reclassify, and prioritize. Some developers will not even bother. A backlog is not a place for everyone to list anything they think should ever be done. Close stale and low-priority tickets. Close “symptom” tickets that you know won’t be addressed until a system is rewritten. Close everything except 3 months of work (and manage further out work on your roadmap, which should not be in your backlog).

Dirty Wikis/Documentation: Why out of date documentation is harmful should be pretty self-explanatory. Don’t let it get that way (or just delete the documentation). Make documentation someone’s responsibility, or make it part of the “definition of done.”

Every organization has these things. Recognizing them as debt, and thus detrimental to development, can perhaps simplify any argument about what to do about them.

by Rob Galanakis at October 23, 2014 12:55 PM


October 22, 2014

Ramahan Faulk New Mentor, New Course at Rigging Dojo

We are excited and happy to welcome an old friend, Ramahan Faulk, recently of Digital Domain, to Rigging Dojo as our new modeling Mentor! Ramahan will be teaching a course on professionally built topology for performance-based deformations. A must-take course for modelers who need to build for animation, as well as character TDs needing […]

The post Ramahan Faulk New Mentor, New Course at Rigging Dojo appeared first on Rigging Dojo.

by Rigging Dojo at October 22, 2014 03:35 PM


October 19, 2014

PracticalMayaPython: RuntimeError: Internal C++ object (PySide.QtGui.QStatusBar) already deleted.

TLDR: If you get that error for the code on page 163, see the fix at https://github.com/rgalanakis/practicalmayapython/pull/2/files

In August, reader Ric Williams noted:

I’m running Maya 2015 with Windows 7 64-bit. On page 163 when we open the GUI using a Shelf button, the GUI status bar does not work, (it works when called from outside Maya). This RuntimeError appears: RuntimeError: Internal C++ object (PySide.QtGui.QStatusBar) already deleted.

I no longer have Maya installed so I couldn’t debug it, but reader ragingViking (sorry, don’t know your real name!) contributed a fix to the book’s GitHub repository. You can see the fix here: https://github.com/rgalanakis/practicalmayapython/pull/2/files
And you can see the issue which has more explanation here: https://github.com/rgalanakis/practicalmayapython/issues/1

Thanks again to Ric and ragingViking. I did my best to test the code with various versions of Maya but definitely missed some things (especially those which required manual testing). If you find any other problems, please don’t hesitate to send me an email!

by Rob Galanakis at October 19, 2014 05:30 PM