Planet Tech Art
Last update: March 14, 2014 11:59 AM
March 14, 2014

River WIP

River sim I did a few months back for fun. I don’t remember all the specs exactly (I’ve since removed the cache files). The sim ran overnight at about 30 million particles, and the renders were pretty quick too (Mantra). It’s using a custom vorticity setup I’ve been using for a while (it tracks with the fluid and fades out over time) for the ‘foamy’ areas. I did take the render one step further with the shader (added volume under the surface, better driven by my vorticity attributes), but hadn’t gotten around to rendering a full sequence. I’ll probably re-sim / re-render soon enough.

by Ian at March 14, 2014 06:27 AM

March 13, 2014

Speaking at i3D and GDC 2014

I’ll be speaking at i3D Symposium and GDC in San Francisco in a couple of days.

At i3D, I’m on the Industry Panel (Sunday at 11:00 AM): Jason Mitchell (Valve) will run a panel on the scalability challenges inherent in shipping games on a diverse range of platforms. Panelists are Michael Bukowski (Vicarious Visions), Jeremy Shopf (Firaxis), Emil Persson (Avalanche) and yours truly.

My first i3D, can’t wait to see what it is about!

At GDC, a bunch of talks on Tuesday:

The Unity talks will probably get a shorter repeat at the Unity expo booth on Wednesday.

See you there!

by at March 13, 2014 05:38 AM

scntocEditor

I just put a small side-project on GitHub which should make it a tad easier for me to handle scntoc files at work. I thought that since I was going to write a dedicated module for this anyway, I might as well share it. So I present to you: scntocEditor.

It still has no documentation of any sort, and the GUI could use about a thousand things to make it really nice. But let’s take it one step at a time.

by at March 13, 2014 12:00 AM

March 12, 2014

GDC 14: The Quest For Fun

Next week is that wondrous time of GDC again and boy am I looking forward to saying hi to old friends, meeting new ones and absorbing some great talks.

Unlike my previous visits, however, this time I attend representing not an employer, but myself as a freelancer. Freelancing in Q3 and Q4 of 2013 was filled with lots of fun and great challenges, so my mission at this year’s GDC will be to build my schedule to make 2014 at least as much fun.

If you have a task or project you think we could do some good work on together, want to talk about Behave for a bit, or just want to say hi, please do not hesitate to give me a ping.

Shortly after arrival I will pick up a prepaid SIM, so I am reachable via Twitter or the GDC app PM system.

I’ll also be co-hosting the IGDA Unity and AI SIG meetings Wednesday (Unity SIG) and Friday (AI SIG). Do stop by if you have an interest in either of those topics.

In any case, if you’re attending GDC I hope you have a good time – I know I plan to.

by at March 12, 2014 11:00 PM

ZBrush Speedsculpt: "Hydrosol" Cargo ship.

Theme this week was 'starship' so I made something up. I didn't want to do some big weapon-covered starship, so I went with a cargo freighter design.

"Hydrosol", sculpt time 50 minutes, although I took 10 minutes to sketch out a rough idea before I started.

 Solids on top, liquids down the bottom.

by Peter Hanshaw (noreply@blogger.com) at March 12, 2014 09:05 PM

Red9 goes all GIT!

Well, this has been asked for by many of you for a while, so I'm finally sorting it out and pushing the Red9 repository up onto GitHub. I've always used a private SVN server, but this should give people more exposure to the changes happening on a daily basis.

https://github.com/markj3d/Red9_StudioPack

Still getting my head round Git, so bear with me.

Mark

by Mark Jackson (noreply@blogger.com) at March 12, 2014 05:20 PM

Descriptors and pythonic Maya properties

I'm still working on the followup to Rescuing Maya GUI From Itself, but while I was at it, this StackOverflow question made me realize that the same trick works for pyMel-style property access to things like position or rotation.  If you're a member of the anti-pyMel brigade you might find this a useful trick for things like pCube1.translation = (0,10,0). Personally I use pyMel most of the time, but this is a good supplement or alternative for haterz, or for special circumstances where pymel is too heavy.

As you can see from the example, it's about as simple as it can get thanks to the magic of descriptors. This example spotlights one advantage of descriptors over getter/setter property functions: by inheriting the two classes (BBoxProperty and WorldXformProperty) I can get 4 distinct behaviors (world and local, read-write and read-only) with very little code and no if-checks.
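The original gist (with BBoxProperty and WorldXformProperty) isn't preserved in this feed, but the mechanism is small enough to sketch. This is a simplified, illustrative version: one descriptor class forwarding attribute access to an `xform`-style command. The `xform` function below is a tiny stand-in for `maya.cmds.xform` so the sketch runs outside Maya; inside Maya you'd call the real command.

```python
_scene = {}  # stand-in for Maya's scene graph: node name -> flag values

def xform(node, q=False, ws=True, **flags):
    """Tiny stand-in for maya.cmds.xform, covering only what we use here."""
    attrs = _scene.setdefault(node, {})
    flag = next(iter(flags))
    if q:  # query mode: return the stored value (default to the origin)
        return attrs.get(flag, (0.0, 0.0, 0.0))
    attrs[flag] = flags[flag]  # edit mode: store the new value

class XformProperty(object):
    """Descriptor that turns attribute get/set into xform calls."""
    def __init__(self, flag):
        self.flag = flag

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return xform(instance.name, q=True, ws=True, **{self.flag: True})

    def __set__(self, instance, value):
        xform(instance.name, ws=True, **{self.flag: value})

class Transform(object):
    # Declared at class level: that's where descriptors live.
    translation = XformProperty('t')
    rotation = XformProperty('ro')

    def __init__(self, name):
        self.name = name

cube = Transform('pCube1')
cube.translation = (0, 10, 0)   # routes through xform(..., t=(0, 10, 0))
print(cube.translation)         # prints: (0, 10, 0)
```

Subclassing `XformProperty` (say, an object-space variant, or a read-only one that omits `__set__`) is how the two-classes-four-behaviors economy mentioned above falls out.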
You may note the absence of a __metaclass__. In this case, with only a single class, a meta would be an unnecessary complication. Meanwhile, the code for the Maya GUI stuff is up on GitHub, though it is definitely not ready for use and is still changing rapidly. It is/will be available under the MIT License. Comments and/or contributions welcome!

by Steve Theodore (noreply@blogger.com) at March 12, 2014 05:55 AM

Who is using PyPy?

I brought up the idea of using PyPy for an internal service we are building at work, and was asked what actual projects are using PyPy. I had no answer and couldn’t find one. Are there any well-known projects using PyPy? Do you know of anyone that is using it to run small servers, even? I find PyPy hugely exciting, and I know its current limitations and issues (as well as its benefits), but I’m wondering if anyone’s used it yet and what their experience has been.

by Rob Galanakis at March 12, 2014 01:04 AM

March 11, 2014

Value variation.

Today we'll look at a very simple way to generate a ‘random’ value. Of course it's not truly random, but it does the job for an artist's sake.

What will happen if you plug a constant as your texture coordinates?
Every single pixel of your constant has the same value, so they're all going to pick the same pixel. That means you'll get a constant value as an output as well.

If you're after a flicker, simply add a panner in the middle. You'll move the texture around so the value of the pixel in the centre will be different over time, and you get a fluctuating value. You wanna pan in both U and V to avoid repetitive patterns.

The rhythm of a malfunctioning neon light could be something like this:
(The texture is simply an inverted cell pattern effect from After Effects with some levels to darken it.)

If you want some mad flicker, use a very contrasted texture and pan it faster.
If you want some very soft variations, use some very soft gradients that you'll pan slowly.
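The whole node graph boils down to one line of math. Here's a sketch of it in Python to make the behavior concrete; `soft_texture` is a stand-in for whatever smooth pattern you feed the sampler, and the speeds and base UV are illustrative numbers:

```python
import math

def soft_texture(u, v):
    """Stand-in for a smooth, tiling grayscale texture in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * u) * math.cos(2 * math.pi * v)

def flicker(time, speed_u=0.31, speed_v=0.17, base_uv=(0.5, 0.5)):
    """Constant UV + panner: sample a moving point on the texture."""
    # Unrelated U and V speeds keep the rhythm from visibly repeating.
    u = (base_uv[0] + speed_u * time) % 1.0
    v = (base_uv[1] + speed_v * time) % 1.0
    return soft_texture(u, v)

# A harsher texture and faster pan speeds give a madder flicker;
# softer gradients panned slowly give gentle variation.
values = [round(flicker(t * 0.25), 3) for t in range(8)]
print(values)
```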

Unfortunately, I haven't been able to record any of this because the framerate issues make it useless.

by mkalt0235 (noreply@blogger.com) at March 11, 2014 09:19 PM

A little bit of innovation

If the recent demise of XSI has you all depressed about the state of 3d software, here's a little preview of a new Modo feature that will make some modelers very happy:

http://community.thefoundry.co.uk/store/plugins/meshfusion/

by Steve Theodore (noreply@blogger.com) at March 11, 2014 02:00 PM

Perforce: Delete empty change lists using Python

Some of the tools I've written allow a user to check out chains of files: Maya files as well as any exported FBX files and their Unity metadata, all in one nice pass. This is great, but it has one annoying side effect: if the user presses the button twice, the files are added to another changelist, but the one they were just in stays there, empty. Press it a number of times and suddenly you have an army of zombie phantom changelists whose only purpose is to clutter up your pending list and annoy you.

I have been using the P4Python API and found it to be... well... it's kinda crap. It would be great if it had a 'delete all empty pending changelists' function, but it doesn't seem to. So I wrote one of my own.

Beware! If you don't like hacking output out of strings the following code will make you cringe...

 There's my target...
import subprocess

"""Get a list of changelists from the command line. Try to delete them (will delete if empty)"""

# sInfo = subprocess.STARTUPINFO() # Use this instead of the code below to hide the output window. Kinda handy if you don't like annoying artists.
# sInfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW # Hide the cmd window
# p = subprocess.Popen(command, stdout=subprocess.PIPE, startupinfo=sInfo) # Same as below, but no cmd window.

workspace = 'YourWorkspace'

# Get the pending changes on the local client using the P4 console commands and reading the output.
command = "p4 changes -s pending -c %s" % workspace
p = subprocess.Popen(command, stdout=subprocess.PIPE)
temp = p.stdout.read()

# Split the block of text at the line endings. This should give you one changelist per line.
raw_data = temp.split('\n')
print raw_data

# Now go through each line and split the text at the spaces.
# This should end up with something that reads like ['Change', '99999', 'on', etc etc...]
for line in raw_data:
    target = line.rstrip().split(" ")
    if len(target) > 1:  # Skip any list with single elements. That aint got what we want.
        changeNum = target[1]  # The changeNum should be the second element
        command = 'p4 change -d %s' % (changeNum)  # Attempt to delete the changelist. P4 refuses to delete changelists with items.
        p = subprocess.Popen(command, stdout=subprocess.PIPE)  # Lets get some info back again.
        temp = p.stdout.read()  # Tell us about what you just did.
        print temp  # Good boy.

 The Result... poor 76610 got what was coming to him...

Output will be something like this:
["Change 76610 on 2014/02/06 by ****@**** *pending* '[EmptyChangeListToNuke] '\r",
"Change 76607 on 2014/02/06 by ****@**** *pending* '[ToolDev] Stuff '\r", '']

Change 76610 deleted.
Change 76607 has 35 open file(s) associated with it and can't be deleted.
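If the string hacking above makes you cringe, the parsing step can at least be pulled into a small function and fed canned output for testing. This is a sketch along the same lines (still shelling out to p4, not P4Python); `empty_candidates` is a hypothetical helper name, and `p4 change -d` still does the real work, since it refuses to delete non-empty changelists anyway:

```python
def empty_candidates(changes_output):
    """Pull change numbers out of `p4 changes -s pending -c <client>` output.

    Every candidate gets fed to `p4 change -d`, which only deletes the
    empty ones, so there's no need to work out emptiness here.
    """
    numbers = []
    for line in changes_output.splitlines():
        parts = line.split()
        # Lines look like: Change 76610 on 2014/02/06 by user@ws *pending* '...'
        if len(parts) > 2 and parts[0] == 'Change' and parts[1].isdigit():
            numbers.append(parts[1])
    return numbers

sample = (
    "Change 76610 on 2014/02/06 by user@ws *pending* '[EmptyChangeListToNuke] '\r\n"
    "Change 76607 on 2014/02/06 by user@ws *pending* '[ToolDev] Stuff '\r\n"
)
print(empty_candidates(sample))  # prints: ['76610', '76607']
```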

by Peter Hanshaw (noreply@blogger.com) at March 11, 2014 04:48 AM

March 10, 2014

Rescuing Maya GUI from itself

Last time out was devoted to a subject most TA's already know: the shortcomings of Maya's native GUI. This time we're going to start looking at ways to rescue Maya from itself.

And if you don't know what that picture is there, go here - this tech-art stuff is not as important as a good understanding of Thunderbirds.

With that out of the way:

Any good rescue mission starts with objectives. The three main drawbacks to coding Maya GUI natively are nasty syntax, clunky event handling, and difficult management. In today's thrill-packed episode, we're going to lay some foundations for tackling that old-school syntax and dragging Maya GUI kicking and screaming into the 21st century.

Under the surface

Composing a Maya GUI in code is annoying because the only way to access the properties of a Maya GUI node is via a command - there's no way to get at the properties directly without a lot of command mongering.

Sure, the purist might say that alternatives are just syntax sugar - but Maya GUI's drawbacks are (a) an obstacle to readability (and hence maintenance) and (b) such a big turn-off that people don't bother to learn what native GUI can do.  This is particularly true for formLayouts, which are the most useful and powerful - and also the least handy and least user-friendly - way of laying out controls in Maya.  All the power is no use if you just stick with columnLayouts and hand-typed pixel offsets because setting things up takes a whole paragraph's worth of typing.

So, the first thing I'd like to ponder is how to cut out some of the crap.  Not only will a decent wrapper be more pleasant to read and write - at some point in the future, when we get to talk about styling controls, real property access will be a big help in keeping things tidy.  Plus, by putting a wrapper around property access we'll have a built-in hook for management and for cleaning up event handling as well, even though that's a topic for a future post.

The upshot of it all: we're stuck with the under-the-hood mechanism, but there's no reason we can't wrap it in something prettier. Consider this simple example:

This is plain-vanilla Python properties in action. It's easy to extend it so you can set 'Label' also:
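The embedded examples don't survive in this feed, so here's a reconstruction sketch of the idea: a wrapper class exposing the label through a real Python property, with both getter and setter. The `button` function below is a tiny stand-in for `maya.cmds.button` (create / query / edit of the label flag only) so the sketch runs anywhere; in Maya you'd use the real command.

```python
_ui = {}  # stand-in widget store: widget name -> flag values

def button(name=None, q=False, e=False, label=None):
    """Tiny stand-in for maya.cmds.button."""
    if name is None:  # create mode: make a widget and return its name
        name = 'button%d' % (len(_ui) + 1)
        _ui[name] = {'label': label or 'button'}
        return name
    if q:  # query mode
        return _ui[name]['label']
    if e and label is not None:  # edit mode
        _ui[name]['label'] = label
    return name

class ButtonWrapper(object):
    """Wraps the command-based widget in real Python property access."""
    def __init__(self, label='button'):
        self.widget = button(label=label)

    @property
    def label(self):
        # Instead of typing button(self.widget, q=True, label=True) everywhere...
        return button(self.widget, q=True, label=True)

    @label.setter
    def label(self, value):
        button(self.widget, e=True, label=value)

b = ButtonWrapper('press me')
b.label = 'pressed!'   # ...it's plain foo.bar = baz
print(b.label)         # prints: pressed!
```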

Rescuing the rescuers

While this is a nice trick, it doesn't take long to figure out that replacing the whole Maya GUI library with this will take a lot of annoying, repetitive, and typo-prone code. button alone has 34(!) properties to manage, and real offenders like rowLayout have a lot more. Writing wrappers for all of these is a huge waste of valuable human brainpower.

Luckily, that's not the end.  Property objects are really instances of Python descriptors, which means we have some more options for creating them.

The official docs on descriptors are kind of opaque, but the link I shared above to Chris Beaumont's article on properties and descriptors does a great job of explaining what they do: which is, in a nutshell, to provide property-like services in the form of class-level objects. (Update: here's another great explanation in the form of a five-minute video.) Instead of defining methods and decorating them as we did above, you create a class which handles the function-to-property behavior (both getting and setting) and stick it directly into your own class namespace, the same way you would place a def or a constant. (As an aside, this placement is why the CMD field in the example is a class field rather than a hard code or an instance property - it makes it easy for the descriptor to call the right cmds function and flags.  We could make a separate class for cmds.floatField, for example, swapping out only the class-level CMD parameter, and it would 'just work' the same way.)

The gotcha to bear in mind with descriptors is that they are separate objects that live in the class, not instance members.  You don't create them inside your __init__, you declare them in the class namespace. They don't belong to individual instances - that's why in the example below you'll notice that self refers to the descriptor itself, and not to the ExampleButton instance (this is how each descriptor in the example below remembers how to format its own call to the maya command under the hood).

The "bad" part of that is that you the descriptor is ignorant of the class instance to which it is attached when you call it. You need to pass the instance in to the descriptor, as you'll see in the example below. The good part, on the other hand,  is that the descriptor itself can (if need be) have a memory of its own - that's why the descriptors in the next example can remember which flags to use when they call the underlying Maya GUI commands.

While this sounds scary, it's mostly a minor mental adjustment - once you do it a couple of times it will be routine.  And all the oddness is concentrated in the definition of the descriptor objects themselves - once the descriptor is actually declared, you access it just as if it were a conventional instance property and all is plain-jane foo.bar = baz.

Here's the button example re-written with a couple of descriptors:
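That example is also lost to the feed, but the shape of it is roughly this - a reconstruction sketch, again with a stand-in `button` command so it runs outside Maya. `CtlProperty` here matches the name the post uses; everything else is illustrative:

```python
_ui = {}  # stand-in widget store

def button(name=None, q=False, e=False, **flags):
    """Stand-in for maya.cmds.button, handling arbitrary flags."""
    if name is None:
        name = 'button%d' % (len(_ui) + 1)
        _ui[name] = dict(flags)
        return name
    flag = next(iter(flags))
    if q:
        return _ui[name].get(flag)
    if e:
        _ui[name][flag] = flags[flag]
    return name

class CtlProperty(object):
    """Descriptor: routes attribute get/set through a GUI command flag."""
    def __init__(self, flag, cmd):
        self.flag = flag
        self.cmd = cmd  # the descriptor remembers which command to call

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return self.cmd(instance.widget, q=True, **{self.flag: True})

    def __set__(self, instance, value):
        self.cmd(instance.widget, e=True, **{self.flag: value})

class ExampleButton(object):
    CMD = staticmethod(button)  # class field, shared by all instances

    # Two lines of data-driven code instead of six method definitions.
    label = CtlProperty('label', button)
    command = CtlProperty('command', button)

    def __init__(self, **flags):
        self.widget = button(**flags)

b = ExampleButton(label='press me')
b.label = 'pressed!'
print(b.label)  # prints: pressed!
```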

That's more like it - only two lines of data-driven code where we used to have six (well, not counting CtlProperty - but that's a one-time cost to be spread out over scads of different GUI classes later). It's a lot easier to read and understand as well, and contains far fewer opportunities for typos.

But... we're still talking 34 lines like that for cmds.button, and God knows how many for cmds.rowColumnLayout.

Sigh.

Act III

No rescue drama is complete without a false climax, and this is ours.  Despite the ominous music just before the commercial, the situation is not really that bad.  The last example shows that the problem is not really one of code any more, it's just data.  Since descriptors are objects, you can crank them out like any other object: provide a list of the appropriate flags for a given class and you can crank out the correct descriptors, as they say, "automagically."

As long as you promise not to use that stupid word around me.

Fortunately for our rescue team, Python treats classes the same way it treats anything else: as objects that can be created and manipulated. If you use the Python builtin type on any Python class, you'll get back
<type 'type'>
In other words, a Python class definition is itself an instance of the class 'type'. How... meta.

The reason this matters to us is that we can fabricate classes the same way we fabricate other kinds of instances.  You could do this by hand, creating type instances and filling them out yourself: type takes three arguments: a string name, a tuple of parent types, and a dictionary of named fields and properties. Thus:
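For example (the names here are illustrative, but the three-argument `type` call is the stock builtin):

```python
# type(name, bases, namespace): the same machinery the class statement uses.
Widget = type('Widget', (object,), {
    'kind': 'button',                                   # a class constant
    'describe': lambda self: 'a %s widget' % self.kind  # a method
})

w = Widget()
print(w.describe())           # prints: a button widget
print(type(Widget) is type)   # prints: True - the class is itself a type instance
```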

However, this would send you down a rabbit hole: the idea we're really chasing is a way to mass-produce classes to make UI coding easier, and that would not be very easy if all of the classes had to be coded up in this clunky way. Luckily Python has an obscure but extremely powerful mechanism designed for just this sort of problem. Because, you know, it's the language of geniuses.

"Brains, Activate the Metaclass"

The helpful MacGuffin in this case is the Metaclass.  Metaclasses have a reputation - not entirely undeserved - as deep voodoo.  The most commonly circulated quote about them is that "If you can solve the problem without a metaclass, you should."

However, in our case we really can't solve the problem without some form of class factory.  In particular, we need a way to bang out classes with the right collection of Descriptors to cover all of the zillions of flags in the Maya GUI system.  So just this once we can put on the big blue glasses and lab coats and venture into the super secret lair of the mad metaclass scientists.

The job of a metaclass is to customize the act of class creation. When a class is first defined, Python will pass the type it creates (that same object we played with in the last example) to the metaclass for further manipulation.   The __new__ function of the metaclass will be called on the just-defined type, taking its name, parents and internal dictionary as arguments.  The __new__ can fiddle with any of these as it sees fit before passing it along for actual use.

As you can imagine, this is a good time for PythonMan's Uncle Ben to remind us that 'with great power comes great responsibility' -- it's easy to shoot yourself in the foot with a Metaclass, since you can make changes to the runtime versions of your classes that will not be represented in your source files. Don't just run off and meta all over everything in sight.  A minimalist approach is the best way to stay sane.

But you'd probably like to see what this really looks like in practice. Here's an example.

The actual code is pretty simple. It takes the type object created by the 'real' class and grabs the contents of the CMD class field (remember that from the earlier examples?). Then it loops through its own list of flag names and inserts them all into the new class as descriptors, each wired up with the right flag and the Maya command that was stored in the CMD field. So our earlier button example becomes:
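The embedded example doesn't survive here either, so this is a reconstruction sketch of the mechanism. The flag list is an illustrative subset (the real one was trolled from cmds.control), the command stub stands in for maya.cmds, and it uses the Python 3 `metaclass=` argument where the 2014 original would have used the `__metaclass__` field:

```python
_ui = {}  # stand-in widget store

def control(name=None, q=False, e=False, **flags):
    """Stand-in for a maya.cmds GUI command handling arbitrary flags."""
    if name is None:
        name = 'ctl%d' % (len(_ui) + 1)
        _ui[name] = dict(flags)
        return name
    flag = next(iter(flags))
    if q:
        return _ui[name].get(flag)
    if e:
        _ui[name][flag] = flags[flag]
    return name

class CtlProperty(object):
    """The descriptor from the earlier example: get/set via a command flag."""
    def __init__(self, flag, cmd):
        self.flag, self.cmd = flag, cmd

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return self.cmd(instance.widget, q=True, **{self.flag: True})

    def __set__(self, instance, value):
        self.cmd(instance.widget, e=True, **{self.flag: value})

# Generic flags shared by all controls (illustrative subset).
CONTROL_FLAGS = ('visible', 'enable', 'width', 'height')

class ControlMeta(type):
    """Inserts a CtlProperty for every generic flag into each new class."""
    def __new__(mcs, name, bases, namespace):
        cmd = namespace.get('CMD')
        if cmd is not None:
            for flag in CONTROL_FLAGS:
                namespace[flag] = CtlProperty(flag, cmd)
        return super(ControlMeta, mcs).__new__(mcs, name, bases, namespace)

class MetaButton(metaclass=ControlMeta):
    CMD = control  # the metaclass reads this to build the descriptors

    def __init__(self, **flags):
        self.widget = control(**flags)

b = MetaButton(width=100)
b.width = 200     # descriptor fabricated by the metaclass
print(b.width)    # prints: 200
print(b.height)   # prints: None (never set)
```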

There is a minor problem with this very truncated example, however: there's no label or command in the metaclass, so the MetaButton has no button-specific properties - only the generic ones in our list (which I made by trolling the flags for cmds.control, the 'base class' of all Maya control commands).

This is easily fixed by adding the button-specific properties to a class field, and tweaking the metaclass to read and use them the same way it already uses the CMD class field.  Like CMD, these are good class-level attributes, since the collection of flags is shared by all buttons, fields, or whatever.

As you can see, extending the automatic analysis is easy now that we know the basic trick. Just add a semi-private class field with the class specific attributes, and away we go!

In our next exciting episode...

I think this pretty much demonstrates that overhauling the Maya GUI toolkit is possible.  However, in its current state it's just a down-payment.

The combination of descriptors and metaclasses is an incredibly powerful tool and it's not hard to see what comes next (it's also easy to imagine similar setups for other problems which suffer from ugly imperative syntax). Now that we have a method for cranking out control widget classes by the bucketload, filling out the class library itself is pretty simple.  There are, though, a few tricks we can use to make it better and less manual, as well as making sure it is complete.  So, in a future outing, we'll tackle a method for replicating the whole Maya command hierarchy in a more or less automatic way.

I'll also try to setup a proper GitHub project so readers can get at the real code more easily.  Once that's out of the way we can talk about the real bugbear - event handling!   All in good time....

Until then, as we say at International Rescue Headquarters:

by Steve Theodore (noreply@blogger.com) at March 10, 2014 06:10 AM

March 09, 2014

GAE Python SDK Installation Notes

This installation log assumes you know what Google App Engine development is all about.  There are links to video resources there too.
The official installation tutorial here goes over the general steps below, which I have amended with my installation experiences (Notes), saved here for future reference or as a guide for Windows developers starting out.  I unexpectedly had a rough go with this seemingly simple tutorial, running into problems setting up the deceptively small list of dependencies.

When you’re done experimenting, you can download a copy of the project. You have two options:

• Use the git clone command to download a copy of the original files in the project. This will create a new project-directory. Make a note of it:
git clone --recursive https://github.com/GoogleCloudPlatform/appengine-django-skeleton

Notes:

This tutorial requires git to be on the PATH

I used the GitHub for Windows client's "Clone in Desktop" option to clone the django skeleton repo from this link:

1. Run locally

Before you start:
• You need to be running Python 2.7
• You must install pip 1.4 or later.
• You must also install mercurial (the build shell uses mercurial’s hg command). If you’re missing hg you can install it with pip install mercurial once you have pip installed

Notes:

You don’t absolutely need mercurial; in fact, it can be a bitch to install in Python.
In my case msvc9compiler.py throws errors similar to:

• Download and unzip the Python SDK. This will create an SDK directory named google_appengine. Make a note of its location, and add it to your PATH.

Notes:

In Windows, if you download the msi, it will automatically check dependencies, install to the default Program Files (x86)/Google/google_appengine directory, and add the path to the PATH system environment variable.

For a completely custom install, I recommend putting the zip file where you choose, but I didn’t go that route.

• App Engine only uses libraries that are inside your project directory. If you downloaded the project as a zip file, it already contains all the needed libraries. If you cloned the original repository, install dependencies in the project’s lib directory by running this command

cd <project-directory>
./build.sh

Notes:

I cloned the project in git and manually ran the commands that were in build.sh, because .sh files are not supported in Windows.  I could’ve made a .bat, but this is probably the only time I’m setting this up locally.

running this command:

pip install --no-install -r requirements.txt

…should put the files in a ./src directory.  Then, if git’s bin directory is in your PATH (see above note), you should have access to the ls, rm and cp commands.

C:\path\to\appengine-django-skeleton>cp -r src\django-dbindexer\dbindexer dbindexer

C:\path\to\appengine-django-skeleton>cp -r src\django-nonrel\django django

C:\path\to\appengine-django-skeleton>cp -r src\djangoappengine\djangoappengine djangoappengine

C:\path\to\appengine-django-skeleton>cp -r src\djangotoolbox\djangotoolbox djangotoolbox

Because of git permissions, I couldn’t do an rm on the src directory itself, so I rm’d each subdirectory in a separate command, then changed dirs to the project directory and removed the src dir.

C:\path\to\appengine-django-skeleton\src>ls

C:\path\to\appengine-django-skeleton\src>rm -rf django-nonrel

C:\path\to\appengine-django-skeleton\src>rm -rf djangoappengine

C:\path\to\appengine-django-skeleton\src>rm -rf djangotoolbox

C:\path\to\appengine-django-skeleton\src>cd ..

C:\path\to\appengine-django-skeleton>rmdir src

• Start a local server for the project with the command:

cd <project-directory>
./manage.py runserver

Notes:

In Windows the command looks like:

python manage.py runserver

The prompt should look similar to this (paths modified to protect the innocent):

C:\path\to\appengine-django-skeleton>python manage.py runserver

WARNING 2014-03-09 16:39:37,085 dev_appserver.py:3570] The datastore file stub is deprecated, and will stop being the default in a future release.

Append the --use_sqlite flag to use the new SQLite stub.

You can port your existing data using the --port_sqlite_data flag or purge your previous test data with --clear_datastore.

WARNING 2014-03-09 16:39:37,085 datastore_file_stub.py:528] Could not read datastore data from C:\path\to\appengine-django-skeleton\.gaedata\datastore

WARNING 2014-03-09 16:39:37,233 simple_search_stub.py:1038] Could not read search indexes from c:\users\%USER%\appdata\local\temp\dev_appserver.searchindexes

It may nag you to allow dev_appserver to check for updates on startup:

Allow dev_appserver to check for updates on startup? (Y/n): y
dev_appserver will check for updates on startup. To change this setting, edit C:\Users\%USER%/.appcfg_nag
INFO 2014-03-09 16:40:48,609 sdk_update_checker.py:241] Checking for updates to the SDK.
INFO 2014-03-09 16:40:49,930 sdk_update_checker.py:269] The SDK is up to date.
WARNING 2014-03-09 16:40:49,931 dev_appserver.py:3570] The datastore file stub is deprecated, and will stop being the default in a future release.
Append the --use_sqlite flag to use the new SQLite stub.

You can port your existing data using the --port_sqlite_data flag or
purge your previous test data with --clear_datastore.

WARNING 2014-03-09 16:40:49,933 datastore_file_stub.py:528] Could not read datastore data from C:\path\to\appengine-django-skeleton\.gaedata\datastore
WARNING 2014-03-09 16:40:49,936 simple_search_stub.py:1038] Could not read search indexes from c:\users\%USER%\appdata\local\temp\dev_appserver.searchindexes
INFO 2014-03-09 16:40:49,947 dev_appserver_multiprocess.py:656] Running application dev~your-application-id-here on port 8000: http://127.0.0.1:8000
WARNING 2014-03-09 16:41:27,661 py_zipimport.py:139] Can't open zipfile C:\Program Files (x86)\Python27\lib\site-packages\pexpect-2.4-py2.7.egg: IOError: [Errno 13] file not accessible: 'C:\\Program Files (x86)\\Python27\\lib\\site-packages\\pexpect-2.4-py2.7.egg'
WARNING 2014-03-09 16:41:27,663 py_zipimport.py:139] Can't open zipfile C:\Program Files (x86)\Python27\lib\site-packages\litecoin_python-0.3-py2.7.egg: IOError: [Errno 13] file not accessible: 'C:\\Program Files (x86)\\Python27\\lib\\site-packages\\litecoin_python-0.3-py2.7.egg'
WARNING 2014-03-09 16:41:27,664 py_zipimport.py:139] Can't open zipfile C:\Program Files (x86)\Python27\lib\site-packages\djangotoolbox-1.4.0-py2.7.egg: IOError: [Errno 13] file not accessible: 'C:\\Program Files (x86)\\Python27\\lib\\site-packages\\djangotoolbox-1.4.0-py2.7.egg'
WARNING 2014-03-09 16:41:27,665 py_zipimport.py:139] Can't open zipfile C:\Program Files (x86)\Python27\lib\site-packages\django_dbindexer-0.3-py2.7.egg: IOError: [Errno 13] file not accessible: 'C:\\Program Files (x86)\\Python27\\lib\\site-packages\\django_dbindexer-0.3-py2.7.egg'
WARNING 2014-03-09 16:41:27,667 py_zipimport.py:139] Can't open zipfile C:\Program Files (x86)\Python27\lib\site-packages\django_autoload-0.01-py2.7.egg: IOError: [Errno 13] file not accessible: 'C:\\Program Files (x86)\\Python27\\lib\\site-packages\\django_autoload-0.01-py2.7.egg'
INFO 2014-03-09 16:41:28,729 __init__.py:44] Validating models…
DEBUG 2014-03-09 16:41:29,026 dev_appserver_import_hook.py:1421] Access to module file denied: C:\Program Files (x86)\Python27\lib\site-packages\win32\lib\win32con.py
INFO 2014-03-09 16:41:29,183 __init__.py:55] All models validated.
INFO 2014-03-09 16:41:31,617 dev_appserver.py:3090] "GET / HTTP/1.1" 200 -
INFO 2014-03-09 16:41:32,197 dev_appserver.py:3090] "GET /favicon.ico HTTP/1.1" 404 -

• Point your browser at http://localhost:8080.

My address turned out to be http://localhost:8000/

That’s it for my additions to the tutorial.

Here’s what it looks like when you’re done:

by Jonas Avrin at March 09, 2014 11:53 PM

The Human Triangulation Experiment, Stage 1

Recently, we lucky saps in the Perceptual Computing Lab have been fortunate enough to be doing some prototypes for different external groups, and I've actually been lucky enough to work with one of my favorite groups, [REDACTED]!  Needless to say, I'm super excited, and one of the first projects I'm working on involves using depth data and object/user segmentation data to interact with virtual/digital content, much like the ever popular kinect/box2d experiments you've probably seen floating around...

Check out more of Stephen's stuff on github or on his website.

Depth buffer, OpenCV, cinder::Triangulator, and Box2D seemed pretty straightforward - I mean, let's be honest, that's creative coding 101, right?  That's what I thought, but as usual the devil's in the details, and after some (not terribly extensive) searching and a little bit more coding I had...eh, well...nothing.  My code looked correct, but no meshes were drawn that day, and even a cursory inspection of my TriMesh2ds showed nary a vertex to be seen.  Here's what I tried originally (this is sketch code, so yeah, the pattern is a little sloppy):

//cv contours are in mContours, mMeshes is vector<TriMesh2d>
for(auto vit=mContours.begin();vit!=mContours.end();++vit)
{
    Shape2d cShape;
    vector<cv::Point> contour = *vit;
    auto pit=contour.begin();
    cShape.moveTo(pit->x,pit->y); ++pit;
    for(/* nothing to see here */;pit!=contour.end();++pit)
    {
        cShape.lineTo(pit->x, pit->y);
        cShape.moveTo(pit->x, pit->y);
    }
    cShape.close();
    Triangulator tris(cShape);
    mMeshes.push_back(tris.calcMesh());
}

Right, so at this point, it should be a simple exercise in gl::draw()ing the contents of mMeshes, yeah?  Sadly, this method yields no trimesh for you!, and as I mentioned above, even a quick call to getNumVertices() revealed that there were, in fact, no vertices for you!, either.  The docs on Triangulator led me to believe that you can just call the constructor with a Shape2d and you should be good to go, and a quick test reveals that constructing a Triangulator with other objects does in fact yield all the verts you could ever want, so methinks maybe it's an issue with the Shape2d implementation, or perhaps I'm building my Shape2d wrong.  I rule the latter out, though (well, not decisively), since Triangulator has the concept of invalid inputs - e.g. if you don't close() your Shape2d, the constructor throws - so...what to do, what to do?  To the SampleCave!

TRIANGULATE AGAIN, ONE YEAR! NEXT!

Mike Bostock, he of d3.js fame, gave a great talk at eyeo festival last year on the importance of good examples (watch it on Vimeo), and you know, it's so true.  It's sorta like documentation: we employ technical writers for that sorta thing, and I feel like we should at least give some folks a solid contract to put together good sample code for whatever we're foisting onto the world, rather than relegating samples to free time and interns (no offense to either free time or interns).  Now Cinder has amazing sample code, so a quick Google search for TriMesh2d popped up the PolygonBoolean sample, which was basically doing what I wanted, i.e. constructing and drawing a TriMesh2d from a Shape2d...kinda.  I trust the good folks at Team Cinder not to ship sample code that doesn't work, so a quick build 'n' run later and I had a solution.  I was sooooo close...

//cv contours are in mContours, mMeshes is vector<TriMesh2d>
for(auto vit = mContours.begin(); vit != mContours.end(); ++vit)
{
    PolyLine2f cShape;
    vector<cv::Point> contour = *vit;
    for(auto pit = contour.begin(); pit != contour.end(); ++pit)
    {
        cShape.push_back(fromOcv(*pit));
    }
    Triangulator tris(cShape);
    mMeshes.push_back(tris.calcMesh());
}

The results?  Well, see for yourself:

My tribute to Harold Ramis, may you never end up in one of your own traps, sir.

Next steps are to maybe run some reduction/smoothing on the contours, although I suppose it doesn't matter terribly for this prototype, and get it into Box2D, all of which I'll cover in Stage 2, including a quick 'n' dirty Cinder-based Box2D debug draw class.  This is awesome, it's total Tron stuff, bringing the real into the digital and all that sort of sorcery.  Once I get the Box2D stuff implemented, I'll stick a project up on github, until then, if you have specific questions, Tag The Inbox or leave a comment below, you are always welcome to try my Shogun Style...for reference, here's the complete update() and draw():
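For the reduction step, something like Ramer-Douglas-Peucker is the usual suspect. Here's a quick standalone sketch using a bare-bones point struct in place of Cinder's types (the struct, function names and epsilon are mine, not from the project):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b.
static double perpDist(const Pt& p, const Pt& a, const Pt& b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0)
        return std::sqrt((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y));
    return std::fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Ramer-Douglas-Peucker: drop points closer than epsilon to the
// simplified polyline. Keeps the endpoints, recurses on the farthest point.
std::vector<Pt> simplify(const std::vector<Pt>& pts, double epsilon)
{
    if (pts.size() < 3)
        return pts;
    size_t worst = 0;
    double worstDist = 0.0;
    for (size_t i = 1; i + 1 < pts.size(); ++i) {
        double d = perpDist(pts[i], pts.front(), pts.back());
        if (d > worstDist) { worstDist = d; worst = i; }
    }
    if (worstDist <= epsilon)
        return { pts.front(), pts.back() };  // everything is near the chord
    // Split at the farthest point and simplify each half.
    std::vector<Pt> left(pts.begin(), pts.begin() + worst + 1);
    std::vector<Pt> right(pts.begin() + worst, pts.end());
    std::vector<Pt> a = simplify(left, epsilon);
    std::vector<Pt> b = simplify(right, epsilon);
    a.pop_back();  // the split point appears in both halves
    a.insert(a.end(), b.begin(), b.end());
    return a;
}
```

Feeding the simplified contour into the Triangulator should cut vertex counts way down before any of the Box2D stuff happens.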

//Using Cinder-OpenCV and Intel Perceptual Computing SDK 2013
void segcvtestApp::update()
{
    mContours.clear();
    mMeshes.clear();
    if(mPXC.AcquireFrame(true))
    {
        PXCImage *rgbImg = mPXC.QueryImage(PXCImage::IMAGE_TYPE_COLOR);
        PXCImage *segImg = mPXC.QuerySegmentationImage();
        PXCImage::ImageData rgbData, segData;
        if(rgbImg->AcquireAccess(PXCImage::ACCESS_READ, &rgbData) >= PXC_STATUS_NO_ERROR)
        {
            mRGB = gl::Texture(rgbData.planes[0], GL_BGR, 640, 480);
            rgbImg->ReleaseAccess(&rgbData);
        }
        if(segImg->AcquireAccess(PXCImage::ACCESS_READ, &segData) >= PXC_STATUS_NO_ERROR)
        {
            mSeg = gl::Texture(segData.planes[0], GL_LUMINANCE, 320, 240);
            segImg->ReleaseAccess(&segData);
        }

        mSrcSurf = Surface(mSeg);
        ip::resize(mSrcSurf, &mDstSurf);
        mPXC.ReleaseFrame();
    }

    cv::Mat surfMat(toOcv(mDstSurf.getChannelRed()));
    cv::findContours(surfMat, mContours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

    for(auto vit = mContours.begin(); vit != mContours.end(); ++vit)
    {
        PolyLine2f cLine;
        vector<cv::Point> contour = *vit;

        for(auto pit = contour.begin(); pit != contour.end(); ++pit)
        {
            cLine.push_back(fromOcv(*pit));
        }

        Triangulator tris(cLine);
        mMeshes.push_back(tris.calcMesh());
    }
}

void segcvtestApp::draw()
{
    // draw camera feed
    gl::clear(Color(0, 0, 0));
    gl::color(Color::white());
    gl::draw(mRGB, Vec2f::zero());

    //draw meshes
    gl::enableWireframe();
    gl::color(Color(0, 1, 0));
    for(auto mit = mMeshes.begin(); mit != mMeshes.end(); ++mit)
    {
        gl::draw(*mit);
    }
    gl::disableWireframe();
}

by Seth Gibson (noreply@blogger.com) at March 09, 2014 09:33 PM

March 08, 2014

March 2014 VHUG

I presented some Flip and Point Advection stuff at the March 2014 Vancouver Houdini User Group. The files are here for anybody who was there or anybody interested:

VHUG file  (200 something megs. This has the hip file and also the quicktimes and one frame of geo from the rest field setup)

Fliptricks hip file

Pyro Point Advect / Color Volume file

I have two hip files. One is a vorticity solver and rest field setup for flip. The vorticity fades out over time, and the rest fields (which is done post flip sim) can add tons of extra detail at rendertime. Some fun stuff to try: you can also use vorticity to drive density or viscosity and create some neat bubbles or foam or even ‘oobleck’ where the fluid can become more viscous as it becomes more stressed.
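For the ‘oobleck’ idea, the mapping can be as simple as a clamped ramp from vorticity magnitude to viscosity. A standalone sketch of that ramp (function name and thresholds are illustrative, not from the hip file):

```cpp
#include <algorithm>

// Map a per-particle vorticity magnitude to a viscosity value:
// below 'lo' the fluid keeps its base viscosity, above 'hi' it is
// fully thickened, with a linear ramp in between (shear-thickening
// "oobleck" behaviour).
double viscosityFromVorticity(double vortMag, double baseVisc,
                              double maxVisc, double lo, double hi)
{
    double t = (vortMag - lo) / (hi - lo);
    t = std::clamp(t, 0.0, 1.0);
    return baseVisc + t * (maxVisc - baseVisc);
}
```

In Houdini you would of course do this per point in a wrangle, but the shape of the curve is the whole trick.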

The second is a setup that uses Gas Advect to advect points along with a high velocity pyro sim. The pop advect by volumes doesn’t work well with high velocities at your source, so I am using a gas advect node instead. There are a few issues I didn’t quite work out with this setup in houdini 13 (needed more than one substep to get it accurate), but I think it’s working pretty well regardless. Prior to 13 I would merge points, create my ids, etc… essentially creating my own particle system. I do also have another few tricks in this file — such as getting a normal on the points from the gradient of the density field, and then I also take the points and use their colors to color the pyro as well. The examples aren’t the greatest artistically, but should get the point across (and I realize I should have made the terrain darker in the viewport). I’ve used setups like this in production and both have proved extremely useful. And as a last note, the files aren’t perfect, I did them rather quickly in my spare time, so there are likely some mistakes in them or things that can be improved.  :)

by Ian at March 08, 2014 07:52 AM

March 07, 2014

Spring Break Madness – 30% off our special Rigging workshop

It is March and maybe it is spring break fever or the excitement of GDC or a little luck of the Irish but we are offering a 30% rebate for …

The post Spring Break Madness – 30% off our special Rigging workshop appeared first on Rigging Dojo.

by Rigging Dojo at March 07, 2014 04:20 PM

New Pose Saver tools and features

Well it's been a while since I did a demo so here's one that goes through the upgrades to the poseSaver in v1.41.

New PoseBlending: Pose Blending is a new feature that allows you to mix in a percentage of any pose to the current state of your controllers. When you RMB>PoseBlend you get a new slider UI that controls the mix. Note that when this slider is launched the current state of the rig is CACHED, so please be aware of that. I had to do this to get the slider to react fast enough to make it worth doing.

New MaintainParents: Another big update to the poseLoader. This one allows the pose code to 'hold' or maintain a given set of attributes during pose load. This not only returns the given attrs back to their current state prior to loading the pose, but it also recalculates the pose at the same time. This means that even if a pose is stored with all the controllers in one parent space and your current controller is in a different space, the original stored pose will still be reached, but it'll be recalculated in the current space ;)

This relies on the 'relative space' flag and is only available in this mode as I use this mechanism to do the space compensation.
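The recalculation itself boils down to re-expressing the stored local pose through the old parent space and back out of the new one. Here's a deliberately translation-only sketch of the idea (the real poseSaver of course works on full transforms inside Maya; all names here are mine):

```cpp
// Translation-only sketch of relative-space pose recalculation:
// a pose stored while the controller lived under parent A is
// re-expressed under parent B so the controller still reaches the
// same world-space position. Real rigs do this with full 4x4
// transforms; the idea is identical.
struct Vec3 { double x, y, z; };

Vec3 recalcPose(const Vec3& storedLocal, const Vec3& parentAtStore,
                const Vec3& parentNow)
{
    // World position the stored pose originally described...
    Vec3 world{ parentAtStore.x + storedLocal.x,
                parentAtStore.y + storedLocal.y,
                parentAtStore.z + storedLocal.z };
    // ...re-expressed relative to the current parent space.
    return Vec3{ world.x - parentNow.x,
                 world.y - parentNow.y,
                 world.z - parentNow.z };
}
```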

Any comments, suggestions or bugs let me know

Mark

by Mark Jackson (noreply@blogger.com) at March 07, 2014 08:52 AM

March 06, 2014

Moving in Unity

Updated March 8th: Added a few more details on mapping to navmesh and extrapolating the root motion vector.

Moving something around on the screen in Unity is really not that hard. The point of this post is therefore not to introduce you to how this is accomplished, but rather to point out where you’re doing it wrong™.

Generally Unity moves objects using one of three systems:

• Direct transform manipulation.
• Physics.
• Navigation.

In the end, movement is measured on transform updates. I make the distinction from the point of view of where you provide the input. Technically you could argue that animation should be up there as well, but I choose to lump that in with transform manipulation, since disabling the application of movement by the animation system has no other side effects.

As a side-note, since 4.3 it is possible to partially or completely disable having the animation system update the transform hierarchy.

Transform

So long as the goal is simply to move an object and nothing else, modifying the values of the transform component, or using its various useful methods to do so, is all you need.

This includes directly animating the transform via the animation window or by enabling animation root motion.

However chances are that your situation is more complex than this and you would do well to read on.

Physics

When your object in any way needs to affect and/or be affected by the physics simulation, you need to make some extra considerations. Simply slapping on a collider and calling it a day will ruin your day.

To properly participate as a dynamic part of the physics simulation, an object needs to have a rigidbody component attached somewhere in its transform hierarchy.

The physics system considers separate transform hierarchies as separate objects, so one (and only one) rigidbody component will mark the rest of its hierarchy as dynamic.

Multiple rigidbodies in the same hierarchy lead to undetermined behaviour (read: weirdness) – the only exception being if those rigidbodies are connected by a joint – thus making their behaviour again well defined.

Kinematic

“But I don’t need gravity or forces or all that other nonsense!” – be cool, that is what the “kinematic” switch is for. This basically tells the physics simulation that your object is dynamic, but you will take care of all its movement.

Kinematic rigidbodies will not be affected by forces or collisions, but will collide with non-kinematic rigidbodies, sending collision events and pushing them out of the way (assuming there are colliders present somewhere in the transform hierarchy of the object).

“So why add the rigidbody in the first place? Things work just fine without!” – if you have Unity pro, I would direct your attention to the profiler as you move about – if not, take my word for it that it is not a joyous sight.

Any object (unique transform hierarchy) with no rigidbody present is treated by the physics simulation as static. All static colliders are baked into a single static collision geometry, securing optimal performance when doing collision checks.

However every time one static collider (note that this has nothing to do with the static flags on the GameObject – just the presence or absence of a rigidbody on the object in question) is moved, the whole static collision geometry is marked dirty and regenerated.

This is not a terribly costly operation, so moving pieces of level geometry from one position to another from time to time is fine. However moving a character made up of one or more static colliders around every frame will cost you.

Note that while moving by directly modifying the transform of a kinematic rigidbody is just fine, you will get better results for rapid movement by using the MovePosition and MoveRotation rigidbody functions.

The former will effectively “teleport” the physical object – fine for short distances and minor rotations, but less so for longer moves. MoveRotation and -Position effectively “drag” the object from A to B.

Kine-not-so-matic

“Ok, so maybe it would be pretty useful if my character could walk into walls, get pushed by others and that sort of thing…” No problem. Disable the kinematic flag and start moving via the rigidbody component. If you wanted proper forces and all that, I’m sure you’re already all over the AddForce function.

However if you still want strict control – just with a touch of presence – you should look at directly setting the velocity property of the rigidbody.

Given sideways and forward movement input, forming a velocity vector is easy. By setting the velocity of the rigidbody you add that information to the physics simulation as well as tell it to update the position of the object smoothly.

That includes pushes from other rigidbodies or pushbacks from static geometry. However directly setting the velocity will override any directional change and so momentum will remain unchanged.

Therefore consider factoring in some acceleration when building your target velocity vector – for a more natural look after your character is pushed or makes an abrupt change of direction.
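That acceleration factoring can be as simple as stepping the current velocity toward the target at a capped rate each frame. A plain-math sketch, outside of any engine API (all names are illustrative):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Step the current velocity toward the target velocity, changing it
// by at most maxAccel * dt per step. This keeps some momentum after a
// push or an abrupt input change instead of snapping to the new
// direction.
Vec2 approachVelocity(const Vec2& current, const Vec2& target,
                      double maxAccel, double dt)
{
    Vec2 diff{ target.x - current.x, target.y - current.y };
    double dist = std::sqrt(diff.x * diff.x + diff.y * diff.y);
    double step = maxAccel * dt;
    if (dist <= step || dist == 0.0)
        return target;                       // close enough: snap
    double s = step / dist;
    return Vec2{ current.x + diff.x * s, current.y + diff.y * s };
}
```

Call it once per physics step with the raw input-derived velocity as the target, then hand the result to the simulation.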

Navigation

Parallel to the physics simulation you find the navigation runtime. Similarly to how the rendered world model is defined by static and dynamic geometry and the physics world model is defined by static colliders and rigidbodies, the navigation world model is defined by interconnected navigation meshes and dynamic obstacles.

While the static physics geometry defines areas of no access, navigation mesh defines areas which are navigable. This information is used for finding a valid path from point A to B, but more importantly it is used to constrain characters and inform them of their surroundings.

The physics simulation can be used for this as well and traditionally is. However the data covered by the physics system is vastly more complex and its ability to define traversable space is a side effect of its ability to define non-traversable space.

This is where you end up spending way too much time blocking off sections of scenery and later testing that there are indeed no holes in that. Navmeshes on the other hand define a surface on which characters of a given height and with a given radius can move.

NavMeshAgent

Similarly to the rigidbody component, the NavMeshAgent component wires an object to the navigation runtime. Instead of the single kinematic switch, however, the NavMeshAgent has separate updatePosition and updateRotation toggles.

To get things going, you can either set a path by one of the many accessors for that or directly set the velocity. Assuming that the NavMeshAgent is configured to update the position, this will start smoothly moving the object like with the rigidbody – only this time constrained by the navigation meshes rather than collision geometry.

In addition to pathfinding and staying on the navigation mesh, the navigation runtime will also attempt to have the various NavMeshAgents avoid one another by adjusting velocity based on the position and velocity of nearby dynamic obstacles and NavMeshAgents.

Avoidance can be completely tweaked though – so that one NavMeshAgent can ignore it completely or itself be ignored or weigh different NavMeshAgents differently.

Direct control

“Pathfinding? I just need to move this character around based on input.” Sure, fine – that is where you just go set the velocity of the NavMeshAgent rather than trying to set a path or destination for it.

Like with the rigidbody, this starts moving the NavMeshAgent smoothly and tells the runtime about its current velocity – giving other agents a chance to avoid. Note that even if you do not let the NavMeshAgent directly control the position of your character, you should still feed the velocity back to it – in order to keep the avoidance runtime up to date.

Animation

“WAT! You did not count this as one of the three ways of moving stuff!” – nope, quite right. However root motion is awesome, so let’s briefly touch on how we tie that to the other systems for much greatness.

While most of what the animation system does is not of too much concern for your movement logic, one very useful feature is. By analysing animations on import, the surface movement relevant for those animations is calculated and later blended as the animations are blended.

By default the animation system will, with root motion enabled, move an animated object around by directly updating the transform position based on the root motion of the currently playing animation blend. Animation nicely synchronised with world movement.

I’ll root my own motion, thank you

However while that looks mighty cool, it really isn’t very considerate of your carefully crafted physics simulation or your neatly marked navigation runtime. Luckily it does give you an in by allowing you to override the actual application of the root motion.

This is accomplished by, on the same game object as your animator component, attaching a script implementing the method OnAnimatorMove. Once that method is defined, the animation system will no longer directly apply root motion and instead call this implementation post-evaluation.

In the implementation of the OnAnimatorMove callback, you could then update a target velocity vector by simply dividing animator.deltaPosition by Time.deltaTime, and similarly for rotation. And once we have desired movement in the form of a velocity vector, plenty of the earlier described scenarios become relevant.
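The core of that callback is just a guarded divide. Sketched as plain math (struct and function names are mine, not Unity's API):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Derive a velocity vector from a per-frame root motion delta, as you
// would inside an OnAnimatorMove-style callback: velocity = delta / dt,
// guarding against a zero-length frame (e.g. when the game is paused).
Vec3 velocityFromRootMotion(const Vec3& deltaPosition, double deltaTime)
{
    if (deltaTime <= 0.0)
        return Vec3{ 0.0, 0.0, 0.0 };
    return Vec3{ deltaPosition.x / deltaTime,
                 deltaPosition.y / deltaTime,
                 deltaPosition.z / deltaTime };
}
```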

Most interactive niceness

“Great, I’ll take one of each!” Sure, no problem. Well… It’s not exactly straightforward, but it is indeed possible to combine all of these things to get something that is responsive, embedded in your simulations and looks great.

The chain of data goes a little something like this:

• Player controlled characters:
1. Feed input to the animator, resulting in nicely blended animations and root motion.
2. Generate a target velocity in OnAnimatorMove.
3. Set velocity of NavMeshAgent (configured to not update position or rotation).
4. Set velocity and rotation of non-kinematic rigidbody (properly constrained on rotation so it doesn’t tip over).
5. Map transform position to navigation mesh via NavMesh.SamplePosition.
• Non-player characters or indirectly controlled player characters:
1. Set destination or path of NavMeshAgent.
2. Feed desiredVelocity of NavMeshAgent to animator.
3. Repeat as for player controlled characters from 2.

What this setup does not give you is responsiveness to being bumped into. However velocity-wise this is not something your movement implementation should handle directly unless you are ok with breaking that nice root motion setup you just established.

Instead I would recommend using queries on the surrounding physics and navigation environment to inform the animation state machine of special conditions like “player wants to go full speed, but there’s a wall two units from here” and handle slowing down, stopping and similar in there, where the result will look good.

One simple trick is to do a navigation raycast along the forward vector of the moving transform for some amount of look-ahead distance and, if a navigation mesh edge is hit, do a physics raycast further forward from a point slightly elevated from that edge hit point.

With that setup you can very simply gather information about where the character is headed – whether into a steep wall or perhaps toward a ledge or obstacle which could be leapt.
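The navigation raycast itself is plain 2D geometry: a forward ray tested against boundary edges. A standalone sketch of that test (in an engine you would call the runtime's own raycast; all names here are illustrative):

```cpp
#include <cmath>

struct V2 { double x, y; };

// First hit of a look-ahead ray (origin + t*dir, 0 <= t <= maxDist,
// dir assumed unit length) against one boundary edge [a, b].
// Returns the distance along the ray, or a negative value on miss.
double rayEdgeHit(V2 origin, V2 dir, double maxDist, V2 a, V2 b)
{
    V2 e{ b.x - a.x, b.y - a.y };
    double denom = dir.x * e.y - dir.y * e.x;        // 2D cross product
    if (std::fabs(denom) < 1e-12)
        return -1.0;                                 // parallel to the edge
    V2 ao{ a.x - origin.x, a.y - origin.y };
    double t = (ao.x * e.y - ao.y * e.x) / denom;    // along the ray
    double u = (ao.x * dir.y - ao.y * dir.x) / denom; // along the edge
    if (t < 0.0 || t > maxDist || u < 0.0 || u > 1.0)
        return -1.0;
    return t;
}
```

Run this against the nearby navmesh boundary edges each frame and the returned distance tells you how much runway the character has left.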

Unity Hacks

Since apparently I lumped just about everything into that project, unsurprisingly the Unity Hacks project has some work on this form of wired up movement. Particularly the Mover component attempts to create a system-agnostic movement interface as well as some simple movement on networked setups not covered here.

While not complete or in any way a final answer, I hope that, together with the information provided here, it turns out useful for you.

by at March 06, 2014 11:00 PM

Zbrush Speedsculpt: Alien Pilot Guy... thing...

Hooray for Zbrush lumpy aliens! No particular theme this week, so just kinda sculpted away... Sculpted in under an hour.

 Could also pass for an alien optometrist...

by Peter Hanshaw (noreply@blogger.com) at March 06, 2014 12:24 AM

March 05, 2014

MGODI Android beta preview

It’s only been a couple of months since we released MGODI for iOS but I am happy to say we reached our beta milestone and we can even show a little video showing the basic functionality. Sorry about the broken monitor but we do work very hard, so hard we broke the screen!

Hope you like it and meanwhile you can download and install the iOS version here www.mgodi.com

by Artur Leao at March 05, 2014 03:29 PM

*Sigh*

It's bad enough we just lost Harold Ramis.

Now we've also lost SoftImage.

Although I've never used XSI professionally, I was incredibly impressed by the people who made it, and I've always felt that XSI was a superior product to Max or Maya: making better use of modern hardware and showing off really innovative concepts in an industry that's gotten pretty damn stale for something that sounds so high tech and is occasionally so magical.

In honor of the passing of this great piece of software, I'm going to reprint an article I wrote for Game Developer back in 2008 when the sale of XSI to Autodesk was first announced.  I'm afraid I may have been a little too optimistic. However I do think that the basic idea of the piece - that we let ourselves in for this kind of treatment by not being more informed and flexible consumers - is still true.

In the mean time, I'm going to go have an Irish wake for a poor old XSI.  (BTW, if you're waiting on the follow-up to Rescuing Maya GUI From Itself, I'm cleaning up the code and writing tests before I go blabbing...)

Update: Came upon this interesting set of charts (especially the next to the last one at the bottom) which explains a lot of what's going on here. Doesn't make it hurt less, though.

The M-Word

If you’ve been in a crunch-time media blackout for the past month, or shut down your internet connection to avoid election news, or are the only games artist on the planet who’s never received a youtube link via email, you may have missed an interesting little tidbit of news. On October 23 we learned that Avid is going to sell SoftImage, the Montreal-based developer of SoftImage|XSI, to Autodesk.  If and when the deal goes through, all three of the biggest 3d modeling and animation packages will belong to a single company.

Even if you managed to miss the announcement, you can probably predict the immediate reactions anyway.   In the XSI community, the dominant mode was shell-shock.  The “Resistance is Futile” jokes and Borg-themed Photoshop jobs could not disguise the level of emotion in the air -- the poster on the XSI forums who simply said “I think I’ll cry” wasn’t kidding.  There was a smattering of optimists suggesting the deal would give more people access to some of XSI’s best tech.  A few pragmatists found consolation in the idea that the conglomeration would give cross-package data transfer the attention it deserves.  But the most common reactions were shock and anxiety.

It’s hardly surprising that the possibility of being forced to abandon the comfort and security of a familiar environment would give XSI users the heebie-jeebies.  The official Pixel Pusher line has always been that any professional game artist should be competent in at least two packages. But even traditional artists are famous for being emotionally attached to their tools (never, ever venture an opinion about Kolinsky sable brushes in mixed company!) For us, who spend so much of our lives poking at one particular set of dialogs or buttons, the thought of being forced to swap them for a different, unfamiliar set of dialogs and buttons is deeply disturbing.  The fact that some XSI fans were so distraught they’d consider switching to Blender out of pique is an index of how emotional this issue can be.

What’s surprising, though, is that a similar miasma could be seen in the Max and Maya forums after the buyout announcement. Emotions ran high even for those not affected directly. Hardcore Maya fans suffered flashbacks as they relived the 2006 buyout of Alias.  More commonly, though, users were grimly pondering the future of graphics software in general, rather than the fate of any particular package.  Some naysayers worried that technology would stagnate without the underdogs like XSI striving to gain an advantage through innovation. Others fretted that consolidation in the industry means the exciting, Wild-West days of graphics are really over.  And many users of all three packages speculated that the lack of competition will lead to price gouging.

You Are Elected Chairman of the Board

Before we pronounce the graphics software business dead, we ought to look at this deal in its historical context.  These kinds of corporate dramas are unsettling for artists because they are an unsubtle reminder that we creative types are dependent on huge, impersonal corporations to get anything done. Masters-of-the-Universe style MBA analysis isn’t part of our job descriptions, so it’s hard for use mere users to figure out how to respond. A little bit of history, however, is often a good way to get some perspective; so here’s a very abbreviated walk through the life and times of SoftImage to help you understand today’s news.
XSI may be the youngest of the big three graphics packages, but SoftImage the company is one of the oldest firms in 3d graphics software.  The original “SoftImage Creative Environment” debuted in 1988, but in an economic environment very different from today's.  3D graphics was very closely akin to rocket science – for one thing, it was a mysterious new high-tech discipline and for another you needed an exotic workstation that cost upwards of $50,000[1] to do either one. It was a very esoteric, very pricey business.

SoftImage|3D was the first commercial application to offer artist-friendly IK (1991) and it quickly became the gold standard for computer animation. Many seminal CGI films of the early ‘90s were animated in SoftImage, most famously Jurassic Park. Those early days of the CG revolution were heady times. Hollywood stood ready to firehose money onto anybody who could render a good looking triangle -- SIGGRAPH veterans still murmur nostalgically about the heydays of studio parties – and the boom times were good for the company. In 1992, the Montreal firm went public to much acclaim.

Success also changed the way the industry worked. By the mid ‘90s, the explosion of 3d game development shifted the industry dynamic: the ranks of 3d artists and animators expanded enormously, but few games companies could afford to put the equivalent of a luxury sports car under every animator’s desk. Affordable PCs with primitive graphics cards started stealing business from workstations, and PC-based packages like Autodesk’s 3d Studio started making inroads against pricey workstation software.

From Sale Of Stock You Get $45

Microsoft, naturally, wanted to see the PC forces prevail. In 1994 they bought SoftImage for $130 million – a pretty high price given that the whole 3D software market was only around $60 million a year back then. But box sales weren’t the real goal: Microsoft needed to port a top-end workstation graphics package to Windows and legitimize the market for high end graphics on Windows.
For many SoftImage vets, the events of last month may have an eerily familiar ring, right down to the “you will be assimilated” jokes (although in 1994 the reference was forgivably fresh).

The MS acquisition was not a very pleasant experience for SoftImage users. Not only were many Unix devotees forcibly converted to a new OS, but the demands of porting and cross-platform development shunted innovation to the sidelines.  It took almost 7 years for SoftImage|3D to get from version 3.0 to version 4.0, and the package lost a lot of its technological edge to newer platforms like 3dStudio Max and Maya.  It’s not surprising that the survivors of that first buyout react suspiciously to the latest.

Unfortunately, the engineering success of the product did not translate into success for SoftImage’s owners.  Avid’s core business has been hit hard by the proliferation of lower-cost video editing software like Apple’s Final Cut Pro.  Even though SoftImage was profitable, it wasn’t profitable enough: to get a sense of the scale, you might note that the $35 million sale price for the company won’t even cover Avid’s losses for the 3rd quarter of this year. As times got leaner, Avid needed to focus on protecting its core video editing business, so it started hunting for a buyer early this year. Autodesk, as home to both of XSI’s main rivals, was not the first buyer who was approached… but it was the final one, which is the one that counts.

What lessons can you learn from this little history? First, it doesn’t provide a lot of evidence for conspiracy theories about monopoly power. The fact is, supplying 3d software is pretty small potatoes in the grand scheme of capitalism. It’s been said that there are only about half a million seats of full 3d packages in the world – sounds like a lot, but that’s smaller than the number of people in the beta program alone for Photoshop. It’s not a market where achieving dominance is a huge financial win. All three turnovers at SoftImage have been driven by strategic concerns that didn’t have to do with monopoly power or market domination. Microsoft bought SoftImage to catalyze the switch from workstations to PCs. Avid bought it to solidify its FX and compositing business and saw modeling and animation through that prism. The most recent sale didn’t originate with a sinister plot from inside of Autodesk; it originated with Avid’s accountants.

The more interesting – but also more depressing – aspect of this story, though, isn’t concerned with money. You could read the whole thing as a stirring tale of steadfast devotion.
It was user loyalty that sustained SoftImage during the drift of the Microsoft years, when technical sluggishness might have let Max and Maya completely marginalize the original SoftImage. The emotional reaction to the news is proof of how viscerally loyal users are to their favorite tools.

Unfortunately, that loyalty is a two-edged sword. The last few versions of XSI were consistently excellent -- but no combination of cool features and good design managed to seduce away enough users from other packages to secure SoftImage’s future. They competed on tech and features and did a great job – but it wasn’t enough to overcome the entrenched loyalties of Max and Maya fans. Individual artists might admire this feature or that bit of UI, but collectively we’re reactionaries: we stick with what we know. On top of that, most studios have tools and processes that are designed around a particular package and aren’t eager to chuck those investments for the sake of sexy icons or a cleaner interface.

The fact is, we don’t really reward tools companies for pushing the envelope. Even when something new does break into the scene, we try to shoehorn it into our existing workflows rather than embracing the new. We’re the last people to start denouncing monopolies and phoning up the Federal Trade Commission. Most of us have already folded our hands by letting ourselves become emotionally attached or technically beholden to particular bits of software. If you’re in the same camp as the XSI user who posted “they’ll have to pry my license from my cold dead fingers,” you live in a virtual monopoly already.

Get Out Of Jail Free?

That’s not to say that things aren’t going to change. The absence of major-league alternatives will definitely give Autodesk a much freer hand in choosing both its price points and research directions. The fact that their track record to date is pretty benign is comforting, but the knowledge that we’re dependent on their altruism from here on out should give us pause.
Autodesk has put some genuine effort into trying to explain the deal to users (there’s an interesting interview featuring the GMs of Autodesk and SoftImage up on the Autodesk website, with more info promised as the deal solidifies) but apart from reassurances that XSI isn’t going to go away overnight, the magic 8-ball is pretty cloudy. The uncertainty is tough, particularly for anxious XSI users but really for all of us. We all know the mantra, “it’s the artist not the tools” – but in practice it’s sometimes hard to say where the artist leaves off and the tool begins. The feeling that such an important part of our lives is out of our control is unnerving.

What can we do about it? As individuals, it means being open to new software and new ways of working, so that we make an environment where companies have a real incentive to give us new and better tools. As studios we should invest in in-house tools rather than relying too faithfully on any single vendor. As an industry we should push harder for more consistent, open standards in data formats and for open source tools so we can make our pipelines less dependent on the ups and downs of individual companies. None of these steps will magically unwind the clock but they will give us a little more input into this critical part of our lives.

Or, we could all switch to Blender… But man, I hate the way they do their menus. It’s not like Max. You couldn’t pay me enough to switch.

[1] That’s $65,000 in today’s dollars.  For a machine less powerful than an iPhone.

by Steve Theodore (noreply@blogger.com) at March 05, 2014 05:59 AM

March 04, 2014

GDC 2014: The Year of Animation FOR REALS

YOU GUYS.

I just did a quick audit of all GDC talks related to animation. This includes, but is not limited to, animation, design, AI, character development and rigging. What did I find?

Monday is the full day Animation Bootcamp. But we knew that already.

Tuesday is nap day. We animators need our beauty-pass.

Only 11 am Wednesday is free of animation goodness. Silly 11 am, you don't know what's good for you.

The great part about this is that animators will have a true smorgasbord of talks to attend. The bad part? A few conflict with the Animation and Character Performance Roundtables. But you know what? I'm OK with that.

 Animators and Jerry Lewis, separated at birth.

I'm OK with the fact that animation is finally getting proper representation at GDC. I'm OK that we finally have a GOOD problem to deal with: choice.

So what are these choices? GLAD YOU ASKED. I have listed them below. If you read this and feel that another relevant talk should be added to the list, let me know in the comments or via twitter!

Monday ANIMATION BOOTCAMP WOOOO:

Animation Bootcamp: Intro & Achieving a Believable Performance
Animation Bootcamp: Establishing an Ecology for NPCs
Animation Bootcamp: Fluid and Powerful Animation within Frame Restrictions
Animation Bootcamp: Animating the Spy Fantasy in Splinter Cell
Animation Bootcamp: Animation Prototyping for Games
Animation Bootcamp: An Indie Approach to Procedural Animation
Animation Bootcamp: Using the Power of Layered Animation to Expand Premium Content in Battlefield 4
Animation Bootcamp: Animating Cameras for Games
Animation Bootcamp: Animator's Approach to Directing an Idea

Wednesday:

11 am:
NONE?!? Well, it's because GDC knows we like to sleep in!

2 pm:

by anim8d (noreply@blogger.com) at March 04, 2014 08:28 PM

March 03, 2014

New Sound file inspector:

Anybody who's dealt with moCap sessions where you're recording audio will probably have run into the broadcast wav format, an extension of the standard wav containing a whole extra chunk of metadata specifically aimed at syncing data around studios. The key is that it includes an internal timecode for the wav, usually pumped into it from the studio's timecode generator, so that video, audio and moCap all have the same reference and can be kept in sync.

In order to use this at work, and because we use the Red9_AudioNode as a basis for all Maya audio functions, I've added full BWav support to the Red9 AudioNode. This works both at a simple inspect level (the Wav Inspector, launched from the Red9_Sound menu in the Trax Editor, seen below) and from the code side, so you can cast any sound node to a Red9_AudioNode and just run .bwav_getHeader(), which builds up an internal dict with all the header data bound to it.

If the wav isn't a Bwav then you still get the main header data, it just omits the BroadcastWav block from the UI.

This has been an absolute pain in the arse to extract, as I've had to get deep into binary chunks to find where in the file stream the BWav header data block exists!
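For anyone curious what that chunk-digging looks like: finding the broadcast block boils down to walking the RIFF chunk list until the 'bext' ID shows up. This is not Red9's actual code, just a minimal stand-alone sketch, with the fixed field widths taken from the EBU Tech 3285 (BWF) spec:

```python
import struct

def read_bext_chunk(path):
    """Walk the RIFF chunks of a wav file and return the Broadcast Wave
    ('bext') metadata as a dict, or None for a plain (non-broadcast) wav."""
    with open(path, 'rb') as f:
        riff, _, wave = struct.unpack('<4sI4s', f.read(12))
        if riff != b'RIFF' or wave != b'WAVE':
            raise ValueError('not a RIFF/WAVE file')
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None  # hit EOF without finding a bext chunk
            chunk_id, chunk_size = struct.unpack('<4sI', header)
            if chunk_id == b'bext':
                data = f.read(chunk_size)
                # fixed-width fields per the EBU Tech 3285 (BWF) layout
                desc, orig, ref, date, time = struct.unpack(
                    '<256s32s32s10s8s', data[:338])
                lo, hi = struct.unpack('<II', data[338:346])
                return {
                    'Description': desc.rstrip(b'\x00').decode('ascii', 'ignore'),
                    'Originator': orig.rstrip(b'\x00').decode('ascii', 'ignore'),
                    'OriginatorReference': ref.rstrip(b'\x00').decode('ascii', 'ignore'),
                    'OriginationDate': date.decode('ascii', 'ignore'),
                    'OriginationTime': time.decode('ascii', 'ignore'),
                    # TimeReference = sample count since midnight,
                    # i.e. the embedded timecode
                    'TimeReference': (hi << 32) | lo,
                }
            # not the chunk we want: skip it (chunks are word-aligned,
            # so odd sizes carry one pad byte)
            f.seek(chunk_size + (chunk_size & 1), 1)
```

If the file is a plain wav the walk just runs off the end and returns None, which matches the inspector behaviour of omitting the BroadcastWav block.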

Anyway, this will be in the next release, unless you want to give it a whirl in which case drop me a mail

cheers

Mark

by Mark Jackson (noreply@blogger.com) at March 03, 2014 12:47 PM

March 02, 2014

Rigging Reel 2013

Hi guys,

I just updated my reel (unfortunately had to cut a lot of stuff) and I'm now actively looking for a job opportunity as Character TD/Technical Artist (preferably full-time, I'm ok with relocation). I'd really appreciate it if you could pass this around.

Here's also a link to my resume, just in case ;-)

Thank you!

by Cesar Saez at March 02, 2014 03:00 AM

update?

Well it’s been more than a year since I wrote anything here and I’d like to do more of it, so I am trying to ditch wordpress for something much more minimalistic. After I saw Doug Hellmann’s post I decided to try tinkerer and it looks quite super. Let’s see if that gets me somewhat motivated to write something clever.

I moved all the old posts (including my most popular post, ‘Replacing layers in AfterEffects’) over to reStructuredText and dumped them into tinkerer (if you are curious you can check out my mega-hacky script here)

I did not try moving all the comments from wp to disqus. I haven’t even googled if that is possible.

by at March 02, 2014 12:00 AM

March 01, 2014

Being amazed by software development

I am continually amazed by the state of software development. I am amazed at how broken things seem to be, and I’m amazed at what powerful tools we have to fix things.

A few weeks ago, I struggled to find a documentation hosting solution. We have an internal version of Read the Docs which took far too much effort to get set up (dependency problems, creating a new SCM backend) and administrate, but Read the Docs doesn’t work with private Github Repos. After some hacking, we still couldn’t get it to work. So I looked at hasdocs.com, which promised to host our docs in the cloud, but was totally broken. I looked around at other solutions and was tearing my hair out. I was amazed that no good solutions already existed, and was more amazed that the best solutions are somewhere on the spectrum of broken (no offense here to RtD).

At the end of the day, it occurred to me my use case was incredibly simple: just provide an index page for (already generated) HTML documentation. Let people POST their HTML files and some metadata, and just dynamically generate the index based on what’s available. I was going to run this behind a firewall, so I didn’t need to worry about security. Hell I could even get away with no database, and just read the filesystem.
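The "no database, just read the filesystem" idea really is that small. This is a hypothetical sketch of the index generation, not the actual Host the Docs source (where each subdirectory of a docs root is assumed to hold one project's already-generated HTML):

```python
import os
from html import escape

def build_index(docs_root):
    """Generate the index page by reading the filesystem -- no database.
    Each subdirectory of docs_root is one project's generated HTML docs."""
    projects = sorted(p for p in os.listdir(docs_root)
                      if os.path.isdir(os.path.join(docs_root, p)))
    items = ''.join(
        '<li><a href="{0}/index.html">{0}</a></li>'.format(escape(p))
        for p in projects)
    return '<html><body><h1>Docs</h1><ul>{}</ul></body></html>'.format(items)
```

Wire that up to a Flask route for GET and a second route that accepts a POSTed archive of HTML and unpacks it under the project's directory, and the whole service exists; since it runs behind a firewall, there is no auth to build.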

The next day (a Saturday), while my son and wife napped, Host the Docs was born. It took 1.5 hours to create an initial working version, and then I spent a few hours here and there polishing things up. It was painless to deploy, and I did some more improvements after that. Throughout this experience, a few things struck me:

• I’m amazed by frameworks like Flask and Bootstrap. You can create a reasonable site in no time that is totally maintainable. At no point was I “hacking” to get something up and running, it was instead a very small first version I was able to iterate on.
• I’m amazed by Linux. I have to use Windows at work, but feel like I develop faster on my Linux Mint netbook than I do on my Windows 7 workstation. The power at my fingertips is divine. It makes me mad at Windows.
• I’m amazed how well some software works the same way I’m amazed at how broken some software is. Software is truly evolving; for every Flask, there are confusing and broken web frameworks.
• I’m amazed how much time having autonomous teams can save. It would have taken a day or more to deploy HTD if I had to go through IT to provision a normal virtual machine (I doubt that frustration is unique in our industry). Instead, by giving teams an AWS budget and not centrally controlling things, HTD was live in minutes.

I’ll post more fully about Host the Docs later. I just wanted to express my satisfaction before it wore off :)

(BTW, I’m aware how similar this sentiment is to Jeff Knupp’s story of building Bull: Python and Flask are ridiculously powerful. Probably not coincidentally it is also a story of using Flask :)

by Rob Galanakis at March 01, 2014 09:24 PM

Reorganized Tools, New Tools

I’ve updated the Tools page to organize the scripts into two different categories, animation and rigging. I wanted to start adding some more simple rigging utilities, the first two new ones are Parent Shape and Reset Bind. ml_parentShape reparents shape nodes to other transforms, or unparents shapes from their transforms, and ml_resetBind resets skinCluster deformations after moving some joints around. They’re both pretty basic and straightforward, (and I’m sure they’ve been written before) but I figured I’d share them because I get a lot of use out of them in any case.

I’ve also added one new animation tool, ml_transferKeytimes. Again, a pretty basic utility, just copies the keytimes (not the keys) from any one node to a group of others. A use case for this is if you’ve set key poses on a root control, and you want a group of other controls to have all their keys set on the same frames as that one control, this can transfer keytimes while preserving animation (insofar as that’s possible).
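Outside of Maya, the preserve-animation-while-re-keying idea can be illustrated with plain Python and linear interpolation. This is a hypothetical sketch of the concept, not the actual ml_transferKeytimes code, which works on real Maya anim curves and their tangents:

```python
def transfer_keytimes(source_times, target_keys):
    """Re-key a curve at source_times, preserving the existing animation
    by sampling the old curve (linear interpolation) at each new time.

    target_keys: sorted list of (time, value) pairs."""
    def sample(t):
        if t <= target_keys[0][0]:
            return target_keys[0][1]   # clamp before the first key
        if t >= target_keys[-1][0]:
            return target_keys[-1][1]  # clamp after the last key
        for (t0, v0), (t1, v1) in zip(target_keys, target_keys[1:]):
            if t0 <= t <= t1:
                # linear interpolation between the surrounding keys
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return [(t, sample(t)) for t in sorted(source_times)]
```

For example, transferring keytimes [0, 5, 10] onto a curve keyed only at (0, 0.0) and (10, 10.0) adds a key at frame 5 with the interpolated value 5.0, so the motion itself is unchanged.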

Grab them from the Tools page!

by Morgan Loomis at March 01, 2014 12:28 PM

QuickLauncher met Maya

Hey folks,

I'm coming out of the Softimage bubble and trying to port some of my open source projects to maya.

It's funny: of all of them, QuickLauncher is the simplest but at the same time one of the most useful (I cannot live without it), so I decided to give it a try and there's a maya version in the repo now :D

I'm still struggling with the maya plugin system, so the installation process is not as straightforward as in softimage, but I'll get there.

Thanks to David Martinez and David Moulder for their support.

Cheers!

by Cesar Saez at March 01, 2014 03:00 AM

February 28, 2014

Photoshop tip: sample all layers.

Today I came across this website which I highly recommend:

It can look a bit dry at first sight but don't let it put you off. It's very clear, thorough and well described with examples.

The tip I found today works for the blur and the healing patch tool (perhaps more tools have that option).

If you tick the Sample All Layers option, you can paint your blur or healing on an empty layer, which will act like an adjustment layer. The layers underneath will appear blurred but will remain unaffected.
Neat innit?

by mkalt0235 (noreply@blogger.com) at February 28, 2014 05:11 PM