Building Tech Art Infrastructure

First, some background. When I started at Vigil as an Environment Artist, the extent of Tech Art at the studio was a few tools that had been created for Darksiders I back in 2007. Few people at the company knew MaxScript or Python, and the Environment Art team had no one to go to if they thought of something that needed automation. If those people existed, their presence wasn’t well-publicized. In my first few weeks, I noticed myself hitting the same buttons hundreds of times to do the same things. I pinged h6x6n to give me a hand and automate one of these things, and in the process acted as an intermediary between him and Max to make sure the code worked. From there, the artists immediately around me started to request other automated macros. A few months later I was adding these scripts and others into our official toolset, had access to the code branch, and spent more and more time writing tools in both Python and Maxscript. The scope of the tools I was writing grew, the number of teams I was assisting grew, and I started talking to programmers and helping producers generate reports on various pieces of data in the game. I was a tech artist. Familiar story, yes?

As the tool and tool-support load started to let up for a couple of weeks, it dawned on me that in order for Tech Art to grow at Vigil, I had to find ways to move it from being “That one guy in the corner who knows Maxscript” to a more professional, scalable, and accessible entity. To build up the Tech Art infrastructure here, I want to have the following in place:

  • Scalable Tech Art Task Pool
  • Training Resources
  • Support Process
  • Track Tool Usage
  • Transparency to Production
  • Robust End-User Documentation
  • Robust Tech Art Documentation
  • Internal Code Libraries
  • Increase Tech Art Accessibility for Feature Requests and Bug Reporting
  • Be able to Deploy Alpha Toolset to Trusted Partners

Coincidentally, this all started dawning on me about the same time the 2012 Boot Camp talks got posted. Being in such a lonely position, I thought it’d be useful to expose the process of building up the Tech Art infrastructure at Vigil to TAO in order to take in feedback, advice, and warnings about pitfalls I’ll potentially run across.

So, where we are now, in terms of meeting the goals I set forth:

  • We have a viable platform for the effective distribution of our DCC app toolset. This tool ensures that all custom scripts in the tool’s source directory are applied to their equivalents in the C:\Program Files\Autodesk\3ds max 2012\ directory every time the user starts up 3DS Max. Tools are modified in our Perforce code branch, and our build machine moves the code from that branch into the tool’s source directory. (There’s a rough sketch of that copy step just after this list.)

  • A company wiki, and a Tech Art hub for that wiki that is project-independent. The Tech Art hub is broken down into the following sections:
    Max Tools Userguides
    — Shows images of tools in 3DS Max, their GUIs (if applicable), and a brief description of how the tool functions
    Animation Tools Userguides
    — Shows images of Animation-specific Tools in 3DS Max, their GUIs (if applicable), and a brief description of how the tool functions
    Python Tools Userguides
    — Shows images of tools built exclusively in Python (independent of any other application), and a brief description of how each tool functions
    Reports Userguides
    — Shows examples of the contents of each Excel report that is generated by Tech Art scripts every morning based off our build data
    Goals
    — Shows a breakdown (by game) of the tools, features, and other high-level goals ahead for Tech Art
    Resources
    — Reference links for languages used by Tech Art, style guides used for languages used by Tech Art, interesting links, etc.
    Tool Spec Directory
    — This is something new that I’ve started adding to, and one of the things I’m most curious to get feedback on. It lists new tools, features, and tasks, along with their associated spec sheets.

  • Libraries
    – A library of commonly-used Max functions that are stored as loose functions and accessible via the “FileIn” command
    – A Python module of home-rolled functions and classes used by our Python scripts
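
Regarding the distribution platform mentioned above: the copy step it performs around Max startup is conceptually no more than this. A minimal sketch, not our actual tool, and the source path here is made up:

-- mirror the .ms files under a tool-source folder into the local Max install
fn syncToolScripts srcRoot dstRoot =
(
    for f in getFiles (srcRoot + "\\*.ms") do
    (
        local dst = dstRoot + "\\" + (filenameFromPath f)
        deleteFile dst   -- clear any stale copy first
        copyFile f dst
    )
)

syncToolScripts @"D:\toolsource\stdscripts" @"C:\Program Files\Autodesk\3ds max 2012\stdplugs\stdscripts"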

What I’m doing now:

Transparency to Production

I have a template form for questions I feel need to be answered before moving ahead with a tool. This is also useful for me in planning out larger tasks, like building and implementing internal Maxscript libraries. It outlines the basic plan for getting something done, and it looks something like this:

Strategic Questions
What is this tool/feature?
How will the target user interact with this tool/feature?
Why does this tool/feature need to exist?

Tactical Questions
What are the subsystems and steps necessary to complete this tool/feature?
What are the reviewable phases of this tool/feature’s development?

What else would you expect to see in a document such as this? Can this be trimmed down to make it more streamlined?

Internal Code Libraries

I’m evaluating our current set of tools, looking for boilerplate functions; functions already extant in the internal library that could be better utilized; and tools that, because of their relative simplicity, could be added to an internal library and modified slightly so they’re usable both from their macroscript and via one line of code. I’ve also been looking at external libraries, like the TAO Maxscript libraries, to learn from their structure and see what I could pull in and use in-house. The loose structure for these libraries looks like this:

osOps

  • Functions to interact with the operating system that don’t come stock with Max

mathOps

  • Math functions that don’t come stock with Max (lerp, round, sum, angleFromPoints)

pyOps

  • Wrappers to interact with all our Python tools

p4Ops

  • Wrappers to interact with Perforce, either through Python (to get the return from P4) or directly through the command line

oblivionOps

  • Functions to interact with our other in-house plugins (custom materials, user properties used by our exporter)

vUtils

  • Miscellaneous functions used across many of our tools (logging, reading text files, writing out to text files)

VigilTools

  • Wrappers for many of the shorter, easily adapted tools so they can take inputs from the command line and from the current selection
  • Wrappers for some GUI tools that are called both standalone and from other GUIs

All the Maxscript libraries will be held in structs, instantiated on startup.
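
To make that concrete, here’s a minimal sketch of what one of these startup-instantiated structs might look like, using the mathOps library named above (the function bodies are illustrative, not our actual implementations):

struct mathOps_lib
(
    -- linear interpolation between a and b by factor t
    fn lerp a b t =
    (
        a + (b - a) * t
    ),

    -- round a float to a given number of decimal places
    fn roundTo val places =
    (
        local mult = 10.0 ^ places
        (floor (val * mult + 0.5)) / mult
    )
)

-- instantiated once at startup, so any tool can just call mathOps.lerp 0 10 0.5
mathOps = mathOps_lib()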

Again, thoughts are appreciated on that structure, and any other advice from y’all on building up the Tech Art infrastructure at a studio.

That’s awesome!

We’re going through a similar building process now. My main focus has been on removing black boxes from the pipeline (old exporters) and building up an infrastructure so we can automate everything (game data, source data, the engine itself).

One thing that I didn’t notice was a UI library. Having modular components that you can just drop into a tool and update as need be is something I’m really fighting for: things like a graph editor, node editors, and timelines that multiple tools will use and that can be QWidgetized for use in PySide.

Hey Matt, it’s cool to see that you have made the transition to TechArt and that you are trying to build a much larger TA entity at Vigil. I recognized your name, and it seems you’re a former Volition employee and a current THQ’er.

Just throwing this out there, and it’s by no means a way to derail your plans here, but Volition is always willing to assist our fellow THQ studios. In fact, we’ve helped DS2 in many other regards. Our library of scripts, Python tools, and Max plugin distribution is actually being used at other THQ studios as well. A lot of what you mentioned has been built up and improved over the last 10 years or so. If anything, you may be able to see what we have and take advantage of whatever you feel may be useful to you guys.

Feel free to contact me on Lync at work. I’m not sure at what capacity we can fully assist you but I have the understanding that sharing our tools between studios is encouraged.

This is quite a task to embark upon. Good luck to you!

@Randall: Quite so! I was at Voli in the summer of '10. I don’t think we have any of your Maxscript libraries, though we’re using vInstaller which removed one of the biggest hurdles for Tech Art. I’ll hit you up on Monday when I’m back in the office.

@ lkruel: That’s awesome! I had yet to hit on a GUI library, although that’s certainly something we’ll need. Good catch.

Matt, greetings from the sister V studio!

I think the fact you’re even thinking about this stuff is fantastic, and a sign that you’re at least headed in the right direction.

Building up a solid TA infrastructure takes time. It’s not just technical issues, but cultural as well. In fact, the cultural ones are often the harder of the two to overcome. I would suggest analyzing that along with the technical issues, and be prepared to spend a lot of time talking with art, programming and design as new elements of your plan are put into place. Keeping them as informed as possible about what you’re up to and what’s coming is crucial.

As for technical details, you have a lot laid out there. As Randall mentioned, I would also recommend leaning on as much existing tech from Volition as you’re able to absorb. All of it has been through shipping game cycles, and we can also help you take it in and learn the details. I’m TAD on the core tools/tech group here, so that’s what I do.

I believe you’re right about our MaxScript libraries – Vigil probably doesn’t have those. That might be an easy place to start. Email/IM next week and we can discuss.

I can’t stress this enough – the fact that you can get assistance from Volition is something you absolutely cannot pass on. In fact, I’d say if you were to pass on it or couldn’t make effective use of it, you certainly wouldn’t be able to set things up from scratch :wink:

The road you’re heading down is one many good TAs have embarked upon. The fact that there are still so many problems indicates this is not an easy thing to do. Expect that the first 2, 3, 4 times you do it, you’re doing it very wrong (though less wrong each time). Having the Volition guys, who are quite clearly doing it right, able to provide assistance is just amazing. Let me warn you in advance there are definitely some challenges with using whatever it is they’re going to give you, but (and this is a general rule) try as hard as you can to adopt and adapt what they can give you rather than write it yourself – until you’ve learned enough to clearly explain why it’s crap and you cannot use it :slight_smile:

Today’s small update:

I’ve filtered through most of the tools I have right now to find more common functions, see which of those functions would fall into which struct, and outline how each tool would be updated to account for the new libraries.

Because these struct instances are registered on Max startup (by being placed in the 3ds max 2012\stdplugs\stdscripts directory), and I want to be able to reliably test these on Max startup down the road, I rolled a Python script to copy files from my code branch to my local 3ds max 2012 directory.

I also started moving some common functions I already had stored in Maxscript files (used through fileIn) into their respective libraries. That led to testing the viability of keeping most of a tool’s logic in a struct. I ran into an issue with outer local variable references, because this format


fn someFunction =
(
    fn someOtherFunction =
    (
        print "ping"
    )
    someOtherFunction()
)

does not easily translate to being contained in a struct. My solution (and I’d love to hear if there’s a better way around this issue) was to store all those functions at the top level of the struct, and reference the instance of the struct when I needed to access them in the main function.



struct someStruct_lib
(
    fn someOtherFunction =
    (
        print "pong"
    ),
    fn someFunction =
    (
        someStruct.someOtherFunction()
    )
)
someStruct = someStruct_lib()
someStruct.someFunction()

Have you considered on-demand loading for your structs instead of throwing everything into startup? Loading everything up front will impact your startup times (though on beefy modern machines this may not be too perceptible), and it also means everything sits in memory whether it gets used or not.

MXS doesn’t have an equivalent of Python’s import statement (or even of MEL’s auto-source-on-file-name behavior), but it’s not hard to create one using fileIn and global variables. Pythonistas generally like seeing all the imports up at the top of a file, since it gives them a general idea of how interdependent their code is (if your import block is too big, it probably indicates bad scoping – if you’re doing something so complex in this file that you need functions from 30 other files, it’s probably time to rethink the design for tighter focus and more modularity).
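
For what it’s worth, a minimal sketch of that fileIn-plus-globals approach might look something like this (the folder and library names here are made up):

global loadedLibs = #()

fn requireLib libName =
(
    -- only fileIn a library the first time it is requested
    if (findItem loadedLibs libName) == 0 then
    (
        local libPath = (getDir #scripts) + "\\vigilLibs\\" + libName + ".ms"
        if doesFileExist libPath then
        (
            fileIn libPath
            append loadedLibs libName
        )
        else format "requireLib: % not found\n" libPath
    )
)

-- called at the top of a tool, instead of relying on a giant startup load
requireLib "mathOps"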

It’s absolutely a good idea to work on a test suite that makes sure your structs are working. Ideally, you’d want every exposed function in every struct to be automatically tested after every checkin – you never know when something really subtle will screw you over. I recall once creating an avalanche of chaos by innocently changing the case in a utility function that returned a string. If you do lots of code re-use, you have to make sure you’re not causing ripple effects with little changes! If you do stick with massive startup loads this is especially important, because a hung startup script can completely bork your users. A decent build system that makes sure everything is shipshape is a lifesaver, well worth the effort and also a great learning exercise.
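
As a rough sketch (with made-up test and library names), the per-function checks can be as dumb as this, run inside Max by a build machine after each checkin:

fn test_mathOps_lerp =
(
    (mathOps.lerp 0.0 10.0 0.5) == 5.0
)

fn runTests testFns =
(
    local failures = 0
    for t in testFns do
    (
        -- each test function returns true on success
        if not (t()) then
        (
            failures += 1
            format "FAILED: %\n" t
        )
    )
    format "% test(s) failed\n" failures
    failures == 0
)

runTests #(test_mathOps_lerp)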

Matt, like Randall and Adam mentioned, feel free to leverage the Volition experience we have here. I’d be more than happy to help out, and I’m sure most if not all Volition TAs would be willing to help out other THQ studios.

Feel free to hit me up on lync anytime.

@Theo: Right now most of that is handled with fileIn, and those files just contain functions that are registered at startup. Part of building this infrastructure is to move away from that format in order to make those kinds of functions more readily accessible to other Tech Artists. The other issue I’ve found with fileIn is that the number of file operations seems to be limited by Max (after a certain point, Max can’t do any more fileIns).

On evaluating external libraries:

Being aware of when you’re falling into NIH (Not Invented Here) is important. Also important is recognizing that replacing an extant in-house solution with an outside one purely for the sake of not falling into NIH may cause more work than it saves.

If accessibility to others is the main issue, you should consider doing automatic documentation. Importing a module is easy, knowing it’s there is hard.

The Python version would be something like Doxygen or Sphinx. There’s no MXS-capable one out there that I know of, at least as of 2009 or so, but it’s not too much work to build one on your own by trolling for comments in MXS files that live right under defs (a la Python docstrings) and then generating HTML help pages for all your functions. Tags and indices are a big help in directing teammates towards existing functions and avoiding duplication of effort… plus, knowing that other people will see your functions induces a certain healthy amount of self-consciousness :slight_smile:

The tech side is pretty easy, the daunting part is creating a culture where documentation is taken seriously.

I’ve started using/modifying the Maxscript documentation generator that Rob wrote a while back. It works by treating every block comment preceded by “<DOC>” as a docstring, and grabbing the function name and arguments from above that line. It works quite well. In order to make sure everything works with that documentor, though, I went through all the new libraries and created a docstring for each function in that format. Nothing helps you get familiar with your code libraries like documenting every last function.
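
For illustration, here’s a made-up function documented in roughly that style (the exact layout is an approximation, not necessarily character-for-character what the generator expects):

fn angleFromPoints p1 p2 p3 =
(
    /*<DOC>
    Returns the angle in degrees at p2, formed by the point3 values p1, p2, p3.
    */
    local v1 = normalize (p1 - p2)
    local v2 = normalize (p3 - p2)
    acos (dot v1 v2)
)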

Looking over Volition’s tools was immensely helpful. It showed me where my Maxscript-fu was (is? I’m working on it) lacking, and gave me a good foundation on which to base my library structure.

Rob was right, not everything would be usable, and I expected that. Most of what I grabbed won’t work right out of the gate, either. It was certainly a useful exercise in gaining a greater understanding of Maxscript beyond what I’ve been chiefly using it for in most of my tool development.

After bringing in and organizing the disparate functions into appropriate libraries, and documenting them, I’m moving on to adding new functions to those libraries. By having a pre-established docstring format, it’s easy and almost automatic to start typing "fn functionName = " and have the next line be the <docstring>.

A couple of the libraries (p4 interaction, logging) will need their own specs written up. The complexity and needs of those libraries go beyond the scope of writing general-purpose libraries and I want to make sure I don’t rush into anything with either of those.

Also need to write .help() functions for these libraries. Give the user more access to the documentation in more places.

Thoughts on the benefits of the instantiated-at-startup library system:

I’m approaching the authoring of these libraries as Tools for Tech Artists. Like most tools, they’re designed for levels of familiarity, to be able to span the range from newb to power user, to include simple verbs that can be combined for interesting emergent functionality, things like that. More importantly, they’ll come with a Help feature to minimize the amount of time the user has to spend digging through documentation. For the libraries, the Help feature comes in the form of a public method for each library that prints an at-a-glance guide to the public variables and methods of that library to the Maxscript listener.
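
As a rough sketch (using the same hypothetical testLib as the example below), the .help() method itself can be as simple as a handful of format calls to the listener:

struct testLib_lib
(
    -- ...doSomething and any other public members live here...

    fn help =
    (
        -- at-a-glance guide, printed straight to the Maxscript listener
        format "testLib\n"
        format "Public Functions\n"
        format "    <bool>testLib.doSomething to:<int> -- true if even, false if odd\n"
        format "    <void>testLib.help() -- prints this guide\n"
    )
)
testLib = testLib_lib()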

For example, a Tech Artist comes along and sees something like this:


fn someFunction inputVar =
    (
        --return this
        testLib.doSomething to:inputVar
    )

someFunction 5
>>> False


The trick is establishing an understanding that all libs used in the toolset have a .help() method. Now, the Tech Artist that stumbled across the above code wonders what testLib.doSomething does, and knowing all internal libraries have a built-in .help() method, he does this:



testLib.help()

>>>testLib
>>>Author: That Guy
>>>Description: A library to do things, stores stuff.
>>>
>>>Public Variables
>>>    <bool>stuff: stores the last output from doSomething
>>>Public Functions
>>>    <bool>testLib.doSomething to:<int>
>>>        Performs a check on the optional argument to:, returns True if even, false if odd
>>>        If unsupplied, returns false
>>>    <void>testLib.help()
>>>        Prints this string
>>>


This means the user doesn’t have to leave Max to go root through documentation, it’s all at the user’s fingertips. Less time spent digging through documentation = more time spent coding.

The immediacy of .help is nice, although you might be better off implementing it as a single command instead of a member function. Help (Foo) only requires that the doc string in Foo is present; Foo.help() is code, which means bugs, cut-and-paste repetitive code, or both. Also, the snippet you posted suggests that the docs are basically a bit of header data in the struct; I’d strongly recommend localizing them to comments in individual defs instead, since distance between the code and the docs always encourages doc rot.

FWIW, there are two big upsides to having the docs available outside. In case it wasn’t clear, I’d emphasize that I’m talking about docs generated off of code comments (‘docstrings’), not a separately maintained set of documents – which is the kind of thing only huge teams can even contemplate and few can really pull off.

The biggest plus of API-style external docs is that other TA’s scan them when they’re looking for things. To call Foo.help() you need to know Foo is there in the first place; if Foo is listed on a page of Foo-related functions (grouped by tags, not just by where the code happens to live), everybody will see it when they are looking at Foo-related functions. That way they know to dig into it when a Foo-ish task comes up.

In large teams, where code appears and morphs fast, it’s far more likely that you will know the bulk of the codebase on this kind of drive-by basis. Drive-by is unsatisfying, but it’s still better than “wait, we already have code for that?” And of course, even drive-by knowledge is a huge help in avoiding pointless duplication.

Depending on how complex your doc-generation routine becomes, you can also stuff useful metadata into external docs. For example, you can extract the defs that get called inside a given def and hyperlink to them, which makes it easy to backtrack unexpected results. On larger teams it’s also nice to know who wrote Foo, so when it breaks you know who to yell at.

Of course, docs are rarely a substitute for actually reading the code you’re calling – when things go poo you’ll end up opening all the source files anyway (that’s why some XP zealots claim you should actively avoid documenting your code (!) ). In the real world, no medium of communication is 100% effective, so it’s a good idea to flood the zone – Help(), docstrings, code-reading, and external docs aren’t mutually exclusive. Indeed, the Help() method is a subset of the functionality you’d need to generate indexed help pages, so you get it for free if you go the more elaborate route.

And then of course there are doctests – but that’s an argument for another day :slight_smile:

Ah, I see what you’re saying. I may have been unclear because everything is getting spread around.

You’re right, .help is in addition to externally generated documentation, file-level docstrings, and function-level docstrings. Each member of the struct has a docstring that’s included in the external documentation, and each of those goes into greater detail than what appears in .help().

Because everything I’m doing right now is in Maxscript, I don’t see Doxygen, Sphinx, or doctests being entirely applicable. I have a Python-based solution that generates a .html file from the function-level and file-level docstrings.

You’ve got a point about the copy/paste issue of docstring -> code. I think down the road I’ll figure out a way to use the doc generator to make a .ms file that’s evaluated after all the libraries are instantiated and populates a private variable that the .help() function prints out. Dynamic generation means less copy pasta.
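
For what it’s worth, a hedged sketch of what that generated file might end up looking like (helpText is a hypothetical member name, shown here as a plain public member; a genuinely private variable would need a setter instead of a bare assignment):

-- assumes each library struct declares a helpText member, e.g.:
--     struct testLib_lib ( helpText = "", fn help = ( format "%" testLib.helpText ), ... )
-- the doc generator then writes out one assignment per library, and the file
-- is evaluated after all the libraries have been instantiated:
testLib.helpText = "testLib\nPublic Functions\n    <bool>testLib.doSomething to:<int>\n    <void>testLib.help()\n"
mathOps.helpText = "mathOps\nPublic Functions\n    <float>mathOps.lerp a b t\n"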

My experience has taught me this- if you haven’t done it before (and probably even if you have), you’re going to fuck it up anyway. Better to get started and get something working because it’s going to be shit no matter what.

Rob, can we get a shirt with that:)

Techart - Get something working because it is going to be shit no matter what!

Hah, Amazing.

We don’t need that shirt – that’s what the “real” programmers all think of us no matter what we wear.

Story of… well… this story.

The code libraries went live this morning. They proved their utility almost immediately with a one-off request that came in (which I was able to author more quickly by using those libraries), and a feature request that dealt with how the global project path variables were set.

Since all the tools online right now can function without the libraries, I’m only updating them as I need to fix bugs or add features to them, rather than taking a large chunk of time to rework the entire codebase to use this new system simply for the peace of mind of it.

After making edits to code all day, the amount of time I was sinking into modifying documentation for each function in three different places became frustrating. I wrote a tool spec for how I think I can get around that issue. It involves the Maxscript documentation generator modifying a config file that the libraries can reference in their .help functions.

While I was developing the libraries, a few of them turned up as being outside the scope of the Library Creation/Implementation of Outside Code task. Things like a way to use Perforce in Max without leaving Max, or a Python-like logging feature that would allow me to track tool usage. Those ended up being chunked out to separate Tool Specs that I’ll address later. For now, I’ve been dark long enough, and I’m at a point where I’m comfortable moving forward using these libraries to tackle some of the feature requests that came in while I was dark.

And then there’s the elephant in the room. If not your room, it’s in my room, and I realize I haven’t addressed it yet despite its effect on what I’ve been doing.

Another task I’ve had to undertake is to find a way to make sure everything is sufficiently self-explanatory so that anyone else coming in for Tech Art requires minimal briefing and spin-up time. I’ve already got a sticky note to make a kind of One Page Design Doc for all the Tech Art resources available to try and mitigate the “Wait, I didn’t know we had that!” feeling. This is all necessary, unfortunately, because as part of the layoffs at Vigil, my contract with Vigil will wrap up at the end of April.