Wrapping DCC Tools to start them in different "environments"

Hey there,

I am currently designing a system to wrap different tools so that they can be launched in different environments, for example launching different versions of Nuke with different toolsets without making any local changes.
So basically I'd have some kind of Python setup that wraps setting up an environment and launching the correct version.
I have a few different ideas on how to tackle this and am looking for input on what you guys think would work best.

I would like to make this a rather generalized wrapper that allows adding tools/versions/toolsets easily and quickly. First up, I am wondering: would you rather have a Python class generalizing the different aspects and then inherit from it per tool (i.e. write a derived Python class per tool), or generalize it so far that you have central config files (say YAML) per tool and load/parse them automatically? For the second option I would have to add some special sauce that allows triggering operations like a file copy via the config file. While that would probably work fine with YAML and a reduced set of possible operations (copy/delete/move files etc.), I am wondering if this is complex enough to justify just writing a derived class per tool and auto-loading them from a specific directory.
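To make the trade-off a bit more concrete, here is a rough sketch (purely my own illustration, not an existing system) of what the config-driven variant could look like: a small YAML description per tool plus a whitelisted set of operations that the config file is allowed to trigger. All paths, keys and the `launch()` helper are invented for the example.

```python
# Hypothetical sketch of a config-driven wrapper: one YAML file per tool,
# and only a small whitelist of operations it may trigger.
import os
import shutil
import subprocess

import yaml  # PyYAML

NUKE_YAML = """
name: nuke-6.3
executable: C:/Program Files/Nuke6.3v8/Nuke6.3.exe
env:
  NUKE_PATH: //server/pipeline/nuke/production
setup:
  - op: copy
    src: //server/configs/nuke/menu.py
    dst: ~/.nuke/menu.py
"""

# The "special sauce": the reduced set of things a config file can do.
OPERATIONS = {
    "copy":   lambda step: shutil.copy2(step["src"], os.path.expanduser(step["dst"])),
    "move":   lambda step: shutil.move(step["src"], os.path.expanduser(step["dst"])),
    "delete": lambda step: os.remove(os.path.expanduser(step["path"])),
}


def launch(config_text, extra_args=()):
    cfg = yaml.safe_load(config_text)
    for step in cfg.get("setup", []):
        OPERATIONS[step["op"]](step)  # unknown operations fail loudly (KeyError)
    env = dict(os.environ)
    env.update(cfg.get("env", {}))
    return subprocess.Popen([cfg["executable"]] + list(extra_args), env=env)
```

Whether that ends up simpler than a derived class per tool really depends on how many of those whitelisted operations you need in practice.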

Any thoughts? Or, better yet, experiences?

Regards,
Thorsten

We use an in-house tool to do this. It takes care of updating local files and making sure things are copied to the right place for the given DCC app. It has a special command-line switch that lets us call it from a desktop/start menu shortcut, and when the update is finished, open the actual DCC app. Those shortcuts we deploy use the DCC app’s icon and name, so to the user it looks like they’re running it like usual. But in actuality it’s calling our tools updater, which then opens the app itself. It’s worked out well for us.

How you would potentially design a tool like that (Python, C#, etc.) is really up to you, and what fits with your skillsets and tools culture.

Hey Adam,

thanks a lot for the input. I am currently thinking about this not really on the UI side, but rather on the abstract back-end. The way the app is actually launched will differ quite a bit. I think there will be a kind of "Dashboard" that allows manually launching selected environments/versions. The artist will usually launch a specific configuration from the production tracking system (which should have the environment defined when the project is created). Then there is the render farm, which will launch a specific environment based on the one it was submitted from and/or on the production tracking system.

So I am thinking about a low-level class to handle all the gritty details, and then having different UIs use it to create a specific environment and launch the tool.

Kind regards,
Thorsten

I have a related problem, supporting people with different toolsets (for example, different versions of Maya or a tool that is only available internally but not for outsourcers, etc.)

What I do is maintain branches for different distributions inside my Perforce depot. That way common code is common, but code that has to work differently in a different environment can be as different as needed, and the whole thing has proper history and branch protection (if I fix a bug in the main branch, Perforce will tell me if it conflicts with a local change in one of the branches when I try to integrate the changes). I build specialized distributions for each branch as zips of .pyc files, so the whole environment gets delivered to each target as a unit, which gets me out of having to mess with the details of different people's Python environments. (I've got to add an option to create JARs for Jython so I can share some database stuff on the web too… but that is just an extra branch with a branch-specific bit of build code at the end.)
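The "zip of pycs" build step is basically just a few lines per branch; a minimal sketch, assuming the branch checkout lives under src/ and the deliverable should end up at dist/toolkit.zip (directory names are invented for illustration):

```python
# Hypothetical per-branch build step: byte-compile the checkout and package
# the .pyc files as a single zip that can simply be put on sys.path.
import compileall
import os
import zipfile

SRC = "src"                 # the branch checkout
DIST = "dist/toolkit.zip"   # one deliverable per branch


def build():
    # legacy=True writes module.pyc next to module.py, which is the layout
    # zipimport expects inside the archive.
    compileall.compile_dir(SRC, quiet=1, legacy=True)
    os.makedirs(os.path.dirname(DIST), exist_ok=True)
    with zipfile.ZipFile(DIST, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(SRC):
            for name in files:
                if name.endswith(".pyc"):
                    full = os.path.join(root, name)
                    zf.write(full, os.path.relpath(full, SRC))


if __name__ == "__main__":
    build()
```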

The really nice thing about this setup is that it eliminates tons of bugs, because there's no need for 'if host_tool ==' kind of code – that's all implicit in the branch rather than having to be handled in a million spots all over the code. However, making it work well does take a lot of thinking about what goes where – I try to keep my module hierarchy organized by DCC tool, so that I know what has some kind of dependency on Maya or MB or whatnot. I try to write as little code for each DCC as humanly possible – for example, a tool to let level designers do bulk edits on level files should not require Maya, even if the most common use case for that code is actually inside of Maya. Keeping implementation distinct from UI is always good practice, and this setup kind of shoves it down my throat :)

The main drawback is that you do end up with many branches, so you have to think ahead about your organization. If you discover a bug in branch X, you should really take the time to fix it in the main branch and push it to all your branches instead of fixing it locally. If branch Y teaches you how to refactor something, it may be a pain to work out the right way to refactor the trunk. It helps a lot to do unit testing and to have a structured publishing process, instead of just 'check it in and see what happens.' I don't have to worry about lots of people working independently in different branches right now, but when that happens I'm sure it will be a headache. It might be easier with a branch-centric source control setup like Mercurial than with Perforce.

As for config files, etc. – it's certainly not a bad thing to have a common way to store installation- or user-specific variables and settings, but it's not a flexible enough mechanism to deal with all of the ways in which different host situations can change things (for example, your cfg or YAML file can tell you what server to ping for updates – but only code can know what to do if one server is an IPv6 address and the code is expecting IPv4). Since you'll have to write the code anyway, the cfg file is mostly a method of storing settings or other mildly volatile data (like window prefs or file bounce-back locations) rather than controlling program flow. I'm sure Rob will chime in with his rant sooner or later :)

Some very interesting ideas there, thanks a lot for sharing. My ecosystem is a bit different, but I guess some of this will definitely help. As for config files, after thinking about it I agree, and came to the conclusion that it is not really more effort to inherit a class and keep defaults for a simple new tool than to write a YAML config file… actually, au contraire, it would be more hassle to have yet another "language" in the mix.

So I'd like to lay out in more detail what I want to accomplish and what I am currently planning. Maybe this is interesting for some of you, and maybe you have some more input.

On the development side I plan to distinguish between products and the pipeline/toolset, mainly because they are developed, deployed and used differently. The pipeline and toolset would contain all the day-to-day scripts, plugins and tools. Products are defined either by being sold/licensed to clients or by being complex enough to be developed like a product rather than as part of the toolset (even though both might be in use in production, obviously). I am currently mainly concerned with the pipeline and toolset part. As a side note, a product would be developed with more elaborate planning and have a release cycle, milestones etc., whereas the pipeline and toolset are treated more like one big project/product than many small ones.

What is part of the production toolset?

On the application side I have to support 3D Studio Max (usually at least two different versions at any given time), Nuke (also usually at least two versions), VRay (it used to be one version lock per Max version; that will be more flexible with the new approach I am aiming for), Photoshop (yuck) and a load of internal tools ranging from small app-specific three-liners to products licensed to clients and SaaS products.
We have our own internal production tracking system to handle all task and time tracking and a lot more. Its local client app should also handle opening the correct apps in the correct environment in the future.

Development and deployment strategies

As said, I plan on having different approaches for products and pipeline. Products will be more traditional, whereas the pipeline will be more along the lines of http://scottchacon.com/2011/08/31/github-flow.html with some twists. Basically I plan on having two main branches (I'll call them master and dev for now). Both of these branches should always be in a deployable state, so no real development happens on them. A typical feature cycle would be as follows: you create a new named branch for a feature to work on; you tweak until you are ready to test; then you issue a pull request, have someone review it quickly and merge into dev. This triggers unit tests (via Jenkins) and deployment to the dev environment (also via Jenkins). At this point users (usually supes, TDs and the like) would launch their tools in the dev environment to test the new stuff. When approved, you'd merge your changes to master. This triggers unit tests on master and deployment to the production environment (again both via Jenkins).

We only run unit tests on some of our products. In the future I would like to make unit tests required for at least everything that can be reused (anything that has an API or is a library, I guess).

Environments

So this is the point where the original question comes into play. Basically I am looking for an easy way to launch tools from dev/production and to be able to select different versions of tools within both of them (e.g. different VRay versions). So an environment in that sense could consist of the following for a 3D artist:

  • 3D Studio Max 2011, VRay nightly build, our production toolset

Or for a 2D Artist

  • Nuke, 6.3v8, our current dev toolset
    etc.

One thing I am not really aiming for is having different toolsets for the different versions of the tools, only combining different versions of the toolset with different versions of the apps (no per-host version selection or anything; tools are meant to work on all deployed versions or have proper abstraction).
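For illustration, an environment in that sense is really just data; something along these lines would capture it (a sketch with invented field names and values):

```python
# Hypothetical environment definitions: app version + plugin versions +
# toolset stream. Field names and values are invented for illustration.
ENVIRONMENTS = {
    "3d_production": {
        "app": ("3dsmax", "2011"),
        "plugins": {"vray": "nightly"},
        "toolset": "production",
    },
    "2d_dev": {
        "app": ("nuke", "6.3v8"),
        "plugins": {},
        "toolset": "dev",
    },
}
```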

So for now I would say I am going to have some kind of abstract Tool class that has methods for setting up, tearing down, launching etc. For each tool I'd simply inherit from it and then auto-discover all derived classes to be able to populate launchers etc. (see the sketch below).
One of these might still be a SimpleTool class or similar that allows adding command-line tools by simply having a bunch of simple config files that anyone can write. I will have to see if there is a use for that first.
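A minimal sketch of what I have in mind (class, package and function names are just placeholders at this point):

```python
# Hypothetical Tool base class plus auto-discovery of derived classes that
# live in a "tools" package. All names are placeholders.
import importlib
import os
import pkgutil
import subprocess


class Tool(object):
    """Derive one subclass per DCC tool/version."""
    name = None          # e.g. "nuke-6.3"
    executable = None    # full path to the binary

    def setup(self, environment):
        """Prepare the process environment (copy files, set variables, ...)."""
        env = dict(os.environ)
        env.update(environment.get("env", {}))
        return env

    def launch(self, environment, args=()):
        env = self.setup(environment)
        return subprocess.Popen([self.executable] + list(args), env=env)

    def teardown(self):
        """Clean up anything setup() left behind."""
        pass


def discover_tools(package_name="tools"):
    """Import every module in the package and collect direct Tool subclasses."""
    package = importlib.import_module(package_name)
    for _finder, mod_name, _ispkg in pkgutil.iter_modules(package.__path__):
        importlib.import_module(package_name + "." + mod_name)
    return {cls.name: cls for cls in Tool.__subclasses__() if cls.name}
```

A launcher UI (or the render farm submitter) would then call discover_tools() to populate its list and launch() the selected class with one of the environment definitions above.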

Any input is very welcome. Feel free to roast me :)

Kind regards,
Thorsten

We had a system at my last place where all the tools we integrated with had their root folders included in Perforce. When we had a product ready for release, we would move it to a release branch and, in conjunction with an updater app that we developed (similar to Adam's setup), it would update to a particular revision or combination based on a configuration file.

Having the root of the application in Perforce was hugely beneficial, as you could tightly integrate with the app and also replace or fix any issues the app may have (amazing with Maya). We had a couple of versions of Max, Maya and Photoshop running under this system across multiple teams, each with their own configuration.

The biggest issue with this system was service packs for the applications. But, in combination with a good software rollout system, we could perform patches overnight without the artists being disrupted.

Interesting. So you had local installs that were overridden? Or did you roll them out as portable installs?

Regards,
Thorsten

What an excellent geek-out topic!

On the absolutely bare-bones practical side of launching multiple products:

I tend to use simple stuff like environment variables when I want to run tools in a certain 'mode'. For example, our userSetup.py runs from the zip distribution unless the user has an environment variable set, in which case it loads out of the loose Python files in the source tree. Everything can read and write env vars (even BAT files!) and they're easy to inspect. The main issue is making sure they are easy for end users to set/inspect – being invisible most of the time, it's easy to forget that you're running in 'dev' mode instead of 'production' mode. If the env var simply points the rest of the code to the right code/project/path, then it's really easy to hop between modes without zillions of if tests littered in your code.
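For what it's worth, the userSetup.py end of that pattern is tiny; a sketch assuming a dev variable like MYSTUDIO_DEV_PATH (variable name and paths invented for illustration):

```python
# Hypothetical userSetup.py snippet: import tools from the deployed zip unless
# a dev environment variable points at a loose source tree. Names are invented.
import os
import sys

DEV_VAR = "MYSTUDIO_DEV_PATH"                       # set only on developer machines
ZIP_DIST = r"\\server\pipeline\maya\toolkit.zip"    # deployed production zip

dev_path = os.environ.get(DEV_VAR)
if dev_path and os.path.isdir(dev_path):
    sys.path.insert(0, dev_path)    # 'dev' mode: loose .py files from the source tree
else:
    sys.path.insert(0, ZIP_DIST)    # 'production' mode: zipimport from the distribution
```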

On the theory of maintaining different end products from a single source tree:

I suppose the real strategy question to ponder is which mechanism is best for making sure that end-environment differences are respected with a minimum of work on your part. In a sense, a branched build environment and OO inheritance are both attempts to do the same thing – make sure that invoking 'do X' in a tool does the right thing in context, which may differ depending on lots of stuff ranging from what DCC tool is involved to what the user's job or security permissions are.

My setup is, strictly speaking, a misuse of branching, because the 'product' branches – the ones that spit out finished tools to users – aren't ever going to merge back into the main line; they are environment-specific modifications or patches on a main line. Since I'm dealing with a pretty small team, I don't have to worry about the more common branching situation where people go off and work in isolation and then return with refactored code (I do some of that, but it's a minority case for me). If the team had 10 people contributing without talking to each other every day, it would break down pretty quickly.

In old-school programming this would all be #IFDEF stuff. I avoid that sort of thing because it's so error prone – and not very Pythonic either: whether you have a lot of if/then in your code or in your 'headers', it's still a lot of ifs, and hence possible bad code paths that need to be exercised. It also encourages bad habits like magic constants and hard-coded names (pace Rob B, contextual constants are a great use for config files!).

The OO alternative, where everything is a concrete implementation of an abstract base, is conceptually appealing for that reason. The main drawback there tends to be unintended flow-through of changes – especially in Python, where there's nothing that will warn you if you're returning strings in the base class but floats in a derived class. Aggressive unit testing is the only way to survive if you go that route – it sounds like you're already going that way, so it's not unreasonable. Secondarily, this usually creates a ton of uninteresting boilerplate code or a hodgepodge of 'if env ==' tests, neither of which is fun.
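The sort of test that catches the strings-versus-floats mismatch is cheap to write, though; a sketch reusing the hypothetical Tool/discover_tools names from earlier in the thread:

```python
# Hypothetical contract test: every discovered Tool subclass must honour the
# return types promised by the base class. Reuses invented names from above.
import unittest

from launcher import discover_tools  # hypothetical module containing the sketch


class ToolContractTest(unittest.TestCase):
    def test_setup_returns_string_env_mapping(self):
        for name, cls in discover_tools().items():
            env = cls().setup({"env": {}})
            self.assertIsInstance(env, dict, "%s.setup() must return a dict" % name)
            for value in env.values():
                self.assertIsInstance(value, str, "%s env values must be strings" % name)
```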

Another alternative is to treat your core code as if it were a DLL – basically build, test and publish the core code, then import it into your target environments as a unit and go on to apply whatever local modifications or situational logic you need there. (In Python it's also trivial to do the subclassing this way.) This eliminates a lot of the boilerplate that's needed for generating hordes of do-nothing concrete classes, and it simplifies the testing regime. The issue there tends to be the accessibility of documentation, etc. – it's easier to tinker with the way a class or function works if you've got it right there in front of you. Eventually I'm going to refactor my own codebase into a hierarchy of projects instead of a single tree along these lines – project references in Aptana make it (relatively) doable, and it will be more hospitable to traditional branch-and-merge operations inside each project.
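A tiny sketch of that 'core as a DLL' idea (the studio_core package and everything in it are invented names): the environment-specific code imports the tested core as a unit and applies its local tweaks by subclassing.

```python
# Hypothetical environment-specific module: import the published, tested core
# package as a unit and apply a local override by subclassing.
# 'studio_core' and its contents are invented names for this sketch.
from studio_core.exporters import MeshExporter


class OutsourcerMeshExporter(MeshExporter):
    """Local patch for the outsourcer environment only."""

    def output_root(self):
        # Override one policy decision; everything else stays core behaviour.
        return r"\\dropbox\incoming\meshes"
```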

No matter how you slice it, the key problem is always that changes are flowing 'downstream' from bug fixes and new features in the common code, but also flowing 'upstream' from branches where edge cases and special situations reveal flaws or refactorable limitations in the core code. Something you do locally to solve a little problem turns out to be globally useful; something you do globally turns out to be worse than useless for half your applications, and so on. And there are also context issues: sometimes you want to see all the implementations of function X side by side (easy to do if they're all derived classes in one file), and sometimes you want to see all the code for target X together (easier if all X-specific code is in the X branch or product).

Alas, it's like designing a source tree for content – all the answers are wrong… though some are wronger than others :) I have not had a chance to really review this, but it looks interesting in this context. I'd have more confidence if the website did not look straight out of 1998 :)

Great stuff! Keep it coming. I will need a bit to thoroughly read through all of that and think about it :)

[QUOTE=instinct-vfx;18225]Interesting. So you had local installs that were overridden? Or did you roll them out as portable installs?

Regards,
Thorsten[/QUOTE]

We would install the applications via SMS or LANDesk, then overwrite the updated files and integration files over the top. We could have put the whole of Maya in Perforce, but it was getting a lot trickier, especially with later releases.

PS. I would also recommend floating licences any day, with the only caveat being that you need a solid network and licence server.

[QUOTE=Butters;18246]We would install the applications via SMS or LANDesk, then overwrite the updated files and integration files over the top. We could have put the whole of Maya in Perforce, but it was getting a lot trickier, especially with later releases.

PS. I would also recommend floating licences any day, with the only caveat being that you need a solid network and licence server.[/QUOTE]

Thanks for the feedback! We're all floating here. Way too many machines and licenses to keep track of. Except for a few evil companies not offering floating licensing (yes, my fist is shaking in your direction, Adobe! Curse you!).

Thorsten

Yes, the same fist shaking here at Adobe!

Hey Rob - what do you do for outsourcers?

Well, that was another kettle of fish. Long story short, the work we outsourced was mesh and material work; we had a system set up to import it into our asset management system once it was completed and signed off.

For Co-Creators on a project, we treated them as we would one of our studios across Australia, with the exception that they only had access to a particular branch of the project they were working on. With our configuration file we could also limit them from seeing or switching to a different project.

We could, however, have created a package from Perforce and a little installer/uninstaller that would allow outsourcers to work independently of a network, but we never got around to implementing it. Joy.

PS. For those of you that have not read this thread, you should check it out: Distributing Tools to outside companies