Developing tools for your pipeline: general question

Hey all! I’m doing some art pipeline tools and I’d like some opinions.

At what point do you actually deliver your tools to your customers, whether they’re designers, artists, tech artists, etc.? Do you have a development cycle for creating tools for your pipelines? (Maybe specifically art, because that’s what I’m doing…)

I usually try to make my tools as efficient and bug-free as possible by testing a lot of different use cases, but at some point the customer needs to use the tool and deadlines creep up on me. And I sort of expect there to be bugs after the first delivery. Then, if I need to make another iteration on it, I usually branch off of the working tool and add features, and I also get feedback from my customer and suggestions for improvements from my team. So what do you guys do? At what point do you say, “This works, I’m gonna hand it over now…”?

Thanks!!

Rule #1: nothing is ever done. The stuff that looks done is just the stuff where the fixes, upgrades or improvements aren’t important enough to move to the head of the line. Life is much easier when you accept this basic fact :slight_smile:

You can make releases less scary a couple of ways.

  1. Schedule them! Let people know when things are going to change, and schedule the inevitable bug fixes, feedback, and disaster recovery too. Mondays or Tuesdays are good, because people are much more unhappy about things going blooey when you are close to a deadline, which usually means end of week. Scheduled releases (coordinated with other people) are much less stressful for all concerned than “here it is, I have to move on to something else” releases.

  2. Have a rollback plan if things go wrong. If you pass out a new revision of a tool and something comes up – it often does – have a quick, reliable way to go back to the way things were before the problem. That way you can fix it on your own time without a hundred angry users breathing down your neck, which is a sure recipe for hasty coding and problems later on.

  3. Good error reporting should be built in. Artists only report a minority of the bugs! So an email sender, log files, or anything else automatic is a good way to make sure you know what’s really going on (see the sketch just after this list).

  4. Find all the threads on ‘test driven development’ here. It’s a lifesaver.

  5. Find all the threads on distributing tools here. Ditto.
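
As a rough illustration of point 3, here is a minimal sketch of “automatic” reporting using Python’s standard logging module. The log path, SMTP host, and addresses are placeholders, so swap in whatever your studio actually runs:

```python
# Minimal sketch: write tool messages to a per-user log file and
# email anything at ERROR level to the tool author.
# The SMTP host and addresses below are placeholders (assumptions).
import getpass
import logging
import logging.handlers
import os
import tempfile


def get_tool_logger(tool_name, author_email="tools@example.com"):
    logger = logging.getLogger(tool_name)
    if logger.handlers:  # already configured in this session
        return logger
    logger.setLevel(logging.DEBUG)

    # Everything goes to a log file you can ask users to send you.
    log_path = os.path.join(tempfile.gettempdir(),
                            "%s_%s.log" % (tool_name, getpass.getuser()))
    file_handler = logging.FileHandler(log_path)
    file_handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(file_handler)

    # Errors also get mailed to you automatically (assumes an internal
    # SMTP relay that accepts unauthenticated mail).
    mail_handler = logging.handlers.SMTPHandler(
        mailhost="smtp.example.com",
        fromaddr="%s@example.com" % getpass.getuser(),
        toaddrs=[author_email],
        subject="[%s] error report" % tool_name)
    mail_handler.setLevel(logging.ERROR)
    logger.addHandler(mail_handler)
    return logger


# Usage inside a tool (hypothetical tool name and function):
# log = get_tool_logger("uv_snapshot")
# try:
#     run_export()
# except Exception:
#     log.exception("Export failed")  # stack trace goes to file + email
#     raise
```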

Good luck!

What Theodox said.

Also, know your customers / project teams!
Their expectations, attitude, and experience also play a role in how you develop tools. There’s a difference between making stuff for a junior team of game artists and for senior animators in film (we have both where I work).

Try to find out before you start developing:

  • Know if the users want intuitive interfaces (sometimes at the cost of flexibility), or if they can deal with complexity. Senior and film folks seem to do better with “power user” tools.
  • Know if they are okay with beta versions of your tools (sometimes a beta tool can be worth more to them than no tool at all).
  • Find out what your release schedule should be. Some project teams prefer ASAP (especially when they’re okay with beta tools), others want a more predictable model. Both can work, as long as everyone involved is aware of the consequences.

Get requirements:

  • Get your use cases right first. Don’t implement use cases that you or your team consider useful - the clients must find them useful! In other words, run ideas past your clients first, and prototype if necessary.
  • Check how many users of a tool would benefit from a feature. Sometimes people suggest features just because THEY like them, but nobody else on the team cares at all. Obviously, focus on the features that are requested by, and benefit, the most users.
  • Based on your analysis of features, you can rank them, add them to your backlog and start implementing in an agile fashion.
  • Have some change control process in place that evaluates requests in terms of how many people benefit from the change, whether it supports the main goal of the tool (or whether it’s better to spin it off into a separate tool), how much work it will cost you, and of course whether it’s possible to implement at all (some people have crazy ideas about what we TAs should be able to do!).

Requirements = everything requested before development starts.
Changes = anything requested after agreement on the requirements has been reached. Avoid lots of uncontrolled changes once development starts; that leads to scope creep. Instead, evaluate changes and add them to the backlog (except for bugs).

Once your tool is done:

  • Find a way to inventory your tools. Don’t follow the “let’s dump every script into a common folder” method that I’ve seen at quite a few studios. Add at least some info about what the script is supposed to do, what dependencies it has (e.g. 3rd party modules), and what software it was originally developed for. If you want to do it properly, invest time in documentation and versioning (a minimal manifest sketch follows this list). This helps with re-use and lets you check whether you already have a similar tool.
  • Have a feedback process TO the art team. People will stop giving you feedback if they get the feeling you’re not listening. That is, explain why some changes may not be implemented, update users on new features, fixed bugs, etc. (keeping a changelist.txt or similar is useful). Depending on the art team’s professional maturity you may have to communicate a lot to remind them you’re listening :confused:
  • Think about training - does your tool require it? How do you ensure everyone knows about the capabilities and limitations of the tool? e.g. a readme.txt, a short demo to the team, a ppt, built-in help, a video, a wiki?
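
For the inventory point above, here is one lightweight sketch: a small manifest file that lives next to each tool, plus a script that walks the tools folder and prints what it finds. The field names, file name, and folder layout are my own assumptions, not a standard:

```python
# Sketch of a per-tool manifest convention: a tool.json file next to
# each script. Field names and paths below are assumptions.
import json
import os

EXAMPLE_MANIFEST = {
    "name": "uv_snapshot",                # hypothetical tool
    "description": "Exports UV layout images for selected meshes.",
    "version": "1.2.0",
    "software": "Maya 2016",
    "dependencies": ["Pillow"],           # 3rd-party modules it needs
    "author": "your.name@example.com",
}


def list_tools(tools_root):
    """Walk a tools folder and print what each tool does and needs."""
    for dirpath, _dirnames, filenames in os.walk(tools_root):
        if "tool.json" not in filenames:
            continue
        with open(os.path.join(dirpath, "tool.json")) as f:
            info = json.load(f)
        print("%s %s (%s): %s" % (info["name"], info["version"],
                                  info["software"], info["description"]))
        if info.get("dependencies"):
            print("  requires: %s" % ", ".join(info["dependencies"]))


# list_tools(r"D:\studio\tools")  # hypothetical path
```

Even this much makes it easy to browse what already exists before writing a similar tool from scratch.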

Be wary of management-designated product owners.
Wrong: management assigns a product owner who doesn’t care, doesn’t have time, or is selfish about requested features, and your team then treats this person as the Scrum PO.
Right: designate your own PO. Scrum doesn’t say the client necessarily has to be the owner. If the owner can’t provide value to the project, it is okay to have someone act as PO who “acts in the best interest of the client” and who represents the client. This could be someone from your own team who makes sure the wishes of the management-designated PO match the requirements of the project team. This way you can deal with selfish/lazy/busy POs - they may even be happy that you do all the work.
(When I first did agile, I did it the wrong way quite a few times :frowning: The results were wrong features, a lack of feedback, and in general a horrible delivery and feedback experience, because the PO provided very little value.)

One more thing I’d like to add, to tie together the comment about beta tools and the original question.

As Theo said, most of your tools will always be in beta, at least to you. You’ll always find problems, whether it’s a bug you didn’t think about, an old tool that no longer fits the parameters of your pipeline, or something new you’d like to add or optimize. The main thing is: if you are not comfortable enough to support the tool in its current form, you need to discuss that with your users. No one will be happy if you deliver something subpar that even you don’t want to support. All of the ideas and planning advice above point to being transparent with your users about the expectations for a tool, its possibilities for future work, and your capability to achieve the goals you’ve set out to reach. As you work on more tools, you’ll start to understand how to break the work into manageable chunks relative to your own skill set. So, as expectations become clearer between you and your customers, you’ll start delivering on time against those expectations, and you’ll notice how well you and your users work and communicate together.

Last thing: if something is going to blow up and you didn’t really have a contingency plan, just let your users know. People hate being dodged; talking to them directly and keeping them in the loop goes a long way.

Don’t view it as a customer or client relationship, and don’t work on something until you feel it is absolutely done and only then unveil and deliver it. The dullest tool in the box is the one that never gets used. If your artists & designers haven’t had input into the tools they need to solve their problems, they may just turn their noses up at the result and never use it (we’ve all faced this issue). Show them in-progress versions and ask for feedback. Find the one or two people on the team who have the innate ability to instantly get the tool to crash (all teams seem to have at least one of these people) and give it to them early.

I had done some writing about getting an infrastructure set up for TA tools development a while back when I worked at Vigil. (see: http://tech-artists.org/forum/showthread.php?2497-Building-Tech-Art-Infrastructure)

The thing I had that was most helpful was keeping my trusted partners/squeaky wheels/crash-prone users on a separate tools branch from the rest of the team. I’d develop all my tools in that branch, so they would get the development version of whatever tool I was working on at the time. I sat near these users, so if/when something went wrong I’d know almost immediately. After a little while of stability and active use I’d feel comfortable enough to integrate the latest version over to the main tools branch that got pushed out to everyone.

@ozzmeister00 Thanks, I’ll have to check out that post. About branching: I’m totally doing that as well; it’s helpful to have several versions of tools. I tend to push out the original intent of a tool asap, but our game changes over time and so do the tools. I’ve had to go back many times to re-evaluate what I need them to do and change them. But yes, I keep a separate branch for users so that if I’m working on something that’s broken, they still have a tool that works.

@Theodox In your experience, is it the tool creator’s job to manage the tool after it’s been delivered? At my job (the only TA position I’ve ever had, so I’m not sure what other game companies do…) we are usually put on a project, develop tools for it, and are then inherently responsible for their upkeep. That gives us time and room to step back and ask how our tools can be improved, and if there are bugs, we are responsible for going back and fixing them. Usually a TA will have one ongoing top priority to manage, and other projects are secondary to that. I guess certain TAs here own certain tools and are the experts on them. Is it different in other places?

A lot of this advice comes down to efficiently collecting feedback. I ask a lot of questions before I start a project, and then when the user actually uses the tool, I frequently check in to see how it’s going. Luckily I sit right next to them, so they just tell me if a feature needs to be added or if something is broken. But they won’t tell me 100% of the time, and they won’t know all the ways the tool can be improved. I’m curious about specific implementations of a feedback system directly in your tools. Do you have a window that pops up if the tool crashes, asking the user to write an email? Do you have a window or button somewhere where the user can write a message? Or maybe just a Google doc logging the bugs the user runs into?

A lot of this goes back to my other post about being a SWE turned TA… I also see the value in getting more experience with art, to understand artists’ workflows and be more of a mind reader for what they need.

Another problem I’ve had is if I make a mistake. What if you build something and, like what Jeff Hanna was saying, it just doesn’t serve a purpose? I made one iteration of a feature and it’s not really used… Do I take it out if artists aren’t using it? Do I try to fit it to solve another problem? Do you ever throw out stuff you don’t need?

Thanks to all of you for the advice, it’s a lot to digest, in a good way :smiley: I’ll have to think hard about how I can apply it to my day-to-day.

Hopefully this makes sense!

[QUOTE=isabellc;26599]
Another problem I’ve had is if I make a mistake. What if you build something and, like what Jeff Hanna was saying, it just doesn’t serve a purpose? I made one iteration of a feature and it’s not really used… Do I take it out if artists aren’t using it? Do I try to fit it to solve another problem? Do you ever throw out stuff you don’t need?[/QUOTE]

If you do proper requirements gathering and change control, then every feature should have had a purpose when it was implemented.
So you should first check whether the purpose still exists - maybe it doesn’t, i.e. the use case isn’t valid any more.
If it exists, then one of the following is the reason people are not using the feature:

  • it’s buggy -> you can fix it
  • it’s hard to use -> you can improve it, or provide training or help
  • people don’t know it exists -> raise awareness
  • your PO is selfish and asked you to implement a feature HE cares about, but nobody else does -> learn your lesson: choose a better PO, and get feedback and requirements from more than one source

What to do with the feature?
If it causes extra maintenance or trouble when extending the tool, then remove it.
If not, leave it in - somebody may actually find a use for it. As a programmer from Adobe said: “we don’t always know how people will use the features we put in. They are artists. Often they surprise us with the ways they use our tools to be more creative.”

I have a group of users who are my ‘beta testers’: people who are comfortable working for a day with a new or updated system, who expect hiccups, and who are ready to give me detailed and specific feedback with repro cases.

Usually these are senior artists who are stakeholders for the tools and systems I’m working with.

What are some different ways of collecting feedback on tools from your artists or designers? (Other than asking them.) Do you inject error and crash reports that are sent back to you?

Just curious how I can do this better :slight_smile:

Automated crash reports are super useful – artists manage to combine “I love to complain” and “I hate to actually file bug reports” in ways you would never imagine. It’s particularly good if you collect stack traces and email them to yourself so you can spot the offending lines – 90% of the time you’ll just have forgotten to bulletproof something and you can have a one-line fix ready right away.

maya.utils has a hook called formatGuiException, which gets called any time a Python exception fires in the GUI. Rob G’s book has some example code, helpfully githubbed here: https://github.com/rgalanakis/practicalmayapython/blob/master/src/chapter3/excepthandling.py

I recently implemented an automated error-reporting email system based on the aforementioned book. It’s a great book, highly recommend it.

We noticed that some of our stack traces went through the standard exception hook, so we had to override both maya.utils.formatGuiException and sys.excepthook. You can tell them apart by how the output is commented in the script editor: if it uses MEL-style comments (//) and appears red, it went through formatGuiException; if it uses Python comments (#), it came from sys.excepthook.
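
For anyone wanting to try the same thing, here is a rough sketch of wrapping both hooks and funneling the traceback into a single reporting function. It is not the book’s exact code; report_error() is a placeholder, and the formatGuiException signature may differ slightly between Maya versions:

```python
# Sketch: report uncaught exceptions from both Maya's GUI hook and the
# standard Python hook, then fall through to the original behaviour.
# report_error() is a placeholder - swap in email, a log file, etc.
import sys
import traceback

import maya.utils


def report_error(text):
    # Placeholder delivery mechanism.
    print("TOOL ERROR REPORT:\n" + text)


_original_format = maya.utils.formatGuiException
_original_excepthook = sys.excepthook


def _gui_exception_hook(etype, value, tb, detail=2):
    """Exceptions surfaced in the Maya GUI (the red, //-commented ones)."""
    report_error("".join(traceback.format_exception(etype, value, tb)))
    return _original_format(etype, value, tb, detail)


def _std_exception_hook(etype, value, tb):
    """Exceptions that go through sys.excepthook (the #-commented ones)."""
    report_error("".join(traceback.format_exception(etype, value, tb)))
    _original_excepthook(etype, value, tb)


maya.utils.formatGuiException = _gui_exception_hook
sys.excepthook = _std_exception_hook
```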

haha, I like the way you put it.

Now I just wish I could also deal with artist superstition.
Just never roll out any tools when your license server is having issues. Our new tool got a lot of flak from artists for shutting down Maya, when it was really the license server’s fault. I guess it didn’t help that many of them couldn’t read the English license error message either…

There was this other thread not long ago about logging errors/info from tools.

@RobertKist: Well, I had a guy who insisted that his max performance depended on how close his keyboard was to his monitor :slight_smile:

That’s why I never use Max - happens to me every time.