Open Source Publishing Development

Sorry for being lengthy, but I've pretty much concerned myself with QA on the art production side for the last 3-4 years…

As I said, a pipeline is a process, defined and carried out by humans and (often) supported by pipeline tools. It's totally possible to have a pipeline without any scripting involved; it's just more cumbersome. But you have to start from there to understand how you can fit tools into the process.

First, when defining a QA process, you have to answer the questions WHO, WHEN and WHAT.

WHAT are the steps your QA process must carry out?

  • technical checks
  • artistic checks
    both decide if an asset passes or fails
  • other: defining checklists, usage monitoring, auditing for correct use/abuse, etc.
    (side note: think about exceptions. What if your art brief consists of 100 assets, each with a distinct and defined poly-count? Is your checklist handling system flexible enough for such things?)
    (side note: if you allow e.g. ADs to override assets despite failed checks, you need to monitor and audit, to avoid abuse by ADs/Producers who want to take shortcuts that compromise quality)

WHO carries these steps out? This depends on your studio org and the people you have available. For most tasks the roles are pretty set though: you usually have someone who does the planning, someone who is technically knowledgeable, someone who is artistically knowledgeable, and the artist. Often there's an overlap. In your process you need to define who makes which decisions, who becomes active at which stage, and who needs which information and when they need it.

WHEN? This defines the checking process: when do reviews take place? When do technical checks happen? After each production step? Before export? Daily? Depending on the assets and the project's focus on quality, this can vary a lot! You could work on a character that takes 30 days to produce, or on small-scale props where your artists churn out a dozen a day. Assets can be very high poly or low poly.

Once you have answers to all this, your process is defined.
Next: how can you support this with tools? And at this point the re-usability of most pipelines breaks, because they're not flexible enough.

Let’s break down the WHAT some more:
Remember, ALL checks must pass for the asset to pass (at least in most cases :slight_smile: )

  1. technical checks which can be automated
  2. technical checks which cannot be automated and must be checked manually
  3. artistic checks which cannot be automated and must be checked manually
    (without points 2 and 3 your checking will be incomplete)

Point 3 is obvious: you cannot automatically check whether an asset conforms to a concept or an art style.
Point 1 covers checks which are generally applicable to a whole group of assets, like naming conventions, frozen transforms, etc. (a small sketch of such a check follows below).
Point 2 covers checks which cannot be automated because of things like:
a) time/skill/budget constraints: your programmers just do not have the time/skill/budget to implement this check (and trust me, this WILL happen; we have over 100 different checks and clients can still think of new stuff! :wink: )
b) technical limitations: your check framework doesn't support (or integrate with) whatever tech it needs for the checking
c) technicalities which need human judgment: e.g. the arrangement of edge-loops for good animation/rigging, or anything that must be checked manually because of a) and b)
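To make point 1 concrete, here is a minimal sketch of what such automatable checks could look like in Maya Python. The naming pattern and the tolerance are invented for illustration; only the maya.cmds calls themselves are standard:

# Minimal sketch of "point 1" style automated checks (illustrative only).
# The naming convention below is made up; adapt it to your own rules.
import re
from maya import cmds

NAME_PATTERN = re.compile(r"^(geo|jnt|ctl)_[a-z]+_\d{2}$")  # hypothetical convention

def check_naming(nodes):
    """Return the nodes that violate the (made-up) naming convention."""
    return [n for n in nodes if not NAME_PATTERN.match(n.split("|")[-1])]

def check_frozen_transforms(nodes):
    """Return transforms whose translate/rotate/scale are not at identity."""
    bad = []
    for n in nodes:
        t = cmds.getAttr(n + ".translate")[0]
        r = cmds.getAttr(n + ".rotate")[0]
        s = cmds.getAttr(n + ".scale")[0]
        if any(abs(v) > 1e-5 for v in t + r) or any(abs(v - 1.0) > 1e-5 for v in s):
            bad.append(n)
    return bad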

The manual way:

  1. after the workflow is defined, people know whom to inform and when to inform them.
  2. they know who needs to attend reviews, what the review standards are, and who makes the decisions
  3. decisions are hopefully(!) recorded in writing (e.g. Excel, e-mail) and not just verbally
  4. new tasks will be created (for the next production step, or for any fixes), e.g. in Excel, or also just verbally

The automated way:
Your pipeline can only hope to fully automate (i.e. no human intervention) point 1.
However, a bad pipeline would stop here and totally forget about the two other kinds of checks that need to be carried out by humans (points 2 and 3)!
A good pipeline would now offer a system to collect the decisions of these two other steps as well, and integrate them with whatever data it collected by itself.
To do this, the pipeline must still rely on human beings to manually enter the results of the manual checks. This is where the pipeline must work with people like the AD/Lead for the art check, a TA/art-QA for the tech check, and maybe a producer to schedule follow-up tasks (if your pipeline tools are flexible, you could also update/spawn tasks in systems like Jira or Hansoft). See the sketch below for one way such decisions could be recorded.
For this the pipeline tools must know the WHO and WHEN, i.e. you need a way to define your process and actors and make it understood by the pipeline tools (another area where most pipelines become inflexible and bound to one certain way of doing things).
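To illustrate what "collecting the decisions" could look like in practice, here is a rough sketch of how automated results and manually entered sign-offs could live in one record, so the pipeline can reason about both. The structure and field names are assumptions made up for this example, not any particular tool's schema:

# Sketch of a combined check record (all names and fields are illustrative).
# The idea: automated results and manual sign-offs share one structure,
# so the WHO and WHEN of the process are visible to the pipeline tools.
from datetime import datetime

def make_check_record(asset, check_name, passed, checked_by, kind, notes=""):
    return {
        "asset": asset,
        "check": check_name,
        "kind": kind,              # "automated", "technical-manual" or "artistic-manual"
        "passed": passed,
        "checked_by": checked_by,  # "pipeline" for automated checks, a person otherwise
        "checked_at": datetime.utcnow().isoformat(),
        "notes": notes,
    }

def asset_passes(records):
    """An asset passes only if every recorded check passed (points 1, 2 and 3)."""
    return all(r["passed"] for r in records)

records = [
    make_check_record("chr_hero", "naming_convention", True, "pipeline", "automated"),
    make_check_record("chr_hero", "edge_loop_layout", True, "jane_ta", "technical-manual"),
    make_check_record("chr_hero", "matches_concept", False, "art_director", "artistic-manual",
                      notes="Silhouette too bulky, see concept v2."),
]
print(asset_passes(records))  # False: the artistic check failed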

You see, the actual pipeline tools do very little: They do automated checks and provide glue/communication between the human parts of the pipeline.
However, especially through communication, a pipeline can contribute tremendously to the success of art QA!

Another issue to think about is how to interweave QA with production steps. QA at the end is bad. The later QA happens the more you have to undo. Optional QA is usually as useful as no QA. Checking results which are not monitored and taken into account in decision making are useless. A pipeline and its process must ensure these problems do not happen. This is one of the few times where I think it is actually okay for a pipeline to be inflexible.

Summary: for QA, focus on the entire process, and ensure your tool is aware of it and supports it by taking information generated outside of it into account.

Unfortunately I haven't come across much good literature about software QA, and most of it is geared towards very formalized environments dealing with code (also be warned, it's a super dry subject, which explains the bad reviews!), yet I found these books quite helpful for understanding that QA is really about process, and what methods it uses: http://www.amazon.co.uk/Software-Testing-Continuous-Quality-Improvement-ebook/dp/B000Q36ELK/ref=sr_1_3?ie=UTF8&qid=undefined&sr=8-3&keywords=software+quality+crc
http://www.amazon.co.uk/Software-Quality-Assurance-Implementation-Alternative/dp/0201709457/ref=sr_1_1?ie=UTF8&qid=undefined&sr=8-1&keywords=software+quality+assurance (even though this one gets better reviews, I feel it's even more abstract)

Finally, there is another reason why your pipeline/QA tools must be flexible: process improvement (think CMMI & Co). If the QA process needs to change, your pipeline needs to change (because people shouldn't be slaves to a pipeline :wink: ). I know a lot of this is very hard to accomplish, especially the desired (required) flexibility, but imho that's one of the areas where pipelines need to improve most. And it's also what will decide whether your pipeline is generic enough to be re-used and shared by many projects.

Just wanted to say Robert, you addressed pipeline and validation for a pipeline in a very good way!
On the other hand I think it’s good to also note that many checks that CAN be automated are really beneficial to production if moved away from manual checks.

That is because some automated checks can be:
- Hard to check manually (either hard to track down or a lot of data to process)
This means that by implementing such an automated validation you’ll gain a lot of speed in the process. This will make it much more likely that you perform such checks more often! :slight_smile:
- Very prone to human error (think naming conventions)
Well, you know what that means for production!

And about quality control (points 2 and 3 you mentioned) I think it’s also good to note the other side:
It's likely that an artist who gets hired to create a character is somewhat familiar with how good topology works, but not necessarily with the studio's naming conventions. (He's familiar with how to check his 'art'.)
For example, when you bring in a freelancer you would expect a high quality mesh, but the way the model is laid out in the scene and/or cleaned up will likely not match 1:1 how it's done in the studio.

Also note that some levels of review can be considered more a part of a production's asset management (Shotgun?) than a single step in Publish, like when a director needs to make decisions on how the art is looking. Though I agree that Publish should allow that to flow nicely as well, e.g. 'ping' the director when new art is done (something Shotgun could also do) or ask a team member for review by e-mail; and again, this will vary per production (maybe even per asset!).

Be ready to adapt, always! (This really is the number one rule!)
In production you never seem to have everything in place, because it's constantly changing. New tools come in, a new project defines a new set of rules for the art, or another client just deals differently with reviews. Especially if you vary a lot in art styles and/or types of projects, you'll constantly need to adapt your pipeline to those needs. So the first thing any system should do is make sure it stays possible, within budget/time constraints, to adapt: to change the validation process or add a small component wherever necessary.

Whether an artist just wants to run a single check (he might be facing some issues!) and run the 'auto-fix', or publish with a long series of validations, it will always need to be customizable.
This is also why I thought of the 'node-graph'-like structure, where the process of validations and exports is built up out of modular chunks (nodes).
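As a rough illustration of that node idea (all names invented for this example, and not Pyblish's actual API), each chunk could be a small class that a runner walks through, collecting all results instead of stopping at the first failure:

# Rough sketch of the "node-graph" idea: each check or export step is a small,
# modular chunk that can be re-ordered or swapped per project.
# Entirely illustrative; this is not Pyblish's actual API.

class Node(object):
    """One step in the graph: validate something, optionally offer an auto-fix."""
    label = "unnamed"

    def run(self, data):
        """Return a list of problems found (empty list means the check passed)."""
        raise NotImplementedError

    def fix(self, data):
        """Attempt an auto-fix; return True if something was fixed."""
        return False


class CheckPolycount(Node):
    label = "polycount budget"
    budget = 10000  # made-up per-asset budget

    def run(self, data):
        return [n for n, count in data["polycounts"].items() if count > self.budget]


def run_graph(nodes, data):
    """Run every node and collect all failures instead of stopping at the first one."""
    return {node.label: node.run(data) for node in nodes}


report = run_graph([CheckPolycount()], {"polycounts": {"body_geo": 8500, "hair_geo": 15200}})
print(report)  # {'polycount budget': ['hair_geo']}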

[QUOTE=BigRoyNL;25371]Just wanted to say Robert, you addressed pipeline and validation for a pipeline in a very good way!
On the other hand I think it’s good to also note that many checks that CAN be automated are really beneficial to production if moved away from manual checks.
[/QUOTE]

There’s absolutely no argument about the usefulness of automated checks.

However the reality is more complex. I work in an outsourcing studio and we work with clients small to big (EA, Ubi, Naughty Dog), mobile to Gen4, from a few weeks up to a year, and on pretty much all platforms. We often have 20 to 40 concurrent projects.
Please believe me when I say that the creativity of our clients, but also artists, for what can possibly be checked knows no bounds :wink:

On good projects we reach a technical check coverage of 90%. On bad projects maybe 30% to 40%.

Why is that? First, we don't have a huge team, so we have to prioritize which projects deserve it; e.g. writing a check for 1 week for a project that lasts 2 weeks isn't useful. Some things we cannot do due to technical limitations. For some we lack the skill. For some we lack support in Python (or the time for homegrown solutions). Some technicalities still require human checking, such as topology. And then there are a lot of "throw away" checks: they are project/client specific, i.e. specific to their engine or workflow, and they cannot be re-used.

Trust me, you will never be able to have 100% of technical checks automatically covered. Although I would welcome an algorithm that can detect funny edge loops and tell me if they make sense or not :wink:

Which leads to the issue of trust and skill. I agree that whoever you hire should know what they're doing. But that's not QC, that's just trust. As the saying goes, "trust is good, control is better". And you will encounter people who try to take shortcuts; I've encountered this in every studio. When the deadline gets closer, the time for quality gets shorter: people will sacrifice long-term gains for the short-term gain (instant gratification) of making whatever milestone. Things will be hectic, there may be crunch, and people's time and attention will be limited. People will be tired and more mistakes will happen. At this point control is essential, but so is support from the AD and producer, to ensure quality isn't just lip service that takes a backseat when it counts most. I.e. quality isn't optional, and too many shitty games prove that. Whatever short-term gains you get will be nullified in the long run, e.g. when broken assets cause a chain reaction throwing the rest of the pipeline out of whack. If you're serious about QC then you control, always, and exceptions should be difficult! Or else they'll happen all the time and your QA process is useless.


Regarding Shotgun and other tools: I would expect a modern pipeline not to exist in isolation. We already have isolated tools which are ignorant of the rest of the pipeline: Max, P4, Photoshop, Hansoft. A pipeline which is just a bigger isolated tool wouldn't be very useful. A major part of the work of people like ADs, TAs and Producers is coordinating these tools and their users, via tools, Excel sheets, e-mail, meetings, etc. If I want to check the history of an asset I have to go to P4, to the wiki which holds the production brief, to the Excel sheet which holds the estimates, and to Hansoft which holds the tasks. Good pipeline tools would help me connect all of this, so that e.g. I cannot open assets for which I have no task; so that I can easily access the brief or wiki that belongs to my asset; so that I can easily follow communication, production status (where in the pipe it is), estimated time left, time spent, users working on it, QA status and number of change requests, and not dig around manually in many isolated systems.

This is all likely beyond the scope of your work, but if you make your pipeline open, so that TAs can easily push and pull data to/from it and thus create these connections, you'd be doing everyone a great favor! In my company we have started offering RESTful interfaces for some services, and I think a service-oriented architecture could be quite useful in that regard, to provide better interoperability.
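As an illustration of that push/pull idea, a TA script talking to such a RESTful service might look roughly like this; the base URL, endpoints and fields are invented for the example, only the requests calls themselves are standard:

# Illustration of pulling and pushing asset data over a RESTful service.
# The URL, endpoints and fields are hypothetical; adapt to whatever your
# pipeline actually exposes.
import requests

BASE = "http://pipeline.example.local/api"  # hypothetical service

# Pull: where is this asset in the pipe, who works on it, what is its QA status?
status = requests.get(BASE + "/assets/chr_hero/status").json()
print(status)

# Push: record the result of a manual art check next to the automated data.
requests.post(BASE + "/assets/chr_hero/checks", json={
    "check": "matches_concept",
    "passed": True,
    "checked_by": "art_director",
})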

Not really adding anything here again, but just wanted to say that I fully agree with that last post of yours.
Our studio is nowhere near as big as that. Actually we're quite the opposite, working with a small team (relatively speaking, of course) where many tasks are shared among everyone in production.
Nevertheless the points you mention hold true on our end as well.

[QUOTE=BigRoyNL;25371]Just wanted to say Robert, you addressed pipeline and validation for a pipeline in a very good way!
On the other hand I think it’s good to also note that many checks that CAN be automated are really beneficial to production if moved away from manual checks.
[/QUOTE]

There’s absolutely no argument about the usefulness of automated checks :slight_smile:

But I don’t think you will ever get rid of manual checking, unless you employ an AI.
For example, how do you check edgeloops?

  • Do you trust the artist? But trust is not control. And when deadlines are tight, people will take shortcuts and/or get sloppy due to stress and crunch.
  • Do you just hope the art director will look at it? Do you have a way to ensure he does?

What I'm trying to say is: if you leave a hole in the QA process, you might just as well not do QA at all, because in the end it'll again be pure chance whether an asset breaks the rest of the production pipeline or not. But since you cannot possibly automate everything, you need to think about how to make your tool aware of the decisions made by the human actors in the pipeline. We also found it useful to educate producers on when and how often to do checks, and how to coordinate manual and automated checking, i.e. we give them tips on how to coordinate the tool with the production process.

Another reason for manual checking: it's cheaper. In the business world it doesn't matter whether you can technically implement something; it matters whether it's financially feasible.
Example: I will not spend 1 week developing a check for a project running 2 weeks. I will not spend 1 week developing a check that gets run on 3 assets. I will not spend 1 week (i.e. 40 man hours) for a job that can be manually checked in 5 man hours.
And, from looking at our own check base, I can say we are split roughly 50/50 between generic re-usable checks and client/project-specific checks (*). This means we won't any time soon be able to say "we can stop developing, we covered everything". Clients will have new requirements and tech will keep changing. This means I very rarely expect that we can cover 100% of a project's technical checks.

(*) I work for an outsourcer; we have up to 40 concurrent projects, from AAA to small-scale mobile and Facebook games, with production times from 1 week to 1 year. And clients are super, super annoyingly creative in coming up with obscure things for us to check. Cost/benefit analysis plays an important part in deciding checking strategies for assets.
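That cost/benefit call can be reduced to a back-of-the-envelope break-even check; the numbers below are only the ones from the example above, and the helper is purely illustrative:

# Back-of-the-envelope break-even for automating a check, using the numbers
# from the example above (1 week of dev = 40 man-hours vs. 5 man-hours of
# manual checking). Purely illustrative.
def worth_automating(dev_hours, manual_hours_per_pass, expected_passes):
    """Automate only if the development cost is below the total manual cost."""
    return dev_hours < manual_hours_per_pass * expected_passes

print(worth_automating(40, 5, 1))   # False: don't build it for a one-off 5-hour job
print(worth_automating(40, 5, 20))  # True: a long-running or shared check pays off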

@RobertKist; I found one of the books you mentioned on the American version of Amazon, and the reviews are quite the opposite, close to five full stars. Is it the same book, you think?

[QUOTE=marcuso;25378]@RobertKist; I found one of the books you mentioned on the American version of Amazon, and the reviews are quite the opposite, close to five full stars. Is it the same book, you think?
http://www.amazon.com/Software-Testing-Continuous-Quality-Improvement-ebook/dp/B000Q36ELK[/QUOTE]

Yes it is. Just be warned: it's dry, it's made for formal processes, and it's intended for code. I think there's a chapter about Agile in it, though.

In essence art assets are still "IT artefacts", i.e. data required to run a program, so they need to be QA'd too; thus the knowledge in the book applies to them as well.
Except that real-world game art production is much less formal. I think taking the processes right out of the book and applying them unmodified would make for great drama (or comedy). But in any case, knowing the "proper formal way" is a good starting point for developing your own art QA process. It made me think a lot about our own approaches.

Yes it is. Just be warned: it's dry, it's made for formal processes, and it's intended for code. I think there's a chapter about Agile in it, though.

No problem at all Robert, I eat these kinds of books for breakfast. Cheers!

Very interesting.
I have to do the same thing for a studio (and a full pipeline…). :wink:

Thanks! You are very welcome to join and use Pyblish if you want to. Show your supervisors the Wiki and maybe try things out; it might just be that it goes faster and turns out better by working together. :slight_smile:

When I start, I will try Pyblish for sure. And if it matches my current studio, I will join. :wink:
But for now, I have to make some decisions:

  1. Folder structure
  2. Stay on Windows or switch to Linux (Photoshop, 3ds Max…)
  3. FTrack stuff
  4. More…

Hi,

I have quickly tested Pyblish.
Here is a function to test whether an object has a unique name:

from maya import cmds  # required for the ls() calls below


def process_instance(self, instance):
    mismatches = []
    mismatches_tmp = []
    for node in instance:
        # collect every short name that resolves to more than one node in the scene
        mismatches_tmp.extend(
            [cmds.ls('*%s*' % x, long=True)
             for x in filter(None, node.split('|'))
             if len(cmds.ls('*%s*' % x, long=True)) > 1])

    for mismatch in mismatches_tmp:
        mismatch.sort()
        if mismatch not in mismatches:
            mismatches.append(mismatch)

    if mismatches:
        msg = "The following nodes have the same name\n"
        for node in mismatches:
            msg += "\t{0}\n".format(str(node))

        err = ValueError(msg)
        err.nodes = mismatches

        raise err

The first problem is: if one validator fails, the whole process stops.
Generally we want to run all the validators in one go, so that at the end the artist can check/fix globally what is wrong.
Validate 1, fix, validate 2, fix, validate 3, fix… can be a nightmare.

And for the design, at my previous studio we used something like this (here is a quick UI from Qt Designer):


The “enable” checkbox and “autofix” are for the “start all” button.
“Fix” and “autofix” are grayed out when we cannot autofix by code.
The color shows whether the validator passed or not.
We can click on each validator individually to check whether it is ok.
We can click on each fix button individually to retry an autofix.
All artists were very happy with this because it was very fast.

A little fix in the file select_object_set.py from pyblish_maya, line 49:
replace instance.add(node) with instance.add(cmds.ls(node, long=True)[0]).

Thanks.

Hey Deex,

I’ve opened up an issue about this for you here:

Should get fixed shortly.

I think your UI example is great, would you like to post it in our GUI issue so everyone can see?

Good work on getting your first plugin working, looks great!

Best,
Marcus

This has been fixed.

Previously, in the Script Editor, you would get messages about validations as though they were exceptions. Now you'll be getting them one by one as plain text, so it's easier to spot which validations went wrong. It is then up to you to provide an appropriate message for the users of your plugin via your exception; the message passed to the exception is the one shown to users in the Script Editor.
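For example, a validator built on the same pattern as the unique-name plugin above could phrase its message like this; the class name and the check itself are made up for illustration:

# Minimal sketch of giving users a readable message through the exception,
# following the same pattern as the unique-name validator earlier in the thread.
import pyblish.backend.plugin


class ValidateNoSpacesInNames(pyblish.backend.plugin.Validator):
    families = ['model']
    hosts = ['maya']
    version = (0, 1, 0)

    def process_instance(self, instance):
        invalid = [node for node in instance if " " in node]
        if invalid:
            # This message is what the artist will see in the Script Editor.
            raise ValueError(
                "Node names may not contain spaces:\n\t%s" % "\n\t".join(invalid))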

Let me know if you need help with anything else! I’m finishing up on some tutorials and end-user documentation for the upcoming 1.0 Alpha of Pyblish, scheduled for the coming Sunday. Stay tuned.

Hi Marcus,

You are fast ;).
I have posted my UI on git.

For the fix: now we have plain text with the error. But:
How do you execute each validator one by one?
I use this, but it is not good:

import pyblish.main
reload(pyblish.main)
import pyblish.backend.plugin
reload(pyblish.backend.plugin)

context = pyblish.backend.plugin.Context()
pyblish.main.select(context)

listValidators = pyblish.backend.plugin.discover(type='validators')
for validator in listValidators:
    cValid = validator()
    print cValid.families, cValid.version
    cValid.process_instance(context)

I get a Maya error: # AttributeError: ‘Instance’ object has no attribute ‘split’ #
because the instance is: [Instance("[u’Set’]")]
(Set is my Maya set)

But when I run this code:

import pyblish.main
reload(pyblish.main)
pyblish.main.validate_all()

It works, the instance is : [u’Set’] (and not [Instance("[u’Set’]")])

So what is the right method to:

  1. Get a list of all validators? (pyblish.backend.plugin.discover(type=‘validators’) is ok.) This list will populate my UI.
  2. Execute each validator with the right instance ([u’Set’] and not [Instance("[u’Set’]")])? Obviously, for each validator, I want to catch the instance, the error and the validator class (and not just plain text).
    Perhaps in the process() function from main.py, yield “instance, error, plugin” and not just “instance, error”.

Last question:
Do I need to name my “process function” “process_instance” in all validators and extractors, or not?
And for all selectors, “process_context”?

Why is the name of this function different for selectors?

Bug: for a validator, when you declare a class attribute as a property, like:


class ValidateUniqueNames(pyblish.backend.plugin.Validator):
    @property
    def families(self):
        return ['model', 'animation', 'animRig']

You get an error:
# Error: argument of type 'property' is not iterable
# Traceback (most recent call last):
#   File "<maya console>", line 3, in <module>
#   File "C:\Python27\Lib\site-packages\pyblish\main.py", line 138, in validate_all
#     validate(context)
#   File "C:\Python27\Lib\site-packages\pyblish\main.py", line 67, in validate
#     for instance, error, plugin in process('validators', context):
#   File "C:\Python27\Lib\site-packages\pyblish\main.py", line 36, in process
#     compatible_plugins = pyblish.backend.plugin.plugins_by_host(plugins, host)
#   File "C:\Python27\Lib\site-packages\pyblish\backend\plugin.py", line 588, in plugins_by_host
#     if '*' not in plugin.hosts and host not in plugin.hosts:
# TypeError: argument of type 'property' is not iterable # 

But with :


class ValidateUniqueNames(pyblish.backend.plugin.Validator):
    families = ['model']

It is ok.

So the fix is to replace line 588 of backend\plugin.py:

if '*' not in plugin.hosts and host not in plugin.hosts:

with:

if '*' not in plugin().hosts and host not in plugin().hosts:

Thank you,
Damien

Hi Damien,

I’m psyched you’re getting your feet wet with Pyblish. Let’s go through the issues one-by-one.

How do you execute each validator one by one ?

I’m about to make an announcement about this, but seeing as it directly relates to your question, I’ll post a link to it here.
http://pyblish.com/#mocking-with-pyblish

In it is a short guide on how to work directly with the classes responsible for storing data, nodes and ultimately processing it with plugins. Have a look and see if it makes it any clearer.

  1. What is a good method to get a list of all validators ? (pyblish.backend.plugin.discover(type=‘validators’ ) is ok). This list will populate my UI.

That function is just what you need. That is how validators are normally discovered when you run pyblish.main.validate(context). In your GUI, you would use the results of this list to run each plugin, as opposed to using pyblish.main.validate(context), as that is mainly for convenience and a GUI will need more control than it has to offer.

Once you’ve gone through the above tutorial, you’ll see how you can run each of the plugins returned by that function.

  2. What is a good method to execute each validator with the right instance ([u’Set’] and not [Instance("[u’Set’]")])?

This should also become clear from the above tutorial, but in short, you can process them the same way as pyblish.main.process() does.


import pyblish.backend.plugin

context = pyblish.backend.plugin.Context()
instance = context.create_instance(name='MyInstance')

# We've got a few demo plugins with this family, let's use
# it here for illustrative purposes
instance.set_data('family', 'demo.model')

validators = pyblish.backend.plugin.discover(type='validators')
for plugin in validators:
    print "Processing with plugin: %s" % plugin
    for instance, error in plugin().process(context):
        if instance is None:
            # No instances were compatible with the available plugins
            print "Plugin '%s' couldn't find any compatible instances" % plugin

        elif error is not None:
            # Something went wrong, and this is where you can handle it.
            print "An error occured during processing of instance: %s, %s" % (instance, error)
        else:
            # Everything went fine, move along
            print "Instance '%s' processed well" % instance

Bug : for a validator, when you call the attribute of the class with a property like :

Are you by any chance looking at a plugin starting with an underscore? E.g. _validate_unique_names.py

This is how we normally disable plugins that misbehave until we find the time to fix them; if a plugin starts with an underscore, it won't be visible to the discovery mechanism. What you are seeing comes from an older implementation in which these attributes were @property based, but currently they are plain attributes, so this problem is no longer current.

Thanks for tracking these things down. And am I understanding you correctly that you are building a UI for Pyblish? That would most certainly be very interesting and something I'd like to follow up on.

Good luck, and let me know if you have any other questions!

Best,
Marcus

Here is some more reference on why the current plugin interface and the prior @property behaviour work the way they do.


https://groups.google.com/forum/#!topic/pyblish/tcY3SBK-NHQ

Pyblish Web Frontend

We're looking for web developers to implement the Pyblish web frontend. If you're interested or know anyone who might be, point him or her our way! And if you have any thoughts, concerns or ideas about anything, speak up here.

More information here:

Best,
Marcus

Hi All,

Just wanted to share an update on the Web Frontend development.

After much deliberation and consideration of quite a few different languages and frameworks, it looks to me like the most suitable way of implementing the above is Flask for the backend, Bootstrap for the graphics and AngularJS for the interactivity.

On top of that, to communicate between Pyblish, which runs in Python, and the web frontend, there are REST and Socket.IO: REST is an architectural style, and Socket.IO is used to implement it and to physically communicate through WebSockets.

This is what it looks like at the moment. Each event populates this list of events, accessible via RESTful requests GET and PUT.
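To make the GET/PUT part concrete, here is a toy sketch of such an events endpoint in Flask; it is only an illustration under assumed endpoint names and payloads, not the actual frontend code:

# Toy sketch of exposing Pyblish events over REST with Flask.
# Endpoint names and payloads are invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
events = []  # each event is a dict describing something that happened in Pyblish


@app.route("/events", methods=["GET"])
def list_events():
    return jsonify(events=events)


@app.route("/events", methods=["PUT"])
def add_event():
    events.append(request.get_json())
    return jsonify(ok=True), 201


if __name__ == "__main__":
    app.run(port=5000)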

More details here:

We're looking for help with AngularJS and styling. The goal remains as in the initial illustration above; now it's time to actually create each widget responsible for visualising each type of data coming through from Pyblish, such as images and videos, but also logs and 3D viewports.

If you're interested in helping out, let us know here or email me directly at [email protected]

The repository is here:

Best,
Marcus

For the MVVM, I quite enjoyed http://knockoutjs.com/
I have not used AngularJS that much, but Knockout.js did the job for me :slight_smile: