Open Source Publishing Development

I think Publish shouldn’t just be a “model”-check-and-publish tool, but much more of an all-round framework.
The implementation should deliver the basis to allow for all scenarios: model publishing, rig publishing, lookDev publishing, etc.
Possibly even publishing the render scene to make it ready for renderfarm usage.

I totally agree with your descriptions that publishing in its core is a sequence of selection, validation (+ fixing), extracting and conforming.
Whether it’s something as complex as fully animated shots needing to be published to lighting in chunks (where each chunk is separated into a valid format; e.g. meshes to Alembic, V-Ray proxies to separate .mb files) or as simple as a single cube mesh saved to another location. In our studio every big project comes with new requirements for publishing, so it really needs to be configurable, but most of all extendable.

The primary focus would be on creating the path-templating, configurable workflow, presets and user interfaces.
Because creating a single check is easy, but managing them in a consistent (possibly open-source, industry-standard) way is near impossible right now.

Why support the rare cases?
For example: today I received scenes from a client that magically contained “scene metadata” (one of Maya’s amazingly lacking new features!), which turned almost all of our files (anything that referenced them) into 70 MB+ files (even if they didn’t contain any nodes at all) and had them crashing constantly. It would be great if I could easily add a check to warn about (and possibly fix) this. It’s definitely not an ordinary check, but it would save us a lot of hassle the next time.

Publishing a rig?
By the way, publishing a rig doesn’t necessarily check whether the rig is working fine. (That’s up to the riggers!)
But these could help:

  • cleaning up unused nodes could be useful, like removing any unnecessary temporary shaders used in that file
  • remove animation from the rig file (if any)
  • good old naming conventions check
  • are all the controls in a character set? (see how pipeline specific this could be?)
  • are there any non-skinned or non-parented (= not in the main group) meshes in the rig file? If so, they are likely not moving along with the rig; raise a warning?
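To make one of these concrete, here’s a minimal Maya sketch of the “remove animation” step (just an illustration; it only targets time-based curves so driven keys are left alone):

import maya.cmds as mc

def remove_rig_animation():
    """Delete any time-based animation curves left in the rig file."""
    # animCurveT* are the time-input curves; animCurveU* (driven keys) are not touched
    curves = mc.ls(type=("animCurveTL", "animCurveTA", "animCurveTT", "animCurveTU")) or []
    if curves:
        mc.delete(curves)
    return curves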

Yet this could be very different for a project that requires a game rig. That might require that the joint chain used for deformation is decoupled from the rest of the rig so it can be easily exported. (Or at least is able to be baked to such a format).

Validation Workflow
Also… a neat thing about validations is that they could check a variety of things without stopping at the first failed validation.
One thing that can be very annoying is:

  1. Run the check
  2. Find out there’s an error
  3. Fix it
  4. Run the check
  5. Find out there’s an error further down in the checks

Instead it’s better to:

  1. Run the checks
  2. Show all things that didn’t pass validation (mark them red?)
  3. Report
  4. Fix all of it (= Auto-fix where possible; if some can’t be auto-fixed (because of the nature of the validation) then fix it myself.)
  5. Re-Run the checks
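A minimal sketch of such a “run everything, report everything” loop (the check interface here is hypothetical):

def run_all_checks(checks):
    """Run every check and collect all failures instead of stopping at the first."""
    failed = [check for check in checks if not check.process()]

    for check in failed:
        print("FAILED: %s" % check.name)  # report everything in one go

    for check in failed:
        if check.can_fix():
            check.fix()                   # auto-fix where possible...
        # ...anything that can't be auto-fixed is left for the artist

    return failed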

I’m liking the scope of the discussion. I would say that for Publish (or whatever it’ll turn into), the first goal would be selection and then validation. If these two areas get implemented in a successful manner, the exporting and potential importing can be written afterwards.

A plugin system for validation sounds like a neat idea. As for successful plugin systems that are event-driven, the ones I have experienced are the Shotgun Event Daemon and the Ftrack equivalent. Anybody know of other systems?

We have a scene validation tool that is plugin driven, so we can add new tests simply by creating a class and the tool will pick it up. As mentioned, it simply reports all errors instead of stopping at the first. It also keeps track of what object or sub-object had the issue so the user can click the error line and it selects the object. There is also a repair tool that follows this same design, except it is aimed at things it can actually fix, e.g. construction history.

We have a third tool for organizing scenes, e.g. naming, layers, grouping, etc. This one is tricky and can be slow on large scenes. We have defined some very specific and sometimes illogical rules for how things are organized in the scene.

The reason we split these up is that some operations can be slow on large scenes or completely unnecessary.

I did have a question: how is any of this going to be useful outside of Maya? Or Max? Or whatever? To do any meaningful checks on the content, you’re going to need to be within the app in some fashion.

@TheMaxx: Thanks for your input! I’m happy to see you’re referring to what we call “validations” as “tests”. I’m contemplating whether this name is better suited, as it aligns better with tests in software development.

About where it’d be useful: initially, it’s for use within DCCs, like Maya, and I honestly hadn’t considered using it standalone but your question intrigued me!

The way I see it, the only tests we do are Black Box tests; i.e. those concerned about pre-set requirements. But imagine testing for compatibility between assets, such as whether or not a facial setup fits a puppet, or whether a texture passes outside the UVs on a certain model. Or regression testing, testing whether or not a change to a model messed up other parts of the project. For that, we’d need tests that could access multiple hosts and a method of visualising the results.

As for completely standalone; some tests I can think of, off the top of my head, might involve the Python Imaging Library and testing images for dimensions or color depth. Or the Alembic Python bindings to check for various things like hierarchy, or stepped keyframes that might mess with motion blur. (Or how about testing for objects moving too fast or passing the camera in a single frame?)
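For instance, a minimal standalone sketch using the Python Imaging Library (the required size and modes are made-up requirements):

from PIL import Image

def validate_image(path, required_size=(2048, 2048)):
    """Check an image on disk for dimensions and color mode, outside any DCC."""
    image = Image.open(path)
    errors = []
    if image.size != required_size:
        errors.append("Wrong dimensions: %dx%d" % image.size)
    if image.mode not in ("RGB", "RGBA"):
        errors.append("Unexpected color mode: %s" % image.mode)
    return errors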

But above all, I think that once we get tests going, I’m expecting an expansion on how we work with tests today. At least until we’re at the same level as software developers - an area in which we are hopelessly behind - but also beyond that. We’ve got requirements that differ from software, I’m sure, and we’re also visual, which means it’s easier to visualise and understand how things (should) work than it is in developing software.

We’ve spoken about two types of tests at this point; the testing of data coming out of software and tests for data already out. Publish will mostly concern itself with data about to come out, but as discussed above importing is another potential area in which we may go and that might be closer aligned with the tests of existing data.

Publish will mostly concern itself with data about to come out, but as discussed above importing is another potential area in which we may go and that might be closer aligned with the tests of existing data.

There’s no difference between this data, except that the first is present in the DCC (eg. Maya) and the other isn’t (yet present in a data format).
There will always be custom made ‘tests/validations’ per pipeline and/or project. The same holds true for each application or data format, because the accessed data will be presented in a different way.
If Publish allows this plug-in/extendable workflow for these tests it will support all possibilities.

All it really needs is the appropriate Selectors, Tests/Validators and Extractors for each case. But the process of running these checks through a configurable/preset-based user interface stays consistent because of Publish.
All you need to focus on is writing the Selectors and Validators (which is relatively easy) and any artist in the end could use your custom checks to implement their own ‘stack-based’ versions of Exporting/Importing based on a series of ‘Actions’.

So the actual Selectors and Checks will not be ‘software agnostic’ but the framework, user interface and workflow for testing will be.
At least that’s what I think the goal is, correct?

To add to this, I think Publish doesn’t need to ‘differentiate’ between whether it’s exporting or importing. It’s the type of Conform/Extraction you use that determines whether you end up importing or exporting.

Imagine having a ‘software/data-agnostic’ selector that only returns file paths. (You filter down to what you want to import/read/validate.)
Then you could add your custom Validator that checks whether the file is a valid .obj file.
If so, then it could go on and perform the conform/extract steps (basically importing the .obj file).
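A rough sketch of that idea (class and method names are just placeholders):

import os

class FileSelection(list):
    """Data-agnostic selection: just a list of file paths."""
    pass

class SelectObjFiles(object):
    """Selector that gathers .obj paths from a directory into a FileSelection."""
    def process(self, directory):
        selection = FileSelection()
        for name in sorted(os.listdir(directory)):
            if name.lower().endswith(".obj"):
                selection.append(os.path.join(directory, name))
        return selection

class ValidateObjFile(object):
    """Validator with a crude check on whether each file looks like a valid .obj."""
    def process(self, selection):
        invalid = []
        for path in selection:
            with open(path) as f:
                first_line = f.readline().strip()
            if not first_line.startswith(("#", "v", "g", "o", "mtllib")):
                invalid.append(path)
        return invalid  # an empty list means everything passed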

The core idea would be to figure out and clearly define an industry-standard way of performing a series of actions that Select, Validate, Extract and Conform, and to do that on the basis of modular ‘Actions’ that can be configured through a UI. Where the Actions would be a “Selector”, “Validator”, “Extractor”, etc.
Hmmm… this might become too abstract. :smiley:

So keeping that in mind you would end up creating a MayaSelection class (which inherits from the Selection base class) that represents Maya scene data in a format (probably node’s fullPath) that validators for Maya can easily use.
Then you would need Selectors to have ways of choosing/filtering what data is relevant to your publishing steps (to validation and extraction)

Here’s some possible pseudocode




import maya.cmds as mc


# Selection Types
class Selection(set):
    """Base class containing the 'selected' data that the next steps
    (Validation, Extraction, Conform) are applied to. Here it's simply
    a set of node names/paths."""
    pass


class MayaSelection(Selection):
    """Selection of Maya nodes, enforcing long (full path) node names."""
    pass


# Selectors
class Selector(object):
    """Filters/collects data based on its process() implementation."""
    pass


class MayaSelectByType(Selector):
    # The kind of selection it receives as input and then outputs, so we know
    # which Validators can be combined with it (this could possibly be a list
    # of types if suitable).
    __selection__ = MayaSelection

    def process(self, selection):
        nodes = mc.ls(type="mesh", long=True)
        selection.update(nodes)
        return selection


# Validators
class Validator(object):
    """Provides a validation on a predefined Selection class."""
    pass


class MayaValidateMeshNormals(Validator):
    __selection__ = MayaSelection

    def process(self, selection):
        meshes = mc.ls(list(selection), type="mesh", long=True)  # check only meshes

        invalid = []
        for mesh in meshes:
            if mesh == "invalid":  # placeholder for the actual normals check
                invalid.append(mesh)

        self.invalidNodes = invalid

        # A non-zero return value signals that the validation failed
        return 1 if self.invalidNodes else 0

    def fix(self):
        """Try to auto-fix if process() failed."""
        if self.invalidNodes:
            mc.delete(self.invalidNodes)

    def select(self):
        """Select/Highlight/Show the problematic areas of this validation.
        This could result in a scene selection, a GUI pop-up with information, etc."""
        if self.invalidNodes:
            mc.select(self.invalidNodes, replace=True)
        else:
            mc.select(clear=True)

Note that I’m using specific “Maya”-based Selections and Validators, because while thinking about this I realised there is no way to ensure a certain Selector provides enough consistent information for any arbitrary Validator to process it.
Thus the Validators need to specify what type of Selection they require; that way you can’t mix incompatible “Selection/Validation” steps.
So I guess you would make a “MayaSelection” type, and a “FileSelection” type that Validators can operate on.
Once I have some more time to think about this I’ll try to set up a schema that could fit this.
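A rough sketch of how that guard could work, building on the __selection__ attribute from the pseudocode above (the runner itself is hypothetical):

def run_validators(selection, validator_classes):
    """Only run validators whose declared __selection__ type matches the given selection."""
    results = {}
    for cls in validator_classes:
        if not isinstance(selection, cls.__selection__):
            print("Skipping %s: incompatible selection type" % cls.__name__)
            continue
        validator = cls()
        results[cls.__name__] = validator.process(selection)
    return results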

Hey Roy, thanks for your input!

There’s no difference between this data, except that the first is present in the DCC (eg. Maya) and the other isn’t (yet present in a data format)

I’m really glad you think so. You’ve clearly given this more thought than I; I can’t fully envision how this would work in practice just yet, but it’s certainly a direction I’d like it to go.

So the actual Selectors and Checks will not be ‘software agnostic’ but the framework, user interface and workflow for testing will be.
At least that’s what I think the goal is, correct?

Sounds like that hits the spot!

…and any artist in the end could use your custom checks to implement their own ‘stack-based’ versions of Exporting/Importing based on a series of ‘Actions’.

I’d love for you to put something together about this, perhaps a few illustrations of proposed workflow and more in-depth walk-through. You seem to have a vision of how these pieces fit together with the ‘Actions’ workflow and I’m very curious to hear more about it. What I’m most unsure of, is how much of a role a user would have in determining/customising tests, and what sort of tools he’d be given to create/modify them. Or are you referring to the initial set-up of tests, on a per-project basis or such?

Best,
Marcus

I was referring to setting up a check with building blocks that you can easily configure whenever required.
I’m not assuming you would configure this daily, but sometimes you find out mid-project a certain ‘publishing’ step needs an extra validation.
If all I need to do is write the custom Validator, which can then easily be added into the Validation Stack, that would be most efficient.

By the way, also check my updated/edited post above (I added some quick mocked up pseudocode)

I’ve added some more information about how the Selection behaviour could work to one of the wiki pages of the project: https://github.com/abstractfactory/publish/wiki/Architecture,-Selection

The same modular and extendable workflow should be used for Validators and Extractors. (Not sure if that really is the best naming for them; but I’m just trying to stick to what’s been said already!)

@TheMaxx: Thanks for your input! I’m happy to see you’re referring to what we call “validations” as “tests”. I’m contemplating whether this name is better suited, as it aligns better with tests in software development.

Quoting myself for context; I recently gave this a go but quickly realised the potential for confusion between content tests, such as “check_normals_test.py” and Python tests e.g. “some_module_test.py”.

How have you dealt with this name-clash? Do you physically name your validations “test”? If so, what do you call your unit tests et al., and how do you refer to each in discussions?

I’m now considering whether it’s better to stick with “validators” due to this clash, e.g. “check_normals_validator.py”.

Publishing a rig?
By the way, publishing a rig doesn’t necessarily check whether the rig is working fine. (That’s up to the riggers!)

Just throwing in that this is something I’d like for Publish to facilitate as well.

Today, riggers have no way of ensuring that their rigs are doing fine and writing their own tests is far too laborious. Riggers are much like software developers in a way, API designers in fact, in that they design a product (library) for use by others with a very specific and difficult-to-change interface (the controllers). If anyone should write their own tests to ensure a consistent level of quality throughout iterations, it’s the riggers. How this would work in practice of course is another matter altogether, just throwing in food for thought!

[QUOTE=marcuso;25177]Quoting myself for context; I recently gave this a go but quickly realised the potential for confusion between content tests, such as “check_normals_test.py” and Python tests e.g. “some_module_test.py”.

How have you dealt with this name-clash? Do you physically name your validations “test”? If so what do you call your unittests et. al., and how do you refer to each in discussions?

I’m now considering whether it’s better to stick with “validators” due to this clash, e.g. “check_normals_validator.py”.[/QUOTE]

Next to the runner, there is a Python package that contains modules separated by task (meshes, io, etc.). In there they subclass ValidationTest. This way there is no filename clash; there could (possibly?) be confusion with the class name, but it’s never come up before.

Thanks for letting me know!

For those of you with experience in publishing, I’m looking to gather some of the benefits and pitfalls of publishing, to better illustrate the value of something like Publish to those who know nothing at all about it and, for example, work directly with each other’s work files.

  1. In short, how would you describe publishing?
  2. What would you say are the benefits of using publishing in your workflow?
  3. Could you describe a typical workflow, with and without publishing?
  4. What, in your experience, has publishing facilitated or prevented?

If you could take a moment to answer these questions, it would really help a lot!

Best,
Marcus

Hi all,

Figured I’d post an update on how things are going. At the moment we’re heading into GUI-land and are currently spawning ideas on how it could end up looking.

We’re also in the midst of revamping the codebase, some highlights here:

We’ve also been spawning ideas for a node-based workflow in creating steps for validations, extractions and such:

Related to that, we’ve also had a look at adapting Depends to Publish.

To run:


$ git clone https://github.com/mottosso/deplish.git
$ cd deplish/deplish
$ python depends

This will boot up a stripped-down version of Depends; you can play around with the few nodes we’ve made so far.

We’re looking at integrations with various third-party suites, like Asana and Git, along with Maya. Shotgun and FTrack are other potential targets, maybe Trello too.

So, come and share your likes and dislikes in the issues, or here on Tech-Artists. We could use a few hands with experience in Shotgun and FTrack to help us consider how to involve them via an integration. Along with a few heads with experience in path templates and variable substitution for placing assets in a configurable manner. But mostly it would be great to just have you around. :slight_smile:

Best,
Marcus

Would be great to hear more people’s opinions on this, even just suggestions on workflow and what you do before handing your files off to other people. :)

[QUOTE=BigRoyNL;25163]

Validation Workflow
Also… a neat thing about validations is that they could check a variety of things without stopping at the first failed validation.
One thing that can be very annoying is:

  1. Run the check
  2. Find out there’s an error
  3. Fix it
  4. Run the check
  5. Find out there’s an error further down in the checks

Instead it’s better to:

  1. Run the checks
  2. Show all things that didn’t pass validation (mark them red?)
  3. Report
  4. Fix all of it (= Auto-fix where possible; if some can’t be auto-fixed (because of the nature of the validation) then fix it myself.)
  5. Re-Run the checks[/QUOTE]

As TheMaxx says, plugin-driven is the way to go. We have a similar tool, based on Python, which runs modularized checks in Max and Maya and then records the results in a DB. We’ve been using this for a few years now and it works very well.

However, we found that users use the tool in both of the ways described above, because they use them in different scenarios.
Scenario 1: the artist just wants to finish up an asset and fix errors one by one. In this case running all the checks is not always useful, especially if your checking process takes long (i.e. longer than 30 seconds!). Artists prefer a quick tool where they can check-fix-check-fix-check-fix until they fixed all the errors.

Scenario 2 occurs when submitting assets for review, delivery, publish or any other workflow milestones. In this case the check is mandatory, and it must take as long as it takes. Having a “check farm”, like some clients do, can be useful. Also ensure you have an override mechanism. Especially in games there are often exceptions, but they must be okay-ed by a higher authority.

We also find it useful to have “fix” buttons that try to automatically fix errors. However checking itself should just do that: check. But never fix, unless the user approves.
Having documentation available on how to fix reported errors is also very important. People also need to be able to view checklists, so they understand what to do and what not to do while working. We also have a function to select the asset and the problem (e.g. select verts, polys, etc.)

Finally, automated checks never cover 100% of checking. You must have a process in place (ideally supported by tools) to ensure checking happens for things which are not automated. If people skip the manual checks then you’re not much better off than leaving things to chance. So while you develop the pipeline, make sure you also develop a QA management process to go with it. The pipeline then supports that process.

Yup. Gotta put some thought into this. As an outsourcer we have just so many requirements. Checking in just Maya and Max alone is not enough. In games you’d ideally love to interface with Unity, Unreal, Photoshop and, to a lesser extent, ZBrush.

Also, writing checking modules has to be QUICK and EASY! Naming convention checks take up a lot of work. Naming conventions for layers, objects, materials, textures, files, directories, but also relationships between names, e.g. if material X is named a certain way then objects A, B and C must be named accordingly, and so on.
Checking texture formats - linear, gamma, HDR properties, etc. - is also very common, and whatever system you choose should make it easy to pull in 3rd-party libraries that allow for checking all this.
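To give a feel for how small such a module should ideally stay, a basic naming-convention check can be little more than a regular expression (the pattern is entirely made up):

import re

# Made-up convention: prefix_descriptiveName_twoDigitIndex, e.g. "geo_doorHandle_01"
NAME_PATTERN = re.compile(r"^(geo|ctl|jnt|mat)_[a-z][a-zA-Z0-9]*_\d{2}$")

def check_names(names):
    """Return every name that does not match the convention."""
    return [name for name in names if not NAME_PATTERN.match(name)]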

Your challenge with checking plugins will be:

  • reduce dependencies on specific compilers and Maya versions to allow for greater (and quicker) re-use
  • reduce boilerplate code and compile times for quick development
  • develop testing tools to prevent a broken check plugin from taking down your entire system
  • develop a way to easily distribute plugins to each project

Unrelated question to QA: what sort of architecture are you guys using for your pipeline? I’ve come across a few pipelines so far, but one thing that bothered me was that most were very inflexible and thus hardly re-usable. Most of the time you had to disassemble them and assemble a new pipeline. But that took time.

I’ve been wondering, do you have some sort of modular approach in mind? I’m thinking along the lines of service based architecture (or even CBSE), where you could take components, like a QA checking component, a reporting component, a revision management component, a workflow management component, and re-arrange them in any way that supports your production workflow. This is based on my belief that a pipeline starts as a process definition, and all the tools (shotgun, tactic, P4, etc) just exist to support this process, and that they should adapt to it as far as possible (rather than the other way around, process adapting to tools).

Robert, just wanted to say you’re providing some very good insights here.

Note: I’m only taking part in working on Publish. I’m by no means the core developer or decision maker about where it is going.
And: This is based on the node-based workflow we’re looking into!

The way I see it every step implemented in a Publish Graph (whether you call it Check, Filter, Selector or anything) is like a node (black box) that operates only on its ‘inputs’ and possibly provides new ‘outputs’.
This ‘Graph’ is the ‘queue’ of steps it will take to perform a check, validation or full export. Such a graph can be created in the UI, like in any node editor, and saved out to be used anywhere in your pipeline.
You can create and define your own Graph based on combining multiple blocks and use that any way you want, eg. you don’t need to actually export anything but you could also only perform simple checks.
Since we’re separating backend/frontend it would be possible to run such a graph without the UI from anywhere in your own toolset, thus only using its executing power plus its simple modular nodes for ‘checking’.

So yes, you will be able to create a multitude of Graphs that perform quick checks or full-blown validation + exports.
At its core I would look at Publish as a ‘processing graph’: basically anything that can be processed as a series of steps (with possible branching in the node graph) could be implemented in such a Graph. In theory it could support validating your scenes, validating files on disk, exporting stuff, importing stuff. Heck, it could even be the tool you use for a Scene Constructor that imports/builds up the scene based on files on disk (or other rules).
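A very stripped-down sketch of that idea (no branching, everything hypothetical): a saved graph is just an ordered series of nodes, each operating on the previous node’s output, and it can run headless without the UI.

class Node(object):
    """Black box that operates only on its inputs and produces new outputs."""
    def process(self, data):
        raise NotImplementedError

def run_graph(nodes, data=None):
    """Execute a graph (here simplified to a linear queue of nodes) without any UI."""
    for node in nodes:
        data = node.process(data)
    return data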

Of course, this is how I see it. And since it’s a community (open-source) tool just starting to develop remember that I’m only trying to contribute with the above idea and see if it raises interest or issues.

Hi Robert, thanks for your input.

Before I reply, I realised that in addition to providing you guys with a progress update on where things are headed, I should probably also update you on where we’ve been and where we are right now.

Present

First of all, Publish works right now and will continue working during development; you can install it and either use the small set of plugins we’ve supplied, or write your own. It’ll append a “Publish” item to your Maya File menu which will perform a “publish all”. This is what we’re using for testing and debugging while developing.

$ cd c:
$ git clone git@github.com:abstractfactory/publish.git
$ set PYTHONPATH=c:\publish\publish\integration\maya
$ maya

There is a demo-scene in c:/publish/resources/tests/selection/multiple_selection_test

The sequence of steps laid out initially still rings true; there are four major steps involved in publishing a single asset - selection, validation, extraction and conform.

Each step is executed in that exact order and each step is built using plugins. E.g.

  1. SelectViaObjectSet
  2. ValidateNamingConvention
  3. ValidateMeshHistory
  4. ExtractAsAlembic
  5. ExtractAsObj
  6. ConformWithAsana
  7. ConformToAssetRepository

Technically, each plugin is made up of a class-definition, implementing an interface with a single executable method - “process()”. This method is given exactly one argument - “context” - which is a collection of publishable elements as defined via the selection plugins.
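As a rough illustration (how families are declared and how errors are reported are assumptions on my part, not the actual Publish API), a validation plugin could look something like this:

class ValidateNoNamespaces(object):
    """Example plugin: fail when any node in the context lives in a namespace."""
    families = ["model"]  # assumed: which instance families this plugin applies to

    def process(self, context):
        for instance in context:        # context is a collection of instances
            for node in instance:       # assumed: an instance iterates its nodes
                short_name = node.rpartition("|")[2]
                if ":" in short_name:
                    raise ValueError("%s is in a namespace" % node)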

Convention

About conventions, let’s have a look at selection. Selection is the process in which users specify what to publish. The plugins we’ve got up and running all deal with Maya and with reading user-defined attributes off of nodes; the convention is:

  1. Does the node have an attribute called “publishable”?
  2. If so, what is the value of its “family” attribute?

Every node living up to these requirements is known as an Instance.

Both the Plugin and Instance maintain a “family” attribute. The family determines which plugin is compatible with which instance. For example, you probably only want to be running validations and extractions for meshes on meshes, not on textures or shaders. A plugin may also support multiple families, such as validating naming conventions on both meshes and rigs.


 ___________           ____________
|           |         |            |
|  Plugin   o-------->o  Instance  |
|           o         |____________|
|           o
|___________|
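In Maya terms, a selector following that convention could boil down to something like this (a sketch of the convention described above, not the actual plugin code; it assumes the “family” attribute exists alongside “publishable”):

import maya.cmds as mc

def collect_instances():
    """Gather every node carrying the 'publishable' attribute, grouped by family."""
    instances = {}
    for node in mc.ls(long=True):
        if not mc.attributeQuery("publishable", node=node, exists=True):
            continue
        family = mc.getAttr(node + ".family")
        instances.setdefault(family, []).append(node)
    return instances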

Customisation

The goal is to fully insulate plugins from the development of Publish, so that users can implement their own requirements and for it to fit into any pipeline. As such, if you have particular conventions when it comes to rigging and exporting pointcaches into lighting, your plugins should be fully capable of facilitating this convention whilst still giving you all benefits of running Publish.

In that sense, Publish facilitates standardisation, rather than trying to provide one.

Artists prefer a quick tool where they can check-fix-check-fix-check-fix until they fixed all the errors.

This sounds wise, +1 to aim for this.

Scenario 2 occurs when submitting assets for review, delivery, publish or any other workflow milestones. In this case the check is mandatory, and it must take as long as it takes.

Yes, we’ve been thinking along the same lines, good to hear its common elsewhere too.

Having a “check farm”, like some clients do, can be useful.

Do you mean distributing checks onto an actual farm? I can see the use for that, especially in cases such as the one you mention about long-running processes. Can’t quite wrap my head around how that would work currently, but it’s certainly something I’d like Publish to facilitate.

Also ensure you have an override mechanism. Especially in games there are often exceptions, but they must be okay-ed by a higher authority.

That’s an interesting thought. Off the top of my head I’m thinking of a password-protected override button, where only those of higher authority carry the password. What do you think, too much?

We also find it useful to have “fix” buttons that try to automatically fix errors. However checking itself should just do that: check. But never fix, unless the user approves.

This is also something we’re all in the same boat about. I would personally prefer to have control over what sort of fixes goes on, as some fixes might cause damage to the scene - e.g. something crazy like removing history from my rig.

Having documentation available on how to fix reported errors is also very important.

Loads of good stuff here, you’ve clearly put a lot of thought into this process! I’d imagine documentation to be available within the main GUI. At least snippets of it, with links to full explanations. Similar to how Mudbox does it in 2015 when a mesh has problems, for those familiar with that.

People also need to be able to view checklists, so they understand what to do and what not to do while working. We also have a function to select the asset and the problem (e.g. select verts, polys, etc.)

That’s a good one. Similar to the Mesh -> Cleanup… of Maya in that it can look for trouble and either fix it or simply select it, letting the user handle it on his own.

Finally, automated checks never cover 100% of checking. You must have a process in place (ideally supported by tools) to ensure checking happens for things which are not automated.

Would you mind expanding on this? What sort of manual checks are you referring to?

Your challenge with checking plugins will be:

  • reduce dependencies on specific compilers and Maya versions to allow for greater (and quicker) re-use
  • reduce boilerplate code and compile times for quick development
  • develop testing tools to prevent a broken check plugin from taking down your entire system
  • develop a way to easily distribute plugins to each project

I think we’ve got these covered.

Dependencies are handled on a per-plugin basis, so it’s ultimately up to the implementer to handle what he needs to import within a plugin. Boilerplate is currently an interface with a single method, its input reduced to a single argument; a collection of Instances. The plugin mechanism itself can’t fail; it only discards broken plugins and produces a message in the console. Down the line, this could possibly be part of the main GUI too. As for distribution, there are two methods.

  1. Import the package, and run a registration function - this will register a path with the current run-time.
  2. Add paths to an environment variable, PUBLISHPLUGINPATH, which works similarly to PYTHONPATH. This should cover most bootstrapping needs and persistent paths.
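For example, bootstrapping via the environment variable could be as simple as this (the directories are made up; it needs to happen before Publish discovers plugins, e.g. in a studio startup script):

import os

# PYTHONPATH-style list of plugin directories picked up by Publish at run-time
os.environ["PUBLISHPLUGINPATH"] = os.pathsep.join([
    r"c:\studio\publish_plugins",
    r"c:\projects\my_project\publish_plugins",
])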

Unrelated question to QA: what sort of architecture are you guys using for your pipeline? I’ve come across a few pipelines so far, but one thing that bothered me was that most were very inflexible and thus hardly re-usable. Most of the time you had to disassemble them and assemble a new pipeline. But that took time.

This could get long, and though it’s an intriguing question, we should probably stick to the topic of this very slim part of the pipeline. But in short, Pipi follows what you would call a service-oriented architecture in that each part consists of a re-usable element, accessible via message-passing or RPC.

I’d imagine Publish to fit in with this kind of pipeline nicely, as each plugin could potentially be responsible for utilising these services at its own discretion, facilitating multi-processing of plugins, such as running validations concurrently and funnel the results through a common broker through which all pipeline related data flows.