In-house Tool Code Organization

tools
python
maya
houdini

#1

We’re doing quite a bit of clean-up of all our in-house Maya Python tools and have been trying to make our overall code-base more robust. The biggest hurdle I keep tripping over is the structure of our tools and delegation of responsibilities for the structure within.

Our initial plan is as follows to use a tool:

  1. Instantiate a tool (tool.Tool()) as our entry point
  2. Tool instantiates a model (tool.ToolModel()) to store and manipulate data pulled from a Maya scene
  3. Tool may generate a view (tool.ToolView()) which is any GUI display and update code

Business logic is currently being placed into a Model class. The View generates custom signals; the Tool, which I suppose is acting more as an adapter, responds by triggering Model code to interact with Maya and store any data in memory. The Tool then calls View methods to update the UI.
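To make that concrete, here is a minimal, application-agnostic sketch of the wiring described above. All names (`ToolModel`, `ToolView`, `Tool`, `scene_query`) are hypothetical, plain callables stand in for Qt signals, and an injected lambda stands in for a real Maya scene query:

```python
class ToolModel(object):
    """Stores and manipulates data pulled from the host application."""
    def __init__(self, scene_query):
        # scene_query is injected so tests (or another DCC) can supply it;
        # in Maya this might wrap maya.cmds.ls(selection=True).
        self._scene_query = scene_query
        self.selection = []

    def load_selection(self):
        self.selection = list(self._scene_query())
        return self.selection


class ToolView(object):
    """GUI stand-in: records what it was told to display."""
    def __init__(self):
        self.displayed = []
        self.on_load_clicked = None  # the Tool assigns a handler here

    def click_load(self):
        if self.on_load_clicked:
            self.on_load_clicked()

    def update_selection(self, items):
        self.displayed = list(items)


class Tool(object):
    """Adapter: routes view events to the model, model data to the view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view
        view.on_load_clicked = self.handle_load_clicked

    def handle_load_clicked(self):
        items = self.model.load_selection()
        self.view.update_selection(items)


# Usage, with a fake scene query in place of any real Maya call:
model = ToolModel(lambda: ['pCube1', 'pSphere1'])
view = ToolView()
tool = Tool(model, view)
view.click_load()
print(view.displayed)  # ['pCube1', 'pSphere1']
```

The point of the injected `scene_query` is exactly the portability goal mentioned above: only the thing you pass to the model knows about Maya.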

All in all, this is giving decent separation of responsibilities and limits any dependencies to just the Tool itself. In our perhaps naive thinking, if we ever want to do a similar task in a different application (e.g. Houdini), we can swap out the Model and hopefully still have things work with minimal changes. Full disclosure, we’re not doing this at all right now in our pipeline, which is arguably a red flag for this line of thinking…

My concern is the complication of all this and whether it’s all worth it. What strategy is everyone else using for breaking up their tool code? Do you simply assume Maya is your model for most tools and put all business logic into the tool/application itself (seems that would make command generation more straightforward) rather than some other class/module?

If this is all nonsense let me know and I’ll try to put together an example of what I mean.

Thanks!


#2

This absolutely can work in specific cases, but will depend on what the requirements are.

In general, we would design the tool behaviour/structure to be as agnostic as possible, and in most cases (so far), this works.

The Tool interacts with a Model, hooks up to its signals, and emits its own signals (important).
We then have a Tool View, which uses the Tool’s signals to update itself.
Each Model handles application-specific behaviour/data via a common interface and emits common signals for the Tool to pass on.
Each Model may also provide a View, and attach to or modify the Tool’s view.
This provides as much flexibility and accessibility as we need per application, and creating a new model isn’t a big job. Most signals are already emitted in wrapped Model functions, so a lot of the UI hookups are already taken care of.

Tired, so my apologies if this isn’t clear.


#3

Thanks for the reply! Interesting. Do you then find (in the case of Maya) that the Model would contain any pymel/cmds calls (or access to libraries that do) while the View contains all PyQt/Pyside code?

Also, is there concern that a Tool’s methods may simply look like something like this:

def respond_to_button_click(self):
    work = self.model.do_that_work()
    # Perhaps modify or format work
    self.view.update_with_work(work.pertinent_info)

Perhaps this is absolutely fine. I’m just wondering, if I were to then add command-line parsing to the Tool’s interface so it could be accessed in some sort of non-GUI mode, whether I’d then have lots of “if self.view” checks to ensure I do not attempt to update the view in those cases. Either that, or I have methods for handling view signals that then call command-line methods to access the model…
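One common way to avoid scattering “if self.view” checks is a null-object view: a do-nothing stand-in installed when the tool runs headless. This is only a sketch under assumed names (`NullView`, `FakeModel`, `update_with_work` are all hypothetical):

```python
class NullView(object):
    """No-op stand-in used when the tool runs headless (e.g. batch mode)."""
    def update_with_work(self, info):
        pass  # nothing to draw


class Tool(object):
    def __init__(self, model, view=None):
        self.model = model
        # With a NullView default, no 'if self.view' guards are needed.
        self.view = view or NullView()

    def respond_to_button_click(self):
        work = self.model.do_that_work()
        self.view.update_with_work(work)
        return work


class FakeModel(object):
    """Test double standing in for real Maya-facing model code."""
    def do_that_work(self):
        return {'vertices': 8}


# Command-line / batch usage: no view is ever constructed.
headless = Tool(FakeModel())
print(headless.respond_to_button_click())  # {'vertices': 8}
```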

I suppose I’m interpreting your use of the term “signal” to mean method/function calls (as illustrated in the above example). Or are you typically implementing a custom signal for listeners to react to? Or perhaps passing observer functions to the Model?

Thanks in advance for patience as I stumble along here!


#4

I actually meant something like a PySide Signal (the PySignal project on GitHub is recommended).

So,

class Tool(object):
    def __init__(self):
        # Signals a view can connect to (Signal here being e.g. a
        # PySide Signal or a PySignal instance)
        self.about_to_do_something = Signal()
        self.did_something = Signal()
        self.something_went_wrong = Signal()

    def set_model(self, model):
        self.model = model
        self.model.something_went_wrong.connect(
            self.something_went_wrong.emit
        )
        ...

    def do_something(self):
        self.about_to_do_something.emit()
        self.model.do_something()
        self.did_something.emit()


class Model(object):
    def __init__(self):
        self.something_went_wrong = Signal()

    def do_something(self):
        try:
            ...  # application-specific work goes here

        except ValueError:
            self.something_went_wrong.emit(
                'Wrong value...'
            )
            raise


class View(QWidget):
    def __init__(self, tool, parent=None):
        super(View, self).__init__(parent)
        tool.about_to_do_something.connect(
            self.prepare
        )
        tool.did_something.connect(
            self.cleanup
        )
        tool.something_went_wrong.connect(
            self.warn
        )
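The `Signal` used in the sketch above need not come from Qt at all; a minimal pure-Python observer (roughly the shape of what the PySignal project provides, though this particular class is just an illustration) is enough for model and tool code that must run without a UI:

```python
class Signal(object):
    """Minimal observer: callables connect, emit forwards arguments."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def disconnect(self, slot):
        self._slots.remove(slot)

    def emit(self, *args, **kwargs):
        # Copy so a slot disconnecting itself mid-emit is safe.
        for slot in list(self._slots):
            slot(*args, **kwargs)


# Usage:
went_wrong = Signal()
messages = []
went_wrong.connect(messages.append)
went_wrong.emit('Wrong value...')
print(messages)  # ['Wrong value...']
```

Because nothing here imports Qt, the model stays importable in batch/headless Maya sessions while a PySide view can still connect to the same signals.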

The idea being, the tool and model should contain and manage the data and never directly refer to a view.
The View uses the data collected and stored in the tool and model instances to populate itself, and can modify that data THROUGH the model.
The tool emits relevant signals the UI can react to.

Obviously, this is really dependent on requirements, application and design. It may also be overkill in a lot of places.
It’s certainly AN approach, but perhaps not THE approach.
Very interested to hear opinions and alternatives.


#5

I’m curious if folks find that in most tool scenarios, it’s sufficient to assume that Maya (via pymel) is itself the model layer of the pattern.

Let’s say we have a tool that allows a user to copy skin weights from one mesh to another. A user clicks a button to set a mesh as a “source.” Is it necessary then for that button click to pass from the View event, to a Controller method, to a Model class or function, and finally to a PyMEL call? Or is it sufficient to assume that the Controller can make the PyMEL call itself and store that selection in memory for use during the weight copy?

In other words:

  1. View button is clicked and alerts the Controller
  2. Controller calls pymel’s getSelected and stores the result (if any)
  3. Controller updates the View, perhaps with the geometry node name

Or would it be more maintainable to perform the task like this:

  1. View button is clicked
  2. Controller asks a Model to get whatever is selected
  3. Model calls pymel’s getSelected and stores the result (if any)
  4. Model alerts the Controller
  5. Controller updates the View
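For comparison, the second (model-mediated) flow could be sketched like this, with an injected function standing in for the PyMEL selection call. Every name here (`SelectionModel`, `Controller`, `get_selection`) is hypothetical, not an existing API:

```python
class SelectionModel(object):
    """Wraps the DCC selection call so the controller stays app-agnostic."""
    def __init__(self, get_selection):
        # In Maya this could be pymel.core.selected; here it is injected
        # so the class runs (and is testable) outside any DCC.
        self._get_selection = get_selection
        self.source = None

    def load_source(self):
        selected = self._get_selection()
        self.source = selected[0] if selected else None
        return self.source


class Controller(object):
    def __init__(self, model, on_source_changed):
        self.model = model
        self._on_source_changed = on_source_changed  # view callback

    def button_clicked(self):
        source = self.model.load_source()
        self._on_source_changed(source)


# Usage with a fake selection in place of PyMEL:
labels = []
ctrl = Controller(SelectionModel(lambda: ['body_geo']), labels.append)
ctrl.button_clicked()
print(labels)  # ['body_geo']
```

The extra hop buys you exactly one thing: the Controller never imports pymel, so whether that is worth it depends on how likely a second host application really is.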

I’m finding I’m spinning in circles attempting to abstract this for not only the simple cases like this, but larger workflow type applications within the context of a 3D package. My task is then also of course to ensure that my coworkers can easily understand and author themselves in the same structure :smile:


#6

It’s a fair point and really can only be answered by what functionality is required.
If the button click to load a source is only for the user’s benefit (to inform the user that an object has been ‘loaded’), then I wouldn’t expect that data to need to go through to the model from the view. It can just be stored by the view until it’s needed later.

The purpose of tool, model and view is to split the relative components into their respective systems. Ideally, this provides clarity, and robustness.

Even if the ‘load source’ button runs additional validation of object viability, I would expect the view to call a model.is_viable(node) check. I would still expect the view to retain the loaded object.

When the user presses ‘apply’, I would then expect the view to gather all relevant data and pass it to a single model entry point for execution.

The point being, you’ve written the model to be used without a view. You write the view to collect and pass data to the model.
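That division of labour can be sketched briefly. This is only an illustration under assumed names (`WeightModel`, `is_viable`, `copy_weights` are hypothetical, and a tuple stands in for the real Maya weight-copy operation):

```python
class WeightModel(object):
    """Usable entirely without a view: validation plus one entry point."""
    def is_viable(self, node):
        # Placeholder check; a real Maya model might verify the node
        # exists and carries a skinCluster.
        return bool(node)

    def copy_weights(self, source, target):
        """Single entry point the view calls on 'apply'."""
        if not (self.is_viable(source) and self.is_viable(target)):
            raise ValueError('source and target must both be viable')
        return (source, target)  # stands in for the real copy operation


# The view retains what the user loaded, then hands it all to the model:
model = WeightModel()
loaded = {'source': 'body_lo', 'target': 'body_hi'}  # kept by the view
result = model.copy_weights(loaded['source'], loaded['target'])
print(result)  # ('body_lo', 'body_hi')
```

Because `copy_weights` takes plain arguments, the same call works from a shelf button, a batch script, or a unit test with no view in sight.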
If the line gets blurred then perhaps rethink your requirements, do you really need such separation? Is there another design that may be better suited?

If not, then pay the additional computational cost to maintain that separation. In the grand scheme of things, a user will typically expect a slight delay when applying an action via button anyway.

Sometimes optimisation isn’t worth the cost of compromising a design pattern.

As for the concern regarding view -> tool -> model -> signal -> tool -> signal -> view, I understand the concern. It does seem overly complicated, and as I’ve mentioned, may not be the best design for your requirements.
The upside being, you could replace the view at any time without needing to touch functional code, and you could use various models without the view ever needing to know which application it’s being used in.

Edited: additional clarification.


#7

This is great info. Thanks so much for the dialogue!


#8

Anytime :smile:
Am interested in hearing others’ experiences/thoughts too


#9

One thing I’d add is to make sure you actually understand the use cases you are likely to encounter. If you add a lot of complexity to cover an anticipated future use that might never occur, you’re paying the bill without necessarily getting any benefits. If the tools are well scoped and have good internal architecture, it may be easier to convert them to a higher level of abstraction when a practical need arises, rather than trying to anticipate the need and solve it in a vacuum.