I always use proper (well… as proper as you can in a scripting language) timestamping and compare the milliseconds used.
I don’t know exactly how to achieve this in Maya, but in principle you can start by just storing an initial timestamp in milliseconds and putting debug print statements at various parts of the rig code, logging which stage started, when it ended, and how many milliseconds it took. Otherwise you’ll end up trying to optimize something that wasn’t slow to begin with.
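Something like this minimal sketch, using plain Python’s time module (the stage names and where you call stamp() are just placeholders for whatever your rig code actually does):

```python
import time

class StageTimer:
    """Prints how many milliseconds each stage of the rig code took."""
    def __init__(self):
        self.start = self.last = time.perf_counter()

    def stamp(self, stage_name):
        now = time.perf_counter()
        stage_ms = (now - self.last) * 1000.0   # time spent in this stage
        total_ms = (now - self.start) * 1000.0  # time since the timer was created
        print("%-20s %8.2f ms  (total %8.2f ms)" % (stage_name, stage_ms, total_ms))
        self.last = now

# Hypothetical usage inside the rig build/evaluation code:
timer = StageTimer()
# ... build skeleton ...
timer.stamp("skeleton")
# ... attach constraints ...
timer.stamp("constraints")
# ... the lookup you suspect is slow ...
timer.stamp("lookup")
```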
Doing anything based on the viewport FPS meter is just wrong wrong wrong.
E.g.,
Say you have a very slow scene which runs at 10 FPS, so each frame takes 100 ms to render. You find some constant slow query which always takes 25 ms per frame (some lookup or similar). You optimize that away and get the time down to 75 ms per frame. Yet your FPS counter shows the optimized scene running at only 13.3 FPS.
However, if another scene was previously running at 25 FPS (thus, 40 ms per frame) and you apply the same optimization there, saving the same 25 ms per frame, you’re now spending only 15 ms per frame and you’ve bumped the framerate to about 66 FPS.
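To make the arithmetic concrete, here’s the same pair of examples in plain Python:

```python
def ms_per_frame(fps):
    return 1000.0 / fps

def fps_from_ms(ms):
    return 1000.0 / ms

# Scene A: 10 FPS -> 100 ms/frame; save 25 ms -> 75 ms/frame
print(fps_from_ms(ms_per_frame(10) - 25))   # ~13.3 FPS

# Scene B: 25 FPS -> 40 ms/frame; save the same 25 ms -> 15 ms/frame
print(fps_from_ms(ms_per_frame(25) - 25))   # ~66.7 FPS
```

The same 25 ms saving shows up as a tiny FPS bump in one scene and a huge one in the other.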
Now, if you tell your users you got the framerate from 10 to 13.3 they’ll go… Oh… Keep at it.
But if you tell your users you got the framerate from 25 to 66 they’ll say you walk on water.
It’s amazing how many people (even programmers optimizing game code) get this wrong and just look at FPS numbers, when they should ALWAYS be looking at milliseconds spent to judge whether an optimization made any sense.
SamiV.