Just how expensive is vertex alpha really?

I couldn't find any conclusive information about this. (And googling for “vertex alpha performance” only generates results about protein powders -_- )
I know that vertex colors are super cheap, and that other types of alpha (bitmap) use up texture memory - but when it comes to the actual rendering: how expensive is vertex alpha really?
The idea is to use it for texture blending, but I'm really not sure it's worth it.

Vertex color and alpha are “cheap” compared to other types of data, and the colors can be compressed if you need the memory.

Not really enough information to answer your question, though. The number of vertices is going to determine the actual cost, along with whether or not you allow one color per face-vertex.

Assuming this is for games, your biggest cost comes from the fact that you’re using alpha at all, regardless of where it comes from. There may be additional overhead to sort the polys properly, both within the object itself and between the object and other objects in the scene. Heaven forbid that transparent objects should interpenetrate. The other big cost is that the hardware has to read from the framebuffer to do the blending.
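
To make that framebuffer-read point concrete, here’s a rough CPU-side sketch of the standard “over” blend the hardware performs per pixel (plain Python standing in for the blend unit; the function name and values are just for illustration, not from any engine):

```python
def blend_over(src_rgb, src_a, dst_rgb):
    """Standard 'over' alpha blend: the hardware has to READ the existing
    framebuffer value (dst_rgb) before it can write the blended result."""
    return tuple(s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

# Example: a 50%-transparent red fragment landing on a blue background pixel.
print(blend_over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0)))  # -> (0.5, 0.0, 0.5)
```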

Someone who spends more time with shaders than I do should feel free to correct me, but I don’t believe there should be any significant cost difference between using vertex alpha and texture alpha so long as the other parameters stay the same, i.e., you don’t have to crank up your vert count to provide the level of alpha fidelity you’re trying to achieve. As soon as you start using vert colors/alpha extensively, you’ll probably notice strange artifacts as the colors blend along poly edges. This almost always forces you to touch the model to flip troublesome edges, subdivide, etc.

Are you doing deferred rendering? If so, then chat with an engineer. Alpha is always problematic. Alpha in a deferred rendering environment even more so, and there could be problems with vert alpha here.

btribble is definitely correct: alpha is expensive on its own, and the place you store it doesn’t really factor into that upfront cost.

If you’re just using the vertex-alpha channel to handle a mask in the shader, that’s no more expensive than storing the mask in vertex-red or vertex-green. Though depending on the engine’s ability to compress the vertex colors, adding that extra channel will up your memory footprint by some amount.
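
Just to illustrate why the channel choice doesn’t matter, here’s a minimal sketch of the kind of lerp the shader ends up doing with an interpolated vertex mask (Python standing in for shader math; the texel values are made up for the example):

```python
def blend_textures(ground, road, mask):
    """Lerp between two texture samples using an interpolated per-vertex mask.
    Whether 'mask' came from vertex alpha, vertex red, or vertex green makes
    no difference to the cost: it's one extra interpolated float either way."""
    return tuple(g * (1.0 - mask) + r * mask for g, r in zip(ground, road))

# mask = 0 -> pure ground texel, mask = 1 -> pure road texel
print(blend_textures((0.4, 0.3, 0.2), (0.5, 0.5, 0.5), 0.25))
```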

On the topic of compression, you’ll want to check with your engineers on that one. Depending on the density of your mesh and the compression method, you can often run into cases where good old DXT compression will actually save you more memory, though at the cost of a texture sample.
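
Back-of-the-envelope numbers, assuming uncompressed RGBA8 vertex colors versus a DXT1 mask texture (the vert count and texture size are purely illustrative):

```python
# Rough memory comparison: per-vertex RGBA8 colors vs. a DXT1 mask texture.
# All sizes are assumptions for illustration only.
vert_count = 50_000
vertex_color_bytes = vert_count * 4        # RGBA8 = 4 bytes per vertex

tex_size = 512                             # 512x512 mask texture
dxt1_bytes = (tex_size * tex_size) // 2    # DXT1 is 0.5 bytes per texel

print(f"vertex colors: {vertex_color_bytes / 1024:.1f} KiB")  # ~195 KiB
print(f"DXT1 512x512 : {dxt1_bytes / 1024:.1f} KiB")          # 128 KiB
```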

Another thing to look out for is that vertex colors can contribute to the overall vertex count, much like UV borders and soft/hard edges.

In performance terms, the cost of alpha has to do with overdraw. The graphics hardware likes opaque geometry, since it has a very simple job when determining whether or not to change a pixel value: just check and see if it’s in front. A transparent pixel will be ‘drawn’ at least twice, and depending on the implementation you may have to draw a whole separate universe of alpha pixels that need to be sorted out. In the performance universe it doesn’t really matter if the alpha came from vertices or from textures: the number of transparent pixels in the final image is what drives cost, not how they became transparent.

In memory terms, it’s just numbers. The details will depend on the implementation. Vertex alpha will be cheaper than per-pixel alpha textures since you’re storing less of it, but vertex-alpha vertices are fatter than vertices without alpha (the cost will be basically equivalent to adding another UV channel).
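
Rough per-vertex accounting, assuming a typical layout (real vertex formats vary by engine, so treat these sizes as an illustration only):

```python
# Assumed, typical vertex attribute sizes -- actual formats depend on the engine.
pos     = 3 * 4   # float3 position               = 12 bytes
normal  = 3 * 4   # float3 normal                 = 12 bytes
uv0     = 2 * 4   # float2 UV channel             =  8 bytes
color   = 4 * 1   # RGBA8 color (alpha in the A)  =  4 bytes
uv_half = 2 * 2   # half-precision UV channel     =  4 bytes

base = pos + normal + uv0
print(base, base + color, base + uv_half)  # 32, 36, 36 bytes per vertex
```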

The balance between the two depends on platform. Generally, less hardcore hardware is more likely to be performance bound (older iDevices, for example, and Android stuff generally; this is especially true for PowerVR-based graphics chips). Console hardware is more likely to be memory bound in some genres (I do big open-world games, so memory is always a big deal for me, but in a fighting game it’s highly unlikely to be the main problem).

In the specific case of texture blending, the ‘right’ answer is driven by budgets: unless the blending is dynamic (like one texture scrolling under a different one), a blend-based solution will be slower but more memory efficient, while a ‘baked’ solution (like doing the composition of textures in the pipeline) will run faster and use more memory.

In other words, the endless tragedy of game development: you can have memory or perf, but never both (and, sometimes, neither…)
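
Rough numbers for that memory-vs-perf trade in the texture blending case (texture sizes and counts are assumptions, not real budgets):

```python
# Assumed: 1024x1024 DXT1 textures at roughly 0.5 MiB each (ignoring mips).
tex_mib = 0.5

# Blend path: keep the tileable ground + road sources and pay for two texture
# samples plus a lerp in the pixel shader at runtime.
blend_memory = 2 * tex_mib

# Baked path: composite a unique texture per ground/road combination offline;
# runtime cost is a single sample, but memory scales with the number of bakes.
unique_combinations = 8
baked_memory = unique_combinations * tex_mib

print(blend_memory, baked_memory)  # 1.0 MiB vs 4.0 MiB
```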

Thank you for your replies.
Currently my colleague (also a 3D artist) and I are working externally for another studio which we have close ties to. We are working on environment art and we have a limitation where we cannot use any splatmapping technique on meshes. For most stuff - like ground textures - we can create tileable textures to be used with their terrain tool. I’m not sure how splatmapping is normally done with a terrain tool, and I’m not sure I can get hold of any of their engineers as it is vacation time atm. Either way, to cut things short: we are working on a set of assets which requires texture blending on a mesh level. Since their engine doesn’t support splatmapping on meshes, we are considering using vertex alpha instead, but we are hesitant since we are not sure how problematic it will be for their engine. It’s not a whole lot of textures to be blended here (more like one road texture onto one ground texture), but their engine is quite old.

I know that it’s rather awkward to ask you guys about this since you have no insight into their engine or their shaders, so I understand that you can’t give me anything but very general answers.
I’ll try and get into contact with one of their programmers this week, but till then all I have is you guys :wink:

Watch out that you are not doing work for nothing, as the shader has to support vertex alpha too.

Using vertex alpha for texture blending inside opaque geometry does not incur transparency costs, though it will make your pixel shader more expensive than it would be otherwise. Many terrain systems use RGBA vertex colors to provide masks for up to 5 layers of terrain textures; that works fine and does not have the overdraw costs. It does, however, stress memory bandwidth, since a given pixel will need to access as many as 5 textures and those will not come in an order the hardware will be able to predict.
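
A sketch of how those RGBA masks typically stack into five layers (CPU-side Python standing in for the pixel shader; the texel values and weights are invented for the example):

```python
def blend_terrain(base, layers, masks):
    """Blend a base texel with up to four layer texels, with one vertex-color
    channel (R, G, B, A) acting as the opacity mask for each layer.
    That gives 1 base + 4 masked layers = 5 texture reads per pixel."""
    result = list(base)
    for layer, m in zip(layers, masks):
        result = [r * (1.0 - m) + l * m for r, l in zip(result, layer)]
    return tuple(result)

base   = (0.35, 0.30, 0.20)                      # dirt
layers = [(0.2, 0.5, 0.2), (0.5, 0.5, 0.5),      # grass, rock,
          (0.8, 0.8, 0.7), (0.3, 0.25, 0.2)]     # snow, road
masks  = (0.5, 0.0, 0.0, 0.75)                   # vertex R, G, B, A
print(blend_terrain(base, layers, masks))
```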

In an ideal world you’d pre-process the geometry into chunks, so that a big block of geo which visually shows only one texture would use a different shader than a chunk which shows transitions. The combinatorics get pretty nasty, but you could see a win if you went with just two shaders, ‘terrain simple’ and ‘terrain transition’, where only the latter actually did blending.

If you go that route, you want to split the geometry offline and do it for the artists; making them manage that stuff themselves is asking for both tech trouble and rebellion.
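
Something like this is what I mean by splitting offline; a minimal sketch with invented data structures (a real tool would obviously work on the engine’s own mesh format):

```python
def classify_triangles(triangles, vert_alpha, eps=1e-3):
    """Split triangles into two buckets: ones whose vertex alphas are all
    (near) 0 or all (near) 1 can use the cheap 'terrain simple' shader;
    anything that actually blends goes to 'terrain transition'."""
    simple, transition = [], []
    for tri in triangles:                      # tri = (i0, i1, i2) vertex indices
        alphas = [vert_alpha[i] for i in tri]
        if all(a < eps for a in alphas) or all(a > 1.0 - eps for a in alphas):
            simple.append(tri)
        else:
            transition.append(tri)
    return simple, transition

# Toy mesh: 4 verts, 2 triangles, alpha painted only on vertex 3.
tris  = [(0, 1, 2), (1, 2, 3)]
alpha = [0.0, 0.0, 0.0, 0.6]
print(classify_triangles(tris, alpha))  # ([(0, 1, 2)], [(1, 2, 3)])
```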

Is your name Theodox or Varys? :wink: