At SIGGRAPH 2016, the technical art talk for Uncharted 4 dug into vertex shaders. What was a little mind-blowing was the fact that they stored animation data in textures. That talk helped put this Tequila Works article, "How to take advantage of textures in the vertex shader," into perspective. It makes sense that you can store translation and rotation vectors in RGB values. What is tripping me up is how you store all the necessary information with just an RGB value.
If you had to write the texture generator, how would you encode the information for the vertex shader to read?
1: RGB values range from 0 to 255. Does that mean you can never use negative values? Would you set pixels aside in the texture to flag which values are positive and which are negative?
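One common approach I've seen (rather than reserving sign pixels) is to remap a signed range into the unsigned 0–255 range on the CPU side, then undo the remap in the shader. A minimal sketch of that idea, assuming values are pre-normalized to [-1, 1] (the function names here are my own, not from the article):

```python
def encode_signed(v):
    """Map a signed value in [-1, 1] to an unsigned byte in [0, 255].

    The texture generator would write this byte into a channel.
    """
    return round((v * 0.5 + 0.5) * 255)

def decode_signed(b):
    """Shader-side inverse: the sampled channel (b / 255) is remapped
    back to [-1, 1]. In HLSL/GLSL this is typically `value * 2.0 - 1.0`.
    """
    return (b / 255) * 2.0 - 1.0
```

So -1 encodes to 0, +1 encodes to 255, and 0 lands on 128 with a small quantization error of about 1/255.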
2: Also, what happens when you need to go over 255 as a value, for example a 360-degree rotation? It is possible to add two RGB values together, but that feels like a waste of space.
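Rather than adding two channels together, the usual trick I'd expect is to treat two channels as the high and low bytes of a single 16-bit value, which covers 0–65535 instead of 0–255. A sketch under that assumption (again, my own function names):

```python
def pack16(value):
    """Split an integer in [0, 65535] into a (high, low) byte pair,
    e.g. high byte in the R channel and low byte in G.
    """
    return (value >> 8) & 0xFF, value & 0xFF

def unpack16(hi, lo):
    """Shader-side reconstruction: high * 256 + low."""
    return hi * 256 + lo
```

For an angle specifically, you could also sidestep the problem entirely by storing angle / 360 as a normalized 0–1 value, since the shader can multiply it back up.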
3: Float values: how would floats work with RGB values, since they can only be whole numbers? I know in vertex shaders you can convert RGB values to floats, but at that point the data would be lost, right? You could map the 0-255 range to a 0-100 scale, so that 1 = 0.39, but that low level of precision doesn't sound usable. Or is there something in how vertex shaders parse information that I am missing?
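To make the precision question concrete: an 8-bit channel quantizes a 0–1 float to steps of 1/255, while spreading the same value across two channels (16 bits, as in the packing above) shrinks the step to 1/65535. A quick sketch comparing the worst-case error (my own helper, just to illustrate the math):

```python
def quantize(x, bits):
    """Round x in [0, 1] to the nearest level representable at the
    given bit depth, i.e. what sampling an integer texture gives back.
    """
    levels = (1 << bits) - 1
    return round(x * levels) / levels

# Quantization error for an arbitrary float at 8 vs 16 bits.
err8 = abs(quantize(0.1234, 8) - 0.1234)    # at most ~0.002
err16 = abs(quantize(0.1234, 16) - 0.1234)  # at most ~0.000008
```

Worth noting, too, that GPUs support floating-point texture formats (half and full float per channel), so in practice a texture generator isn't forced into 8-bit integer channels at all; the 8-bit packing tricks matter mainly when you need the smallest possible format.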
Any thoughts, ideas, or comments?