Vertex Shader Animation: How would you store the information in the texture?

At SIGGRAPH 2016, the technical art talk for Uncharted 4 dug into vertex shaders. What was a little mind-blowing was the fact that they stored animation data in textures. That talk helped put the Tequila Works article, “How to take advantage of textures in the vertex shader”, into perspective. It makes sense that you can store transform and rotation vectors in RGB values. What is tripping me up is how you store all the necessary information with just RGB values.

If you had to write the texture generator, how would you encode the information for the vertex shader to read?

1: RGB values range from 0 to 255; does that mean you can never use negative values? Would you set pixels aside in the texture to mark which values are positive and which are negative?

2: Also, what happens when you need to go over 255 as a value, say for a 360-degree rotation? It is possible to add two RGB values together, but that feels like a waste of space.

3: Float values… how would floats work with RGB values, since those can only be whole numbers? I know that in vertex shaders you can convert RGB values to floats, but at that point the data would be lost, right? You could map the 0-255 range onto a 0-100 scale, so 1 would map to about 0.39, but that low a level of precision doesn’t sound usable. Or is there something in how vertex shaders parse information that I am missing?

Any thoughts, ideas or comments?
-Cheers

You’re thinking about the image in the wrong way. We are used to thinking of an image as some special block of memory that stores color values; this is wrong. Example:
xff x00 x00 xff x00 xff x00 xff x00 x00 xff xff
Those would be three pixels, red, green and blue, with alpha set to 255, if this were a memory block that we treat as a typical 8-bits-per-channel RGBA bitmap. (xff x00 x00 xff) is, for us, a red pixel (four unsigned 8-bit numbers). However, for the computer this is just an array of bits in memory, so instead of reading those 4 bytes as one pixel, we can read them as one 32-bit floating-point number, 0xff0000ff. That way we can save a full transform matrix into 64 bytes of memory (16 floating-point numbers). If you open a texture like this it will look completely crazy and meaningless to you, but for the computer only the numbers count. As for negative values: the numbers can be signed or unsigned, which means that one bit in memory is sacrificed for the +/- sign.
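
To make that concrete, here is a small sketch of the decode side in HLSL (this assumes a Shader Model 5 target for the bit intrinsics, and the channel order is just whatever convention your texture generator picks):

```hlsl
// Rebuild one 32-bit float from the four 8-bit channels of an RGBA8 texel.
// 'px' is the sampled pixel, which arrives in the shader as four 0..1 floats.
float DecodeFloatFromRGBA8(float4 px)
{
    uint4 b = (uint4)round(px * 255.0);                        // recover the raw bytes
    uint bits = (b.a << 24) | (b.b << 16) | (b.g << 8) | b.r;  // byte order must match your encoder
    return asfloat(bits);                                      // reinterpret those bits as an IEEE float
}
```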

Hope that this helped.

PS. Once again: the RGB values of a bitmap are just a way for us humans to translate the numbers stored in memory into colors we recognize, and we’ve made applications that can do it (every image viewer does). For the computer, however, an image is an array of bits.

So in your example, with 2 pixels you would be able to save 4 half-precision floating-point numbers, correct?

This “http://www.color-hex.com/color/923120” is the color and info I’m using to try to understand this example. So let’s take this shade of red, hex “923120”, which gives us an RGBA value of 146, 49, 32, 255. The color’s binary values are: R = 10010010, G = 00110001, B = 00100000, A = 11111111, and with those 8-bit blocks we get our 32-bit number. Then I’d just use R and G as the first 16-bit number, correct?
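
If I write that out as a hypothetical HLSL decode (f16tof32 is a Shader Model 5 intrinsic, and whether R or G is the high byte is just a convention the encoder has to pick), I think it would look something like this:

```hlsl
// Decode one 16-bit half from two 8-bit channels, assuming R holds the high byte.
float DecodeHalfFromRG(float2 rg)
{
    uint2 b = (uint2)round(rg * 255.0);  // e.g. (146, 49) for the 0x923120 example pixel
    uint bits = (b.x << 8) | b.y;        // 10010010 00110001 = 0x9231
    return f16tof32(bits);               // interpret those 16 bits as an IEEE half float
}
```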

Just trying to make sure I understand.

Thanks for your previous post.

By the time you hit an actual shader, your numbers are just ‘numbers’ – they’ll have different precision depending on where they came from and how they were encoded, but in the shader code they’ll be interoperable. You can interpret them in any way you see fit regardless of the underlying bit depth, as long as you don’t mind the artifacts that come from different levels of precision in the input texture. For example, you could drive a particle system with a texture where R, G and B represented positional data and A represented lifespan. The code will work the same way if you have a nasty compressed DXT texture or 16-bit floating point or a full 32-bit floating point texture as an input… but the visual results will look different because the underlying numbers will have been quantized by the storage mechanism.
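
As a hypothetical sketch of that particle example (the scale parameters here are made up), the decode is the same few lines of HLSL no matter what format the texture was saved in:

```hlsl
// Interpret one texel as particle data: RGB drives position, A drives lifespan.
// The scale factors are whatever convention you choose; the texture format only
// changes how coarsely these numbers were quantized before they reached you.
void DecodeParticle(float4 px, float3 positionScale, float maxLifespan,
                    out float3 position, out float lifespan)
{
    position = (px.rgb * 2.0 - 1.0) * positionScale;  // treat 0..1 as signed, then scale
    lifespan = px.a * maxLifespan;
}
```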

When you do math on the numbers you’ll get results that reflect the quantization (if any) imposed by the storage mechanism. DXT compression, for example, tends to be more lossy in red and less so in green, because green drives visual luminance. So a DXT-texture particle system will be a bit choppier in X than in Y, even though the movement code is identical in HLSL.

You can of course manipulate the numbers any way you want once you pull them out of the texture. A normal map typically converts a pixel by multiplying it by 2 and then subtracting 1, giving a normal vector whose X, Y and Z values range from -1 to 1. If you inspected the resulting values, though, you’d see that they are quantized by the original precision: an uncompressed 8-bit texture would produce 256 possible values between -1 and 1 for each channel, a 16-bit texture would produce 65,536 possible values, and a 32-bit float texture would produce billions.

So: the precision of the input texture controls the number of discrete values coming out of a pixel, but the interpretation is up to you. It could represent 256 possible values between 0 and 0.00001, or 256 possible values between -1,000,000 and +1,000,000. Using deeper pixels or more pixels would allow you to get more steps when you need them, but the actual value that comes out is up to you.
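
For instance, a rough sketch of what I mean by “the interpretation is up to you” (px is whatever texel you sampled):

```hlsl
// One raw sample, three interpretations; which one is "right" is entirely the
// convention you and your texture generator agree on.
void InterpretSample(float4 px, out float3 normal, out float tiny, out float huge)
{
    normal = px.rgb * 2.0 - 1.0;                 // normal-map style: remap 0..1 to -1..1
    tiny   = px.r * 0.00001;                     // the texture's steps spread over 0 .. 0.00001
    huge   = lerp(-1000000.0, 1000000.0, px.r);  // the same steps spread over -1,000,000 .. +1,000,000
}
```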

@TheoDox, Thank you!

Part 2:
Building the vertex/fragment shader that reads said texture info in Unity. If anyone has actually animated vertices using RGB values stored in textures, or has examples, any reply would be greatly appreciated. Google has been exceptionally unhelpful.
I understand the theory; I just haven’t found examples to put said theory into practice.

Never mind, I think I have it working; I just figured out I can read the vertex position based on the UV channel. Hoping to get a prototype up and running soon. :slight_smile:
Though I wouldn’t mind any reading material or hearing about anyone’s experience on the subject. :slight_smile:
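
In case it helps anyone else who lands on this thread, here is a minimal Unity ShaderLab sketch of the idea I’m describing, assuming the second UV channel stores each vertex’s column in the animation texture and that the texture stores signed offsets remapped into 0..1 (the shader and property names are placeholders):

```hlsl
Shader "Hypothetical/VertexAnimFromTexture"
{
    Properties
    {
        _MainTex ("Albedo", 2D) = "white" {}
        _AnimTex ("Animation Texture", 2D) = "black" {}
        _Frame   ("Normalized Frame (0-1)", Range(0, 1)) = 0
        _Scale   ("Offset Scale", Float) = 1
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0        // vertex texture fetch needs SM3+
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _AnimTex;
            float _Frame;
            float _Scale;

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv     : TEXCOORD0;  // regular texture coords for the albedo
                float2 uv2    : TEXCOORD1;  // x = this vertex's column in _AnimTex
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv  : TEXCOORD0;
            };

            v2f vert (appdata v)
            {
                v2f o;
                // Vertex shaders can't compute mip levels, so sample an explicit LOD.
                float4 px = tex2Dlod(_AnimTex, float4(v.uv2.x, _Frame, 0, 0));
                // Assumes the texture stores a signed offset remapped into 0..1.
                float3 offset = (px.rgb * 2.0 - 1.0) * _Scale;
                v.vertex.xyz += offset;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
```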

I did a particle system where every particle animated based on the pixels in a texture. The particles would spawn as quads, and the shader just gave them an id and a lifespan value (which the game updated every frame). The id told me the V coord of the texture I wanted and the lifespan number told me the U coord; the pixels recorded the position for that frame as an RGBA value. RGB was the normalized position and A was the length of that vector; the real position was (RGB * A * some_scaling_number), and the result was the offset position for the quad. You could encode all sorts of arbitrary animations for a quad into the RGBA values and play them back on the vertices that way; the number of possible particles was the V size of the texture and the number of frames was the U size.
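
A hypothetical HLSL version of that decode might look like this (the names are invented, but the (RGB * A * scale) step is exactly what I describe above):

```hlsl
// lifespan01:    this particle's normalized lifespan (the U coordinate)
// particleRow01: this particle's id divided by the texture's V size (the V coordinate)
float3 DecodeParticleOffset(sampler2D animTex, float lifespan01, float particleRow01, float scale)
{
    float4 px = tex2Dlod(animTex, float4(lifespan01, particleRow01, 0, 0));
    return px.rgb * px.a * scale;   // RGB = normalized position, A = its length
}
```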

You should try to look up Natalie Bird’s GDC talk from last year, btw; it’s a great primer on vertex animation.