Does anybody code shaders?

Our game uses a forward renderer - so it’s easier to put new shaders in. However, most of the shader work that I do is figuring out how to make our current set of shaders run faster, and how to put new features into our current shaders without adding too much to the performance cost.

It seems to me that with a deferred renderer, you should at least be able to write shaders that create the data that goes into the gbuffers in new ways. For example, if your gbuffers contain diffuse and normal data, can’t you still write shaders that create that diffuse and normal data? I’m referring to things like detail normal maps, combining multiple diffuse maps, scrolling texture effects, etc. And you should also be able to do post-process effects as well on the other side of the gbuffers.
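For example, something like this minimal GLSL sketch (the texture names, uniforms, and two-target g-buffer layout are all made up for illustration) is the kind of thing an artist could still author; the lighting pass only ever sees the render targets it writes:

```glsl
#version 330 core
// Hypothetical g-buffer fill shader: detail normals, blended/scrolling diffuse,
// all happening before the deferred lighting pass ever runs.

in vec2 vUV;
in mat3 vTBN;                     // tangent-to-world basis built in the vertex shader

uniform sampler2D uAlbedoA;
uniform sampler2D uAlbedoB;       // second diffuse map to blend in
uniform sampler2D uBlendMask;
uniform sampler2D uNormalMap;
uniform sampler2D uDetailNormal;  // tiled detail normal map
uniform float uTime;              // drives the scrolling effect

layout(location = 0) out vec4 outAlbedo;   // assumed g-buffer target 0
layout(location = 1) out vec4 outNormal;   // assumed g-buffer target 1

void main()
{
    // Scroll the second diffuse layer and blend it over the base one.
    vec2 scrollUV = vUV + vec2(uTime * 0.05, 0.0);
    vec3 albedo = mix(texture(uAlbedoA, vUV).rgb,
                      texture(uAlbedoB, scrollUV).rgb,
                      texture(uBlendMask, vUV).r);

    // Combine base and detail normals in tangent space (simple whiteout-style blend).
    vec3 baseN   = texture(uNormalMap, vUV).xyz * 2.0 - 1.0;
    vec3 detailN = texture(uDetailNormal, vUV * 8.0).xyz * 2.0 - 1.0;
    vec3 tsN = normalize(vec3(baseN.xy + detailN.xy, baseN.z));

    // Write the results; the lighting pass just reads these targets.
    outAlbedo = vec4(albedo, 1.0);
    outNormal = vec4(normalize(vTBN * tsN) * 0.5 + 0.5, 1.0);
}
```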

The way to do it is a data-driven rendering pipeline. We used it at OVERKILL Software and I know Bitsquid Tech uses it as well. Read more about it here: http://www.bitsquid.se/presentations/benefits-of-a-data-driven-renderer.pdf

Basically they have a JSON file describing the entire rendering pipeline, so you can set up any kind of rendering system purely with data, without programmer intervention. Everything from render target creation and binding to render layers and shader binding is described as data. Materials point to render layers, and the render layers are processed in the order they are declared: depth prepass first, then gbuffer, then light accumulation, etc. Want to insert a compute shader (DX11) based Depth of Field? No problem.
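Something along these lines gives the flavor of it; this is a made-up illustration of the idea, not the actual Bitsquid schema (all names are invented):

```json
{
  "render_targets": {
    "gbuffer_albedo": { "format": "R8G8B8A8",      "size": "back_buffer" },
    "gbuffer_normal": { "format": "R10G10B10A2",   "size": "back_buffer" },
    "hdr_accum":      { "format": "R16G16B16A16F", "size": "back_buffer" }
  },
  "layers": [
    { "name": "depth_prepass", "depth_target": "main_depth", "sort": "front_to_back" },
    { "name": "gbuffer",  "targets": ["gbuffer_albedo", "gbuffer_normal"], "depth_target": "main_depth" },
    { "name": "lighting", "targets": ["hdr_accum"],
      "resource_binds": ["gbuffer_albedo", "gbuffer_normal", "main_depth"] },
    { "name": "dof_compute", "type": "compute", "shader": "dof_cs",
      "resource_binds": ["hdr_accum", "main_depth"] }
  ]
}
```

Inserting that Depth of Field pass is just another entry in the layer list; nothing in engine code has to change.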

Permutations are handled by an asset conditioning pipeline.

Very flexible.

[QUOTE=bcloward;18474]Our game uses a forward renderer - so it’s easier to put new shaders in. However, most of the shader work that I do is figuring out how to make our current set of shaders run faster, and how to put new features into our current shaders without adding too much to the performance cost.

It seems to me that with a deferred renderer, you should at least be able to write shaders that create the data that goes into the gbuffers in new ways. For example, if your gbuffers contain diffuse and normal data, can’t you still write shaders that create that diffuse and normal data? I’m referring to things like detail normal maps, combining multiple diffuse maps, scrolling texture effects, etc. And you should also be able to do post-process effects as well on the other side of the gbuffers.[/QUOTE]

It’s still possible to make such changes, but since all the parts of the pipeline are so tightly interwoven and subject to continuous change and optimization, everybody who commits code that goes straight into builds is a rendering programmer: involved in strategic meetings, trained in low-level optimization, and able to do gamma correction in their heads. :stuck_out_tongue:
I worked on some forward shaders for special effects like water or alien skin, but anything in the deferred pipeline, even the stuff that you mentioned, requires changes to the rendering pipeline, the shader code, and even our texture-conversion process (we compress normal maps, detail maps, gloss maps, etc. in very peculiar ways to get the most out of the available value ranges).
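One common flavor of that kind of packing, purely as an illustration and not necessarily our actual scheme: store only X and Y of the normal in a two-channel compressed texture (BC5/3Dc style) and rebuild Z in the shader, which spends the full value range of both stored channels on the two components that matter.

```glsl
// Illustrative GLSL: reconstruct a unit normal from a two-channel texture.
// The texture only stores X and Y; Z is recomputed from the unit-length constraint.
vec3 unpackTwoChannelNormal(vec2 packedXY)
{
    vec2 xy = packedXY * 2.0 - 1.0;                  // [0,1] -> [-1,1]
    float z = sqrt(max(1.0 - dot(xy, xy), 0.0));     // z = sqrt(1 - x^2 - y^2)
    return normalize(vec3(xy, z));
}
```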

That’s not to say that a Technical Artist cannot do such things (in fact we have one), but I would consider him more of a Rendering Programmer than a Tech Artist, because he’s focused exclusively on development of the rendering pipeline and spends most of his time in Visual Studio or looking at screenshots at 800% magnification. :stuck_out_tongue:

[QUOTE=saschaherfort;18471]Ben, Bronwen, I’m curious to hear more about your rendering pipeline.

Like I said before, our company uses a very elaborate deferred pipeline, where adding a single new slider to our main shader can drag a rat’s tail through the entire pipeline behind it (permutations, cross-platform optimizations, modifying light shaders to support the new param, packing all params into G-buffers, etc.).
This is why we have only programmers writing shaders and everything that ties into it.

How do you deal with that at your studio?[/QUOTE]

Most of our games are forward rendered. Dota 2 is a deferred hybrid, with some objects getting an additional forward pass for more complicated shader effects. That was a pretty lengthy discussion, because obviously the extra pass is costly, but we determined it was worth it. Luckily the objects which use that pass don’t take up a lot of screen space. We haven’t dealt with your specific case very much (fully deferred), but we have discussed it a whole lot since it was on the table for a while.

Once the deferred renderer is designed and implemented, any changes are hard for anyone (including graphics programmers). The cost of additional g-buffers is prohibitive. Changing the g-buffer contents can have a negative impact on content creation, requiring re-work. Both things mean that you are pretty much stuck with what you’ve got, regardless of how difficult or easy it is to actually code a change. After a certain point, almost any change is just a Bad Idea because it incurs too many costs.

Since deferred is the route your studio has gone, you may just have to accept that there are no opportunities for look dev and find a different way to contribute. Or if you’ve got a really strong opinion and you want to try something, you’ll have to expand your knowledge and abilities so that you’re on par with a graphics programmer. Learn all the things!

Can someone tell me why a Tech Artist is talking to me about shaders?

/turns his back
.
.
.
.
.
.
.
True story.

[QUOTE=Rob Galanakis;18489]Can someone tell me why a Tech Artist is talking to me about shaders?

/turns his back[/QUOTE]

ROFL!! :laugh:

:expressionless:

Bronwen, that was Rob’s impersonation of our lead engine programmer when I first tried to talk to him about getting some new shaders into the game. That was an interesting day.

Just before I started my first job at a startup (I was the first/only TA), the art team had been arguing with the programming team for 2 months because the art team said their normals were inverted and the programmers said everything was correct.

Of course the issue was just flipping a normal map component.
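The whole argument usually boils down to one line in the shader. A GLSL sketch (names are illustrative): the stored green channel follows either the OpenGL (+Y) or the DirectX (-Y) convention, and if the baker and the engine disagree, lighting looks subtly inside-out until someone flips it.

```glsl
// Unpack a tangent-space normal, optionally flipping the green channel
// to match whichever convention the maps were baked with.
vec3 unpackNormal(sampler2D normalMap, vec2 uv, bool flipGreen)
{
    vec3 n = texture2D(normalMap, uv).xyz * 2.0 - 1.0;   // [0,1] -> [-1,1]
    if (flipGreen) n.y = -n.y;                           // the entire two-month argument
    return normalize(n);
}
```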

This seems to be a common issue. I remember when I started at another company that had never had TAs who did shaders, it was a similar problem: binormals weren’t being passed through properly. Is that just a thing?

It’s pure convention, so subject to arbitrary decision-making. That’s why the normal map doohickey in ZBrush covers all combinations of XYZ/RGB, +/-, and flipped U/Vs…

And of course if you’ve got one component right and the other one wrong, the ‘wrongness’ is subtle enough to elude eyes that only know programmer art.

sigh


After doing a search on the forum for GLSL, I’ll dare to demonstrate my awesome necromancy skills on this thread :slight_smile:

I “write” shaders. I’ve been doing 3D art for a long time, but had zero experience with scripting, game engines, and such. My introduction to scripting was initially through Rhino and architecture, but at one point I got to work with some Flash devs who were experimenting with Away3D. I paid attention to what went on with AS3, which was somewhat confusing. What seemed to make the most sense to me, to everyone’s surprise, was something called AGAL, which is basically an assembly shader language (opcodes, registers), but I saw colors and vectors.

From there I went to Unity, moved step by step from tampering with surface shaders to CG, and later picked up WebGL through a user-friendly library, but I’ve been learning GLSL.

My original intention was to just work on a low poly art portfolio. But I hit a wall with normal maps, the workflow, the issues. I’m still under the impression that most 3d artists have an array of different hacks for doing this type of thing.

I decided the only way to proceed was to try and build my own synced workflow with an arbitrary engine. After a whole bunch of experimentation, I ended up with a pretty nifty workflow that allows me to do a bake within Max, export the tangent space in a JSON format, and unpack the normals with a specific shader. After solving a gamma issue related to the nature of WebGL / browsers, the results look pretty good. It seems pretty hard to get seams by accident, and it seems to be performing pretty well.
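Roughly, the unpacking side looks like the vertex shader below (WebGL-style GLSL; the attribute and uniform names are just what I’d call them here, not the exporter’s actual ones). The point is that the per-vertex tangent basis baked in Max travels with the mesh, so the runtime basis matches the baker exactly:

```glsl
// Rebuild the baked tangent basis per vertex and hand it to the fragment shader.
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec4 aTangent;            // xyz = tangent, w = handedness sign from the baker

uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;
uniform mat3 uNormalMatrix;

varying mat3 vTBN;

void main()
{
    vec3 n = normalize(uNormalMatrix * aNormal);
    vec3 t = normalize(uNormalMatrix * aTangent.xyz);
    vec3 b = cross(n, t) * aTangent.w;   // bitangent rebuilt with the baked handedness
    vTBN = mat3(t, b, n);                // tangent space -> view space
    gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
}
```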

It’s got some redundant stuff as is (the worst is the normal transformation from world to view in the pixel shader) and some experimental lines, but this can all be stripped down to the bare bones: a Phong shader with a rim term, sampling a cubemap, unpacking normals. It’s simple to do all the calcs in the same space, get all the vectors, and reduce the number of operations.
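Stripped down, the fragment side is something like this sketch (illustrative names, everything kept in one space for simplicity):

```glsl
// Bare-bones Phong + rim + cubemap sample, with the normal unpacked
// through the TBN basis passed in from the vertex shader.
precision mediump float;

uniform sampler2D uNormalMap;
uniform samplerCube uEnvMap;
uniform vec3 uLightDir;              // light direction, same space as the TBN output

varying vec2 vUV;
varying vec3 vViewDir;               // surface-to-camera vector
varying mat3 vTBN;

void main()
{
    vec3 n = normalize(vTBN * (texture2D(uNormalMap, vUV).xyz * 2.0 - 1.0));
    vec3 v = normalize(vViewDir);
    vec3 l = normalize(uLightDir);

    float diff = max(dot(n, l), 0.0);
    float spec = pow(max(dot(reflect(-l, n), v), 0.0), 32.0);
    float rim  = pow(1.0 - max(dot(n, v), 0.0), 2.0);
    vec3  env  = textureCube(uEnvMap, reflect(-v, n)).rgb;

    gl_FragColor = vec4(vec3(diff + spec) + rim * env, 1.0);
}
```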

http://dusanbosnjak.com/test/webGL/new/poredjenjeNM/poredjenjeNormala_01.html

Sure, things like shadows and transparency are complex, but a surface shader is a relatively simple concept, plus there’s tons of stuff animation-wise that an artistic person can do.

The world/view/screen transforms seem like a useful thing to know for anyone more tech-oriented in the 3D world. The whole theory allowed me to better understand what a mesh is made of, how triangles are made, what a seam is (how many verts there are, how many normals), etc. But in the end, I don’t know how to automate a batch export / bake somewhere else / import process with a nice user-friendly GUI.
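For anyone curious, the transform chain itself is tiny once it’s written out in one place (the uniform names here are mine, just for illustration):

```glsl
// Vertex shader showing the full chain: model -> world -> view -> clip.
attribute vec3 aPosition;

uniform mat4 uModel;        // object space -> world space
uniform mat4 uView;         // world space  -> view/camera space
uniform mat4 uProjection;   // view space   -> clip space

void main()
{
    vec4 worldPos = uModel * vec4(aPosition, 1.0);
    vec4 viewPos  = uView * worldPos;
    gl_Position   = uProjection * viewPos;
    // After this the GPU divides by w (normalized device coordinates) and the
    // viewport transform maps NDC to screen pixels; no shader code needed there.
}
```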

Could anyone give some advice on how to direct these skills more towards gaming? On one hand I understand that this barely scratches the surface of what an engineer does, but on the other, it’s a good exercise in 3D math, and having low-level access as an artist might allow for some interesting effects. Nevertheless, it’s useful for the web.

[QUOTE=pailhead;24280]

http://dusanbosnjak.com/test/webGL/new/poredjenjeNM/poredjenjeNormala_01.html[/QUOTE]

Nice work! I’ve been working a bit with three.js, although I’m still a bit stuck on the shaders.

I started writing shaders as soon as we started developing apps in Unity. Previously we’d been using 3Delight, but I never got time to work on shaders there because I was constantly making pipeline tools.

I mostly use CG inside ShaderLab for mobile shaders, writing low-level pixel and vertex shaders, which is pretty awesome, but you have to constantly think about how to optimize channel usage and make things look cool with simple math. It’s important to know vector math and to understand how colors can be mixed in different ways, and also how the different transparency tests and the depth buffer work.
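Something like this (GLSL here for consistency with the rest of the thread; all names are illustrative) is typical of the channel juggling I mean: instead of fetching three greyscale textures, pack the masks into one RGBA sample and build the look with cheap math.

```glsl
// One texture fetch supplies three masks; the "look" is simple arithmetic.
precision mediump float;

uniform sampler2D uBaseMap;
uniform sampler2D uMaskMap;   // r = ambient occlusion, g = emissive mask, b = rim mask
uniform vec3 uEmissiveColor;

varying vec2 vUV;
varying float vRim;           // rim term computed per-vertex to keep the fragment shader cheap

void main()
{
    vec4 base  = texture2D(uBaseMap, vUV);
    vec3 masks = texture2D(uMaskMap, vUV).rgb;   // one fetch instead of three

    vec3 color = base.rgb * masks.r              // AO from one channel
               + uEmissiveColor * masks.g        // emissive from another
               + vRim * masks.b;                 // rim only where the mask allows

    gl_FragColor = vec4(color, base.a);
}
```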

Double-checking code that comes from “art” people is sometimes necessary, but once you get a grasp of how the graphics pipeline works, it shouldn’t be too hard for you to optimize shaders, as long as you aren’t working on a very big project.

Yes, there’s a lot of room for an art-first person to write shaders.

Generally speaking, I’d say node-based shader ‘graphs’ are good training wheels: a chance to understand what the basic mathematical toolkit looks like and to see what’s possible. Over the long haul, the node-based stuff tends to top out and get replaced by code, but nodes are a great gateway drug, and by the time you get frustrated with what they offer you should be familiar enough with the basics of the business that the cost of typos etc. is not overwhelming.

There’s a whole level of graphics programming that’s way beyond most of us (stuff that requires a really serious grasp of the math and of how the hardware works), but it takes years to get there. If it’s your destiny you might find your way there, but you can do cool stuff without having to be John Carmack.