“Budgets” are a black art because it’s not always clear what the real units of allocation are. That’s why there’s no easily available reference.
A game with heavy use of streaming will have an ‘infinite’ budget in one sense – you can cycle through an endless series of textures over time, but at any given moment you’re limited to available VRAM. Textures are constantly coming and going – which is kind of neat, apart from the fact that streaming textures in and out of VRAM is a tax on CPU and disk usage. So you can’t think of texture memory as a fixed allocation bucket in a streaming game: it’s as much a bandwidth problem as it is a resource problem. Geometry memory is also limited by VRAM, and potentially by streaming – and it has the same problem of draining CPU time as new meshes are added and removed.
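To put rough numbers on the bandwidth side, here’s a trivial back-of-envelope sketch – every figure in it (disk throughput, frame rate, mip size) is an illustrative assumption, not a real platform spec:

```cpp
#include <cstdio>

// Back-of-envelope: can the disk keep up with texture streaming demand?
// All numbers below are illustrative assumptions, not real platform specs.
int main() {
    const double diskThroughputMBs = 50.0;  // assumed sustained read speed (slow HDD)
    const double frameRate         = 30.0;  // target frames per second
    const double mbPerFrame = diskThroughputMBs / frameRate;  // ~1.7 MB/frame

    // A single 2048x2048 BC7 mip level is ~4 MB -- more than two frames of
    // streaming budget, so a fast camera pan that invalidates dozens of
    // textures can easily outrun the disk.
    const double mip2kBC7MB = 4.0;
    std::printf("Streaming budget: %.2f MB/frame; one 2k BC7 mip: %.1f MB (%.1f frames)\n",
                mbPerFrame, mip2kBC7MB, mip2kBC7MB / mbPerFrame);
    return 0;
}
```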
On the other hand, a fixed-load game – like a dungeon level that preloads all of its characters, effects, and geometry during a load screen – is more or less a fixed-bucket allocation limited by VRAM size.
Geometry is also interesting nowadays because on most top-end hardware we have more compute power than memory bandwidth. In the old days we worried about triangle counts because transforming and rasterizing triangles was hard. Nowadays we have enormous power for transforming vertices, but we lose rasterization efficiency when triangles become too small. LODs used to be there to save math time – now they are there to make sure we don’t draw lots of sub-pixel-sized triangles onto the same final pixel. Unfortunately that also means we need memory for all of those LODs: they basically become a kind of memory-for-performance tradeoff. Too many LODs can be bad for perf by breaking up draw batches; too few are bad because of the tiny triangles. The right amount unfortunately depends on the nature of your game content and how the player is going to see it.
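For concreteness, here’s one common way to pick an LOD by projected screen size rather than raw distance – a minimal sketch, with thresholds that are made-up placeholders you’d tune against your own content:

```cpp
#include <cmath>

// Pick an LOD from the projected screen-space size of a mesh's bounding sphere.
// The goal is to keep average triangle size well above a pixel, so we never
// rasterize piles of sub-pixel triangles. The thresholds are illustrative only.
int selectLod(float boundingRadius, float distanceToCamera,
              float verticalFovRadians, float screenHeightPixels) {
    // Approximate projected diameter of the bounding sphere, in pixels.
    float projected = (2.0f * boundingRadius / distanceToCamera)
                    * (screenHeightPixels / (2.0f * std::tan(verticalFovRadians * 0.5f)));

    if (projected > 400.0f) return 0;  // full-detail mesh
    if (projected > 100.0f) return 1;
    if (projected > 25.0f)  return 2;
    return 3;                          // smallest LOD, or an impostor
}
```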
As an aside, it’s important to know what kind of shaders you’ll use with a given mesh. Additional UV channels can increase the memory cost of a mesh substantially. The VRAM cost of a mesh is the number of unique combinations of shader, position, normal, vertex color, and UV coordinates in the mesh, not just the vertex count you see in Maya or Max! So a shader that expects a second UV channel will require a more expensive mesh than a single-UV-channel shader.
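If you want to sanity-check the real GPU vertex count, you can count unique attribute tuples the way an exporter does. A rough sketch – the attribute layout here is an assumption, and your pipeline’s will differ:

```cpp
#include <array>
#include <cstddef>
#include <set>
#include <tuple>
#include <vector>

// The GPU needs one vertex per unique (position, normal, uv, ...) combination,
// so a hard edge or a UV seam duplicates the position. Example: a cube has
// 8 positions in Maya but 24 GPU vertices, because each corner sits on three
// faces with three different normals.
struct Corner {
    std::array<float, 3> position;
    std::array<float, 3> normal;
    std::array<float, 2> uv0;  // a second UV channel would widen this tuple
    bool operator<(const Corner& o) const {
        return std::tie(position, normal, uv0) < std::tie(o.position, o.normal, o.uv0);
    }
};

// One entry per triangle corner; the set collapses shared vertices.
std::size_t gpuVertexCount(const std::vector<Corner>& cornersPerTriangle) {
    return std::set<Corner>(cornersPerTriangle.begin(), cornersPerTriangle.end()).size();
}
```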
In lots of modern engines another problem is management overhead. The same number of models and textures may take up more or less memory depending on how componentized the game assets are: a single million-poly mesh is cheaper than a million individual one-triangle objects, because the CPU-side representation of each mesh usually comes with a bunch of overhead – components for in-game behaviors, metadata, and whatnot – that can add up to a lot of memory. Unreal and Unity are both particularly annoying in this regard, since their ‘everything is a component’ model encourages this kind of memory fragmentation.
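A back-of-envelope illustration of how that overhead piles up – the per-object figure is an assumed placeholder, so measure your own engine before trusting it:

```cpp
#include <cstdio>

// Rough cost of splitting one big mesh into many tiny objects.
// 1 KB per object is an assumed placeholder for transform + component
// + metadata overhead; real engines vary a lot, so measure yours.
int main() {
    const double perObjectOverheadKB = 1.0;
    const long   objectCount         = 100000;  // e.g. every pebble its own actor
    const double overheadMB = perObjectOverheadKB * objectCount / 1024.0;
    std::printf("CPU-side overhead alone: ~%.0f MB for %ld objects\n",
                overheadMB, objectCount);  // ~98 MB before any vertex data
    return 0;
}
```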
Drawcalls are not exactly a fixed-resource bucket either: the nature of the calls matters. Older hardware (Xbox One era) carried a good deal of per-call overhead, so combining drawcalls made a lot of sense. Modern console hardware is less scared of drawcalls, but even there it depends. Low-level APIs (Metal, Vulkan, DirectX 12) can blast out a lot of cheap draw calls, but they have to be managed carefully by a good graphics programmer. Higher-level APIs like DX11 have a lower ceiling but don’t require as much careful attention. “Mid four digits” is a vague ballpark for drawcalls in most console engines, but it can go higher depending on the kind of game.
It also matters how much use you make of instancing – a well-designed instancing scheme can make one drawcall render hundreds or thousands of instances in one go, while an overly complicated asset might consume several calls by itself if it’s broken up across several shaders.
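For concreteness, this is roughly what ‘one drawcall, many instances’ looks like in plain OpenGL (buffer and shader setup omitted – a sketch, not a full renderer):

```cpp
#include <GL/glew.h>  // or your GL loader of choice

// One draw call submits every rock in the pile. Per-instance data
// (a model matrix, a color tint, etc.) lives in its own vertex buffer,
// stepped once per instance via glVertexAttribDivisor during setup.
void drawRockPile(GLuint vao, GLsizei indexCount, GLsizei rockCount) {
    glBindVertexArray(vao);  // mesh + per-instance attributes, set up elsewhere
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, rockCount);  // one call, rockCount rocks
}
```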
And drawcalls are also spent on things other than rendering meshes. Many games today use deferred renderers, where you write into a set of buffers and then combine them all in a separate pixel shader to get the final composition. This solves several problems – it usually means there’s no fixed upper limit on the number of lights you can use, for example – but it imposes others: you lose a bunch of memory to big, fat render targets (screen size × 16 or 24 or 32 bits per pixel), and performance scales down fast as screen size goes up. Full-screen post effects like bloom or motion blur are also potentially very expensive in both memory and perf – it’s really common to do these in half- or quarter-resolution buffers to make them cheaper. It’s extremely important to get some idea of the fixed cost of full-screen passes on your higher-resolution targets – a single full-screen drawcall can cost as much as hundreds of simple geometry draws. On a slow-memory platform like an Xbox One, a full-screen pass might be 0.66 ms, or 2.5% of the total frame budget.
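It’s worth doing the render-target arithmetic up front. A minimal sketch, assuming a typical four-target G-buffer layout rather than any specific engine’s:

```cpp
#include <cstdio>

// Memory for a typical deferred G-buffer at a given resolution.
// The layout is an assumed example: albedo, normals, material params, and HDR
// light accumulation (4 bytes/px each), plus a 4-byte depth/stencil target.
int main() {
    const long width = 3840, height = 2160;  // 4K output
    const long pixels = width * height;
    const long colorTargets = 4, bytesPerPixel = 4;

    double gbufferMB = double(pixels) * colorTargets * bytesPerPixel / (1024.0 * 1024.0);
    double depthMB   = double(pixels) * 4 / (1024.0 * 1024.0);
    // Half-resolution post buffer (e.g. bloom): quarter the pixel count.
    double halfResMB = double(pixels) / 4 * bytesPerPixel / (1024.0 * 1024.0);

    std::printf("G-buffer: %.0f MB, depth: %.0f MB, half-res post: %.0f MB\n",
                gbufferMB, depthMB, halfResMB);  // ~127 + 32 + 8 MB at 4K
    return 0;
}
```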
All of which is a long-winded way of saying “the right answer depends on your problem set”. You could design a look that optimizes for, say, lots of geometry and lower texture resolution. This leverages how good the hardware is at simply not drawing pixels that are depth-rejected, and also gets you cheap drawcalls by using lots of instances instead of more complex assets: imagine a rock pile rendered as a heap of cheap rock instances. On the other hand, you could go with a more conventional approach of larger assets with LODs (to cut down on overdraw) and texture streaming (to maximize visible resolution) – this burns a bunch of memory and gives away some render performance, but allows for more sophisticated texture and shader effects. A big open-world game has to worry about long vistas full of stuff, and probably wants to invest in some kind of pre-baked impostors for distant views (another memory-for-perf tradeoff), where a dungeon crawler might save memory by omitting geometry LODs entirely. On PC you can throw a lot of cheap geometry with alpha-blended effects at a problem; on mobile you usually have to be very careful with transparency. You can use displacement maps and geometry shaders to save geometry memory for high-res assets on high-end platforms, but at the cost of render perf. One mistake a lot of people still make is recycling older PC graphics strategies on mobile – but on mobile the problem is more likely to be overdraw than pushing vertices around.
TLDR: There’s no substitute for trying things and measuring – every platform has its own quirks. Design a render pipeline that suits the game you’re making, test it to see how well your assumptions actually hold up, and keep track of your memory and perf as new assets go in, so you discover the hidden surprises early.