Planet Tech Art
Last update: February 24, 2017 07:00 PM
February 23, 2017

2017 CGMonks cgmToolbox How-To

Josh Burton at CGMonks has created a new set of how-to videos covering the updates and use of their cgmToolbox tools. These are a great set of tools that combine many separately developed tools from other TDs with custom-written ones from Josh and his team. After install, of course, the one video to watch if […]

The post 2017 CGMonks cgmToolbox How-To appeared first on Rigging Dojo.

by Rigging Dojo at February 23, 2017 08:20 PM


February 20, 2017

New Red9 Website!

Well, we finally launched our new website, and as part of that we have a new blog, which will become our main point of reference from now on. This blog will still be active, but mainly just for more random mutterings.

The new site also has a ton of detail regarding the ProPack toolset, overviews of the key features, and projects that we've been involved in.

red9consultancy.com

thanks

Mark

by Mark Jackson (noreply@blogger.com) at February 20, 2017 06:09 PM


February 16, 2017

Unity Protocol Buffers

Time for another “ah, crap – better put something up here before heading to GDC”-post. And since I did some re-tinkering with the integration of protobuf in Behave late last year, some sharing on that topic seemed in order.

So yes, stuff is happening with Behave – no abandonment has taken place. On the contrary, I have been working on several long-running feature branches. I’ll get around to merging some of those in and doing another release. More on that soonish.

Anywho, last I checked, protobuf-net was still a more complete solution than the younger protobuf-for-C# one, so I stick with protobuf-net and can therefore only talk about that here. However, it looks like the latter is seeing more rapid development, so I’ll probably review my choice when next I need to tinker with the integration.

More interestingly, some of the initial people behind protobuf have since gone and built the new and shiny Cap’n Proto. This looks to be even more powerful than its predecessor, but at time of writing it is still not as mature or implementation-rich as protobuf. Critically for the context of this post, there is no C# implementation yet. Cool stuff, though – worth keeping an eye on.

Overview

So what is the Protocol Buffers project? Succinctly, it is a compact binary serialisation format with very fast cross-platform and -language implementations. Super useful when you need to quickly move some data between memory and some other location – a file / a network peer / whatever.

It was first designed, implemented, and maintained by Google for communications between network peers on their internal network. Since then, it has been implemented in a number of different languages – including the protobuf-net implementation, built and maintained by Marc Gravell.

Serialisation use in Behave has gone through a couple of distinct phases:

  1. .net binary serialiser for asset.
  2. protobuf-net for asset.
  3. protobuf-net for asset and remote debugger protocol.
  4. JSON for asset, protobuf-net for remote debugger protocol.

Additionally, I’ve used protobuf-net for runtime saving and build pipelines on various client projects.

Aside from speed and compression, protobuf-net vs. the .net binary serialiser was also an escape from the frustrating lack of support for versioning, or even simple structural refactors of serialised types. The switch to JSON for the .behave asset files of course came when I decided that merge-ability could be a fun thing to support.

On top of this general migration, I also went through a couple of different protobuf-net integrations – as my use cases and requirements changed. It just so happens that this gave me full coverage of the three approaches supported by protobuf-net, so no need for extra research before writing this post. Win!

Integration

For serialisation to work, you need a serialiser and a schema. Protobuf-net offers a few ways for you to provide those:

  1. [Attribute] markup with runtime generated serialiser.
  2. [Attribute] markup with pre-generated serialiser.
  3. .proto file description with pre-generated serialiser.

This list is in order of simplicity, which also happens to be the order in which I switched through them in the Behave integration.

The first option is very .net-esque, and protobuf-net is indeed compatible both with its own serialisation attributes and with the more general-purpose .net serialisation attributes. However, since your interest here is in a Unity context, I’m sure you have already spotted the problem with this approach.

Given that my initial use for protobuf-net was just for asset serialisation (which is editor-time only), I had no problem with the serialisation solution relying on JIT compilation in order to construct the serialiser at runtime. However as soon as I expanded my use case to include the runtime debugger, relying on JIT would mean not supporting the debugger on AOT platforms like iOS and consoles. Further, as Unity continues transitioning to their IL2CPP solution, you’re looking at a future where most will want to do AOT on all platforms.
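
To make option one concrete, here’s a minimal sketch – type and member names are invented for illustration, not taken from Behave:

    using System.IO;
    using ProtoBuf;

    // Option 1: mark up the type with protobuf-net attributes.
    [ProtoContract]
    public class TreeAsset
    {
        // Field numbers – not names – identify members on the wire,
        // which is what makes renames and versioning painless.
        [ProtoMember(1)] public string Name;
        [ProtoMember(2)] public int Version;
    }

    public static class TreeAssetIO
    {
        // The serialiser is generated at runtime on first use – fine in
        // the editor, but it relies on JIT, hence the AOT problem above.
        public static byte[] Save(TreeAsset asset)
        {
            using (var stream = new MemoryStream())
            {
                Serializer.Serialize(stream, asset);
                return stream.ToArray();
            }
        }

        public static TreeAsset Load(byte[] bytes)
        {
            using (var stream = new MemoryStream(bytes))
            {
                return Serializer.Deserialize<TreeAsset>(stream);
            }
        }
    }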

So when introducing the remote debugger (the previous debugger implementation was 100% in-editor), I started pre-generating the serialiser. This entails feeding your compiled assembly with [Attribute] markup to the protobuf-net precompile tool, which in turn generates an assembly containing the serialiser type. For an example of how I used to do that, here’s a snippet of perl.
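
The perl aside, the workflow it drives is roughly as sketched below – assuming the output assembly was given the serialiser type name BehaveSerializer (both names here are just whatever you pass to the tool), and reusing the hypothetical TreeAsset type from before:

    // Pre-generation step (outside Unity), roughly:
    //   precompile MyData.dll -o:BehaveSerializer.dll -t:BehaveSerializer
    //
    // The generated type derives from ProtoBuf.Meta.TypeModel,
    // so no JIT compilation is needed at runtime:
    using System.IO;
    using ProtoBuf.Meta;

    public static class DebuggerIO
    {
        static readonly TypeModel model = new BehaveSerializer();

        public static void Write(Stream stream, TreeAsset asset)
        {
            model.Serialize(stream, asset);
        }

        public static TreeAsset Read(Stream stream)
        {
            return (TreeAsset)model.Deserialize(stream, null, typeof(TreeAsset));
        }
    }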

Extraction

Everything works! Win, right? Well… as I was expanding debugger support to non-C# targets and doing a general code cleanup, I got increasingly annoyed by having serialisation implementation detail in my general data type code. So I started looking at how protobuf-net supports the standard protobuf .proto format for schema definition.

As things stand, it takes a bit of work – specifically this work – to get going, but once there it is solid. Instead of using the precompile tool, you need to build and run the protogen tool from the protobuf-net repository. If you’re on Windows, things may just work straight out of the gate, but you may want to see what is in my patch anyway.

So how does this work? Well, the input is no longer [attribute]-marked assemblies, but instead a .proto definition file. You can find great detail on that in the general protobuf documentation, but you may want to consult the protobuf-net docs for implementation specifics/limits. Also, the output is not an assembly, but simply a C# file.
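
For a flavour of the format, here is a small hypothetical .proto in the spirit of a debugger protocol – message and field names are invented here, not the actual Behave protocol:

    package BehaveDebugger;

    // One status update from a running behaviour tree agent.
    message AgentUpdate
    {
        required int32 agent_id = 1;
        repeated NodeStatus nodes = 2;
    }

    message NodeStatus
    {
        enum Result
        {
            Running = 0;
            Success = 1;
            Failure = 2;
        }

        required int32 node_id = 1;
        required Result result = 2;
    }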

Keep in mind that protogen relies on the protoc tool from the general Google protobuf tools. I fetched this from homebrew as the “protobuf” package.

Again I have a snippet of perl to illustrate use, as well as an example .proto file – in this case the Behave debugger protocol definition.

Using this approach, I now have a nicely separated codebase and no duplication of schema definition between the C# debugger runtime and others. One trick remains though…

Object pooling

A mildly active Behave debugger session involves a lot of messages. I have very little interest in blowing up the garbage collector by constantly instantiating new messages and leaving them to get collected. So how do we integrate an object pool setup with the generated protobuf-net serialiser?

While not exactly as pretty as I would like, my answer is to make use of the partialness of the generated types, adding a static pool and a static constructor to hook it up.
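
In sketch form – assuming a generated partial message type named AgentUpdate, as in the hypothetical .proto above – the pool half of the type could look like this (how the static constructor wires the pool into message creation depends on your serialiser setup, so it is omitted here):

    using System.Collections.Generic;

    // The other half of this type is the protogen-generated one.
    public partial class AgentUpdate
    {
        static readonly Stack<AgentUpdate> pool = new Stack<AgentUpdate>();

        // Reuse an instance instead of allocating one per message.
        public static AgentUpdate Acquire()
        {
            return pool.Count > 0 ? pool.Pop() : new AgentUpdate();
        }

        // Reset and return an instance once it has been handled.
        public void Release()
        {
            // ... clear fields here ...
            pool.Push(this);
        }
    }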

And that’s pretty much what I’ve got. If I missed something or you have related questions, feel free to ping me and I’ll try to update this post when I can.

at February 16, 2017 11:00 PM


February 15, 2017

Font Rendering is Getting Interesting

Caveat: I know nothing about font rendering! But looking at the internets, it feels like things are getting interesting. I had exactly the same outsider impression watching some discussions unfold between Yann Collet, Fabian Giesen and Charles Bloom a few years ago – and out of that came rANS/tANS/FSE, and Oodle and Zstandard. Things were super exciting in compression world! My guess is that about “right now” things are getting exciting in font rendering world too.

Ye Olde CPU Font Rasterization

A tried and true method of rendering fonts is doing rasterization on the CPU, caching the result (of glyphs, glyph sequences, full words, or at some other granularity) into bitmaps or textures, and then rendering them somewhere on the screen.
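
In data structure terms, such a cache is little more than a map from glyph-at-size to a region of an atlas texture. A bare-bones sketch in C# (all names hypothetical):

    using System.Collections.Generic;

    struct AtlasRegion { public int X, Y, Width, Height; }

    // Hypothetical glyph cache: rasterize on first use, reuse afterwards.
    class GlyphCache
    {
        struct Key { public int Codepoint; public int PixelSize; }

        readonly Dictionary<Key, AtlasRegion> entries =
            new Dictionary<Key, AtlasRegion>();

        public AtlasRegion Get(int codepoint, int pixelSize)
        {
            var key = new Key { Codepoint = codepoint, PixelSize = pixelSize };
            AtlasRegion region;
            if (!entries.TryGetValue(key, out region))
            {
                // Expensive path: rasterize (e.g. via FreeType or
                // stb_truetype) and pack the bitmap into the atlas.
                region = RasterizeIntoAtlas(codepoint, pixelSize);
                entries[key] = region;
            }
            return region;
        }

        AtlasRegion RasterizeIntoAtlas(int codepoint, int pixelSize)
        {
            // Left out here: actual rasterization and atlas packing.
            return new AtlasRegion();
        }
    }

The critiques below – many sizes, large sizes, many glyphs – are all about that map and its atlas growing without bound.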

The FreeType library for font parsing and rasterization has existed since “forever”, as have operating system specific ways of rasterizing glyphs into bitmaps. Some parts of the hinting process were patented, leading to “fonts on Linux look bad” impressions in the old days (my understanding is that these patents all expired around 2010, so it’s all good now). And subpixel optimized rendering happened at some point too, which slightly complicates the whole thing. There’s a good overview of all this in the 2007 article Texts Rasterization Exposures by Maxim Shemanarev.

In addition to FreeType, these font libraries are worth looking into:

  • stb_truetype.h – a single file C library by Sean Barrett. Super easy to integrate! An article on how the innards of the rasterizer work is here.
  • font-rs – a fast font renderer by Raph Levien, written in Rust \o/, with an article describing some aspects of it. Not sure how “production ready” it is though.

But at the core the whole idea is still rasterizing glyphs into bitmaps at a specific point size and caching the result somehow.

Caching rasterized glyphs into bitmaps works well enough. If you don’t do a lot of different font sizes. Or very large font sizes. Or large amounts of glyphs (as happens in many non-Latin-like languages) coupled with different/large font sizes.

One bitmap for varying sizes? Signed distance fields!

A 2007 paper from Chris Green, Improved Alpha-Tested Magnification for Vector Textures and Special Effects, introduced the game development world to the concept of “signed distance field textures for vector-like stuffs”.

The paper was mostly about solving the “signs and markings are hard in games” problem, and the idea is pretty clever. Instead of storing a rasterized shape in a texture, store a special texture where each pixel represents the distance to the closest shape edge. When rendering with that texture, a pixel shader can do a simple alpha discard, or more complex treatments of the distance value to get anti-aliasing, outlines, etc. The SDF texture can end up really small, and still be able to decently represent high resolution line art. Nice!
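
The “more complex treatment” for anti-aliasing is typically just a smoothstep around the 0.5 iso-level of the sampled distance. Here’s the core of that, written as plain C# rather than shader code for illustration:

    // Shader-style SDF edge treatment, sketched in C#.
    public static class SdfShading
    {
        // 'distance' is the sampled SDF value in 0..1, with 0.5 at the
        // glyph edge. 'smoothing' is roughly half a pixel's worth of
        // distance in texture space.
        public static float Alpha(float distance, float smoothing)
        {
            // smoothstep(0.5 - smoothing, 0.5 + smoothing, distance)
            float t = (distance - (0.5f - smoothing)) / (2f * smoothing);
            t = t < 0f ? 0f : (t > 1f ? 1f : t);
            return t * t * (3f - 2f * t);
        }
    }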

Then of course people realized that hey, the same approach could work for font rendering too! Suddenly, rendering smooth glyphs at super large font sizes does not mean “I just used up all my (V)RAM for the cached textures”; the cached SDFs of the glyphs can remain fairly small, while providing nice edges at large sizes.

Of course the SDF approach is not without some downsides:

  • Computing the SDF is not trivially cheap. While for most western languages you could pre-cache all possible glyphs off-line into an SDF texture atlas, for other languages that’s not practical due to the sheer amount of possible glyphs.
  • A simple SDF has artifacts near more complex intersections or corners, since it only stores a single distance to the closest edge. Look at the letter A here, with a 32x32 SDF texture – the outer corners are not sharp, and the inner corners have artifacts.
  • SDF does not quite work at very small font sizes, for a similar reason. There it’s probably better to just rasterize the glyph into a regular bitmap.

Anyway, SDFs are a nice idea. For some example implementations, you could look at libgdx or TextMeshPro.

The original paper hinted at the idea of storing multiple distances to solve the SDF sharp corners problem, and a recent implementation of that idea is the “multi-channel distance field” by Viktor Chlumský, which seems to be pretty nice: msdfgen. See the associated thesis too. Here’s the letter A as an MSDF, at an even smaller size than before – the corners are sharp now!
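
The decode side of the multi-channel trick is tiny: sample the three channels and take their median as the distance value. The median-of-three below is the standard formulation from the msdfgen readme, transcribed to C# for illustration:

    using System;

    public static class Msdf
    {
        // The median of the three channel distances recovers a signed
        // distance that stays exact at corners.
        public static float Median(float r, float g, float b)
        {
            return Math.Max(Math.Min(r, g), Math.Min(Math.Max(r, g), b));
        }
    }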

That is pretty good. I guess the “tiny font sizes” and “cost of computing the (M)SDF” can still be problems though.

Fonts directly on the GPU?

One obvious question is, “why do this caching into bitmaps at all? can’t the GPU just render the glyphs directly?” The question is good. The answer is not necessarily simple though ;)

GPUs are not ideally suited for doing vector rendering. They are mostly rasterizers, mostly deal with triangles, etc etc. Even something simple like “draw thick lines” is pretty hard (great post on that – Drawing Lines is Hard). For more involved “vector / curve rendering”, take a look at a random sampling of resources:

That stuff is not easy! But of course that did not stop people from trying. Good!

Vector Textures

Here’s one approach, GPU text rendering with vector textures by Will Dobbie – it divides the glyph area into rectangles, stores which curves intersect each one, and evaluates coverage from said curves in a pixel shader.

Pretty neat! However, it seems that it does not solve the “very small font sizes” problem (aliasing), has a limit on glyph complexity (the number of curve segments per cell), and has some robustness issues.

Glyphy

Another one is Glyphy, by Behdad Esfahbod (بهداد اسفهبد). There are video and slides of the talk about it. It seems to approximate Bézier curves with circular arcs, put them into textures, store indices of some closest arcs in a grid, and evaluate the distance to them in a pixel shader. Kind of a blend between the SDF approach and the vector textures approach. It also seems to suffer from robustness issues in some cases though.

Pathfinder

A new one is Pathfinder, a Rust (again!) library by Patrick Walton. There’s a nice overview of it in this blog post.

This looks promising!

The downsides, from a quick look, are dependence on GPU features that some platforms (mobile…) might not have – tessellation / geometry shaders / compute shaders (not a problem on PC) – plus memory for the coverage buffer, and geometry complexity that depends on the font curve complexity.

Hints at future on twitterverse

From the game developer/middleware space, it looks like Sean Barrett and Eric Lengyel are independently working on some sort of GPU-powered font/glyph rasterization approaches, as seen in their tweets (Sean’s and Eric’s).

Can’t wait to see what they are cooking!

Did I say this is all very exciting? It totally is. Here’s to clever new approaches to font rendering happening in 2017!



at February 15, 2017 07:02 PM