Sprites and UI
While thinking about the graphics pipeline, most of us probably envision a 3D mesh from a program like Maya being rendered on screen by a game engine. However, artists and designers may also need simple 2D sprites on screen for UI, or even for making a completely 2D game!
Sprites are simpler objects to render, so we can make several optimizations for them. One might think to use a standard quad mesh with vertices and indices, but we can do better than that! Since every sprite is essentially a 2D plane with a texture on it, we can store a single representation of a quad and reuse it for every sprite. And since this is a basic quad, we can use even less memory by drawing it as a triangle strip.
Sprite Vertex Input Layout
To operate on a sprite vertex we need two pieces of information: its position and its texture coordinates. For the default sprite vertices that we'll use for all sprites, we can set the positions and texture coordinates to their minimum and maximum values and then scale them to what we need in the vertex shader. We can also represent them with the smallest possible data types: int8_t and uint8_t. We use signed integers for position because positions can be negative. Thus the final size of a sprite vertex is two int8_t's and two uint8_t's.
With dependent and independent draw calls already being sorted, how should sprites be accounted for? Sprites are always drawn on top of everything else, which makes this easy: we simply draw sprites after drawing everything else! As for what data we encode in a sprite render command, we mainly need the material ID to know what texture to use. For constant data, we'll store the scale to draw the sprite at and the local-to-world transformation matrix to use in the vertex shader.
Since all our sprites are using the same sprite quad, initially they are all drawn in the same position and scale on the screen. This is defined by the values we fill our default sprite vertices with. Users of our sprites need to specify four values to change the position and scale of this initial sprite.
Positions of a sprite are centered on screen. So if a user specified (0, 0) as the desired position of a sprite, it would be drawn with the center of the texture in the middle of the screen. (-1, 0) would place the texture center at the far left of the screen, and (0, -1) would place the texture center at the bottom of the screen.
Along with position, users specify scales between 0 and 1. A scale of (1, 1) would leave the sprite covering the entire screen while a scale of (0.5, 0.5) would draw the sprite half as big as the screen by scaling around the center.
With the above values specified, we can submit a scale and a local to world transform matrix to our per draw call constant buffer. We'll simply submit the scale values specified by the user above and calculate the local to world transform using the position specified by the user.
In our vertex shader, we can use the scale values from our constant buffer to create a scaling matrix to scale our vertex positions from our sprite quad. Then, we can use the local to world matrix to transform the vertex position into world space. Afterwards, our fragment shader simply calculates an albedo since we usually don't want our sprites to be affected by directional and ambient lights.
If we correctly implement all of the above, we should get sprites rendered on screen like the image below!
Notice how the sprites are on top of everything else in the scene, almost like they are stickers on the camera.
We can also make sure our sprite rendering is optimized by taking a look at our Graphics Analyzer timeline.
Notice how we set the topology to TRIANGLESTRIP and no longer call DrawIndexed. This saves us memory, as we don't need to store an index buffer just to draw a quad, which would be six uint16_t's!