Wednesday, August 19, 2015

Simplified lighting for Unity 2D using a screen shader

Dark Deathmatch is a special mode in Dolguth that places the fighters in a completely dark arena, with each mech/pilot as the only light source. In terms of gameplay this raises the challenge, as the player has a very limited view of the stage (and its traps).


Here Durgai can clearly spot Sadness Deployer's position, but cannot know whether a trap or a hole is between them.

How to accomplish this?

The first approach we followed was a very orthodox one: we iteratively changed the material shader of each sprite (including tiles and props) from Sprite/Default to Sprite/Diffuse, and we added a point light to each mech/pilot object (ensuring the light was correctly aligned in a 2D world).

This worked very well, with a very appealing result. Unfortunately, we hit two problems:

  • Performance got dramatically worse (especially in multilayered arenas)
  • Correctly lighting props (objects allowing the pilots to hide from the mechs' view) was completely flaky

The second problem was not a big one: we decided to simply remove props from night arenas. But performance, in a game like Dolguth where speed is the main factor, cannot be sacrificed.

Enter Unity post-processing/image effects.

This technique gives the developer access to the whole resulting framebuffer (you basically get the bitmap of each frame of the game) and lets you modify it with a shader. Effects like global fog, grayscale decolouring or sepia tone can be easily accomplished with a so-called "screen shader" (a shader accessing the whole screen).

In our case we want our shader to draw each pixel as black (dark areas) except for the virtual circle (with configurable radius) around each mech/pilot.

The first step was instructing our main camera to apply an effect on the screen:

using UnityEngine;
using System.Collections;

public class NightBehaviour : MonoBehaviour {

    public Material material; 

   
    void OnRenderImage (RenderTexture source, RenderTexture destination) {
        Graphics.Blit (source, destination, material);
    }
}

The OnRenderImage() function is called for each frame of the game; the source argument is the content of the current framebuffer, while destination is where you are expected to draw the result of your work (read: this is what will effectively be rendered).

Graphics.Blit() lets you draw a source framebuffer to a destination one, applying a material (which obviously has a shader) to it.

The next step is defining our Night shader:

Shader "Sprites/Night" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag

            #include "UnityCG.cginc"

            uniform sampler2D _MainTex;

            float4 frag(v2f_img i) : COLOR {
                return float4(0, 0, 0, 1);
            }
            ENDCG
        }
    }
}

This is a pretty dumb one: it turns every pixel of the framebuffer black. All is night in our game.
Note that _MainTex is automatically passed to the shader as the content of the framebuffer.

The goal now is to disable black for the virtual circle around each mech. To do this, we need to pass the position of each mech (read: light) to the shader. We use a Vector4 to represent a light: x and y are the light position, z is the radius of the virtual circle (the light distance) and w is a flag: 0 means the light is off, 1 means it is on.

using UnityEngine;
using System.Collections;

public class NightBehaviour : MonoBehaviour {

    public Material material;


   

    // global reference, this is where all game data is stored
    Global gb;
    
    void OnRenderImage (RenderTexture source, RenderTexture destination) {

        // iterate each mech in the arena
        foreach (MechBehaviour mb in gb.arena.mechs_in_arena) {
            if (mb == null)
                continue;
            if (mb.dead) {
                // ensure light is turned off for dead players
                material.SetVector("_MechLight" + mb.player_id, Vector4.zero);
                continue;
            }

            // get the position of the mech
            Vector2 pos = mb.transform.position;

            // if the mech is destroyed use the pilot position
            if (mb.destroyed)
                pos = mb.pilot.transform.position;
           
            // transform object world position to viewport position
            Vector4 vl = Camera.main.WorldToViewportPoint(pos);

            // set light distance (the radius of the virtual circle, hardcoded)
            vl.z = 0.15f;

            // enable the light
            vl.w = 1;
            material.SetVector("_MechLight" + mb.player_id, vl);
        }
        Graphics.Blit (source, destination, material);
    }

    void Awake() {
        gb = Object.FindObjectOfType<Global> ();
    }
}


The code comments should help in understanding it, but basically we are just passing our light vectors to the shader.

Now we can complete our shader to take our virtual lights into account:


Shader "Sprites/Night" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _AspectRatio ("Screen Aspect Ratio", Float) = 0
        _MechLight1 ("Mech1 Light", Vector) = (0,0,0,0)
        _MechLight2 ("Mech2 Light", Vector) = (0,0,0,0)
        _MechLight3 ("Mech3 Light", Vector) = (0,0,0,0)
        _MechLight4 ("Mech4 Light", Vector) = (0,0,0,0)
        _MechLight5 ("Mech5 Light", Vector) = (0,0,0,0)
        _MechLight6 ("Mech6 Light", Vector) = (0,0,0,0)
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag

            #include "UnityCG.cginc"

            uniform sampler2D _MainTex;
            
            uniform float4 _MechLight1;
            uniform float4 _MechLight2;
            uniform float4 _MechLight3;
            uniform float4 _MechLight4;
            uniform float4 _MechLight5;
            uniform float4 _MechLight6;
            
            uniform float _AspectRatio;
            

            float4 frag(v2f_img i) : COLOR {
                float4 c = tex2D(_MainTex, i.uv);
                float2 ratio = float2(1, 1/_AspectRatio);
                float delta = 0;
                
                float ray = length((_MechLight1.xy - i.uv.xy) * ratio);
                delta += smoothstep(_MechLight1.z, 0, ray) * _MechLight1.w;
                
                ray = length((_MechLight2.xy - i.uv.xy) * ratio);
                delta += smoothstep(_MechLight2.z, 0, ray) * _MechLight2.w;
                
                ray = length((_MechLight3.xy - i.uv.xy) * ratio);
                delta += smoothstep(_MechLight3.z, 0, ray) * _MechLight3.w;
                
                ray = length((_MechLight4.xy - i.uv.xy) * ratio);
                delta += smoothstep(_MechLight4.z, 0, ray) * _MechLight4.w;
                
                ray = length((_MechLight5.xy - i.uv.xy) * ratio);
                delta += smoothstep(_MechLight5.z, 0, ray) * _MechLight5.w;
                
                ray = length((_MechLight6.xy - i.uv.xy) * ratio);
                delta += smoothstep(_MechLight6.z, 0, ray) * _MechLight6.w;
                
                c.rgb *= delta;
                return c;
            }
            ENDCG
        }
    }
}

If you are not into shader programming, you may find the last snippet horrible: DRY is ignored, some variables are one letter... well, welcome to shader programming :)

When writing shaders you always need to remember that 'if' statements are expensive, and when you want to pass a lot of data, uploading textures (used as generic buffers in the shader) is the only viable approach. But uploading is expensive too, so if your data changes constantly, re-uploading the texture at every frame would be overkill.

The mech/pilot position potentially changes at every frame, so having a uniform (a variable you can pass from your game to the shader) for each light is the best approach for performance. We know there can be at most 6 mechs/pilots in the game, so we define 6 light uniforms.

Do you remember that 'if' statements are bad in shaders? Well, this is why smoothstep() is used.

In this context smoothstep() returns a float interpolated between 0 and 1 based on the 'ray' value (the distance from the light center to the pixel). If 'ray' is 0 the result will be 1 (full lighting for the pixel), while if it is higher than 'z' (the light distance) it will be clamped to 0 (black pixel). Values between 0 and 'z' are smoothly interpolated between 1 and 0. The result is multiplied by the 'w' of the vector, which you can see as a flag marking turned-off lights (they have 'w' set to 0).

A nice thing about this approach is that the farther a pixel is from the light, the darker it will be. This ensures a smooth light circle around the mech.
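The reversed-edge trick is easier to see outside the shader. Here is a minimal Python sketch (for illustration only, not code from the game) of the Hermite interpolation formula behind HLSL's smoothstep(), called with the same (z, 0) edge order used in the shader above; the 0.15 radius matches the value hardcoded in the C# script:

```python
def smoothstep(edge0, edge1, x):
    # Hermite interpolation, same formula as HLSL/GLSL smoothstep()
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

z = 0.15  # light radius, as hardcoded in the C# script above

# reversed edges (z, 0): intensity is 1 at the light center...
print(smoothstep(z, 0.0, 0.0))    # -> 1.0
# ...0 at (and beyond) the edge of the circle...
print(smoothstep(z, 0.0, z))      # -> 0.0
print(smoothstep(z, 0.0, 0.3))    # -> 0.0
# ...and falls off smoothly in between
print(smoothstep(z, 0.0, z / 2))  # -> 0.5
```

Because the curve is clamped at both ends, no 'if' is needed to separate lit pixels from dark ones.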

What is 'ratio' ?

Our game is 16:9, so computing the distance between two points in UV space would produce an ellipse instead of a circle. Although we could have hardcoded that value in the shader (well, it is always 16.0f/9.0f), we prefer to pass it from the game engine, as in the future we may want to support other aspect ratios.

A last note about the delta value: summing it allows our shader to generate more intense lighting when multiple mechs/pilots are close together.
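Putting the pieces together, the per-pixel logic of the fragment shader can be mirrored in a few lines of Python (again just an illustrative sketch, not the shader itself; light tuples follow the (x, y, z, w) layout described earlier):

```python
import math

def smoothstep(edge0, edge1, x):
    # same Hermite formula as HLSL smoothstep()
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def brightness(uv, lights, aspect_ratio=16.0/9.0):
    # uv: pixel position in viewport space (0..1 on both axes)
    # lights: (x, y, radius, on/off flag) tuples, as set by the C# script
    delta = 0.0
    for x, y, z, w in lights:
        # scale the vertical distance so the lit area is a circle on screen,
        # mirroring ratio = float2(1, 1/_AspectRatio) in the shader
        dx = x - uv[0]
        dy = (y - uv[1]) / aspect_ratio
        ray = math.hypot(dx, dy)
        delta += smoothstep(z, 0.0, ray) * w
    return delta

lights = [
    (0.5, 0.5, 0.15, 1),  # mech at the center of the screen, light on
    (0.2, 0.5, 0.15, 0),  # dead mech elsewhere, light off
]
# a pixel at the light center is fully lit; the off light adds nothing
print(brightness((0.5, 0.5), lights))  # -> 1.0
# overlapping lights sum up, making nearby mechs brighter
print(brightness((0.5, 0.5), [(0.5, 0.5, 0.15, 1)] * 2))  # -> 2.0
```

The final delta is then multiplied against the framebuffer color, exactly as `c.rgb *= delta` does in the shader.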

Friday, August 14, 2015

Reusing your Unity 2D in-game animations for UI/Canvas

Dolguth has finally been approved on Steam Greenlight, so we started pushing things forward to begin a Kickstarter campaign and to release the first beta demo.

In the last sprint we started redesigning the choose-your-mech menu. Now we have a fully responsive interface that dynamically adapts itself based on the number of players (and obviously on the resolution). The screenshot below shows a 4-player interface (Dolguth currently supports up to 6 players).


While testing the new interface, we realized that animating the pilot (and the mech after the player's confirmation) would be a great addition.

Well, we already have our "idle" animations, so it should be easy to show them in our Unity UI/Canvas-based scene. Wrong.

We suddenly realized that there is no fast way to run an animation controller over an Image UI object. Our first (horrible) choice was to assign to each player interface (which is a prefab) a list of frames taken from the "idle" animation clip. Then, in the Update() cycle, we dynamically changed the sprite attribute of the Image UI component based on the fps. It worked. But come on, it sucks.

We want to reuse our animation clips without compromises!

Well, after some head-smashing we came up with a really funny (and working) solution.

In our 2D environment, each animation clip keyframe modifies the sprite field of the SpriteRenderer component attached to the object. Accessing this field during the animation (in the Update() phase) is extremely easy:

Sprite image = GetComponent<SpriteRenderer> ().sprite;

If we add an Animator component to the object holding the UI/Image and we assign a controller to it (one of those already defined in our game), it will not work, as the controller expects to modify the sprite field of a SpriteRenderer.

The trick here is adding a disabled SpriteRenderer component to the object to allow the AnimatorController to do its magic.

Finally, in the Update() function we simply read the value of the sprite field of the SpriteRenderer (remember: it is disabled, so it is not visible) and assign it to the sprite field of the UI/Image component. We can do all of this at runtime:

using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class ImageCanvasAnimator : MonoBehaviour
{
    
    // set the controller you want to use in the inspector
    public RuntimeAnimatorController controller;
    
    // the UI/Image component
    Image imageCanvas;
    // the fake SpriteRenderer
    SpriteRenderer fakeRenderer;
    // the Animator
    Animator animator;
    
    
    void Start ()
    {
        imageCanvas = GetComponent<Image>();
        fakeRenderer = gameObject.AddComponent<SpriteRenderer>();
        // prevent the SpriteRenderer from being rendered
        fakeRenderer.enabled = false;
        animator = gameObject.AddComponent<Animator>();
        
        // set the controller
        animator.runtimeAnimatorController = controller;
    }
    
    void Update () {
        // if a controller is running, set the sprite
        if (animator.runtimeAnimatorController) {
            imageCanvas.sprite = fakeRenderer.sprite;
        }
    }
    
}


Easy and effective :)

Now when you want to animate a UI/Image component, just add the previous script to the object and assign an AnimatorController to the controller field in the inspector. The default state of the controller will be executed.

Some notes:


  • You can obviously set AnimatorController parameters in your code, just get a reference to the Animator
  • You can create animation clips with a curve that directly modifies the UI/Image component instead of the SpriteRenderer. This is obviously way more elegant and efficient than the solution we used, but who wants to define the same animation twice?