Unity provides a handful of built-in global variables for your shaders. One of them is the object-to-world matrix: Unity expects the shader to have a float4x4 unity_ObjectToWorld variable to store it. When we work in plain HLSL rather than relying on Unity’s include files, we have to declare that variable ourselves, then use it to convert positions to world space in the vertex function and pass the result on in its output.
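
As a rough illustration (the helper function name is made up, and the manual declaration is only needed when Unity’s shader libraries, which already declare the matrix, aren’t included):

```
// Only needed when Unity's include files aren't used; Unity fills the matrix
// in for every object before drawing it.
float4x4 unity_ObjectToWorld;

// Transform an object space position into world space.
float4 ObjectToWorld(float4 positionOS)
{
    return mul(unity_ObjectToWorld, positionOS);
}
```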

Summary

I made a tutorial about planar mapping previously. The biggest disadvantage of that technique is that it only works from one direction and breaks when the surface we’re drawing isn’t oriented towards the direction we’re mapping from (up, in the previous example). A way to improve automatic UV generation is to do the mapping three times from different directions and blend between the three results.

This tutorial will build upon the planar mapping shader, which is an unlit shader, but you can use the technique with many shaders, including surface shaders.

Calculate Projection Planes

To generate three different sets of UV coordinates, we start by changing the way we get the UV coordinates: instead of returning transformed UV coordinates from the vertex shader, we return the world position and then generate the UV coordinates in the fragment shader.

Open up the .shader file in your favourite editor (or, if you have Visual Studio set up to work with Unity, double-click the shader asset). In the vertex function we multiply the object space position with the unity_ObjectToWorld matrix and pass the resulting world position on to the fragment function.
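
A minimal sketch of what that can look like in an unlit shader, assuming the usual UnityCG.cginc include (struct and variable names are illustrative, not necessarily the ones from the original shader):

```
struct appdata {
    float4 vertex : POSITION;
};

struct v2f {
    float4 position : SV_POSITION;
    float3 worldPos : TEXCOORD0;
};

v2f vert(appdata v) {
    v2f o;
    // clip space position for rasterization
    o.position = UnityObjectToClipPos(v.vertex);
    // world space position, used to generate the triplanar UVs in the fragment shader
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}
```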

We use the TRANSFORM_TEX macro to apply the tiling and offset of the texture like we’re used to. In my shader I use the xy and zy planes so the world up axis is mapped to the y axis of the texture for both of them, not rotating them in relation to each other, but you can play around with the way you use those values (the way the top UVs are mapped is arbitrary).

After obtaining the correct coordinates, we read the texture at each of them, add the three colors and divide the result by 3 (adding three colors without dividing by the number of samples would just make the result very bright).
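
Put together, the fragment function could look roughly like this (assuming the usual _MainTex sampler and _MainTex_ST declarations; which world axes feed which projection follows the choice described above):

```
fixed4 frag(v2f i) : SV_TARGET {
    // generate UVs by projecting the world position along each axis,
    // applying the material's tiling and offset via TRANSFORM_TEX
    float2 uv_front = TRANSFORM_TEX(i.worldPos.xy, _MainTex); // projection along Z
    float2 uv_side  = TRANSFORM_TEX(i.worldPos.zy, _MainTex); // projection along X
    float2 uv_top   = TRANSFORM_TEX(i.worldPos.xz, _MainTex); // projection along Y

    // sample the texture once per projection
    fixed4 col_front = tex2D(_MainTex, uv_front);
    fixed4 col_side  = tex2D(_MainTex, uv_side);
    fixed4 col_top   = tex2D(_MainTex, uv_top);

    // average the three samples (for now)
    return (col_front + col_side + col_top) / 3;
}
```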

Normals

Having done that, our material looks really weird. That’s because we display the average of the three projections. To fix that we have to show different projections based on the direction the surface is facing. The facing direction of the surface is also called the “normal”, and it’s saved in the object files just like the positions of the vertices.

So what we do is add the normals to our input struct and convert them to world space normals in the vertex shader (because our projection is in world space; if we used an object space projection we’d keep the normals in object space).

For the conversion of the normal from object space to world space, we have to multiply it with the inverse transposed matrix. It’s not important to understand exactly how that works (matrix multiplication is complicated), but I’d like to explain why we can’t just multiply it with the object-to-world matrix like we do with the position. The normals are orthogonal to the surface, so when we scale the surface only along the X axis and not the Y axis, the surface gets steeper; but when we do the same to our normal, it also points more upwards than previously and isn’t orthogonal to the surface anymore. Instead we have to make the normal flatter the steeper the surface gets, and the inverse transpose matrix does that for us. We also convert the matrix to a 3x3 matrix, discarding the parts that would move the normals (we don’t want to move the normals because they represent directions instead of positions).

The way we apply the inverse transpose of the object-to-world matrix is to multiply the normal with the world-to-object matrix, with the normal as the first argument (previously we multiplied the matrix with the vector; the order is important here).

To check our normals, we can now just return them in our fragment shader and see the different axes as colors.
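
The relevant additions could look something like this (a sketch following the same naming as before):

```
// appdata gains the object space normal, v2f passes the world space normal along
struct appdata {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
};

struct v2f {
    float4 position : SV_POSITION;
    float3 worldPos : TEXCOORD0;
    float3 normal : TEXCOORD1;
};

v2f vert(appdata v) {
    v2f o;
    o.position = UnityObjectToClipPos(v.vertex);
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    // multiplying the normal on the left by the world-to-object matrix applies the
    // inverse transpose of the object-to-world matrix; the float3x3 cast drops translation
    o.normal = normalize(mul(v.normal, (float3x3)unity_WorldToObject));
    return o;
}

// temporary debug view: show the world space normal as a color
fixed4 frag(v2f i) : SV_TARGET {
    return fixed4(i.normal, 1);
}
```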

To convert the normals to weights for the different projections we start by taking the absolute value of the normal. That’s because the normals go in the positive and negative directions. That’s also why in our debug view the “backside” of our object, where the axes go towards the negative direction, is black.

After that we can multiply the different projections with the weights, making each one only appear on the side we’re projecting it onto, not the others where the texture looks stretched. We multiply the projection from the xy plane by the z weight because it doesn’t stretch along that axis, and we do a similar thing for the other axes.

We also remove the division by 3 because we don’t add them all together anymore.

That’s way better already, but now we have the same problem that made us add the division by 3 in the first place: the components of the normal can add up to more than 1, making the texture appear brighter than it should be. We can fix that by dividing the weights by the sum of their components, forcing them to add up to 1.
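
In code, the weighting could look roughly like this (continuing the earlier sketch; the three sampled colors are the ones from the previous fragment function):

```
fixed4 frag(v2f i) : SV_TARGET {
    // ... triplanar UVs and the three texture samples as before ...

    // take the absolute value so negative-facing sides get positive weights
    float3 weights = abs(i.normal);
    // make the weights sum to 1 so the result keeps its brightness
    weights = weights / (weights.x + weights.y + weights.z);

    // weight each projection by the axis it doesn't stretch along
    fixed4 col = col_front * weights.z +
                 col_side  * weights.x +
                 col_top   * weights.y;
    return col;
}
```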

And with that we’re back to the expected brightness.

The last thing we add to this shader is the possibility to make the different directions more distinct, because right now the area where they blend into each other is still pretty big, making the colors look messy. To achieve that we add a new property for the sharpness of the blending. Then, before making the weights sum up to one, we raise the weights to the power of the sharpness. Because we only operate in the range from 0 to 1, that will lower the low values if the sharpness is high, but won’t change the high values by as much. We make the property of the type Range to get a nice slider in the UI of the shader.
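
As a sketch (the property name and range limits here are arbitrary choices):

```
// in the Properties block:
// _Sharpness ("Blend Sharpness", Range(1, 64)) = 1

float _Sharpness;

// in the fragment shader, before normalizing the weights:
float3 weights = abs(i.normal);
// raising to a power > 1 pushes small weights towards 0, sharpening the blend
weights = pow(weights, _Sharpness);
weights = weights / (weights.x + weights.y + weights.z);
```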

Triplanar Mapping still isn’t perfect, it needs tiling textures to work, it breaks at surfaces that are exactly 45° and it’s obviously more expensive than a single texture sample (though not by that much).

You can use it in surface shaders for albedo, specular, etc. maps, but it doesn’t work perfectly for normalmaps without some changes I won’t go into here.

I hope this tutorial helped you understand how to do triplanar texture mapping in Unity.

You can also find the source code for this shader here: https://github.com/ronja-tutorials/ShaderTutorials/blob/master/Assets/010_Triplanar_Mapping/triplanar_mapping.shader

I hope you enjoyed my tutorial ✨. If you want to support me further feel free to follow me on twitter, throw me a one-time donation via ko-fi or support me on patreon (I try to put updates also there, but I fail most of the time, bear with me 💖).

In this post I’ll describe an alternative to a vignette post processing effect. I haven’t seen this technique described before so I’m calling it an imprint for lack of a better term.

Note I’m using assets from Unity’s Adventure Sample Game for demonstration purposes.
All the gameplay screenshots in this post are using Adventure Sample Game assets, not from the unannounced Double Loop game.

Many games need to focus the player’s attention on specific content on screen. In my first blog post I used a vignette post processing effect to darken the game world a short distance away from the player’s character when the player was low on health. This works really well in situations where there’s a single area to focus on, but it becomes less effective at directing the player the larger the vignette becomes (since the player won’t know exactly where in the larger area to look).

For example, if you want to highlight both the player and a powerup they should get, it may require a full-screen vignette. In a local co-op game you might have additional player characters that need their own highlights. While it’s certainly possible to extend Unity’s vignette shader to support multiple locations on-screen that should have vignettes, if you want to take into account the position, rotation, and scale of each object it’s going to start to require quite a few shader properties that need to be updated each frame.

Another challenge with using a vignette effect is trying to make the effect maintain a consistent look across different aspect ratios. If you have UI elements that anchor themselves to screen edges, you’d have to make sure that the vignette settings accounted for that possibility. On the flip side, art that’s in the world rather than in the UI will not act like that - its position will not be affected by aspect ratio unless the camera field of view changes.

It would be a lot more convenient if the post processing effect just depended on the transforms & shapes of the affected objects. They should be able to move around on screen and change shape without having to manually update the vignette about their new transform or bounds. One way to do that would be to add a shader pass to the materials used by objects we want to highlight and have them draw into a render texture instead of the camera render target.

Another option would be to add an additional material to the mesh renderer component if it’s not convenient or possible to modify the shader used by individual objects. While that might be reasonable for one or two shader types, it could start to get tedious to support characters, environment art, VFX, and UI, and it isn’t possible at all if you can’t customize the mesh renderer components in the first place.

Depending on your situation, shader replacement might also be an option if you want all characters with specific shader tags to get the effect. However, if you have multiple objects on screen that should get the effect and they all use the shader, there isn’t a great way to override properties for a subset of renderers since code applying replacement shaders doesn’t provide individualized access. It’s also not possible to rely on properties that weren’t assigned to the original material.

Ultimately the solution I ended up settling on (and there are probably other options) was to attach a MonoBehaviour to all game objects that should get an imprint and then make it the responsibility of the post processing effect to draw those objects into a render texture used as an input texture on a full screen post processing effect. It relies on the CommandBuffer passed via the context parameter of the post processing effect to manage both the render texture and the drawing requests. The sample grayscale effect in the documentation already demonstrates a very simple use of the command buffer - to draw a full screen effect using the previously rendered content combined with a material.

Since I have a whole blog post dedicated to configuring post processing and another dedicated to creating other types of effects, please read through them if you’re unsure about how to install and set up Unity’s post processing package for your cameras & scenes.

This new effect will start with the same basic setup that the Grayscale sample uses, namely:

  • ImprintEffect, based on the post processing package’s settings type, which is used to save/load the data specified in the Post Processing Profile
  • ImprintRenderer, based on the post processing package’s renderer type, which handles applying the effect to the output of the previous post processing effects (or the camera output if it’s the only effect)
  • Imprint/Show, a shader following the post processing package’s conventions, which transforms the post processing stack input (or the camera output) with the imprints and produces the final composite

In addition, there are two more pieces specific to the Imprint technique:

  • ImprintBehaviour, a MonoBehaviour attached to any object that should not be affected by screen darkening
  • Imprint/Make, another shader, used to draw each object that has an ImprintBehaviour into a render texture that is passed to the Imprint/Show shader to determine where to apply the effect

The ImprintEffect implementation is minimal - it just has a blending factor to determine the intensity of the darkness applied to any areas of the screen that aren’t being highlighted.
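
A sketch of what that could look like, following the pattern of the post processing stack’s Grayscale sample (the attribute arguments and the default value are assumptions):

```
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Settings asset stored in the Post Processing Profile; only exposes the blend factor.
[Serializable]
[PostProcess(typeof(ImprintRenderer), PostProcessEvent.AfterStack, "Custom/Imprint")]
public sealed class ImprintEffect : PostProcessEffectSettings
{
    [Range(0f, 1f), Tooltip("How dark the non-imprinted areas become.")]
    public FloatParameter blend = new FloatParameter { value = 0.5f };
}
```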

The ImprintBehaviour implementation is minimal - it adds itself to a global HashSet when the component is enabled and removes itself when disabled. It expects to have a Renderer assigned to determine the transform and bounds of the object. If your objects have multiple Renderer components you can either customize this class and the ImprintRenderer to expect that, or use one ImprintBehaviour per Renderer. The static HashSet is iterated by the ImprintRenderer to determine which objects need an imprint.
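
A minimal sketch, with the field and set names chosen for illustration:

```
using System.Collections.Generic;
using UnityEngine;

// Attach to any object that should leave an imprint (i.e. stay bright).
public class ImprintBehaviour : MonoBehaviour
{
    // Renderer used to position and scale the imprint; assign it in the inspector.
    public Renderer targetRenderer;

    // Global registry the ImprintRenderer iterates every frame.
    public static readonly HashSet<ImprintBehaviour> Active = new HashSet<ImprintBehaviour>();

    void OnEnable() { Active.Add(this); }
    void OnDisable() { Active.Remove(this); }
}
```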

The ImprintRenderer implementation is a little more involved than the Grayscale example renderer, since it needs to obtain an additional temporary screen-sized render texture and then draw each imprint into it using the Imprint/Make shader. Once that is finished, the Imprint/Show shader is used to composite the previous post processing results (or camera output) with the temporary imprint render texture, darkening areas without imprints.

I used the Sphere primitive mesh to draw imprints using DrawMesh but you could use whatever mesh you like or use DrawRenderer instead to draw the actual renderer assigned to the ImprintBehaviour if you don’t want to include any of the surroundings.

Note that it’s important that the temporary render texture is cleared to black before we draw into it since the Imprint/Show shader expects black to mean not imprinted and white to mean imprinted.
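
A rough sketch of the Render method under those assumptions - how the sphere mesh and the Imprint/Make material get created (e.g. in the renderer’s Init override) is left out here, and the texture format and property names are guesses:

```
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.PostProcessing;

public sealed class ImprintRenderer : PostProcessEffectRenderer<ImprintEffect>
{
    static readonly int ImprintTexId = Shader.PropertyToID("_ImprintTex");

    // assumed to be set up elsewhere: the sphere mesh drawn per imprint and
    // a material using the Imprint/Make shader
    Mesh imprintMesh;
    Material makeMaterial;

    public override void Render(PostProcessRenderContext context)
    {
        var cmd = context.command;

        // temporary screen-sized render texture the imprints are drawn into
        cmd.GetTemporaryRT(ImprintTexId, context.width, context.height, 0,
            FilterMode.Bilinear, RenderTextureFormat.RHalf);
        cmd.SetRenderTarget(new RenderTargetIdentifier(ImprintTexId));
        // clear to black: black = not imprinted, white = imprinted
        cmd.ClearRenderTarget(false, true, Color.black);

        // draw a sphere at every registered imprint, scaled to roughly cover its bounds
        foreach (var imprint in ImprintBehaviour.Active)
        {
            var bounds = imprint.targetRenderer.bounds;
            var matrix = Matrix4x4.TRS(bounds.center, Quaternion.identity, bounds.size);
            cmd.DrawMesh(imprintMesh, matrix, makeMaterial);
        }

        // composite: darken everything that isn't covered by an imprint
        var sheet = context.propertySheets.Get(Shader.Find("Imprint/Show"));
        sheet.properties.SetFloat("_Blend", settings.blend);
        cmd.SetGlobalTexture(ImprintTexId, new RenderTargetIdentifier(ImprintTexId));
        cmd.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);

        cmd.ReleaseTemporaryRT(ImprintTexId);
    }
}
```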

The Imprint/Make implementation is also very minimal - it just draws white into the texture wherever there is geometry from an imprint.
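
At this stage the whole shader could be as simple as this sketch:

```
Shader "Imprint/Make"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert(float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            // draw solid white wherever imprint geometry covers the screen
            fixed4 frag() : SV_Target
            {
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}
```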

The Imprint/Show implementation is also very simple - it reads an additional texture to determine the blending between the full color camera output and the darkened areas that are not highlighted. Note that I didn’t bother to add a property to specify the imprint texture since it’s just fed in by the effect, but you could do that if you want to debug it in isolation. Expect that the _Blend property value will get overwritten by ImprintRenderer too.
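
A sketch of the composite shader, again modeled on the Grayscale sample (the include path depends on how the post processing package is installed, and the exact darkening math is an assumption):

```
Shader "Imprint/Show"
{
    HLSLINCLUDE
        #include "Packages/com.unity.postprocessing/PostProcessing/Shaders/StdLib.hlsl"

        TEXTURE2D_SAMPLER2D(_MainTex, sampler_MainTex);        // previous post processing / camera output
        TEXTURE2D_SAMPLER2D(_ImprintTex, sampler_ImprintTex);  // imprint render texture (black = no imprint)
        float _Blend;                                          // set by ImprintRenderer

        float4 Frag(VaryingsDefault i) : SV_Target
        {
            float4 color = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.texcoord);
            float imprint = SAMPLE_TEXTURE2D(_ImprintTex, sampler_ImprintTex, i.texcoord).r;
            // darken the pixel by _Blend wherever there is no imprint
            float darken = lerp(1 - _Blend, 1, imprint);
            color.rgb *= darken;
            return color;
        }
    ENDHLSL

    SubShader
    {
        Cull Off ZWrite Off ZTest Always
        Pass
        {
            HLSLPROGRAM
                #pragma vertex VertDefault
                #pragma fragment Frag
            ENDHLSL
        }
    }
}
```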

Let’s take a look at the output in a sample scene. It gets the job done nicely; the screen is darkened except wherever renderers set to leave an imprint are positioned. However, I think it’s worth trying to make the imprint edges softer, like you’d get in a vignette. A standard vignette effect can do this easily when it’s done in screen space since you can calculate where the edges of the screen are in screen space. However, although the Imprint/Show shader also works in screen space, it doesn’t know where the edges of each imprint are. While we could try a shader-based multi-sample edge detection technique like the ones used for anti-aliasing, that only performs well for a small number of samples. If we want imprints to have a very soft edge it would have to sample many pixels to find the edges.

Another option might be to have the Imprint/Make shader encode the imprints into the temporary render texture as a signed distance field. If you haven’t heard of the technique before, this would mean that each pixel in the texture would contain the distance to the nearest shape. This is what TextMesh Pro uses for efficient resizable font rendering. When sampling the texture, you’d know based on the sign of the value whether you were inside or outside of the shape and based on the magnitude how far away you are from the edge of the shape. This technique would also allow you to draw any sort of screen-aligned signed distance shapes without relying on a real mesh. However, since it only works on primitive shapes that you specifically code in the shader, I set it aside.

The option I ended up going with was to use a fresnel effect where the parts of the shape facing orthogonal to the camera become darker (most fresnel effects make these brighter). There’s a lot of great literature out there already about fresnel effects in Unity so I’m not going to dwell on its usage here and just present the altered shader. As you might expect, it relies on a per-vertex normal (which thankfully is set in the primitive meshes) along with the world space camera position. Some additional calculations occur in the vertex shader to arrive at the fresnel value which is then passed to the fragment shader and used as the output color (since it was just white before there’s no point in multiplying it by 1). If you want to tweak the fresnel strength, then you can expose a property on the shader to override the 0.75 passed to the pow function and hook it up to a property exposed by the ImprintEffect settings.
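
Based on that description, the Imprint/Make vertex and fragment functions could look roughly like this - the exact formulation (apart from the 0.75 exponent mentioned above) and the variable names are my guesses:

```
struct v2f
{
    float4 position : SV_POSITION;
    float fresnel : TEXCOORD0;
};

v2f vert(float4 vertex : POSITION, float3 normal : NORMAL)
{
    v2f o;
    o.position = UnityObjectToClipPos(vertex);

    // direction from the camera to the vertex, in world space
    float3 worldPos = mul(unity_ObjectToWorld, vertex).xyz;
    float3 eyeDir = normalize(worldPos - _WorldSpaceCameraPos);
    float3 worldNormal = normalize(mul(normal, (float3x3)unity_WorldToObject));

    // surfaces facing the camera get a value near 1, silhouette edges fall off towards 0
    float d = dot(eyeDir, worldNormal);
    o.fresnel = 1 - pow(1 + d, 0.75);
    return o;
}

fixed4 frag(v2f i) : SV_Target
{
    // the fresnel value is the output: white in the middle, fading to black at the edges
    return fixed4(i.fresnel.xxx, 1);
}
```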

This ends up providing a soft edge to the shape and should work well for any meshes that have vertex normals configured as expected. It works great for a single imprint, but unfortunately the effect probably won’t look right if you have multiple overlapping imprints - you can end up with darker overlapping outlines due to the fresnel, and the impact depends on the draw order. Currently ImprintRenderer draws objects in the order that the HashSet enumerator provides them, which is effectively arbitrary. Unfortunately sorting them won’t help in all situations - especially partial overlaps. Thankfully we can use blending in the shader to handle overlaps correctly. While traditional blending techniques use Blend SrcAlpha OneMinusSrcAlpha, instead of adding the source and destination values together we really want the max of the source and destination. Fortunately, we can set BlendOp Max and Blend One One to take the maximum of the full source and destination values.
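
In the Imprint/Make pass that amounts to two render state lines:

```
Pass
{
    // keep the brightest value where imprints overlap instead of adding them up
    BlendOp Max
    Blend One One

    CGPROGRAM
    // ... vertex and fragment functions as before ...
    ENDCG
}
```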

There’s one more issue to resolve - if the camera is ever inside of an imprint, then the imprinted object still appears dark. As it turns out, this happens for two reasons:

  • Unsurprisingly, the Imprint/Make shader culls back-facing geometry and the primitive meshes just have single-sided front-facing geometry. This can be fixed by setting Cull Off in the shader.
  • The fresnel calculation uses the dot product of the eye vector and the vertex normal. When the camera is inside the geometry, these become colinear, whereas they’re normally opposing. When the vectors are opposed, their dot product approaches -1, which means the fresnel becomes 1 - pow(1 - 1) or simply 1. Colinear vectors have a dot product of 1, which means the fresnel becomes 1 - pow(1 + 1) or roughly -1. Thankfully we can treat opposing and colinear vectors the same if we subtract the absolute dot product, as shown in the sketch below.
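
The corresponding changes to the Imprint/Make pass are small (continuing the same sketch as before):

```
// in the Pass's render state: draw back faces too, so the inside of the sphere is rendered
Cull Off

// in the vertex shader: treat opposing (camera outside) and colinear (camera inside)
// vectors the same by subtracting the absolute dot product
float d = dot(eyeDir, worldNormal);
o.fresnel = 1 - pow(1 - abs(d), 0.75);
```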

Depending on the position of your camera and other objects in your scene, you may notice that objects which appear in front of imprints are also drawn bright even if they’re not themselves marked for imprinting. While this might be perfectly desirable for your situation, I preferred to keep foreground objects outside of the imprint dark. I think this is less visually jarring since it keeps your eye focused on the imprinted area not whatever happens to be in front of it. Thankfully we can sample the _CameraDepthTexture drawn when rendering the scene normally to compare the scene depth versus the imprint depth to decide if the pixel should be affected by an imprint or not. If the scene depth is less than the imprint depth (i.e. there’s an object between the camera and an imprint), then we can leave it dark.

For efficiency reasons, cameras do not draw to a depth texture by default, but you can configure them to do so. I’ve updated the ImprintRenderer to change the camera depth texture mode if necessary as follows:
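
One way that check could look at the top of the Render method (the rest of the method stays as before):

```
public override void Render(PostProcessRenderContext context)
{
    // make sure the camera renders the depth texture we want to compare against
    if ((context.camera.depthTextureMode & DepthTextureMode.Depth) == 0)
        context.camera.depthTextureMode |= DepthTextureMode.Depth;

    // ... temporary render texture, imprint drawing, and composite as before ...
}
```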

We’ll need to update the Imprint/Make shader to sample from the depth texture as well:
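
A sketch of the depth comparison added to Imprint/Make - the macros used here are the usual UnityCG helpers, but the exact structure is a guess:

```
sampler2D _CameraDepthTexture;

struct v2f
{
    float4 position : SV_POSITION;
    float fresnel : TEXCOORD0;
    float4 screenPos : TEXCOORD1; // where to sample the depth texture
    float eyeDepth : TEXCOORD2;   // this fragment's own distance from the camera
};

// added in the vertex shader:
//   o.screenPos = ComputeScreenPos(o.position);
//   o.eyeDepth = -UnityObjectToViewPos(vertex.xyz).z;

fixed4 frag(v2f i) : SV_Target
{
    // depth of whatever the camera actually rendered at this pixel
    float sceneDepth = LinearEyeDepth(
        SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));

    // if scene geometry is closer than the imprint, leave the pixel unimprinted (dark)
    if (sceneDepth < i.eyeDepth)
        return fixed4(0, 0, 0, 1);

    return fixed4(i.fresnel.xxx, 1);
}
```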

Imprints now have a soft edge, blend correctly when overlapping, and don’t cause foreground objects outside of the imprint to appear bright. There’s potentially still room for improvement beyond the scope of this post - specifically using instancing to draw imprints more efficiently which will require changes to both the ImprintRenderer and Imprint/Make shader. I decided not to pursue it because there was already enough content to discuss and my specific use case didn’t involve more than a handful of objects to highlight. I’m pretty happy with the result but let me know if you see other areas for improvement.