How to Rasterize Dynamic Effects (Attacks, Explosions) using Niagara
By Ghislain Girardot
Summary
Key takeaways
- **Fixed Pool Limits Scalability**: The system uses a fixed pool of 16 effects with a circular index, which recycles slots but risks overwriting active effects if spawning too many too quickly, making it unsuitable for hundreds of actors spawning dozens of effects. [00:50], [02:11]
- **Effects Encoded in Eight Floats**: Each effect is described by eight floats: 2D position, spawn timestamp, duration, radius, strength, direction angle in radians, and cone angle in degrees, fitting into two Vector4s for efficient GPU transfer. [00:59], [01:31]
- **LUT Stores Effects as Tiny Texture**: The lookup table is a render target sized to the number of effects wide by 2 pixels high in RGBA32 float, where a compute shader writes two RGBA pixels per effect from the float buffer, enabling easy sampling anywhere textures are supported. [08:28], [09:38]
- **Sample Effects for Masks and Displacement**: Sampling the LUT retrieves effect data to compute a spherical distance mask, a cone direction mask using trigonometry, and a time-based lifetime fade, then multiplies by strength to accumulate 2D displacement vectors for grass or particles. [10:27], [12:03]
- **LUT Sampling Costly Per Vertex**: Sampling all effects requires two texture samples plus math per effect, leading to 256 samples for 128 effects in a vertex shader, causing noticeable performance drops despite the tiny cached texture. [14:33], [17:52]
- **Rasterization Bakes Summed Displacements**: Rasterizing effects into a render target computes displacements for every pixel in a local world area centered on the character, allowing cheap single-sample access but baking the summed result without per-effect time tweaks or 3D spatialization. [16:16], [20:21]
Topics Covered
- How does a circular effect pool avoid complexity?
- Why sample effects via lookup texture over parameters?
- Can lookup textures compute dynamic displacements?
- Does rasterization sacrifice flexibility for speed?
Full Transcript
<b>Hey, what's up YouTube?
</b> <b>In this video, I'm going to showcase a system for</b> <b>creating simple effects that can be</b> <b>sampled in, say, your grass shader,</b> <b>particle systems, and more.
</b> <b>It's not groundbreaking, but it's straightforward</b> <b>and can get the job done.
</b> <b>That said, it's not really</b> <b>scalable, so don't expect too much.
</b> <b>It's a quick and dirty solution</b> <b>that can help you get started, and</b> <b>in some cases, might even be enough on its own,</b> <b>depending on what you need.
</b> <b>But if you're expecting hundreds of actors all</b> <b>spawning dozens of effects,</b> <b>this approach won't cut it.
</b> <b>All right, let's dive in.
</b> <b>So the overall idea is to create a pool of</b> <b>effects, let's say 16 of them.
</b> <b>Each effect will be described</b> <b>by the following properties.
</b> <b>Its 2D position, a timestamp representing the</b> <b>game time when the effect was spawned.
</b> <b>Its duration in seconds,</b> <b>its radius in centimeters,</b> <b>its strength expressed as a</b> <b>displacement in centimeters,</b> <b>its direction expressed as</b> <b>an angle in radians, and</b> <b>its cone angle expressed as an</b> <b>angle in degrees for convenience.
</b> <b>Now this setup is somewhat arbitrary.
</b> <b>You could structure it differently or even use a</b> <b>3D position instead of 2D,</b> <b>for example, it really depends on your needs.
</b> <b>All of these parameters add</b> <b>up to eight floats in total,</b> <b>which conveniently fit into two</b> <b>float4, or Vector4, values.
</b> <b>With this pool of effects, the idea is to use a</b> <b>circular index to spawn new effects.
</b> <b>That means each time you spawn</b> <b>an effect, you increment an index.
</b> <b>Once you reach the 16th effect, the index loops</b> <b>back to zero, recycling the first one.
</b> <b>Alternatively, you could manage a variable sized</b> <b>list that only keeps track of</b> <b>currently active effects, but</b> <b>that approach adds extra complexity.
</b> <b>For today, a fixed size pool will work just fine.
</b> <b>Of course, this comes with some limitations.
</b> <b>Spawning too many effects too quickly might cause</b> <b>you to recycle an effect</b> <b>whose lifetime hasn't yet</b> <b>expired, causing all sorts of issues.
</b> <b>It's something to keep in</b> <b>mind, but with a large enough pool,</b> <b>reasonable spawn rates, and effects that don't</b> <b>last too long, it's usually not an issue.
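The pool and circular-index logic described here can be sketched outside of Blueprint. Below is a minimal Python illustration; the class and field names are my own, and the float order follows the spawn function described later (position, timestamp, duration, radius, direction, cone angle, strength):

```python
FLOATS_PER_EFFECT = 8  # x, y, timestamp, duration, radius, direction (rad), cone angle (deg), strength

class EffectPool:
    """Fixed-size effect pool with a circular write index (illustrative names)."""
    def __init__(self, num_effects=16):
        self.num_effects = num_effects
        self.buffer = [0.0] * (num_effects * FLOATS_PER_EFFECT)
        self.index = 0

    def spawn(self, x, y, timestamp, duration, radius, direction_rad, cone_deg, strength):
        # Offset of this effect's slot: effect 2 starts at float index 8, etc.
        offset = self.index * FLOATS_PER_EFFECT
        self.buffer[offset:offset + FLOATS_PER_EFFECT] = [
            x, y, timestamp, duration, radius, direction_rad, cone_deg, strength]
        # Circular index: after the last effect, recycle slot 0.
        self.index = (self.index + 1) % self.num_effects
```

Note that spawning more effects than the pool holds before the oldest one expires silently overwrites it, which is exactly the recycling limitation mentioned above.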
</b> <b>Now the goal is to sample this</b> <b>data, this pool of effects, on the GPU.
</b> <b>And I'm going to demonstrate two possible ways to</b> <b>do that, though there are certainly more.
</b> <b>For example, you could simply use a material</b> <b>parameter collection and</b> <b>store two vector 4s per effect to list them</b> <b>all, then access each vector</b> <b>individually in a material.
</b> <b>But that approach has its limits.
</b> <b>Adding new effects to the pool becomes a pain.
</b> <b>It's tedious to manage in shaders, and it doesn't</b> <b>easily extend to other systems like Niagara.
</b> <b>Sure, you could link a Niagara parameter</b> <b>collection to your material parameter collection,</b> <b>but that's still a very</b> <b>manual and error-prone process.
</b> <b>By the way, this is somewhat similar to the</b> <b>method I used years ago to create</b> <b>one of my grass interaction systems.</b> <b>Instead, I'll show you two more flexible methods,</b> <b>how effects can be rasterized</b> <b>and "baked" into a render target,</b> <b>and how they can be stored in</b> <b>a texture as a lookup table,</b> <b>allowing you to iterate and sample them anywhere</b> <b>a texture sample is available,</b> <b>whether that's in shaders, Niagara, or elsewhere.
</b> <b>I'll demonstrate the lookup texture method first.
</b> <b>For either approach, we'll need a</b> <b>bit of logic on the CPU side first.
</b> <b>I've gone ahead and built a</b> <b>blueprint actor component.
</b> <b>The effect pool is stored as a float array.
</b> <b>As I mentioned earlier, I could have used a</b> <b>vector 4 buffer and stored</b> <b>two vector 4s per effect,</b> <b>but using a float array is</b> <b>generally less restrictive,</b> <b>especially when dealing with data structures that</b> <b>don't fit perfectly into vector 4s.
</b> <b>You do you.
</b> <b>In my case, each effect consists of eight floats,</b> <b>so the first step is to resize the array to</b> <b>accommodate the correct number of effects.
</b> <b>This happens in that init function,</b> <b>where I can specify both the</b> <b>number of effects to store in the buffer</b> <b>and the attachment actor,</b> <b>which will come into play later.
</b> <b>Once I know how many effects I need, I simply</b> <b>resize the array to hold eight floats per effect.
</b> <b>Next, since I'm going to use</b> <b>Niagara to either rasterize these effects</b> <b>or store them in a lookup texture,</b> <b>this blueprint actor component needs to reference</b> <b>a Niagara actor that owns</b> <b>a Niagara system component.
</b> <b>And so this next function is a bit</b> <b>over-engineered to handle cached references,</b> <b>but the key part is this.
</b> <b>I simply spawn a Niagara</b> <b>actor, cache its reference,</b> <b>access its Niagara</b> <b>component and cache that as well,</b> <b>and then I set the system asset.
</b> <b>After that, I provide Niagara with the number</b> <b>of effects and the number of floats per effect.
</b> <b>I also attach the Niagara</b> <b>component to the desired component.
</b> <b>In my case, this blueprint</b> <b>component is attached to my character.
</b> <b>And so I want the Niagara component to be</b> <b>attached to the character's root component.
</b> <b>That way, the Niagara system's position</b> <b>automatically reflects the character's position,</b> <b>and this becomes important</b> <b>later when rasterizing effects.
</b> <b>Anyway, next I initially disable the rasterizers</b> <b>and lookup texture</b> <b>emitters in the Niagara system,</b> <b>and then I reinitialize it just to be safe.
</b> <b>So that's the</b> <b>initialization part.
Nothing too fancy.
</b> <b>I'm simply setting up the array</b> <b>size and spawning a Niagara system</b> <b>that I can easily access</b> <b>from this blueprint component.
</b> <b>Now, from the user's perspective, once this</b> <b>component has been initialized,</b> <b>all that's left to do is call its spawn function</b> <b>and provide the necessary arguments.
</b> <b>2D world position, 2D direction,</b> <b>duration, radius, angle and strength.
</b> <b>Eight floats in total, just as expected.
</b> <b>So let's have a look at that function.
</b> <b>The first step is to compute the offset in the</b> <b>float array, where we want</b> <b>to write that new effect.
</b> <b>Since each effect uses eight</b> <b>floats, updating, say, the second effect</b> <b>means starting at the ninth</b> <b>float in the array, so index 8.
</b> <b>Next I perform a quick safety check, and then I</b> <b>simply update the array values.
</b> <b>X and Y position first, then the timestamp,</b> <b>duration, radius, direction,</b> <b>cone angle and finally strength.
</b> <b>After that, I increment the effect</b> <b>index and make sure it wraps around.
</b> <b>So when it reaches the pool</b> <b>size, it loops back to zero.
</b> <b>Then I send the updated array to Niagara and</b> <b>enable the required emitters.
</b> <b>And these emitters stay</b> <b>active only for as long as needed.
</b> <b>For example, the rasterization emitter only needs</b> <b>to run for the duration of the effect,</b> <b>while drawing the lookup</b> <b>table takes just a single frame.
</b> <b>So that emitter can be disabled on the next tick.
</b> <b>And that's pretty much it for this blueprint.
</b> <b>All it really does is</b> <b>manage a circular float buffer,</b> <b>send that buffer to Niagara,</b> <b>and activate or deactivate the</b> <b>relevant Niagara emitters when needed.
</b> <b>Thus, most of the magic happens in Niagara.
</b> <b>First, this Niagara system includes a few user</b> <b>parameters that can be set from Blueprint,</b> <b>specifically the float array named effects, which</b> <b>gets updated in Blueprint,</b> <b>as well as the number of</b> <b>effects and floats per effect.
</b> <b>It also has texture render target user parameters</b> <b>that point to, well, the actual render targets.
</b> <b>Alright, so let's start by looking at how the</b> <b>lookup table is rendered.
</b> <b>In that emitter, when the emitter spawns,</b> <b>I create a render target 2D</b> <b>parameter at the emitter level.
</b> <b>This parameter points to the</b> <b>lookup texture render target.
</b> <b>I override its format to rgba32 float,</b> <b>and set its filter mode to</b> <b>nearest, which is very important.
</b> <b>Next, I use a simple custom</b> <b>module to set the texture size.
</b> <b>In this case, I want the lookup table to be as</b> <b>wide as the number of effects,</b> <b>and since 8 floats can be stored in 2 rgba</b> <b>pixels, the height only needs to be 2.
</b> <b>So it's a tiny render target, right?
</b> <b>The rest happens in a simulation stage, which</b> <b>dispatches a compute shader once per effect.
</b> <b>Inside that custom module, I sample the float</b> <b>buffer at the appropriate index</b> <b>to access the x position, y</b> <b>position, timestamp, duration, and so on.
</b> <b>Then I simply store these values into the rows of</b> <b>the render target dedicated to each effect.
</b> <b>Since the compute shader</b> <b>is executed once per effect,</b> <b>the execution index</b> <b>corresponds to the effect index,</b> <b>and so I simply write the 8 floats into the top</b> <b>and bottom rows of the render target,</b> <b>2 sets of 4 values per effect.
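The write pattern of that simulation stage can be mimicked on the CPU. Here is an illustrative pure-Python sketch (the function name and list layout are assumptions, not the actual Niagara module):

```python
def pack_lut(buffer, num_effects):
    """Pack a flat float buffer (8 floats per effect) into a LUT layout:
    2 rows high, num_effects columns wide, 4 'RGBA' floats per pixel.
    Top pixel of a column holds floats 0-3, bottom pixel floats 4-7."""
    lut = [[None] * num_effects for _ in range(2)]
    for i in range(num_effects):
        e = buffer[i * 8:(i + 1) * 8]
        lut[0][i] = e[0:4]  # top row pixel for effect i
        lut[1][i] = e[4:8]  # bottom row pixel for effect i
    return lut
```

Sampling the two pixels of column `i` (with nearest filtering, as stressed above) recovers that effect's eight floats.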
</b> <b>And that's basically it.
</b> <b>Since my character calls the</b> <b>spawn function on left-click,</b> <b>I can keep clicking</b> <b>left-click in game to spawn effects</b> <b>and watch the lookup texture update</b> <b>in real-time, 1, 2, 3, 4, and so on.
</b> <b>You can clearly see the</b> <b>circular buffer in action,</b> <b>as it loops back to the</b> <b>first effect again and again.
</b> <b>Cool.
</b> <b>Now the final step is to sample this lookup</b> <b>texture to, so to speak, render these effects.
</b> <b>By sampling the 2 pixels in a given column, we</b> <b>can retrieve all of that effect's data.
</b> <b>From there, knowing the</b> <b>effect's world position and radius,</b> <b>we can create a sphere</b> <b>mask from any world position.
</b> <b>Next, by comparing the effect's spawn timestamp</b> <b>with the current game time,</b> <b>we can determine how much time has passed since</b> <b>this effect was spawned.
</b> <b>If that value is negative, we skip the effect.
</b> <b>Otherwise, we divide the</b> <b>elapsed time by the effect's duration</b> <b>to get a normalized</b> <b>lifetime value in the 0 to 1 range.
</b> <b>Kinda like a particle.
</b> <b>0 at birth and 1 at death.
</b> <b>If it's greater than 1,</b> <b>we skip the effect as well.
</b> <b>With that 0 to 1 lifetime value, we could sample</b> <b>another lookup texture to drive a curve,</b> <b>for example, to create a fade in</b> <b>and out with a spring-like animation.
</b> <b>Or we could simply compute</b> <b>such a curve mathematically.
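That lifetime logic is simple enough to sketch directly (the function name is mine, not from the video):

```python
def normalized_lifetime(game_time, spawn_timestamp, duration):
    """Return a 0-1 lifetime value, or None when the effect should be skipped."""
    elapsed = game_time - spawn_timestamp
    if elapsed < 0.0 or elapsed > duration:
        return None  # not yet spawned, or already expired: skip this effect
    return elapsed / duration  # 0 at birth, 1 at death, like a particle
```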
</b> <b>Next, using the effect's</b> <b>direction, stored as an angle in radians,</b> <b>we can reconstruct a 2D unit vector with basic</b> <b>trigonometry to get its world space direction.
</b> <b>That can then be compared to the direction from</b> <b>the sample's position to the effect's origin.
</b> <b>Then, with the effect's cone</b> <b>angle, we can compute a cone mask.
</b> <b>If we want a spherical effect, setting the angle</b> <b>to 360 degrees will do that.
</b> <b>Otherwise, we can narrow the</b> <b>cone for more directional effects.
</b> <b>And by combining these masks,</b> <b>distance, time and direction,</b> <b>we can compute the effects' 2D displacement at</b> <b>any given time from any world position.
</b> <b>Do that for each effect, sum the resulting</b> <b>displacements, and voila.
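Putting the three masks together, one effect's contribution could be sketched like this in Python. This is a simplified illustration, not the video's shader code: it uses a hard-edged cone and a linear time fade for brevity, where the actual implementation uses smoother fades.

```python
import math

def effect_displacement(sample_x, sample_y,
                        fx, fy, radius, direction_rad, cone_deg, strength, life):
    """2D displacement one effect applies at a sample position (illustrative)."""
    if life is None:
        return (0.0, 0.0)  # effect skipped (not spawned yet, or expired)

    # Distance mask: 1 at the effect's origin, fading to 0 at its radius.
    dx, dy = sample_x - fx, sample_y - fy
    dist = math.hypot(dx, dy)
    if dist >= radius:
        return (0.0, 0.0)
    sphere_mask = 1.0 - dist / radius

    # Cone mask: compare the push direction (origin -> sample) with the
    # effect's direction, reconstructed from its angle in radians.
    push = (dx / dist, dy / dist) if dist > 0.0 else (1.0, 0.0)
    effect_dir = (math.cos(direction_rad), math.sin(direction_rad))
    dot = push[0] * effect_dir[0] + push[1] * effect_dir[1]
    half_cone = math.radians(cone_deg) * 0.5
    cone_mask = 1.0 if math.acos(max(-1.0, min(1.0, dot))) <= half_cone else 0.0

    # Time mask: simple linear fade-out over the normalized lifetime.
    time_mask = 1.0 - life

    scale = sphere_mask * cone_mask * time_mask * strength
    return (push[0] * scale, push[1] * scale)
```

Note that a 360-degree cone makes `half_cone` equal to pi, so the cone mask always passes, giving the spherical effect described above. Summing this over every effect yields the total displacement at a sample point.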
</b> <b>That's the theory.
</b> <b>Now let's see the implementation.
</b> <b>Here's my grass shader.
</b> <b>This texture is the effect's lookup table.
</b> <b>And the number of effects</b> <b>corresponds to the texture's width.
</b> <b>The world positions at which we sample the</b> <b>effects are the grass pivots,</b> <b>converted from local to world space.
</b> <b>Baking and sampling pivots is something I've</b> <b>explained several times on this</b> <b>channel, so I won't go into it again.
</b> <b>For each effect, I compute the row's U</b> <b>coordinate used to sample the lookup table.
</b> <b>I sample the top pixel and then the bottom pixel</b> <b>to access the effect's data.
</b> <b>Next, I compute the position delta between the</b> <b>effects' origin and the grass blade's position.
</b> <b>This value is multiplied by the inverse radius,</b> <b>then inverted, 1 minus,</b> <b>to create a spherical mask.
</b> <b>1 up close and 0 at the</b> <b>effect's maximum distance.
</b> <b>The cone fade is a bit more complex.
</b> <b>It starts by converting the direction from the</b> <b>grass blade's position to the effect's origin</b> <b>into an angle in radians.
</b> <b>Then there's a bit of math to account for the</b> <b>effect's direction and cone</b> <b>angle to form the cone fade.
</b> <b>It's not particularly interesting.
</b> <b>Finally, the time fade is a simple spring</b> <b>oscillation using cosine modulated</b> <b>over time with an exponential decay.
</b> <b>I'm not too happy with this implementation since</b> <b>it doesn't properly</b> <b>respect the effect's duration.
</b> <b>The spring fadeout isn't really tied to the</b> <b>effect's duration and might still be running once</b> <b>the effect's time has ended.
</b> <b>But whatever, it's also faded in.
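A damped-cosine fade of that kind can be written as follows; the oscillation and decay constants here are arbitrary placeholders, not the video's values:

```python
import math

def spring_time_fade(life, oscillations=3.0, decay=4.0):
    """Cosine oscillation whose amplitude decays exponentially over the
    normalized lifetime. Note it never strictly reaches zero at life = 1,
    which is the duration mismatch mentioned above."""
    return math.cos(life * oscillations * 2.0 * math.pi) * math.exp(-decay * life)
```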
</b> <b>The direction from the grass blade to the</b> <b>effect's origin, multiplied by all these fades</b> <b>and the effect's strength,</b> <b>gives a 2D displacement vector</b> <b>that's accumulated across all effects.
</b> <b>The grass tips are then simply offset by that</b> <b>amount while keeping the roots fixed using a</b> <b>top-down gradient baked in UVs.</b> <b>Then, to fake a rotation effect, I use a</b> <b>spherical reprojection, which by the way gives me</b> <b>a direction I can use to</b> <b>adjust the normal if needed.
</b> <b>I covered spherical reprojection in</b> <b>detail in my stylized grass video.
</b> <b>Anyway, ta-da! The grass is moving.
</b> <b>And that's essentially why</b> <b>this approach isn't very scalable.
</b> <b>It basically involves a for-loop with quite a few</b> <b>texture samples, potentially per-vertex,</b> <b>depending on your use case,</b> <b>so there's a performance cost.
</b> <b>For around 16 effects and depending on your</b> <b>vertex count, though, it's usually fine,</b> <b>considering such a tiny look-up texture will</b> <b>likely stay fully cached.
</b> <b>It still comes with a non-negligible cost.
</b> <b>As always, when in doubt, profile on your target</b> <b>hardware and see for yourself.
</b> <b>Anyway, this can be replicated</b> <b>in Niagara, for instance.
</b> <b>The HLSL code can almost be copy-pasted, although</b> <b>the way to return a value is different in</b> <b>Niagara, as well as the texture sample function.
</b> <b>But yeah, you can sample the</b> <b>effects to, say, add a force.
</b> <b>And along with gravity, rotational drag,</b> <b>collision and whatnot, you</b> <b>can create very cool effects.
</b> <b>This effect shares many similarities with one I</b> <b>showcased in a previous video, where I used</b> <b>Niagara's rigid mesh collision interface to</b> <b>create some very cool effects.
</b> <b>Anyway, that's the first solution.
</b> <b>The second solution would be to render each</b> <b>effect, but this time for</b> <b>every pixel in a render target.
</b> <b>Each pixel would be given a world coordinate that</b> <b>fits within a certain area in world space, and</b> <b>you'd run pretty much the same code.
</b> <b>The result would be a render</b> <b>target that looks like this.
</b> <b>You'd then simply sample this texture in your</b> <b>grass shader, Niagara</b> <b>system or whatever you're using.
</b> <b>Now, before going any further, I'd like to</b> <b>highlight the potential pros</b> <b>and cons of each technique.
</b> <b>For both solutions, the</b> <b>CPU cost can be negligible.
</b> <b>Right now, that's not really the case since this</b> <b>is a blueprint implementation, but converting it</b> <b>to C++ would be fairly straightforward, and once</b> <b>that's done, the CPU cost would</b> <b>become practically negligible.
</b> <b>On the GPU side, rendering the</b> <b>lookup table itself is cheap.
</b> <b>The main cost comes from Niagara's overhead, and</b> <b>I bet converting it to a simple compute shader</b> <b>dispatched in C++ would make it even cheaper.
</b> <b>Spoiler alert, I've already implemented that, so</b> <b>stay tuned for the results.
</b> <b>What is costly is sampling all the</b> <b>effects stored in the lookup table.
</b> <b>That's two texture samples per</b> <b>effect, plus some math for each one.
</b> <b>In a particle system, that's not necessarily a</b> <b>big deal since it's done once per particle, but</b> <b>in a vertex shader, the cost becomes noticeable.
</b> <b>And it adds up fast.
</b> <b>If I set the effect count to something large, say</b> <b>128, you can see the performance</b> <b>start to drop in this particular scene.
</b> <b>Each vertex ends up performing 256 texture</b> <b>samples, and even though the lookup table is</b> <b>small and probably fully</b> <b>cached, it still has a cost.
</b> <b>The benefit of this technique, however, is that</b> <b>it opens up a lot of possibilities.
</b> <b>As I mentioned earlier, effect positions could be</b> <b>stored as 3D coordinates in the lookup table,</b> <b>allowing you to compute sphere masks in full 3D</b> <b>space with the z component taken into account,</b> <b>instead of only using 2D plane coordinates.
</b> <b>That could be really useful for properly</b> <b>spatializing each effect.
</b> <b>Also, since the effect's time is computed on</b> <b>the fly, you can do interesting things, like</b> <b>slightly offset the time along the grass height.
</b> <b>This creates a subtle delay in the animation,</b> <b>which looks pretty cool.
</b> <b>Another thing to keep in mind is that outputting</b> <b>velocities is extremely important nowadays, with</b> <b>all the temporal effects like TAA and others.
</b> <b>And this is done by computing the world position</b> <b>offset from the previous frame.
</b> <b>And when you're using any time-based effect, this</b> <b>is automatically computed for you.
</b> <b>Time is internally replaced with time minus delta</b> <b>time to compute the previous frame's world</b> <b>position offset.
It's all taken care of.
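Conceptually, the previous-frame offset is the same function evaluated at time minus delta time; a sketch with illustrative names:

```python
def frame_velocity(world_position_offset, t, delta_time):
    """Difference between this frame's and the previous frame's world
    position offset, which is what feeds the velocity buffer for TAA."""
    return world_position_offset(t) - world_position_offset(t - delta_time)
```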
</b> <b>The memory footprint is also tiny. A 16x2 texture</b> <b>with 32-bit floats is practically nothing.
</b> <b>That's the lookup table method.
</b> <b>Now let's talk about</b> <b>rasterizing the effects to a 2D texture.
</b> <b>The rasterization pass itself is fairly cheap,</b> <b>and even with a large number of effects, it</b> <b>wouldn't cost much, assuming the render target</b> <b>size remains reasonable.
</b> <b>Sampling the rasterized effects in the grass</b> <b>shader or in Niagara</b> <b>would also be extremely cheap.
</b> <b>You'd just convert world coordinates to unit</b> <b>space and do a single texture sample.
</b> <b>However, what's baked into the texture is baked.
</b> <b>For instance, adding a time delay along the grass</b> <b>height is no longer possible, at least not</b> <b>directly from this baked texture.
</b> <b>Once it's baked, the effects time can't be</b> <b>modified, and you can only</b> <b>sample the sum of all displacements.
</b> <b>That being said, there are solutions.
You could</b> <b>rasterize the effects to another 2D texture with</b> <b>a bit of time delay, but it comes at a cost,</b> <b>extra memory footprint and so on.
</b> <b>Moreover, outputting velocities becomes a bit</b> <b>tricky.
Since this isn't a time-based offset</b> <b>anymore, there's no live time variable, right?
</b> <b>Computing the previous frame's world position</b> <b>offset for the velocity buffer can</b> <b>only be done using a double buffer.
</b> <b>That means you'd rasterize the effects into this</b> <b>render target, but before doing so, copy it to</b> <b>another render target and use</b> <b>the previous frame switch node.
</b> <b>So you end up with two textures and two texture</b> <b>samples.
It's still fairly cheap, but the memory</b> <b>footprint is significantly higher than that of</b> <b>the lookup texture method.
</b> <b>So, as usual, there are pros and cons.
Choosing</b> <b>the right method depends on your use case,</b> <b>expectations, memory</b> <b>budget, performance and whatnot.
</b> <b>Finally, using the lookup table method allows you</b> <b>to sample effects both up close and at far</b> <b>distances, whereas rasterizing</b> <b>effects is spatially limited.
</b> <b>Alright, let's take a look at</b> <b>the rasterizer implementation.
</b> <b>We're going to define an area in world space of</b> <b>a given size, centered on the</b> <b>desired "capture location".
</b> <b>Since the Niagara emitter is attached to the</b> <b>character, I can simply use its location as a</b> <b>capture location because I want to rasterize</b> <b>effects around the character.
</b> <b>The size of this area is arbitrary and depends on</b> <b>several factors.
How far the effects need to be</b> <b>visible, the render target resolution, the</b> <b>required texel density and so on.
</b> <b>Let's quickly go back to the blueprint for a</b> <b>moment.
Remember, the effects buffer is sent to</b> <b>Niagara and the appropriate rasterizer emitters</b> <b>are enabled or disabled as needed.
</b> <b>One thing I didn't show is that the area scale is</b> <b>also sent to Niagara and updated in a material</b> <b>parameter collection when it's set.
</b> <b>The same goes for the rasterizer location, which</b> <b>is updated in a material parameter collection</b> <b>every tick using a timer.
</b> <b>In Niagara, I have a first emitter that</b> <b>rasterizes the effects within a defined area</b> <b>around that location. I refer to this as the</b> <b>"local space rasterizer".
</b> <b>I use two 16-bit rgba render</b> <b>targets to create a double buffer.
</b> <b>In the first simulation stage, whatever is stored</b> <b>in the render target is copied to the previous</b> <b>render target to keep track of the effects</b> <b>rasterized during the previous frame.
</b> <b>This step is necessary to output velocities, as I</b> <b>explained just a moment ago.
</b> <b>Next, in the second simulation stage, I get the</b> <b>emitter's world position (the character's</b> <b>position) and extract its 2D components.
</b> <b>Since this simulation stage iterates on the</b> <b>render target, the execution</b> <b>index can be converted to UVs.</b> <b>Then I convert these UVs</b> <b>to world space coordinates.
</b> <b>First, the center of the texture, 0.5, has to be</b> <b>the origin, 0, hence the minus 0.5.
</b> <b>Then UVs are scaled to be converted to</b> <b>centimeters and centered</b> <b>on the character's location.
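That UV-to-world mapping, and its inverse used when sampling the result back in a shader, is just a recenter and a scale. A sketch, assuming a square capture area of `area_size` centimeters (names are mine):

```python
def uv_to_world(u, v, center_x, center_y, area_size):
    # Shift so 0.5 (the texture center) maps to the capture location,
    # then scale normalized UVs up to centimeters.
    return ((u - 0.5) * area_size + center_x,
            (v - 0.5) * area_size + center_y)

def world_to_uv(x, y, center_x, center_y, area_size):
    # Inverse mapping, used to sample the rasterized texture.
    return ((x - center_x) / area_size + 0.5,
            (y - center_y) / area_size + 0.5)
```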
</b> <b>From there, I use essentially the same code I</b> <b>used in the grass shader.
</b> <b>Now, earlier I sampled the</b> <b>lookup table to get the effect data.
</b> <b>But here I access the float buffer directly to</b> <b>retrieve each piece of data one by one.
</b> <b>XY position, timestamp, duration, and so on.
</b> <b>The rest is identical.
</b> <b>There's the distance fade, cone fade, time fade,</b> <b>and the accumulated</b> <b>displacement vector is exactly the same.
</b> <b>That said, it's also faded out towards the edges</b> <b>of the render target to prevent the effect from</b> <b>disappearing abruptly in the distance at the</b> <b>boundary of the capture area.
</b> <b>That's output to the render</b> <b>target to produce this result.
</b> <b>It's moving because I'm actually moving my</b> <b>character around, so the effects change position</b> <b>relative to my character, which remains at the</b> <b>center of the render target.
</b> <b>It can then be sampled in the grass shader using</b> <b>the XY world position converted to UVs using the</b> <b>capture location and scale.
</b> <b>It's best to use the pivot world position if</b> <b>possible instead of the vertex position to avoid</b> <b>distortion around the effect's origin.
</b> <b>I'm also using clamp addressing to prevent the</b> <b>texture from tiling, and since the borders are</b> <b>black because of the edge fade, no offset will be</b> <b>sampled outside the rasterized area.
</b> <b>The rest of the shader remains the same.
</b> <b>And voila, the effects are rasterized around you</b> <b>and sampled in the grass shader.
</b> <b>It's much cheaper than using the lookup table</b> <b>method, but it's also more constraining.
</b> <b>There's no time delay along the grass length, for</b> <b>instance, it's spatially limited, etc.</b> <b>Now one thing you might want to do is convert</b> <b>this render target from</b> <b>local space to world space.
</b> <b>It's a concept I already explained in a previous</b> <b>video, so I'll keep it short here, especially</b> <b>since it doesn't necessarily serve</b> <b>any purpose in this particular case.
</b> <b>Still, it's a useful trick to</b> <b>know, so I want to mention it briefly.
</b> <b>The idea is to sample the render target at the</b> <b>capture location, let it tile, and render an area</b> <b>of the same size but positioned</b> <b>with its corner at the world origin.
</b> <b>This happens in that emitter.
</b> <b>I use a 2D grid to store the intermediate result,</b> <b>which is initialized here.
</b> <b>I retrieve the emitter's world location, the</b> <b>capture location, just like before.
</b> <b>Then I compute UVs from the execution index,</b> <b>center the texture, and offset it further to</b> <b>account for where the render target was</b> <b>rasterized in unit space.
</b> <b>Next, with these UVs, I sample the render target</b> <b>and store the XYZ components in the 2D grid.
</b> <b>After that, I create the double buffer.
</b> <b>I copy whatever is stored in that render target</b> <b>into another render target, and then I update the</b> <b>main render target with</b> <b>the contents of the 2D grid.
</b> <b>This gives you the rasterized effects in local</b> <b>space, now converted to world space.
</b> <b>I'm still moving my character around, but notice</b> <b>that the effects don't move with me anymore.
</b> <b>I do spawn them at different locations, but they</b> <b>all stay fixed in place.
</b> <b>That's because they are now rasterized relative</b> <b>to the world origin, which remains fixed instead</b> <b>of relative to my character.
</b> <b>As I move, effects that are at the boundary of the</b> <b>local render target start to fade away, almost</b> <b>like a box mask is being applied around my</b> <b>character's world location,</b> <b>if that makes sense.
</b> <b>It can then be sampled in the grass shader using</b> <b>only the scale to convert world space coordinates</b> <b>to unit space since the capture</b> <b>is already cornered at 0, 0, 0.
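With the capture cornered at the world origin, the shader-side conversion reduces to a scale plus a frac for tiling; a minimal sketch:

```python
def world_to_tiled_uv(x, y, area_size):
    # Only the scale matters: the capture's corner sits at the world origin,
    # and the modulo (frac in HLSL terms) lets the texture tile across space.
    return (x / area_size) % 1.0, (y / area_size) % 1.0
```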
</b> <b>To prevent tiling, you can apply a box mask</b> <b>around the capture's location.
</b> <b>And again, the result</b> <b>isn't visually different here.
</b> <b>But this spatially stable render target can be</b> <b>iterated on additively, allowing you to keep</b> <b>accumulating the local render target to create a</b> <b>trail or something similar.
</b> <b>A handy trick to know.
</b> <b>Voila, that's pretty much it.
</b> <b>I've demonstrated two</b> <b>techniques for creating simple effects.
</b> <b>Each technique has its pros and cons, and both,</b> <b>as they are, are to be honest somewhat limited</b> <b>and not highly scalable.
</b> <b>There are plenty of techniques to achieve this</b> <b>kind of effect, but none are perfect, so it's</b> <b>always good to know your options.
</b> <b>Although, keep in mind that regardless of the</b> <b>technique used, Niagara offers a lot of</b> <b>possibilities, and what I demonstrated today can</b> <b>be implemented in many different ways.
</b> <b>For example, instead of managing a float buffer,</b> <b>you could use Niagara's data channels to describe</b> <b>effects and push them to the data channel.
</b> <b>I chose to keep it simple here by managing a</b> <b>float array that's sent to the</b> <b>Niagara system via a direct reference.
</b> <b>Oh, and by the way, data channels is a topic I</b> <b>covered in a previous</b> <b>video, link will be up there.
</b> <b>I hope this video was helpful.
</b> <b>If you enjoyed it, feel free to leave a like and</b> <b>subscribe to the channel.
</b> <b>And if you'd like to support me or access the</b> <b>project files, they're available</b> <b>in the tier 2 reward on my Patreon.
</b> <b>Thanks for watching, and a big thank you to my</b> <b>patrons for their support.
</b> <b>I'll see you in the next video,</b> <b>until then, take care of yourself.
</b> <b>Bye-bye!
</b>