Linear interpolation (*Lerp*) is a simple mathematical function used to find a value somewhere between two inputs. Specifically, we're looking at the use of *Lerp* in relation to a transform position.

There are 3 parameters in a *Lerp*. The first two, **a** & **b**, could also be called Start & End, Low & High or Min & Max, depending on context.

The final parameter, **t**, is sometimes called Time, Alpha, Delta, Blend, or Mix.

Most math libraries support **a** & **b** being floating point numbers or vectors (and therefore also colours). In all cases, **t** is a floating point number.

Some functions clamp **t** into the range of 0 -> 1, others allow any value for **t**.

**a** + (**b** - **a**) * **t**;
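The formula above translates directly into code. This is a minimal C# sketch (the names `Lerp` and `LerpUnclamped` are my own; most engines ship equivalents):

```csharp
using System;

public static class Easing
{
    // Unclamped: t may fall outside [0, 1], extrapolating beyond a or b.
    public static float LerpUnclamped(float a, float b, float t)
    {
        return a + (b - a) * t;
    }

    // Clamped: t is forced into [0, 1], so the result always stays between a and b.
    public static float Lerp(float a, float b, float t)
    {
        t = Math.Clamp(t, 0f, 1f);
        return a + (b - a) * t;
    }
}
```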

It's that simple. Whilst it's good to understand the maths behind the function, bear in mind that most engines have a maths library with the function built in.

There are many uses for *Lerp*. The one we will be focusing on in this article is:

Each frame, interpolate a vector position from point a to point b by a fraction of delta time.

Specifically we're looking at the following case:

lerp(currentPosition, targetPosition, deltaTime * strength)

By moving some fraction of the remaining distance between where we are currently, and where we would like to be, we move in progressively smaller steps towards our target, giving the appearance of smooth motion.
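The per-frame pattern can be sketched as below (a minimal C# version of `lerp(currentPosition, targetPosition, deltaTime * strength)`; the `strength` parameter name is my own):

```csharp
using System;

public static class Smoothing
{
    // Each frame, move a fraction of the *remaining* distance towards the target.
    // strength * deltaTime should stay well below 1 for stable motion.
    public static float MoveTowards(float current, float target, float strength, float deltaTime)
    {
        float t = strength * deltaTime;
        return current + (target - current) * t;
    }
}
```

Because each step covers a fixed fraction of the remaining gap, the steps shrink as the target is approached. Note that since the fraction compounds per frame, the result is not perfectly frame-rate independent unless deltaTime is fixed.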

As we can see above, the movement **eases in**. Over the course of the movement, the velocity is largest on the first frame and smallest on the last frame. This is because the distance between **a** and **b** is largest on the first frame, and *lerp* moves a fraction of the distance between **a** and **b** *per frame*.

We don't have any concept of persistent velocity here, so the moment the target moves we get a sharp change in velocity followed by a smooth ease in.

An alternative way to move from point **a** to point **b**, one which takes the current **velocity** into account, is called **Simple Harmonic Motion**. Since velocity is taken into account, an object in motion will not suddenly change path when a new target appears; it will alter course smoothly before attempting to reach the new equilibrium.

The simple harmonic motion class uses two parameters: **Angular Frequency** and **Damping Ratio**. To oversimplify:

The *angular frequency* parameter affects how fast the *current state* moves towards the *equilibrium state* (a bit like the **t** value in a lerp).

The *damping ratio* parameter affects how 'springy' the motion will be.

We can **under-damp** (**DR < 1**) to allow the movement to overshoot and then attempt to recover. Under-damped harmonic motion is a really great way to add juice to UI, character movement, and physics simulations.

We can **critically damp** (**DR == 1**) to move towards the point as fast as the AF allows, without overshooting. A critically damped spring is very similar to a lerp, but the current velocity is taken into account, leading to a smoother move.

We can **over-damp** (**DR > 1**), which ensures we never oscillate, but we might not reach equilibrium as quickly as possible. We haven't found much use for over-damped springs in our game, but there are probably some cool ones out there!
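The three damping regimes can be sketched with a simple semi-implicit Euler integrator. This is an illustrative approximation only, not the closed-form solution discussed below, and all names are my own:

```csharp
using System;

public static class Spring
{
    // One integration step of a damped spring towards a target.
    // angularFrequency: how fast we approach equilibrium.
    // dampingRatio:     < 1 under-damped (overshoots), == 1 critically damped, > 1 over-damped.
    public static void Step(ref float position, ref float velocity,
                            float target, float angularFrequency, float dampingRatio, float deltaTime)
    {
        float stiffness = angularFrequency * angularFrequency;
        float damping = 2f * dampingRatio * angularFrequency;
        float accel = stiffness * (target - position) - damping * velocity;
        velocity += accel * deltaTime;    // semi-implicit Euler: update velocity first,
        position += velocity * deltaTime; // then position using the new velocity.
    }
}
```

A naive integrator like this needs a small time step to stay stable; the closed-form approach used later is stable for any deltaTime.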

For a more in-depth explanation from someone who knows far more than us, check out Ryan Juckett.

The maths behind spring motion in a single dimension transfers easily to multiple dimensions: simply isolate each dimension as its own spring, calculate all the springs, then recombine the results into a multidimensional vector.

The code included on this page has methods for 1D, 2D, and 3D calculations. If anyone was feeling especially clever it wouldn't be difficult to add methods for 4D/Colour interpolation.

The original code comes from Ryan Juckett. We have translated his code into C# and made it **Unity** friendly. There are two stages to its use. The first is to call **CalcDampedSpringMotionParams** in order to convert **AF**/**DR** into a set of coefficients which can be used in the **Calculate** method. This can be done every frame if you're tweaking values, but should really be cached at the start of your application to keep the expensive call out of the hot path.

We've opted to use ref parameters for current state and velocity.

As demonstrated by Inconvergent, B-Splines can be used to produce beautiful generative artworks. By applying **noise over time** to B-Spline control points, and layering temporal snapshots, it's possible to build up stunning images.

Using Unity, and Dreamteck Splines, I developed a quick system to render super high resolution B-Splines.

Below, I have documented an outline of the process I used to create all the artwork on the page.

Initial State

To keep the process of generating and testing variations quick, it's important to separate blocks of settings into several easily modifiable and reusable assets, or components. Usually I handle serialization myself using JSON, but since this project doesn't require any external modification, I used Unity's native **ScriptableObjects**.

A custom **Renderer** class instructs a referenced **SplineHandler** class to set up the splines, and to update their state according to its needs. The SplineHandler has several settings objects, which are divided up as follows:

**RenderSettings.** The *Material*, *Mesh Resolution*, and *Thickness* of the splines that are generated.

**NoiseSettings.** The control curves and multipliers for amplitude and frequency.

**GenerationHandler.** The class which creates the splines in an initial configuration (*Spline Count*, *Vertex Count*).

To create the splines, I store an array of custom spline point **structs**, one for each point on each spline. Each point contains an initial position, a seed, some component references, and the two primary values:

**Alpha (Y)**. A 0-1 value representing which spline the point is on. If I create 40 splines, the 20th spline will have an alpha of 0.5.

**Delta (X)**. A 0-1 value representing how far along its spline the point is. In the linear example, the point on the left would have a delta of 0, and the point on the right a delta of 1.

The *GenerationHandler* uses the *Alpha* and *Delta* values to create the initial positions, and the *NoiseSettings* use them to seed the frequency and amplitude of a point at any given time value.
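The layout above could be sketched as follows. The names and the exact indexing convention are my own guesses (the article doesn't specify whether alpha endpoints land on 0 and 1); it assumes at least 2 splines and 2 vertices per spline:

```csharp
using System;

// One point on one spline. Alpha and Delta are the two primary values
// used to seed the noise and lay out the initial positions.
public struct SplinePoint
{
    public float Alpha; // which spline this point is on (0 = first, 1 = last)
    public float Delta; // how far along its spline the point is (0 = start, 1 = end)
    public int Seed;

    public static SplinePoint[] Generate(int splineCount, int vertexCount, int seed)
    {
        var points = new SplinePoint[splineCount * vertexCount];
        for (int s = 0; s < splineCount; s++)
        for (int v = 0; v < vertexCount; v++)
        {
            points[s * vertexCount + v] = new SplinePoint
            {
                Alpha = (float)s / (splineCount - 1),
                Delta = (float)v / (vertexCount - 1),
                Seed  = seed + s * vertexCount + v, // unique, reproducible per point
            };
        }
        return points;
    }
}
```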

Curves on curves

Once the initial splines have been created and configured, noise is applied to deform the B-Splines in interesting ways. There are two important rules when creating simulations such as these:

1. All noise must use a seed. This means that given the same input values, a simulation will compute the exact same results.
2. Simulations must be calculated in 'absolute time'. It must be possible to evaluate a simulation without information about previous frames.

The **SplineHandler** class has a single coroutine called **Evaluate(float time)**, which iterates across all the spline points and calculates a vector offset from their initial position, based on a combination of spline point properties (*alpha*, *delta*) and the simulation *time*.

Time is used to control the quality of the end results. The same simulation can be rendered in 6 frames or in 600 frames - the more frames, the softer the end result. In order to ensure consistent exposure, the contribution of each captured frame to the final image is inversely proportional to the total number of frames being rendered, as discussed below.

There are 6 curves which will control the noise. 3 curves for amplitude, 3 curves for frequency. Each curve group multiplies the 3 curves together - One against Alpha, one against Delta, and one against Time.

Multiplying the two curve groups results in two float values: one for frequency, one for amplitude. Using these two values, a **Noise** class can be evaluated, giving an XY result for a point's positional offset.
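The curve-product step could look something like this. It is a sketch under my own assumptions: curves are stood in for by `Func<float, float>` (in Unity these would be *AnimationCurves*), and the `Noise` function is a cheap placeholder for the actual Simplex implementation:

```csharp
using System;

public static class NoiseField
{
    // Multiply the three control curves (over alpha, delta, and time) into one value.
    // Called once with the amplitude curves and once with the frequency curves.
    public static float EvaluateCurves(Func<float, float> byAlpha, Func<float, float> byDelta,
                                       Func<float, float> byTime, float alpha, float delta, float time)
    {
        return byAlpha(alpha) * byDelta(delta) * byTime(time);
    }

    // Placeholder periodic noise in [-1, 1], standing in for seeded Simplex noise.
    public static float Noise(float x, float seed)
    {
        return (float)Math.Sin(x * 12.9898 + seed * 78.233);
    }

    // Offset for one spline point along one axis: amplitude scales a noise
    // sample taken at the given frequency and simulation time.
    public static float Offset(float amplitude, float frequency, float delta, float time, float seed)
    {
        return amplitude * Noise(delta * frequency + time, seed);
    }
}
```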

Once all the spline points have been moved, all the spline meshes are re-built.

Since Dreamteck didn't build an 'on-call' update method into their spline solution, I modified their system to allow me to update the splines only when I need to. This massively improved performance.

The image above shows the end result if the spline points are also rendered. Since the renders layer up over time, the image gets more and more noisy as time passes; this is the reason that simulation duration must not be coupled to render steps - even short simulations work very well, but it's important to have a high number of render steps to keep the end result smooth.

When rendering lines, there are several options. One option is to use dots; this allows us to implement a variable point density (if required), which is useful when working with high frequency noise, as the dot count would be directly proportional to the spline length.

With splines, no such luck. High frequency noise requires more spline points in order to increase the resolution of the generated mesh used for rendering. To allow for this, I also created a curve for **DensityFromAlpha**, which is less useful than *DensityFromFrequency* would be, but since the splines are only generated once, it was the easiest way to handle this. For a more complex simulation I might consider re-creating the entire spline every frame, just to allow a density directly proportional to the frequency.

The noise I used was a basic Simplex C# noise. Different noise algorithms would produce slightly different results, but the majority would be largely indistinguishable; since the CPU bottleneck comes from mesh generation, not noise computation, this was not a concern.

Hunting For Pixels

To save these pieces of art from Unity, it was important to find a way to store and save frames.

The videos on this page are rendered as 512px sequences, but the still images are rendered at 16k in order to ensure a superb print quality if printing to canvas. The process of capturing at this resolution is outlined below.

The total render time for the whole simulation was about 2 minutes; most of this time is actually spent converting RenderTextures to EXRs, although the mesh generation also takes a few ms.

By following the process several times with different noise configurations, post production can be used to layer together images and produce colourful results.

In order to ensure that end results have value ranges between 0 and 1, the alpha contribution of each frame is inversely proportional to the total number of frames being simulated. This is set up so that, in theory, if a spline did not move at all, you wouldn't see nasty alpha layering where the splines alias.
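One plausible reading of the exposure rule, sketched in C# (the additive accumulation model and all names are my assumptions):

```csharp
using System;

public static class Exposure
{
    // Each captured frame contributes 1/frameCount alpha, so a spline that
    // never moves accumulates back to exactly full opacity: no over-exposure,
    // no visible alpha layering where static splines alias.
    public static float FrameContribution(int frameCount)
    {
        return 1f / frameCount;
    }

    // Additively accumulate n identical frames; the total should be ~1.
    public static float Accumulate(int frameCount)
    {
        float total = 0f;
        for (int i = 0; i < frameCount; i++)
            total += FrameContribution(frameCount);
        return total;
    }
}
```

With this scheme, a 6-frame render and a 600-frame render both sum to the same exposure; the longer render simply spreads it over more, fainter layers.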

As well as applying noise over time, I wanted to see some bold lines come through in the end result. To do this, I evaluate the simulation time against a curve - a curve which flattens in the middle, causing everything to temporarily slow down. Since the lines move less when they are slow, they layer over each other more for a brief period, which gives a bold line in the end result.

The full render loop is as follows:

1. Set global properties on materials. Calculate current simulation progression (normalised and stepped).
2. Apply noise to all spline points by evaluating curves against point properties and simulation time.
3. Generate spline meshes from stored points. Ensure meshes are camera-facing.
4. For each tile, capture a temporary **RenderTexture**, and blit it over that tile's existing texture.
5. Either per frame, or at the end of the simulation, save all final textures as RGBAFloat EXRs.
6. Load the EXR/s, apply colour filters, curves, gradient maps, and layers.

Resolution

Unity supports native Texture2Ds of up to 4k. I knew this was not going to be high enough, so the approach I used was to render tiles. The capture step takes place within a single frame, so temporal locking was not an issue.

Using some fairly simple maths, I set up a 'tiled shot' camera system, whereby the rendered area of space (usually a 10 by 10 unit square) could be broken down into any square number of tiles (1, 4, 9, etc.). Each tile can be rendered at up to 4k, allowing for final image resolutions of up to 64k.
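The tile maths can be sketched as follows. This is my own reconstruction of the idea, not the original code; in Unity each tile's centre and half-size would drive an orthographic camera's position and size before capturing that tile's RenderTexture:

```csharp
using System;

public struct Tile
{
    public float CenterX, CenterY, Size;
}

public static class TiledCapture
{
    // Split a square region (side length regionSize, centred at the origin)
    // into tilesPerSide * tilesPerSide tiles, each rendered separately.
    public static Tile[] BuildTiles(float regionSize, int tilesPerSide)
    {
        var tiles = new Tile[tilesPerSide * tilesPerSide];
        float tileSize = regionSize / tilesPerSide;
        for (int y = 0; y < tilesPerSide; y++)
        for (int x = 0; x < tilesPerSide; x++)
        {
            tiles[y * tilesPerSide + x] = new Tile
            {
                // Offset from the region's bottom-left corner to this tile's centre.
                CenterX = -regionSize / 2f + tileSize * (x + 0.5f),
                CenterY = -regionSize / 2f + tileSize * (y + 0.5f),
                Size = tileSize,
            };
        }
        return tiles;
    }
}
```

At 4k per tile, a 2x2 grid yields 8k, a 4x4 grid 16k, and a 16x16 grid the 64k upper limit mentioned above.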

Since the layering of frames happens in Unity, Unity must store all the tiles in memory, which gives a hardware limit to the resolution. In theory, the tiles could be saved out to a temporary location and cleared from memory, or the whole piece could be rendered and saved in multiple passes. Both of these solutions would remove the hard limit, but I have yet to require such high resolutions.

Depth

Post production is easiest when working with high quality source files. Floating point precision in an image file is usually not available, but thanks to the new **EncodeToEXR** Unity method, an **RGBAFloat RenderTexture** can be saved. With this bit depth, exposure can be finely tuned in Photoshop without any risk of banding.

Working with floating point textures is actually quite trivial in Unity. Just be aware that memory usage is much higher for such high precision textures. The only other downside to using them is that MSAA will not work if you are rendering in HDR (which you should be, since you have floating point precision). Given the resolutions being captured, I did not find this to be an issue.

Quick tip: converting a **RenderTexture** to an RGBAFloat **Texture** will only work if the source RenderTexture is also RGBAFloat. Any other format will crash.

Derivative Design

Much of the work on this page is directly inspired by or derived from Anders Hoff, and also indirectly related to works by J. Tarbell. The work over on Inconvergent is excellent, and anyone interested in this field should definitely check it out.

The images below were all produced in Unity and composited in Photoshop.

From The Author

There are still lots of cool things left to try out here! I'm going to have a go at Anders' technique of using dots instead of lines, and at running noise through a doctored flow map. I might also have a look at changing spline material properties over time or based on frequency, which could give some cool 'heat map' effects.

If you'd like to know more about the process, or you'd like a high resolution of one of the pieces for print, then feel free to send me an email.