In applications using Perlin noise, it is common to add multiple octaves of noise together to create finer and finer details in the resulting noise. Here's an animation of this, taken from Wikipedia:
Typically, this is implemented by making multiple calls to the same noise function. In pseudocode, that looks like this:
initialize an output grid, all initially zero
set frequency = 1
repeat for each octave:
    for each (x, y) point in the grid:
        grid[x, y] += noise(x * frequency, y * frequency) / frequency
    frequency *= 2
Since Perlin noise (typically) uses a consistent set of hash functions at lattice points, this effectively works by starting with a high-resolution (say, 2^n × 2^n) noise pattern and adding multiple rescaled copies of that pattern to itself, clipping the end result to just a small region.
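For concreteness, here is a rough Python sketch of that octave loop. It substitutes a small hash-based value noise for "real" gradient/Perlin noise just so the example is self-contained; the names (_hash2, value_noise, fractal_noise, base_scale) are ones I've made up for illustration, not anything from a standard library.

    import math

    def _hash2(ix, iy, seed=0):
        # Deterministic pseudo-random value in [0, 1) for an integer lattice point.
        # (A stand-in for the fixed hash/permutation table a real Perlin implementation uses.)
        h = (ix * 374761393 + iy * 668265263 + seed * 144664) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return ((h ^ (h >> 16)) & 0xFFFFFFFF) / 2**32

    def _smoothstep(t):
        return t * t * (3 - 2 * t)

    def value_noise(x, y, seed=0):
        # Smooth 2D value noise in [0, 1): hash the four surrounding lattice points
        # and blend them with smoothstep-weighted bilinear interpolation.
        ix, iy = math.floor(x), math.floor(y)
        fx, fy = x - ix, y - iy
        u, v = _smoothstep(fx), _smoothstep(fy)
        n00 = _hash2(ix,     iy,     seed)
        n10 = _hash2(ix + 1, iy,     seed)
        n01 = _hash2(ix,     iy + 1, seed)
        n11 = _hash2(ix + 1, iy + 1, seed)
        top    = n00 + u * (n10 - n00)
        bottom = n01 + u * (n11 - n01)
        return top + v * (bottom - top)

    def fractal_noise(width, height, octaves=5, base_scale=1/64):
        # Classic octave summation: the SAME noise function at every octave,
        # with frequency doubled (and amplitude halved) each time.
        grid = [[0.0] * width for _ in range(height)]
        frequency = 1.0
        for _ in range(octaves):
            for y in range(height):
                for x in range(width):
                    grid[y][x] += value_noise(x * base_scale * frequency,
                                              y * base_scale * frequency) / frequency
            frequency *= 2
        return grid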
My question is whether it's necessary that we use the exact same noise function each time. Imagine, for example, that we tweak the pseudocode like this:
initialize an output grid, all initially zero
set frequency = 1
repeat for each octave:
    choose a new random noise function 'thisNoise'
    for each (x, y) point in the grid:
        grid[x, y] += thisNoise(x * frequency, y * frequency) / frequency
    frequency *= 2
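In concrete terms, that tweak might look something like the sketch below, which reuses the value_noise function from the earlier sketch and simply feeds it a freshly drawn seed per octave, so no two octaves share the same underlying lattice hash (fractal_noise_mixed is again a made-up name for illustration):

    import random

    def fractal_noise_mixed(width, height, octaves=5, base_scale=1/64, rng=None):
        # Variant: an independently seeded noise function for each octave,
        # standing in for "choose a new random noise function 'thisNoise'".
        rng = rng or random.Random(12345)
        grid = [[0.0] * width for _ in range(height)]
        frequency = 1.0
        for _ in range(octaves):
            octave_seed = rng.getrandbits(32)  # a "new random noise function"
            for y in range(height):
                for x in range(width):
                    grid[y][x] += value_noise(x * base_scale * frequency,
                                              y * base_scale * frequency,
                                              seed=octave_seed) / frequency
            frequency *= 2
        return grid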
Would the resulting noise still have the same desirable properties found in "classic" Perlin noise? Or would changing it in this way lead to something undesirable?