Reputation: 1052
I'm writing a Mandelbrot app in C# (and I'm testing with Python). I already have continuous coloring from the set out to its borders. My current problem is setting the background color of the environment. My current code for getting the color looks like this: it receives the value as a double (the logarithm step is done beforehand), checks whether the point is part of the set or not, and produces a fairly smooth gradient (from black to orange).
private Color getColor(double i)
{
    double ratio = i / (double)iterations;
    int col = (int)(i / iterations * 255);
    int alpha = 255;

    if (ratio >= 0 && ratio < 0.25)
        return Color.FromArgb(alpha, col, col / 5, 0);
    if (ratio >= 0.25 && ratio < 0.50)
        return Color.FromArgb(alpha, col, col / 4, 0);
    if (ratio >= 0.50 && ratio < 0.75)
        return Color.FromArgb(alpha, col, col / 3, 0);
    if (ratio >= 0.75 && ratio < 1)
        return Color.FromArgb(alpha, col, col / 2, 0);

    return Color.Black; // color of the set itself
}
How can I change the black environment (not the Mandelbrot set itself) to another color, like the obfuscated Python script (http://preshing.com/20110926/high-resolution-mandelbrot-in-obfuscated-python) does? I already edited the script into a nicer form, but it doesn't fit my algorithm.
EDIT: Forgot to mention, I'm not using a class for the complex math; I compute the fractal with the algorithm that's shown on Wikipedia.
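For reference, the core of my Python test version looks roughly like this (a simplified sketch rather than my exact code; the function name is just illustrative, and the smoothing is the usual log-log formula):
import math

def smooth_iteration_count(cx, cy, max_iterations):
    # Standard escape-time loop for c = cx + i*cy, starting from z = 0.
    x = y = 0.0
    iteration = 0
    while x * x + y * y <= 4.0 and iteration < max_iterations:
        x, y = x * x - y * y + cx, 2 * x * y + cy
        iteration += 1
    if iteration == max_iterations:
        return max_iterations        # never escaped: treat as inside the set
    # Logarithmic smoothing so the color bands blend into a continuous gradient.
    log_zn = math.log(x * x + y * y) / 2.0
    nu = math.log(log_zn / math.log(2)) / math.log(2)
    return iteration + 1 - nu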
Upvotes: 1
Views: 2480
Reputation: 123473
Here's a quick-and-dirty adaptation of my answer to another question about mapping a range of values to pseudocolors; this version maps them onto a whole palette of RGB colors instead of only two. Note that the in-between colors are interpolated in RGB colorspace, not HSV (which generally looks nicer in my opinion, but requires more computation; a possible HSV variant is sketched after the output below).
I'm not completely happy with it, but my time is very limited this weekend, and what I have so far at least seems to work, even if it's sub-optimal, so I'll post it for you to play around with:
def palette_pseudocolor(val, minval, maxval, palette):
    max_index = len(palette)-1
    # convert val in range minval...maxval to range 0..max_index
    v = (float(val-minval) / (maxval-minval)) * max_index
    # split result into integer and fractional parts
    i = int(v); f = v-i
    # interpolate between two colors in the palette
    c0, c1 = palette[i], palette[min(i+1, max_index)]
    d = c1[0]-c0[0], c1[1]-c0[1], c1[2]-c0[2]
    return c0[0]+f*d[0], c0[1]+f*d[1], c0[2]+f*d[2]

if __name__ == '__main__':
    numsteps = 10
    palette = [(1,0,0), (0,1,0), (0,0,1)]  # [RED, GREEN, BLUE]

    print 'val       R      G      B'
    for val in xrange(0, 100+numsteps, numsteps):
        print ('%3d -> (%.3f, %.3f, %.3f)' %
               ((val,) + palette_pseudocolor(val, 0, 100, palette)))
Output:
val       R      G      B
  0 -> (1.000, 0.000, 0.000)
 10 -> (0.800, 0.200, 0.000)
 20 -> (0.600, 0.400, 0.000)
 30 -> (0.400, 0.600, 0.000)
 40 -> (0.200, 0.800, 0.000)
 50 -> (0.000, 1.000, 0.000)
 60 -> (0.000, 0.800, 0.200)
 70 -> (0.000, 0.600, 0.400)
 80 -> (0.000, 0.400, 0.600)
 90 -> (0.000, 0.200, 0.800)
100 -> (0.000, 0.000, 1.000)
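Since I mentioned HSV above: if you want to experiment with interpolating in HSV space instead, a minimal variant using the standard colorsys module could look something like this (an untested sketch; note it interpolates the hue naively, without handling wrap-around):
import colorsys

def palette_pseudocolor_hsv(val, minval, maxval, palette):
    # Same scheme as above, but interpolate between palette entries in HSV space.
    max_index = len(palette)-1
    v = (float(val-minval) / (maxval-minval)) * max_index
    i = int(v); f = v-i
    c0 = colorsys.rgb_to_hsv(*palette[i])
    c1 = colorsys.rgb_to_hsv(*palette[min(i+1, max_index)])
    h, s, b = [c0[k] + f*(c1[k]-c0[k]) for k in range(3)]
    return colorsys.hsv_to_rgb(h, s, b)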
Here's a color gradient produced with the red, green, and blue palette in the example:
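To tie this back to your question: feed the smoothed iteration value you already compute into palette_pseudocolor, using a palette whose first entry is the background color you want (points that escape almost immediately, i.e. the area far from the set, get the first color), and keep returning black for points that never escape. A rough sketch, where color_for_point, smooth_value and the example palette are just illustrative stand-ins:
def color_for_point(smooth_value, iterations, palette):
    # Points that never escaped belong to the set itself and stay black.
    if smooth_value >= iterations:
        return (0, 0, 0)
    r, g, b = palette_pseudocolor(smooth_value, 0, iterations, palette)
    # Scale the 0..1 components up to 0..255 for Color.FromArgb-style values.
    return (int(r*255), int(g*255), int(b*255))

# Example: dark blue far from the set, ramping through orange to white near it.
background_palette = [(0.0, 0.0, 0.3), (1.0, 0.5, 0.0), (1.0, 1.0, 1.0)]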
Upvotes: 1