Reputation: 335
I normalized my screen coordinates to the range from -1 to +1. Then I trace rays starting from those normalized coordinates and evaluate a signed distance field (sdf) using the origin and direction vectors.
for (int i = 0; i < rterm::w; i++)
    for (int j = 0; j < rterm::h; j++) {
        float x = i / (rterm::w / 2.0f) - 1.0f;
        float y = j / (rterm::h / 2.0f) - 1.0f;
        glm::vec3 o = glm::vec3(x, y, -10.0f);
        glm::vec3 d = glm::vec3(0.0f, 0.0f, 1.0f);
        if (trace(o, d))
            rterm::ctx->buffer[i + j * rterm::w] = '#';
    }
The sdf is working correctly, but I must have a bug somewhere in my code. The rasterized sphere is no sphere; it looks more like this:
+---------------------------------+
| |
| |
|######### |
|################# |
|##################### |
|####################### |
|######################### |
|######################### |
|######################### |
|######################### |
|######################### |
|####################### |
|##################### |
|################# |
|######### |
| |
| |
+---------------------------------+
The sdf is just a simple sphere:
float sphere(glm::vec3 p, float r) {
    return glm::length(p) - r;
}

float get(glm::vec3 p) {
    return sphere(p, 0.8f);
}
And here is my trace implementation.
bool trace(glm::vec3 o, glm::vec3 d) {
    float depth = 1.0f;
    for (int i = 0; i < MARCH_STEPS; i++) {
        float dist = sdf::get(o + d * depth);
        if (dist < EPSILON) return true;
        depth += dist;
        if (depth >= CLIP_FAR) return false;
    }
    return false;
}
Upvotes: 0
Views: 91
Reputation: 15951
You have to take into account the aspect ratio of your image which, in general, will not be 1. What you're effectively doing at the moment is defining your image plane to be 2 units in width and 2 units in height. You then subdivide this image plane into a grid of rterm::w pixels along the x dimension and rterm::h pixels along the y dimension. Note that the region through which you're casting rays into the world is still rectangular; you just subdivide it at different intervals along the x and y axes. When you then display the image through some standard mechanism that assumes the pixels were sampled at the same, regular interval along both dimensions, the image will appear distorted.
What you typically want to do is work with the same spatial sampling rate along both x and y axis. The typical way to get there is to adjust the x or y dimension of the area through which you're casting your rays to match the aspect ratio of the resolution of the image you want to produce. The aspect ratio is commonly defined as the ratio between the x resolution and the y resolution:
float a = rterm::w * 1.0f / rterm::h;
If, for example, the image is wider than it is tall, the aspect ratio will be larger than 1. If the image is taller than it is wide, the aspect ratio will be less than 1. For a non-square image, to make the distance between pixel locations along x and y the same, we can either scale the x coordinate by a or scale the y coordinate by 1.0f / a. For example:
float x = a * (i / (rterm::w / 2.0f) - 1.0f);
float y = j / (rterm::h / 2.0f) - 1.0f;
Note: The * 1.0f in the computation of the aspect ratio above is not redundant. It's there to force the computation to be carried out in float; otherwise you'd end up with an integer division (assuming your resolution is given by values of integral type)…
Upvotes: 1