Luc

Reputation: 164

Raytracing: reflection distortion

I've started coding a raytracer, but today I ran into a problem when dealing with reflection.

First, here is an image of the problem:

[image: render showing the reflection distortion]

I only computed the object's reflected color (so no lighting is applied to the reflected object). The problem is the distortion, which I really don't understand. I checked the angle between my ray vector and the normal vector and it looks OK; the reflected vector also looks fine.

Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
  double cosAngle;
  Vector copyNormal = normal;
  Vector copyView = ray;

  copyNormal.makeUnit();                    // makeUnit() normalizes in place
  copyView.makeUnit();
  cosAngle = copyView.scale(copyNormal);    // scale() is the dot product
  return (-2.0 * cosAngle * normal + ray);  // note: uses the unnormalized inputs
}
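
For comparison, a version that keeps everything normalized would look like this sketch (assuming makeUnit() normalizes in place, scale() is the dot product, and Vector has the usual arithmetic operators):

Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
  Vector unitNormal = normal;
  Vector unitRay = ray;

  unitNormal.makeUnit();
  unitRay.makeUnit();
  // R = I - 2 (N . I) N, with both I and N unit length
  double cosAngle = unitRay.scale(unitNormal);
  return (unitRay - 2.0 * cosAngle * unitNormal);
}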

For example, when my ray hits the bottom of my sphere, I get the following values:

cos: 1

ViewVector: [185.869,-2.44308,-26.3504]

NormalVector: [185.869,-2.44308,-26.3504]

ReflectedVector: [-185.869,2.44308,26.3504]
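
These values look consistent to me: the view vector is parallel to the normal, so with unit vectors the reflection formula reduces to the negated view vector,

v - 2 * (v . n) * n = v - 2 * v = -v    (when v == n and both are unit length)

which matches the ReflectedVector above. (The printed vectors themselves are not unit length; the cos value comes from the normalized copies.)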

Below is the code that handles the reflection:

Color Rt::getReflectedColor(std::shared_ptr<SceneObj> obj, Camera camera,
                            Vector rayVec, double k, unsigned int pass) {
  if (pass > 10)
    return obj->getColor();
  if (obj->getReflectionIndex() == 0) {
    // apply effects
    return obj->getColor();
  }

  Color cuColor(obj->getColor());
  Color newColor(0);
  Math math;
  Vector view;
  Vector normal;
  Vector reflected;
  Position impact;
  std::pair<std::shared_ptr<SceneObj>, double> reflectedObj;

  normal = math.calcNormalVector(camera.pos, obj, rayVec, k, impact);
  view = Vector(impact.x, impact.y, impact.z) -
         Vector(camera.pos.x, camera.pos.y, camera.pos.z);
  reflected = math.calcReflectedVector(view, normal);
  reflectedObj = this->getClosestObj(reflected, Camera(impact));
  if (reflectedObj.second <= 0) {
    cuColor.mix(0x000000, obj->getReflectionIndex());
    return cuColor;
  }
  newColor = this->getReflectedColor(reflectedObj.first, Camera(impact),
                                     reflected, reflectedObj.second, pass + 1);
  // apply effects
  cuColor.mix(newColor, obj->getReflectionIndex());
  return newColor;
}

To calculate the normal and the reflected Vector:

Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
  double cosAngle;
  Vector copyRay = ray;

  copyRay.makeUnit();
  cosAngle = copyRay.scale(normal);  // dot product with the raw (unnormalized) normal
  return (-2.0 * cosAngle * normal + copyRay);
}

Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position& impact) const {
  const Position &objPos = obj->getPosition();
  Vector normal;

  impact.x = pos.x + k * rayVec.x;
  impact.y = pos.y + k * rayVec.y;
  impact.z = pos.z + k * rayVec.z;
  obj->calcNormal(normal, impact);
  return normal;
}
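
The impact point is just the standard parametric ray evaluation,

impact = pos + k * rayVec

where pos is the ray origin, rayVec the ray direction, and k the parameter returned by the intersection test.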

[EDIT1]

I have a new image; I removed the plane to keep only the spheres:

[image: render with the plane removed, spheres only]

As you can see, there is blue and yellow on the border of the sphere. Following neam's suggestion, I colored the sphere by applying the following formula, which maps each component of the reflected vector from [-1, 1] to [0, 254]:

  newColor.r = reflected.x * 127.0 + 127.0;
  newColor.g = reflected.y * 127.0 + 127.0;
  newColor.b = reflected.z * 127.0 + 127.0;

Below is the visual result:

[image: spheres colored using the reflected-vector components]

Ask me if you need any more information. Thanks in advance.

Upvotes: 2

Views: 740

Answers (2)

neam

Reputation: 934

There are many little things wrong with the example you provided. This may (or may not) answer your question, but since I suppose you're writing a raytracer for learning purposes (either at school or in your free time), I'll give you some hints.

  • you have two classes, Vector and Position. It may well seem like a good idea, but why not see a position as the translation vector from the origin? This would avoid some code duplication, I think (unless you've done something like using Position = Vector;). You may also want to look at libraries that do all the mathematical work for you (like glm). (And this way, you'll avoid errors like naming your dot function scale().)

  • you create a camera from the position (that is a really strange thing). Reflections don't involve any camera. In a typical raytracer, you have one camera {position + direction + fov + ...}, and for each pixel of your image/reflections/refractions/... you cast rays {origin + direction} (thus the name raytracer, not cameratracer). The Camera class is usually tied to the concept of a physical camera, with things like focal length, depth of field, aperture, chromatic aberration, ... whereas a ray is simply... a ray. (It could be a ray from the plane the output image is mapped onto to the first object, or a ray created by reflection, diffraction, scattering, ...)

  • and for the final point, I think your error may come from the Math::calcNormalVector(...) function. For a sphere at a position P and an intersection point I, the normal N is: N = normalize(I - P); (see the sketch just below, which also illustrates the previous point about rays).
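
To illustrate those last two points, here is a minimal sketch. The Ray struct and the sphereNormal function are hypothetical names, and I'm only assuming your Vector supports operator- and makeUnit():

struct Ray {
  Vector origin;     // e.g. the impact point of the previous bounce
  Vector direction;  // normalized; e.g. the reflected vector
};

// Normal of a sphere centered at `center` at an intersection point `impact`:
// this is the N = normalize(I - P) formula from above.
Vector sphereNormal(const Vector &impact, const Vector &center) {
  Vector n = impact - center;  // vector from the center to the surface point
  n.makeUnit();                // normalize in place
  return n;
}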

EDIT: it seems like your problem comes from Rt::getClosestObj. Everything else looks fine.

There are tons of websites/blogs/educational resources online about writing a simple raytracer, so for the first two points I'll let them teach you. Take a look at glm. If you can't figure out what is wrong with calcNormalVector(...), please post its code :)

Upvotes: 3

eroween

Reputation: 163

Does this work?

I assume that your ray and normal vector are already normalized.

Vector Math::reflect(const Vector &ray, const Vector &normal) const
{
    return ray - 2.0 * Math::dot(normal, ray) * normal;
}
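
For example, normalizing both inputs first (assuming your makeUnit() normalizes in place):

Vector incident = view;   // direction from the camera to the impact point
Vector n = normal;

incident.makeUnit();      // reflect() expects unit vectors
n.makeUnit();
Vector reflected = math.reflect(incident, n);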

Moreover, given the code you provided, I can't understand this call:

this->getClosestObj(reflected, Camera(obj->getPosition()));

Shouldn't it be something like this?

this->getClosestObj(reflected, Camera(impact));

Upvotes: 3
