Feature: Environment Emitter

External libraries:
No libraries beyond the Nori framework were used to complete this task.
Time spent:
Approximately 30 hours
Acknowledgements:
  • Physically Based Rendering 2010
  • Lecture slides
  • Mitsuba renderer (for comparing results)
Files:
  • include/nori/mipmap.h (new)
  • include/nori/infiniteAreaLight.h (new)
  • include/nori/dpdf2d.h (new)
  • src/mipmap.cpp (new)
  • src/infiniteAreaLight.cpp (new)
  • src/environment_direct_ems.cpp (new)
  • src/environment_direct_mis.cpp (new)
  • src/environment_path_mis.cpp (new)
  • common.h/cpp (updated)
  • scene.h/cpp (updated)

Architecture

The main idea was, of course, to keep the interface exactly the same as for all emitters implemented so far. With this approach, importance sampling comes for free in the direct_mis and path_mis integrators. The important additions are that the texture of the environment map can be loaded, and that a sampling distribution can be built from it.

To load and manage the textures, I implemented the MIPMap class with the help of the book. I could also reuse it to generate a texture from an image. The mip map is important to prevent the artifacts (aliasing, etc.) that can occur when a texture is rendered from too far away. Therefore, the texture is generated at several resolutions and the appropriate one is used during evaluation. All of this was relatively easy to implement by following the descriptions in the book.
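As a minimal sketch of how one coarser pyramid level can be built from the finer one (illustrative names, single-channel image; the actual MIPMap class works on RGB texels):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical grayscale image: row-major vector of floats plus its size.
struct Image {
    int w, h;
    std::vector<float> data;
    float at(int x, int y) const { return data[(std::size_t)y * w + x]; }
};

// Build one coarser mip level by averaging each 2x2 block of texels.
// Assumes the input resolution is a power of two, as described above.
Image downsample(const Image &fine) {
    Image coarse{fine.w / 2, fine.h / 2, {}};
    coarse.data.resize((std::size_t)coarse.w * coarse.h);
    for (int y = 0; y < coarse.h; ++y)
        for (int x = 0; x < coarse.w; ++x)
            coarse.data[(std::size_t)y * coarse.w + x] = 0.25f *
                (fine.at(2 * x,     2 * y)     + fine.at(2 * x + 1, 2 * y) +
                 fine.at(2 * x,     2 * y + 1) + fine.at(2 * x + 1, 2 * y + 1));
    return coarse;
}
```

Applying this repeatedly until a 1x1 image remains yields the full pyramid.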

To handle the 2D distribution with its conditional and marginal parts, I implemented a new distribution class that manages a discrete 1D distribution for each row plus one marginal distribution to select the row. This I accomplished with the help of the lecture slides.

Implementation

MIP-Map (include/nori/mipmap.h and src/mipmap.cpp)

This loads the image and, if needed, resamples/resizes it to a size that is a power of two. This is not strictly necessary, but it makes creating the pyramid of different resolutions easier.
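Rounding a resolution up to the next power of two can be done with a small helper like this (illustrative, not the actual Nori code):

```cpp
// Round n up to the next power of two, e.g. 5 -> 8, 8 -> 8.
// Used when deciding the resampled resolution of the input image.
int nextPow2(int n) {
    int p = 1;
    while (p < n)
        p <<= 1;  // double until we reach or exceed n
    return p;
}
```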

The resizing is done by resampling each new pixel location with different weights on the neighboring pixels of the old image. How much each old pixel contributes to a new pixel is computed with the Lanczos filter (described in the book) over a neighborhood of 4 pixels.
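A sketch of the standard Lanczos windowed-sinc kernel with support a = 2, which matches the 4-pixel neighborhood mentioned above (2 pixels on each side); the exact variant in the book may differ slightly in parameterization:

```cpp
#include <cmath>

// Lanczos kernel: L(x) = sinc(x) * sinc(x / a) for |x| < a, else 0,
// with sinc(x) = sin(pi * x) / (pi * x). Evaluates the weight an old
// pixel at distance x contributes to a resampled pixel.
float lanczos(float x, float a = 2.f) {
    x = std::abs(x);
    if (x < 1e-6f) return 1.f;  // sinc(0) = 1
    if (x >= a)    return 0.f;  // outside the filter support
    const float pi = 3.14159265358979f;
    float sinc   = std::sin(pi * x)     / (pi * x);
    float window = std::sin(pi * x / a) / (pi * x / a);
    return sinc * window;
}
```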

For looking up the texel color, different lookup methods are implemented (all with the help of the book).

Sampling (include/nori/dpdf2d.h)

class DiscretePDF2D

To obtain a proper sampling method, the approach from the lecture slides is applied. First, a discrete distribution is created for each row. Then, from the sum of each row, the marginal distribution is computed so that rows can be selected with the correct probability. As a final step, all distributions are normalized. After that, it is easy to map a uniform 2D sample onto this distribution. To build the distribution, the maximum value over the RGB channels of each pixel is used.

To look up a sample position, the row is first selected using the marginal distribution and the first random variable. Within the selected row, the second random variable is then used to sample the column. This yields the sampled uv coordinates, from which the 3D ray can be computed (outside the distribution class).
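The conditional/marginal scheme described above can be sketched as follows (illustrative names, not the actual Nori class; assumes every row has a nonzero total):

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// 2D discrete distribution over the texels of a luminance image
// (row-major): one conditional CDF per row plus a marginal CDF over rows.
struct DiscretePDF2D {
    int w, h;
    std::vector<std::vector<float>> condCdf; // normalized CDF of each row
    std::vector<float> marginalCdf;          // normalized CDF over rows

    DiscretePDF2D(const std::vector<float> &lum, int w, int h) : w(w), h(h) {
        condCdf.resize(h); marginalCdf.resize(h);
        float total = 0.f;
        for (int y = 0; y < h; ++y) {
            condCdf[y].resize(w);
            float sum = 0.f;
            for (int x = 0; x < w; ++x) {
                sum += lum[(std::size_t)y * w + x];
                condCdf[y][x] = sum;         // unnormalized CDF of row y
            }
            total += sum;
            marginalCdf[y] = total;          // unnormalized marginal CDF
            for (int x = 0; x < w; ++x)      // normalize this row
                condCdf[y][x] /= sum;
        }
        for (int y = 0; y < h; ++y)          // normalize the marginal
            marginalCdf[y] /= total;
    }

    // Map two uniform samples to a texel (x, y): first pick the row from
    // the marginal, then the column from that row's conditional.
    std::pair<int, int> sample(float u1, float u2) const {
        int y = int(std::lower_bound(marginalCdf.begin(), marginalCdf.end(), u1)
                    - marginalCdf.begin());
        int x = int(std::lower_bound(condCdf[y].begin(), condCdf[y].end(), u2)
                    - condCdf[y].begin());
        return {x, y};
    }

    // Probability of picking texel (x, y): P(row) * P(column | row).
    float pdf(int x, int y) const {
        float pRow = marginalCdf[y] - (y > 0 ? marginalCdf[y - 1] : 0.f);
        float pCol = condCdf[y][x]  - (x > 0 ? condCdf[y][x - 1]  : 0.f);
        return pRow * pCol;
    }
};
```

On a 2x2 image with luminances {0, 1, 0, 3}, for example, the bottom-right texel is picked with probability 3/4 and the top-right with 1/4, as expected.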

struct PDF2DQueryRecord

This struct is a helper, like the query-record classes of other features. It is used to obtain samples and pdfs based on the query settings, and it holds all the information needed for these actions.

Environment map emitter (include/nori/infiniteAreaLight.h and src/infiniteAreaLight.cpp)

class InfiniteAreaLight

This class represents the environment map emitter and implements the emitter interface just like all the other light sources. Sampling is based on the discrete 2D distribution generated from the given texture. The color is evaluated with the mip map, which holds all necessary resolutions (looked up with the sampled uv coordinates), and the pdf of a sample is likewise computed from the 2D distribution. So, as for all other emitters, the methods Color3f sample(...), Color3f eval(...) and float pdf(...) do exactly what the other emitters' methods do, but specialized for the environment map.
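Inside eval() and pdf(), some mapping between a world-space direction and uv coordinates is needed. A sketch of the usual lat-long (equirectangular) mapping, assuming z is the "up" axis (the actual convention in the implementation may differ):

```cpp
#include <cmath>

// Map a normalized direction (dx, dy, dz) to equirectangular (u, v)
// coordinates in [0, 1) x [0, 1].
void directionToUV(float dx, float dy, float dz, float &u, float &v) {
    const float pi = 3.14159265358979f;
    float phi = std::atan2(dy, dx);     // azimuth in (-pi, pi]
    if (phi < 0.f)
        phi += 2.f * pi;                // wrap to [0, 2*pi)
    float theta = std::acos(dz);        // polar angle in [0, pi]
    u = phi / (2.f * pi);
    v = theta / pi;
}
```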

Additionally, the class implements methods to handle rays that have missed the scene.

void activate(const Scene* scene)
To create the sphere for the environment map, the scene has to be passed in so that the bounding box enclosing all its objects can be obtained. With this information, a larger sphere can be placed around the scene. IMPORTANT: the sphere is NOT added to the scene, since it is not a real shape belonging to it; it is merely a helper object for internal calculations.
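One way to derive such a sphere from the scene's bounding box, with hypothetical names and an assumed safety margin (the actual margin in the implementation may differ):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Radius of an environment sphere centered at the bounding box center:
// the box's half-diagonal, scaled by a margin so the sphere safely
// encloses every object in the scene.
float envSphereRadius(const Vec3 &bmin, const Vec3 &bmax, float margin = 2.f) {
    float dx = 0.5f * (bmax.x - bmin.x);
    float dy = 0.5f * (bmax.y - bmin.y);
    float dz = 0.5f * (bmax.z - bmin.z);
    return margin * std::sqrt(dx * dx + dy * dy + dz * dz);
}
```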
bool getRayIntersect(const Ray3f &ray, Intersection &intersection)
This method computes the intersection with the environment map. Of course, this is in fact not a real intersection, but rather a projection of the ray onto a sphere at the given distance, in order to compute all required information. It uses the same interface as other intersection calculations, so the integrator can use it exactly like any other intersection check. The resulting intersection then carries all the information the integrator needs to construct the correct query for the other emitter methods.
Color3f rayIntersect(const Ray3f &ray)
With this method, the integrator can obtain the correct color of the hit texel for any ray that missed the scene.

Integrators (src/environment_*.cpp)

class Environment*Integrator

The integrators are copied from the already existing ones. The only change is the following: until now, every ray that missed the scene either returned black (for a primary ray) or simply terminated the loop (for a secondary ray). Now, for each missed ray, the infinite area light is queried for the correct radiance along that ray. This color is applied to the ray and the path tracing stops; no ray is allowed to bounce off the environment map emitter back into the scene.

Validation

MIP-Map

The level of the mip map can be selected via a hardcoded value if needed, to see the effect of the different resolutions. Indeed, the behaviour is as expected: selecting a higher level reduces the resolution. Also, by simply looking at the individual patches when the resolution is reduced, one can see that each color is the average of the four texels below it. (Of course, in a perfect world with enough time and pressure I would write a test for this.)

Level 0/12 Level 4/12 Level 6/12 Level 7/12

Dielectric and mirror

Interaction with the environment map emitter through the mirror and the dielectric sphere also yields the expected render results.

Map 1 Map 2

Comparing with Mitsuba

For some reason, I was not able to set up the scene in Mitsuba in exactly the same way as in Nori. Also, when I tried to set the material to "mirror", it did not work at all; I was not able to apply the mirror BSDF to any shape, and the source code I downloaded to build on my system contained no mirror.cpp file. For these reasons, I simply made both spheres dielectric. I also had to set the camera perspective manually in Mitsuba's rendering window.

Overall, the renderings are not exactly the same, but also not completely different. The first difference is that the environment map in Mitsuba somehow seems closer than in Nori. Another difference is that the walls look truly orthogonal in Mitsuba, whereas in Nori with my implementation some look slightly curved. It may also be that in Mitsuba they only look straight because the image is zoomed in just enough that the curvature is no longer visible. (The image holding the environment map is itself slightly curved, without truly orthogonal walls.)

The "content" of the spheres is also not exactly the same. The rendered parts of the environment map are a little bit shifted compared to each other. This is probably also because the depth is not the same.

However, the artifacts of the reflected rays do look correct.

Nori Mitsuba

Less fortunate is the situation with the sampling... I could not find out what I am doing wrong. My assumption is that something is mixed up between the sampling and the PDF. The shape also looks interestingly flat. Due to lack of time and sleep I cannot fix this; my brain is too scrambled to find a rational solution. Probably only something rather small is wrong (at least that is what I hope). I am also not completely sure why the colors are upside down; I did not see where I might be applying a wrong transformation. In fact, while rendering my final image, nothing like this ever showed up, but probably I just do not notice it since there is always some texture and many other things around.

What really grinds my gears is that the shadow seems somehow right... and the colors are relatively close to each other, just upside down.

I would be really happy if you could point out my mistake so I can fix it for my honor ;-)

Nori Mitsuba

Conclusion

Even though the emitter does not seem to be completely correctly implemented, it serves its purpose for the final image.