Deep Lambertian Networks

Deep Lambertian Networks (DLN) are a hybrid undirected-directed generative model that combines deep belief networks (DBNs) with the Lambertian reflectance model to deal with the challenges posed by illumination variation in visual perception. The Lambertian reflectance model is widely used for modeling illumination variation and is a good approximation for diffuse object surfaces. Because the albedo it recovers does not depend on the lighting, it provides an illumination-invariant representation that can be used for recognition.

In the DLN, the visible layer consists of the image pixel intensities v ∈ R^{N_v}, where N_v is the number of pixels in the image. For every pixel i there are two latent variables: the albedo a_i and the surface normal n_i. Under the Lambertian model, the expected intensity of pixel i is v_i = a_i (n_i · ℓ), where ℓ is the direction of the light source. Gaussian restricted Boltzmann machines (GRBMs) are used to model the albedo and the surface normals.
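
A minimal NumPy sketch of this image-formation step, assuming unit-norm surface normals (the function name and toy data are illustrative, not from the original paper):

    import numpy as np

    def lambertian_render(albedo, normals, light):
        # Expected pixel intensities under the Lambertian model:
        # v_i = a_i * (n_i . l)
        #   albedo  : shape (Nv,)   per-pixel albedo a_i
        #   normals : shape (Nv, 3) per-pixel unit surface normals n_i
        #   light   : shape (3,)    light source direction l
        return albedo * (normals @ light)

    # Toy example: four pixels on a flat surface facing the camera,
    # lit head-on, so each intensity equals its albedo.
    albedo = np.array([0.2, 0.5, 0.8, 1.0])
    normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
    light = np.array([0.0, 0.0, 1.0])
    print(lambertian_render(albedo, normals, light))  # [0.2 0.5 0.8 1.0]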

By combining deep belief nets with the Lambertian reflectance assumption, the model can learn good priors over the albedo from 2D images. Illumination variations in an image can then be explained by changing only the lighting latent variable, while the albedo and surface normals stay fixed; a sketch of this relighting step follows below. By transferring knowledge learned from similar objects, the model can also estimate the albedo and surface normals from a single image. Experiments demonstrate that the model generalizes well and improves over standard baselines in one-shot face recognition.
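
The relighting idea can be sketched in the same vein: once the albedo and surface normals have been inferred from an image, a new lighting direction is rendered by changing only the lighting variable. The max(0, ·) clamp for attached shadows is a standard Lambertian convention used here for illustration, not necessarily the paper's exact formulation:

    import numpy as np

    def relight(albedo, normals, new_light):
        # Re-render the image under a new lighting direction while
        # keeping the inferred albedo and surface normals fixed.
        # Clamping negative dot products to zero darkens pixels whose
        # normals face away from the light (attached shadows).
        shading = np.clip(normals @ new_light, 0.0, None)
        return albedo * shading

    # Example: moving the light to the side darkens a camera-facing
    # surface completely, since its normals are orthogonal to the light.
    albedo = np.array([0.2, 0.5, 0.8, 1.0])
    normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
    side_light = np.array([1.0, 0.0, 0.0])
    print(relight(albedo, normals, side_light))  # [0. 0. 0. 0.]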

The model has been successfully applied to the reconstruction of facial images with shadows under arbitrary lighting conditions, and it has also been tested on inanimate objects. Its authors report that it outperforms comparable methods while being faster.

References

  1. Tang, Yichuan; Salakhutdinov, Ruslan; Hinton, Geoffrey E. (2012). "Deep Lambertian Networks". Proceedings of the 29th International Conference on Machine Learning (ICML).

