
Image geometry correction


Image Geometry Correction (often referred to as Image Warping) is the process of digitally manipulating image data so that the image's projection precisely matches a specific projection surface or shape. Image geometry correction compensates for the distortion created by off-axis projector or screen placement, or by a non-flat screen surface, by applying a pre-compensating inverse distortion to the image in the digital domain.

Usually, image geometry correction is applied such that equal areas of the projection surface, as perceived by the viewer, map to equal areas of the source image. It can also be used to apply a special-effect distortion. The term “image” geometry correction, implying a static image, is slightly misleading: the technique applies equally to static and dynamic images (i.e. moving video).

Overview

Image geometry correction is generally implemented in two different ways:

  1. Graphics processing
  2. Signal processing

Both techniques involve the real-time execution of a spatial transformation from the input image to the output image, and both require powerful hardware. The spatial transformation must be pre-defined for a particular desired geometry, and may be calculated by several different methods (see below).

In graphics processing, the spatial transformation consists of a polygon mesh (usually triangles). The transformation is executed by texture mapping from the rectilinear mesh of the input image to the transformed mesh of the destination image. Each polygon of the input image is thus mapped onto an equivalent (but transformed in shape and location) polygon in the output image.
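
The following is a minimal sketch of this approach, assuming NumPy and a single-channel image stored as a 2-D array; it emulates on the CPU, for a single triangle of the mesh, the texture-mapping step that a graphics controller performs in hardware. All function and variable names are illustrative, not taken from any particular product.

  import numpy as np

  def texture_map_triangle(src, dst, src_tri, dst_tri):
      """Fill one warped output triangle by sampling the corresponding source triangle."""
      src_tri = np.asarray(src_tri, dtype=float)   # 3 source-texture vertices (x, y)
      dst_tri = np.asarray(dst_tri, dtype=float)   # 3 warped output vertices (x, y)

      # Bounding box of the output triangle, clipped to the output image.
      x0, y0 = np.floor(dst_tri.min(axis=0)).astype(int)
      x1, y1 = np.ceil(dst_tri.max(axis=0)).astype(int)
      x0, y0 = max(x0, 0), max(y0, 0)
      x1, y1 = min(x1, dst.shape[1] - 1), min(y1, dst.shape[0] - 1)

      # Matrix converting an output point into barycentric coordinates.
      T = np.array([[dst_tri[0, 0] - dst_tri[2, 0], dst_tri[1, 0] - dst_tri[2, 0]],
                    [dst_tri[0, 1] - dst_tri[2, 1], dst_tri[1, 1] - dst_tri[2, 1]]])
      T_inv = np.linalg.inv(T)

      for y in range(y0, y1 + 1):
          for x in range(x0, x1 + 1):
              l1, l2 = T_inv @ (np.array([x, y], dtype=float) - dst_tri[2])
              l3 = 1.0 - l1 - l2
              if min(l1, l2, l3) < -1e-9:       # pixel lies outside the triangle
                  continue
              # The same barycentric weights locate the source-texture coordinate.
              sx, sy = l1 * src_tri[0] + l2 * src_tri[1] + l3 * src_tri[2]
              sy_i = min(max(int(round(sy)), 0), src.shape[0] - 1)
              sx_i = min(max(int(round(sx)), 0), src.shape[1] - 1)
              dst[y, x] = src[sy_i, sx_i]

A graphics controller repeats this for every triangle of the mesh, using filtered rather than nearest-neighbour texture sampling.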

Graphics-processing-based image geometry correction may be performed with inexpensive PC-based graphics controllers. The sophisticated software that drives the texture-mapping hardware of a graphics controller is not standard, and is available only from vendors of specialty software (e.g. Mersive Technologies and Scalable Display Technologies).

Graphics-processing-based image geometry correction is very effective for content that originates on the PC. Its major drawback is that it is tied to the graphics controller platform and cannot process signals that originate outside the graphics controller.

In signal-processing-based image geometry correction, the spatial transformation consists of a spatially varying two-dimensional image re-sampling (scaling) filter. The scaling operation is performed with different scaling ratios in different parts of the image, according to the defined transformation. Special care must be taken in the design of the scaling filter to ensure that spatial frequencies remain balanced in all areas of the image, and that the Nyquist criterion is met everywhere.
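
A minimal sketch of this approach is shown below, assuming NumPy and that the transformation has been expanded into an inverse map giving, for every output pixel, the fractional input coordinate to sample. Bilinear interpolation stands in for the spatially varying scaling filter; a production design would add the low-pass filtering discussed above so that the Nyquist criterion is met in regions that are shrunk.

  import numpy as np

  def warp_bilinear(src, map_x, map_y):
      """Resample src at the fractional coordinates (map_x, map_y), one pair per output pixel."""
      h, w = src.shape
      x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
      y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
      fx = np.clip(map_x - x0, 0.0, 1.0)   # fractional offsets within the source pixel
      fy = np.clip(map_y - y0, 0.0, 1.0)
      top = src[y0, x0] * (1 - fx) + src[y0, x0 + 1] * fx
      bottom = src[y0 + 1, x0] * (1 - fx) + src[y0 + 1, x0 + 1] * fx
      return top * (1 - fy) + bottom * fy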

Signal-processing-based image geometry correction is implemented by specially designed hardware in the projection system (e.g. IDT, Silicon Optix or GEO Semiconductor), or in stand-alone video signal processors (e.g. Flexible Picture Systems).

Signal-processing-based image geometry correction is the most flexible form of this technology, enabling the correction of images that originate from any graphics controller platform. Its drawback is the extra expense of the hardware used to perform it, which can be mitigated by including additional features (such as switching and edge blending) in the same signal processing system.

Calculation of the image geometry correction transformation

The image geometry correction transformation can be calculated by predictive geometry (i.e. calculating exactly where an image should land on a regular surface such as a sphere or a cylinder), by an automatic optical feedback system (i.e. a camera used to evaluate the alignment of test images), or by user iteration (i.e. movement of points by an operator). In all methods, the transformation is generally described as a two-dimensional array of points. The number of points required for an accurate correction depends on the surface involved; in the case of keystone correction, four points are sufficient to completely describe any projection situation.
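
For the keystone case, the four-point transformation is a 3×3 projective transform (homography); the sketch below, which assumes NumPy and uses hypothetical corner coordinates, recovers it by solving the standard eight-equation linear system.

  import numpy as np

  def homography_from_points(src_pts, dst_pts):
      """Return the 3x3 homography H such that each dst point ~ H @ src point (homogeneous)."""
      A, b = [], []
      for (x, y), (u, v) in zip(src_pts, dst_pts):
          A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
          A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
      h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
      return np.append(h, 1.0).reshape(3, 3)

  # Example: image corners versus the quadrilateral actually hit by an
  # off-axis projector (the coordinates are hypothetical measurements).
  H = homography_from_points(
      [(0, 0), (1920, 0), (1920, 1080), (0, 1080)],
      [(60, 0), (1860, 40), (1800, 1080), (120, 1040)])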

Applications

The simplest application of image geometry correction is a specific case known as keystone distortion correction, derived from the keystone effect. Keystone distortion gets its name from the symmetric trapezoidal distortion that results when the projector is misaligned in the vertical dimension (although the term is generally applied to the non-symmetric quadrilateral shape produced by an off-axis projection in both dimensions). Keystone correction capabilities are now included in most projectors on the market, allowing users to adjust the image both vertically and horizontally. Even with this feature, the degree of adjustment available is limited, and image quality may suffer. Keystone correction is suited to simple business (e.g. conference room) adjustments; for more complex distortion correction, an external processor is required.
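
As an illustration of how such a correction can be applied, the sketch below (assuming NumPy, and a 3×3 homography H such as the one computed earlier, mapping frame-buffer coordinates to screen coordinates expressed in the source image's own pixel grid) builds the per-pixel inverse map used to pre-warp the frame buffer. Fed to a resampler such as the bilinear one above, it makes the projector's distortion and the pre-warp cancel, so the audience sees an undistorted rectangle.

  import numpy as np

  def precompensation_map(H, width, height):
      """For every frame-buffer pixel, return the source coordinate it should display."""
      ys, xs = np.mgrid[0:height, 0:width].astype(float)
      pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)  # homogeneous pixel coords
      mapped = H @ pts
      mapped = mapped[:2] / mapped[2]                            # perspective divide
      map_x = mapped[0].reshape(height, width)
      map_y = mapped[1].reshape(height, width)
      # Coordinates falling outside the source image should be masked to black in practice.
      return map_x, map_y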

Projector stacking is an advanced form of keystone correction. In this application, two or more projectors project onto exactly the same surface. Since the projectors cannot occupy exactly the same position, the output of each must be at least slightly corrected for keystone distortion.

Passive 3D projector stacking provides precise alignment for two synchronized projectors supplying the left-eye and right-eye images of a 3D application.

Image geometry correction onto regular surfaces (such as spheres and cylinders) is the next level of complexity. Both of these regular shapes are encountered frequently in professional audio-video (Pro AV) installations, in the form of domed or curved-wall theatres. Other commonly encountered regular-shaped surfaces are subway walls and pillars.

Image Geometry Correction onto irregular surfaces is the most advanced form. This type of projection is common in architectural installations such as casinos.

Edge blending is a companion application to image geometry correction. It enables the seamless projection of a large image using several overlapping projectors. Since keystone distortion (and frequently projection onto a non-flat surface) must be corrected in almost all edge-blending systems, the pairing of edge blending and image geometry correction in the same video signal processor is a natural one.
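
A minimal sketch of the blending itself, assuming NumPy, two projectors that overlap over a band of known width, and an assumed display gamma of 2.2: each image receives a complementary gain ramp across the overlap, applied in linear light so that the summed brightness stays constant.

  import numpy as np

  def blend_ramps(width, overlap, gamma=2.2):
      """Per-column gain for the left and right projector images of a two-projector blend."""
      ramp = np.linspace(1.0, 0.0, overlap)    # fades out across the overlap band
      left = np.ones(width)
      left[-overlap:] = ramp                   # right edge of the left image fades out
      right = np.ones(width)
      right[:overlap] = ramp[::-1]             # left edge of the right image fades in
      # Compensate for the projector transfer curve so the crossfade is linear in light.
      return left ** (1.0 / gamma), right ** (1.0 / gamma)

  left_gain, right_gain = blend_ramps(width=1920, overlap=256)   # hypothetical values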

News

  • Wired Magazine – Flexible Picture Systems Image AnyPlace-200 Video Upscaler
  • White Paper - Geometry Correction

References

  1. Wood, S. "Technology – Image Geometry Correction".
  2. Cage, Chuck (9 November 2009). "Even the Oldest, Crappiest Video Can Shine". Wired. Retrieved 9 November 2009.
  3. "Geometry Correction" (PDF) (white paper). Archived from the original on 2012-03-21. Retrieved 2011-03-22.