Autostereoscopic Display
A new approach based on Cyclograph
University of Delaware / Computer Graphics Group
Background
Traditional light field-based autostereoscopic displays derive from light field rendering techniques [2][6]. In a light field, an array of cameras is placed facing the desired target object, as shown in Figure 1. Each camera captures a specific view of the object, forming a view database. New views are synthesized from these reference views without knowing the 3D geometry of the object [3][5]; in other words, the 3D model is replaced by a collection of reference views as its primary representation. Since each pixel in a view corresponds to a specific ray in 3D space, a light field can be considered a ray representation of a 3D object.
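As a concrete illustration of the ray representation, light fields are commonly stored with the two-plane (u, v, s, t) parameterization of [2]: (u, v) indexes a camera on the camera plane and (s, t) a pixel in that camera's image. The sketch below is illustrative only (random data, and bilinear interpolation on the camera plane with nearest-neighbor on the image plane, a common simplification of full quadrilinear filtering); none of the names come from the cited papers.

```python
import numpy as np

# Hypothetical 4D light field: a (U, V) grid of cameras, each storing an
# (S, T) grayscale image. Filled with random data purely for illustration.
U, V, S, T = 4, 4, 8, 8
rng = np.random.default_rng(0)
lf = rng.random((U, V, S, T))

def sample_ray(lf, u, v, s, t):
    """Look up one ray (u, v, s, t): bilinear interpolation across the
    camera plane, nearest-neighbor on the image plane (a simplification;
    full quadrilinear interpolation is analogous)."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1 = min(u0 + 1, lf.shape[0] - 1)
    v1 = min(v0 + 1, lf.shape[1] - 1)
    fu, fv = u - u0, v - v0
    si, ti = int(round(s)), int(round(t))
    return ((1 - fu) * (1 - fv) * lf[u0, v0, si, ti]
            + fu * (1 - fv) * lf[u1, v0, si, ti]
            + (1 - fu) * fv * lf[u0, v1, si, ti]
            + fu * fv * lf[u1, v1, si, ti])
```

Synthesizing a new view then amounts to evaluating `sample_ray` once per output pixel, with (u, v, s, t) computed from the desired camera pose.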
Figure 1: A light field uses an array of cameras to capture different views of an object. These views are stored as images and reclassified as rays in the light field.
The light field representation allows observers to view the object from all possible perspectives without knowing its geometry. When a requested view is not present in the view database, it can be synthesized by appropriately filtering and interpolating the rays. In recent years, the Computer Graphics Group at MIT has designed special ray interpolation and reconstruction filters to achieve specific rendering effects [1, 7, 8]. Among them, the most interesting is the confocal aperture filter [1], which allows the user to dynamically change the depth of field of the object. While most conventional reconstruction filters are direct analogs of a real-world camera model with a fixed resolution and aperture, the confocal aperture filter combines the best properties of multiple realizable optical systems. Figure 7 shows an example of our confocal aperture filter on a plant scene: we can dynamically focus on different parts of the scene, and only objects lying at the corresponding depth remain in sharp focus.

The first attempt to use a light field to build an autostereoscopic display is probably the work of Isaksen et al. [1]. Their method places a hexagonal array of light field samples behind a lens array; in this setup each lenslet simulates a pinhole camera, so the arrangement is equivalent to presenting a subset of the light field database behind the lens array. Figure 3 is a picture taken from their paper; notice that inside each hexagon is a specific pinhole-camera image with a fixed center of projection and field of view. When a lenslet is viewed from a particular direction, the entire lenslet takes on the color of a single point in the image plane of that lenslet, as the top of Figure 4 shows.
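The variable-focus behavior described above can be sketched as shift-and-add refocusing over a 1-D camera array: each camera's image is shifted in proportion to the camera's offset, so features at the matching depth align and stay sharp while everything else blurs. This is a toy illustration of the idea, not the actual filter of [1]; the `refocus` function and its parameters are assumptions made for the sketch.

```python
import numpy as np

def refocus(images, cam_offsets, alpha):
    """Shift-and-add refocusing over a 1-D camera array.

    Each image is shifted by alpha * (its camera's offset), in pixels,
    and the shifted images are averaged. Varying alpha sweeps the focal
    depth: scene points whose per-camera disparity matches alpha add up
    coherently; points at other depths are spread out (blurred).
    """
    out = np.zeros_like(images[0], dtype=float)
    for img, c in zip(images, cam_offsets):
        shift = int(round(alpha * c))
        out += np.roll(img, shift)  # wrap-around is fine for this toy
    return out / len(images)
```

For example, a point whose image shifts by one pixel per unit of camera offset is brought into sharp focus with `alpha=1.0`, while `alpha=0.0` smears it across neighboring pixels.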
Because our two eyes are horizontally offset, the images they perceive through the lens array differ: each corresponds to a particular focal distance and field of view within the hexagonal-array light field, as illustrated in the bottom of Figure 4. The final perceived image therefore gives the viewer a stereo experience, and if the field of view of each lenslet is large enough, noticeable horizontal and vertical parallax can be observed.
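The per-eye geometry can be sketched with a 1-D pinhole-lenslet model: each eye's chief ray passes through the lenslet center and is extended to the image plane a focal length behind it, selecting a different sample for each eye. All names and dimensions below are illustrative assumptions, not values from [1].

```python
def pixel_seen(eye_x, eye_dist, lenslet_x, focal_len, pixel_pitch):
    """Offset, in pixels from the lenslet center, of the sample seen by
    an eye at lateral position eye_x, a distance eye_dist in front of a
    lenslet centered at lenslet_x. The chief ray through the lenslet
    center (pinhole model) is extended to the image plane focal_len
    behind the lenslet; similar triangles give the lateral offset."""
    offset = (lenslet_x - eye_x) * focal_len / eye_dist
    return offset / pixel_pitch

# Illustrative numbers: ~64 mm interocular distance, viewer 0.5 m away,
# 3 mm lenslet focal length, 0.1 mm pixels.
left = pixel_seen(-0.032, 0.5, 0.0, 0.003, 0.0001)
right = pixel_seen(+0.032, 0.5, 0.0, 0.003, 0.0001)
# left and right land on different pixels, so each eye sees a
# different view stored behind the lenslet -- the source of stereo.
```

With these numbers the two eyes are separated by roughly four pixels behind the lenslet, which is why the samples behind each lenslet must span the lenslet's full field of view.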
Figure 3: An autostereoscopic image produced by Isaksen et al. (This image should be viewed through a lens array.)
Figure 4: Top: all parallel rays converge to the point where the principal ray hits the image plane. Bottom: the rays perceived by the left and right eyes.
[1] Aaron Isaksen, Leonard McMillan, and Steven J. Gortler. "Dynamically Reparameterized Light Fields." SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series.
[2] Marc Levoy and Pat Hanrahan. "Light Field Rendering." SIGGRAPH 96, pages 31–42.
[3] S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen. "The Lumigraph." Proc. ACM SIGGRAPH 96, pages 43–54, 1996.
[4] Billy Chen. "Building a Projection Autostereoscopic Display." Stanford Tech Report, 2002.
[5] L. McMillan and G. Bishop. "Plenoptic Modeling: An Image-Based Rendering System." Computer Graphics (SIGGRAPH 95), pages 39–46, August 1995.
[6] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen. "The Lumigraph." Proc. SIGGRAPH 96, pages 43–54, 1996.
[7] Jingyi Yu, Leonard McMillan, and Steven Gortler. "Surface Camera (SCam) Light Field Rendering." International Journal of Image and Graphics (IJIG), Special Issue on Integrated Image and Graphics Technologies, Volume 4, Number 4, 2004.
[8] Jason Stewart, Jingyi Yu, Steven Gortler, and Leonard McMillan. "A New Reconstruction Filter for Undersampled Light Fields." Rendering Techniques 03, Eurographics Symposium on Rendering (EGSR) 2003, Leuven, Belgium, June 2003.
©2005 University of Delaware | Computer & Information Sciences | Computer Graphics Group