Autostereoscopic Display
A new approach based on Cyclograph
University of Delaware / Computer Graphics Group
Our Method
Our method produces an autostereoscopic display capable of showing three-dimensional views of an object rather than two-dimensional images. This is because each lenslet acts as a view-dependent pixel, as seen in Figure 5. In our proof-of-concept design, the focal length of each lenslet is set equal to the thickness of the lens array. A light field image of the object is placed behind the lens array so that a subset of the ray database lies behind each lenslet. When a lenslet is viewed from a particular direction, the entire lenslet takes on the color of a single pixel in the image. To predict which color will be seen from a particular direction, we use a paraxial lens approximation: we draw a line parallel to the viewing direction that passes through the principal point of the lenslet. This line intersects the image behind the lenslet at some point, and that point determines the view-dependent color.
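The paraxial lookup described above is simple enough to state as a short sketch. This is an illustrative reconstruction rather than the authors' code: `view_dependent_color` and its parameters are hypothetical names, and the principal point and focal length are assumed to be given in pixel units of the printed light field image.

```python
import numpy as np

def view_dependent_color(image, principal_point_px, focal_length_px, view_dir):
    """Predict the color a lenslet shows from a given viewing direction.

    image              -- H x W x 3 light field image behind the lens array
    principal_point_px -- (x, y) of the lenslet's principal point, in pixels
    focal_length_px    -- focal length == lens array thickness, in pixel units
    view_dir           -- (dx, dy, dz) viewing direction, dz > 0 toward image
    """
    dx, dy, dz = view_dir
    # Paraxial approximation: the ray through the principal point, parallel
    # to the viewing direction, hits the image plane one focal length behind
    # the lens at an offset proportional to tan(theta) = d_xy / d_z.
    x = principal_point_px[0] + focal_length_px * dx / dz
    y = principal_point_px[1] + focal_length_px * dy / dz
    h, w = image.shape[:2]
    # Sample the nearest pixel, clamped to the image bounds.
    xi = int(np.clip(round(x), 0, w - 1))
    yi = int(np.clip(round(y), 0, h - 1))
    return image[yi, xi]
```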
We have tested the prototype 3D medical display on a 3D brain model using a lenslet array and a high-resolution printed light field image. Each hexagonal sub-image is rendered from the principal point of a lenslet, with the image plane parallel to the flat face of the lens array facing the brain. The sub-images are then composited into one large image, as in the background of Figure 9, which is placed under the lens array. The placement and orientation of the lens array determine whether the viewed light field appears in front of or behind the display. If the lens array is placed in front of the object, the object appears behind the display. Because the lens array image is rendered from a light field rather than captured directly by an integral camera, we can place the lens array image behind the captured object, and the object will appear to float in front of the display.
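The compositing step can be sketched as follows. This is not the authors' pipeline: the real array uses hexagonally packed lenslets, while this sketch tiles square sub-images on a regular grid for brevity; `render_subimage`, `rows`, `cols`, and `sub_size` are hypothetical names.

```python
import numpy as np

def composite_light_field(render_subimage, rows, cols, sub_size):
    """Tile one sub-image per lenslet into the image placed under the array.

    render_subimage(r, c) -- returns a sub_size x sub_size x 3 uint8 view
                             rendered from the principal point of lenslet (r, c)
    """
    out = np.zeros((rows * sub_size, cols * sub_size, 3), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            # Each lenslet's sub-image occupies its own tile of the print.
            y, x = r * sub_size, c * sub_size
            out[y:y + sub_size, x:x + sub_size] = render_subimage(r, c)
    return out
```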
We plan to mount the lenslet array onto an LCD display to achieve dynamic 3D effects. When the 3D model changes, we can recompute the light field image and project it onto the LCD screen. To do so, several critical problems need to be solved. First, we need to investigate new calibration algorithms to determine the proper size and location of the light field image on the screen so that it matches the lens locations; this requires designing new computer vision algorithms for calibration. Second, new geometry reconstruction algorithms need to be designed to create the 3D model from CT or MRI data. Finally, new rendering procedures are needed to convert the 3D model into a light field. This is analogous to positioning an array of virtual cameras in front of the 3D model, where each camera captures a tiny sub-image the size of a lenslet, as sketched below. For time-critical applications such as brain surgery, the light field must be rendered dynamically and in real time, because the 3D model may be continuously deforming. Recent work using commodity graphics hardware for light field rendering has shown promising results toward real-time modeling and rendering.
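The camera-array analogy can be made concrete with a small sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the planar grid layout, `lenslet_pitch`, and `focal_length` are hypothetical parameters, the field-of-view formula assumes each sub-image spans exactly one lenslet footprint at one focal length, and a real system would render each sub-image off-screen on the GPU.

```python
import math

def lenslet_cameras(rows, cols, lenslet_pitch, focal_length):
    """Yield (position, fov) for a grid of virtual cameras, one per lenslet.

    Each camera sits at its lenslet's principal point, looking straight at
    the model; fov is chosen so the rendered sub-image covers one
    lenslet-sized patch of the image plane one focal length away.
    """
    fov = 2.0 * math.atan((lenslet_pitch / 2.0) / focal_length)
    for r in range(rows):
        for c in range(cols):
            # Center the camera grid on the optical axis of the display.
            x = (c - (cols - 1) / 2.0) * lenslet_pitch
            y = (r - (rows - 1) / 2.0) * lenslet_pitch
            yield (x, y, 0.0), fov
```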
©2005 University of Delaware | Computer & Information Sciences | Computer Graphics Group