  
The multi-camera system, developed by the UD Graphics Lab, consists of 10 Flea2 1394b cameras, a cutting-edge quad-core Xeon server, and 2 InFocus 1400x1050 projectors. As the top-left image illustrates, a subset of the cameras is mounted on movable plates attached to the ceiling racks, while the others are placed at different positions on tripods. This configuration gives us high flexibility for various applications. Thanks to the high computing power and I/O throughput of the central server, our system can connect up to 10 Flea2 1394b cameras at a resolution of 800x600 (24-bit color) running at 7.5Hz, as shown in the central image. The right image shows seven synchronized views captured by our system from different viewpoints.
Overview
Recent breakthroughs in digital imaging technologies, coupled with advances in 2D/3D image processing, have spearheaded a revolution in many scientific disciplines. At the same time, advances in semiconductor technologies have made increasing computing power and transmission bandwidth available for building sensing systems that incorporate a large number of cameras. The UD Graphics Laboratory has built a multi-camera system of 10 CCD PointGrey Flea2 cameras, the world's smallest 1394b camera. The optics and physical arrangement of the camera system are reconfigurable. With the Rainbow S48WI lens, each Flea2 camera can provide 1024x768 YUV color images at a frame rate of 30Hz.
While most multi-camera systems require interconnecting multiple workstations to synchronize and control their cameras, our system uses a single workstation to control all 10 Flea2 1394b cameras (bandwidth 100MB/s). We use 2 PCI-E x1 cards (bandwidth 512MB/s each), 2 PCI-X 64-bit 133MHz cards (bandwidth 1GB/s each), an Intel S5000 motherboard with 4GB/s I/O bandwidth, and two 2.33GHz quad-core Intel Xeon CPUs (8MB L2 cache, 1333MHz FSB). Thanks to the high speed and wide bandwidth of the server's system bus, we avoid the network-transmission bottleneck of multi-workstation systems. Together with a software synchronization unit, the system is designed to return live, synchronized, uncompressed video from all 10 cameras at once. This makes our system a portable multi-camera acquisition device, similar to the Stanford camera array.
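
As a back-of-the-envelope check on this single-workstation design, the sketch below tallies the aggregate video bandwidth against the quoted bus capacities. It assumes the 24-bit 800x600 frames at 7.5Hz cited above; all bus figures are nominal.

    # Bandwidth budget for the 10-camera configuration (nominal figures).
    NUM_CAMERAS = 10
    WIDTH, HEIGHT, BYTES_PER_PIXEL = 800, 600, 3
    FPS = 7.5

    frame_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1e6   # 1.44 MB per frame
    per_camera = frame_mb * FPS                         # ~10.8 MB/s per camera
    aggregate = per_camera * NUM_CAMERAS                # ~108 MB/s in total

    print(f"per camera: {per_camera:.1f} MB/s, aggregate: {aggregate:.1f} MB/s")
    # Each camera stays well under the 100MB/s 1394b budget, and the total is
    # a small fraction of the 2 x 512MB/s PCI-E plus 2 x 1GB/s PCI-X capacity.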
Calibration
1. Camera calibration.
We use Tomas's multi-camera self-calibration algorithm to calibrate the 10 cameras, estimating both intrinsic and extrinsic parameters. We also wrote an OpenCV calibration program (Zhang's algorithm) to cross-validate the intrinsic parameters estimated by the two methods and ensure the accuracy of the estimation. After calibration, we compute the distortion map for each camera and undistort the captured images for later processing. The right image shows one of the radial distortion maps.
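
For reference, the sketch below shows a Zhang-style per-camera calibration and undistortion using OpenCV's Python API. The chessboard size and image directory are illustrative placeholders, not the lab's actual setup.

    import glob
    import cv2
    import numpy as np

    PATTERN = (9, 6)  # inner corners of a hypothetical chessboard target

    # 3D corner coordinates in the target's own plane (Z = 0)
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in glob.glob("cam0/*.png"):  # placeholder image directory
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Estimate the intrinsic matrix K and the distortion coefficients
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # Precompute the distortion map once, then undistort every frame
    map1, map2 = cv2.initUndistortRectifyMap(
        K, dist, None, K, gray.shape[::-1], cv2.CV_32FC1)
    undistorted = cv2.remap(gray, map1, map2, cv2.INTER_LINEAR)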
2. Color calibration.
To reduce artifacts in multi-camera applications, 3D object reconstruction for instance, color consistency across the cameras must also be ensured. We use Adrian's method to retrieve the color settings (polynomial transform matrix, Bayer parameters, etc.) of each camera, save them as system ini files, and load them into the system before the cameras are initialized.
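
To illustrate what such a correction looks like, here is a minimal sketch of applying a per-camera polynomial color transform. The 3x10 second-order polynomial form is an assumption for illustration; the actual parameters would come from the per-camera ini files mentioned above.

    import numpy as np

    def apply_polynomial_transform(img, M):
        """img: HxWx3 float RGB in [0, 1]; M: assumed 3x10 transform matrix."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        # Second-order polynomial basis evaluated per pixel
        basis = np.stack([r, g, b, r * r, g * g, b * b,
                          r * g, r * b, g * b, np.ones_like(r)], axis=-1)
        return np.clip(basis @ M.T, 0.0, 1.0)   # (H, W, 10) @ (10, 3)

    # Identity transform as a sanity check: passes R, G, B through unchanged
    M_identity = np.zeros((3, 10))
    M_identity[0, 0] = M_identity[1, 1] = M_identity[2, 2] = 1.0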
Applications
A multi-camera system has the potential to benefit many research fields, depending on the geometric configuration of the cameras.
1. Multi-view Surveillance
Our multi-view camera system is well suited to video surveillance. In particular, since our system is portable, we plan to work with local chemical plants to improve chemical storage security and to provide disaster control. Because all cameras in our system are synchronized, we can simultaneously deliver reliable tracking of humans and vehicles from multiple viewpoints. A single workstation will run the computer vision algorithms that perform multi-view object reconstruction as well as motion and behavior analysis. This multi-view integration enables more accurate 3D object localization, occlusion reasoning, and abnormal-behavior analysis when subjects are simultaneously visible in more than one camera's field of view.
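
The 3D localization step can be illustrated with textbook linear (DLT) triangulation of a tracked point seen in two calibrated views. The projection matrices below are hypothetical stand-ins for those produced by the calibration described earlier, not our full reconstruction algorithm.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel positions."""
        # Each view contributes two linear constraints on the homogeneous point
        A = np.stack([x1[0] * P1[2] - P1[0],
                      x1[1] * P1[2] - P1[1],
                      x2[0] * P2[2] - P2[0],
                      x2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)   # null vector = smallest singular value
        X = Vt[-1]
        return X[:3] / X[3]           # dehomogenize

    # Two hypothetical cameras with a 0.5-unit baseline along x
    K = np.array([[800.0, 0, 400], [0, 800.0, 300], [0, 0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), [[-0.5], [0.0], [0.0]]])
    X = np.array([0.2, 0.1, 3.0, 1.0])
    x1 = (P1 @ X)[:2] / (P1 @ X)[2]
    x2 = (P2 @ X)[:2] / (P2 @ X)[2]
    print(triangulate(P1, P2, x1, x2))   # ~ [0.2, 0.1, 3.0]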
2. Capturing Dynamic Fluid Surfaces
We propose a novel approach for reconstructing fluid surfaces using the multi-camera system together with a scattered-data interpolation method. The system simultaneously captures different views of a fluid surface: we place a known pattern beneath the surface and position the camera array above it to observe the pattern. Each individual camera in the light field array observes a distinct time-varying distortion of the pattern, and we reconstruct a sampled fluid surface by analyzing these distortions. We also intend to develop a class of surface reconstruction algorithms based on a fully non-linear fluid dynamics model.
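
One standard building block for this kind of distortion analysis (a textbook formulation, not necessarily the exact model we will adopt) recovers the surface normal at each refraction point from the camera ray and the refracted ray toward the known pattern point, using the vector form of Snell's law:

    import numpy as np

    N_AIR, N_WATER = 1.0, 1.33

    def surface_normal(d_in, d_out):
        """d_in: unit ray from the camera into the surface; d_out: unit
        refracted ray from the surface point toward the pattern point."""
        # Snell's law implies the normal is parallel to n1*d_in - n2*d_out
        # (incident ray, refracted ray, and normal are coplanar)
        n = N_AIR * d_in - N_WATER * d_out
        n /= np.linalg.norm(n)
        return -n if np.dot(n, d_in) > 0 else n   # orient toward the camera

    # Sanity check: an undeviated vertical ray implies a flat, level surface
    d = np.array([0.0, 0.0, -1.0])
    print(surface_normal(d, d))   # ~ [0, 0, 1]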
3. Appearance Modeling
We propose to build an integrated system that provides a complete pipeline for processing high-dimensional appearance data. In particular, we will address several key challenges, from capture and archiving to interactive visualization and editing. We will use the multi-camera system, a hand-held LED light, and a light probe to capture the dynamic illumination conditions. Our proposed device permits rapid capture of appearance data under spatial, directional, and illumination variations. Because the camera system is synchronized and we use only a simple user-controlled light source, we avoid the complicated camera-lighting synchronization and calibration required by most existing systems. The major advantage of our system is that it provides a portable, accessible, and cost-efficient acquisition setup, well suited for on-site deployment.
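
As one simple example of processing frames captured under the moving hand-held light, the sketch below implements Lambertian photometric stereo, recovering per-pixel normals and albedo given light directions estimated from the light probe. This is an illustrative baseline only, not our full appearance pipeline.

    import numpy as np

    def photometric_stereo(images, light_dirs):
        """images: K grayscale HxW frames; light_dirs: Kx3 unit vectors, K >= 3."""
        L = np.asarray(light_dirs)                        # K x 3
        I = np.stack([im.reshape(-1) for im in images])   # K x (H*W)
        # Least-squares solve of I = L @ (albedo * normal) for every pixel
        G, *_ = np.linalg.lstsq(L, I, rcond=None)         # 3 x (H*W)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-8)
        h, w = images[0].shape
        return normals.T.reshape(h, w, 3), albedo.reshape(h, w)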
People
Papers
- Robust Tracking and Stereo Matching under Varying Illuminations
Jingdan Zhang, Leonard McMillan, and Jingyi Yu, CVPR 2006, pp. 871-878.
- Surface Camera (scam) Light Field Rendering
Jingyi Yu, Leonard McMillan, and Steven Gortler, IJIG, Special Issue on Integrated Image and Graphics Technologies, Volume 4, Number 4, 2004.
- General Linear Cameras
Jingyi Yu and Leonard McMillan, ECCV 2004 (8th European Conference on Computer Vision), Prague, Czech Republic, oral presentation.
Copyright
Feng Li @ Graphics Lab UD
Last update: April 20, 2007 11:53:37 AM
Created on: April 19, 2007 04:03:37 AM