Workshop on Light Fields for Computer Vision

Journal Special Issue

Elsevier Journal of Computer Vision and Image Understanding (CVIU)

Special Issue on Light Field for Computer Vision (LF4CV)

Overview

With the availability of commodity light field cameras such as Lytro, Raytrix, and the Pelican mobile light field camera, it has become increasingly common to acquire a light field in place of a single image of the scene. While the original goal of acquiring a light field was image-based modeling and rendering (e.g., to enable after-capture refocusing), recent studies have shown that light fields can be extremely useful in a variety of applications in computer vision and robotics, including stereo matching and 3D reconstruction, stereoscopic content synthesis, saliency detection, new non-perspective camera designs, surveillance, recognition, and more. There has also been useful and interesting research on several geometric aspects of light field cameras, such as calibration.
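As a concrete illustration of the after-capture refocusing mentioned above, the short Python sketch below performs classic shift-and-add refocusing on a 4D light field. The array layout, function name, and synthetic example data are assumptions for illustration only and are not part of this call.

import numpy as np
from scipy.ndimage import shift as subpixel_shift

def refocus(lf, alpha):
    # lf: 4D light field indexed as lf[u, v, y, x] (angular u, v; spatial y, x).
    # alpha: slope selecting the synthetic focal plane; each sub-aperture view
    # is translated in proportion to its angular offset from the center view,
    # then all views are averaged (shift-and-add refocusing).
    U, V, H, W = lf.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = alpha * (u - u0)
            dx = alpha * (v - v0)
            out += subpixel_shift(lf[u, v], (dy, dx), order=1, mode="nearest")
    return out / (U * V)

# Example with a synthetic 5x5-view, 64x64-pixel light field (random data).
lf = np.random.rand(5, 5, 64, 64)
focused = refocus(lf, alpha=0.5)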

We are soliciting original contributions that use light field acquisition devices and post-processing algorithms to resolve traditionally challenging computer vision problems. Specific questions we would like to address include, but are not limited to:

Dense multi-view stereo
- How to model light field 3D scene reconstruction as dense multi-view stereo?
- How to develop specialized algorithms to accommodate light field data?
- How to acquire light field datasets and build a benchmark for comparing existing methods?

Light field rendering
- How to develop new rendering schemes to overcome the spatial/angular undersampling problems?
- How to develop novel rendering schemes to better visualize the light field?

Light field modeling
- How to combine the lumigraph and light field representations?
- Can we build previously unexplored modeling tools?
- How to develop tailored 3D reconstruction algorithms such as structure-from-motion using the light field camera?
- How to conduct intra-LF camera calibrations?
- Can we use the special structure of a light field to go beyond the common Lambertian reflectance models?

Light field sensors
- How can we design new LF sensors using emerging computational imaging technologies such as programmable lenses, programmable coded apertures, etc.?
- How to resolve the low spatial/angular resolution problem from the sensor’s perspective?

Descriptors for light field images
- Can we build statistical models of light fields, similar to image statistics?
- Can we develop saliency detection algorithms that take light fields as inputs?

Detection/recognition algorithms on light field images
- How can light fields improve detection and recognition tasks?
- Can we develop tailored detection/recognition algorithms for light fields?

Applications of light field cameras, e.g., robotics, surveillance, and inspection
- What is the main advantage of replacing the regular camera on a robot with a light field camera?
- How can light field cameras facilitate more reliable SLAM?
- What can we learn from compound eyes, which in many ways resemble light field cameras? Can this inspire new sensor designs and algorithm development?

Light-field inspired modeling of non-central cameras
- How to use light fields to study geometric models of new non-central cameras, including rolling-shutter, catadioptric, and underwater cameras?
- How to develop multi-view geometry and structure-from-motion for these non-central cameras?
- How to conduct feature detection and feature matching in images from these non-central/non-perspective cameras?

Guest Editors:
  • Jingyi Yu
  • Bastian Goldluecke
  • Rick Szeliski
  • Tomas Pajdla

Deadlines:
  • Submission deadline: December 15, 2014
  • First-iteration reviewer response: May 15, 2015
  • Author re-submission: August 15, 2015
  • Second-iteration reviewer response: November 15, 2015
  • Publication of the issue: December 2015