Dense Light Field Reconstruction: from Depth-based to Learning-based Approaches
Citation:
Chen, Yang, Dense Light Field Reconstruction: from Depth-based to Learning-based Approaches, Trinity College Dublin. School of Computer Science & Statistics, 2021
Download Item:
PhD_Thesis_Yang_Chen_final_Archived_TARA_version.pdf (PDF) 52.37Mb
Abstract:
Light field imaging and processing is an emerging technique that motivates the production of 3D visual content and makes it possible to provide high-quality immersive 3D experiences. A light field describes all light rays passing through a given volume of 3D space, and only suitable acquisition can construct a dense light field, which is advantageous in practical applications such as medical imaging, computer animation and post-capture photography. However, limited by processing capability, acquiring a sufficient amount of this high-dimensional information usually leads to significant computational complexity and inaccuracy. Thus, to bridge the gap between acquisition and the required visual information, this thesis aims to establish an efficient and accurate light field reconstruction framework that requires only sparse light field input.
First, we introduce our contribution to depth estimation from 4D light fields and its application to rendering novel views for light field reconstruction. We build an optical flow framework that estimates disparity by tracking pixel movement. To further improve efficiency, instead of traditional global optimization we use edge-aware filtering as an alternative, which efficiently encourages smoothness while retaining high-frequency information. Compared to other state-of-the-art methods, our framework extracts geometric information in an efficient and accurate fashion. Furthermore, we extend this to light field reconstruction by warping input views to novel positions using the estimated disparity map.
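The warping step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the function name is hypothetical, and using the source view's disparity for backward warping is a simplification that ignores occlusions and disocclusions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_view(src, disparity, du, dv):
    """Synthesise a novel light field view by backward-warping a source
    view along the angular offset (du, dv), scaled by per-pixel disparity.

    src       : (H, W) grayscale source view
    disparity : (H, W) disparity map of the source view, in pixels per
                unit angular step
    du, dv    : angular offset of the novel view relative to the source

    Note: sampling with the source-view disparity is an approximation;
    occlusion handling is omitted in this sketch.
    """
    h, w = src.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Each pixel shifts proportionally to its disparity times the angular step.
    sample_y = ys + dv * disparity
    sample_x = xs + du * disparity
    # Bilinear sampling of the source view at the shifted coordinates,
    # clamping samples that fall outside the image.
    return map_coordinates(src, [sample_y, sample_x], order=1, mode="nearest")
```

With a constant disparity map, this reduces to a uniform shift of the whole view, which is a quick sanity check for the sampling direction.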
Second, we investigate subsampling and reconstruction strategies for light field processing. Limited angular resolution is one of the main issues with acquired light fields, whose massive size usually requires high computational expense to process. Unlike numerous previous works that focus on employing novel techniques, we take a unique angle to optimize the performance of light field reconstruction, concentrating on comparing various commonly used view selection strategies. This work can benefit a wide range of applications, such as camera hardware design, light field compression and light field rendering.
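View selection strategies of the kind compared above can be expressed as index patterns over the angular grid. The sketch below is illustrative only: the strategy names are stand-ins for commonly used patterns, not the specific set evaluated in the thesis.

```python
def select_views(n, strategy):
    """Return (s, t) angular indices of input views chosen from an n x n
    light field grid, for a few commonly used subsampling patterns.
    The remaining views are the ones to be reconstructed.
    """
    c = n // 2  # central angular index
    if strategy == "corners":
        # The four extreme views: widest baseline, fewest inputs.
        return [(0, 0), (0, n - 1), (n - 1, 0), (n - 1, n - 1)]
    if strategy == "diagonal":
        # Main diagonal of the angular grid.
        return [(i, i) for i in range(n)]
    if strategy == "cross":
        # Central row plus central column (centre view counted once).
        row = [(c, t) for t in range(n)]
        col = [(s, c) for s in range(n) if s != c]
        return row + col
    raise ValueError(f"unknown strategy: {strategy}")
```

Counting the selected views for each pattern makes the trade-off explicit: "corners" always uses 4 inputs regardless of grid size, while "diagonal" and "cross" grow with the angular resolution.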
Last but not least, we propose a deep-learning-based framework for light field view synthesis. With the booming development of data-driven techniques, deep learning methods have been successfully applied to light field tasks such as material recognition, depth estimation and view synthesis. However, learning methods usually require a huge amount of data, and collecting sufficient light field data is challenging due to expensive acquisition. Thus, we apply cycle consistency to the light field view synthesis task, which enables training in a self-supervised manner and avoids the need for large training datasets. Experimental results show that our method outperforms other state-of-the-art light field view synthesis methods, especially when the input views have wider disparities.
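The cycle-consistency idea can be sketched as a loss term: synthesise a novel view at some angular offset, map it back to the input position, and penalise the difference from the original input, so no ground truth at the novel position is needed. This is a minimal illustration under assumed names; `synthesize` stands in for the view synthesis network, and the thesis's actual loss formulation may differ.

```python
import numpy as np

def cycle_consistency_loss(view, synthesize, offset):
    """Self-supervised cycle loss for view synthesis.

    view       : (H, W) input view
    synthesize : callable (view, offset) -> synthesised view at the given
                 angular offset (stand-in for the trained network)
    offset     : angular offset of the novel view

    Synthesise forward to +offset, back by -offset, and compare with the
    original input view (L1 error), requiring no ground-truth novel view.
    """
    forward = synthesize(view, offset)    # input -> novel view
    cycle = synthesize(forward, -offset)  # novel view -> back to input
    return float(np.mean(np.abs(cycle - view)))
```

A perfect synthesiser, e.g. an exact horizontal shift on a toy image, drives this loss to zero, which is the fixed point the self-supervised training pushes towards.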
Sponsor:
Science Foundation Ireland (SFI)
Grant Number:
Description:
APPROVED
Author: Chen, Yang
Advisor:
Smolic, Aljosa
Publisher:
Trinity College Dublin. School of Computer Science & Statistics. Discipline of Computer Science
Type of material:
Thesis
Collections:
Availability:
Full text available
Licences: