Light field (LF) imaging has proven promising for immersive representation of the real world. However, a major limitation of micro-lens-array-based LF cameras is their low spatial resolution, due to the inherent trade-off between the angular and spatial dimensions. In this paper, we propose a framework showing that a single high-resolution (HR) RGB image effectively improves the performance of LF spatial super-resolution. We adopt an end-to-end convolutional neural network that takes a low-resolution (LR) light field image (LFI) and a single HR center view as inputs. The LFI provides information about the LF structure in the angular domain, while the HR center view provides additional detail in the spatial domain. Experimental results on 57 test LFIs covering a variety of challenging natural scenes demonstrate that our algorithm outperforms current state-of-the-art methods.
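The abstract does not specify how the two inputs are combined, but a common way to feed an LR light field plus an HR reference view to a single CNN is to upsample each LR sub-aperture view to the target resolution and stack it with the HR center view along the channel axis. The sketch below illustrates only that input-preparation step, using NumPy and nearest-neighbor upsampling as a hypothetical stand-in; the function name `build_network_input` and the tensor layout are assumptions, not the paper's actual architecture.

```python
import numpy as np

def upsample_nearest(img, scale):
    # Nearest-neighbor upsampling as a simple stand-in for
    # bicubic or learned upsampling of each LR sub-aperture view.
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def build_network_input(lr_lf, hr_center, scale):
    """Stack upsampled LR views with the HR center view.

    lr_lf:     (U, V, h, w) low-resolution light field (grayscale views)
    hr_center: (h*scale, w*scale) single high-resolution center view
    Returns a (U*V + 1, H, W) tensor a 2-D CNN could consume.
    """
    U, V, h, w = lr_lf.shape
    H, W = h * scale, w * scale
    assert hr_center.shape == (H, W)
    # Angular structure: one channel per upsampled sub-aperture view.
    channels = [upsample_nearest(lr_lf[u, v], scale)
                for u in range(U) for v in range(V)]
    # Spatial detail: the HR center view as an extra channel.
    channels.append(hr_center)
    return np.stack(channels, axis=0)

# Toy example: 3x3 angular views, 4x4 spatial resolution, 2x upscaling.
lf = np.random.rand(3, 3, 4, 4)
center = np.random.rand(8, 8)
x = build_network_input(lf, center, 2)
print(x.shape)  # -> (10, 8, 8)
```

This layout lets ordinary 2-D convolutions see all angular views and the HR detail jointly; an alternative (also used in LF networks) is to keep the angular dimensions separate and apply 4-D or alternating spatial–angular convolutions.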
Title of host publication: International Conference on Digital Signal Processing, DSP
Publication status: Published - Nov 2018
Bibliographical note: This work was supported in part by the CityU Start-up Grant for New Faculty under Grant 7200537/CS and in part by the Hong Kong RGC Early Career Scheme Funds 9048123 (CityU 21211518).
- Deep learning
- Light field