Description
Image restoration in physics-based vision (such as image denoising, dehazing, and deraining) comprises fundamental tasks in computer vision that are of great significance to the processing of visual data and to subsequent applications in different fields. Existing methods adopt a deconstructive idea that relies on manually engineered features and handcrafted models, and the related research mainly focuses on exploring the physical properties and mechanisms of the imaging process. However, the relevant theories and hypothetical models may involve human bias and may fail to simulate the complex physical systems encountered in practice. With the progress of representation learning, generative methods, especially generative adversarial networks (GANs), are considered a promising solution for image restoration tasks. They allow end-to-end learning of image-to-image translation processes without understanding their physical mechanisms, and they also enable intelligent interpretation and semantic-level understanding of the input images by incorporating external knowledge from big data. Nevertheless, recent studies that apply GAN models do not achieve satisfactory performance compared with the deconstructive methods, and there is scarcely any study explaining how generative models work in these tasks.

In this research, we analyze the information-theoretic framework of these generative models and identify their flow and sources of information as well as their optimization objectives. Based on this theory, we find that directly applying GAN models to image restoration tasks may suffer from three key problems: over-invested abstraction processes, inherent detail loss, and imbalanced optimization with vanishing gradients. We formulate and theoretically explain these problems and provide empirical evidence for each of them.
To address these problems, we propose corresponding solutions, from optimizing the sampling network structure and enhancing detail extraction and accumulation to altering the measures used in the loss function. Finally, we validate our solutions on four datasets and achieve significant improvements over the baseline models.
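The vanishing-gradient problem mentioned above is a well-known property of the original minimax GAN objective: when the discriminator confidently rejects generated samples, the saturating generator loss log(1 − D(G(z))) yields almost no gradient, whereas the non-saturating alternative −log D(G(z)) keeps a strong learning signal. The sketch below illustrates this with the closed-form gradients with respect to the discriminator logit; it is a generic illustration of the classic GAN analysis, not the specific remedy proposed in this work.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def grad_saturating(a):
    # d/da log(1 - sigmoid(a)) = -sigmoid(a)
    return -sigmoid(a)

def grad_non_saturating(a):
    # d/da [-log sigmoid(a)] = sigmoid(a) - 1
    return sigmoid(a) - 1.0

# Early in training the discriminator easily rejects fakes,
# so the logit a for a generated sample is strongly negative.
a = -8.0
print(grad_saturating(a))      # ~ -3.4e-4: the gradient vanishes
print(grad_non_saturating(a))  # ~ -1.0: a usable learning signal remains
```

This is one reason practical GAN training replaces or reweights the generator's loss; the abstract's proposal to alter the loss-function measures addresses the same class of imbalance.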
| Period | 11 Apr 2022 |
| --- | --- |
| Event title | Postgraduate Seminar Series |
| Event type | Public Lecture |