TY - JOUR
T1 - Scalable Face Image Coding via StyleGAN Prior: Toward Compression for Human-Machine Collaborative Vision
AU - MAO, Qi
AU - WANG, Chongyu
AU - WANG, Meng
AU - WANG, Shiqi
AU - CHEN, Ruijie
AU - JIN, Libiao
AU - MA, Siwei
PY - 2024
Y1 - 2024
AB - The accelerated proliferation of visual content and the rapid development of machine vision technologies pose significant challenges for delivering visual data at a gigantic scale, which must be represented effectively to satisfy both human and machine requirements. In this work, we investigate how hierarchical representations derived from an advanced generative prior facilitate the construction of an efficient scalable coding paradigm for human-machine collaborative vision. Our key insight is that by exploiting the StyleGAN prior, we can learn three-layered representations encoding hierarchical semantics, organized into basic, middle, and enhanced layers that support machine intelligence and human visual perception in a progressive fashion. To achieve efficient compression, we propose a layer-wise scalable entropy transformer that reduces the redundancy between layers. Based on a multi-task scalable rate-distortion objective, the proposed scheme is jointly optimized for machine analysis performance, human perceptual quality, and compression ratio. We validate the feasibility of the proposed paradigm on face image compression. Extensive qualitative and quantitative experimental results demonstrate the superiority of the proposed paradigm over the latest compression standard, Versatile Video Coding (VVC), in terms of both machine analysis and human perception at extremely low bitrates (< 0.01 bpp), offering new insights for human-machine collaborative compression.
KW - generative compression
KW - Human-machine collaborative compression
KW - scalable coding
KW - StyleGAN
UR - http://www.scopus.com/inward/record.url?scp=85181395200&partnerID=8YFLogxK
U2 - 10.1109/TIP.2023.3343912
DO - 10.1109/TIP.2023.3343912
M3 - Journal Article (refereed)
C2 - 38133987
AN - SCOPUS:85181395200
SN - 1057-7149
VL - 33
SP - 408
EP - 422
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -