Attentional Feature Fusion for End-to-End Blind Image Quality Assessment

Mingliang ZHOU*, Shujun LANG, Taiping ZHANG, Xingran LIAO, Zhaowei SHANG, Tao XIANG, Bin FANG

*Corresponding author for this work

Research output: Journal Article (refereed), peer-reviewed

15 Citations (Scopus)


In this paper, an end-to-end blind image quality assessment (BIQA) model based on feature fusion with an attention mechanism is proposed. We extract the multilayer features of the image and fuse them with an attention mechanism; the fused features are then mapped into a score, realizing image quality assessment without a reference. First, because the human visual perception system processes input information hierarchically from local to global, we use three different neural networks to extract physically meaningful image features: a modified VGG19 and a modified VGG16 extract the substrate texture information and the local edge information, respectively, while a ResNet50 extracts high-level global semantic information. Second, to take full advantage of the multilevel features and avoid monotonic addition in hierarchical feature fusion, we adopt an attention-based feature fusion mechanism that combines the global and local contexts of the features and assigns different weights to the features being fused, so that the model can perceive richer types of distortion. Experimental results on six standard databases show that our approach yields improved performance.
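The fusion step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function name, the choice of global average pooling for the global context, a per-position channel mean for the local context, and a softmax over the feature maps are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentional_fusion(features):
    """Fuse a list of same-shaped feature maps (C, H, W) using weights
    derived from both global and local context (illustrative sketch)."""
    stacked = np.stack(features)                           # (N, C, H, W)
    # Global context: one descriptor per map via global average pooling.
    global_ctx = stacked.mean(axis=(2, 3), keepdims=True)  # (N, C, 1, 1)
    # Local context: per-position summary across channels.
    local_ctx = stacked.mean(axis=1, keepdims=True)        # (N, 1, H, W)
    # Combine both contexts and normalize into per-map fusion weights,
    # so each spatial/channel position is a convex combination of maps.
    weights = softmax(global_ctx + local_ctx, axis=0)      # (N, C, H, W)
    return (weights * stacked).sum(axis=0)                 # (C, H, W)
```

Because the weights sum to one across the maps being fused, the output stays within the range of its inputs; unlike plain (monotonic) addition, each map's contribution varies with both its global statistics and the local content at every position.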

Original language: English
Pages (from-to): 144-152
Number of pages: 9
Journal: IEEE Transactions on Broadcasting
Issue number: 1
Publication status: Published - Mar 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1963-2012 IEEE.


  • attentional feature fusion
  • blind image quality assessment
  • end-to-end
  • hierarchical feature extraction


