Mixed local channel attention for object detection

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 123; p. 106442
Main Authors: Wan, Dahang; Lu, Rongsheng; Shen, Siyuan; Xu, Ting; Lang, Xianli; Ren, Zhijie
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.08.2023
ISSN: 0952-1976
Description
Summary: The attention mechanism, one of the most extensively utilized components in computer vision, can assist neural networks in emphasizing significant elements and suppressing irrelevant ones. However, the vast majority of channel attention mechanisms capture only channel feature information and ignore spatial feature information, resulting in poor model representation or object detection performance, while spatial attention modules are often complex and computationally expensive. To strike a balance between performance and complexity, this paper proposes a lightweight Mixed Local Channel Attention (MLCA) module that improves the performance of object detection networks by simultaneously incorporating channel and spatial information, as well as local and global information, to enhance the expressive power of the network. On this basis, the MobileNet-Attention-YOLO (MAY) algorithm is presented for comparing the performance of various attention modules. On the PASCAL VOC and SIMD datasets, MLCA achieves a better balance between model representation efficacy, performance, and complexity than alternative attention techniques. Compared with the Squeeze-and-Excitation (SE) attention mechanism on the PASCAL VOC dataset and the Coordinate Attention (CA) method on the SIMD dataset, the mAP is improved by 1.0% and 1.5%, respectively.
• Proposed a lightweight Mixed Local Channel Attention (MLCA) method.
• Proposed a new object detection network called MobileNet-Attention-YOLO (MAY).
• Verified the feasibility and effectiveness of MLCA and MAY.
DOI: 10.1016/j.engappai.2023.106442
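
The summary above describes MLCA only at a high level: a lightweight channel attention block that mixes local and global, channel and spatial context. The sketch below is one illustrative way such a block could be wired up in PyTorch (a local patch-wise pooling branch and a global pooling branch, each passed through a shared ECA-style 1D channel convolution, then fused). The class name, the local_size and k hyperparameters, and the fusion-by-averaging step are assumptions for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedLocalChannelAttentionSketch(nn.Module):
    """Illustrative mixed local/global channel attention (not the paper's code).

    A local branch (patch-wise average pooling) and a global branch (global
    average pooling) each produce channel weights via a shared ECA-style 1D
    convolution; the two attention maps are fused, upsampled, and used to
    rescale the input feature map.
    """

    def __init__(self, local_size: int = 5, k: int = 3):
        super().__init__()
        self.local_size = local_size  # assumed size of the local pooling grid
        # Shared 1D convolution over the channel dimension (ECA-style).
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def _channel_weights(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (B, C, S, S) -> per-cell channel weights of the same shape.
        b, c, s, _ = pooled.shape
        x = pooled.permute(0, 2, 3, 1).reshape(b * s * s, 1, c)
        x = torch.sigmoid(self.conv(x))
        return x.reshape(b, s, s, c).permute(0, 3, 1, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        local = F.adaptive_avg_pool2d(x, self.local_size)  # local spatial context
        global_ = F.adaptive_avg_pool2d(x, 1)              # global context
        w_local = self._channel_weights(local)             # (B, C, S, S)
        w_global = self._channel_weights(global_)          # (B, C, 1, 1)
        # Fuse local and global attention (simple average, an assumption),
        # then upsample back to the input resolution and rescale the input.
        fused = 0.5 * (w_local + w_global.expand_as(w_local))
        fused = F.interpolate(fused, size=(h, w), mode="nearest")
        return x * fused
```

The block is shape-preserving, so it can be dropped between convolutional stages of a detector backbone: MixedLocalChannelAttentionSketch()(torch.randn(2, 64, 32, 32)) returns a tensor of shape (2, 64, 32, 32).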