Bottleneck Transformers for Visual Recognition
We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework, surpassing the previous best published single-model, single-scale results of ResNeSt [67] evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 1.64x faster in "compute" time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.
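The architectural change the abstract describes is small enough to sketch directly: take a standard ResNet bottleneck block and swap its 3x3 spatial convolution for global multi-head self-attention over the feature map. The PyTorch sketch below is illustrative only, not the authors' reference implementation; the module names (`MHSA2d`, `BoTBlock`), head count, and channel widths are assumptions, and the paper's 2D relative position encodings are omitted for brevity.

```python
import torch
import torch.nn as nn


class MHSA2d(nn.Module):
    """Global (all-to-all) multi-head self-attention over an H x W feature map.

    Note: the paper also adds 2D relative position encodings to the attention
    logits; they are omitted here to keep the sketch short.
    """

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)  # each (B, C, H, W)

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            # (B, C, H, W) -> (B, heads, H*W, C/heads)
            return t.reshape(b, self.heads, c // self.heads, h * w).transpose(-1, -2)

        q, k, v = map(split_heads, (q, k, v))
        attn = (q @ k.transpose(-1, -2)) * self.scale  # (B, heads, HW, HW)
        out = attn.softmax(dim=-1) @ v                 # (B, heads, HW, C/heads)
        return out.transpose(-1, -2).reshape(b, c, h, w)


class BoTBlock(nn.Module):
    """ResNet bottleneck with the 3x3 convolution replaced by self-attention."""

    def __init__(self, in_ch: int, mid_ch: int, heads: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            MHSA2d(mid_ch, heads),  # <- replaces the 3x3 spatial convolution
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, in_ch, 1, bias=False),
            nn.BatchNorm2d(in_ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(x + self.net(x))


# Quick shape check on a c5-sized feature map (hypothetical sizes).
feats = torch.randn(2, 2048, 14, 14)
block = BoTBlock(in_ch=2048, mid_ch=512)
print(block(feats).shape)  # torch.Size([2, 2048, 14, 14])
```

Because the swap is confined to the final three bottleneck blocks, where the spatial resolution of the feature map is already low, the quadratic cost of all-to-all attention stays modest, which is consistent with the abstract's claim of minimal latency overhead.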
| Published in: | Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online), pp. 16514 - 16524 |
|---|---|
| Authors: | Aravind Srinivas (UC Berkeley), Tsung-Yi Lin (Google Research), Niki Parmar (Google Research), Jonathon Shlens (Google Research), Pieter Abbeel (UC Berkeley), Ashish Vaswani (Google Research) |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 01.06.2021 |
| Subjects: | Adaptation models; Botnet; Computational modeling; Computer architecture; Computer vision; Image segmentation; Object detection |
| ISSN: | 1063-6919 |
| Online access: | Full text: https://ieeexplore.ieee.org/document/9577771 |
| DOI: | 10.1109/CVPR46437.2021.01625 |
| EISBN: | 9781665445092 |