Diagnosis after zooming in: A multilabel classification model by imitating doctor reading habits to diagnose brain diseases


Published in: Medical Physics, Vol. 49, No. 11, pp. 7054-7070
Main authors: Wang, Ruiqian; Fu, Guanghui; Li, Jianqiang; Pei, Yan
Format: Journal Article
Language: English
Publication details: United States: Wiley, 01.11.2022
ISSN: 0094-2405; EISSN: 2473-4209
Abstract

Purpose: Computed tomography (CT) has the advantages of being low cost and noninvasive and is a primary diagnostic method for brain diseases. However, it is challenging for junior radiologists to diagnose CT images accurately and comprehensively, so a system is needed that helps doctors diagnose and explains its predictions. Despite the success of deep learning algorithms in medical image analysis, brain disease classification still faces challenges: research pays little attention to the burden of complex manual labeling and to the incompleteness of prediction explanations. More importantly, most studies only measure the performance of the algorithm and do not measure its effectiveness in doctors' actual diagnostic work.

Methods: In this paper, we propose a model called DrCT2 that can detect brain diseases without using image-level labels and that provides a more comprehensive explanation at both the slice and sequence levels. The model achieves reliable performance by imitating the reading habits of human experts: targeted scaling of primary images from the full slice scans and observation of suspicious lesions for diagnosis. We evaluated the model on two open-access data sets: CQ500 and the RSNA Intracranial Hemorrhage Detection Challenge. In addition, we defined three tasks to comprehensively evaluate model interpretability by measuring whether the algorithm can select the key images containing lesions. To verify the algorithm from the perspective of practical application, three junior radiologists participated in experiments comparing different aspects of diagnosis before and after human-computer cooperation.

Results: The method achieved F1-scores of 0.9370 on CQ500 and 0.8700 on the RSNA data set. The results show that the model has good interpretability while maintaining good performance. The radiologist evaluation experiments show that the model can effectively improve diagnostic accuracy and efficiency.

Conclusions: We proposed a model that can simultaneously detect multiple brain diseases. The report generated by the model can assist doctors in avoiding missed diagnoses, and it has good clinical application value.
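The abstract describes an attention-style "zoom in on suspicious slices" mechanism feeding a multilabel classifier, with slice-level explanations. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' DrCT2 implementation: per-slice features are scored by an attention module, pooled into a scan-level representation, and mapped to independent sigmoid disease probabilities, with the attention weights doubling as a slice-level "which images mattered" signal. Module names, feature sizes, and the number of labels are assumptions.

```python
# Illustrative sketch only; assumes precomputed per-slice features
# (a real system would run a CNN backbone over each CT slice).
import torch
import torch.nn as nn

class SliceAttentionMultilabelNet(nn.Module):
    def __init__(self, feat_dim=512, num_labels=6):
        super().__init__()
        self.slice_encoder = nn.Linear(feat_dim, feat_dim)   # per-slice projection
        self.attn = nn.Sequential(                            # one scalar score per slice
            nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1)
        )
        self.classifier = nn.Linear(feat_dim, num_labels)     # independent logit per disease

    def forward(self, slice_feats):
        # slice_feats: (batch, num_slices, feat_dim)
        h = torch.relu(self.slice_encoder(slice_feats))
        weights = torch.softmax(self.attn(h), dim=1)          # attention over slices
        scan_repr = (weights * h).sum(dim=1)                  # weighted pooling
        probs = torch.sigmoid(self.classifier(scan_repr))     # multilabel probabilities
        return probs, weights.squeeze(-1)                     # weights ~ slice-level explanation

# Toy usage: 2 scans, 30 slices each, 512-d features per slice.
model = SliceAttentionMultilabelNet()
probs, slice_weights = model(torch.randn(2, 30, 512))
print(probs.shape, slice_weights.shape)  # torch.Size([2, 6]) torch.Size([2, 30])
```

Because the disease heads are independent sigmoids rather than a softmax, several findings can be flagged for the same scan, which matches the multilabel setting described in the abstract; the per-slice attention weights give a simple way to surface the key images for a radiologist to review.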
Contributor affiliations: Beijing University of Technology; Computer Science Division, University of Aizu, Aizuwakamatsu, Japan; ARAMIS (Algorithms, models and methods for images and signals of the human brain), Inria de Paris, Institut du Cerveau / Paris Brain Institute (ICM), Sorbonne Université, INSERM, CNRS, AP-HP.
Copyright: 2022 American Association of Physicists in Medicine.
DOI: 10.1002/mp.15871
Funding: National Key R&D Program of China (project no. 2020YFB2104402); China Scholarship Council (CSC). Guanghui Fu is supported by the Chinese Government Scholarship provided by the China Scholarship Council.
Keywords: human-AI interaction; interpretability; medical image classification; attention mechanism
ORCID: 0000-0003-1545-9204 (Pei, Yan)
PMID: 35880443
Page count: 17
Subject terms: [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing; Attention mechanism; Brain Diseases; Human-AI interaction; Humans; Interpretability; Medical image classification; Reading
URI:
https://cir.nii.ac.jp/crid/1873398392565765504
https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fmp.15871
https://www.ncbi.nlm.nih.gov/pubmed/35880443
https://www.proquest.com/docview/2694962224