Diagnosis after zooming in: A multilabel classification model by imitating doctor reading habits to diagnose brain diseases
| Published in: | Medical Physics, Volume 49, Issue 11, pp. 7054-7070 |
|---|---|
| Main authors: | Wang, Ruiqian; Fu, Guanghui; Li, Jianqiang; Pei, Yan (ORCID: 0000-0003-1545-9204) |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: Wiley, 01.11.2022 |
| Topics: | Brain Diseases; Humans; Reading; attention mechanism; human-AI interaction; interpretability; medical image classification |
| ISSN: | 0094-2405 (EISSN: 2473-4209) |
| DOI: | 10.1002/mp.15871 |
| PMID: | 35880443 |
| Funding: | National Key R&D Program of China (2020YFB2104402); China Scholarship Council (CSC) |
| Copyright: | 2022 American Association of Physicists in Medicine |
| Online access: | Get full text |
Abstract

Purpose
Computed tomography (CT) has the advantages of being low cost and noninvasive and is a primary diagnostic method for brain diseases. However, interpreting CT images accurately and comprehensively is a challenge for junior radiologists, so it is necessary to build a system that can help doctors diagnose and that explains its predictions. Despite the success of deep learning algorithms in medical image analysis, the task of brain disease classification still faces challenges: researchers pay too little attention to the cost of detailed manual labeling and to the incompleteness of prediction explanations. More importantly, most studies only measure the performance of the algorithm, not its effectiveness in doctors' actual diagnostic work.
Methods
In this paper, we propose a model called DrCT2 that can detect brain diseases without using image-level labels and that provides a more comprehensive explanation at both the slice and sequence levels. The model achieves reliable performance by imitating the reading habits of human experts: it zooms in on key images selected from the full slice scans and examines the suspicious lesions to reach a diagnosis. We evaluated our model on two open-access data sets: CQ500 and the RSNA Intracranial Hemorrhage Detection Challenge. In addition, we defined three tasks that comprehensively evaluate model interpretability by measuring whether the algorithm selects the key images that contain lesions. To verify the algorithm from the perspective of practical application, three junior radiologists were invited to participate in the experiments, and we compared their diagnostic performance before and after human-computer cooperation in several aspects.
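This record gives no implementation details, but the zoom-in idea described above, in which slices are scored so that the suspicious ones drive a scan-level multilabel decision made without image-level labels, can be sketched as follows. All module names, feature dimensions, and the number of disease labels are illustrative assumptions, not the authors' DrCT2 code.

```python
import torch
import torch.nn as nn


class SliceAttention(nn.Module):
    """Scores each slice so that suspicious slices dominate the scan-level decision."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, slice_feats: torch.Tensor) -> torch.Tensor:
        # slice_feats: (batch, num_slices, feat_dim) -> weights: (batch, num_slices)
        return torch.softmax(self.score(slice_feats).squeeze(-1), dim=1)


class MultilabelScanClassifier(nn.Module):
    """Pools slice features with attention and predicts one logit per disease."""

    def __init__(self, feat_dim: int = 256, num_labels: int = 5):
        super().__init__()
        self.attention = SliceAttention(feat_dim)
        self.head = nn.Linear(feat_dim, num_labels)

    def forward(self, slice_feats: torch.Tensor) -> torch.Tensor:
        weights = self.attention(slice_feats)                      # (batch, num_slices)
        pooled = (weights.unsqueeze(-1) * slice_feats).sum(dim=1)  # (batch, feat_dim)
        return self.head(pooled)                                   # multilabel logits


# Scan-level supervision only: binary cross-entropy per disease, no image-level labels.
model = MultilabelScanClassifier()
feats = torch.randn(2, 30, 256)               # 2 scans, 30 slices, 256-d features per slice
labels = torch.randint(0, 2, (2, 5)).float()  # scan-level multilabel targets
loss = nn.BCEWithLogitsLoss()(model(feats), labels)
print(loss.item())
```

In a sketch like this, the softmax attention weights double as a slice-level explanation: the slices with the largest weights are the ones treated as suspicious.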
Results
The method achieved F1-scores of 0.9370 on CQ500 and 0.8700 on the RSNA data set. The results show that our model is interpretable while maintaining good performance. The radiologist evaluation experiments showed that our model can effectively improve both the accuracy and the efficiency of diagnosis.
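For reference, multilabel F1-scores such as those quoted above are computed per disease label and then averaged; the toy labels and the macro averaging mode in this sketch are assumptions, since the record does not state the paper's exact evaluation protocol.

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy scan-level labels: rows are scans, columns are disease labels.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 0, 0]])

# Macro F1: compute F1 for each disease label, then average across labels.
print(f1_score(y_true, y_pred, average="macro"))
```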
Conclusions
We proposed a model that can simultaneously detect multiple brain diseases. The report generated by the model can assist doctors in avoiding missed diagnoses, and it has good clinical application value.