Automatic Segmentation of Standing Trees from Forest Images Based on Deep Learning
| Published in: | Sensors (Basel, Switzerland), Volume 22, Issue 17, p. 6663 |
|---|---|
| Main authors: | Shi, Lijuan; Wang, Guoying; Mo, Lufeng; Yi, Xiaomei; Wu, Xiaoping; Wu, Peng |
| Format: | Journal Article |
| Language: | English |
| Published: | Basel: MDPI AG, 01.09.2022 |
| Subjects: | Accuracy; attention mechanism; Deep learning; Neural networks; semantic segmentation; Semantics; standing tree image; Trees |
| ISSN: | 1424-8220 |
| Online access: | Get full text |
| Abstract | Semantic segmentation of standing trees is important for obtaining standing-tree factors from images automatically and effectively. For the accurate segmentation of multiple standing trees against complex backgrounds, traditional methods suffer from shortcomings such as low segmentation accuracy and the need for manual intervention. To segment standing-tree images accurately and effectively, this article proposes SEMD, a lightweight segmentation network model based on deep learning. DeepLabV3+ is chosen as the base framework to perform multi-scale fusion of the convolutional features of the standing trees, reducing the loss of image edge detail and feature information during segmentation. MobileNet, a lightweight network, is integrated into the backbone to reduce computational complexity. Furthermore, SENet, an attention mechanism, is added to extract feature information efficiently and to suppress useless feature information. Extensive experimental results show that, with the SEMD model, the MIoU of semantic segmentation of standing-tree images of different varieties and categories reaches 91.78% under simple backgrounds and 86.90% under complex backgrounds. The proposed lightweight segmentation model SEMD can thus solve the problem of segmenting multiple standing trees with high accuracy. |
|---|---|
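The record carries no code, but the abstract spells out the SEMD architecture: DeepLabV3+ as the base framework, MobileNet as the backbone, and SENet-style attention. The sketch below is only an illustrative approximation of that combination, not the authors' implementation: it uses torchvision's `deeplabv3_mobilenet_v3_large` (plain DeepLabV3 rather than the "+" decoder variant described in the paper), inserts a hypothetical squeeze-and-excitation block after the ASPP module, and assumes two classes (tree vs. background) and a recent torchvision (>= 0.13) API. The `mean_iou` helper illustrates how the MIoU figures quoted above (91.78% / 86.90%) are typically computed.

```python
# Illustrative sketch only -- not the published SEMD code.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large


class SEBlock(nn.Module):
    """Squeeze-and-excitation: re-weight feature channels using global context."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
        self.fc = nn.Sequential(             # excitation: bottleneck MLP + sigmoid gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # channel-wise recalibration


class TreeSegmenter(nn.Module):
    """MobileNetV3-backed DeepLab model with an SE block after the ASPP module."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = deeplabv3_mobilenet_v3_large(weights=None, num_classes=num_classes)
        # Hypothetical SE placement: recalibrate the 256-channel ASPP output
        # before the remaining convolutions of the segmentation head.
        head = list(self.net.classifier)  # [ASPP, Conv, BN, ReLU, Conv]
        self.net.classifier = nn.Sequential(head[0], SEBlock(256), *head[1:])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)["out"]  # per-pixel class logits, upsampled to input size


def mean_iou(logits: torch.Tensor, target: torch.Tensor, num_classes: int) -> float:
    """Mean intersection-over-union (the MIoU metric quoted in the abstract)."""
    pred = logits.argmax(dim=1)
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / max(len(ious), 1)


if __name__ == "__main__":
    model = TreeSegmenter(num_classes=2).eval()  # tree vs. background (assumed)
    image = torch.randn(1, 3, 512, 512)          # one RGB image
    with torch.no_grad():
        logits = model(image)                    # -> (1, 2, 512, 512)
    labels = torch.randint(0, 2, (1, 512, 512))
    print(logits.shape, mean_iou(logits, labels, num_classes=2))
```

Attaching the SE block to the 256-channel ASPP output is one plausible placement; the paper may instead attach it elsewhere in the backbone or decoder.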
| Audience | Academic |
| Author | Mo, Lufeng; Yi, Xiaomei; Wang, Guoying; Wu, Xiaoping; Wu, Peng; Shi, Lijuan |
| AuthorAffiliation | 1 College of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou 311300, China; 2 School of Information Engineering, Huzhou University, Huzhou 313000, China |
| ContentType | Journal Article |
| Copyright | COPYRIGHT 2022 MDPI AG 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. 2022 by the authors. 2022 |
| DOI | 10.3390/s22176663 |
| Discipline | Engineering |
| EISSN | 1424-8220 |
| ExternalDocumentID | oai_doaj_org_article_786a29abceb94cf698a2e75b7fa93f13 PMC9460454 A746531714 10_3390_s22176663 |
| GrantInformation | Key Research and Development Program of Zhejiang Province (grant 2021C02005); Natural Science Foundation of China (grant U1809208) |
| ISICitedReferencesCount | 17 |
| ISSN | 1424-8220 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 17 |
| Language | English |
| License | Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
| ORCID | 0000-0001-8946-3447 |
| OpenAccessLink | https://doaj.org/article/786a29abceb94cf698a2e75b7fa93f13 |
| PMID | 36081122 |
| PublicationCentury | 2000 |
| PublicationDate | 2022-09-01 |
| PublicationDecade | 2020 |
| PublicationPlace | Basel |
| PublicationTitle | Sensors (Basel, Switzerland) |
| PublicationYear | 2022 |
| Publisher | MDPI AG MDPI |
| StartPage | 6663 |
| SubjectTerms | Accuracy attention mechanism Deep learning Neural networks semantic segmentation Semantics standing tree image Trees |
| Title | Automatic Segmentation of Standing Trees from Forest Images Based on Deep Learning |
| URI | https://www.proquest.com/docview/2711498014 https://www.proquest.com/docview/2712858625 https://pubmed.ncbi.nlm.nih.gov/PMC9460454 https://doaj.org/article/786a29abceb94cf698a2e75b7fa93f13 |
| Volume | 22 |