Show simple item record

dc.contributor.author	Sani-Mohammed, Abubakar
dc.contributor.author	Yao, Wei
dc.contributor.author	Heurich, Marco Dietmar
dc.date.accessioned	2023-01-26T11:52:43Z
dc.date.available	2023-01-26T11:52:43Z
dc.date.created	2022-12-26T10:10:57Z
dc.date.issued	2022
dc.identifier.citation	ISPRS Journal of Photogrammetry and Remote Sensing (P&RS). 2022, 6.
dc.identifier.issn	0924-2716
dc.identifier.uri	https://hdl.handle.net/11250/3046595
dc.description	"© 2022 The Author(s). Published by Elsevier B.V. on behalf of International Society of Photogrammetry and Remote Sensing (ISPRS). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)"
dc.description.abstract	Mapping standing dead trees, especially in natural forests, is important for evaluating forest health, estimating carbon storage, and conserving biodiversity. Natural forests typically cover large areas, which makes classical field surveying challenging, time-consuming, labor-intensive, and unsustainable; effective forest management therefore calls for an automated, cost-effective approach. Deep learning has proven able to deliver excellent results in such tasks. This study presents an adjusted Mask R-CNN deep learning approach for detecting and segmenting standing dead trees in a mixed dense forest from CIR aerial imagery using a limited training dataset of 195 images. First, transfer learning is combined with image augmentation to compensate for the limited training data. Then, hyperparameters are selected to suit the model architecture and our type of data (dead trees in images). Finally, to assess the generalization capability of the model, a test dataset withheld from the deep neural network is used for comprehensive evaluation. Despite the relatively low resolution (20 cm) of the dataset, the model records promising results: a mean average precision, average recall, and average F1-score of 0.85, 0.88, and 0.87, respectively. Consequently, the model could be used to automate standing dead tree detection and segmentation for enhanced forest management, which is equally significant for biodiversity conservation and forest carbon storage estimation.
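The abstract's precision, recall, and F1 scores are related by the standard F1 definition (the harmonic mean of precision and recall). The short Python sketch below is purely illustrative, using the values reported in the abstract; note that averaging F1 per class or per image, as evaluation protocols often do, can differ slightly from the F1 computed directly from the pooled precision and recall.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (standard F1 definition)."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract: mAP 0.85, average recall 0.88.
pooled_f1 = f1_score(0.85, 0.88)
print(round(pooled_f1, 2))  # the per-class/per-image average reported
# in the abstract (0.87) may differ slightly from this pooled value
```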
dc.language.iso	eng
dc.rights	Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri	http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.subject	carbon storage
dc.subject	CIR aerial imagery
dc.subject	forest management
dc.subject	instance segmentation
dc.subject	mask R-CNN
dc.subject	standing dead tree
dc.title	Instance segmentation of standing dead trees in dense forest from aerial imagery using deep learning
dc.type	Peer reviewed
dc.type	Journal article
dc.description.version	publishedVersion
dc.subject.nsi	VDP::Landbruks- og Fiskerifag: 900::Landbruksfag: 910::Skogbruk: 915
dc.source.pagenumber	14
dc.source.volume	6
dc.source.journal	ISPRS Journal of Photogrammetry and Remote Sensing (P&RS)
dc.identifier.doi	10.1016/j.ophoto.2022.100024
dc.identifier.cristin	2097375
cristin.ispublished	true
cristin.fulltext	original
cristin.qualitycode	1


Associated file(s)


This item appears in the following collection(s)


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International