Publication detail
Original title
Transformer-based Semantic Segmentation for Large-Scale Building Footprint Extraction from Very-High Resolution Satellite Images
Type
journal article in Web of Science, Jimp
Language
English
Original abstract
Extracting building footprints from extensive very-high spatial resolution (VHSR) remote sensing data is crucial for diverse applications, including surveying, urban studies, population estimation, identification of informal settlements, and disaster management. Although convolutional neural networks (CNNs) are commonly utilized for this purpose, their effectiveness is constrained by limitations in capturing long-range relationships and contextual details due to the localized nature of convolution operations. This study introduces the masked-attention mask transformer (Mask2Former), based on the Swin Transformer, for building footprint extraction from large-scale satellite imagery. To enhance the capture of large-scale semantic information and extract multiscale features, a hierarchical vision transformer with shifted windows (Swin Transformer) serves as the backbone network. An extensive analysis compares the efficiency and generalizability of Mask2Former with four CNN models (PSPNet, DeepLabV3+, UpperNet-ConvNext, and SegNeXt) and two transformer-based models (UpperNet-Swin and SegFormer) featuring different complexities. Results reveal superior performance of transformer-based models over CNN-based counterparts, showcasing exceptional generalization across diverse testing areas with varying building structures, heights, and sizes. Specifically, Mask2Former with the Swin transformer backbone achieves a mean intersection over union between 88% and 93%, along with a mean F-score (mF-score) ranging from 91% to 96.35% across various urban landscapes.
Keywords
remote sensing; satellite imagery; Mask2Former; CNN; Swin Transformer; vision transformer
Authors
GIBRIL, M.; AL-RUZOUQ, R.; SHANABLEH, A.; JENA, R.; BOLCEK, J.; ZULHAIDI MOHD SHAFRI, H.; GHORBANZADEH, O.
Published
9 March 2024
Publisher
Elsevier
ISSN
1879-1948
Journal
ADVANCES IN SPACE RESEARCH
Volume
73
Issue
10
Country
United Kingdom of Great Britain and Northern Ireland
Pages from
4937
Pages to
4954
Page count
18
URL
https://www.sciencedirect.com/science/article/pii/S0273117724002205
Full text in the Digital Library
http://hdl.handle.net/11012/245513
BibTeX
@article{BUT188212,
  author  = "Mohamed Barakat A. {Gibril} and Rami {Al-Ruzouq} and Abdallah {Shanableh} and Ratiranjan {Jena} and Jan {Bolcek} and Helmi {Zulhaidi Mohd Shafri} and Omid {Ghorbanzadeh}",
  title   = "Transformer-based Semantic Segmentation for Large-Scale Building Footprint Extraction from Very-High Resolution Satellite Images",
  journal = "ADVANCES IN SPACE RESEARCH",
  year    = "2024",
  volume  = "73",
  number  = "10",
  pages   = "4937--4954",
  doi     = "10.1016/j.asr.2024.03.002",
  issn    = "1879-1948",
  url     = "https://www.sciencedirect.com/science/article/pii/S0273117724002205"
}