Document Type : Original Research Paper
Authors
Department of Surveying Engineering, Faculty of Civil, Water and Environmental Engineering, Shahid Beheshti University, Tehran, Iran
Abstract
Background and Objectives: Accurate land use classification is essential for effective natural resource management, urban planning, precision agriculture, and environmental monitoring. Such classification helps predict and prevent environmental issues. Methods like high-resolution satellite and aerial imagery, GIS, and deep learning techniques, including Convolutional Neural Networks (CNNs) and the U-Net architecture, offer high precision in analyzing and classifying aerial images. The U-Net network, with its distinctive encoder-decoder structure, excels at delineating land use boundaries. This study focuses on a region in Poland, using the U-Net model to enhance classification accuracy and efficiency through regularization techniques and the Adam optimizer.
Methods: This research used high-resolution aerial images and a deep learning model based on the U-Net architecture to achieve precise land use classification. The approach aimed to improve classification accuracy across four land use categories. High-resolution aerial images were collected and corrected geometrically and radiometrically to create orthophotos. These images were labeled and cropped to 256×256-pixel tiles, with data augmentation techniques such as rotation and flipping applied. The dataset was split into training (75%) and validation (25%) sets, with a test set drawn from 5% of the validation set. The U-Net model comprises convolutional blocks with 3×3 kernels, normalization layers, and dropout layers, organized into encoding, decoding, and output stages. Hyperparameters included the Adam optimizer, a learning rate of 0.0001, and a batch size of 16. Model performance was evaluated using metrics such as overall accuracy, kappa coefficient, and Jaccard score.
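The architecture described above can be sketched in Keras, the framework the study reports using. This is a minimal illustrative sketch, not the authors' exact network: the filter widths (16-128), dropout rate, and loss function are assumptions chosen for brevity; only the 3×3 kernels, normalization and dropout layers, four output classes, 256×256 input size, and Adam optimizer with a learning rate of 0.0001 come from the text.

```python
# Minimal U-Net sketch in Keras following the configuration described above:
# 3x3 convolutions, normalization and dropout layers, encoder/decoder stages,
# four output classes, Adam with learning rate 1e-4.
# Filter counts, dropout rate, and loss are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters, dropout=0.1):
    """Two 3x3 convolutions, each followed by batch normalization, then dropout."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
    return layers.Dropout(dropout)(x)

def build_unet(input_shape=(256, 256, 3), num_classes=4):
    inputs = layers.Input(input_shape)
    # Encoder: downsample while increasing feature channels, keeping skip connections.
    skips, x = [], inputs
    for f in (16, 32, 64):
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 128)  # bottleneck
    # Decoder: upsample and concatenate the matching encoder features.
    for f, skip in zip((64, 32, 16), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skip])
        x = conv_block(x, f)
    # 1x1 convolution maps features to a per-pixel class distribution.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    model = Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_unet()
```

The skip connections, which concatenate encoder features into the decoder, are what give U-Net its ability to recover sharp class boundaries after downsampling.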
Findings: The algorithm was tested on data from Poznań, Poland, using high-resolution aerial images from 2021 with a 25 cm spatial resolution. The data, labeled by experts, covered four land use types: buildings, forests, roads, and water. Of the 769 labeled images, 576 were used for training (expanded to 2304 samples after augmentation), 183 for validation, and 10 for testing. The model, developed in Python with Keras on TensorFlow and trained in Google Colab, achieved high accuracy after 96 iterations, validated against expert-labeled maps. While the U-Net model performed well in general classification, it struggled with rare classes such as water; data augmentation and additional samples for such classes could improve accuracy. Training and validation accuracy reached 0.95 and 0.85, respectively, with the validation loss stabilizing around 0.5. The U-Net model demonstrated significant improvements in accuracy, kappa coefficient, and Jaccard index over previous studies, underscoring the importance of high-quality data and precise parameter tuning.
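The three evaluation metrics named above (overall accuracy, kappa coefficient, Jaccard index) can all be derived from a single confusion matrix over the per-pixel labels. The sketch below shows one way to compute them with NumPy; the tiny label arrays at the end are illustrative, not the study's data.

```python
# Confusion-matrix based evaluation metrics: overall accuracy, Cohen's kappa
# (agreement corrected for chance), and mean Jaccard index (per-class IoU).
import numpy as np

def evaluation_metrics(y_true, y_pred, num_classes):
    """Compute (overall accuracy, kappa, mean Jaccard) from flattened label maps."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    n = cm.sum()
    overall = np.trace(cm) / n
    # Expected agreement under chance, from the row/column marginals.
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (overall - expected) / (1 - expected)
    # Jaccard per class: intersection (diagonal) over union, averaged over classes.
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    jaccard = np.mean(np.divide(tp, union,
                                out=np.zeros_like(tp, dtype=float),
                                where=union > 0))
    return overall, kappa, jaccard

# Illustrative 4-class example (hypothetical labels, not the study's data).
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 0, 1, 2, 2, 2, 3, 1])
oa, kappa, jac = evaluation_metrics(y_true, y_pred, num_classes=4)
# oa = 0.75, kappa ~ 0.667, mean Jaccard = 0.625
```

Note how the mean Jaccard index penalizes rare classes much more heavily than overall accuracy does, which is consistent with the gap the study reports between its 92.47% overall accuracy and 54.45% Jaccard index.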
Conclusion: The study assessed the U-Net deep learning model for accurate land use classification using aerial images. Results indicate that the model effectively identified and differentiated between land use types with high precision. The U-Net structure achieved an overall accuracy of 92.47%, a Jaccard index of 54.45%, and a kappa coefficient of 79.59%. These results demonstrate the model’s strong capability in defining class boundaries. Future improvements could involve utilizing multispectral and hyperspectral images for more detailed information, combining U-Net with other networks like ANN, optimizing hyperparameters with advanced search methods, and employing transfer learning, especially with limited training data. Implementing these strategies could enhance accuracy and efficiency in land use classification, with broader applications in scientific and practical fields.
COPYRIGHTS
© 2024 The Author(s). This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license (https://creativecommons.org/licenses/by-nc/4.0/)