Original Research Paper
Photogrammetry
M. Shafiei; A. Milan; A. Vafaeinejad
Abstract
Background and Objectives: The integration of imaging technologies with geolocation sensors such as GPS, accelerometers, gyroscopes, and compasses enables precise determination of the camera’s position and orientation during image capture. This capability plays a crucial role in simplifying the georeferencing process of 3D models, particularly in mobile mapping systems and short-range terrestrial applications. In this context, smartphones, equipped with these geolocation sensors, have gained significant prominence as imaging devices. This study examines the feasibility of direct georeferencing of 3D models generated through Structure-from-Motion (SfM) photogrammetry using camera position and orientation data simultaneously recorded by the sensors embedded in a smartphone.

Methods: The research data comprise ground control points collected using a Leica TS09 R1000 total station in reflectorless mode, along with data obtained from an iPhone 13 Pro smartphone. In the proposed method, the relative orientation parameters estimated by the Structure-from-Motion algorithm were refined using orientation parameters directly measured by the smartphone’s motion sensors. Image acquisition was conducted under static conditions; therefore, the estimated and measured angles at each station should remain constant and, in the absence of errors, should be identical. The final model was oriented using the average of the measured angles at each station. The measured distance was then used to establish the model scale, and the average coordinates obtained at each station were employed to transform the model into the UTM coordinate system. Additionally, an alternative model was generated using the precise positions of the imaging stations via indirect georeferencing.
The RMSE of the check points was used as the accuracy assessment metric for the proposed method.

Findings: The studied stockpile was approximately 150 meters in length and 25 meters in height, oriented roughly from south to north. In this study, five 3D models were generated based on the collected data. The first model was constructed solely using images, positional data, and angles recorded by the smartphone’s sensors. Subsequently, coded targets were incrementally added as control points, with one, two, and three control points being incorporated into the equations. The fifth model was generated using the same images but with the precise coordinates of twelve imaging stations measured by a total station.

In the first model, the positioning error remained at the accuracy level of the iOS single-point positioning system (approximately 2 meters in planimetric accuracy and 11 meters in height). In the model with a single control point, aligning the model with the low-accuracy GPS coordinates from the smartphone resulted in an azimuth calculation error, leading to model rotation. Although the model was correctly transferred, the azimuth error caused a misalignment with the reference system, resulting in an error of approximately 0.40 meters. With the addition of two and three control points, accuracy improved, reaching 0.004 meters, which matched the accuracy achieved using precise camera center coordinates. In the fifth case, where the exact coordinates of the imaging stations were used, an accuracy of 0.0007 meters was obtained.

Conclusion: As observed, GPS accuracy remains the most challenging aspect of this system. Most smartphones do not provide raw GNSS data or high-quality raw positioning data; instead, they only offer the final processed position determined by the operating system.
Given the hardware and environmental limitations, the following considerations are recommended for future studies:
a) Development of a handheld mobile mapping system by integrating survey-grade GPS with a smartphone camera.
b) Since the study area consists of earthwork features with irregular shapes, when applying this method to structured objects such as buildings, it is recommended to refine the relative exterior orientation parameters of the images using constraints derived from prominent vertical and horizontal features in the scene and to incorporate these constraints into the bundle adjustment process.
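The direct-georeferencing step described above amounts to a 3D similarity transform: the sensor-averaged azimuth fixes the rotation, the measured distance fixes the scale, and the averaged station coordinates fix the translation, with model quality then reported as RMSE over check points. A minimal sketch under simplifying assumptions (rotation about the vertical axis only; all function names are illustrative, not the paper's implementation):

```python
import numpy as np

def rotation_z(azimuth_rad):
    """Rotation about the vertical axis by the measured azimuth."""
    c, s = np.cos(azimuth_rad), np.sin(azimuth_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def georeference(model_pts, scale, azimuth_rad, translation):
    """Similarity transform: X_utm = s * R(azimuth) @ X_model + t."""
    R = rotation_z(azimuth_rad)
    return scale * (model_pts @ R.T) + translation

def rmse(estimated, reference):
    """Root-mean-square error over check points (the paper's accuracy metric)."""
    return float(np.sqrt(np.mean(np.sum((estimated - reference) ** 2, axis=1))))
```

A full solution would build the rotation from all three smartphone-measured angles (roll, pitch, azimuth), but the structure of the transform is the same.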
Original Research Paper
Photogrammetry
M. Heidarimozaffar; Z. Dalvand
Abstract
Background and Objectives: The extraction of 2D floorplans of building interiors plays a vital role in various domains, including architecture, surveying, building information modeling (BIM), robotics, and virtual reality. Mobile laser scanners capture the geometric structure of indoor environments with millimeter-level accuracy and record the results as point cloud data. Point clouds are a rich source of information for generating 2D floorplans of indoor spaces. However, surface reflection noise, occlusions caused by indoor objects, and the non-uniform density of points pose significant challenges for processing such data. Initially, 2D floorplan extraction relied on classical geometric methods. In recent years, however, deep learning-based approaches have gained increasing attention due to their strong ability to understand complex patterns and their robustness to noise. The main objective of this study is to present an effective framework for extracting 2D floorplans of building interiors from point cloud data using deep learning methods and to compare its performance with that of classical techniques.

Methods: In this study, an effective framework is proposed for extracting 2D floorplans of indoor building spaces from point cloud data, consisting of three sequential steps: data preprocessing, model implementation, and final evaluation. This framework enables a direct comparison between classical methods and deep learning approaches within a unified setting. Point cloud data are inherently discrete and unstructured, making direct processing challenging. In the preprocessing step, point clouds were projected onto a 2D space to generate density images, thereby reducing computational complexity. In the second step, two deep learning models, U-Net and Pix2Pix, as well as the classical Hough Transform algorithm, were implemented, with the density images serving as a common input for all methods.
In the third step, the proposed framework was evaluated using publicly available datasets, including FloorNet and Structure3D. The input data were split into training, validation, and test sets, and data augmentation techniques were applied to improve model generalization. The performance of the models was assessed using the Dice Score and Intersection over Union (IoU) metrics.

Findings: Deep learning models demonstrated satisfactory performance on samples without occlusions, achieving accuracy levels above 90%. In particular, the U-Net model achieved a Dice Score of 97% on the Structure3D dataset. However, in samples containing occlusions, the models were unable to fully extract the floorplans. In contrast, the Hough Transform algorithm performed reasonably well in line detection but exhibited limitations in generating coherent and topologically valid outputs suitable for indoor map modeling due to its inability to capture topological structure. Moreover, the trial-and-error process required to tune the algorithm’s parameters significantly increased its runtime.

Conclusion: The findings of this study indicate that deep learning methods, when provided with complete data, are capable of accurately and structurally extracting 2D floorplans from point clouds. However, under real-world conditions where occlusion is inevitable, developing models that are robust to incomplete data becomes essential. To address this challenge, future research directions include employing hybrid architectures and incorporating complementary data sources such as RGB images or depth maps. The proposed framework in this study serves as an effective step toward the systematic comparison of 2D floorplan extraction methods and provides a foundation for developing more advanced models suitable for real-world applications.
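The two evaluation metrics named above are standard overlap measures on binary floorplan masks: Dice is 2|A∩B| / (|A| + |B|) and IoU is |A∩B| / |A∪B|. A minimal sketch (assuming NumPy arrays as predicted and ground-truth masks; the small epsilon guards against empty masks):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) on binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU = |A∩B| / |A∪B| on binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

Dice is always at least as large as IoU on the same masks, which is consistent with papers reporting slightly higher Dice figures.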
Review Paper
Remote Sensing
M.R. Zargar; A. Aghabalaei; S.A. Khazai; A. Mohtadi
Abstract
Background and Objectives: With the expansion of radar remote sensing data and increased access to high-resolution imagery through sensors such as Sentinel-1, change detection using deep learning has emerged as a strategic and innovative field in geospatial sciences. Radar imagery, with its capabilities for day-and-night imaging, cloud penetration, and sensitivity to structural characteristics of the Earth’s surface, provides rich but complex data requiring advanced machine learning architectures for effective analysis. Accordingly, this study aims to systematically review deep learning-based methods for change detection in radar images, with a focus on comparative analysis of architectures, their strengths and limitations, and future research directions.

Methods: This systematic review covers literature published between 2014 and 2025 and includes 44 selected studies from reputable databases such as IEEE, Elsevier, and MDPI. Inclusion criteria involved the use of SAR data, application of deep learning algorithms, availability of quantitative performance metrics (e.g., accuracy and F1-score), and operational relevance in domains such as urban monitoring, natural resource assessment, and disaster management. The studies were classified based on the type of learning approach (supervised, unsupervised, self-supervised, multi-source) and architecture used (MLP, CNN, U-Net, Autoencoder, LSTM, GAN, MSCDUNet), and were analyzed using comparative tables.

Findings: The results indicate that supervised architectures such as U-Net performed best in urban and disaster-related applications, achieving up to 95% accuracy and F1-scores between 0.85 and 0.93. In unsupervised approaches, combining CNN with fuzzy clustering (FCM) reached accuracy levels up to 99.6%. Autoencoder-based models were successful in denoising and feature compression, while GAN architectures improved network performance through data augmentation.
Multi-source models like MSCDUNet, integrating radar and optical data, reported F1-scores of up to 0.93. However, challenges persist, including inconsistent reporting of standard metrics such as F1, limited generalizability of models, and the computational complexity of processing heterogeneous datasets.

Conclusion: Despite significant advancements in the use of deep learning for change detection, ongoing challenges include the scarcity of labeled data, lack of publicly available benchmark multi-source datasets, and the limited availability of lightweight algorithms for real-time applications. Future research should prioritize self-supervised methods such as contrastive learning, the development of noise-resistant and lightweight architectures for UAV and edge deployments, and the creation of standardized open-access datasets with comprehensive metrics. This study, by offering a structured classification and comparative evaluation of algorithms, aims to inform intelligent decision-making in the design of change detection systems for researchers and developers alike.
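The F1-score used throughout these comparisons is the harmonic mean of precision and recall, computed from a change map's confusion counts (true positives, false positives, false negatives against the reference change mask). A minimal sketch:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall from confusion counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because F1 ignores true negatives, it is better suited than overall accuracy to change detection, where unchanged pixels typically dominate the scene; that is one reason the review flags inconsistent F1 reporting as a problem.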
Original Research Paper
Software Engineering
H. Sanei Arani; M. Esmaeili; M. Afshar Kazimi
Abstract
Background and Objectives: Optimizing the placement of surveillance cameras is a fundamental component of intelligent urban traffic management systems. Proper camera deployment significantly enhances traffic monitoring accuracy and reduces incident detection time. As a result, the problem of optimal camera placement has long been a research challenge for many scholars. Modern approaches employ multi-objective optimization methods to enable simultaneous analysis of various influential parameters. Despite significant advancements in optimization techniques, current methods rely on 2D and 3D grid-based modeling of the study area, which faces major limitations in complex urban environments. In these methods, the space is divided into a regular grid, and optimal camera locations are selected with appropriate angular rotation. However, in real urban topologies, road networks consist of nested and irregular paths, causing many computed points to fall outside accessible routes. This mismatch between theoretical models and practical conditions severely undermines the effectiveness of traditional methods. Given these limitations, developing a new framework that simultaneously considers real urban topologies, physical constraints, and urban planning requirements has become essential. New methods must integrate actual traffic routes, permissible camera installation points, and mandatory angle adjustments into their models. This requires using realistic virtual traffic data and applying artificial intelligence algorithms for optimization.

Methods: The current research analyzes urban maps and requires a comprehensive and precise city map to identify optimal locations based on real data. The map is represented as a matrix—a 2D grid of points—where accessible paths and obstacles are defined by different numerical values.
Since a street's width includes multiple points, a central row is selected to represent the path, restricting vehicle movement to this route and providing an ideal location for surveillance cameras. The optimal placement process is systematically conducted in four stages after matrix formation. First, origin-destination pairs are randomly generated using a population density-based probability distribution. Second, optimal routing for each pair is simulated based on traffic behavior—shortest path selection during normal hours and alternative routes during peak hours. Third, all generated routes are aggregated to create virtual traffic, and path density is calculated for traffic-based optimization. Finally, considering different camera types based on purchase cost and installation expenses, placement is optimized for cost efficiency.

Findings: One hundred thousand new data points were generated, and two experiments were conducted. The first experiment used a greedy algorithm to maximize camera coverage across all paths. The second experiment applied the proposed method, first identifying high-traffic points, then maximizing coverage in these areas while minimizing installation costs. Results showed that the proposed method improves monitoring efficiency by 40% on new routes and reduces project costs by 6.6%.

Conclusion: In urban surveillance camera placement, methods focusing solely on maximum path coverage are ineffective, and traffic assessment is crucial for optimization. Additionally, since geometric features of paths are eliminated in the proposed method, it is scalable and applicable to any city and routing system. Furthermore, urban planners often purchase cameras with varying fields of view and brands, which can be leveraged as an opportunity for cost optimization.
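The strategy of the second experiment (maximize coverage of high-traffic cells per unit camera cost) can be sketched as a traffic-weighted greedy selection. All data structures below are illustrative assumptions, not the paper's implementation: each candidate site maps to a purchase/installation cost and the set of path cells it covers, and each cell carries its virtual traffic volume.

```python
def place_cameras(candidates, traffic, budget):
    """
    Greedy traffic-weighted coverage.
    candidates: site -> (cost, set of covered path cells)
    traffic:    cell -> virtual traffic volume on that cell
    Repeatedly pick the affordable site with the highest
    newly-covered traffic per unit cost until nothing qualifies.
    """
    covered, chosen, spent = set(), [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for site, (cost, cells) in candidates.items():
            if site in chosen or spent + cost > budget:
                continue
            gain = sum(traffic[c] for c in cells - covered)
            ratio = gain / cost
            if ratio > best_ratio:
                best, best_ratio = site, ratio
        if best is None:  # no affordable site adds new traffic coverage
            return chosen, covered
        cost, cells = candidates[best]
        chosen.append(best)
        covered |= cells
        spent += cost
```

The per-unit-cost criterion is what lets heterogeneous camera types (different fields of view and prices) compete fairly, matching the paper's point that mixed camera fleets are an opportunity for cost optimization.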
Original Research Paper
Photogrammetry
A. Shahsavari Babukani; S. Sadeghian; A. Vafaeinejad; D. Sedighpour
Abstract
Background and Objectives: Three-dimensional modeling and documentation of immovable cultural heritage, especially in the fields of conservation, restoration, and sustainable management of these assets, play a significant role in preserving the historical and cultural identity of communities. This approach not only enables precise and comprehensive recording of the physical and spatial characteristics of historical buildings and sites but also provides a scientific basis for comparative analyses, defining heritage boundaries, damage assessment, and designing restoration programs. Utilizing technologies such as drone-based photogrammetry, as a non-destructive, accurate, and rapid method, allows for comprehensive data acquisition of historical and natural structures with minimal human intervention, facilitating documentation, analysis, and conservation/restoration planning processes. One of the notable advantages of this method is the significant reduction in costs and execution time compared to other surveying and documentation techniques, making it a cost-effective and efficient option.

The importance of this method increases in areas with challenging geographical conditions, including mountainous, inaccessible, or restricted-access regions, as it enables precise data collection without the need for prolonged onsite presence. Therefore, drone-based photogrammetry can be considered an innovative and practical solution for recording, conserving, and optimizing planning in projects related to cultural heritage.

Methods: This research was conducted in three stages: library, field, and office. In the library stage, basic information regarding the historical background, geographical location, and condition of the studied site was collected. In the field stage, ground control points were first surveyed using a dual-frequency satellite positioning receiver, and their precise coordinates were recorded to enable data georeferencing.
Then, to capture accurate data of the target area, a DJI Mavic Mini 2 drone was used for aerial photography. During several planned flights, the drone captured a series of high-resolution aerial images from different angles of the castle and its surrounding environment. The images were acquired with appropriate overlap (80% longitudinal and 50% lateral) to enable accurate three-dimensional modeling.

Findings: Following the completion of field data collection and office-based processing, a set of high-accuracy and high-quality digital products was generated, serving the objectives of the research in documentation, restoration, and conservation of cultural heritage. These products included a precise 3D model of the castle structure, a digital elevation model (DEM) representing surface topography, a high-resolution orthomosaic map with geometric corrections, and a 2D plan illustrating the dimensions and spatial layout of architectural elements. The accuracy assessment of the 3D model, conducted using ground control points (GCPs) measured around the study site, revealed planimetric and vertical errors of 2.4 cm and 1.9 cm, respectively, indicating a high level of precision in the final results. Additionally, a topographic map featuring contour lines and a comprehensive site map encompassing all natural and man-made features within the study area were produced. These data provide a solid foundation for environmental analyses, boundary delineation, and risk assessments, effectively supporting processes related to the conservation, restoration, and sustainable management of cultural heritage.

Conclusion: The use of drone-based photogrammetry in the documentation and three-dimensional modeling of cultural heritage, as a non-destructive, accurate, and cost-effective method, has brought a remarkable transformation in the approaches to recording and conserving historical sites.
This technology, utilizing high-resolution aerial images and image processing techniques, enables the production of precise three-dimensional models of buildings, sites, and cultural structures. The resulting data not only provides a realistic digital representation of the current condition of the heritage assets but also serves as a reliable basis for specialized analyses, conservation and restoration planning, and legal documentation. One of the key applications of these data is the preparation of a three-dimensional cadastral system for cultural heritage; a process in which spatial, descriptive, and ownership information of historical assets is organized in an integrated and digital format. The 3D cadaster, relying on accurate models generated by photogrammetry, facilitates advanced spatial management, precise delineation of heritage boundaries, monitoring of changes, and prevention of encroachments or environmental damages. Particularly in areas with dense historical fabric or complex topographic conditions, this information can play a central role in managerial and legal decision-making.
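The reported planimetric (2.4 cm) and vertical (1.9 cm) errors are RMSE values over the surveyed ground control points: horizontal residuals are pooled in X and Y, vertical residuals in Z. A minimal sketch of that computation (function name and coordinate tuples are illustrative):

```python
import math

def accuracy_from_checkpoints(measured, modeled):
    """
    Planimetric (XY) and vertical (Z) RMSE over check points.
    measured: list of (x, y, z) surveyed with the GNSS receiver
    modeled:  list of (x, y, z) read from the photogrammetric model
    """
    n = len(measured)
    sq_xy = sq_z = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(measured, modeled):
        sq_xy += (x1 - x2) ** 2 + (y1 - y2) ** 2
        sq_z += (z1 - z2) ** 2
    return math.sqrt(sq_xy / n), math.sqrt(sq_z / n)
```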
Original Research Paper
Remote Sensing
K. Moravej; S. Felegari; A. Sharifi; A. Golchin; P. Karami
Abstract
Background and Objectives: Human activities and natural processes drive land use changes, resulting in pressing issues such as deforestation, biodiversity loss, and heightened vulnerability to natural disasters like floods. Population growth and increasing socio-economic demands exert substantial pressure on land use and cover, often leading to unregulated alterations primarily attributed to mismanagement in agriculture, urban development, pasturelands, and forests. Integrating remote sensing and geographic information systems (GIS) offers a potent approach to accurately assess and monitor land use changes across vast areas. Satellite data, particularly from sources like Landsat's Multispectral Scanner (MSS), Thematic Mapper (TM), and Enhanced Thematic Mapper Plus (ETM+), have been extensively utilized to analyze land use changes, especially in forested and agricultural regions. This study aims to analyze land use changes in the paddy rice lands of Gilan, northern Iran, from 2012 to 2022 (1391 to 1401 in the Iranian calendar). Leveraging Landsat data and GIS software, the study endeavors to identify and characterize significant land use and cover changes, providing valuable insights into regional landscape dynamics.

Methods: In this research conducted in Gilan province, Landsat-8 satellite images from 2012 and 2022, featuring minimal cloud cover, were utilized. Geometric and radiometric corrections were applied to the Landsat-8 satellite images to reduce errors. Employing the maximum likelihood method, supervised classification of land use classes was performed. This method calculates the probability of a pixel belonging to each predefined class and assigns the pixel to the class with the highest probability.
This comprehensive approach enabled the analysis of land use dynamics in the study area, offering valuable insights into environmental changes over time.

Findings: The evaluation of land use classification maps revealed an overall accuracy of 80% and a kappa coefficient exceeding 0.8, indicating substantial agreement with ground truth classes. Forest area decreased from 46% in 2012 to 33% in 2022, signaling ecosystem degradation. Similarly, pasture land decreased from 51% in 2012 to 42% in 2022. Conversely, agricultural land witnessed significant growth, increasing by 7% from 2012 to 2022 (34% to 41%). Residential land area experienced a notable increase, rising by 34%. These findings underscore significant land use changes, including forest decline and increased residential expansion, highlighting the pressing need for sustainable land management practices in the study area.

Conclusion: Forest cover in the study area declined by 13%, whereas residential land witnessed a significant expansion of 34%. Data analysis indicated that the primary alterations in land area were linked to changes in residential use. Remote sensing technology proved instrumental in precisely, effectively, and economically estimating these changes, highlighting its crucial role in environmental studies.
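The maximum likelihood classifier described above models each land use class as a multivariate Gaussian over the spectral bands: training pixels give a per-class mean and covariance, and each pixel is assigned to the class with the highest log-likelihood. A minimal two-band sketch (function names and the small covariance regularizer are illustrative assumptions):

```python
import numpy as np

def train_ml(samples_by_class):
    """Estimate a mean vector and (regularized) covariance per class."""
    stats = {}
    for label, pixels in samples_by_class.items():
        X = np.asarray(pixels, dtype=float)
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        stats[label] = (mean, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return stats

def classify_ml(pixel, stats):
    """Assign the pixel to the class with maximum Gaussian log-likelihood."""
    x = np.asarray(pixel, dtype=float)
    best, best_ll = None, -np.inf
    for label, (mean, inv_cov, log_det) in stats.items():
        d = x - mean
        ll = -0.5 * (log_det + d @ inv_cov @ d)  # up to a shared constant
        if ll > best_ll:
            best, best_ll = label, ll
    return best
```

In practice each Landsat-8 pixel would be a vector over several reflective bands rather than two, but the decision rule is identical.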
Original Research Paper
Geo-spatial Information System
J. Saberian; A. Pourbeik; S. Mohseni
Abstract
Background and Objectives: In the modern era, the banking industry is continuously developing solutions to provide faster, more convenient, and more intelligent services to its customers. With the digitalization of nearly 80% of banking services, customer expectations for personalized experiences have surged. In this context, Geographic Information Systems (GIS) have emerged as a powerful tool for location-based analysis and decision-making optimization. Previous studies have demonstrated that GIS can play a vital role in selecting new branch locations, assessing market share, and determining optimal routing.

However, a need persists for an integrated tool that empowers customers to select the most suitable branch based on their priorities. The loss of time and the confusion customers face when searching for a branch that offers their required services remain significant challenges for in-person banking. The primary objective of this research is to design and develop an intelligent, user-centric Web-based Geographic Information System (WebGIS) for the optimal selection of bank branches. By integrating web technologies, GIS, and Multi-Criteria Decision-Making (MCDM) algorithms, this system aims to streamline the branch selection process for customers. The specific aims of this study include: creating an interactive platform for searching and displaying branch information; implementing an optimal routing functionality that accounts for real-world constraints; and, most importantly, providing a feature for ranking branches based on user-personalized criteria, thereby enabling the most intelligent choice in the minimum possible time.

Methods: In this research, a multi-stage approach was employed for the system's development. Initially, a geospatial database was created within the ArcGIS Desktop environment using the WGS84 Web Mercator coordinate system.
Data about the branches of Dey Bank, including both attribute data (name, address, code, services) and spatial information, were stored in this database. To ensure optimal management and facilitate concurrent user access, an Enterprise Geodatabase was utilized on the Microsoft SQL Server platform. In the subsequent stage, the required map services, encompassing the branch layer and the network analysis layer for routing, were published via ArcGIS for Server. The client side of the system was developed using the ArcGIS API for JavaScript, which provides interactive functionalities such as searching, displaying information, filtering, and routing.

For the implementation of the intelligent selection component, the Analytic Hierarchy Process (AHP), a prominent multi-criteria decision-making method, was adopted. The decision criteria selected were: VIP branch status, availability of safe deposit boxes, provision of foreign exchange services, and access to insurance services. Through a user interface, users can perform pairwise comparisons of these criteria to specify their relative importance. The system then utilizes these comparisons to construct a comparison matrix, normalizes it, and calculates the final weight for each criterion. These weights are ultimately applied to compute the final score and ranking for all bank branches.

Findings: The outcome of this research is a fully operational WebGIS system, successfully accessible via web browsers across various platforms. Through this system, users can visualize all bank branches on an interactive map and access comprehensive information by clicking on any branch. The most significant finding of this study is the successful implementation of the AHP algorithm, which converts the user's pairwise comparisons into criterion weights. The system ranks all branches based on these user-defined priorities and subsequently suggests the most suitable options to the user. Furthermore, a routing capability from the user's current location to a selected branch is incorporated into the system.
This feature considers the traffic restriction zone layer as a barrier and renders the optimal route as a graphical line on the map.

Conclusion: This research demonstrates that integrating WebGIS technology with multi-criteria decision-making algorithms, such as AHP, offers a highly effective solution to the real-world challenge of optimal service selection. By providing a suitable and intelligent platform, the developed system significantly mitigates the time loss and confusion experienced by customers, empowering them to make an informed choice that is fully aligned with their personal needs. This intelligent selection process enhances the customer experience for in-person banking, representing a significant step toward increasing customer satisfaction and loyalty. The study confirms that investing in intelligent location-based systems is value-adding not only for customers but also for organizations, enabling them to optimize services and gain a better understanding of demand patterns. This creates a win-win scenario for both service providers and their clientele.
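The AHP weighting step the abstract describes (build a pairwise comparison matrix, normalize it, average to obtain criterion weights, then score branches by weighted sum) can be sketched directly. The sketch below uses the common column-normalization approximation of the principal eigenvector; the branch scores are illustrative assumptions:

```python
import numpy as np

def ahp_weights(pairwise):
    """Normalize each column of the pairwise matrix, then average across rows."""
    A = np.asarray(pairwise, dtype=float)
    normalized = A / A.sum(axis=0)
    return normalized.mean(axis=1)

def rank_branches(branch_scores, weights):
    """Rank branches by the weighted sum of their criterion values (best first)."""
    return sorted(branch_scores.items(),
                  key=lambda kv: -float(np.dot(kv[1], weights)))
```

A fuller implementation would also compute the consistency ratio to warn users whose pairwise judgments contradict each other, a standard part of AHP.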
Original Research Paper
Geo-spatial Information System
H. Bazalipour; Gh. Fallahi
Abstract
Background and Objectives: Spatial data, as one of the fundamental components of urban information systems, plays a crucial role in analysis, planning, decision-making, and policy evaluation processes. In recent decades, the rapid growth of urbanization, the emergence of smart cities, and the expansion of sensor networks and the Internet of Things (IoT) have led to an exponential increase in the volume and diversity of spatial data. These data are collected from multiple sources such as Geographic Information Systems (GIS), satellite imagery, remote sensing, intelligent transportation systems, and citizen-generated data. Consequently, the effective management of these datasets has become one of the major challenges in contemporary urban management. The absence of standardized and integrated infrastructures often leads to inconsistency among executive organizations, data redundancy, and reduced accuracy in data-driven decision-making.

Methods: To address these challenges, this study proposes a novel framework based on Service-Oriented Architecture (SOA) for establishing an integrated spatial data infrastructure in urban management. SOA, with its core principles of service independence, reusability, composability, and interoperability, provides a flexible and scalable foundation for developing distributed spatial systems. Additionally, the research utilizes international OGC standards, including Web Map Service (WMS), Web Feature Service (WFS), and Web Processing Service (WPS), to establish a unified technical framework for the exchange, processing, and visualization of spatial data across heterogeneous environments.
The use of these standards enables various urban subsystems to interact dynamically and seamlessly without dependency on specific technologies or programming languages.

Findings: The findings indicate that the proposed framework consists of three main layers: the spatial data service layer for storing, managing, and accessing distributed datasets; the processing service layer for analyzing, integrating, and extracting spatial patterns at different decision-making levels; and the interaction management layer for service orchestration, data flow control, and quality assurance in heterogeneous environments. This three-layered structure was designed to enhance scalability, minimize inter-component dependencies, and improve interoperability among diverse urban systems. A case study was implemented in a real urban management environment to empirically evaluate the performance, stability, and reliability of the proposed framework in terms of response time, processing volume, and coordination among services.

Conclusion: The results demonstrated that implementing the integrated SOA–OGC framework led to an average 30% reduction in response time, improved scalability in handling large spatial datasets, and simplified service maintenance and expansion. Moreover, interoperability among urban systems in various domains, such as transportation, environment, and public services, was significantly enhanced. However, challenges including data security assurance, user access control, system stability under high network load, and Quality of Service (QoS) remain critical issues requiring further investigation. In summary, the study concludes that adopting a service-oriented approach in conjunction with OGC standards provides an effective foundation for developing spatial data infrastructures in urban management.
This framework not only strengthens data-driven decision-making but also paves the way toward smart city realization, sustainable resource management, and improved quality of urban life. Future research is recommended to integrate Cloud GIS, Big Spatial Data processing, and Artificial Intelligence (AI)-based spatial analytics within this architecture to further enhance the performance, scalability, and security of urban spatial systems.
Original Research Paper
Remote Sensing
P. Borzabadi; A. Razaghpoor; M. Aslani; A. Sharifi
Abstract
Background and Objectives: Land subsidence is considered one of the most significant geomorphological hazards in arid and semi-arid regions, threatening groundwater resources, urban infrastructure, and agricultural lands for decades. In Iran, unregulated urban expansion and excessive groundwater extraction have intensified this phenomenon in major cities such as Tehran, Mashhad, Isfahan, and Shiraz. In particular, the eastern areas of Shiraz, characterized by alluvial soils, high building density, and sharp declines in groundwater levels, have become one of the primary hotspots of subsidence in southern Iran. Given the high potential of Sentinel-1 radar data for analyzing land deformation and the effectiveness of DInSAR in rapid monitoring, this study aims to analyze the spatiotemporal patterns of land subsidence in eastern Shiraz, investigate natural and anthropogenic contributing factors, and propose solutions for risk mitigation and support of sustainable urban development.

Methods: This study employed 24 Sentinel-1A SAR images (IW mode, VV polarization) from 2015 to 2025. Processing was conducted in SNAP software. Orbital corrections were applied using POD files, followed by radiometric calibration to extract Sigma0 values. A 7×7 Lee filter was used to reduce speckle noise. Fifteen image pairs with temporal baselines less than 365 days and perpendicular baselines below 150 meters were selected to generate interferograms. Phase unwrapping was performed using the SNAPHU algorithm with the Minimum Cost Flow (MCF) method. To minimize atmospheric effects, image pairs with similar humidity were chosen, and additional filtering included the Goldstein filter, topographic masking, and variogram analysis. The final phase data were analyzed statistically using mean, skewness, and kurtosis, as well as spatially through Moran's I.
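The despeckling step can be illustrated with a simplified Lee filter. This sketch assumes an additive-noise formulation with a uniform box window; the study's 7×7 Lee filter for SAR speckle uses a multiplicative noise model, so this is only a schematic approximation:

```python
import numpy as np

def lee_filter(img, size=7, noise_var=None):
    """Simplified additive-noise Lee filter over a size x size window.

    For each pixel, the output is local_mean + k * (pixel - local_mean),
    where the gain k shrinks toward 0 in homogeneous (noise-dominated)
    areas and toward 1 at strong features. The noise variance defaults
    to the median of the local variances (an illustrative heuristic).
    """
    pad = size // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    # All size x size windows, one per output pixel.
    win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    local_mean = win.mean(axis=(-1, -2))
    local_var = win.var(axis=(-1, -2))
    if noise_var is None:
        noise_var = np.median(local_var)
    gain = np.clip((local_var - noise_var) / np.maximum(local_var, 1e-12),
                   0.0, 1.0)
    return local_mean + gain * (img - local_mean)
```

In flat areas the gain collapses to the local mean, suppressing speckle, while edges and point scatterers (high local variance) pass through largely unchanged.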
Multiple regression analysis was also conducted to evaluate the influence of groundwater extraction, soil type, building density, and slope on the observed subsidence rates.

Findings: The results showed an average subsidence rate of 18.4 mm/year with a standard deviation of 8.2 mm. Three main subsidence hotspots with rates of 25–45 mm/year were identified in the north, center, and south of the study area. Statistical analysis indicated a positively skewed distribution (skewness = 1.23) with a kurtosis of 2.87. Multivariate regression analysis showed that groundwater extraction (β = 0.78, p < 0.001) was the most influential factor. Soil type (clay), building density, and slope also had significant effects, with positive and negative contributions. Moran's I test confirmed a clustered spatial pattern of subsidence (I = 0.742).

Conclusion: DInSAR proved to be an effective and relatively accurate tool for monitoring land subsidence, especially in regions with limited in-situ data. This study underscores the significant role of human activities in exacerbating land subsidence and highlights the need for continuous monitoring, smart supervisory systems, and a reassessment of urban development patterns. Suggested future directions include developing machine learning models with Sentinel-1 data, integrating GNSS observations to enhance accuracy, and conducting land use change analysis using Landsat and Sentinel-2 imagery. The main limitations of the study were the lack of up-to-date groundwater level data and the temporal sparsity of some satellite images.
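The global Moran's I statistic used for the spatial clustering test can be computed as below. This is a minimal sketch; the weight-matrix construction (here a simple chain adjacency) is an illustrative assumption, not the study's neighborhood definition:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for a 1-D array of n observations and an
    n x n spatial weight matrix W:

        I = n * (z' W z) / (sum(W) * (z' z)),   z = values - mean

    Positive I indicates spatial clustering of similar values,
    negative I indicates dispersion.
    """
    z = np.asarray(values, dtype=float) - np.mean(values)
    n = len(z)
    return n * (z @ weights @ z) / (weights.sum() * (z @ z))

# Toy example: two homogeneous blocks on a chain -> clustered pattern.
vals = np.array([1.0, 1, 1, 0, 0, 0])
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1
```

For this clustered toy pattern I evaluates to 0.6, well above the expected value under spatial randomness (-1/(n-1)), mirroring the strongly clustered I = 0.742 reported for the subsidence field.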
Original Research Paper
Remote Sensing
Behnaz Babaei; Reza Dousti; Eslam Javadnia; Sina Kiaei; Heshmat Karami; Amir Hossein Abdi
Abstract
Background and Objectives: Methane, as the second most important greenhouse gas after carbon dioxide, plays a significant role in intensifying global warming. Its global warming potential (GWP) over a 100-year period is estimated to be about 28 times greater than that of carbon dioxide. According to reports by the Intergovernmental Panel on Climate Change (IPCC), approximately 40% of anthropogenic methane emissions are linked to the energy sector, particularly the oil and gas industries. As one of the major producers of oil and gas worldwide, Iran faces serious challenges in monitoring and controlling methane emissions—a matter of particular importance within the framework of international commitments such as the Paris Agreement. The Sentinel-5P satellite, equipped with the TROPOMI sensor, provides high spatial resolution and daily coverage, enabling continuous monitoring and quantification of methane emissions on a global scale. This study aims to examine the temporal trends of methane emissions in Iran over a five-year period (2019–2023) and to identify critical areas in terms of emission intensity.

Methods: This research was conducted using a descriptive–analytical approach based on time-series data derived from the TROPOMI sensor onboard the Sentinel-5P satellite within the Google Earth Engine platform. Methane concentration data with a spatial resolution of 5.5 × 7 km were extracted for the entire geographical extent of Iran and processed to obtain annual, seasonal, and monthly averages. To analyze temporal trends and spatial patterns, five-year variation maps and charts were generated to identify dominant trends and high-emission regions.

Findings: The results indicated an increasing trend in the annual mean methane concentration over Iran during the study period, with an estimated annual growth rate of about 0.03%. On average, methane concentrations exceeded the IPCC threshold of 1800 ppb by approximately 101.21 ppb.
Seasonal analyses revealed that the highest concentrations occurred in autumn and winter, likely due to increased gas extraction activities and reduced efficiency of leakage control systems during colder periods. The total cumulative methane concentration from all sources during the five-year study period reached a considerable value of 1,487,134,705 ppb.

Conclusion: The findings highlight a serious challenge for Iran in managing and controlling methane emissions. The observed upward trend underscores the urgent need to formulate and implement effective mitigation policies. In this regard, the deployment of advanced leak detection systems and investment in modern emission control technologies can play a significant role in reducing the environmental impacts of methane.
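The annual averaging and relative growth-rate estimation described above can be sketched as follows, using a synthetic monthly series in place of the TROPOMI extracts (the function name and the linear-trend formulation are illustrative assumptions):

```python
import numpy as np

def annual_growth_rate(monthly_ppb):
    """From a monthly column-averaged CH4 series (ppb, length divisible
    by 12), return (annual means, percent growth per year).

    The percent growth is the slope of a linear fit in ppb/year,
    expressed relative to the series mean.
    """
    monthly = np.asarray(monthly_ppb, dtype=float)
    yearly = monthly.reshape(-1, 12).mean(axis=1)   # one mean per year
    t = np.arange(monthly.size) / 12.0              # time in years
    slope, _ = np.polyfit(t, monthly, 1)            # ppb per year
    return yearly, 100.0 * slope / monthly.mean()

# Synthetic 5-year series rising ~0.55 ppb/year from 1850 ppb:
t = np.arange(60) / 12.0
yearly, pct = annual_growth_rate(1850 + 0.55 * t)
```

With a mean near 1850 ppb, a slope of roughly 0.55 ppb/year corresponds to the ~0.03%/year growth reported in the study.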
Original Research Paper
Geo-spatial Information System
A. A. Alesheikh; A. Parvini
Abstract
Background and Objectives: Optimal management of humanitarian supply chains and distribution of relief items after natural disasters is a major challenge in the field of crisis management. Despite the importance of optimal allocation of local distribution centers in post-disaster situations, many existing decision-making tools lack spatial capabilities, flexibility in scenario building, and ease of access. Aiming to fill the gap in previous studies, this paper designs a web-based system that utilizes geographic information systems (GIS) and meta-heuristic algorithms to enable optimal allocation of distribution centers and management of relief items.

Methods: In this study, an intelligent web-based spatial decision support system has been developed that helps decision makers allocate relief distribution centers more efficiently in different post-crisis scenarios. This system consists of three main parts: a database, a decision engine, and a web-based user interface, and can be fully implemented in a browser without the need to install additional software. Also, genetic and tabu search algorithms have been integrated to optimize resource allocation and distribution of relief items in this system. Users can edit input data, define different scenarios, and visually view the results on a map. In this system, common uncertainties after disasters, including different rates of affected populations, as well as five different planning periods ranging from 8 to 72 hours (i.e. 8, 16, 24, 48, and 72 hours), have been considered. The system's high flexibility in defining and analyzing various scenarios makes it an effective tool for improving decision-making in planning relief aid distribution operations.

Findings: Results show that the proposed hybrid algorithm has been able to improve the optimal allocation of distribution centers and the effective distribution of items and reduce the amount of unmet demand.
However, depending on the number of iterations of the algorithm, different scenarios, and some input parameters, the results have sometimes been unstable, which can be investigated and analyzed more precisely in future studies.

Conclusion: This study presents a comprehensive, web-based decision support system for the optimal management of relief distribution, which can significantly increase the efficiency of crisis operations. The combined use of meta-heuristic algorithms and geographic data in this system enables rapid response and accurate decision-making. Future development and improvements of this system can include support for different types of items and diverse disaster situations to play a more effective role in reducing human and financial losses.
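The genetic component of such an allocation engine can be sketched as a p-median style siting problem: open p distribution centers so that demand-weighted distance to the nearest open center is minimized. The union crossover, mutation rate, and function names below are illustrative assumptions, not the system's actual implementation:

```python
import random

def genetic_allocation(demand, dist, p, pop_size=30, generations=100, seed=0):
    """Minimal GA for siting p distribution centers.

    demand: demand weight per demand point
    dist:   dist[i][j] = distance from demand point i to candidate site j
    Returns (best site tuple, its demand-weighted cost).
    """
    rng = random.Random(seed)
    n_sites = len(dist[0])

    def cost(sites):
        # Each demand point is served by its nearest open site.
        return sum(w * min(dist[i][j] for j in sites)
                   for i, w in enumerate(demand))

    pop = [tuple(sorted(rng.sample(range(n_sites), p)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            genes = sorted(set(a) | set(b))       # crossover: parent union
            child = set(rng.sample(genes, p))
            if rng.random() < 0.2:                # mutation: swap one site
                child.pop()
                child.add(rng.randrange(n_sites))
            while len(child) < p:                 # repair duplicates
                child.add(rng.randrange(n_sites))
            children.append(tuple(sorted(child)))
        pop = survivors + children
    best = min(pop, key=cost)
    return best, cost(best)
```

A tabu search component, as named in the abstract, would instead explore single-site swap moves while forbidding recently visited solutions; the two can share the same cost function.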
Original Research Paper
Remote Sensing
M. Hasanlou; A. Ebrahimi
Abstract
Background and Objectives: With the rapid expansion of urbanization, the need for automatic updating of change maps has become increasingly important. Accurate and up-to-date spatial information is essential for monitoring construction activities and tracking the development of urban areas. Traditional approaches to change detection are mostly limited to two-dimensional analysis and often lack sensitivity to vertical changes. As a result, multi-story constructions go undetected, limiting the completeness of monitoring outcomes. Recent advances in remote sensing and deep learning have enabled three-dimensional urban change detection, providing superior results compared to classical methods. This study aims to improve the performance of 3D urban change detection by introducing a deep learning approach that integrates multi-source data. The primary objective is to automatically identify and distinguish four types of building-related changes (new construction, complete demolition, height increase, and height decrease) alongside unchanged areas, to generate a comprehensive 3D change map.

Methods: The dataset employed in this research consists of high-resolution RGB aerial imagery and corresponding Digital Surface Model (DSM) data acquired from two different periods over Valladolid, Spain. The input data were prepared by stacking RGB images and DSMs from both epochs into an eight-band input, allowing the network to jointly analyze spectral and elevation information. The dataset was divided into training (90%) and testing (10%) subsets. To increase variability in the training data and reduce overfitting, augmentation techniques such as horizontal and vertical flipping, random rotation, and Gaussian blurring were applied. The proposed model architecture combines a ResNet-34 backbone for feature extraction with a UNet++ decoder for pixel-level change reconstruction. Model parameters were updated using the Adam optimizer.
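The eight-band stacking of the two-epoch inputs can be sketched as follows; the band ordering chosen here is an assumption for illustration:

```python
import numpy as np

def stack_epochs(rgb_t1, dsm_t1, rgb_t2, dsm_t2):
    """Stack two-epoch RGB imagery (H, W, 3) and DSMs (H, W) into one
    (H, W, 8) array: bands 0-2 RGB epoch 1, band 3 DSM epoch 1,
    bands 4-6 RGB epoch 2, band 7 DSM epoch 2 — the joint
    spectral/elevation input described in the text."""
    return np.dstack([rgb_t1, dsm_t1[..., None],
                      rgb_t2, dsm_t2[..., None]])
```

Feeding the network both epochs at once lets the first convolutional layers learn spectral and height differences jointly, rather than differencing the epochs beforehand.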
In the first stage, the deep network was trained in a binary setting (change/no-change) and evaluated against classical approaches, including Random Forest, image differencing/ratioing, and a PCA–K-Means hybrid method. In the second stage, the network was retrained for five-class classification, including the four change categories and the unchanged class, using a loss function optimized directly for the Intersection-over-Union (IoU) metric. Model performance was assessed using Accuracy, Recall, Precision, and F1-score.

Findings: In the binary classification stage, after 50 epochs of training, the network successfully identified most real changes while maintaining a low false alarm rate. Evaluation metrics confirmed this performance, with Recall and Accuracy both reaching 98.5% and an F1-score of 0.92, considerably outperforming the classical methods. Unlike traditional approaches, the deep learning model was able to detect almost all small-scale constructions and demolitions. In the five-class stage, the model effectively identified and classified change types, achieving a Recall of 96.32%, an Accuracy of 96%, and an F1-score of 0.95. All newly constructed and fully demolished buildings were correctly labeled in the output maps, and unchanged areas were largely free of misclassification.

Conclusion: The findings demonstrate that combining elevation data with 2D imagery and leveraging deep learning architectures significantly mitigates the limitations of traditional change detection approaches and enhances accuracy. The developed model is capable of detecting not only the location but also the type of change. This approach has strong potential applications in monitoring unauthorized constructions, updating spatial databases, and assessing urban development. However, its effectiveness relies on the availability of accurate DSM data, which may not be consistently accessible for all urban areas.
Additionally, the training of deep networks requires extensive labeled datasets and considerable computational resources, which could limit their applicability in operational contexts.
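A loss optimized directly for the IoU metric, as used in the five-class stage, is commonly implemented as a soft (differentiable) IoU. The single-class NumPy sketch below illustrates the idea and is not the authors' exact formulation:

```python
import numpy as np

def soft_iou_loss(pred, target, eps=1e-7):
    """Soft IoU loss for a predicted probability map `pred` and a
    binary ground-truth mask `target`, both shaped (H, W).

    Hard set intersection/union are replaced by products and sums,
    so the loss 1 - |I| / |U| is differentiable in `pred` and can be
    minimized directly by gradient descent.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)
```

For the five-class setting, the same expression is typically averaged over the per-class probability maps, which counteracts the class imbalance between the rare change categories and the dominant unchanged class.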