
Document Type: Original Research Paper

Authors

Department of Geomatics, Faculty of Civil Engineering, Babol Noshirvani University of Technology, Babol, Iran

Abstract

Background and Objectives: Close-range photogrammetry aims to produce accurate 3D geometric models of objects from images taken of the subject. Nowadays, the creation of realistic 3D models and their visualisation is common practice and becomes more popular every day. At the same time, choosing the right modelling software for photogrammetry has always been a challenge and a topic of discussion among experts and researchers, so it is essential to examine and evaluate the models produced by different software tools. Because of the widespread use of Agisoft software among engineers and researchers in this field, this study performed image processing and modelling with two versions of that software, PhotoScan and Metashape. In previous research, the criterion for optimising the image network has been improving the accuracy of the modelling. To assess the 3D models produced by the two versions of the software, we defined different scenarios for the design of the image network and compared the 3D model generated in each scenario with a mathematical reference model. We also examined the complete modelling workflow under different conditions using two different textures, since the image texture directly affects the quality of the point cloud, and it is important to analyse the role of the image texture together with the geometry of the image network. We therefore evaluated the image texture as a radiometric index and investigated how these two factors affect the quality of the point cloud. As a result, we determined the optimal number of images, with appropriate texture, required to produce an accurate and high-quality 3D model.
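The abstract treats image texture as a radiometric index without stating the measure here; purely as an illustration, the Python sketch below scores a grayscale patch by mean gradient magnitude and grey-level entropy, two common proxies for texture richness. The function name, the synthetic patches, and the choice of metrics are assumptions made for this sketch, not the study's actual index.

import numpy as np

def texture_indices(gray):
    """Return (mean gradient magnitude, grey-level entropy) for a grayscale image array."""
    gray = np.asarray(gray, dtype=np.float64)
    gy, gx = np.gradient(gray)                      # per-pixel intensity gradients
    mean_grad = float(np.mean(np.hypot(gx, gy)))    # average edge strength
    counts, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))        # Shannon entropy of grey levels, in bits
    return mean_grad, entropy

# Toy comparison: a flat (simple-texture) patch vs. a random (complex-texture) patch.
rng = np.random.default_rng(0)
simple = np.full((256, 256), 128.0)
complex_ = rng.integers(0, 256, (256, 256)).astype(np.float64)
print(texture_indices(simple))    # near-zero gradient and zero entropy
print(texture_indices(complex_))  # high gradient and roughly 8 bits of entropy

Higher values of either measure indicate richer, less uniform texture, which generally supports denser feature matching in the later SfM stage.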
Methods: In close-range photogrammetry, we capture a series of images of an object using a specific image network. These images are then processed with the Structure from Motion (SfM) method to generate point clouds and 3D models. The concept behind SfM is inspired by how our eyes perceive objects. This approach offers a quick, automated, and cost-effective way to obtain 3D data: 3D coordinate models are created by processing a sequence of overlapping images of the object. Finally, the resulting 3D models are compared with a reference point cloud using the CloudCompare point cloud processing software.
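As an illustration of the kind of cloud-to-cloud comparison CloudCompare performs, the sketch below finds, for each point of a generated cloud, its nearest neighbour in a reference cloud and summarises the distances. The array names, the toy cube data, and the summary statistics are assumptions for this sketch, not the paper's actual evaluation procedure.

import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(generated, reference):
    """Nearest-neighbour distances from each generated point to the reference cloud."""
    tree = cKDTree(reference)              # spatial index on the reference cloud
    dists, _ = tree.query(generated, k=1)  # nearest reference point per generated point
    return {
        "mean": float(dists.mean()),
        "rms": float(np.sqrt(np.mean(dists ** 2))),
        "max": float(dists.max()),
    }

# Toy usage: points sampled in a unit cube, with the generated copy slightly perturbed.
rng = np.random.default_rng(1)
reference = rng.random((5000, 3))
generated = reference + rng.normal(scale=0.0005, size=reference.shape)
print(cloud_to_cloud_error(generated, reference))

The reported mean or RMS distance plays the same role as the per-scenario errors discussed in the findings.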
Findings: The results obtained with images of simple texture show that, in PhotoScan, increasing the number of images not only introduces noise into the point cloud but also reduces the similarity of the generated model to the cube. According to the results, the best 3D model, with the highest similarity to the cube, corresponds to the fourth scenario (45 images), with an error of 0.01 millimetres; in Metashape, the best model corresponds to the third scenario (90 images), with an error of 0.05 millimetres. Where images with complex textures were used, the best point cloud corresponds to the fourth scenario (45 images), with an error of 0.02 millimetres, in PhotoScan and to the third scenario (90 images), with an error of 0.04 millimetres, in Metashape. In general, objects with complex textures lead to better matching and therefore to denser point clouds, owing to the complex, non-uniform gradients in the images.
Conclusion: The results show that the optimal number of images and the presence of a complex image texture have a significant impact on the quality of the 3D point cloud of the object. Using a very large number of images increases the processing time without significantly improving the quality of the 3D model; it only produces denser point clouds that contain more noise.


COPYRIGHTS
© 2023 The Author(s). This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license (https://creativecommons.org/licenses/by-nc/4.0/).
