Please use this identifier to cite or link to this item:
http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/3326
Full metadata record
DC field | Value | Language |
---|---|---|
dc.contributor.author | Mosquera Mykh, Artur. | - |
dc.date.accessioned | 2022-10-03T14:10:44Z | - |
dc.date.available | 2020-03-02 | - |
dc.date.available | 2022-10-03T14:10:44Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Mosquera Mykh, A. (2019). Desarrollo de un sistema de visión artificial multiagente para la segmentación de vidrio en ventanas de rascacielos [Trabajo de Grado Maestría, Universidad de Pamplona]. Repositorio Hulago Universidad de Pamplona. http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/3326 | es_CO |
dc.identifier.uri | http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/3326 | - |
dc.description | The number of skyscrapers built and under construction in recent years demonstrates their growth dynamics and, together with the requirements and technologies used to clean their windows, shows the need to develop new alternatives that perform this task in a safer, faster and more accurate way. This work presents the development of a multi-agent artificial vision system for glass segmentation in skyscraper windows, designed to be integrated in the future with a robotic arm prototype and a drone in order to perform skyscraper window cleaning. The function of the developed system is to segment the glass and, from the segmentation, obtain the height, the width and the position along the X, Y and Z axes of the glass's center with respect to the camera's position. The research presents a theoretical introduction to the methods used to develop the system, a state of the art of artificial vision, and the methodology and results that guided the selection of the methods employed. The results include the compilation of a database of skyscraper windows with a wide variety of physical and lighting conditions; a preliminary evaluation of glass segmentation using edge detection algorithms such as Canny, Sobel, Prewitt, Roberts and Laplacian of Gaussian; region properties focused on rectangular areas; and line detection using the Hough transform. Additionally, the implementation of artificial intelligence techniques such as convolutional neural networks together with semantic segmentation is presented, as well as the integration of all the methods described above into a multi-agent artificial vision system for glass segmentation using a decision tree. Finally, a way to convert the parameters delivered by the multi-agent system from pixels to metric units is proposed, in order to demonstrate that the developed system can be implemented. This is achieved through the mathematical equations relating the image taken by the camera, the measurement estimated by a distance sensor and the intrinsic parameters of the camera. | es_CO |
dc.description.abstract | The number of skyscrapers built and under construction in recent years demonstrates the growth dynamics of skyscrapers and, together with the requirements and technologies used to clean their windows, shows a clear need to develop new alternatives that perform this task in a safer, faster and more accurate way. This work presents the development of a multi-agent artificial vision system for glass segmentation in skyscraper windows, which is designed to be integrated in the future with a robotic arm prototype and a drone, in order to perform the work of skyscraper window cleaning. The function of the developed system is to segment the glass and, from the segmentation, obtain the parameters of the height, width and position in the X, Y and Z axes of the glass's center in relation to the camera's position. The research presents a theoretical introduction to the methods used for the development of the mentioned system, a state of the art of artificial vision, and the methodology and results that guided the selection of the methods used. The results include the compilation of a database of skyscraper windows with a wide variety of physical and lighting conditions; a preliminary glass segmentation program using edge detection algorithms such as Canny, Sobel, Prewitt, Roberts and Laplacian of Gaussian; region properties focused on rectangular areas; and line detection using the Hough transform. Additionally, the implementation of artificial intelligence techniques such as convolutional neural networks is presented along with semantic segmentation, and the integration of all the methods described above into a multi-agent artificial vision system for glass segmentation using a decision tree. Lastly, a way to convert the parameters delivered by the multi-agent system from pixels to metric units is proposed, in order to demonstrate that the developed system can be implemented. This is achieved by following the mathematical equations of the relationship between the image taken by the camera, the measurement made by a distance sensor and the intrinsic parameters of the camera. | es_CO |
dc.format.extent | 116 | es_CO |
dc.format.mimetype | application/pdf | es_CO |
dc.language.iso | es | es_CO |
dc.publisher | Universidad de Pamplona – Facultad de Ingenierías y Arquitectura. | es_CO |
dc.subject | Skyscrapers | es_CO |
dc.subject | Segmentation | es_CO |
dc.subject | Edge detection algorithms | es_CO |
dc.subject | Hough transform | es_CO |
dc.subject | Convolutional neural network | es_CO |
dc.subject | Multi-agent system | es_CO |
dc.title | Desarrollo de un sistema de visión artificial multiagente para la segmentación de vidrio en ventanas de rascacielos. | es_CO |
dc.type | http://purl.org/coar/resource_type/c_bdcc | es_CO |
dc.date.accepted | 2019-12-02 | - |
dc.relation.references | Aldwaik, M. & Adeli, H. (2014). Advances in optimization of highrise building structures. Structural and Multidisciplinary Optimization, 50(6), 899-919. | es_CO |
dc.relation.references | Antona Cortés, C. (2017). Herramientas modernas en redes neuronales: la librería Keras (Bachelor's thesis). | es_CO |
dc.relation.references | Aperador-Chaparro, W., Bautista-Ruiz, J. H., & Mejía, A. S. (2013). Determinación por visión artificial del factor de degradación en aleaciones biocompatibles. Información tecnológica, 24(2), 109-120. | es_CO |
dc.relation.references | Azad, R. & Baghdadi, M. (2014). Novel and Fast Algorithm for Extracting License Plate Location Based on Edge Analysis. arXiv preprint arXiv:1407.6496. | es_CO |
dc.relation.references | Bautista, B. M., Medina, J. A. P., & Marín, F. J. S. (2012, July). Vision sens. In International Conference on Computers for Handicapped Persons (pp. 490-496). Springer, Berlin, Heidelberg. | es_CO |
dc.relation.references | Berkaya, S. K., Gunduz, H., Ozsen, O., Akinlar, C. & Gunal, S. (2016). On circular traffic sign detection and recognition. Expert Systems with Applications, 48, 67-75. | es_CO |
dc.relation.references | Bhunia, A. K., Kumar, G., Roy, P. P., Balasubramanian, R., & Pal, U. (2018). Text recognition in scene image and video frame using Color Channel selection. Multimedia Tools and Applications, 77(7), 8551-8578. | es_CO |
dc.relation.references | Cao, X., Wei, Y., Wen, F., & Sun, J. (2014). Face alignment by explicit shape regression. International Journal of Computer Vision, 107(2), 177-190. | es_CO |
dc.relation.references | Chung, C. L., Huang, K. J., Chen, S. Y., Lai, M. H., Chen, Y. C., & Kuo, Y. F. (2016). Detecting Bakanae disease in rice seedlings by machine vision. Computers and electronics in agriculture, 121, 404-411. | es_CO |
dc.relation.references | Constante, P., Gordon, A., Chang, O., Pruna, E., Acuna, F., & Escobar, I. (2016). Artificial Vision Techniques to Optimize Strawberry's Industrial Classification. IEEE Latin America Transactions, 14(6), 2576-2581. | es_CO |
dc.relation.references | Council on Tall Buildings and Urban Habitat (2018). Another Record-Breaker for Skyscraper Completions. Retrieved from: http://www.ctbuh.org/ | es_CO |
dc.relation.references | Cubero, S., Lee, W. S., Aleixos, N., Albert, F., & Blasco, J. (2016). Automated systems based on machine vision for inspecting citrus fruits from the field to postharvest—a review. Food and Bioprocess Technology, 9(10), 1623-1639. | es_CO |
dc.relation.references | Dahl, G. E., Sainath, T. N., & Hinton, G. E. (2013, May). Improving deep neural networks for LVCSR using rectified linear units and dropout. In 2013 IEEE international conference on acoustics, speech and signal processing (pp. 8609-8613). IEEE. | es_CO |
dc.relation.references | Diao, Z., Zhao, M., Song, Y., Wu, B., Wu, Y., Qian, X., & Wei, Y. (2015). Crop line recognition algorithm and realization in precision pesticide system based on machine vision. Transactions of the Chinese Society of Agricultural Engineering, 31(7), 47-52. | es_CO |
dc.relation.references | Diaz, L. E. N., & Arceo, L. E. C. (2018). Algoritmo rápido de la transformada de Hough para detección de líneas rectas en una imagen. | es_CO |
dc.relation.references | Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., & Darrell, T. (2014, January). Decaf: A deep convolutional activation feature for generic visual recognition. In International conference on machine learning (pp. 647-655). | es_CO |
dc.relation.references | Duque, J. P. U., & Ospina, E. (2004). Implementación de la transformada de Hough para la detección de líneas para un sistema de visión de bajo nivel. Scientia et technica, 1(24), 79-84. | es_CO |
dc.relation.references | Fahrurozi, A., Madenda, S., & Kerami, D. (2016). Wood Classification Based on Edge Detections and Texture Features Selection. International Journal of Electrical & Computer Engineering (2088-8708), 6(5). | es_CO |
dc.relation.references | Fausett, L. (1994). Fundamentals of neural networks: architectures, algorithms, and applications. Prentice-Hall, Inc. | es_CO |
dc.relation.references | Fischler, M. A., & Firschein, O. (Eds.). (2014). Readings in computer vision: issues, problems, principles, and paradigms. Elsevier. | es_CO |
dc.relation.references | Gongal, A., Silwal, A., Amatya, S., Karkee, M., Zhang, Q., & Lewis, K. (2016). Apple crop-load estimation with over-the-row machine vision system. Computers and Electronics in Agriculture, 120, 26-35. | es_CO |
dc.relation.references | Gonzalez, A., Bergasa, L. M., & Yebes, J. J. (2014). Text detection and recognition on traffic panels from street-level imagery using visual appearance. IEEE Transactions on Intelligent Transportation Systems, 15(1), 228-238. | es_CO |
dc.relation.references | Gonzalez, R. C., Woods, R. E., & Eddins, S. L. (2004). Digital image processing using MATLAB (Vol. 624). Upper Saddle River, New Jersey: Pearson-Prentice-Hall. | es_CO |
dc.relation.references | Gonzalez, R. C., & Woods, R. E. (2008). Digital image processing: Pearson prentice hall. Upper Saddle River, NJ, 1. | es_CO |
dc.relation.references | Guedes, M. C. & Cantuária, G. (2017). The Increasing Demand on High-Rise Buildings and Their History. In Sustainable High Rise Buildings in Urban Zones (pp. 93-102). Springer International Publishing. | es_CO |
dc.relation.references | Guil, N., Villalba, J., & Zapata, E. L. (1995). A fast Hough transform for segment detection. IEEE Transactions on Image Processing, 4(11), 1541-1548. | es_CO |
dc.relation.references | Hernández-Hernández, J. L., García-Mateos, G., González-Esquiva, J. M., Escarabajal-Henarejos, D., Ruiz-Canales, A., & Molina-Martínez, J. M. (2016). Optimal color space selection method for plant/soil segmentation in agriculture. Computers and Electronics in Agriculture, 122, 124-132. | es_CO |
dc.relation.references | Isasi Viñuela, P., & Leon, G. (2004). Redes de neuronas artificiales: un enfoque práctico. | es_CO |
dc.relation.references | Jaramillo, M. A., Fernández, J. A., & de Salazar, E. M. (2010). Implementación del detector de bordes de Canny sobre redes neuronales celulares. Universidad de Extremadura. | es_CO |
dc.relation.references | Khan, S., Rahmani, H., Shah, S. A. A., & Bennamoun, M. (2018). A guide to convolutional neural networks for computer vision. Synthesis Lectures on Computer Vision, 8(1), 1-207. | es_CO |
dc.relation.references | Korč, F., Förstner, W. (2009). eTRIMS Image Database for interpreting images of man-made scenes. Technical report, Department of Photogrammetry, University of Bonn. | es_CO |
dc.relation.references | Li, J. B., Huang, W. Q., & Zhao, C. J. (2015). Machine vision technology for detecting the external defects of fruits—a review. The Imaging Science Journal, 63(5), 241-251. | es_CO |
dc.relation.references | Lu, Z., & Zhang, L. (2016). Face recognition algorithm based on discriminative dictionary learning and sparse representation. Neurocomputing, 174, 749-755. | es_CO |
dc.relation.references | Meissner, M. (2017). Setting the Scene: Financial Spaces and Architectures. In Narrating the Global Financial Crisis (pp. 41-82). Springer International Publishing. | es_CO |
dc.relation.references | Moreira, G. A., & Sappa, A. (2015). Correspondencia Multiespectral en el espacio de HOUGH. Proyecto de fin de Carrera, Escuela Superior Politécnica del Litoral, Ecuador. | es_CO |
dc.relation.references | Mosquera, A. (2017). Limpieza de Ventanas de Rascacielos y Alternativas Tecnológicas Emergentes. Revista Colombiana de Tecnologías de Avanzada ISSN: 1692-7257 - Volumen 2 – Número 30. Universidad de Pamplona. Pamplona, (pp.109-118). | es_CO |
dc.relation.references | Neuhausen, M., Koch, C., & König, M. (2016). Image-based window detection: an overview. | es_CO |
dc.relation.references | Neuhausen, M., Martin, A., Obel, M., Mark, P., & König, M. (2017). A Cascaded Classifier Approach to Window Detection in Facade Images. In ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction (Vol. 34). Vilnius Gediminas Technical University, Department of Construction Economics & Property. | es_CO |
dc.relation.references | Noh, H., Hong, S., & Han, B. (2015). Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE international conference on computer vision (pp. 1520-1528). | es_CO |
dc.relation.references | Noroozi, M., Vinjimoor, A., Favaro, P., & Pirsiavash, H. (2018). Boosting self-supervised learning via knowledge transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 9359-9367). | es_CO |
dc.relation.references | Parkhi, O. M., Vedaldi, A., & Zisserman, A. (2015, September). Deep face recognition. In BMVC (Vol. 1, No. 3, p. 6). | es_CO |
dc.relation.references | Parmar, D. N., & Mehta, B. B. (2014). Face recognition methods & applications. arXiv preprint arXiv:1403.0485. | es_CO |
dc.relation.references | Patterson, J., & Gibson, A. (2017). Deep learning: A practitioner's approach. O'Reilly Media, Inc. | es_CO |
dc.relation.references | Pink, L. & Eickeler, S. (2016). Performance Enhancements for the Detection of Rectangular Traffic Signs. In Advanced Microsystems for Automotive Applications 2016 (pp. 113-123). Springer International Publishing. | es_CO |
dc.relation.references | Poppe, R. (2010). A survey on vision-based human action recognition. Image and vision computing, 28(6), 976-990. | es_CO |
dc.relation.references | Quiros, A. R. F., Abad, A., Bedruz, R. A., Uy, A. C., & Dadios, E. P. (2015, December). A genetic algorithm and artificial neural network-based approach for the machine vision of plate segmentation and character recognition. In Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), 2015 International Conference on (pp. 1-6). IEEE. | es_CO |
dc.relation.references | Ranft, B., & Stiller, C. (2016). The role of machine vision for intelligent vehicles. IEEE Transactions on Intelligent Vehicles, 1(1), 8-19. | es_CO |
dc.relation.references | Rebaza, J. V. (2007). Detección de bordes mediante el algoritmo de Canny. Escuela Académico Profesional de Informática. Universidad Nacional de Trujillo. | es_CO |
dc.relation.references | Romero, O. D., & Rolle, J. L. C. (2018). Inteligencia artificial en la ingeniería: pasado, presente y futuro. DYNA, 93(4), 350-352. | es_CO |
dc.relation.references | Sanabria, J. J., & Archila, J. F. (2011). Detección y análisis de movimiento usando visión artificial. Scientia et technica, 16(49). | es_CO |
dc.relation.references | Schroff, F., Kalenichenko, D., & Philbin, J. (2015). Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 815-823). | es_CO |
dc.relation.references | Scherer, D., Müller, A., & Behnke, S. (2010, September). Evaluation of pooling operations in convolutional architectures for object recognition. In International conference on artificial neural networks (pp. 92-101). Springer, Berlin, Heidelberg. | es_CO |
dc.relation.references | Sonka, M., Hlavac, V., & Boyle, R. (2014). Image processing, analysis, and machine vision. Cengage Learning. | es_CO |
dc.relation.references | Timofte, R., Zimmermann, K., & Van Gool, L. (2014). Multi-view traffic sign detection, recognition, and 3D localisation. Machine vision and applications, 25(3), 633-647. | es_CO |
dc.relation.references | Yang, M. Y., Förstner, W., & Chai, D. (2012). Feature evaluation for building facade images-an empirical study. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences:[XXII ISPRS Congress, Technical Commission I] 39 (2012), Nr. B3 (Vol. 39, No. B3, pp. 513-518). Göttingen: Copernicus GmbH. | es_CO |
dc.relation.references | Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks?. In Advances in neural information processing systems (pp. 3320-3328). | es_CO |
dc.relation.references | Zeiler, M. D., & Fergus, R. (2014, September). Visualizing and understanding convolutional networks. In European conference on computer vision (pp. 818-833). Springer, Cham. | es_CO |
dc.relation.references | Zorrilla, V. M. S., Julián, F. G. C., Solano, M. Á. P., Reyes, M. V., & Calvo, E. R. (2016). Detección de bordes de una imagen usando MATLAB. Pistas Educativas, 38(122). | es_CO |
dc.rights.accessrights | http://purl.org/coar/access_right/c_abf2 | es_CO |
dc.type.coarversion | http://purl.org/coar/resource_type/c_2df8fbb1 | es_CO |
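The abstract above evaluates glass segmentation with classic edge operators (Canny, Sobel, Prewitt, Roberts, Laplacian of Gaussian). As a minimal illustration of how one of these operators responds to a window-like intensity boundary, here is a Sobel gradient-magnitude sketch written directly in NumPy so the kernels are visible; this is an assumption-level example, not the thesis's implementation, and a real pipeline would use an image-processing library.

```python
import numpy as np

# Sobel kernels for horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image (valid region only)."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(SOBEL_X * patch)
            gy[i, j] = np.sum(SOBEL_Y * patch)
    return np.hypot(gx, gy)

# Toy image: a vertical step edge (dark left half, bright right half),
# loosely mimicking a glass/frame boundary. The operator responds strongly
# in the columns straddling the step and is zero elsewhere.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

The strong responses cluster along the step, which is exactly the cue the Hough transform then exploits to recover straight window edges as lines.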
Appears in collections: | Maestría en Controles Industriales |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
Mosquera_2019_TG.pdf | Mosquera_2019_TG.pdf | 6.89 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
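The abstract above proposes converting the segmentation parameters from pixels to metric units using a distance-sensor measurement and the camera's intrinsic parameters. A minimal sketch of that relationship under the standard pinhole camera model follows; the function name and every numeric value are illustrative assumptions, not taken from the thesis.

```python
# Pinhole-model sketch: back-project the glass center from pixel
# coordinates to metric camera coordinates, given a depth Z from a
# distance sensor and the camera intrinsics (focal lengths fx, fy and
# principal point cx, cy, all in pixels). Values below are illustrative.

def pixel_to_metric(u, v, z_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth z_m (meters) to camera coordinates."""
    x_m = (u - cx) * z_m / fx
    y_m = (v - cy) * z_m / fy
    return x_m, y_m, z_m

# Example: a 640x480 camera with 500 px focal lengths, the glass center
# detected at pixel (400, 300), and the sensor reporting 2.5 m of depth.
x, y, z = pixel_to_metric(400, 300, 2.5, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# x = 0.40 m, y = 0.30 m, z = 2.5 m
```

The same relations, applied to the segmented rectangle's corner pixels, yield the metric height and width the abstract lists among the system's outputs.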