Please use this identifier to cite or link to this item:
http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/5479
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Riaño Bejar, Erika Viviana. | - |
dc.date.accessioned | 2022-12-15T21:59:15Z | - |
dc.date.available | 2018 | - |
dc.date.available | 2022-12-15T21:59:15Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Riaño, E. V. (2017). Caracterización de problemas a solucionar mediante programación multiproceso de memoria compartida [Trabajo de grado pregrado, Universidad de Pamplona]. Repositorio Hulago Universidad de Pamplona. http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/5479 | es_CO |
dc.identifier.uri | http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/5479 | - |
dc.description | The author does not provide information about this item. | es_CO |
dc.description.abstract | The author does not provide information about this item. | es_CO |
dc.format.extent | 315 | es_CO |
dc.format.mimetype | application/pdf | es_CO |
dc.language.iso | es | es_CO |
dc.publisher | Universidad de Pamplona – Facultad de Ingenierías y Arquitectura. | es_CO |
dc.subject | The author does not provide information about this item. | es_CO |
dc.title | Caracterización de problemas a solucionar mediante programación multiproceso de memoria compartida. | es_CO |
dc.type | http://purl.org/coar/resource_type/c_7a1f | es_CO |
dc.date.accepted | 2017 | - |
dc.relation.references | Acevedo Álvaro Casasús, Juan José Benito Muñoz Prieto, F. U., & Corvinos, L. G. (2009). Resolución de la ecuación de ondas en 2-D y 3-D utilizando diferencias finitas generalizadas: Consistencia y estabilidad, 1–8. | es_CO |
dc.relation.references | Alcubierre, M. (2005). Introducción a FORTRAN. Fortran, 34. | es_CO |
dc.relation.references | Amit Amritkar, S. D., & Tafti, D. (2013). Efficient parallel CFD-DEM simulations using OpenMP. | es_CO |
dc.relation.references | Asanovic, K., Catanzaro, B. C., Patterson, D. A., & Yelick, K. A. (2006). The Landscape of Parallel Computing Research : A View from Berkeley. | es_CO |
dc.relation.references | Atienza, D., Colmenar, J. M., & Garnica, O. (2010). Parallel heuristics for scalable community detection. Parallel Computing. https://doi.org/10.1016/j.parco.2010.07.001 | es_CO |
dc.relation.references | Auckenthaler, T., Blum, V., Bungartz, H.-J., Huckle, T., Johanni, R., Krämer, L., … Willems, P. (2011). Parallel solution of partial symmetric eigenvalue problems from electronic structure calculations. | es_CO |
dc.relation.references | Barbara Chapman, Gabriele Jost, R. V. D. P. (n.d.). Using OpenMP, Portable Shared Memory Parallel Programming. | es_CO |
dc.relation.references | Blelloch, G. E. (2012). NESL: A Nested Data-Parallel Language. | es_CO |
dc.relation.references | Christian Terboven, Dieter an Mey, Dirk Schmidl, Henry Jin, T. R. (2008). Data and thread affinity in openmp programs. | es_CO |
dc.relation.references | Dorta, A., García, L., González, J. R., León, C., & Rodríguez, C. (n.d.). Aproximación paralela a la técnica Divide y Vencerás. | es_CO |
dc.relation.references | Eduard Ayguad, Nawal Copty, Duran Alejandro, Hoeflinger Jay, Yuan Lin, Massaioli Federico, Su Ernesto, Unnikrishnan Priya, G. Z. (n.d.). A Proposal for Task Parallelism in OpenMP. | es_CO |
dc.relation.references | Eduard Ayguadé, Nawal Copty, Alejandro Duran, Jay Hoeflinger, Yuan Lin, Federico Massaioli, Xavier Teruel, Priya Unnikrishnan, G. Z. (2009). The Design of OpenMP Tasks. | es_CO |
dc.relation.references | Foster, I. (1995). Designing and Building Parallel Programs. Interface, (Noviembre). https://doi.org/10.1109/MCC.1997.588301 | es_CO |
dc.relation.references | Francisco Almeida, Domingo Giménez, José Miguel Mantas, A. M. V. (n.d.). Sobre la situación del paralelismo y la programación paralela en los Grados de Ingeniería Informática. | es_CO |
dc.relation.references | Frederico Pratasa, Pedro Trancosob, Leonel Sousaa, Alexandros, S., & Shid Guochun, K. V. (2012). Fine-grain parallelism using multi-core, Cell/BE, and GPU Systems. | es_CO |
dc.relation.references | Haoqiang Jina, Dennis Jespersena, Piyush Mehrotraa, Rupak Biswasa, Lei Huangb, B. C. (2011). High performance computing using MPI and OpenMP on multi. | es_CO |
dc.relation.references | J.Aguilar, E. L. (2006). Introducción a la Computación Paralela. Memory. https://doi.org/10.1157/13068212 | es_CO |
dc.relation.references | Jiangzhou He, W. C., & Zhizhong, T. (2015). NestedMP: Enabling cache-aware thread mapping for nested parallel shared memory applications. | es_CO |
dc.relation.references | John H. Gibbons. (1989). High Performance Computing and Networking for Science September 1989. Performance Computing, (September). | es_CO |
dc.relation.references | Julio Monetti, O. L. (n.d.). Uso de threads para la ejecución en paralelo sobre una malla computacional. | es_CO |
dc.relation.references | Marco Oliverio, William Spataro, Donato D’Ambrosio, Rocco Rongo , Giuseppe Spingola, G. T. (2011). OpenMP parallelization of the SCIARA Cellular Automata lava flow model. | es_CO |
dc.relation.references | Quinn, M. J. (n.d.). Parallel Programming in C with MPI and OpenMP (Vol. 1). | es_CO |
dc.relation.references | Nasim Muhammad, H. J. E. (n.d.). OpenMP Parallelization of a Mickens Time-Integration Scheme for a Mixed-Culture Biofilm Model and Its Performance on Multi-core and Multi-processor Computers. | es_CO |
dc.relation.references | OpenMP. (2013). OpenMP Application Program Interface. The OpenMP Forum, Tech. Rep, (July), 320. https://doi.org/10.1080/08905769008604595 | es_CO |
dc.relation.references | Patrick Carribault, Marc Pérache, H. J. (2010). Enabling Low-Overhead Hybrid MPI/OpenMP Parallelism with MPC. | es_CO |
dc.relation.references | Piccoli, M. F. (2011). Computación de alto desempeño en GPU. Retrieved from http://sedici.unlp.edu.ar/bitstream/handle/10915/18404/Documento_completo__.pdf?sequence=1 | es_CO |
dc.relation.references | Reinders, J., & Jeffers, J. (2015). High Performance Parallelism Pearls, Volume One. High Performance Parallelism Pearls. https://doi.org/10.1016/B978-0-12-802118-7.00007-8 | es_CO |
dc.relation.references | Sampieri, R. H., Collado, C. F., & Lucio, P. B. (n.d.). Metodologia de. | es_CO |
dc.relation.references | Santa Cruz, C. (2007). Programando en Fortran. Fortran, 118. | es_CO |
dc.relation.references | Saraswat, V., Tardieu, O., Grove, D., Cunningham, D., Takeuchi, M., & Herta, B. (2012). A Brief Introduction to X10 (for the High Performance Programmer). The IBM Corporation, 10. | es_CO |
dc.relation.references | Severance, C. (2010). High Performance Computing. Hpc. Retrieved from http://www.computer.org/csdl/mags/pd/1994/03/p3085.pdf | es_CO |
dc.relation.references | Sima, D., Fountain, T. J., & Kacsuk, P. (n.d.). Part IV, Chapter 15: Introduction to MIMD Architectures. | es_CO |
dc.rights.accessrights | http://purl.org/coar/access_right/c_abf2 | es_CO |
dc.type.coarversion | http://purl.org/coar/resource_type/c_2df8fbb1 | es_CO |
Appears in Collections: | Ingeniería de Sistemas |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
RIAÑO_2017_TG.pdf | | 4.86 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.