Please use this identifier to cite or link to this item: http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/5479
Full metadata record
DC field: value (language code, where recorded)
dc.contributor.author: Riaño Bejar, Erika Viviana
dc.date.accessioned: 2022-12-15T21:59:15Z
dc.date.available: 2018
dc.date.available: 2022-12-15T21:59:15Z
dc.date.issued: 2018
dc.identifier.citation: Riaño, E. V. (2017). Caracterización de problemas a solucionar mediante programación multiproceso de memoria compartida [Trabajo de grado pregrado, Universidad de Pamplona]. Repositorio Hulago Universidad de Pamplona. http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/5479 (es_CO)
dc.identifier.uri: http://repositoriodspace.unipamplona.edu.co/jspui/handle/20.500.12744/5479
dc.description: The author does not provide information about this item. (es_CO)
dc.description.abstract: The author does not provide information about this item. (es_CO)
dc.format.extent: 315 (es_CO)
dc.format.mimetype: application/pdf (es_CO)
dc.language.iso: es (es_CO)
dc.publisher: Universidad de Pamplona – Facultad de Ingenierías y Arquitectura. (es_CO)
dc.subject: The author does not provide information about this item. (es_CO)
dc.title: Caracterización de problemas a solucionar mediante programación multiproceso de memoria compartida. (es_CO)
dc.type: http://purl.org/coar/resource_type/c_7a1f (es_CO)
dc.date.accepted: 2017
dc.relation.references: Casasús, A., Benito, J. J., Ureña, F., & Gavete Corvinos, L. (2009). Resolución de la ecuación de ondas en 2-D y 3-D utilizando diferencias finitas generalizadas. Consistencia y estabilidad. 1–8. (es_CO)
dc.relation.references: Alcubierre, M. (2005). Introducción a FORTRAN. Fortran, 34. (es_CO)
dc.relation.references: Amritkar, A., Deb, S., & Tafti, D. (2013). Efficient parallel CFD-DEM simulations using OpenMP. (es_CO)
dc.relation.references: Asanovic, K., Catanzaro, B. C., Patterson, D. A., & Yelick, K. A. (2006). The Landscape of Parallel Computing Research: A View from Berkeley. (es_CO)
dc.relation.references: Atienza, D., Colmenar, J. M., & Garnica, O. (2010). Parallel heuristics for scalable community detection. Parallel Computing. https://doi.org/10.1016/j.parco.2010.07.001 (es_CO)
dc.relation.references: Auckenthaler, T., Blum, V., Bungartz, H.-J., Huckle, T., Johanni, R., Krämer, L., … Willems, P. (2011). Parallel solution of partial symmetric eigenvalue problems from electronic structure calculations. (es_CO)
dc.relation.references: Chapman, B., Jost, G., & van der Pas, R. (n.d.). Using OpenMP: Portable Shared Memory Parallel Programming. (es_CO)
dc.relation.references: Blelloch, G. E. (2012). NESL: A Nested Data-Parallel Language. (es_CO)
dc.relation.references: Terboven, C., an Mey, D., Schmidl, D., Jin, H., & Reichstein, T. (2008). Data and thread affinity in OpenMP programs. (es_CO)
dc.relation.references: Dorta, A., García, L., González, J. R., León, C., & Rodríguez, C. (n.d.). Aproximación paralela a la técnica Divide y Vencerás. (es_CO)
dc.relation.references: Ayguadé, E., Copty, N., Duran, A., Hoeflinger, J., Lin, Y., Massaioli, F., Su, E., Unnikrishnan, P., & Zhang, G. (n.d.). A Proposal for Task Parallelism in OpenMP. (es_CO)
dc.relation.references: Ayguadé, E., Copty, N., Duran, A., Hoeflinger, J., Lin, Y., Massaioli, F., Teruel, X., Unnikrishnan, P., & Zhang, G. (2009). The Design of OpenMP Tasks. (es_CO)
dc.relation.references: Foster, I. (1995). Designing and Building Parallel Programs. Interface, (November). https://doi.org/10.1109/MCC.1997.588301 (es_CO)
dc.relation.references: Almeida, F., Giménez, D., Mantas, J. M., & Vidal, A. M. (n.d.). Sobre la situación del paralelismo y la programación paralela en los Grados de Ingeniería Informática. (es_CO)
dc.relation.references: Pratas, F., Trancoso, P., Sousa, L., Stamatakis, A., Shi, G., & Kindratenko, V. (2012). Fine-grain parallelism using multi-core, Cell/BE, and GPU systems. (es_CO)
dc.relation.references: Jin, H., Jespersen, D., Mehrotra, P., Biswas, R., Huang, L., & Chapman, B. (2011). High performance computing using MPI and OpenMP on multi-core parallel systems. (es_CO)
dc.relation.references: Aguilar, J., & Leiss, E. (2006). Introducción a la Computación Paralela. Memory. https://doi.org/10.1157/13068212 (es_CO)
dc.relation.references: He, J., Chen, W., & Tang, Z. (2015). NestedMP: Enabling cache-aware thread mapping for nested parallel shared memory applications. (es_CO)
dc.relation.references: Gibbons, J. H. (1989). High Performance Computing and Networking for Science. Performance Computing, (September). (es_CO)
dc.relation.references: Monetti, J., & O. L. (n.d.). Uso de threads para la ejecución en paralelo sobre una malla computacional. (es_CO)
dc.relation.references: Oliverio, M., Spataro, W., D’Ambrosio, D., Rongo, R., Spingola, G., & Trunfio, G. (2011). OpenMP parallelization of the SCIARA Cellular Automata lava flow model. (es_CO)
dc.relation.references: Quinn, M. J. (n.d.). Parallel Programming in C with MPI and OpenMP (Vol. 1). (es_CO)
dc.relation.references: Muhammad, N., & Eberl, H. J. (n.d.). OpenMP Parallelization of a Mickens Time-Integration Scheme for a Mixed-Culture Biofilm Model and Its Performance on Multi-core and Multi-processor Computers. (es_CO)
dc.relation.references: OpenMP. (2013). OpenMP Application Program Interface. The OpenMP Forum, Tech. Rep., (July), 320. https://doi.org/10.1080/08905769008604595 (es_CO)
dc.relation.references: Carribault, P., Pérache, M., & Jourdren, H. (2010). Enabling Low-Overhead Hybrid MPI/OpenMP Parallelism with MPC. (es_CO)
dc.relation.references: Piccoli, M. F. (2011). Computación de alto desempeño en GPU. Retrieved from http://sedici.unlp.edu.ar/bitstream/handle/10915/18404/Documento_completo__.pdf?sequence=1 (es_CO)
dc.relation.references: Reinders, J., & Jeffers, J. (2015). High Performance Parallelism Pearls, Volume One. High Performance Parallelism Pearls. https://doi.org/10.1016/B978-0-12-802118-7.00007-8 (es_CO)
dc.relation.references: Sampieri, R. H., Collado, C. F., & Lucio, P. B. (n.d.). Metodología de la investigación. (es_CO)
dc.relation.references: Santa Cruz, C. (2007). Programando en Fortran. Fortran, 118. (es_CO)
dc.relation.references: Saraswat, V., Tardieu, O., Grove, D., Cunningham, D., Takeuchi, M., & Herta, B. (2012). A Brief Introduction to X10 (for the High Performance Programmer). The IBM Corporation, 10. (es_CO)
dc.relation.references: Severance, C. (2010). High Performance Computing. HPC. Retrieved from http://www.computer.org/csdl/mags/pd/1994/03/p3085.pdf (es_CO)
dc.relation.references: Sima, D., Fountain, T. J., & Kacsuk, P. (n.d.). Part IV, Chapter 15: Introduction to MIMD Architectures. (es_CO)
dc.rights.accessrights: http://purl.org/coar/access_right/c_abf2 (es_CO)
dc.type.coarversion: http://purl.org/coar/resource_type/c_2df8fbb1 (es_CO)
Appears in collections: Ingeniería de Sistemas

Files in this item:
File                 Description    Size       Format
RIAÑO_2017_TG.pdf                   4.86 MB    Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.