Search results - Loop Level Parallelization
1. INTERNATIONAL PATENT: MYNATIX LTD FILES APPLICATION FOR "HIGH-PERFORMANCE CODE PARALLELIZATION COMPILER WITH LOOP-LEVEL PARALLELIZATION"
Published: Washington, D.C., HT Digital Streams Limited, 03.11.2024. Published in: US Fed News Service, Including US State News (03.11.2024)
Full text
Newsletter
2. Multiple execution of the same MPI application: exploiting parallelism at hotspots with minimal code changes
ISSN: 1869-2672, 1869-2680. Published: Berlin/Heidelberg, Springer Berlin Heidelberg, 01.12.2025. Published in: GEM international journal on geomathematics (01.12.2025)
"… parallelization of compute-intensive loops without data dependencies. Splitting the work at such hotspots between the instances represents an independent level of parallelization on top of the domain decomposition …"
Full text
Journal Article
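The snippet in result 2 refers to splitting a dependence-free hotspot loop between MPI instances. A minimal sketch of that general idea, not the cited paper's implementation (N, work() and the final reduction are made-up placeholders):

    /* Illustrative sketch only: a dependence-free hotspot loop split across
     * MPI ranks by a block distribution of the iteration space. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000

    double work(long i) { return (double)i * 0.5; }   /* stand-in for the loop body */

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank takes a contiguous block of iterations. */
        long chunk = (N + size - 1) / size;
        long lo = rank * chunk;
        long hi = (lo + chunk < N) ? lo + chunk : N;

        double local = 0.0;
        for (long i = lo; i < hi; i++)
            local += work(i);

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("total = %f\n", total);

        MPI_Finalize();
        return 0;
    }

Each rank processes its own block of the iteration space, and the partial results are combined with MPI_Reduce, which is the sense in which this level of parallelism sits on top of an existing domain decomposition.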
3. Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code
ISSN: 0254-7821, 2413-7219. Published: Mehran University of Engineering and Technology, 01.01.2017. Published in: Mehran University Research Journal of Engineering and Technology (01.01.2017)
"… Such repetitive code is commonly known as hotspot code. We observed that compute intensive hotspots often possess exploitable loop level parallelism. A JIT (Just-in-Time …"
Full text
Journal Article
4. An Implementation of LLVM Pass for Loop Parallelization Based on IR-Level Directives
Published: IEEE, 01.11.2018. Published in: 2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW) (01.11.2018)
"… Currently, multicore processors are widely used, and processing performance can be improved on many machines by exploiting thread level parallelism …"
Full text
Conference Proceedings
5. Scalable parallel implementation of exact inference in Bayesian networks
ISBN: 9780769526126, 0769526128. ISSN: 1521-9097. Published: IEEE, 2006. Published in: 12th International Conference on Parallel and Distributed Systems (ICPADS'06) (2006)
"… We explore two levels of parallelization: top level parallelization which uses pointer jumping to stride across nodes …"
Full text
Conference Proceedings
6. Directive-Based Parallelization of For-Loops at LLVM IR Level
Published: IEEE, 01.07.2019. Published in: 2019 20th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD) (01.07.2019)
"… In this paper, we design the IR-level parallelization directives for the LLVM infrastructure and implement them in LLVM …"
Full text
Conference Proceedings
7. Combining Data Reuse With Data-Level Parallelization for FPGA-Targeted Hardware Compilation: A Geometric Programming Framework
ISSN: 0278-0070, 1937-4151. Published: New York, IEEE, 01.03.2009. Published in: IEEE transactions on computer-aided design of integrated circuits and systems (01.03.2009)
"… ) decisions and loop-level parallelization, in the context of field-programmable-gate-array-targeted hardware compilation …"
Full text
Journal Article
8. Improving scalability of Earth system models through coarse-grained component concurrency – a case study with the ICON v2.6.5 modelling system
ISSN: 1991-9603, 1991-959X, 1991-962X. Published: Katlenburg-Lindau, Copernicus GmbH, 21.12.2022. Published in: Geoscientific Model Development (21.12.2022)
"… dimension that complements typically used parallelization methods such as domain decomposition and loop-level shared-memory approaches …"
Full text
Journal Article
9. Hybrid Approach for Parallelization of Sequential Code with Function Level and Block Level Parallelization
ISBN: 0769525547, 9780769525549. Published: IEEE, 2006. Published in: PARELEC 2006: International Symposium on Parallel Computing in Electrical Engineering, 13-17 September 2006, Bialystok, Poland (2006)
"… with functional level analysis for parallelization of sequential code and illustrates its advantages over block …"
Full text
Conference Proceedings
10. Acceleration of Semiempirical QM/MM Methods through Message Passing Interface (MPI), Hybrid MPI/Open Multiprocessing, and Self-Consistent Field Accelerator Implementations
ISSN: 1549-9626. Published: United States, 08.08.2017. Published in: Journal of chemical theory and computation (08.08.2017)
"… The serial version of the code was first profiled to identify routines that required parallelization …"
Further information
Journal Article
11. An efficient parallel algorithm for 3D magnetotelluric modeling with edge-based finite element
ISSN: 1420-0597, 1573-1499. Published: Cham, Springer International Publishing, 01.02.2021. Published in: Computational geosciences (01.02.2021)
"… The algorithm is based on distributed matrix storage and has three levels of parallelism. The first two are process level parallelization for frequencies and matrix solving, and the last is thread-level parallelization for loop unrolling …"
Full text
Journal Article
12. Programming parallel dense matrix factorizations and inversion for new-generation NUMA architectures
ISSN: 0743-7315, 1096-0848. Published: Elsevier Inc, 01.05.2023. Published in: Journal of parallel and distributed computing (01.05.2023)
"… by proposing multi-domain implementations for DMFI plus a hybrid task- and loop-level parallelization …"
Full text
Journal Article
13. MapReduce inspired loop mapping for coarse-grained reconfigurable architecture
ISSN: 1674-733X, 1869-1919. Published: Heidelberg, Science China Press, 01.12.2014. Published in: Science China. Information sciences (01.12.2014)
"… The proposed approach can find the optimal unrolling factor for each level loop, resulting in better parallelization of loops …"
Full text
Journal Article
14. Parallelizing more Loops with Compiler Guided Refactoring
ISBN: 9781467325080, 1467325082, 0769547966, 9780769547961. ISSN: 0190-3918. Published: IEEE, 01.09.2012. Published in: 2012 41st International Conference on Parallel Processing (01.09.2012)
"… The performance of many parallel applications relies not on instruction-level parallelism but on loop-level parallelism …"
Full text
Conference Proceedings
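Several of these results (e.g. 14 and 17) concern loop-level parallelism, i.e. executing independent iterations of a loop concurrently. A minimal, generic OpenMP sketch of the idea, not taken from any of the cited papers (array names and sizes are illustrative):

    /* Minimal sketch of loop-level parallelism with an OpenMP work-sharing
     * directive; arrays and sizes are made up for illustration. */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        for (long i = 0; i < N; i++)
            b[i] = (double)i;

        /* The iterations are independent, so the whole loop can be
         * distributed across threads with a single directive. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = 2.0 * b[i] + 1.0;

        printf("a[N-1] = %f\n", a[N - 1]);
        return 0;
    }

Nothing here is specific to the cited compiler-guided refactoring; it only shows the kind of dependence-free loop such tools try to expose.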
15. Time stamp algorithms for runtime parallelization of DOACROSS loops with dynamic dependences
ISSN: 1045-9219, 1558-2183. Published: New York, IEEE, 01.05.2001. Published in: IEEE transactions on parallel and distributed systems (01.05.2001)
"… This paper presents a time stamp algorithm for runtime parallelization of general DOACROSS loops that have indirect access patterns …"
Full text
Journal Article
16. NUMA-Aware Dense Matrix Factorizations and Inversion with Look-Ahead on Multicore Processors
ISSN: 2643-3001. Published: IEEE, 01.11.2022. Published in: Proceedings (Symposium on Computer Architecture and High Performance Computing) (01.11.2022)
"… ". In addition, it exploits both hybrid task- and loop-level parallelizations while taking into account the NUMA organization of the memory hierarchy …"
Full text
Conference Proceedings
17. A practical run-time technique for exploiting loop-level parallelism
ISSN: 0164-1212, 1873-1228. Published: New York, Elsevier Inc, 01.11.2000. Published in: The Journal of systems and software (01.11.2000)
"… ) test, to further exploit loop-level parallelism. Two main characteristics make the SPNT test distinguished …"
Full text
Journal Article
18. Parallelizing Bzip2: A Case Study in Multicore Software Engineering
ISSN: 0740-7459, 1937-4194. Published: Los Alamitos, CA, IEEE, 01.11.2009. Published in: IEEE software (01.11.2009)
"… We conducted a case study of parallelizing a real program for multicore computers using currently available libraries and tools. We selected the sequential …"
Full text
Journal Article
19. Speculative parallelization
ISSN: 0018-9162, 1558-0814. Published: New York, NY, IEEE, 01.12.2006. Published in: Computer (Long Beach, Calif.) (01.12.2006)
"… The most promising technique for automatically parallelizing loops when the system cannot determine dependences at compile time is speculative parallelization …"
Full text
Journal Article
20. On loop transformations of nested loops with affine dependencies
ISSN: 0010-4655, 1879-2944. Published: Amsterdam, Elsevier B.V, 01.09.2001. Published in: Computer physics communications (01.09.2001)
"… The most common parallelization methods used are loop-level transformations based on unimodular transformations, and the most useful unimodular transformations are inner and outer loop …"
Full text
Journal Article, Conference Proceedings
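Result 20 mentions unimodular loop transformations; loop interchange is the standard example. A small sketch, with an illustrative array A and made-up bounds N and M, of how interchange can move the parallelizable loop to the outer position:

    /* Sketch of loop interchange, a unimodular transformation that can move
     * a parallelizable loop outward in a nest with affine dependences. */
    #include <stdio.h>

    #define N 512
    #define M 512

    static double A[N][M];

    int main(void) {
        /* Original nest: the dependence A[i][j] <- A[i-1][j] is carried by i,
         * so only the inner j loop is parallel:
         *
         *   for (int i = 1; i < N; i++)
         *     for (int j = 0; j < M; j++)
         *       A[i][j] = A[i-1][j] + 1.0;
         *
         * After interchange the distance vector (1,0) becomes (0,1): the new
         * outer j loop carries no dependence and can be parallelized. */
        for (int j = 0; j < M; j++)          /* outer loop: independent iterations */
            for (int i = 1; i < N; i++)
                A[i][j] = A[i - 1][j] + 1.0;

        printf("A[N-1][M-1] = %f\n", A[N - 1][M - 1]);
        return 0;
    }

Interchange is legal here because the transformed dependence distance remains lexicographically positive, which is the criterion the unimodular framework in such papers formalizes.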

