Accelerating scientific computations with mixed precision algorithms
| Published in: | Computer Physics Communications, Vol. 180, No. 12, pp. 2526-2533 |
|---|---|
| Main authors: | , , , , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 01.12.2009 |
| Keywords: | |
| ISSN: | 0010-4655, 1879-2944 |
| Online access: | Full text |
Summary: On modern architectures, 32-bit floating point operations often run at least twice as fast as 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here applies not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
Program title: ITER-REF
Catalogue identifier: AECO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7211
No. of bytes in distributed program, including test data, etc.: 41 862
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: desktop, server
Operating system: Unix/Linux
RAM: 512 Mbytes
Classification: 4.8
External routines: BLAS (optional)
Nature of problem: On modern architectures, 32-bit floating point operations often run at least twice as fast as 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is generally used to improve numerical stability, resulting in a factorization PA = LU, where P is a permutation matrix. The solution of the system is then obtained by solving Ly = Pb (forward substitution) followed by Ux = y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. To improve the computed solution, an iterative process can be applied: at each step the residual r = b - Ax is computed, a correction z is obtained by solving Az = r with the existing factorization, and the solution is updated as x = x + z. This is the method commonly known as iterative refinement. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision. A minimal illustrative sketch of the mixed precision variant is given below, after the program summary fields.
Running time: seconds/minutes
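The distributed ITER-REF source is not reproduced here; the following FORTRAN 77 sketch only illustrates the mixed precision iterative refinement scheme described above. It assumes the LAPACK routines SGETRF and SGETRS are available for the single precision factorization and triangular solves; the test matrix, dimensions, iteration limit, and convergence tolerance are placeholder choices, not values taken from ITER-REF.

```fortran
      PROGRAM MPIR
*     Illustrative sketch of mixed precision iterative refinement.
*     Assumes the LAPACK routines SGETRF and SGETRS are available.
*     The test matrix, dimensions and tolerance are placeholders,
*     not values taken from the ITER-REF distribution.
      INTEGER NMAX, MAXIT
      PARAMETER (NMAX = 100, MAXIT = 30)
      INTEGER N, IPIV(NMAX), INFO, I, J, K, ITER
      DOUBLE PRECISION AD(NMAX,NMAX), B(NMAX), X(NMAX), R(NMAX)
      DOUBLE PRECISION RNRM, BNRM, EPS
      REAL AS(NMAX,NMAX), RS(NMAX)
      PARAMETER (EPS = 1.0D-12)
      N = 50
*     Build a diagonally dominant test system in double precision
      DO 20 J = 1, N
         DO 10 I = 1, N
            AD(I,J) = 1.0D0 / DBLE(I + J - 1)
   10    CONTINUE
         AD(J,J) = AD(J,J) + DBLE(N)
   20 CONTINUE
      DO 30 I = 1, N
         B(I) = 1.0D0
   30 CONTINUE
*     Step 1: copy A to 32-bit and factor PA = LU in single precision
      DO 50 J = 1, N
         DO 40 I = 1, N
            AS(I,J) = REAL(AD(I,J))
   40    CONTINUE
   50 CONTINUE
      CALL SGETRF(N, N, AS, NMAX, IPIV, INFO)
      IF (INFO .NE. 0) STOP 'SGETRF FAILED'
*     Step 2: initial solve LUx = Pb in single precision
      DO 60 I = 1, N
         RS(I) = REAL(B(I))
   60 CONTINUE
      CALL SGETRS('N', N, 1, AS, NMAX, IPIV, RS, NMAX, INFO)
      DO 70 I = 1, N
         X(I) = DBLE(RS(I))
   70 CONTINUE
*     Step 3: refinement loop; the residual r = b - Ax and the
*     update x = x + z are computed in double precision, while the
*     correction solve Az = r reuses the single precision factors
      DO 200 ITER = 1, MAXIT
         RNRM = 0.0D0
         BNRM = 0.0D0
         DO 90 I = 1, N
            R(I) = B(I)
            DO 80 K = 1, N
               R(I) = R(I) - AD(I,K) * X(K)
   80       CONTINUE
            RNRM = MAX(RNRM, ABS(R(I)))
            BNRM = MAX(BNRM, ABS(B(I)))
   90    CONTINUE
         IF (RNRM .LE. EPS * BNRM) GO TO 210
         DO 100 I = 1, N
            RS(I) = REAL(R(I))
  100    CONTINUE
         CALL SGETRS('N', N, 1, AS, NMAX, IPIV, RS, NMAX, INFO)
         DO 110 I = 1, N
            X(I) = X(I) + DBLE(RS(I))
  110    CONTINUE
  200 CONTINUE
  210 WRITE(*,*) 'ITERATIONS:', ITER, '  MAX RESIDUAL:', RNRM
      END
```

The point of this arrangement is that the O(n^3) factorization and both triangular solves run entirely in 32-bit arithmetic, while only the O(n^2) residual computation and solution update are carried out in 64-bit arithmetic, which is where the performance gain of the mixed precision approach comes from.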
| ISSN: | 0010-4655, 1879-2944 |
|---|---|
| DOI: | 10.1016/j.cpc.2008.11.005 |