Machine and Collection Abstractions for User‐Implemented Data‐Parallel Programming

Bibliographic Details
Published in: Scientific Programming, Vol. 8, No. 4, pp. 231–246
Author: Haveraaen, Magne
Format: Journal Article
Language: English
Published: 2001
ISSN: 1058-9244, 1875-919X
Online access: Full text
Abstract: Data parallelism has appeared as a fruitful approach to the parallelisation of compute‐intensive programs. Data parallelism has the advantage of mimicking the sequential (and deterministic) structure of programs, as opposed to task parallelism, where the explicit interaction of processes has to be programmed. In data parallelism, data structures (typically collection classes in the form of large arrays) are distributed on the processors of the target parallel machine. Trying to extract distribution aspects from conventional code often runs into problems with a lack of uniformity in the use of the data structures and in the expression of data dependency patterns within the code. Here we propose a framework with two conceptual classes, Machine and Collection. The Machine class abstracts hardware communication and distribution properties. This gives a programmer high‐level access to the important parts of the low‐level architecture. The Machine class may readily be used in the implementation of a Collection class, giving the programmer full control of the parallel distribution of data, as well as allowing normal sequential implementation of this class. Any program using such a collection class will be parallelisable, without requiring any modification, by choosing between sequential and parallel versions at link time. Experiments with a commercial application, built using the Sophus library which uses this approach to parallelisation, show good parallel speed‐ups, without any adaptation of the application program being needed.
DOI: 10.1155/2000/485607
License: http://creativecommons.org/licenses/by/3.0 (CC BY 3.0)
Full text (open access): https://downloads.hindawi.com/journals/sp/2000/485607.pdf