Best bang for your buck: GPU nodes for GROMACS biomolecular simulations
| Published in: | Journal of Computational Chemistry, Vol. 36, Issue 26, pp. 1990–2008 |
|---|---|
| Main authors: | Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: Blackwell Publishing Ltd / Wiley Subscription Services, Inc. / John Wiley and Sons Inc, October 5, 2015 |
| Keywords: | molecular dynamics; GPU; parallel computing; benchmark; energy efficiency; hybrid parallelization |
| ISSN: | 0192-8651 (print); 1096-987X (online) |
| Online access: | Full text |
| Abstract | The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
Molecular dynamics (MD) simulation is a crucial tool for the study of (bio)molecules. MD simulations typically run for weeks or months even on modern computer clusters. Choosing the optimal hardware for carrying out these simulations can increase the trajectory output twofold or threefold. With GROMACS, the maximum amount of MD trajectory for a fixed budget is produced using nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources. |
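The abstract's central cost argument — that over a few years of operation, electrical power and cooling can exceed the purchase price, so the quantity to maximize is trajectory produced per unit of total cost — can be sketched numerically. This is a minimal illustration of that reasoning only: every price, power draw, electricity rate, cooling overhead, and ns/day figure below is an invented assumption, not a number taken from the paper.

```python
# Illustrative sketch of the trajectory-per-cost argument from the abstract.
# All numeric inputs are assumptions for demonstration, not data from the paper.

def lifetime_cost_eur(hardware_eur, power_draw_kw, years,
                      eur_per_kwh=0.25, cooling_overhead=0.4):
    """Hardware price plus energy and cooling costs over the node's lifetime."""
    hours = years * 365 * 24
    energy_eur = power_draw_kw * hours * eur_per_kwh * (1 + cooling_overhead)
    return hardware_eur + energy_eur

def ns_per_eur(ns_per_day, hardware_eur, power_draw_kw, years):
    """Trajectory produced per euro of total lifetime cost (ns/EUR)."""
    total_ns = ns_per_day * 365 * years
    return total_ns / lifetime_cost_eur(hardware_eur, power_draw_kw, years)

# Hypothetical CPU-only node vs. the same node with a consumer GPU added:
cpu_only = ns_per_eur(ns_per_day=40, hardware_eur=2000, power_draw_kw=0.25, years=5)
with_gpu = ns_per_eur(ns_per_day=90, hardware_eur=2500, power_draw_kw=0.40, years=5)
print(f"CPU only : {cpu_only:.2f} ns/EUR")
print(f"CPU + GPU: {with_gpu:.2f} ns/EUR")
```

With these assumed numbers, the 5-year energy and cooling bill of the CPU-only node (about EUR 3800) already exceeds its EUR 2000 hardware price, and the GPU node delivers more trajectory per euro despite drawing more power — the same qualitative conclusion the abstract draws for well-balanced CPU/consumer-GPU nodes.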
| Author | Kutzner, Carsten (ckutzne@gwdg.de); Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut |
| AuthorAffiliation | 1. Theoretical and Computational Biophysics Department, Max Planck Institute for Biophysical Chemistry, Am Fassberg 11, 37077 Göttingen, Germany; 2. Theoretical and Computational Biophysics, KTH Royal Institute of Technology, 17121 Stockholm, Sweden |
| BackLink | PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26238484; SwePub (Kungliga Tekniska Högskolan): https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173956 |
| CODEN | JCCHDD |
| ContentType | Journal Article |
| Copyright | 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. Copyright Wiley Subscription Services, Inc. Oct 5, 2015 |
| DOI | 10.1002/jcc.24030 |
| Discipline | Chemistry |
| EISSN | 1096-987X |
| EndPage | 2008 |
| ExternalDocumentID | PMC5042102; PMID 26238484 |
| Genre | Journal Article; Feature; News; Research Support, Non-U.S. Gov't |
| GrantInformation | DFG priority programme "Software for Exascale Computing" (SPP 1648) |
| ISICitedReferencesCount | 208 |
| ISSN | 0192-8651; 1096-987X |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 26 |
| Keywords | MD; energy efficiency; hybrid parallelization; molecular dynamics; GPU; parallel computing; benchmark |
| Language | English |
| License | Creative Commons Attribution 4.0 (http://creativecommons.org/licenses/by/4.0); Wiley TDM license (http://doi.wiley.com/10.1002/tdm_license_1.1). 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
| OpenAccessLink | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fjcc.24030 |
| PMID | 26238484 |
| PQID | 1711194813 |
| PQPubID | 48816 |
| PageCount | 19 |
| PublicationCentury | 2000 |
| PublicationDate | October 5, 2015 |
| PublicationDateYYYYMMDD | 2015-10-05 |
| PublicationDecade | 2010 |
| PublicationPlace | United States |
| PublicationTitle | Journal of computational chemistry |
| PublicationTitleAlternate | J. Comput. Chem |
| PublicationYear | 2015 |
| Publisher | Blackwell Publishing Ltd Wiley Subscription Services, Inc John Wiley and Sons Inc |
| SSID | ssj0003564 |
| Score | 2.5833957 |
| SecondaryResourceType | review_article |
| Snippet | The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing... |
| SourceID | swepub pubmedcentral proquest pubmed crossref wiley istex |
| SourceType | Open Access Repository Aggregation Database Index Database Enrichment Source Publisher |
| StartPage | 1990 |
| SubjectTerms | Analytical chemistry; benchmark; Benchmarking; Benchmarks; Central processing units; Computer Simulation; CPUs; energy efficiency; GPU; hybrid parallelization; Molecular chemistry; molecular dynamics; Molecular Dynamics Simulation; parallel computing; Simulation; Software; Software and Updates
| Title | Best bang for your buck: GPU nodes for GROMACS biomolecular simulations |
| URI | https://api.istex.fr/ark:/67375/WNG-KS3G5DW6-S/fulltext.pdf https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fjcc.24030 https://www.ncbi.nlm.nih.gov/pubmed/26238484 https://www.proquest.com/docview/1711194813 https://www.proquest.com/docview/1709707237 https://pubmed.ncbi.nlm.nih.gov/PMC5042102 https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173956 |
| Volume | 36 |