Towards effective assessment of steady state performance in Java software: are we there yet?
| Published in: | Empirical Software Engineering: An International Journal, Volume 28, Issue 1, Article 13 |
|---|---|
| Main authors: | Traini, Luca; Cortellessa, Vittorio; Di Pompeo, Daniele; Tucci, Michele |
| Medium: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 01.01.2023 (Springer Nature B.V.) |
| ISSN: | 1382-3256, 1573-7616 |
| Online access: | Get full text |
| Abstract | Microbenchmarking is a widely used form of performance testing in Java software. A microbenchmark repeatedly executes a small chunk of code while collecting measurements related to its performance. Due to Java Virtual Machine optimizations, microbenchmarks are usually subject to severe performance fluctuations in the first phase of their execution (also known as warmup). For this reason, software developers typically discard measurements from this phase and focus their analysis on the phase in which benchmarks reach a steady state of performance. Developers estimate the end of the warmup phase based on their expertise and configure their benchmarks accordingly. Unfortunately, this approach rests on two strong assumptions: (i) benchmarks always reach a steady state of performance, and (ii) developers accurately estimate warmup. In this paper, we show that Java microbenchmarks do not always reach a steady state, and that developers often fail to accurately estimate the end of the warmup phase. We found that a considerable portion of the studied benchmarks do not hit the steady state, and that warmup estimates provided by software developers are often inaccurate (with a large error). This has significant implications in terms of both result quality and time effort. Furthermore, we found that dynamic reconfiguration significantly improves warmup estimation accuracy, but it still induces suboptimal warmup estimates and relevant side effects. We envision this paper as a starting point for supporting the introduction of more sophisticated automated techniques that can ensure result quality in a timely fashion. |
|---|---|
| ArticleNumber | 13 |
| Author | Traini, Luca; Cortellessa, Vittorio; Di Pompeo, Daniele; Tucci, Michele |
| Author details | Luca Traini (ORCID 0000-0003-3676-0645; luca.traini@univaq.it), Vittorio Cortellessa (ORCID 0000-0002-4507-464X), and Daniele Di Pompeo (ORCID 0000-0003-2041-7375): Department of Information Engineering, Computer Science and Mathematics, University of L’Aquila. Michele Tucci (ORCID 0000-0002-0329-1101): Department of Distributed and Dependable Systems, Faculty of Mathematics and Physics, Charles University. |
| Copyright | The Author(s) 2022. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.1007/s10664-022-10247-x |
| Discipline | Computer Science |
| EISSN | 1573-7616 |
| ISSN | 1382-3256 |
| Open Access | Yes |
| Peer Reviewed | Yes |
| Issue | 1 |
| Keywords | Performance evaluation; Performance testing; Java; Microbenchmarking; JMH |
| Language | English |
| OpenAccessLink | https://link.springer.com/10.1007/s10664-022-10247-x |
| PublicationDate | 2023-01-01 |
| PublicationPlace | New York |
| PublicationSubtitle | An International Journal |
| PublicationTitle | Empirical software engineering : an international journal |
| PublicationTitleAbbrev | Empir Software Eng |
| PublicationYear | 2023 |
| Publisher | Springer US Springer Nature B.V |
USENIX Association, Carlsbad, pp 409–425 – reference: He S, Manns G, Saunders J, Wang W, Pollock L, Soffa M L (2019) A statistics-based performance testing methodology for cloud applications. In: Proceedings of the 2019 27th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering, ESEC/FSE 2019. https://doi.org/10.1145/3338906.3338912. Association for Computing Machinery, New York, pp 188–199 – reference: DavisonACHinkleyDVBootstrap methods and their application. Cambridge Series in Statistical and Probabilistic Mathematics1997CambridgeCambridge University Press10.1017/CBO97805118028430886.62001https://doi.org/10.1017/CBO9780511802843 – reference: Giese H, Lambers L, Zöllner C (2020) From classic to agile: experiences from more than a decade of project-based modeling education. In: Guerra E, Iovino L (eds) MODELS ’20: ACM/IEEE 23rd international conference on model driven engineering languages and systems, virtual event, Canada, 18–23 October, 2020, companion proceedings. https://doi.org/10.1145/3417990.3418743. ACM, pp 22:1–22:10 – reference: CortellessaVDi PompeoDEramoRTucciMA model-driven approach for continuous performance engineering in microservice-based systemsJ Syst Softw202218311108410.1016/j.jss.2021.111084https://doi.org/10.1016/j.jss.2021.111084. https://www.sciencedirect.com/science/article/pii/S0164121221001813 – reference: Ding Z, Chen J, Shang W (2020) Towards the use of the readily available tests from the release pipeline as performance tests: are we there yet? In: Rothermel G, Bae D (eds) ICSE ’20: 42nd international conference on software engineering, Seoul, South Korea, 27 June–19 July, 2020. https://doi.org/10.1145/3377811.3380351. ACM, pp 1435–1446 – reference: CostaDBezemerCPLeitnerPAndrzejakAWhat’s wrong with my benchmark results? 
Studying bad practices in jmh benchmarksIEEE Trans Softw Eng20214771452146710.1109/TSE.2019.2925345https://doi.org/10.1109/TSE.2019.2925345 – ident: 10247_CR44 doi: 10.1145/3427921.3450243 – start-page: 205 volume-title: Analysis of changepoint models year: 2011 ident: 10247_CR14 doi: 10.1017/CBO9780511984679.011 – volume: 22 start-page: 79 issue: 1 year: 1951 ident: 10247_CR26 publication-title: Ann Math Stat doi: 10.1214/aoms/1177729694 – ident: 10247_CR8 doi: 10.1109/ICSME.2017.13 – volume: 41 start-page: 1091 issue: 11 year: 2015 ident: 10247_CR22 publication-title: IEEE Trans Softw Eng doi: 10.1109/TSE.2015.2445340 – ident: 10247_CR48 – ident: 10247_CR1 doi: 10.1002/smr.2276 – volume: 85 start-page: 1501 issue: 8 year: 2005 ident: 10247_CR31 publication-title: Signal Process doi: 10.1016/j.sigpro.2005.01.012 – ident: 10247_CR5 doi: 10.1109/MSR.2017.62 – ident: 10247_CR50 doi: 10.1145/3485136 – volume: 24 start-page: 139 issue: 1 year: 2017 ident: 10247_CR7 publication-title: Autom Softw Eng doi: 10.1007/s10515-015-0188-0 – ident: 10247_CR3 – ident: 10247_CR43 doi: 10.1145/2884781.2884871 – ident: 10247_CR53 doi: 10.1109/ICSME.2017.67 – ident: 10247_CR13 doi: 10.1145/3377811.3380351 – ident: 10247_CR35 doi: 10.1145/1508244.1508275 – ident: 10247_CR27 doi: 10.1145/3196398.3196407 – ident: 10247_CR41 doi: 10.1109/MSR.2017.54 – volume: 114 start-page: 169 issue: 525 year: 2019 ident: 10247_CR15 publication-title: J Am Stat Assoc doi: 10.1080/01621459.2017.1385466 – volume: 26 start-page: 133 issue: 6 year: 2021 ident: 10247_CR30 publication-title: Empir Softw Eng doi: 10.1007/s10664-021-10037-x – volume: 25 start-page: 101 issue: 2 year: 2000 ident: 10247_CR52 publication-title: J Educ Behav Stat – volume: 107 start-page: 1590 issue: 500 year: 2012 ident: 10247_CR25 publication-title: J Am Stat Assoc doi: 10.1080/01621459.2012.737745 – ident: 10247_CR42 doi: 10.1109/ASE.2019.00123 – ident: 10247_CR23 doi: 10.1145/2491894.2464160 – ident: 10247_CR21 doi: 
10.1145/3338906.3338912 – volume: 60 start-page: 291 issue: 2 year: 1997 ident: 10247_CR2 publication-title: J Stat Plan Inference doi: 10.1016/S0378-3758(96)00138-3 – ident: 10247_CR32 doi: 10.1145/3030207.3030213 – ident: 10247_CR4 doi: 10.1145/3133876 – volume: 24 start-page: 2469 issue: 4 year: 2019 ident: 10247_CR28 publication-title: Empir Softw Eng doi: 10.1007/s10664-019-09681-1 – volume: 47 start-page: 1528 issue: 8 year: 2021 ident: 10247_CR39 publication-title: IEEE Trans Softw Eng doi: 10.1109/TSE.2019.2927908 – ident: 10247_CR29 doi: 10.1145/3368089.3409683 – volume: 47 start-page: 1452 issue: 7 year: 2021 ident: 10247_CR11 publication-title: IEEE Trans Softw Eng doi: 10.1109/TSE.2019.2925345 – ident: 10247_CR19 doi: 10.1145/3417990.3418743 – volume: 183 start-page: 111084 year: 2022 ident: 10247_CR10 publication-title: J Syst Softw doi: 10.1016/j.jss.2021.111084 – volume: 16 start-page: 175 issue: 2 year: 1954 ident: 10247_CR16 publication-title: J R Stat Soc B: Stat (Methodol) doi: 10.1111/j.2517-6161.1954.tb00159.x – ident: 10247_CR33 – ident: 10247_CR46 doi: 10.1109/ICDCSW.2011.20 – ident: 10247_CR34 doi: 10.1145/3092703.3092725 – ident: 10247_CR47 doi: 10.1145/3030207.3030226 – ident: 10247_CR18 doi: 10.1145/1297027.1297033 – ident: 10247_CR36 doi: 10.1145/1508244.1508275 – ident: 10247_CR38 – ident: 10247_CR37 doi: 10.1007/978-3-319-22183-0_29 – volume: 98 start-page: 408 issue: P3 year: 2015 ident: 10247_CR6 publication-title: Sci Comput Program doi: 10.1016/j.scico.2013.02.001 – ident: 10247_CR40 – volume-title: Bootstrap methods and their application. 
Cambridge Series in Statistical and Probabilistic Mathematics year: 1997 ident: 10247_CR12 doi: 10.1017/CBO9780511802843 – volume: 27 start-page: 74 issue: 3 year: 2022 ident: 10247_CR49 publication-title: Empir Softw Eng doi: 10.1007/s10664-021-10069-3 – ident: 10247_CR24 – ident: 10247_CR51 – ident: 10247_CR9 doi: 10.4324/9780203771587 – ident: 10247_CR20 – ident: 10247_CR17 – ident: 10247_CR45 doi: 10.1145/2884781.2884830 |
| Snippet | Microbenchmarking is a widely used form of performance testing in Java software. A microbenchmark repeatedly executes a small chunk of code while collecting... |
| StartPage | 13 |
| SubjectTerms | Automation; Benchmarks; Compilers; Computer Science; Estimates; Interpreters; Java; Programming Languages; Reconfiguration; Software; Software development; Software Engineering/Programming and Operating Systems; Steady state; Virtual environments |
| Title | Towards effective assessment of steady state performance in Java software: are we there yet? |
| URI | https://link.springer.com/article/10.1007/s10664-022-10247-x https://www.proquest.com/docview/2740745773 |
| Volume | 28 |