How to certify machine learning based safety-critical systems? A systematic literature review


Bibliographic Details
Published in: Automated software engineering, Vol. 29, Issue 2, Article 38
Main Authors: Tambon, Florian, Laberge, Gabriel, An, Le, Nikanjam, Amin, Mindom, Paulina Stevia Nouwou, Pequignot, Yann, Khomh, Foutse, Antoniol, Giulio, Merlo, Ettore, Laviolette, François
Format: Journal Article
Language: English
Published: New York: Springer US, 01.11.2022
Springer Nature B.V
Springer Verlag
Subjects:
ISSN:0928-8910, 1573-7535
Online Access: Full text
Abstract Context: Machine Learning (ML) has been at the heart of many innovations over the past years. However, including it in so-called “safety-critical” systems such as automotive or aeronautics has proven to be very challenging, since the paradigm shift that ML brings completely changes traditional certification approaches. Objective: This paper aims to elucidate the challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to tackle them, answering the question “How to Certify Machine Learning Based Safety-critical Systems?”. Method: We conduct a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering topics considered to be the main pillars of ML certification: Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning, and Direct Certification. We analyzed the main trends and problems of each sub-field and provided summaries of the papers extracted. Results: The SLR results highlighted the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and types of ML models. They also emphasized the need to further develop connections between academia and industry to deepen the study of the domain, and illustrated the necessity of building connections between the above-mentioned main pillars, which are for now mainly studied separately. Conclusion: We highlighted the current efforts deployed to enable the certification of ML-based software systems, and discussed some future research directions.
ArticleNumber 38
Authors and affiliations:
1. Florian Tambon, Polytechnique Montréal (ORCID: 0000-0001-5593-9400; email: florian-2.tambon@polymtl.ca)
2. Gabriel Laberge, Polytechnique Montréal
3. Le An, Polytechnique Montréal
4. Amin Nikanjam, Polytechnique Montréal
5. Paulina Stevia Nouwou Mindom, Polytechnique Montréal
6. Yann Pequignot, Laval University
7. Foutse Khomh, Polytechnique Montréal
8. Giulio Antoniol, Polytechnique Montréal
9. Ettore Merlo, Polytechnique Montréal
10. François Laviolette, Laval University
BackLink: https://hal.science/hal-04194063 (View record in HAL)
ContentType Journal Article
Copyright The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022.
Distributed under a Creative Commons Attribution 4.0 International License
DOI 10.1007/s10515-022-00337-x
Discipline Computer Science
EISSN 1573-7535
ExternalDocumentID oai:HAL:hal-04194063v1
10_1007_s10515_022_00337_x
GrantInformation_xml – fundername: Canadian Network for Research and Innovation in Machining Technology, Natural Sciences and Engineering Research Council of Canada
  grantid: CRDPJ 537462-18
  funderid: http://dx.doi.org/10.13039/501100002790
– fundername: Consortium for Research and Innovation in Aerospace in Québec
  grantid: CRDPJ 537462-18
ISICitedReferencesCount 48
ISSN 0928-8910
IsPeerReviewed true
IsScholarly true
Issue 2
Keywords Safety-critical
Certification
Machine learning
Systematic literature review
Language English
License Distributed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0
ORCID 0000-0001-5593-9400
PublicationDate 2022-11-01
PublicationPlace New York
PublicationPlace_xml – name: New York
– name: Dordrecht
PublicationSubtitle An International Journal
PublicationTitle Automated software engineering
PublicationTitleAbbrev Autom Softw Eng
PublicationYear 2022
Publisher Springer US
Springer Nature B.V
Springer Verlag
References Wang, Y.S., Weng, T.W., Daniel, L.: Verification of neural network control policy under persistent adversarial perturbation (2019b). ArXiv preprint arXiv:1908.06353
EverettMLütjensBHowJPCertifiable robustness to adversarial state uncertainty in deep reinforcement learningIEEE Trans. Neural Netw. Learn. Syst202110.1109/TNNLS.2021.3056046
Remeli, V., Morapitiye, S., Rövid, A., Szalay, Z.: Towards verifiable specifications for neural networks in autonomous driving. In: 2019 IEEE 19th International Symposium on Computational Intelligence and Informatics and 7th IEEE International Conference on Recent Achievements in Mechatronics, pp. 000175–000180. Automation, Computer Sciences and Robotics (CINTI-MACRo), IEEE (2019)
Wolschke, C., Kuhn, T., Rombach, D., Liggesmeyer, P.: Observation based creation of minimal test suites for autonomous vehicles. In: 2017 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), pp. 294–301 (2017)
Kläs, M., Sembach, L.: Uncertainty wrappers for data-driven models. In: International Conference on Computer Safety, Reliability, and Security. Springer, Berlin. pp. 358–364 (2019)
MnihVKavukcuogluKSilverDRusuAAVenessJBellemareMGGravesARiedmillerMFidjelandAKOstrovskiGHuman-level control through deep reinforcement learningNature2015518529533
WenJLiSLinZHuYHuangCSystematic literature review of machine learning based software development effort estimation modelsInf. Softw. Technol.20125414159
Wang W, Wang A, Tamar, A., Chen, X., Abbeel, P.: Safer classification by synthesis (2018d). ArXiv preprint arXiv:1711.08534
Singh, G., Gehr, T., Püschel, M., Vechev, M.: Boosting robustness certification of neural networks. In: International Conference on Learning Representations (2018)
Grefenstette, E., Stanforth, R., O’Donoghue, B., Uesato, J., Swirszcz, G., Kohli, P.: Strength in numbers: Trading-off robustness and computation via adversarially-trained ensembles. CoRR abs/1811.09300 (2018). arXiv:1811.09300
Ren, J., Liu, P.J., Fertig, E., Snoek, J., Poplin, R., DePristo, M.A., Dillon, J.V., Lakshminarayanan, B.: Likelihood Ratios for Out-of-Distribution Detection, pp. 14707–14718. Curran Associates Inc., Red Hook, NY, USA (2019)
Jain, D., Anumasa, S., Srijith, P.: Decision making under uncertainty with convolutional deep gaussian processes. In: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, pp. 143–151 (2020)
Lee, K., Lee, K., Lee, H., Shin, J.: A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, NIPS’18, pp. 7167–7177 (2018)
Meinke, A., Hein, M.: Towards neural networks that provably know when they don’t know (2019). ArXiv preprint arXiv:1909.12180
Alagöz, I., Herpel, T., German, R.: A selection method for black box regression testing with a statistically defined quality level. In: 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST), pp. 114–125 (2017). https://doi.org/10.1109/ICST.2017.18
Tang, Y.C., Zhang, J., Salakhutdinov, R.: Worst cases policy gradients (2019). ArXiv preprint arXiv:1911.03618
Daniels, Z.A., Metaxas, D.: Scenarionet: An interpretable data-driven model for scene understanding. In: IJCAI Workshop on Explainable Artificial Intelligence (XAI) 2018 (2018)
YanMWangLFeiAARTDL: Adaptive random testing for deep learning systemsIEEE Access2020830553064
Lütjens, B., Everett, M., How, J.P.: Safe reinforcement learning with model uncertainty estimates. In: 2019 International Conference on Robotics and Automation (ICRA), IEEE, pp. 8662–8668 (2019)
Ghosh, S., Berkenkamp, F., Ranade, G., Qadeer, S., Kapoor, A.: Verifying controllers against adversarial examples with bayesian optimization. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE, pp. 7306–7313 (2018a)
RamanagopalMSAndersonCVasudevanRJohnson-RobersonMFailing to learn: Autonomously identifying perception failures for self-driving carsIEEE Robot. Autom. Lett.20183438603867
CastelvecchiDCan we open the black box of AI?Nat News20165382023
Hart, P., Rychly, L., Knoll, A.: Lane-merging using policy-based reinforcement learning and post-optimization. In: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), IEEE, 3176–3181 (2019)
Pan, R.: Static deep neural network analysis for robustness. In: Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ACM, New York, NY, USA, ESEC/FSE 2019, pp. 1238-1240 (2019)
Hein, M., Andriushchenko, M.: Formal guarantees on the robustness of a classifier against adversarial manipulation. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R (eds) Advances in Neural Information Processing Systems, Curran Associates, Inc., vol 30 (2017). https://proceedings.neurips.cc/paper/2017/file/e077e1a544eec4f0307cf5c3c721d944-Paper.pdf
Sinha, A., Namkoong, H., Volpi, R., Duchi, J.: Certifying some distributional robustness with principled adversarial training (2017). ArXiv preprint arXiv:1710.10571
Croce, F., Andriushchenko, M., Hein, M.: Provable robustness of relu networks via maximization of linear regions. In: the 22nd International Conference on Artificial Intelligence and Statistics, PMLR, pp. 2057–2066 (2019)
Levi, D., Gispan, L., Giladi, N., Fetaya, E.: Evaluating and calibrating uncertainty prediction in regression tasks (2019). ArXiv preprint arXiv:1905.11659
Feng, Y., Shi, Q., Gao, X., Wan, J., Fang, C., Chen, Z.: Deepgini: Prioritizing massive tests to enhance the robustness of deep neural networks. In: Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, Association for Computing Machinery, New York, NY, USA, ISSTA 2020, pp. 177-188 (2020). https://doi.org/10.1145/3395363.3397357
Machida, F.: N-version machine learning models for safety critical systems. In: 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), pp. 48–51 (2019)
Lee, K., Wang, Z., Vlahov, B., Brar, H., Theodorou, E.A.: Ensemble bayesian decision making with redundant deep perceptual control policies. In: 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), IEEE, pp. 831–837 (2019b)
Liang, S., Li, Y., Srikant, R.: Enhancing the reliability of out-of-distribution image detection in neural networks (2020). ArXiv preprint arXiv:1706.02690
YounWjun Yi B,: Software and hardware certification of safety-critical avionic systems: A comparison studyComput. Stand. Interfaces2014366889898343378210.1016/j.csi.2014.02.005
Guo, W., Mu, D., Xu, J., Su, P., Wang, G., Xing, X.: Lemna: Explaining deep learning based security applications. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 364–379 (2018b)
Tuncali, C.E., Fainekos, G., Ito, H., Kapinski, J.: Simulation-based adversarial test generation for autonomous vehicles with machine learning components. In: 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1555–1562 (2018)
Henriksson, J., Berger, C., Borg, M., Tornberg, L., Englund, C., Sathyamoorthy, S.R., Ursing, S.: Towards structured evaluation of deep neural network supervisors. In: 2019 IEEE International Conference on Artificial Intelligence Testing (AITest), pp. 27–34 (2019a)
Amini, A., Schwarting, W., Soleimany, A., Rus, D.: Deep evidential regression (2019). ArXiv preprint arXiv:1910.02600
SunYHuangXKroeningDSharpJHillMAshmoreRStructural test coverage criteria for deep neural networksACM Trans. Embed. Comput. Syst.2019185
ISO (2019) ISO/PAS 21448: Road vehicles – Safety of the intended functionality. International Organization of Standardization (ISO), Geneva
Lust, J., Condurache, A.P.: Gran: An efficient gradient-norm based detector for adversarial and misclassified examples (2020). ArXiv preprint arXiv:2004.09179
Aslansefat, K., Sorokos, I., Whiting, D., Kolagari, R.T., Papadopoulos, Y.: Safeml: Safety monitoring of machine learning classifiers through statistical difference measure (2020). ArXiv preprint arXiv:2005.13166
Amit, G., Levy, M., Rosenberg, I., Shabtai, A., Elovici, Y.: Glod: Gaussian likelihood out of distribution detector (2020). ArXiv preprint arXiv:2008.06856
Bar, A., Huger, F., Schlicht, P., Fingscheidt, T.: On the robustness of redundant teacher-student frameworks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1380–1388 (2019)
GuidottiRMonrealeAGiannottiFPedreschiDRuggieriSTuriniFFactual and counterfactual explanations for black box decision makingIEEE Intell. Syst.20193461423
MoravčíkMSchmidMBurchNLisỳVMorrillDBardNDavisTWaughKJohansonMBowlingMDeepstack: Expert-level artificial intelligence in heads-up no-limit pokerScience2017356633750851336769531403.68202
Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: International conference on computer aided verification. Springer, Berlin. pp. 3–29 (2017)
Gambi, A., Mueller, M., Fraser, G.: Automatically testing self-driving cars with search-based procedural content generation. In: Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, ACM, New York, NY, USA, ISSTA 2019, pp. 318-328 (2019)
Tian, Y., Zhong, Z., Ordonez, V., Kaiser, G., Ray, B.: Testing dnn image classifiers for confusion & bias errors. In: Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, Association for Computing Machinery, New York, NY, USA, ICSE ’20, pp. 1122-1134 (2020). https://doi.org/10.1145/3377811.3380400
Dybå, T., Dingsøyr, T.: Empirical studies of agile software development: a systematic review. Inf. Softw. Technol. 50(9), 833–859 (2008). https://doi.org/10.1016/j.infsof.2008.01.006
Hasanbeig, M., Kroening, D., Abate, A.: Towards verifiable and safe model-free reinforcement learning. In: CEUR Workshop Proceedings, CEUR Workshop Proceedings (2020)
– reference: Xie, X., Ma, L., Juefei-Xu, F., Xue, M., Chen, H., Liu, Y., Zhao, J., Li, B., Yin, J., See, S.: Deephunter: A coverage-guided fuzz testing framework for deep neural networks. In: Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, ACM, New York, NY, USA, ISSTA 2019, pp. 146–157 (2019)
– reference: Ren, J., Liu, P.J., Fertig, E., Snoek, J., Poplin, R., DePristo, M.A., Dillon, J.V., Lakshminarayanan, B.: Likelihood Ratios for Out-of-Distribution Detection, pp. 14707–14718. Curran Associates Inc., Red Hook, NY, USA (2019)
– reference: Bar, A., Klingner, M., Varghese, S., Huger, F., Schlicht, P., Fingscheidt, T.: Robust semantic segmentation by redundant networks with a layer-specific loss contribution and majority vote. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 332–333 (2020)
– reference: Fan, D.D., Nguyen, J., Thakker, R., Alatur, N., Agha-mohammadi, A.A., Theodorou, E.A.: Bayesian learning-based adaptive control for safety critical systems. In: 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 4093–4099 (2020). https://doi.org/10.1109/ICRA40945.2020.9196709
– reference: Törnblom, J., Nadjm-Tehrani, S.: Formal verification of input-output mappings of tree ensembles. Sci. Comput. Progr. 194 (2020)
– reference: Wicker, M., Huang, X., Kwiatkowska, M.: Feature-guided black-box safety testing of deep neural networks. In: Beyer, D., Huisman, M. (eds.) Tools and Algorithms for the Construction and Analysis of Systems, pp. 408–426. Springer, Cham (2018)
– reference: Tran, H.D., Yang, X., Lopez, D.M., Musau, P., Nguyen, L.V., Xiang, W., Bak, S., Johnson, T.T.: NNV: The neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In: International Conference on Computer Aided Verification. Springer, Berlin. pp. 3–17 (2020)
– reference: Gal, Y., Ghahramani, Z.: Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In: Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, JMLR.org, ICML’16, pp. 1050-1059 (2016)
– reference: Cheng, C., Nührenberg, G., Yasuoka, H.: Runtime monitoring neuron activation patterns. In: 2019 Design, Automation Test in Europe Conference Exhibition (DATE), pp. 300–303 (2019b). https://doi.org/10.23919/DATE.2019.8714971
– reference: Zhang, M., Li, H., Kuang, X., Pang, L., Wu, Z.: Neuron selecting: Defending against adversarial examples in deep neural networks. In: International Conference on Information and Communications Security. Springer, Berlin. pp. 613–629 (2019a)
– reference: Cosentino, J., Zaiter, F., Pei, D., Zhu, J.: The search for sparse, robust neural networks (2019). ArXiv preprint arXiv:1912.02386
– reference: Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples (2014). ArXiv preprint arXiv:1412.6572
– reference: Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., Wierstra, D.: Continuous control with deep reinforcement learning (2015). ArXiv preprint arXiv:1509.02971
– reference: Marvi, Z., Kiumarsi, B.: Safe off-policy reinforcement learning using barrier functions. In: 2020 American Control Conference (ACC), IEEE, pp. 2176–2181 (2020)
– reference: Julian, K.D., Kochenderfer, M.J.: Guaranteeing safety for neural network-based aircraft collision avoidance systems. In: 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC), IEEE, pp. 1–10 (2019)
– reference: Zhang, J., Li, J.: Testing and verification of neural-network-based safety-critical control software: a systematic literature review. Inf. Softw. Technol. 123 (2020). https://doi.org/10.1016/j.infsof.2020.106296
– reference: Isele, D., Nakhaei, A., Fujimura, K.: Safe reinforcement learning on autonomous vehicles. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, pp. 1–6 (2018)
– reference: Baluta, T., Shen, S., Shinde, S., Meel, KS., Saxena, P.: Quantitative verification of neural networks and its security applications. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 1249–1264 (2019)
– reference: Pei, K., Cao, Y., Yang, J., Jana, S.: Towards practical verification of machine learning: The case of computer vision systems (2017b). ArXiv preprint arXiv:1712.01785
– reference: Anderson, BG., Ma, Z., Li, J., Sojoudi, S.: Tightened convex relaxations for neural network robustness certification. In: 2020 59th IEEE Conference on Decision and Control (CDC), IEEE, pp. 2190–2197 (2020)
– reference: Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., Jana, S.: On the connection between differential privacy and adversarial robustness in machine learning (2018). ArXiv preprint arXiv:180203471v1
– reference: Peng, W., Ye, Z.S., Chen, N.: Bayesian deep-learning-based health prognostics toward prognostics uncertainty. IEEE Trans. Ind. Electron. 67(3), 2283–2293 (2019)
– reference: Arrieta, A.B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)
– reference: Croce, F., Andriushchenko, M., Hein, M.: Provable robustness of relu networks via maximization of linear regions. In: the 22nd International Conference on Artificial Intelligence and Statistics, PMLR, pp. 2057–2066 (2019)
– reference: Jain, D., Anumasa, S., Srijith, P.: Decision making under uncertainty with convolutional deep gaussian processes. In: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, pp. 143–151 (2020)
– reference: Henne, M., Schwaiger, A., Roscher, K., Weiss, G.: Benchmarking uncertainty estimation methods for deep learning with safety-related metrics. In: SafeAI@ AAAI, pp. 83–90 (2020)
– reference: Kuutti, S., Bowden, R., Joshi, H., de Temple, R., Fallah, S.: Safe deep neural network-driven autonomous vehicles using software safety cages. In: Yin, H., Camacho, D., Tino, P., Tallón-Ballesteros, A.J., Menezes, R., Allmendinger, R. (eds.) Intelligent Data Engineering and Automated Learning – IDEAL 2019, pp. 150–160. Springer, Berlin (2019)
– reference: Toubeh, M., Tokekar, P.: Risk-aware planning by confidence estimation using deep learning-based perception (2019). ArXiv preprint arXiv:1910.00101
– reference: Feng, D., Rosenbaum, L., Glaeser, C., Timm, F., Dietmayer, K.: Can we trust you? on calibration of a probabilistic object detector for autonomous driving (2019). ArXiv preprint arXiv:1909.12358
– reference: Wang, J., Gou, L., Zhang, W., Yang, H., Shen, H.W.: Deepvid: Deep visual interpretation and diagnosis for image classifiers via knowledge distillation. IEEE Trans. Visualiz. Comput. Graph. 25(6), 2168–2180 (2019)
– reference: Zhan, W., Li, J., Hu, Y., Tomizuka, M.: Safe and feasible motion generation for autonomous driving via constrained policy net. In: IECON 2017-43rd Annual Conference of the IEEE Industrial Electronics Society, IEEE, pp. 4588–4593 (2017)
– reference: Yaghoubi, S., Fainekos, G.: Gray-box adversarial testing for control systems with machine learning components. In: Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, ACM, New York, NY, USA, HSCC ’19, pp. 179–184 (2019)
– reference: Wabersich, K.P., Hewing, L., Carron, A., Zeilinger, M.N.: Probabilistic model predictive safety certification for learning-based control. IEEE Trans. Autom. Control (2021)
– reference: Chen, Z., Narayanan, N., Fang, B., Li, G., Pattabiraman, K., DeBardeleben, N.: Tensorfi: A flexible fault injection framework for tensorflow applications. In: 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE), pp. 426–435 (2020b). https://doi.org/10.1109/ISSRE5003.2020.00047
– reference: Demir, S., Eniser, H.F., Sen, A.: Deepsmartfuzzer: Reward guided test generation for deep learning (2019). ArXiv preprint arXiv:1911.10621
– reference: Xu, H., Chen, Z., Wu, W., Jin, Z., Kuo, S., Lyu, M.: Nv-dnn: Towards fault-tolerant dnn systems with n-version programming. In: 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), pp. 44–47 (2019)
– reference: Kendall, A., Gal, Y.: What uncertainties do we need in bayesian deep learning for computer vision? In: Proceedings of the 31st International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, NIPS’17, pp. 5580–5590 (2017)
– reference: Dreossi, T., Ghosh, S., Sangiovanni-Vincentelli, A., Seshia, S.A.: Systematic testing of convolutional neural networks for autonomous driving (2017). ArXiv preprint arXiv:1708.03309
– reference: Remeli, V., Morapitiye, S., Rövid, A., Szalay, Z.: Towards verifiable specifications for neural networks in autonomous driving. In: 2019 IEEE 19th International Symposium on Computational Intelligence and Informatics and 7th IEEE International Conference on Recent Achievements in Mechatronics, pp. 000175–000180. Automation, Computer Sciences and Robotics (CINTI-MACRo), IEEE (2019)
– reference: Udeshi, S., Jiang, X., Chattopadhyay, S.: Callisto: Entropy-based test generation and data quality assessment for machine learning systems. In: 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST), pp. 448–453 (2020)
– reference: Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015)
– reference: Sheikholeslami, F., Jain, S., Giannakis, G.B.: Minimum uncertainty based detection of adversaries in deep neural networks. In: 2020 Information Theory and Applications Workshop (ITA), IEEE, pp. 1–16 (2020)
– reference: Meinke, A., Hein, M.: Towards neural networks that provably know when they don’t know (2019). ArXiv preprint arXiv:1909.12180
– reference: Henriksson, J., Berger, C., Borg, M., Tornberg, L., Sathyamoorthy, S.R., Englund, C.: Performance analysis of out-of-distribution detection on various trained neural networks. In: 2019 45th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), pp. 113–120 (2019b). https://doi.org/10.1109/SEAA.2019.00026
– reference: Dey, S., Dasgupta, P., Gangopadhyay, B.: Safety augmentation in decision trees. In: AISafety@ IJCAI (2020)
– reference: Wagner, J., Kohler, J.M., Gindele, T., Hetzel, L., Wiedemer, J.T., Behnke, S.: Interpretable and fine-grained visual explanations for convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9097–9107 (2019)
– reference: Wen, M., Topcu, U.: Constrained cross-entropy method for safe reinforcement learning. IEEE Trans. Autom. Control 66(7) (2020)
– reference: Yan, Y., Pei, Q.: A robust deep-neural-network-based compressed model for mobile device assisted by edge server. IEEE Access 7, 179104–179117 (2019)
– reference: Mani, N., Moh, M., Moh, T.S.: Towards robust ensemble defense against adversarial examples attack. In: 2019 IEEE Global Communications Conference (GLOBECOM), pp. 1–6 (2019a)
– reference: Bunel, R., Lu, J., Turkaslan, I., Torr, P.H., Kohli, P., Kumar, M.P.: Branch and bound for piecewise linear neural network verification. J. MaC.H. Learn. Res. 21(42), 1–39 (2020)
– reference: Cheng, C.H., Huang, C.H., Nührenberg, G.: nn-dependability-kit: Engineering neural networks for safety-critical autonomous driving systems (2019a). ArXiv preprint arXiv:1811.06746
– reference: ISO (2018) ISO 26262: Road vehicles – Functional safety. International Organization of Standardization (ISO), Geneva, Switzerland
– reference: Bacci, E., Parker, D.: Probabilistic guarantees for safe deep reinforcement learning. In: International Conference on Formal Modeling and Analysis of Timed Systems. Springer, Berlin. pp. 231–248 (2020)
– reference: Mani, S., Sankaran, A., Tamilselvam, S., Sethi, A.: Coverage testing of deep learning models using dataset characterization (2019b). ArXiv preprint arXiv:1911.07309
– reference: Chakrabarty, A., Quirynen, R., Danielson, C., Gao, W.: Approximate dynamic programming for linear systems with state and input constraints. In: 2019 18th European Control Conference (ECC), IEEE, pp. 524–529 (2019)
– reference: Lin, W., Yang, Z., Chen, X., Zhao, Q., Li, X., Liu, Z., He, J.: Robustness verification of classification deep neural networks via linear programming. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11418–11427 (2019)
– reference: Rodriguez-Dapena, P.: Software safety certification: a multidomain problem. IEEE Softw. 16(4), 31–38 (1999). https://doi.org/10.1109/52.776946
– reference: Li, S., Chen, Y., Peng, Y., Bai, L.: Learning more robust features with adversarial training. ArXiv preprint arXiv:1804.07757 (2018)
– reference: Kuwajima, H., Tanaka, M., Okutomi, M.: Improving transparency of deep neural inference process. Progr. Artif. Intell. 8(2), 273–285 (2019)
– reference: Dean, S., Matni, N., Recht, B., Ye, V.: Robust guarantees for perception-based control. In: Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR, vol 120, 350–360 (2020)
– reference: Fisac, J.F., Akametalu, A.K., Zeilinger, M.N., Kaynama, S., Gillula, J., Tomlin, C.J.: A general safety framework for learning-based control in uncertain robotic systems. IEEE Trans. Autom. Control 64(7), 2737–2752 (2019). https://doi.org/10.1109/TAC.2018.2876389
– reference: Castelvecchi, D.: Can we open the black box of AI? Nat. News 538, 20–23 (2016)
– reference: Kitchenham, B., Pretorius, R., Budgen, D., Pearl Brereton, O., Turner, M., Niazi, M., Linkman, S.: Systematic literature reviews in software engineering – a tertiary study. Inf. Softw. Technol. 52(8), 792–805 (2010)
– reference: Ghosh, S., Jha, S., Tiwari, A., Lincoln, P., Zhu, X.: Model, data and reward repair: Trusted machine learning for markov decision processes. In: 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), pp. 194–199 (2018b)
– reference: Croce, F., Hein, M.: Provable robustness against all adversarial l_p-perturbations for p ≥ 1 (2019). ArXiv preprint arXiv:1905.11213
– reference: Varghese, S., Bayzidi, Y., Bar, A., Kapoor, N., Lahiri, S., Schneider, J.D., Schmidt, N.M., Schlicht, P., Huger, F., Fingscheidt, T.: Unsupervised temporal consistency metric for video segmentation in highly-automated driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 336–337 (2020)
– reference: Ye, S., Tan, S.H., Xu, K., Wang, Y., Bao, C., Ma, K.: Brain-inspired reverse adversarial examples (2019). ArXiv preprint arXiv:1905.12171
– reference: Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.: Ai2: Safety and robustness certification of neural networks with abstract interpretation. In: 2018 IEEE Symposium on Security and Privacy (SP), IEEE, pp. 3–18 (2018)
– reference: Salay, R., Czarnecki, K.: Using machine learning safely in automotive software: An assessment and adaption of software process requirements in iso 26262 (2018). ArXiv preprint arXiv:1808.01614
– reference: Yang, Y., Vamvoudakis, K.G., Modares, H.: Safe reinforcement learning for dynamical games. Int. J. Rob. Nonlinear Control 30(9), 3706–3726 (2020)
– reference: Mirman, M., Gehr, T., Vechev, M.: Differentiable abstract interpretation for provably robust neural networks. In: International Conference on Machine Learning, PMLR, pp. 3578–3586 (2018)
– reference: Gualo, F., Rodriguez, M., Verdugo, J., Caballero, I., Piattini, M.: Data quality certification using ISO/IEC 25012: Industrial experiences. J. Syst. Softw. 176 (2021)
– reference: Pandian, M.K.S., Dajsuren, Y., Luo, Y., Barosan, I.: Analysis of iso 26262 compliant techniques for the automotive domain. In: MASE@MoDELS (2015)
– reference: Sehwag, V., Bhagoji, A.N., Song, L., Sitawarin, C., Cullina, D., Chiang, M., Mittal, P.: Analyzing the robustness of open-world machine learning. In: Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, ACM, New York, NY, USA, AISec’19, pp. 105-116 (2019)
– reference: Hein, M., Andriushchenko, M., Bitterwolf, J.: Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 41–50 (2019)
– reference: Rahimi, M., Guo, J.L., Kokaly, S., Chechik, M.: Toward requirements specification for machine-learned components. In: 2019 IEEE 27th International Requirements Engineering Conference Workshops (REW), 241–244 (2019)
– reference: Levi, D., Gispan, L., Giladi, N., Fetaya, E.: Evaluating and calibrating uncertainty prediction in regression tasks (2019). ArXiv preprint arXiv:1905.11659
– reference: Everett, M., Lütjens, B., How, J.P.: Certifiable robustness to adversarial state uncertainty in deep reinforcement learning. IEEE Trans. Neural Netw. Learn. Syst. (2021). https://doi.org/10.1109/TNNLS.2021.3056046
– reference: Zhang, P., Dai, Q., Ji, S.: Condition-guided adversarial generative testing for deep learning systems. In: 2019 IEEE International Conference on Artificial Intelligence Testing (AITest), pp. 71–77 (2019b)
– reference: Pedreschi, D., Giannotti, F., Guidotti, R., Monreale, A., Pappalardo, L., Ruggieri, S., Turini, F.: Open the black box data-driven explanation of black box decision systems (2018). ArXiv preprint arXiv:1806.09936
– reference: Cheng, R., Orosz, G., Murray, R.M., Burdick, J.W.: End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks. Proceedings of the AAAI Conference on Artificial Intelligence 33, 3387–3395 (2019)
– reference: Kaprocki, N., Velikić, G., Teslić, N., Krunić, M.: Multiunit automotive perception framework: Synergy between AI and deterministic processing. In: 2019 IEEE 9th International Conference on Consumer Electronics (ICCE-Berlin), pp. 257–260 (2019)
– reference: François-Lavet, V., Henderson, P., Islam, R., Bellemare, M.G., Pineau, J.: An Introduction to Deep Reinforcement Learning. Found. Trends Mach. Learn. 11(3–4), 219–354 (2018)
– reference: Guo, W., Mu, D., Xu, J., Su, P., Wang, G., Xing, X.: Lemna: Explaining deep learning based security applications. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 364–379 (2018b)
– reference: Ignatiev, A., Pereira, F., Narodytska, N., Marques-Silva, J.: A sat-based approach to learn explainable decision sets. In: International Joint Conference on Automated Reasoning. Springer, Berlin. pp. 627–645 (2018)
– reference: Wang, Y.S., Weng, T.W., Daniel, L.: Verification of neural network control policy under persistent adversarial perturbation (2019b). ArXiv preprint arXiv:1908.06353
– reference: Vijaykeerthy, D., Suri, A., Mehta, S., Kumaraguru, P.: Hardening deep neural networks via adversarial model cascades (2018). arXiv:1802.01448
– reference: Fremont, D.J., Chiu, J., Margineantu, D.D., Osipychev, D., Seshia, S.A.: Formal analysis and redesign of a neural network-based aircraft taxiing system with verifai. In: International Conference on Computer Aided Verification. Springer, Berlin. pp. 122–134 (2020)
– reference: Gauerhof, L., Munk, P., Burton, S.: Structuring validation targets of a machine learning function applied to automated driving. In: Gallina, B., Skavhaug, A., Bitsch, F. (eds.) Computer Safety, Reliability, and Security, pp. 45–58. Springer, Berlin (2018)
– reference: Wang, Y., Jha, S., Chaudhuri, K.: Analyzing the robustness of nearest neighbors to adversarial examples. In: Dy J, Krause A (eds) Proceedings of the 35th International Conference on Machine Learning, PMLR, Proceedings of Machine Learning Research, vol. 80, pp. 5133–5142, (2018e). https://proceedings.mlr.press/v80/wang18c.html
– reference: Arnab, A., Miksik, O., Torr, PH.: On the robustness of semantic segmentation models to adversarial attacks. In: 2018 IEEECVF Conference on Computer Vision and Pattern Recognition, pp. 888–897 (2018) https://doi.org/10.1109/CVPR.2018.00099
– reference: Sun, Y., Huang, X., Kroening, D., Sharp, J., Hill, M., Ashmore, R.: DeepConcolic: Testing and debugging deep neural networks. In: 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), pp. 111–114 (2019)
– reference: Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D.D., DiCarlo, J.J.: Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations (2020). bioRxiv https://doi.org/10.1101/2020.06.16.154542
– reference: Heinzmann, L., Shafaei, S., Osman, M.H., Segler, C., Knoll, A.: A framework for safety violation identification and assessment in autonomous driving. In: AISafety@IJCAI (2019)
– reference: Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, New York (2016)
– reference: Delseny, H., Gabreau, C., Gauffriau, A., Beaudouin, B., Ponsolle, L., Alecu, L., Bonnin, H., Beltran, B., Duchel, D., Ginestet, J.B., Hervieu, A., Martinez, G., Pasquet, S., Delmas, K., Pagetti, C., Gabriel, J.M., Chapdelaine, C., Picard, S., Damour, M., Cappi, C., Gardès, L., Grancey, F.D., Jenn, E., Lefevre, B., Flandin, G., Gerchinovitz, S., Mamalet, F., Albore, A.: White paper machine learning in certified systems (2021). ArXiv preprint arXiv:2103.10529
– reference: Liu, J., Shen, Z., Cui, P., Zhou, L., Kuang, K., Li, B., Lin, Y.: Invariant adversarial learning for distributional robustness (2020). ArXiv preprint arXiv:2006.04414
– reference: Xiang, W., Lopez, D.M., Musau, P., Johnson, T.T.: Reachable set estimation and verification for neural network models of nonlinear dynamic systems. In: Safe, Autonomous and Intelligent Vehicles. Springer, Berlin. pp. 123–144 (2019)
– reference: Julian, K.D., Lee, R., Kochenderfer, M.J.: Validation of image-based neural network controllers through adaptive stress testing. In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–7 (2020). https://doi.org/10.1109/ITSC45102.2020.9294549
– reference: Arcaini, P., Bombarda, A., Bonfanti, S., Gargantini, A.: Dealing with robustness of convolutional neural networks for image classification. In: 2020 IEEE International Conference on Artificial Intelligence Testing (AITest), pp. 7–14 (2020) https://doi.org/10.1109/AITEST49225.2020.00009
– reference: Pan, R.: Static deep neural network analysis for robustness. In: Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ACM, New York, NY, USA, ESEC/FSE 2019, pp. 1238-1240 (2019)
– reference: Gu, X., Easwaran, A.: Towards safe machine learning for cps: infer uncertainty from training data. In: Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, pp. 249–258 (2019)
– reference: Le, H., Voloshin, C., Yue, Y.: Batch policy learning under constraints. In: International Conference on Machine Learning, PMLR, pp. 3703–3712 (2019)
– reference: Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: An efficient smt solver for verifying deep neural networks. In: International Conference on Computer Aided Verification. Springer, Berlin. pp. 97–117 (2017)
– reference: Zhang, J., Cheung, B., Finn, C., Levine, S., Jayaraman, D.: Cautious adaptation for reinforcement learning in safety-critical settings. In: International Conference on Machine Learning, PMLR, pp. 11055–11065 (2020a)
– reference: Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 1599–1614 (2018b)
– reference: Aravantinos, V., Diehl, F.: Traceability of deep neural networks (2019). ArXiv preprint arXiv:1812.06744
– reference: Gopinath, D., Taly, A., Converse, H., Pasareanu, C.S.: Finding invariants in deep neural networks (2019). ArXiv preprint arXiv:190413215v1
– reference: Liang, S., Li, Y., Srikant, R.: Enhancing the reliability of out-of-distribution image detection in neural networks (2020). ArXiv preprint arXiv:1706.02690
– reference: Meyes, R., Schneider, M., Meisen, T.: How do you act? an empirical study to understand behavior of deep reinforcement learning agents (2020b). ArXiv preprint arXiv:2004.03237
– reference: Rakin, A.S., He, Z., Fan, D.: Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack (2018). ArXiv preprint arXiv:1811.09310
– reference: Nowak, T., Nowicki, M.R., Ćwian, K., Skrzypczyński, P.: How to improve object detection in a driver assistance system applying explainable deep learning. In: 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, pp. 226–231 (2019)
– reference: Reeb, D., Doerr, A., Gerwinn, S., Rakitsch, B.: Learning gaussian processes by minimizing pac-bayesian generalization bounds. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, NIPS’18, pp. 3341-3351 (2018)
– reference: Ayers, EW., Eiras, F., Hawasly, M., Whiteside, I.: Parot: a practical framework for robust deep neural network training. In: NASA Formal Methods Symposium. Springer, Berlin. pp. 63–84 (2020)
– reference: Cardelli, L., Kwiatkowska, M., Laurenti, L., Patane, A.: Robustness guarantees for bayesian inference with gaussian processes. Proc. AAAI Conf. Artif. Intell. 33, 7759–7768 (2019)
– reference: Lee, K., Wang, Z., Vlahov, B., Brar, H., Theodorou, E.A.: Ensemble bayesian decision making with redundant deep perceptual control policies. In: 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), IEEE, pp. 831–837 (2019b)
– reference: Hasanbeig, M., Kroening, D., Abate, A.: Towards verifiable and safe model-free reinforcement learning. In: CEUR Workshop Proceedings, CEUR Workshop Proceedings (2020)
– reference: Sehwag, V., Wang, S., Mittal, P., Jana, S.: On pruning adversarially robust neural networks. ArXiv arXiv:2002.10509 (2020)
– reference: Tran, H.D., Musau, P., Lopez, D.M., Yang, X., Nguyen, L.V., Xiang, W., Johnson, T.T.: Parallelizable reachability analysis algorithms for feed-forward neural networks. In: 2019 IEEE/ACM 7th International Conference on Formal Methods in Software Engineering (FormaliSE), IEEE, pp. 51–60 (2019)
– reference: Turchetta, M., Berkenkamp, F., Krause, A.: Safe exploration in finite markov decision processes with gaussian processes. In: Proceedings of the 30th International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, NIPS’16, pp. 4312-4320 (2016)
– reference: Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine 38(3), 50–57 (2017)
– reference: Fujino, H., Kobayashi, N., Shirasaka, S.: Safety assurance case description method for systems incorporating off-operational machine learning and safety device. INCOSE Int. Symp. 29(S1), 152–164 (2019)
– reference: Singh, G., Gehr, T., Püschel, M., Vechev, M.: Boosting robustness certification of neural networks. In: International Conference on Learning Representations (2018)
– reference: Lyu, Z., Ko, C.Y., Kong, Z., Wong, N., Lin, D., Daniel, L.: Fastened CROWN: Tightened neural network robustness certificates. Proc. AAAI Conf. Artif. Intell. 34, 5037–5044 (2020)
– reference: Naseer, M., Minhas, M.F., Khalid, F., Hanif, M.A., Hasan, O., Shafique, M.: Fannet: formal analysis of noise tolerance, training bias and input sensitivity in neural networks. In: 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, pp. 666–669 (2020)
– reference: Hendrycks, D., Dietterich, T.G.: Benchmarking neural network robustness to common corruptions and perturbations. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. https://openreview.net/forum?id=HJz6tiCqYm
– reference: Kornecki, A., Zalewski, J.: Software certification for safety-critical systems: A status report. In: 2008 International Multiconference on Computer Science and Information Technology, pp. 665–672 (2008). https://doi.org/10.1109/IMCSIT.2008.4747314
– reference: Huang, X., Kroening, D., Ruan, W., Sharp, J., Sun, Y., Thamo, E., Wu, M., Yi, X.: A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability. Comput. Sci. Rev. 37, 100270 (2020). https://doi.org/10.1016/j.cosrev.2020.100270
– reference: Ghosh, S., Berkenkamp, F., Ranade, G., Qadeer, S., Kapoor, A.: Verifying controllers against adversarial examples with bayesian optimization. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE, pp. 7306–7313 (2018a)
– reference: Youn, W.K., Hong, S.B., Oh, K.R., Ahn, O.S.: Software certification of safety-critical avionic systems: DO-178C and its impacts. IEEE Aerospace Electron. Syst. Mag. 30(4), 4–13 (2015)
– reference: Berkenkamp, F., Turchetta, M., Schoellig, AP., Krause, A.: Safe model-based reinforcement learning with stability guarantees. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS), pp. 908–919 (2017)
– reference: Fulton, N., Platzer, A.: Verifiably safe off-model reinforcement learning. In: International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, Berlin. pp. 413–430 (2019)
– reference: Amini, A., Schwarting, W., Soleimany, A., Rus, D.: Deep evidential regression (2019). ArXiv preprint arXiv:1910.02600
– reference: Ben Braiek, H., Khomh, F.: Deepevolution: A search-based testing approach for deep neural networks. In: 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME), pp. 454–458 (2019) https://doi.org/10.1109/ICSME.2019.00078
– reference: Lee, K., Lee, K., Lee, H., Shin, J.: A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, NIPS’18, pp. 7167–7177 (2018)
– reference: Richards, S.M., Berkenkamp, F., Krause, A.: The lyapunov neural network: adaptive stability certification for safe learning of dynamical systems. In: Conference on Robot Learning, PMLR, pp. 466–476 (2018)
– reference: Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., Song, D.: Scaling out-of-distribution detection for real-world settings (2020). ArXiv preprint arXiv:1911.11132
– reference: Watkins, C.J., Dayan, P.: Q-learning. Mach. Learn. 8(3–4), 279–292 (1992)
– reference: Le, M.T., Diehl, F., Brunner, T., Knol, A.: Uncertainty estimation for deep neural object detectors in safety-critical applications. In: 2018 21st International Conference on Intelligent Transportation Systems (ITSC), IEEE, pp. 3873–3878 (2018)
– reference: Weyuker, E.J.: On testing non-testable programs. Comput. J. 25(4), 465–470 (1982)
– reference: ISO (2019) ISO/PAS 21448: Road vehicles – Safety of the intended functionality. International Organization of Standardization (ISO), Geneva
– reference: Ameyaw, D.A., Deng, Q., Söffker, D.: Probability of detection (pod)-based metric for evaluation of classifiers used in driving behavior prediction. In: Annual Conference of the PHM Society, vol 11 (2019)
– reference: Gambi, A., Mueller, M., Fraser, G.: Automatically testing self-driving cars with search-based procedural content generation. In: Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, ACM, New York, NY, USA, ISSTA 2019, pp. 318-328 (2019)
– reference: Scheel, O., Schwarz, L., Navab, N., Tombari, F.: Explicit domain adaptation with loosely coupled samples (2020). ArXiv preprint arXiv:2004.11995
– reference: Wang, W., Wang, A., Tamar, A., Chen, X., Abbeel, P.: Safer classification by synthesis (2018d). ArXiv preprint arXiv:1711.08534
– reference: Duddu, V., Rao, DV., Balas, VE.: Adversarial fault tolerant training for deep neural networks (2019). ArXiv preprint arXiv:1907.03103
– reference: Laidlaw, C., Feizi, S.: Playing it safe: adversarial robustness with an abstain option (2019). ArXiv preprint arXiv:1911.11253
– reference: Hart, P., Rychly, L., Knoll, A.: Lane-merging using policy-based reinforcement learning and post-optimization. In: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), IEEE, 3176–3181 (2019)
– reference: Amit, G., Levy, M., Rosenberg, I., Shabtai, A., Elovici, Y.: Glod: Gaussian likelihood out of distribution detector (2020). ArXiv preprint arXiv:2008.06856
– reference: Dybå, T., Dingsøyr, T.: Empirical studies of agile software development: a systematic review. Inf. Softw. Technol. 50(9), 833–859 (2008). https://doi.org/10.1016/j.infsof.2008.01.006
– reference: Feng, Y., Shi, Q., Gao, X., Wan, J., Fang, C., Chen, Z.: Deepgini: Prioritizing massive tests to enhance the robustness of deep neural networks. In: Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, Association for Computing Machinery, New York, NY, USA, ISSTA 2020, pp. 177-188 (2020). https://doi.org/10.1145/3395363.3397357
– reference: Gschossmann, A., Jobst, S., Mottok, J., Bierl, R.: A measure of confidence of artificial neural network classifiers. In: ARCS Workshop 2019; 32nd International Conference on Architecture of Computing Systems, pp. 1–5 (2019)
– reference: Eniser, H.F., Gerasimou, S., Sen, A.: DeepFault: Fault localization for deep neural networks. In: Hähnle, R., van der Aalst, W. (eds.) Fundamental Approaches to Software Engineering, pp. 171–191. Springer, Cham (2019)
– reference: Inouye, D.I., Leqi, L., Kim, J.S., Aragam, B., Ravikumar, P.: Diagnostic curves for black box models (2019). ArXiv preprint arXiv:1912.01108
– reference: Pei, K., Cao, Y., Yang, J., Jana, S.: DeepXplore: Automated whitebox testing of deep learning systems. In: Proceedings of the 26th Symposium on Operating Systems Principles (2017a)
– reference: Grefenstette, E., Stanforth, R., O’Donoghue, B., Uesato, J., Swirszcz, G., Kohli, P.: Strength in numbers: Trading-off robustness and computation via adversarially-trained ensembles. CoRR abs/1811.09300 (2018). arXiv:1811.09300
– reference: Ren, K., Zheng, T., Qin, Z., Liu, X.: Adversarial attacks and defenses in deep learning. Engineering 6(3), 346–360 (2020)
– reference: Pauli, P., Koch, A., Berberich, J., Kohler, P., Allgöwer, F.: Training robust neural networks using Lipschitz bounds. IEEE Control Syst. Lett. 6, 121–126 (2021). https://doi.org/10.1109/LCSYS.2021.3050444
– reference: Zhao, C., Yang, J., Liang, J., Li, C.: Discover learning behavior patterns to predict certification. In: 2016 11th International Conference on Computer Science & Education (ICCSE), IEEE, pp. 69–73 (2016)
– reference: Cheng, C.H.: Safety-aware hardening of 3d object detection neural network systems (2020). ArXiv preprint arXiv:2003.11242
– reference: Henriksson, J., Berger, C., Borg, M., Tornberg, L., Englund, C., Sathyamoorthy, S.R., Ursing, S.: Towards structured evaluation of deep neural network supervisors. In: 2019 IEEE International Conference on Artificial Intelligence Testing (AITest), pp. 27–34 (2019a)
– reference: Salay, R., Angus, M., Czarnecki, K.: A safety analysis method for perceptual components in automated driving. In: 2019 IEEE 30th International Symposium on Software Reliability Engineering (ISSRE), IEEE, pp. 24–34 (2019)
– reference: Wen, J., Li, S., Lin, Z., Hu, Y., Huang, C.: Systematic literature review of machine learning based software development effort estimation models. Inf. Softw. Technol. 54(1), 41–59 (2012)
– reference: Moravčík, M., Schmid, M., Burch, N., Lisý, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., Bowling, M.: DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science 356(6337), 508–513 (2017)
– reference: Ribeiro, M.T., Singh, S., Guestrin, C.: “why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135–1144 (2016)
– reference: Ma, L., Juefei-Xu, F., Zhang, F., Sun, J., Xue, M., Li, B., Chen, C., Su, T., Li, L., Liu, Y., Zhao, J., Wang, Y.: Deepgauge: Multi-granularity testing criteria for deep learning systems. In: Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, ACM, New York, NY, USA, ASE 2018, pp. 120-131 (2018). https://doi.org/10.1145/3238147.3238202
– reference: Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks (2017). ArXiv preprint arXiv:1706.06083
– reference: Meyes, R., de Puiseau, C.W., Posada-Moreno, A., Meisen, T.: Under the hood of neural networks: Characterizing learned representations by functional neuron populations and network ablations (2020a). ArXiv preprint arXiv:2004.01254
– reference: Machida, F.: N-version machine learning models for safety critical systems. In: 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), pp. 48–51 (2019)
– reference: Göpfert, J.P., Hammer, B., Wersing, H.: Mitigating concept drift via rejection. In: International Conference on Artificial Neural Networks. Springer, Berlin. pp. 456–467 (2018)
– reference: Tuncali, C.E., Fainekos, G., Ito, H., Kapinski, J.: Simulation-based adversarial test generation for autonomous vehicles with machine learning components. In: 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1555–1562 (2018)
– reference: Guo, J., Jiang, Y., Zhao, Y., Chen, Q., Sun, J.: DLFuzz: Differential fuzzing testing of deep learning systems. In: Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2018a)
– reference: Biondi, A., Nesti, F., Cicero, G., Casini, D., Buttazzo, G.: A safe, secure, and predictable software architecture for deep learning in safety-critical systems. IEEE Embed. Syst. Lett. 12(3), 78–82 (2020). https://doi.org/10.1109/LES.2019.2953253
– reference: Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, OpenReview.net (2017). https://openreview.net/forum?id=Hkg4TI9xl
– reference: Nesterov, Y.: Lectures on Convex Optimization. Springer, Berlin (2018)
– reference: Smith, M.T., Grosse, K., Backes, M., Alvarez, M.A.: Adversarial vulnerability bounds for gaussian process classification (2019). ArXiv preprint arXiv:1909.08864
– reference: Gopinath, D., Katz, G., Păsăreanu, C.S., Barrett, C.: DeepSafe: A data-driven approach for assessing robustness of neural networks. In: Lahiri, S.K., Wang, C. (eds.) Automated Technology for Verification and Analysis, pp. 3–19. Springer, Cham (2018)
– reference: Hein, M., Andriushchenko, M.: Formal guarantees on the robustness of a classifier against adversarial manipulation. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R (eds) Advances in Neural Information Processing Systems, Curran Associates, Inc., vol 30 (2017). https://proceedings.neurips.cc/paper/2017/file/e077e1a544eec4f0307cf5c3c721d944-Paper.pdf
– reference: Loquercio, A., Segu, M., Scaramuzza, D.: A general framework for uncertainty estimation in deep learning. IEEE Robot. Autom. Lett. 5(2), 3153–3160 (2020). https://doi.org/10.1109/LRA.2020.2974682
– reference: Amarasinghe, K., Manic, M.: Explaining what a neural network has learned: toward transparent classification. In: 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, pp. 1–6 (2019)
– reference: Jeddi, A., Shafiee, M.J., Karg, M., Scharfenberger, C., Wong, A.: Learn2perturb: An end-to-end feature perturbation learning to improve adversarial robustness. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1238–1247 (2020). https://doi.org/10.1109/CVPR42600.2020.00132
– reference: Li, Y., Liu, Y., Li, M., Tian, Y., Luo, B., Xu, Q.: D2NN: A fine-grained dual modular redundancy framework for deep neural networks. In: Proceedings of the 35th Annual Computer Security Applications Conference (ACSAC’19), ACM, New York, NY, USA, pp. 138-147 (2019b)
– reference: Jin, W., Ma, Y., Liu, X., Tang, X., Wang, S., Tang, J.: Graph structure learning for robust graph neural networks. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, New York, NY, USA, KDD ’20, pp. 66–74 (2020)
– reference: Guidotti, D., Leofante, F., Castellini, C., Tacchella, A.: Repairing learned controllers with convex optimization: a case study. In: International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research. Springer, Berlin. pp. 364–373 (2019a)
– reference: Tian, Y., Pei, K., Jana, S., Ray, B.: DeepTest: Automated testing of deep-neural-network-driven autonomous cars. In: Proceedings of the 40th International Conference on Software Engineering, ACM, New York, NY, USA, ICSE ’18, pp. 303-314 (2018)
– reference: Kuppers, F., Kronenberger, J., Shantia, A., Haselhoff, A.: Multivariate confidence calibration for object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 326–327 (2020)
– reference: Michelmore, R., Kwiatkowska, M., Gal, Y.: Evaluating uncertainty quantification in end-to-end autonomous driving control (2018). ArXiv preprint arXiv:1811.06817
– reference: Abreu, S.: Automated architecture design for deep neural networks (2019). ArXiv preprint arXiv:1908.10714
– reference: Revay, M., Wang, R., Manchester, I.R.: A convex parameterization of robust recurrent neural networks. IEEE Control Syst. Lett. 5(4), 1363–1368 (2020)
– reference: O’Brien, M., Goble, W., Hager, G., Bukowski, J.: Dependable neural networks for safety critical tasks. In: International Workshop on Engineering Dependable and Secure Machine Learning Systems. Springer, Berlin. pp. 126–140 (2020)
– reference: Wabersich, K.P., Zeilinger, M.N.: Performance and safety of bayesian model predictive control: scalable model-based RL with guarantees (2020b). ArXiv preprint arXiv:2006.03483
– reference: Bragg, J., Habli, I.: What is acceptably safe for reinforcement learning? In: International Conference on Computer Safety, Reliability, and Security. Springer, Berlin. pp. 418–430 (2018)
– reference: Daniels, Z.A., Metaxas, D.: Scenarionet: An interpretable data-driven model for scene understanding. In: IJCAI Workshop on Explainable Artificial Intelligence (XAI) 2018 (2018)
– reference: Sohn, J., Kang, S., Yoo, S.: Search based repair of deep neural networks (2019). ArXiv preprint arXiv:1912.12463
– reference: Kandel, A., Moura, S.J.: Safe zero-shot model-based learning and control: a Wasserstein distributionally robust approach (2020). ArXiv preprint arXiv:2004.00759
– reference: Müller, S., Hospach, D., Bringmann, O., Gerlach, J., Rosenstiel, W.: Robustness evaluation and improvement for vision-based advanced driver assistance systems. In: 2015 IEEE 18th International Conference on Intelligent Transportation Systems, pp. 2659–2664 (2015)
– reference: Liu, L., Saerbeck, M., Dauwels, J.: Affine disentangled gan for interpretable and robust av perception (2019). ArXiv preprint arXiv:1907.05274
– reference: Li, J., Liu, J., Yang, P., Chen, L., Huang, X., Zhang, L.: Analyzing deep neural networks with symbolic propagation: towards higher precision and faster verification. In: International Static Analysis Symposium. Springer, Berlin. pp. 296–319 (2019a)
– reference: Gladisch, C., Heinzemann, C., Herrmann, M., Woehrle, M.: Leveraging combinatorial testing for safety-critical computer vision datasets. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1314–1321 (2020)
– reference: Summers, C., Dinneen, M.J.: Improved adversarial robustness via logit regularization methods (2019). ArXiv preprint arXiv:1906.03749
– reference: Taha, A., Chen, Y., Misu, T., Shrivastava, A., Davis, L.: Unsupervised data uncertainty learning in visual retrieval systems. CoRR abs/1902.02586 (2019). arXiv:1902.02586
– reference: Lee, K., An, G.N., Zakharov, V., Theodorou, E.A.: Perceptual attention-based predictive control (2019a). ArXiv preprint arXiv:1904.11898
– reference: Cofer, D., Amundson, I., Sattigeri, R., Passi, A., Boggs, C., Smith, E., Gilham, L., Byun, T., Rayadurgam, S.: Run-time assurance for learning-based aircraft taxiing. In: 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC), pp. 1–9 (2020). https://doi.org/10.1109/DASC50938.2020.9256581
– reference: Lütjens, B., Everett, M., How, J.P.: Safe reinforcement learning with model uncertainty estimates. In: 2019 International Conference on Robotics and Automation (ICRA), IEEE, pp. 8662–8668 (2019)
– reference: Pedroza, G., Adedjouma, M.: Safe-by-Design Development Method for Artificial Intelligent Based Systems. In: SEKE 2019 : The 31st International Conference on Software Engineering and Knowledge Engineering, Lisbon, Portugal, pp. 391–397 (2019)
– reference: Sena, L.H., Bessa, I.V., Gadelha, M.R., Cordeiro, L.C., Mota, E.: Incremental bounded model checking of artificial neural networks in cuda. In: 2019 IX Brazilian Symposium on Computing Systems Engineering (SBESC), IEEE, pp. 1–8 (2019)
– reference: Roh, Y., Heo, G., Whang, S.E.: A survey on data collection for machine learning: A big data – AI integration perspective. IEEE Trans. Knowl. Data Eng. 33(4), 1328–1347 (2021)
– reference: Steinhardt, J., Koh, P.W., Liang, P.: Certified defenses for data poisoning attacks. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, NIPS’17, pp. 3520-3532 (2017)
– reference: Shwartz-Ziv, R., Tishby, N.: Opening the black box of deep neural networks via information (2017). ArXiv preprint arXiv:1703.00810
– reference: Gros, T.P., Hermanns, H., Hoffmann, J., Klauck, M., Steinmetz, M.: Deep statistical model checking. In: International Conference on Formal Techniques for Distributed Objects, Components, and Systems. Springer, Berlin. pp. 96–114 (2020b)
– reference: Wu, M., Wicker, M., Ruan, W., Huang, X., Kwiatkowska, M.: A game-based approximate verification of deep neural networks with provable guarantees. Theoret. Comput. Sci. 807, 298–329 (2020)
– reference: Bar, A., Huger, F., Schlicht, P., Fingscheidt, T.: On the robustness of redundant teacher-student frameworks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1380–1388 (2019)
– reference: Cheng, C.H., Nührenberg, G., Ruess, H.: Maximum resilience of artificial neural networks. In: International Symposium on Automated Technology for Verification and Analysis. Springer, Berlin. pp. 251–268, (2017)
– reference: Sekhon, J., Fleming, C.: Towards improved testing for deep learning. In: 2019 IEEE/ACM 41st International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER), pp. 85–88 (2019)
– reference: Rubies-Royo, V., Calandra, R., Stipanovic, D.M., Tomlin, C.: Fast neural network verification via shadow prices (2019). ArXiv preprint arXiv:1902.07247
– reference: Kitchenham, B.: Procedures for performing systematic reviews. Joint Technical Report, Computer Science Department, Keele University (TR/SE-0401) and National ICT Australia Ltd (0400011T1) (2004)
– reference: Vidot, G., Gabreau, C., Ober, I., Ober, I.: Certification of embedded systems based on machine learning: a survey (2021). arXiv:2106.07221
– reference: Kläs, M., Sembach, L.: Uncertainty wrappers for data-driven models. In: International Conference on Computer Safety, Reliability, and Security. Springer, Berlin. pp. 358–364 (2019)
– reference: Ren, H., Chandrasekar, S.K., Murugesan, A.: Using quantifier elimination to enhance the safety assurance of deep neural networks. In: 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC), IEEE, pp. 1–8 (2019a)
– reference: Deshmukh, J.V., Kapinski, JP., Yamaguchi, T., Prokhorov, D.: Learning deep neural network controllers for dynamical systems with safety guarantees. In: 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), IEEE, pp. 1–7 (2019)
– reference: Lust, J., Condurache, A.P.: Gran: An efficient gradient-norm based detector for adversarial and misclassified examples (2020). ArXiv preprint arXiv:2004.09179
– reference: Ramanagopal, M.S., Anderson, C., Vasudevan, R., Johnson-Roberson, M.: Failing to learn: Autonomously identifying perception failures for self-driving cars. IEEE Robot. Autom. Lett. 3(4), 3860–3867 (2018)
– reference: Park, C., Kim, J.M., Ha, S.H., Lee, J.: Sampling-based bayesian inference with gradient uncertainty (2018). ArXiv preprint arXiv:1812.03285
– reference: Yan, M., Wang, L., Fei, A.: ARTDL: Adaptive random testing for deep learning systems. IEEE Access 8, 3055–3064 (2020)
– reference: Gauerhof, L., Hawkins, R., Picardi, C., Paterson, C., Hagiwara, Y., Habli, I.: Assuring the safety of machine learning for pedestrian detection at crossings. In: Casimiro, A., Ortmeier, F., Bitsch, F., Ferreira, P. (eds.) Computer Safety, Reliability, and Security, pp. 197–212. Springer, Cham (2020)
– reference: Wolschke, C., Kuhn, T., Rombach, D., Liggesmeyer, P.: Observation based creation of minimal test suites for autonomous vehicles. In: 2017 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), pp. 294–301 (2017)
– reference: Gandhi, D., Pinto, L., Gupta, A.: Learning to fly by crashing. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, pp. 3948–3955 (2017)
– reference: Hendrycks, D., Carlini, N., Schulman, J., Steinhardt, J.: Unsolved problems in ml safety (2021). arXiv:2109.13916
– reference: Sinha, A., Namkoong, H., Volpi, R., Duchi, J.: Certifying some distributional robustness with principled adversarial training (2017). ArXiv preprint arXiv:1710.10571
– reference: Ma, L., Juefei-Xu, F., Xue, M., Li, B., Li, L., Liu, Y., Zhao, J.: DeepCT: Tomographic combinatorial testing for deep learning systems. In: 2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER), pp. 614–618 (2019)
– reference: Baheri, A., Nageshrao, S., Tseng, H.E., Kolmanovsky, I., Girard, A., Filev, D.: Deep reinforcement learning with enhanced safety for autonomous highway driving. In: 2020 IEEE Intelligent Vehicles Symposium (IV), IEEE, pp. 1550–1555 (2019)
– reference: Uesato, J., Kumar, A., Szepesvari, C., Erez, T., Ruderman, A., Anderson, K., Heess, N., Kohli, P. et al.: Rigorous agent evaluation: An adversarial approach to uncover catastrophic failures (2018). ArXiv preprint arXiv:1812.01647
– reference: Rusak, E., Schott, L., Zimmermann, R., Bitterwolf, J., Bringmann, O., Bethge, M., Brendel, W.: Increasing the robustness of dnns against image corruptions by playing the game of noise (2020). ArXiv preprint arXiv:2001.06057
– reference: Agostinelli, F., Hocquet, G., Singh, S., Baldi, P.: From reinforcement learning to deep reinforcement learning: an overview. In: Braverman Readings in Machine Learning, pp. 298–328. Springer, Berlin (2018)
– reference: Tang, Y.C., Zhang, J., Salakhutdinov, R.: Worst cases policy gradients (2019). ArXiv preprint arXiv:1911.03618
– reference: Burton, S., Gauerhof, L., Sethy, B.B., Habli, I., Hawkins, R.: Confidence arguments for evidence of performance in machine learning for highly automated driving functions. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds.) Computer Safety, Reliability, and Security, pp. 365–377. Springer, Berlin (2019)
– reference: Alagöz, I., Herpel, T., German, R.: A selection method for black box regression testing with a statistically defined quality level. In: 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST), pp. 114–125 (2017). https://doi.org/10.1109/ICST.2017.18
– reference: Gros, S., Zanon, M., Bemporad, A.: Safe reinforcement learning via projection on a safe set: How to achieve optimality? (2020a). ArXiv preprint arXiv:2004.00915
– reference: Wang, T.E., Gu, Y., Mehta, D., Zhao, X., Bernal, E.A.: Towards robust deep neural networks (2018c). ArXiv preprint arXiv:1810.11726
– reference: Chen, TY., Cheung, SC., Yiu, SM.: Metamorphic testing: a new approach for generating next test cases (2020a). ArXiv preprint arXiv:2002.12543
– reference: Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Efficient formal safety analysis of neural networks. In: Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, Garnett R (eds) Advances in Neural Information Processing Systems, Curran Associates, Inc., vol 31 (2018a). https://proceedings.neurips.cc/paper/2018/file/2ecd2bd94734e5dd392d8678bc64cdab-Paper.pdf
– reference: Bakhti, Y., Fezza, S.A., Hamidouche, W., Déforges, O.: DDSA: A defense against adversarial attacks using deep denoising sparse autoencoder. IEEE Access 7, 160397–160407 (2019)
– reference: Dutta, S., Jha, S., Sanakaranarayanan, S., Tiwari, A.: Output range analysis for deep neural networks (2017). ArXiv preprint arXiv:1709.09130
– reference: Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57 (2017). https://doi.org/10.1109/SP.2017.49
– reference: Colangelo, F., Neri, A., Battisti, F.: Countering adversarial examples by means of steganographic attacks. In: 2019 8th European Workshop on Visual Information Processing (EUVIP), pp. 193–198 (2019). https://doi.org/10.1109/EUVIP47703.2019.8946254
– reference: Rudolph, A., Voget, S., Mottok, J.: A consistent safety case argumentation for artificial intelligence in safety related automotive systems. In: 9th European Congress on Embedded Real Time Software and Systems (ERTS 2018), Toulouse, France (2018)
– reference: Zhang, J.M., Harman, M., Ma, L., Liu, Y.: Machine learning testing: Survey, landscapes and horizons. IEEE Trans. Softw. Eng. (2020). https://doi.org/10.1109/TSE.2019.2962027
– reference: Syriani, E., Luhunu, L., Sahraoui, H.: Systematic mapping study of template-based code generation. Comput. Lang. Syst. Struct. 52, 43–62 (2018)
– reference: Wabersich, K.P., Zeilinger, M.: Bayesian model predictive control: Efficient model exploration and regret bounds using posterior sampling. In: Learning for Dynamics and Control, PMLR, pp. 455–464 (2020a)
– reference: Youn, W., Yi, B.J.: Software and hardware certification of safety-critical avionic systems: A comparison study. Comput. Stand. Interfaces 36(6), 889–898 (2014). https://doi.org/10.1016/j.csi.2014.02.005
– reference: Postels, J., Ferroni, F., Coskun, H., Navab, N., Tombari, F.: Sampling-free epistemic uncertainty estimation using approximated variance propagation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2931–2940 (2019)
– reference: Ruan, W., Wu, M., Sun, Y., Huang, X., Kroening, D., Kwiatkowska, M.: Global robustness evaluation of deep neural networks with provable guarantees for the hamming distance. In: IJCAI2019 (2019)
– reference: Julian, K.D., Sharma, S., Jeannin, J.B., Kochenderfer, M.J.: Verifying aircraft collision avoidance neural networks through linear approximations of safe regions (2019). ArXiv preprint arXiv:1903.00762
– reference: Liu, M., Liu, S., Su, H., Cao, K., Zhu, J.: Analyzing the noise robustness of deep neural networks. In: 2018 IEEE Conference on Visual Analytics Science and Technology (VAST), IEEE, pp. 60–71, (2018)
– reference: Rajabli, N., Flammini, F., Nardone, R., Vittorini, V.: Software verification and validation of safe autonomous cars: A systematic literature review. IEEE Access 9, 4797–4819 (2021). https://doi.org/10.1109/ACCESS.2020.3048047
– reference: Bernhard, J., Gieselmann, R., Esterle, K., Knol, A.: Experience-based heuristic search: Robust motion planning with deep q-learning. In: 2018 21st International Conference on Intelligent Transportation Systems (ITSC), IEEE, pp. 3175–3182 (2018)
– reference: Nguyen, H.H., Matschek, J., Zieger, T., Savchenko, A., Noroozi, N., Findeisen, R.: Towards nominal stability certification of deep learning-based controllers. In: 2020 American Control Conference (ACC), IEEE, 3886–3891 (2020)
– ident: 337_CR10
– ident: #cr-split#-337_CR83.1
– ident: 337_CR187
  doi: 10.1145/2939672.2939778
– ident: 337_CR25
  doi: 10.1007/978-3-319-99229-7_35
– volume: 67
  start-page: 2283
  issue: 3
  year: 2019
  ident: 337_CR175
  publication-title: IEEE Trans. Ind. Electron.
  doi: 10.1109/TIE.2019.2907440
– start-page: 408
  volume-title: Tools and Algorithms for the Construction and Analysis of Systems
  year: 2018
  ident: 337_CR243
  doi: 10.1007/978-3-319-89960-2_22
– ident: 337_CR225
  doi: 10.1109/CVPRW50498.2020.00176
– volume: 8
  start-page: 279
  issue: 3–4
  year: 1992
  ident: 337_CR239
  publication-title: Mach. Learn.
  doi: 10.1007/BF00992698
– volume: 38
  start-page: 50
  issue: 3
  year: 2017
  ident: 337_CR76
  publication-title: AI magazine
  doi: 10.1609/aimag.v38i3.2741
– volume: 36
  start-page: 889
  issue: 6
  year: 2014
  ident: 337_CR254
  publication-title: Comput. Stand. Interfaces
  doi: 10.1016/j.csi.2014.02.005
– ident: 337_CR13
  doi: 10.1007/978-3-030-58920-2_13
– ident: 337_CR167
  doi: 10.1145/3338906.3342502
– volume: 5
  start-page: 1363
  issue: 4
  year: 2020
  ident: 337_CR186
  publication-title: IEEE Control Syst. Lett.
  doi: 10.1109/LCSYS.2020.3038221
– ident: 337_CR253
– ident: 337_CR144
– ident: 337_CR244
  doi: 10.1109/ISSREW.2017.46
– volume: 50
  start-page: 833
  issue: 9
  year: 2008
  ident: 337_CR54
  publication-title: Inf. Softw. Technol.
  doi: 10.1016/j.infsof.2008.01.006
– ident: 337_CR231
  doi: 10.1109/CVPR.2019.00931
– ident: 337_CR173
  doi: 10.1145/3132747.3132785
– ident: 337_CR80
– ident: 337_CR111
  doi: 10.1145/3394486.3403049
– ident: 337_CR155
– ident: 337_CR138
– ident: 337_CR22
– ident: 337_CR74
– ident: 337_CR91
– ident: 337_CR206
– volume: 64
  start-page: 2737
  issue: 7
  year: 2019
  ident: 337_CR60
  publication-title: IEEE Trans. Autom. Control
  doi: 10.1109/TAC.2018.2876389
– ident: 337_CR36
  doi: 10.23919/DATE.2019.8714971
– volume: 33
  start-page: 3387
  year: 2019
  ident: 337_CR38
  publication-title: Proceedings of the AAAI Conference on Artificial Intelligence
  doi: 10.1609/aaai.v33i01.33013387
– volume: 58
  start-page: 82
  year: 2020
  ident: 337_CR12
  publication-title: Inf. Fusion
  doi: 10.1016/j.inffus.2019.12.012
– ident: 337_CR179
– ident: 337_CR15
  doi: 10.1007/978-3-030-57628-8_14
– ident: 337_CR33
  doi: 10.1109/ISSRE5003.2020.00047
– year: 2021
  ident: 337_CR56
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
  doi: 10.1109/TNNLS.2021.3056046
– ident: 337_CR140
  doi: 10.1109/VAST.2018.8802509
– ident: 337_CR137
– ident: 337_CR191
  doi: 10.24963/ijcai.2019/824
– ident: 337_CR44
– ident: 337_CR50
– ident: 337_CR77
  doi: 10.1007/978-3-030-01418-6_45
– ident: 337_CR40
  doi: 10.1109/EUVIP47703.2019.8946254
– ident: 337_CR172
  doi: 10.18293/SEKE2019-094
– ident: 337_CR262
  doi: 10.1109/ICCSE.2016.7581557
– start-page: 171
  volume-title: Fundamental Approaches to Software Engineering
  year: 2019
  ident: 337_CR55
  doi: 10.1007/978-3-030-16722-6_10
– ident: 337_CR236
– ident: 337_CR16
  doi: 10.1109/IV47402.2020.9304744
– ident: 337_CR202
  doi: 10.1109/ITA50056.2020.9244964
– ident: 337_CR258
  doi: 10.1007/978-3-030-41579-2_36
– ident: 337_CR83.2
– ident: 337_CR228
– ident: 337_CR115
– ident: 337_CR153
  doi: 10.23919/ACC45564.2020.9147584
– ident: 337_CR8
  doi: 10.1109/IJCNN52387.2021.9533465
– ident: 337_CR64
  doi: 10.1007/978-3-030-17462-0_28
– ident: 337_CR174
– ident: 337_CR97
– volume: 12
  start-page: 78
  issue: 3
  year: 2020
  ident: 337_CR24
  publication-title: IEEE Embed. Syst. Lett.
  doi: 10.1109/LES.2019.2953253
– ident: 337_CR220
  doi: 10.1007/978-3-030-53288-8_1
– volume: 29
  start-page: 152
  issue: S1
  year: 2019
  ident: 337_CR63
  publication-title: INCOSE Int. Symp.
  doi: 10.1002/j.2334-5837.2019.00676.x
– ident: 337_CR168
– ident: 337_CR126
– ident: 337_CR162
  doi: 10.23919/DATE48585.2020.9116247
– ident: 337_CR259
  doi: 10.1109/AITest.2019.000-5
– ident: 337_CR198
  doi: 10.1145/3338501.3357372
– ident: 337_CR131
– ident: 337_CR39
  doi: 10.1109/DASC50938.2020.9256581
– ident: 337_CR52
– volume: 194
  year: 2020
  ident: 337_CR217
  publication-title: Sci. Comput. Program.
  doi: 10.1016/j.scico.2020.102450
– ident: 337_CR5
  doi: 10.1109/FUZZ-IEEE.2019.8858899
– ident: 337_CR154
– volume: 30
  start-page: 3706
  issue: 9
  year: 2020
  ident: 337_CR252
  publication-title: Int. J. Robust Nonlinear Control
  doi: 10.1002/rnc.4962
– ident: 337_CR49
  doi: 10.1109/ICCAD45719.2019.8942130
– ident: 337_CR234
– ident: 337_CR21
  doi: 10.1109/ICSME.2019.00078
– ident: 337_CR46
– ident: 337_CR135
  doi: 10.1007/978-3-030-32304-2_15
– ident: 337_CR221
  doi: 10.1109/IVS.2018.8500421
– ident: 337_CR247
  doi: 10.1145/3293882.3330579
– ident: 337_CR29
  doi: 10.1109/SP.2017.49
– ident: 337_CR72
  doi: 10.1109/DSN-W.2018.00064
– start-page: 150
  volume-title: Intelligent Data Engineering and Automated Learning - IDEAL 2019
  year: 2019
  ident: 337_CR124
  doi: 10.1007/978-3-030-33617-2_17
– ident: 337_CR249
  doi: 10.1145/3302504.3311814
– ident: 337_CR122
  doi: 10.1109/IMCSIT.2008.4747314
– ident: 337_CR176
  doi: 10.1109/ICCV.2019.00302
– ident: 337_CR2
– ident: 337_CR95
– volume: 176
  year: 2021
  ident: 337_CR85
  publication-title: J. Syst. Softw.
  doi: 10.1016/j.jss.2021.110938
– ident: 337_CR121
  doi: 10.1007/978-3-030-26250-1_29
– ident: 337_CR37
  doi: 10.1007/978-3-030-54549-9_14
– ident: 337_CR78
– ident: 337_CR107
– ident: 337_CR116
  doi: 10.1109/ICCE-Berlin47944.2019.8966168
– volume: 7
  start-page: 179104
  year: 2019
  ident: 337_CR250
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2019.2958406
– ident: 337_CR59
  doi: 10.1145/3395363.3397357
– ident: 337_CR132
  doi: 10.1109/ICMLA.2019.00145
– ident: 337_CR114
– start-page: 365
  volume-title: Computer Safety, Reliability, and Security
  year: 2019
  ident: 337_CR27
  doi: 10.1007/978-3-030-26250-1_30
– ident: 337_CR207
– volume: 54
  start-page: 41
  issue: 1
  year: 2012
  ident: 337_CR241
  publication-title: Inf. Softw. Technol.
  doi: 10.1016/j.infsof.2011.09.002
– ident: 337_CR96
– ident: 337_CR101
  doi: 10.1109/SEAA.2019.00026
– volume: 518
  start-page: 529
  year: 2015
  ident: 337_CR159
  publication-title: Nature
  doi: 10.1038/nature14236
– ident: 337_CR104
  doi: 10.1007/978-3-319-94205-6_41
– ident: 337_CR213
– ident: 337_CR73
  doi: 10.1109/CVPRW50498.2020.00170
– ident: 337_CR142
– ident: 337_CR169
– ident: 337_CR108
– ident: 337_CR7
– ident: 337_CR215
  doi: 10.1145/3180155.3180220
– ident: 337_CR218
– ident: 337_CR62
  doi: 10.1007/978-3-030-53288-8_6
– ident: 337_CR224
– volume: 123
  year: 2020
  ident: 337_CR257
  publication-title: Inf. Softw. Technol.
  doi: 10.1016/j.infsof.2020.106296
– volume: 1911
  start-page: 10621
  year: 2019
  ident: 337_CR48
  publication-title: arXiv preprint arXiv:1911.10621
– ident: 337_CR192
– ident: 337_CR92
  doi: 10.1109/CVPR.2019.00013
– ident: 337_CR119
– ident: 337_CR201
  doi: 10.1109/SBESC49506.2019.9046094
– ident: 337_CR216
  doi: 10.1145/3377811.3380400
– ident: 337_CR229
– ident: 337_CR235
– ident: 337_CR51
– volume-title: Deep Learning
  year: 2016
  ident: 337_CR75
– volume: 3
  start-page: 3860
  issue: 4
  year: 2018
  ident: 337_CR180
  publication-title: IEEE Robot. Autom. Lett.
  doi: 10.1109/LRA.2018.2857402
– ident: 337_CR181
– volume: 30
  start-page: 4
  issue: 4
  year: 2015
  ident: 337_CR255
  publication-title: IEEE Aerospace Electron. Syst. Mag.
  doi: 10.1109/MAES.2014.140109
– ident: 337_CR20
  doi: 10.1109/CVPRW50498.2020.00174
– ident: 337_CR70
  doi: 10.1109/SP.2018.00058
– ident: 337_CR193
– ident: 337_CR88
  doi: 10.1145/3243734.3243792
– ident: 337_CR158
– ident: 337_CR165
  doi: 10.1109/IVS.2019.8814134
– ident: 337_CR18
  doi: 10.1145/3319535.3354245
– volume: 52
  start-page: 792
  issue: 8
  year: 2010
  ident: 337_CR120
  publication-title: Inf. Softw. Technol.
  doi: 10.1016/j.infsof.2010.03.006
– ident: 337_CR94
– ident: 337_CR123
  doi: 10.1109/CVPRW50498.2020.00171
– ident: 337_CR209
– ident: 337_CR66
  doi: 10.1145/3293882.3330566
– ident: 337_CR238
– ident: 337_CR195
  doi: 10.1109/ISSRE.2019.00013
– ident: 337_CR42
– ident: 337_CR129
– start-page: 3
  volume-title: Automated Technology for Verification and Analysis
  year: 2018
  ident: 337_CR79
  doi: 10.1007/978-3-030-01090-4_1
– volume: 25
  start-page: 2168
  issue: 6
  year: 2019
  ident: 337_CR237
  publication-title: IEEE Trans. Vis. Comput. Graph.
  doi: 10.1109/TVCG.2019.2903943
– ident: 337_CR248
  doi: 10.1109/DSN-W.2019.00016
– volume: 16
  start-page: 31
  issue: 4
  year: 1999
  ident: 337_CR189
  publication-title: IEEE Softw.
  doi: 10.1109/52.776946
– ident: 337_CR226
– ident: 337_CR106
  doi: 10.1109/IROS.2018.8593420
– ident: 337_CR53
  doi: 10.1007/978-3-319-77935-5_9
– ident: 337_CR67
  doi: 10.1109/IROS.2017.8206247
– ident: 337_CR130
– ident: 337_CR127
  doi: 10.1109/ITSC.2018.8569637
– ident: 337_CR233
– ident: 337_CR47
– ident: 337_CR182
  doi: 10.1109/CINTI-MACRo49179.2019.9105190
– ident: 337_CR246
  doi: 10.1007/978-3-319-97301-2_7
– ident: 337_CR19
  doi: 10.1109/CVPRW.2019.00178
– ident: 337_CR99
– ident: 337_CR141
– volume: 8
  start-page: 3055
  year: 2020
  ident: 337_CR251
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2019.2962695
– ident: 337_CR58
– ident: 337_CR81
  doi: 10.1007/978-3-030-50086-3_6
– ident: 337_CR86
  doi: 10.1007/978-3-030-19212-9_24
– volume: 66
  start-page: 7
  year: 2020
  ident: 337_CR240
  publication-title: IEEE Trans. Autom. Control
– ident: 337_CR223
  doi: 10.1109/ICST46399.2020.00060
– start-page: 298
  volume-title: Braverman Readings in Machine Learning
  year: 2018
  ident: 337_CR3
– ident: 337_CR118
– volume: 34
  start-page: 14
  issue: 6
  year: 2019
  ident: 337_CR87
  publication-title: IEEE Intell. Syst.
  doi: 10.1109/MIS.2019.2957223
– ident: 337_CR152
– ident: 337_CR148
  doi: 10.1145/3238147.3238202
– ident: 337_CR23
  doi: 10.1109/ITSC.2018.8569436
– ident: 337_CR203
– ident: 337_CR113
  doi: 10.1109/ITSC45102.2020.9294549
– volume: 356
  start-page: 508
  issue: 6337
  year: 2017
  ident: 337_CR160
  publication-title: Science
  doi: 10.1126/science.aam6960
– volume: 807
  start-page: 298
  year: 2020
  ident: 337_CR245
  publication-title: Theoret. Comput. Sci.
  doi: 10.1016/j.tcs.2019.05.046
– ident: 337_CR164
  doi: 10.23919/ACC45564.2020.9147564
– ident: 337_CR177
  doi: 10.1109/REW.2019.00049
– ident: 337_CR188
– ident: 337_CR200
  doi: 10.1109/ICSE-NIER.2019.00030
– ident: 337_CR102
  doi: 10.1007/978-3-319-63387-9_1
– ident: 337_CR6
  doi: 10.36001/phmconf.2019.v11i1.774
– ident: 337_CR109
  doi: 10.1145/3371158.3371383
– ident: 337_CR1
  doi: 10.1109/AITEST49225.2020.00009
– ident: 337_CR45
  doi: 10.1101/2020.06.16.154542
– ident: 337_CR90
  doi: 10.1109/ITSC.2019.8917002
– ident: 337_CR210
  doi: 10.1109/ICSE-Companion.2019.00051
– ident: 337_CR157
– ident: 337_CR208
– ident: 337_CR197
  doi: 10.1109/LRA.2020.3012127
– ident: 337_CR110
  doi: 10.1109/CVPR42600.2020.00132
– ident: 337_CR199
– ident: 337_CR34
  doi: 10.1109/ICCAD45719.2019.8942153
– ident: 337_CR41
– ident: 337_CR214
– volume-title: Lectures on Convex Optimization
  year: 2018
  ident: 337_CR163
  doi: 10.1007/978-3-319-91578-4
– ident: 337_CR136
  doi: 10.1145/3359789.3359831
– start-page: 197
  volume-title: Computer Safety, Reliability, and Security
  year: 2020
  ident: 337_CR69
  doi: 10.1007/978-3-030-54549-9_13
– ident: 337_CR166
  doi: 10.1007/978-3-030-62144-5_10
– volume: 33
  start-page: 1328
  issue: 4
  year: 2021
  ident: 337_CR190
  publication-title: IEEE Trans. Knowl. Data Eng.
  doi: 10.1109/TKDE.2019.2946162
– ident: 337_CR4
  doi: 10.1109/ICST.2017.18
– ident: 337_CR31
  doi: 10.23919/ECC.2019.8795815
– volume: 9
  start-page: 4797
  year: 2021
  ident: 337_CR178
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2020.3048047
– ident: 337_CR82
  doi: 10.1016/j.ifacol.2020.12.2276
– ident: 337_CR35
  doi: 10.1007/978-3-319-68167-2_18
– ident: 337_CR145
  doi: 10.1109/ICRA.2019.8793611
– volume: 11
  start-page: 219
  issue: 3–4
  year: 2018
  ident: 337_CR61
  publication-title: Found. Trends Mach. Learn.
  doi: 10.1561/2200000071
– ident: 337_CR260
– ident: 337_CR205
– ident: 337_CR161
  doi: 10.1109/ITSC.2015.427
– volume: 538
  start-page: 20
  year: 2016
  ident: 337_CR30
  publication-title: Nat News
  doi: 10.1038/538020a
– ident: 337_CR98
– ident: 337_CR71
  doi: 10.1109/ICRA.2018.8460635
– ident: 337_CR151
  doi: 10.1109/GLOBECOM38437.2019.9013408
– ident: 337_CR32
– ident: 337_CR219
  doi: 10.1109/FormaliSE.2019.00012
– ident: 337_CR222
– ident: 337_CR194
– ident: 337_CR183
  doi: 10.1109/DASC43569.2019.9081635
– ident: 337_CR11
  doi: 10.1109/CVPR.2018.00099
– ident: 337_CR134
– ident: 337_CR171
  doi: 10.1609/aaai.v33i01.33019780
– ident: 337_CR9
  doi: 10.1109/CDC42340.2020.9303750
– volume: 25
  start-page: 465
  issue: 4
  year: 1982
  ident: 337_CR242
  publication-title: Comput. J.
  doi: 10.1093/comjnl/25.4.465
– ident: 337_CR26
– ident: 337_CR43
– ident: 337_CR14
  doi: 10.1007/978-3-030-55754-6_4
– ident: 337_CR196
  doi: 10.4271/2018-01-1075
– ident: 337_CR128
– ident: 337_CR227
  doi: 10.1109/IJCNN.2019.8851970
– volume: 34
  start-page: 5037
  year: 2020
  ident: 337_CR146
  publication-title: Proc. AAAI Conf. Artif. Intell.
– volume: 6
  start-page: 121
  year: 2022
  ident: 337_CR170
  publication-title: IEEE Control Syst. Lett.
  doi: 10.1109/LCSYS.2021.3050444
– volume: 7
  start-page: 160397
  year: 2019
  ident: 337_CR17
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2019.2951526
– ident: 337_CR133
– ident: 337_CR100
  doi: 10.1109/AITest.2019.00-12
– ident: 337_CR156
– ident: 337_CR232
– start-page: 45
  volume-title: Computer Safety, Reliability, and Security
  year: 2018
  ident: 337_CR68
– ident: 337_CR149
  doi: 10.1109/DSN-W.2019.00017
– ident: 337_CR117
  doi: 10.1007/978-3-319-63387-9_5
– ident: 337_CR89
  doi: 10.1145/3236024.3264835
– volume: 8
  start-page: 273
  issue: 2
  year: 2019
  ident: 337_CR125
  publication-title: Prog. Artif. Intell.
  doi: 10.1007/s13748-019-00179-x
– volume: 1
  start-page: 1
  year: 2020
  ident: 337_CR261
  publication-title: IEEE Trans. Softw. Eng.
  doi: 10.1109/TSE.2019.2962027
– volume: 6
  start-page: 346
  issue: 3
  year: 2020
  ident: 337_CR185
  publication-title: Engineering
  doi: 10.1016/j.eng.2019.12.012
– volume: 52
  start-page: 43
  year: 2018
  ident: 337_CR212
  publication-title: Comput. Lang. Syst. Struct.
– volume: 37
  start-page: 100270
  year: 2020
  ident: 337_CR103
  publication-title: Comput. Sci. Rev.
  doi: 10.1016/j.cosrev.2020.100270
– ident: 337_CR184
– ident: 337_CR84
  doi: 10.1145/3302509.3311038
– ident: 337_CR65
– volume: 33
  start-page: 7759
  year: 2019
  ident: 337_CR28
  publication-title: Proc. AAAI Conf. Artif. Intell.
– ident: 337_CR139
  doi: 10.1109/CVPR.2019.01168
– ident: 337_CR57
  doi: 10.1109/ICRA40945.2020.9196709
– ident: 337_CR112
  doi: 10.1109/DASC43569.2019.9081748
– ident: 337_CR150
– volume: 2021
  start-page: 10
  year: 2021
  ident: 337_CR230
  publication-title: IEEE Trans. Autom. Control
– volume: 18
  start-page: 5
  year: 2019
  ident: 337_CR211
  publication-title: ACM Trans. Embed. Comput. Syst.
  doi: 10.1145/3358233
– ident: 337_CR93
– ident: 337_CR204
– ident: 337_CR147
  doi: 10.1109/SANER.2019.8668044
– volume: 5
  start-page: 3153
  issue: 2
  year: 2020
  ident: 337_CR143
  publication-title: IEEE Robot. Autom. Lett.
  doi: 10.1109/LRA.2020.2974682
– ident: 337_CR105
– ident: 337_CR256
  doi: 10.1109/IECON.2017.8216790
SSID ssj0009699
Score 2.5639255
Snippet Context Machine Learning (ML) has been at the heart of many innovations over the past years. However, including it in so-called “safety-critical” systems such...
SourceID hal
proquest
crossref
springer
SourceType Open Access Repository
Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 38
SubjectTerms Artificial Intelligence
Certification
Computer Science
Literature reviews
Machine learning
Safety critical
Software Engineering/Programming and Operating Systems
Systematic review
SummonAdditionalLinks – databaseName: Computer Science Database
  dbid: K7-
  priority: 102
  providerName: ProQuest
Title How to certify machine learning based safety-critical systems? A systematic literature review
URI https://link.springer.com/article/10.1007/s10515-022-00337-x
https://www.proquest.com/docview/2918203857
https://hal.science/hal-04194063
Volume 29
WOSCitedRecordID wos000782599100001
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVPQU
  databaseName: Advanced Technologies & Aerospace Database
  customDbUrl:
  eissn: 1573-7535
  dateEnd: 20241207
  omitProxy: false
  ssIdentifier: ssj0009699
  issn: 0928-8910
  databaseCode: P5Z
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/hightechjournals
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Computer Science Database
  customDbUrl:
  eissn: 1573-7535
  dateEnd: 20241207
  omitProxy: false
  ssIdentifier: ssj0009699
  issn: 0928-8910
  databaseCode: K7-
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/compscijour
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Engineering Database
  customDbUrl:
  eissn: 1573-7535
  dateEnd: 20241207
  omitProxy: false
  ssIdentifier: ssj0009699
  issn: 0928-8910
  databaseCode: M7S
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Central
  customDbUrl:
  eissn: 1573-7535
  dateEnd: 20241207
  omitProxy: false
  ssIdentifier: ssj0009699
  issn: 0928-8910
  databaseCode: BENPR
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVAVX
  databaseName: SpringerLINK Contemporary 1997-Present
  customDbUrl:
  eissn: 1573-7535
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0009699
  issn: 0928-8910
  databaseCode: RSV
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: https://link.springer.com/search?facet-content-type=%22Journal%22
  providerName: Springer Nature
linkProvider Springer Nature
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=How+to+certify+machine+learning+based+safety-critical+systems%3F+A+systematic+literature+review&rft.jtitle=Automated+software+engineering&rft.au=Tambon%2C+Florian&rft.au=Laberge%2C+Gabriel&rft.au=An%2C+Le&rft.au=Nikanjam%2C+Amin&rft.date=2022-11-01&rft.pub=Springer+US&rft.issn=0928-8910&rft.eissn=1573-7535&rft.volume=29&rft.issue=2&rft_id=info:doi/10.1007%2Fs10515-022-00337-x&rft.externalDocID=10_1007_s10515_022_00337_x
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=0928-8910&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=0928-8910&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=0928-8910&client=summon