Aggregation algorithm based on consensus verification

Published in: Scientific Reports, Vol. 13, Issue 1, Article 12923 (14 pages)
Main authors: Li Shichao; Qin Jiwei
Medium: Journal Article
Language: English
Published: London, Nature Publishing Group UK, 09 Aug 2023
ISSN: 2045-2322
Online access: Get full text
Abstract: Distributed learning, the most popular approach to training deep-learning models on large-scale data, has multiple participants collaborate on a data-training task. However, malicious behavior during training, such as Byzantine participants who interrupt or take control of the learning process, puts data security at risk. Although recent defense mechanisms use the variability of Byzantine nodes' gradients to filter out Byzantine values, they still cannot identify and remove subtle perturbation attacks. To address this critical issue, we propose an algorithm named consensus aggregation. The algorithm lets computing nodes use information from verification nodes to check whether a gradient is valid under a perturbation attack and to reach a consensus on that validity. The server node then treats a gradient as valid for aggregation only when the other computing nodes have reached such a consensus. On the MNIST and CIFAR10 datasets under Drift attacks, the proposed algorithm outperforms common aggregation algorithms (Krum, Trimmed Mean, Bulyan), reaching accuracies of 93.3% and 94.06% on MNIST and 48.66% and 51.55% on CIFAR10, an improvement of 3.0% and 3.8% (MNIST) and 19.0% and 26.1% (CIFAR10) over the current state of the art, while also successfully defending against the other attack methods tested.
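The paper's actual protocol is not reproduced in this record, so the following is only a minimal sketch of the idea as the abstract describes it: verification nodes vote on the validity of each submitted gradient, and the server aggregates only gradients that reach a consensus quorum. The `consensus_aggregate` function, the boolean voting matrix, and the quorum rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def consensus_aggregate(gradients, votes, quorum):
    """Aggregate only gradients approved by a consensus of verifiers.

    gradients: list of np.ndarray, one gradient per computing node
    votes: boolean matrix, votes[v][i] is True if verifier v judged
           gradient i to be valid (unperturbed)
    quorum: minimum number of approving verifiers for a gradient to count
    """
    votes = np.asarray(votes)
    approvals = votes.sum(axis=0)  # number of approving verifiers per gradient
    valid = [g for g, a in zip(gradients, approvals) if a >= quorum]
    if not valid:
        raise ValueError("no gradient reached consensus")
    return np.mean(valid, axis=0)  # server-side aggregation over valid gradients

# Hypothetical round: 4 computing nodes, 3 verifiers.
# Node 3 submits a drifted (Byzantine) gradient; one verifier is fooled.
grads = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
         np.array([0.9, 1.1]), np.array([10.0, -10.0])]
votes = [[True, True, True, False],
         [True, True, True, False],
         [True, True, True, True]]
agg = consensus_aggregate(grads, votes, quorum=2)  # drifted gradient excluded
```

With a quorum of 2 of 3 verifiers, the drifted gradient is rejected even though one verifier approved it, and the aggregate is the mean of the three honest gradients.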
Article number: 12923
Author 1: Li Shichao. School of Information Science and Engineering, Xinjiang University, Key Laboratory of Signal Detection and Processing (Xinjiang University), Xinjiang Uygur Autonomous Region.
Author 2: Qin Jiwei (jwqin_xju@163.com). School of Information Science and Engineering, Xinjiang University, Key Laboratory of Signal Detection and Processing (Xinjiang University), Xinjiang Uygur Autonomous Region.
Copyright: The Author(s) 2023. Published by Springer Nature under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
DOI: 10.1038/s41598-023-38688-4
EISSN: 2045-2322
PMCID: PMC10412648
PMID: 37558756
Funding: Major Science and Technology Project of Xinjiang Uygur Autonomous Region, grant 2020A03001
Subjects: 639/705/117; 639/705/258; Algorithms; Datasets; Deep learning; Humanities and Social Sciences; multidisciplinary; Nodes; Science; Science (multidisciplinary); Training
Title Aggregation algorithm based on consensus verification
URI https://link.springer.com/article/10.1038/s41598-023-38688-4
https://www.ncbi.nlm.nih.gov/pubmed/37558756
https://www.proquest.com/docview/2848021330
https://www.proquest.com/docview/2848845456
https://pubmed.ncbi.nlm.nih.gov/PMC10412648
https://doaj.org/article/99941c73919746f5a0b91800e6144d18
Volume 13
WOSCitedRecordID wos001045574100035
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVAON
  databaseName: DOAJ Directory of Open Access Journals
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: DOA
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://www.doaj.org/
  providerName: Directory of Open Access Journals
– providerCode: PRVHPJ
  databaseName: ROAD: Directory of Open Access Scholarly Resources
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: M~E
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://road.issn.org
  providerName: ISSN International Centre
– providerCode: PRVPQU
  databaseName: Biological Science Database
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: M7P
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/biologicalscijournals
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Central
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: BENPR
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest_Health & Medical Collection
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: 7X7
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/healthcomplete
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Publicly Available Content Database
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: PIMPY
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/publiccontent
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Science Database
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: M2P
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/sciencejournals
  providerName: ProQuest
linkProvider Directory of Open Access Journals
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Aggregation+algorithm+based+on+consensus+verification&rft.jtitle=Scientific+reports&rft.au=Shichao%2C+Li&rft.au=Jiwei%2C+Qin&rft.date=2023-08-09&rft.issn=2045-2322&rft.eissn=2045-2322&rft.volume=13&rft.issue=1&rft.spage=12923&rft_id=info:doi/10.1038%2Fs41598-023-38688-4&rft.externalDBID=NO_FULL_TEXT