Exploiting layerwise convexity of rectifier networks with sign constrained weights

Bibliographic details
Published in: Neural Networks, Vol. 105, pp. 419–430
Authors: An, Senjian; Boussaid, Farid; Bennamoun, Mohammed; Sohel, Ferdous
Format: Journal Article
Language: English
Published: Elsevier Ltd, United States, 1 September 2018
ISSN: 0893-6080; EISSN: 1879-2782
Online access: Full text
Abstract: By introducing sign constraints on the weights, this paper proposes sign constrained rectifier networks (SCRNs), whose training can be solved efficiently by well-known majorization–minimization (MM) algorithms. We prove that the proposed two-hidden-layer SCRNs, which exhibit negative weights in the second hidden layer and negative weights in the output layer, are capable of separating any number of disjoint pattern sets. Furthermore, the proposed two-hidden-layer SCRNs can decompose the patterns of each class into several clusters so that each cluster is convexly separable from all the patterns of the other classes. This provides a means to learn the pattern structures and to analyse the discriminant factors between different classes of patterns. Experimental results are provided to show the benefits of sign constraints in improving classification performance and the efficiency of the proposed MM algorithm.
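The abstract states that training is solved by majorization–minimization (MM) algorithms. As a minimal sketch of the MM principle only (a toy problem, not the paper's SCRN training objective), the example below minimizes f(x) = Σᵢ |x − aᵢ|, whose minimizer is the median: each step majorizes |u| by the quadratic u²/(2|uₖ|) + |uₖ|/2, which touches |u| at the current iterate, and minimizes that surrogate in closed form.

```python
def mm_median(data, x0=0.0, iters=100, eps=1e-12):
    """Toy MM (a.k.a. IRLS) iteration for f(x) = sum_i |x - a_i|."""
    x = x0
    for _ in range(iters):
        # Weights of the quadratic majorizer built at the current iterate;
        # eps guards against division by zero when x hits a data point.
        w = [1.0 / max(abs(x - a), eps) for a in data]
        # Minimizing the surrogate is a weighted least-squares step.
        x = sum(wi * a for wi, a in zip(w, data)) / sum(w)
    return x

print(mm_median([1.0, 2.0, 7.0, 9.0, 10.0]))  # approaches the median, 7.0
```

Each surrogate upper-bounds f and agrees with it at the current point, so f decreases monotonically across iterations; this descent guarantee is the general appeal of MM schemes.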
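The claim that each cluster is "convexly separable" rests on a standard fact about rectifier units under sign constraints: a non-negative combination of ReLU units is a convex piecewise-linear function of the input, so its sublevel sets (the decision regions {x : f(x) ≤ t}) are convex. The sketch below illustrates this with hypothetical weights chosen for the demo, not taken from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# One rectifier layer; the output weights c are sign-constrained (all >= 0),
# so f is a non-negative combination of convex functions, hence convex.
W = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([-1.0, -1.0, -1.0, -1.0])
c = np.array([1.0, 1.0, 1.0, 1.0])

def f(x):
    return c @ relu(W @ x + b)

# Convexity check along one chord: f(midpoint) <= average of the endpoints.
xa, xb = np.array([2.0, 0.0]), np.array([0.0, 2.0])
mid = 0.5 * (xa + xb)
print(f(mid) <= 0.5 * (f(xa) + f(xb)))  # True
```

With these particular weights f(x) = Σ relu(±x₁ − 1) + relu(±x₂ − 1), so the region {x : f(x) ≤ 0} is the square [−1, 1]², a convex set; flipping the sign constraint (non-positive output weights) yields a concave function instead, which is the direction the paper exploits layerwise.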
Authors and affiliations:
1. An, Senjian (senjian.an@uwa.edu.au, senjian.an@gmail.com; ORCID: 0000-0003-4806-1825), Department of Computer Science and Software Engineering, The University of Western Australia, Australia
2. Boussaid, Farid, Department of Electrical, Electronic and Computer Engineering, The University of Western Australia, Australia
3. Bennamoun, Mohammed, Department of Computer Science and Software Engineering, The University of Western Australia, Australia
4. Sohel, Ferdous, School of Engineering and Information Technology, Murdoch University, Australia
Copyright: © 2018 Elsevier Ltd. All rights reserved.
DOI: 10.1016/j.neunet.2018.06.005
Discipline Computer Science
IsPeerReviewed true
IsScholarly true
Keywords: Geometrically interpretable neural network; The majorization–minimization algorithm; Rectifier neural network
ORCID 0000-0003-4806-1825
PMID 29945061
PageCount 12
URI https://dx.doi.org/10.1016/j.neunet.2018.06.005
https://www.ncbi.nlm.nih.gov/pubmed/29945061
https://www.proquest.com/docview/2060873238