Training Compact DNNs with ℓ1/2 Regularization

• We propose a network compression model based on ℓ1/2 regularization. To the best of our knowledge, it is the first work utilizing non-Lipschitz continuous regularization to compress DNNs.
• We rigorously prove the correspondence between ℓp (0 < p < 1) regularization and the hyper-Laplacian prior. Based on this prior, we...
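The highlights describe penalizing network weights with an ℓ1/2 term, a special case of ℓp regularization with 0 < p < 1, which is non-convex and non-Lipschitz at zero and therefore promotes exact sparsity. A minimal NumPy sketch of such a penalty added to a training objective (function names and the λ value are illustrative, not taken from the paper):

```python
import numpy as np

def l_half_penalty(weights, lam=0.1):
    """ℓ1/2 regularization term: lam * sum_i |w_i|^(1/2).

    Non-Lipschitz at w_i = 0, which pushes small weights to
    exactly zero and yields a compact (sparse) network.
    """
    return lam * np.sum(np.sqrt(np.abs(weights)))

def regularized_loss(data_loss, weights, lam=0.1):
    # Total objective: data-fitting loss plus the sparsity penalty.
    return data_loss + l_half_penalty(weights, lam)

w = np.array([0.0, 0.01, -0.25, 1.0])
print(l_half_penalty(w, lam=1.0))  # ≈ 0 + 0.1 + 0.5 + 1.0 = 1.6
```

Note how, compared with the ℓ1 penalty Σ|w_i|, the square root weights small coefficients relatively more heavily, which is one intuition for the stronger sparsification the paper attributes to ℓp with p < 1.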

Bibliographic Details
Published in: Pattern Recognition, Vol. 136
Main Authors: Tang, Anda; Niu, Lingfeng; Miao, Jianyu; Zhang, Peng
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.04.2023
ISSN: 0031-3203, 1873-5142