Speech Enhancement using Convolutional Autoencoder Network

Bibliographic Details
Published in:INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT Vol. 7; no. 12; pp. 1 - 11
Main Authors: Sengupta, Subhadeep, Rihal, Pranav, D’Souza, Allwin
Format: Journal Article
Language:English
Published: 01.12.2023
ISSN:2582-3930
Description
Summary:Abstract—We present an end-to-end deep learning approach to denoising speech signals by processing the raw waveform directly. Given input audio containing speech corrupted by an additive background signal, the system aims to produce a processed signal that contains only the speech content. Recent approaches have shown promising results using various deep network architectures. In this paper, we propose to train a fully-convolutional context aggregation network using a deep feature loss. That loss is based on comparing the internal feature activations in a different network, trained for acoustic environment detection and domestic audio tagging. Our approach outperforms the state-of-the-art in objective speech quality metrics and in large-scale perceptual experiments with human listeners. It also outperforms an identical network trained using traditional regression losses. The advantage of the new approach is particularly pronounced for the hardest data with the most intrusive background noise, for which denoising is most needed and most challenging. Index Terms—speech enhancement, fully convolutional denoising autoencoders, single channel audio source separation, stacked convolutional autoencoders, deep convolutional neural networks, deep learning.
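The deep feature loss described in the abstract can be sketched as follows: instead of regressing the denoised waveform directly against the clean target, the loss compares internal activations of a separate, frozen "loss network" applied to both signals. This is a minimal illustrative sketch, not the authors' implementation; the `FeatureNet` architecture, layer sizes, and function names here are assumptions standing in for the audio-tagging network the paper uses.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Hypothetical frozen loss network (stand-in for the pretrained
    acoustic-tagging network); its layer activations define the loss."""
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, 16, 15, stride=2, padding=7), nn.LeakyReLU()),
            nn.Sequential(nn.Conv1d(16, 32, 15, stride=2, padding=7), nn.LeakyReLU()),
            nn.Sequential(nn.Conv1d(32, 64, 15, stride=2, padding=7), nn.LeakyReLU()),
        ])

    def forward(self, x):
        # Return the activation of every layer, not just the final output.
        feats = []
        for layer in self.layers:
            x = layer(x)
            feats.append(x)
        return feats

def deep_feature_loss(feature_net, denoised, clean):
    """Sum of L1 distances between internal activations of the frozen
    network on the denoised output vs. the clean reference."""
    with torch.no_grad():
        ref_feats = feature_net(clean)      # targets: no gradient needed
    out_feats = feature_net(denoised)       # gradients flow back to the denoiser
    return sum(nn.functional.l1_loss(o, r)
               for o, r in zip(out_feats, ref_feats))
```

During training, the denoiser's output would be passed through `deep_feature_loss` against the clean speech; only the denoiser's parameters are updated, while `FeatureNet` stays fixed so its activations act as a perceptually motivated distance.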
DOI:10.55041/IJSREM27573