Design and VLSI Implementation of Multilayered Neural Network Architecture Using Parallel Processing and Pipelining Algorithm for Image Compression

Bibliographic Details
Published in: i-manager's Journal on Software Engineering, Vol. 8, No. 3, pp. 13-25
Main Authors: Mohan, Murali; Sathyanarayana
Format: Journal Article
Language: English
Published: Nagercoil: iManager Publications, 01.01.2014
ISSN: 0973-5151, 2230-7168
Description
Summary: In this paper, an optimized high-speed parallel processing architecture with pipelining for a multilayer neural network for image compression and decompression is implemented on an FPGA (Field-Programmable Gate Array). The multilayered feed-forward neural network is trained on 20 sets of image data to obtain the weights and biases used to construct the proposed architecture. The Verilog code developed is simulated using ModelSim for verification, and the FPGA implementation is carried out using Xilinx ISE 10.1 targeting a Virtex-5 board. Once interfacing is complete, the programming file for the top module is generated, the target device is configured, and the file is downloaded to the Virtex-5. The design is then analyzed using ChipScope Pro, whose output matches the VCS (Verilog Compiler Simulator) simulation output. The optimized design consumes 1.01485 W of power and 540,916 KB of memory.
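
The summary describes per-neuron parallel multipliers feeding a pipelined accumulation path with pre-trained constant weights and biases. As a rough illustration only, not the authors' RTL, the minimal Verilog sketch below shows one neuron of such a feed-forward layer; the module name, the 8-bit fixed-point width, the four-input fan-in, and the placeholder weight/bias values are all assumptions, and the activation function is omitted.

    module pipelined_neuron #(
        parameter W = 8                        // fixed-point width (assumption)
    )(
        input  wire                   clk,
        input  wire                   rst,
        input  wire signed [W-1:0]    x0, x1, x2, x3,  // four parallel inputs
        output reg  signed [2*W+1:0]  y                // pre-activation sum
    );
        // Pre-trained weights and bias held as constants; the values below
        // are placeholders, not coefficients from the paper's training runs.
        localparam signed [W-1:0]   W0 = 8'sd3, W1 = -8'sd2,
                                    W2 = 8'sd5, W3 = 8'sd1;
        localparam signed [2*W+1:0] BIAS = 18'sd10;

        // Pipeline stage 1 registers: four products computed in parallel.
        reg signed [2*W-1:0] p0, p1, p2, p3;
        // Pipeline stage 2 registers: first level of the adder tree.
        reg signed [2*W:0]   s0, s1;

        always @(posedge clk) begin
            if (rst) begin
                p0 <= 0; p1 <= 0; p2 <= 0; p3 <= 0;
                s0 <= 0; s1 <= 0; y <= 0;
            end else begin
                // Stage 1: parallel multiplications.
                p0 <= x0 * W0;  p1 <= x1 * W1;
                p2 <= x2 * W2;  p3 <= x3 * W3;
                // Stage 2: pairwise sums of the products.
                s0 <= p0 + p1;
                s1 <= p2 + p3;
                // Stage 3: bias add; an activation stage (e.g. a
                // piecewise-linear sigmoid lookup) would follow here.
                y <= s0 + s1 + BIAS;
            end
        end
    endmodule

With the three register stages, a new four-pixel input vector can be accepted every clock cycle, which is the throughput advantage pipelining offers over a purely combinational neuron.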
DOI: 10.26634/jse.8.3.2808