Design and VLSI Implementation of Multilayered Neural Network Architecture Using Parallel Processing and Pipelining Algorithm for Image Compression


Detailed Bibliography
Published in: I-Manager's Journal on Software Engineering, Volume 8, Issue 3, pp. 13-25
Main authors: Mohan, Murali; Sathyanarayana
Medium: Journal Article
Language: English
Published: Nagercoil: iManager Publications, 01.01.2014
Subjects:
ISSN:0973-5151, 2230-7168
Description
Summary: In this paper, an optimized high-speed parallel processing architecture with pipelining for a multilayer neural network for image compression and decompression is implemented on an FPGA (Field-Programmable Gate Array). The multilayered feed-forward neural network is trained on 20 sets of image data to obtain the weights and biases used to construct the proposed architecture. The Verilog code developed is simulated using ModelSim for verification. The FPGA implementation is carried out using Xilinx ISE 10.1 on a Virtex-5 board. Once interfacing is done, the programming file for the top module is generated, the target device is configured, and the file is successfully downloaded to the Virtex-5. The design is then analyzed using ChipScope Pro, and its observed output matches the VCS (Verilog Compiler Simulator) simulation output. The design is optimized for a power of 1.01485 W and a memory of 540,916 KB.
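The compression idea the abstract describes (train a feed-forward network offline, then use the resulting weights and biases, with the hidden layer acting as the compressed representation) can be sketched in software. This is a minimal illustrative sketch, not the paper's Verilog/RTL design: the 16-4-16 layer sizes, the synthetic constant-intensity 4x4 blocks (standing in for the paper's 20 training image sets), the linear output layer, and the learning rate are all assumptions made for the example.

```python
import numpy as np

# Hypothetical 16-4-16 feed-forward autoencoder: a 4x4 image block
# (16 pixels) is compressed to 4 hidden activations (4:1 ratio) and
# reconstructed by the output layer. Trained with plain backpropagation.
rng = np.random.default_rng(0)
N_IN, N_HID = 16, 4

W1 = rng.normal(0.0, 0.5, (N_IN, N_HID)); b1 = np.zeros(N_HID)
W2 = rng.normal(0.0, 0.1, (N_HID, N_IN)); b2 = np.zeros(N_IN)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)   # hidden layer = compressed code
    y = h @ W2 + b2            # linear output layer = reconstruction
    return h, y

# Synthetic training set: each block holds one constant intensity in
# [0, 1], so 4 hidden units easily carry the information.
X = rng.random((200, 1)) * np.ones((1, N_IN))

mse = lambda y: float(((y - X) ** 2).mean())
mse_start = mse(forward(X)[1])

lr = 0.1
for _ in range(10000):
    h, y = forward(X)
    dy = y - X                            # output delta (MSE loss)
    dh = (dy @ W2.T) * h * (1.0 - h)      # delta backpropagated to hidden
    W2 -= lr * h.T @ dy / len(X); b2 -= lr * dy.mean(axis=0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(axis=0)

code, recon = forward(X)                  # code: (200, 4) compressed blocks
mse_end = mse(recon)
```

In the paper's hardware flow, training like this happens offline; only the fixed weights and biases are carried into the parallel, pipelined FPGA datapath.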
DOI:10.26634/jse.8.3.2808