Advanced Computer Vision Alignment Technique Using Preprocessing Filters and Deep Learning

Detailed bibliography
Published in: Ingénierie des systèmes d'Information, Volume 29, Issue 4, pp. 1493-1499
Main author: Ghindawi, Ekhlas Watan
Format: Journal Article
Language: English
Published: Edmonton: International Information and Engineering Technology Association (IIETA), 01.08.2024
ISSN: 1633-1311, 2116-7125
Description
Summary: Image alignment is a crucial subject in computer vision applications for image analysis. The aim of image alignment is to find the spatial transformation that maps a moving image onto a reference image. Deep learning techniques, which have become increasingly popular in recent years, yield good results when applied to alignment challenges, in addition to many other computer vision problems. In this work, a supervised deep learning technique is used to estimate the spatial transformation parameters. The spatial transformation model is rigid: the rigid transformation parameters that map the moving image onto the fixed image are estimated with a supervised convolutional neural network (CNN). The primary contribution of the presented research is to carry out supervised rigid image alignment with a CNN regression model that handles input images suffering from quality degradation. The study examines many parameter settings in an attempt to ascertain the impact of noise in each image and to identify the settings that yield the optimal outcomes for the problem.
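As a rough illustration of the approach described in the summary (the record does not give the paper's exact architecture, filters, or training details), the sketch below shows a small PyTorch CNN that regresses the three rigid transformation parameters (rotation angle and x/y translations) from a fixed/moving image pair and is trained in a supervised fashion against known ground-truth parameters. A Gaussian filter stands in for the preprocessing step applied to degraded inputs; all layer sizes, names, and data shapes are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch: supervised CNN regression of rigid alignment parameters.
# Assumed, not taken from the paper: architecture, shapes, and the Gaussian
# preprocessing filter are placeholders for whatever the study actually used.
import torch
import torch.nn as nn
from torchvision.transforms.functional import gaussian_blur

class RigidRegressionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # The fixed and moving images are stacked as a 2-channel input.
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(8),
        )
        # Regression head outputs 3 rigid parameters: angle, tx, ty.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, fixed, moving):
        x = torch.cat([fixed, moving], dim=1)
        return self.regressor(self.features(x))

model = RigidRegressionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Toy batch: reference images, degraded moving images, ground-truth parameters.
fixed = torch.rand(4, 1, 128, 128)
moving = torch.rand(4, 1, 128, 128)
target = torch.rand(4, 3)  # (angle, tx, ty) per pair, illustrative values only

# Preprocessing filter on the degraded input before alignment.
moving = gaussian_blur(moving, kernel_size=3)

# One supervised training step on the regression loss.
optimizer.zero_grad()
pred = model(fixed, moving)
loss = loss_fn(pred, target)
loss.backward()
optimizer.step()
```

The predicted parameters would then be used to resample the moving image onto the fixed image's grid; the record does not specify how the paper performs that final warping step.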
DOI: 10.18280/isi.290422