Advanced Computer Vision Alignment Technique Using Preprocessing Filters and Deep Learning

Saved in:
Detailed Bibliography
Published in: Ingénierie des systèmes d'Information, Volume 29, Issue 4, pp. 1493-1499
Main author: Ghindawi, Ekhlas Watan
Format: Journal Article
Language: English
Published: Edmonton: International Information and Engineering Technology Association (IIETA), 01.08.2024
Subjects:
ISSN: 1633-1311, 2116-7125
DOI: 10.18280/isi.290422
Online access: Get full text
Description
Summary: Image alignment is a crucial subject in computer vision applications for image analysis. Its aim is to find the spatial transformation that aligns a moving image with a reference image. Deep learning techniques, which have grown increasingly popular in recent years, deliver good results on alignment challenges as well as on many other computer vision problems. In this work, a supervised deep learning technique is used to estimate the spatial transformation parameters. The spatial transformation model is rigid: the parameters of the rigid transformation that maps the moving image onto the fixed image are estimated with a supervised convolutional neural network (CNN). The primary contribution of the presented research is a CNN regression model that performs supervised rigid image alignment on input images affected by quality degradation. The study examines many parameter settings to determine the impact of noise on each image and to identify the configuration that yields the best results for the problem.
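
A 2D rigid transformation is fully determined by a rotation angle \theta and a translation vector (t_x, t_y), so the network only needs to regress three scalars. As a point of reference (a standard formulation, not quoted from the paper), a point (x, y) in the moving image is mapped to:

```latex
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
+
\begin{pmatrix} t_x \\ t_y \end{pmatrix}
```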
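
This record does not reproduce the paper's preprocessing pipeline. As a minimal sketch of the kind of filter-based preprocessing the title refers to, the snippet below corrupts a synthetic image with Gaussian noise and applies two standard denoising filters; the filter choices, kernel sizes, and noise level are illustrative assumptions, not values from the study.

```python
import cv2
import numpy as np

# Synthetic test image: a filled circle on a black background.
rng = np.random.default_rng(0)
clean = np.zeros((128, 128), dtype=np.float32)
cv2.circle(clean, (64, 64), 30, 1.0, -1)

# Simulate quality degradation with additive Gaussian noise.
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0).astype(np.float32)

# Two common preprocessing filters (parameters are assumptions):
gauss = cv2.GaussianBlur(noisy, (5, 5), 1.0)                # Gaussian smoothing
median = cv2.medianBlur((noisy * 255).astype(np.uint8), 5)  # median filtering
```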
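
The paper's network architecture is likewise not given in this record. The following minimal PyTorch sketch shows what supervised CNN regression of the three rigid parameters could look like; every layer size, tensor shape, and learning rate below is an illustrative assumption rather than the study's design.

```python
import torch
import torch.nn as nn

class RigidRegressionCNN(nn.Module):
    """Regresses rigid-alignment parameters (theta, t_x, t_y).

    Input: the moving and fixed images stacked on the channel axis,
    shape (batch, 2, H, W). All sizes here are assumptions.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),       # global pooling -> (batch, 64, 1, 1)
        )
        self.regressor = nn.Linear(64, 3)  # -> (theta, t_x, t_y)

    def forward(self, pair):
        return self.regressor(self.features(pair).flatten(1))

# One supervised training step: the loss compares the predicted
# parameters with the known ground-truth transformation of each pair.
model = RigidRegressionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

pair = torch.randn(8, 2, 128, 128)  # batch of moving/fixed image pairs
true_params = torch.randn(8, 3)     # ground-truth (theta, t_x, t_y)

loss = loss_fn(model(pair), true_params)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Training pairs for such a model are typically generated by applying known random rigid transformations (and, per the summary, varying levels of noise) to reference images, which yields exact ground-truth labels for supervision.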