Video compression using hybrid hexagon search and teaching–learning-based optimization technique for 3D reconstruction

Published in: Multimedia Systems, Vol. 27, No. 1, pp. 45–59
Main Authors: Veerasamy, B., Annadurai, S.
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V., 01.02.2021
ISSN: 0942-4962, 1432-1882
Description
Summary: Motion estimation from a video sequence is a central problem in video processing. Recent research has focused on global optimization techniques that estimate the optical flow over pixel neighborhoods. This paper proposes a hybrid, statistically efficient motion estimation procedure for more effective video compression. The method searches with a hexagonal pattern using a fixed number of search points at each grid position, exploiting the correlation among neighboring pixels within a frame. To reduce computational complexity, the approach combines hexagon search with the teaching–learning-based optimization (TLBO) algorithm, which also lowers the complexity of the block-matching procedure. Image quality was verified through 3D reconstruction using structured-light techniques. Compared with existing methods, the hexagon-search-based teaching–learning optimization algorithm achieved higher accuracy, with a PSNR of 44.36 dB, an MSE of 2.40, and a compression ratio of 7.50.
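As context for the abstract above: the authors' hybrid HS-TLBO procedure is not available from this record, but the hexagon-based search (HEXBS) pattern it builds on and the PSNR metric it reports are standard. The Python sketch below illustrates both under stated assumptions; the function names, the 16-iteration step limit, and the 8-bit pixel range are illustrative choices, not the paper's implementation.

```python
import numpy as np

# HEXBS search patterns as (dy, dx) offsets: a coarse 6-point large
# hexagon (plus centre) and a 4-point small hexagon (plus centre)
# used once for the final refinement step.
LARGE_HEX = [(0, 0), (0, -2), (0, 2), (-2, -1), (-2, 1), (2, -1), (2, 1)]
SMALL_HEX = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and the candidate
    region of the reference frame at (y, x); inf if out of bounds."""
    h, w = block.shape
    if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
        return np.inf
    return np.abs(block.astype(np.int64)
                  - ref[y:y + h, x:x + w].astype(np.int64)).sum()

def hexagon_search(block, ref, y0, x0, max_steps=16):
    """Classic hexagon-based block matching: recentre the large hexagon
    until its centre wins, then refine once with the small hexagon.
    Returns the motion vector (dy, dx) relative to (y0, x0)."""
    cy, cx = y0, x0
    for _ in range(max_steps):
        cost, dy, dx = min((sad(block, ref, cy + d, cx + e), d, e)
                           for d, e in LARGE_HEX)
        if (dy, dx) == (0, 0):      # centre is the minimum: coarse stage done
            break
        cy, cx = cy + dy, cx + dx   # otherwise recentre on the best point
    _, dy, dx = min((sad(block, ref, cy + d, cx + e), d, e)
                    for d, e in SMALL_HEX)
    return (cy + dy - y0, cx + dx - x0)

def psnr(orig, recon):
    """Peak signal-to-noise ratio in dB, assuming 8-bit pixel values."""
    mse = np.mean((orig.astype(np.float64) - recon.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```

A full encoder would run hexagon_search once per macroblock and, in the paper's hybrid, feed the resulting candidates to a TLBO refinement stage; that stage is specific to the paper and is deliberately omitted here.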
DOI: 10.1007/s00530-020-00699-w