UNetDeblur: Optimized Lightweight and Efficient Motion Deblurring for Mobile Platforms in Real-Time Scenarios.

Bibliographic Details
Title: UNetDeblur: Optimized Lightweight and Efficient Motion Deblurring for Mobile Platforms in Real-Time Scenarios.
Authors: Ranjan, Arti (AUTHOR) arti.ranja@gmail.com, Ravinder, M. (AUTHOR) ravinderm@igdtuw.ac.in
Source: Circuits, Systems & Signal Processing. Oct2025, Vol. 44 Issue 10, p7719-7752. 34p.
Subject Terms: *REAL-time computing, *VIDEO processing, *MOBILE operating systems, *IMAGE reconstruction, *MOTION capture (Human mechanics), *MATHEMATICAL optimization
Abstract: Video deblurring on mobile devices presents significant challenges due to the limited processing power of mobile hardware and the computational requirements of real-time high-resolution video processing. With the rise of 4K mobile cameras, the increasing data volume has further intensified the need for efficient and lightweight deblurring solutions. To address these challenges, we propose UNetDeblur, a novel, lightweight motion deblurring framework designed specifically for mobile platforms. UNetDeblur effectively balances algorithmic complexity, computational efficiency, and model size by integrating MobileViT (Mobile-friendly Vision Transformer) for fine-grained spatial feature extraction, a Convolutional Gated Recurrent Unit (ConvGRU) for capturing temporal dependencies in motion-blurred frames, and a lightweight U-Net-based architecture for high-fidelity image reconstruction. To further enhance model efficiency, we adopt AdamW, a modified Adam optimizer with decoupled weight decay and memory-efficient gradient updates, as our optimization algorithm, which compresses the model and makes it suitable for real-time scenarios. Experimental results on standard datasets (BSD, DVD, GoPro, and RealTime_VDBLR) demonstrate the effectiveness of UNetDeblur both before and after optimization. UNetDeblur achieves relative improvements of 2.08%, 1.04%, 2.11%, and 2.04% over the existing Uformer model on these datasets, respectively. The model size was reduced from 300 MB to 30 MB (a 90% reduction), and computational complexity was optimized for real-time execution, making the model 6× faster while maintaining high-quality video deblurring. [ABSTRACT FROM AUTHOR]
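For reference, the decoupled weight-decay update that distinguishes AdamW (the optimizer the abstract says the authors adopt) from plain Adam can be sketched as follows. This is a minimal single-parameter sketch of the standard AdamW rule, not the authors' implementation; the hyperparameter values are illustrative defaults, not those used in the paper:

```python
import math

def adamw_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One AdamW update for a single scalar parameter theta.

    Unlike plain Adam with L2 regularization, the weight-decay term is
    applied directly to the parameter ("decoupled"), not folded into the
    gradient before the moment estimates.
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * (m_hat / (math.sqrt(v_hat) + eps)
                          + weight_decay * theta)
    return theta, m, v

# Example: one update starting from theta = 1.0 with gradient 0.5
theta, m, v = adamw_step(1.0, 0.5, 0.0, 0.0, t=1)
```

In practice the same update is applied elementwise to every tensor in the network; the decoupling is what lets weight decay act as model compression pressure independent of the adaptive learning rate.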
Database: Academic Search Index
ISSN: 0278-081X
DOI: 10.1007/s00034-025-03163-0