
AI model transforms blurry, choppy videos into clear, seamless footage

Qualitative comparison on arbitrary scale temporal interpolation. Credit: arXiv (2025). DOI: 10.48550/arxiv.2501.11043

A research team led by Professor Jaejun Yoo from the Graduate School of Artificial Intelligence at UNIST has announced the development of an advanced artificial intelligence (AI) model, "BF-STVSR (Bidirectional Flow-based Spatio-Temporal Video Super-Resolution)," capable of simultaneously improving both video resolution and frame rate.

This research was led by first author Eunjin Kim, with Hyeonjin Kim serving as co-author. Their findings were presented at the Conference on Computer Vision and Pattern Recognition (CVPR 2025) held in Nashville June 11–15. The study is posted on the arXiv preprint server.

Resolution and frame rate are critical factors that determine video quality. Higher resolution results in sharper images with more detailed visuals, while increased frame rates ensure smoother motion without abrupt jumps.

Traditional AI-based video restoration techniques typically handle resolution and frame rate enhancement separately, relying heavily on pre-trained optical flow prediction networks for motion estimation. Optical flow calculates the direction and speed of object movement to generate intermediate frames. However, this approach involves complex computations and is prone to accumulated errors, limiting both the speed and quality of video restoration.
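The intermediate-frame step described above can be sketched in a few lines: given a dense optical flow field, each pixel of the midpoint frame is sampled halfway along its motion vector. This is a minimal nearest-neighbor illustration in NumPy, not the method used by any specific model; `warp_to_midpoint` and the toy flow field are assumptions for illustration.

```python
import numpy as np

def warp_to_midpoint(frame, flow):
    """Approximate the frame halfway between two frames by backward-warping.

    frame: (H, W) grayscale image at time t=0.
    flow:  (H, W, 2) per-pixel (dx, dy) motion from t=0 to t=1.
    Each midpoint pixel samples the source frame half a motion vector
    back (nearest-neighbor sampling, flow assumed locally constant).
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - 0.5 * flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys - 0.5 * flow[..., 1]), 0, h - 1).astype(int)
    return frame[src_y, src_x]

# Toy example: a bright 2x2 square moving 4 px to the right between frames,
# so at the midpoint it should appear shifted 2 px to the right.
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 1.0
flow = np.zeros((8, 8, 2))
flow[..., 0] = 4.0  # uniform rightward motion
mid = warp_to_midpoint(frame, flow)
```

In practice, interpolation models warp both neighboring frames toward the midpoint and blend them; errors in the estimated flow propagate directly into the synthesized frame, which is the accumulation problem the article refers to.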

In contrast, BF-STVSR introduces signal representations tailored to video characteristics—B-splines and Fourier features, as reflected in the paper's title—enabling the model to learn bidirectional motion between frames on its own, without depending on external optical flow networks. By jointly inferring object contours and motion flow, the model enhances resolution and frame rate simultaneously, producing more natural and coherent video reconstruction.

When applied to low-resolution, low-frame-rate videos, the model outperformed existing approaches, achieving higher Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) scores. Higher PSNR and SSIM values indicate that even videos with significant motion retain clear, undistorted figures and fine details, yielding more realistic results.
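The two metrics mentioned here are standard and easy to state. Below is a short NumPy sketch: PSNR as defined from mean squared error, and a simplified global SSIM computed over whole images (production implementations use a sliding Gaussian window; `ssim_global` is a simplification for illustration).

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB; higher means less distortion."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    """Structural Similarity computed globally (no sliding window).

    Compares luminance, contrast, and structure; 1.0 means identical.
    """
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy example: a single corrupted pixel in a 4x4 image.
ref = np.ones((4, 4))
noisy = ref.copy()
noisy[0, 0] = 0.0  # MSE = 1/16, so PSNR = 10*log10(16) ≈ 12.04 dB
score = psnr(ref, noisy)
```

PSNR measures pixel-wise fidelity, while SSIM better tracks perceived structural quality, which is why papers in this area typically report both.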

Professor Yoo explained, “This technology has broad applications, from restoring security camera footage or black-box recordings captured with low-end devices to quickly enhancing compressed streaming videos for high-quality media content. It can also benefit fields such as medical imaging and virtual reality (VR).”

More information:
Eunjin Kim et al, BF-STVSR: B-Splines and Fourier-Best Friends for High Fidelity Spatial-Temporal Video Super-Resolution, arXiv (2025). DOI: 10.48550/arxiv.2501.11043

Journal information:
arXiv


Provided by
UNIST

Citation:
AI model transforms blurry, choppy videos into clear, seamless footage (2025, July 7)
retrieved 7 July 2025
from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

