Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

10-2024

Abstract

In real-world scenarios, image impairments often manifest as composite degradations, presenting a complex interplay of elements such as low light, haze, rain, and snow. Despite this reality, existing restoration methods typically target isolated degradation types, thereby falling short in environments where multiple degrading factors coexist. To bridge this gap, our study proposes a versatile imaging model that consolidates four physical corruption paradigms to accurately represent complex, composite degradation scenarios. In this context, we propose OneRestore, a novel transformer-based framework designed for adaptive, controllable scene restoration. The proposed framework leverages a unique cross-attention mechanism, merging degraded scene descriptors with image features to allow for nuanced restoration. Our model accepts versatile input scene descriptors, ranging from manual text embeddings to automatic extractions based on visual attributes. Our methodology is further enhanced through a composite degradation restoration loss, which uses extra degraded images as negative samples to strengthen model constraints. Comparative results on synthetic and real-world datasets demonstrate OneRestore as a superior solution, significantly advancing the state-of-the-art in addressing complex, composite degradations.
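The two mechanisms highlighted in the abstract, cross-attention fusion of scene descriptors with image features and a restoration loss that treats extra degraded images as negative samples, can be illustrated with a minimal PyTorch sketch. This is not the authors' OneRestore implementation; the module and function names (`DescriptorCrossAttention`, `composite_restore_loss`), the tensor shapes, and the `margin` parameter are hypothetical assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DescriptorCrossAttention(nn.Module):
    """Hypothetical sketch: image feature tokens attend to a degraded-scene
    descriptor embedding (e.g. an embedding for "low light + haze")."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens: torch.Tensor, scene_desc: torch.Tensor) -> torch.Tensor:
        # img_tokens: (B, N, C) flattened spatial features
        # scene_desc: (B, M, C) descriptor embeddings (manual text or auto-extracted)
        fused, _ = self.attn(query=img_tokens, key=scene_desc, value=scene_desc)
        return self.norm(img_tokens + fused)  # residual fusion of descriptor guidance


def composite_restore_loss(restored, clean, extra_degraded, margin: float = 0.5):
    """Sketch of a restoration loss using additional degraded images as
    negative samples: pull the output toward the clean target and push it
    away from other degraded renditions of the scene."""
    pos = F.l1_loss(restored, clean)
    neg = torch.stack([F.l1_loss(restored, d) for d in extra_degraded]).mean()
    return pos + F.relu(margin - neg)  # penalize outputs that stay close to negatives


# Toy usage with random tensors (shapes are illustrative assumptions)
fuser = DescriptorCrossAttention(dim=256)
img_tokens = torch.randn(2, 1024, 256)   # B=2, a 32x32 feature map flattened
scene_desc = torch.randn(2, 4, 256)      # up to four degradation descriptors
fused = fuser(img_tokens, scene_desc)
```

The residual cross-attention keeps the image features intact when descriptors are uninformative, while the margin term only activates when the restored output drifts toward one of the negative (still-degraded) samples; both choices are illustrative, not taken from the paper.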

Keywords

Image restoration, Imaging model, Transformer-based framework, Scene descriptors

Discipline

Databases and Information Systems | Graphics and Human Computer Interfaces

Research Areas

Data Science and Engineering; Intelligent Systems and Optimization

Publication

Proceedings of the 18th European Conference on Computer Vision (ECCV 2024): Milan, Italy, September 29 - October 4, 2024

Publisher

ECCV

City or Country

Italy
