Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
7-2025
Abstract
Images are often obstructed by various obstacles due to capture limitations, hindering the observation of objects of interest. Most existing methods address occlusions from specific elements like fences or raindrops, but are constrained by the wide range of real-world obstructions, making comprehensive data collection impractical. To overcome these challenges, we propose Instruct2See, a novel zero-shot framework capable of handling both seen and unseen obstacles. The core idea of our approach is to unify obstruction removal by treating it as a soft-hard mask restoration problem, where any obstruction can be represented using multi-modal prompts, such as visual semantics and textual instructions, processed through a cross-attention unit to enhance contextual understanding and improve mode control. Additionally, a tunable mask adapter allows for dynamic soft masking, enabling real-time adjustment of inaccurate masks. Extensive experiments on both in-distribution and out-of-distribution obstacles show that Instruct2See consistently achieves strong performance and generalization in obstruction removal, regardless of whether the obstacles were present during the training phase. Code and dataset are available at https://jhscut.github.io/Instruct2See.
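The abstract describes two architectural ideas: fusing multi-modal prompts (visual semantics and textual instructions) with image features through cross-attention, and relaxing a hard obstruction mask into a tunable soft mask. The minimal PyTorch sketch below illustrates how such components might be wired together; it is not the authors' implementation, and all module names, dimensions, and the sigmoid-based mask relaxation are illustrative assumptions.

```python
# Illustrative sketch only: cross-attention fusion of prompt embeddings with
# image features, plus a tunable soft-mask adapter. Names, dimensions, and the
# blending scheme are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn


class PromptCrossAttention(nn.Module):
    """Fuse image features (queries) with multi-modal prompt embeddings (keys/values)."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens: torch.Tensor, prompt_tokens: torch.Tensor) -> torch.Tensor:
        # image_tokens: (B, N_img, dim); prompt_tokens: (B, N_prompt, dim)
        fused, _ = self.attn(image_tokens, prompt_tokens, prompt_tokens)
        return self.norm(image_tokens + fused)  # residual connection


class SoftMaskAdapter(nn.Module):
    """Relax a possibly inaccurate hard mask into an adjustable soft mask."""

    def __init__(self):
        super().__init__()
        # Learnable sharpness; could also be adjusted at inference time.
        self.sharpness = nn.Parameter(torch.tensor(5.0))

    def forward(self, hard_mask: torch.Tensor) -> torch.Tensor:
        # hard_mask: (B, 1, H, W), values near 0 (keep) or 1 (obstructed).
        # A sigmoid centred at 0.5 turns the binary mask into a soft one.
        return torch.sigmoid(self.sharpness * (hard_mask - 0.5))


if __name__ == "__main__":
    B, N_img, N_prompt, dim = 2, 64, 16, 256
    fuse = PromptCrossAttention(dim)
    adapter = SoftMaskAdapter()
    img_feat = torch.randn(B, N_img, dim)            # flattened image features
    prompts = torch.randn(B, N_prompt, dim)          # visual + textual prompt embeddings
    mask = (torch.rand(B, 1, 32, 32) > 0.7).float()  # coarse obstruction mask
    print(fuse(img_feat, prompts).shape)             # torch.Size([2, 64, 256])
    print(adapter(mask).shape)                       # torch.Size([2, 1, 32, 32])
```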
Keywords
computer vision, image restoration
Discipline
Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
Proceedings of the Forty-Second International Conference on Machine Learning, Vancouver, Canada, 2025 July 13-19
First Page
1
Last Page
18
City or Country
USA
Citation
LI, Junhang; GUO, Yu; XIAN, Chuhua; and HE, Shengfeng.
Instruct2See: Learning to remove any obstructions across distributions. (2025). Proceedings of the Forty-Second International Conference on Machine Learning, Vancouver, Canada, 2025 July 13-19. 1-18.
Available at: https://ink.library.smu.edu.sg/sis_research/10474
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.