🎉 [WACV’26] Two papers have been accepted in Round 2!

Tags: Academic
Date: 2025/11/09

Title: Patch-wise Retrieval: A Bag of Practical Techniques for Instance-level Matching

Authors: Wonseok Choi (POSTECH), Sohwi Lim (KAIST), Nam Hyeon-Woo (POSTECH), Moon Ye-Bin (POSTECH), Dong-ju Jeong (Samsung Research), Jinyoung Hwang (Samsung Research), Tae-Hyun Oh (KAIST)

Abstract: Instance-level image retrieval aims to find images containing the same object as a given query, despite variations in size, position, or appearance. To address this challenging task, we propose Patchify, a simple yet effective patch-wise retrieval framework that offers high performance, scalability, and interpretability without requiring fine-tuning. Patchify divides each database image into a small number of structured patches and performs retrieval by comparing these local features with a global query descriptor, enabling accurate and spatially grounded matching. To assess not just retrieval accuracy but also spatial correctness, we introduce LocScore, a localization-aware metric that quantifies whether the retrieved region aligns with the target object. This makes LocScore a valuable diagnostic tool for understanding and improving retrieval behavior. We conduct extensive experiments across multiple benchmarks, backbones, and region selection strategies, showing that Patchify outperforms global methods and complements state-of-the-art reranking pipelines. Furthermore, we apply Product Quantization for efficient large-scale retrieval and highlight the importance of using informative features during compression, which significantly boosts performance.
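
The abstract describes the core mechanic: each database image is split into a small grid of patches, and retrieval compares a global query descriptor against those patch descriptors, with the best-matching patch providing the spatial grounding. Below is a minimal sketch of that idea in PyTorch; the 3×3 grid pooling, cosine scoring, and random stand-in features are illustrative assumptions, not the paper's Patchify implementation or its LocScore metric.

```python
# Minimal patch-wise retrieval sketch (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def grid_patch_descriptors(feat_map: torch.Tensor, grid: int = 3) -> torch.Tensor:
    """Pool a (C, H, W) feature map into grid*grid L2-normalized patch descriptors."""
    pooled = F.adaptive_avg_pool2d(feat_map.unsqueeze(0), grid)  # (1, C, g, g)
    patches = pooled.flatten(2).squeeze(0).T                     # (g*g, C)
    return F.normalize(patches, dim=-1)

def patchwise_scores(query_global: torch.Tensor, db_patches: torch.Tensor):
    """Compare one global query descriptor (C,) against patch banks (N, P, C).

    Returns a per-image score (max patch similarity) and the index of the
    best-matching patch, i.e. the spatially grounded match one would check
    against the target region for a LocScore-style evaluation.
    """
    q = F.normalize(query_global, dim=-1)
    sims = torch.einsum("c,npc->np", q, db_patches)  # cosine similarity per patch
    best_sim, best_patch = sims.max(dim=1)
    return best_sim, best_patch

# Usage with random stand-in features (replace with real backbone outputs):
db_patches = F.normalize(torch.randn(1000, 9, 768), dim=-1)  # 1000 images, 3x3 grid
query = torch.randn(768)
scores, patch_ids = patchwise_scores(query, db_patches)
ranking = scores.argsort(descending=True)
```

For the large-scale setting mentioned in the abstract, patch descriptors like these could be compressed with Product Quantization (e.g., faiss.IndexPQ) before search; the paper's observation about using informative features during compression would apply at that stage.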

Title: Beyond the Highlights: Video Retrieval with Salient and Surrounding Contexts

Authors: Jaehun Bang (UNIST), Moon Ye-Bin (POSTECH), Kyungdon Joo (UNIST), Tae-Hyun Oh (KAIST)

Abstract: When searching for videos, users often rely on surrounding context such as background elements or temporal details beyond salient content. However, existing video models struggle with fine-grained spatio-temporal understanding, particularly of surrounding contexts, and no existing datasets effectively evaluate this ability. We introduce SS Datasets, three video retrieval datasets with detailed salient and surrounding captions. To capture rich, temporally localized contexts aligned with meaningful scene changes, we segment videos by scene transitions and generate captions with a vision-language model. Analyzing current models reveals difficulties in handling surrounding queries and temporally complex videos. To address this, we propose simple yet effective baselines that improve retrieval across diverse query types, enabling more robust generalization to real-world scenarios.
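
As a rough illustration of the two steps the abstract mentions, the sketch below splits a video at large frame-to-frame color-histogram changes and ranks videos against a text-query embedding by cosine similarity. The histogram-based cut detection, threshold, and embedding pooling are assumptions for illustration; the paper's pipeline captions each segment with a vision-language model, which is not reproduced here.

```python
# Illustrative scene segmentation + retrieval ranking (not the paper's pipeline).
import numpy as np

def segment_by_scene_change(frames: np.ndarray, thresh: float = 0.4):
    """Split a (T, H, W, 3) uint8 frame array into segments at large
    frame-to-frame color-histogram changes. Returns (start, end) index pairs."""
    hists = []
    for f in frames:
        h, _ = np.histogramdd(f.reshape(-1, 3), bins=(8, 8, 8), range=[(0, 256)] * 3)
        hists.append(h.ravel() / h.sum())
    cuts = [0]
    for t in range(1, len(hists)):
        # L1 distance between consecutive normalized histograms signals a cut
        if np.abs(hists[t] - hists[t - 1]).sum() > thresh:
            cuts.append(t)
    cuts.append(len(frames))
    return list(zip(cuts[:-1], cuts[1:]))

def rank_videos(query_emb: np.ndarray, video_embs: np.ndarray) -> np.ndarray:
    """Rank videos by cosine similarity between a text-query embedding (D,)
    and per-video embeddings (N, D), e.g. pooled segment-caption embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    return np.argsort(-(v @ q))
```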