Identifying and localizing objects within images is a fundamental challenge, and numerous efforts have
been made to enhance model accuracy by experimenting with diverse architectures and refining training
strategies. Nevertheless, a prevalent limitation of existing models is that they overemphasize the
current input while ignoring information available across the dataset as a whole.
We introduce an innovative Retriever-Dictionary (RD) module to address this issue. This
architecture enables YOLO-based models to efficiently retrieve features from a Dictionary that
contains insights distilled from the entire dataset, built using knowledge from Visual Models (VMs),
Large Language Models (LLMs), or Visual Language Models (VLMs). The flexible RD enables the model to
incorporate such explicit knowledge, improving performance across multiple tasks, namely
segmentation, detection, and classification, from the pixel to the image level.
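The retrieval step described above can be sketched as a soft lookup: each feature queries the Dictionary, and a similarity-weighted combination of atoms is fused back into the feature. The function name, cosine-similarity retrieval, temperature, and residual fusion below are illustrative assumptions for intuition, not the paper's exact implementation.

```python
import numpy as np

# Hypothetical sketch of a Retriever-Dictionary lookup (names, shapes, and the
# fusion rule are illustrative assumptions, not the module's exact design).
def rd_lookup(features, dictionary, tau=1.0):
    """Enrich per-pixel features with dataset-level knowledge.

    features:   (H*W, C) backbone features for one image
    dictionary: (N, C)   atoms distilled offline from VM/LLM/VLM embeddings
    """
    # Cosine similarity between each feature and every dictionary atom.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    sim = f @ d.T / tau                        # (H*W, N)
    # Softmax over atoms yields soft retrieval weights.
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    retrieved = w @ dictionary                 # (H*W, C)
    # Residual fusion: add retrieved knowledge back onto the input features.
    return features + retrieved

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))   # e.g. a 4x4 feature map with 8 channels
atoms = rng.standard_normal((32, 8))   # 32 dictionary atoms
out = rd_lookup(feats, atoms)
print(out.shape)  # (16, 8)
```

Because the Dictionary is small relative to the backbone, such a lookup adds few parameters, which is consistent with the sub-1% parameter overhead reported below.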
Experiments show that the RD significantly improves model performance, achieving more than a 3%
increase in mean Average Precision for object detection with less than a 1% increase in model
parameters. Beyond one-stage object detectors, the RD module also improves the effectiveness of
two-stage models and DETR-based architectures, such as Faster R-CNN and DETR.