This repository provides the official PyTorch implementation of our ICLR 2025 paper:
DynaPrompt: Dynamic Test-Time Prompt Tuning
Zehao Xiao, Shilin Yan, Jack Hong, Jiayin Cai, Xiaolong Jiang, Yao Hu, Jiayi Shen, Qi Wang, Cees G. M. Snoek
For more details, please check out our paper.
This repository contains the implementation of DynaPrompt for image classification with a pre-trained CLIP model, focusing on adapting the learnable prompts online and dynamically at test time.
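Conceptually, test-time prompt tuning in the spirit of TPT (which this code builds on) updates only the prompt for each incoming test sample, e.g. by minimizing the entropy of the prediction averaged over augmented views. Below is a minimal, self-contained sketch with toy tensors; it is not the actual DynaPrompt model, and all shapes and values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Toy sketch of TPT-style test-time prompt tuning (NOT the actual DynaPrompt code):
# only the prompt is learnable; image and text encoders are frozen.
torch.manual_seed(0)

num_classes, dim, n_views = 5, 16, 4
prompt = torch.zeros(dim, requires_grad=True)       # learnable prompt vector
text_feats = torch.randn(num_classes, dim)          # frozen class text embeddings
views = torch.randn(n_views, dim)                   # features of augmented views of one test image

optimizer = torch.optim.SGD([prompt], lr=1e-2)
logits = (views + prompt) @ text_feats.t()          # prompt shifts the features (toy model)
probs = F.softmax(logits, dim=-1).mean(0)           # average prediction over views
entropy = -(probs * probs.clamp_min(1e-8).log()).sum()
entropy.backward()
optimizer.step()                                    # one-step prompt update at test time
print(prompt.abs().sum().item() > 0)                # the prompt has been updated
```

In the real pipeline the prompt tokens are prepended to the class names inside CLIP's text encoder rather than added to image features; the entropy objective and the "update only the prompt" principle are the part this sketch illustrates.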
This implementation is for the single-GPU configuration, evaluated on a single A6000.
The code is tested on PyTorch 1.13.1.
The code is based on CoOp and TPT, and requires similar packages, such as dassl.
For out-of-distribution generalization, we consider 5 datasets: ImageNet, ImageNet-A, ImageNet-V2, ImageNet-R, and ImageNet-Sketch.
For cross-dataset generalization, we consider 10 datasets: Flower102, DTD, Pets, Cars, UCF101, Caltech101, Food101, SUN397, Aircraft, and EuroSAT.
For cross-dataset generalization, we adopt the same train/val/test splits as CoOp. Please refer to this page for the download links of split_zhou_${dataset_name}.json, and put the JSON files under ./data/data_splits/.
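The directory layout for the split files can be prepared as follows; the download step is omitted, and the file name shown is one example of the split_zhou_${dataset_name}.json pattern.

```shell
# Create the expected directory for the CoOp split files.
mkdir -p ./data/data_splits

# After downloading a split file (e.g. for Caltech101), move it into place:
# mv ~/Downloads/split_zhou_Caltech101.json ./data/data_splits/

# Verify the directory exists and list any split files already in place.
ls ./data/data_splits
```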
We provide the bash scripts under ./scripts. You can directly run bash run_dynap.sh to run the code with the default hyperparameters.
An example to run DynaPrompt on out-of-distribution datasets:
bash run_dynap.sh
Change the data_root to your own data path.
Change the dataset to A, R, V, S, or I to evaluate on datasets ImageNet-A, ImageNet-R, ImageNet-V2, ImageNet-Sketch, or ImageNet, respectively.
You can also change the dataset to flower102, dtd, pets, cars, ucf101, caltech101, food101, sun397, aircraft, or eurosat for cross-dataset generalization.
Change the num_p for different numbers of prompts.
Change the arch to RN50 or ViT-B/16 for different backbones.
log_date, lr, ntx, and seed denote the name of log files, learning rate, length of prompts, and random seed, respectively.
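Putting the options above together, a typical configuration might look like the following. The variable names come from the README; the concrete values are illustrative examples, and the actual run_dynap.sh may consume them differently (e.g. by editing the script rather than passing arguments).

```shell
# Example configuration (illustrative values; adjust to your setup).
data_root=/path/to/data   # your own data path
dataset=A                 # A/R/V/S/I for OOD, or e.g. flower102 for cross-dataset
arch=ViT-B/16             # backbone: RN50 or ViT-B/16
num_p=10                  # number of prompts (example value)
lr=5e-3                   # learning rate (example value)
ntx=4                     # length of prompts (example value)
seed=0                    # random seed
log_date=example_run      # name of the log files

# Show the resulting invocation (hypothetical argument order).
echo "bash run_dynap.sh ${data_root} ${dataset} ${arch} ${num_p} ${lr} ${ntx} ${seed} ${log_date}"
```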
If you find our code useful or our work relevant, please consider citing:
@inproceedings{zehao2025dynaprompt,
  title={DynaPrompt: Dynamic Test-Time Prompt Tuning},
  author={Xiao, Zehao and Yan, Shilin and Hong, Jack and Cai, Jiayin and Jiang, Xiaolong and Hu, Yao and Shen, Jiayi and Wang, Qi and Snoek, Cees GM},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
We thank the authors of TPT and CoOp/CoCoOp for their open-source implementation and instructions on data preparation.