[arXiv 2022] Unsupervised Prompt Learning for Vision-Language Models - Tony Huang, Jack Chu, Fangyun Wei
[arXiv 2022] Improving Zero-Shot Models with Label Distribution Priors - Jonathan Kahana, Niv Cohen, Yedid Hoshen
[ICLR 2023] Masked Unsupervised Self-training for Label-free Image Classification - Junnan Li, Silvio Savarese, Steven CH Hoi
[ICML 2023] A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models - James Urquhart Allingham*, Jie Ren*, Michael W Dusenberry, Xiuye Gu, Yin Cui, Dustin Tran, Jeremiah Zhe Liu, Balaji Lakshminarayanan
[ICML 2023] POUF: Prompt-Oriented Unsupervised Fine-tuning for Large Pre-trained Models - Korawat Tanwisuth, Shujian Zhang, Huangjie Zheng, Pengcheng He, Mingyuan Zhou
[PRCV 2023] Unsupervised Prototype Adapter for Vision-Language Models - Yi Zhang, Ce Zhang, Xueting Hu, Zhihai He
[NeurIPS 2023] Neural Priming for Sample-Efficient Adaptation - Matthew Wallingford, Vivek Ramanujan, Alex Fang, Aditya Kusupati, Roozbeh Mottaghi, Aniruddha Kembhavi, Ludwig Schmidt, Ali Farhadi
[NeurIPS 2023] LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections - M Jehanzeb Mirza, Leonid Karlinsky, Wei Lin, Mateusz Kozinski, Horst Possegger, Rogerio Feris, Horst Bischof
[NeurIPS 2023] Enhancing CLIP with CLIP: Exploring Pseudolabeling for Limited-Label Prompt Tuning - Cristina Menghini, Andrew Delworth, Stephen H Bach
[NeurIPS 2023] Intra-Modal Proxy Learning for Zero-Shot Visual Categorization with CLIP - Qi Qian, Yuanhong Xu, Juhua Hu
[NeurIPS 2023] SwapPrompt: Test-Time Prompt Adaptation for Vision-Language Models - Xiaosong Ma, Jie Zhang, Song Guo, Wenchao Xu
[International Journal of Applied Earth Observation and Geoinformation 2023] RS-CLIP: Zero Shot Remote Sensing Scene Classification via Contrastive Vision-Language Supervision - Xiang Li, Congcong Wen, Yuan Hu, Nan Zhou
[arXiv 2023] Prompt Ensemble Self-training for Open-Vocabulary Domain Adaptation - Jiaxing Huang, Jingyi Zhang, Han Qiu, Sheng Jin, Shijian Lu
[arXiv 2023] Improving CLIP Robustness with Knowledge Distillation and Self-Training - Clement Laroudie, Andrei Bursuc, Mai Lan Ha, Gianni Franchi
[WACV 2024] ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation - Xuefeng Hu, Ke Zhang, Lu Xia, Albert Chen, Jiajia Luo, Yuyin Sun, Ken Wang, Nan Qiao, Xiao Zeng, Min Sun, Cheng-Hao Kuo, Ram Nevatia
[CVPR 2024] PromptKD: Unsupervised Prompt Distillation for Vision-Language Models - Zheng Li, Xiang Li, Xinyi Fu, Xin Zhang, Weiqiang Wang, Shuo Chen, Jian Yang
[CVPR 2024] Label Propagation for Zero-shot Classification with Vision-Language Models - Vladan Stojnić, Yannis Kalantidis, Giorgos Tolias
[CVPR 2024] Transductive Zero-Shot and Few-Shot CLIP - Ségolène Martin, Yunshi Huang, Fereshteh Shakeri, Jean-Christophe Pesquet, Ismail Ben Ayed
[ICML 2024] Realistic Unsupervised CLIP Fine-tuning with Universal Entropy Optimization - Jian Liang, Lijun Sheng, Zhengbo Wang, Ran He, Tieniu Tan
[ICML 2024] Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data - Jiahan Zhang, Qi Wei, Feng Liu, Lei Feng
[ICML 2024] Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models - Christian Schlarmann, Naman Deep Singh, Francesco Croce, Matthias Hein
[ECCV 2024] Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation - Marco Mistretta, Alberto Baldrati, Marco Bertini, Andrew D Bagdanov
[ECCV 2024] uCAP: An Unsupervised Prompting Method for Vision-Language Models - A Tuan Nguyen, Kai Sheng Tai, Bor-Chun Chen, Satya Narayan Shukla, Hanchao Yu, Philip Torr, Tai-Peng Tian, Ser-Nam Lim
[BMVC 2024] Noise-Tolerant Few-Shot Unsupervised Adapter for Vision-Language Models - Eman Ali, Muhammad Haris Khan
[NeurIPS 2024] Boosting Vision-Language Models with Transduction - Maxime Zanella, Benoît Gérin, Ismail Ben Ayed
[NeurIPS 2024] OTTER: Effortless Label Distribution Adaptation of Zero-shot Models - Changho Shin, Jitian Zhao, Sonia Cromp, Harit Vishwakarma, Frederic Sala
[arXiv 2024] Training-Free Unsupervised Prompt for Vision-Language Models - Sifan Long, Linbin Wang, Zhen Zhao, Zichang Tan, Yiming Wu, Shengsheng Wang, Jingdong Wang
[arXiv 2024] Can Language-Guided Unsupervised Adaptation Improve Medical Image Classification Using Unpaired Images and Texts? - Umaima Rahman, Raza Imam, Dwarikanath Mahapatra, Boulbaba Ben Amor
[arXiv 2024] Lightweight Unsupervised Federated Learning with Pretrained Vision Language Model - Hao Yan, Yuhong Guo
[arXiv 2024] CLIP meets DINO for Tuning Zero-Shot Classifier using Unlabeled Image Collections - Mohamed Fazli Imam, Rufael Fedaku Marew, Jameel Hassan, Mustansar Fiaz, Alham Fikri Aji, Hisham Cholakkal
[arXiv 2024] Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation - Yongguang Li, Yueqi Cao, Jindong Li, Qi Wang, Shengsheng Wang
[WACV 2025] DPA: Dual Prototypes Alignment for Unsupervised Adaptation of Vision-Language Models - Eman Ali, Sathira Silva, Muhammad Haris Khan
[WACV 2025] LATTECLIP: Unsupervised CLIP Fine-Tuning via LMM-Synthetic Texts - Anh-Quan Cao, Maximilian Jaritz, Matthieu Guillaumin, Raoul de Charette, Loris Bazzani
[ICCV 2025] FLOSS: Free Lunch in Open-vocabulary Semantic Segmentation - Yasser Benigmim, Mohammad Fahes, Tuan-Hung Vu, Andrei Bursuc, Raoul de Charette
[ICCV 2025] Generate, Transduct, Adapt: Iterative Transduction with VLMs - Oindrila Saha, Logan Lawrence, Grant Van Horn, Subhransu Maji
[arXiv 2025] OTFusion: Bridging Vision-only and Vision-Language Models via Optimal Transport for Transductive Zero-Shot Learning - Qiyu Xu, Wenyang Chen, Zhanxuan Hu, Huafeng Li, Yonghang Tai
[arXiv 2025] microCLIP: Unsupervised CLIP Adaptation via Coarse-Fine Token Fusion for Fine-Grained Image Classification - Sathira Silva, Eman Ali, Chetan Arora, Muhammad Haris Khan