---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
---
## A Reality Check of Vision-Language Pre-training in Radiology: Have We Progressed Using Text?
- Code: [DLILP](https://github.com/jusiro/DLILP)
- Paper: [IPMI 2025](https://link.springer.com/chapter/10.1007/978-3-031-96625-5_20) - [ArXiv](https://arxiv.org/abs/2504.05227)
- Docs: [Documentation](https://github.com/jusiro/DLILP)
- Tutorial: [Notebook](https://colab.research.google.com/drive/1_8Ysd8mCKuLX_Q86e-7pOAHFbSR9F4aZ?usp=sharing)
### About "DLILP_CMP" weights:
- Pre-trained using the disentangled language-image-label pre-trianing (DLILP).
- Pre-trained on CheXpert, MIMIC, and PadChest data.
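
The weights are meant to be used with the DLILP codebase linked above. As a rough sketch only (the repo id and checkpoint filename below are placeholders, not confirmed by this card; check the repository's file listing and the DLILP code for the actual model class), the raw state dict can be fetched with `huggingface_hub`:

```python
# Hypothetical loading sketch: repo_id and filename are placeholders.
import torch
from huggingface_hub import hf_hub_download

# Download the checkpoint file from the Hub (filename is an assumption;
# PyTorchModelHubMixin repos typically store "pytorch_model.bin" or "model.safetensors").
ckpt_path = hf_hub_download(repo_id="<user>/DLILP_CMP", filename="pytorch_model.bin")

# Load the raw state dict on CPU and plug it into the DLILP model
# defined in https://github.com/jusiro/DLILP.
state_dict = torch.load(ckpt_path, map_location="cpu")
print(list(state_dict.keys())[:5])
```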
If you find this repository useful, please consider citing this paper:
```
@inproceedings{dlilp,
title={A Reality Check of Vision-Language Pre-training in Radiology: Have We Progressed Using Text?},
author={Julio Silva-Rodríguez and Jose Dolz and Ismail {Ben Ayed}},
booktitle={Information Processing in Medical Imaging (IPMI)},
year={2025}
}
```