Lamps: Learning Anatomy from Multiple Perspectives via Self-supervision
Segment Anything for Cell Tracking
BioVFM-21M: Benchmarking and Scaling Self-Supervised Vision Foundation Models for Biomedical Image Analysis
From Pathology to Radiology: Evaluating the Applicability of Pathology Foundation Models
Pathology Foundation Models are Scanner Sensitive: Benchmark and Mitigation with Contrastive ScanGen Loss
Improved Training Sample Efficiency and Inter-Device Generalizability in Optical Coherence Tomography Fluid Segmentation via Foundation Models
Taming Stable Diffusion for Computed Tomography Blind Super-Resolution
RadiSimCLIP: A Radiology Vision-Language Model Pretrained on Simulated Radiologist Learning Dataset for Zero-Shot Medical Image Understanding
Improving Medical Visual Instruction Tuning with Labeled Datasets
DR.SIMON: Domain-wise Rewrite for Segment-Informed Medical Oversight Network
The Data Behind the Model: Gaps and Opportunities for Foundation Models in Brain Imaging
LGE Scar Quantification Using Foundation Models for Cardiac Disease Classification
Beyond Broad Applications: Can Pathology Foundation Models Adapt to Hematopathology
EndoTracker: Robustly Tracking Any Point in Endoscopic Surgical Scene
Temporally-Constrained Video Reasoning Segmentation and Automated Benchmark Construction
Cross-Modal Knowledge Distillation for Chest Radiographic Diagnosis via Embedding Expansion, Reconstruction, and Classification
Random Direct Preference Optimization for Radiography Report Generation
Test Time Adaptation of Medical Vision-Language Models
MaskedCLIP: Bridging the Masked and CLIP Space for Semi-Supervised Medical Vision-Language Pre-training