LanguageBind
- Paper: LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
- GitHub Link
- Publisher: ICLR 2024
- Author Affiliation: Peking University
- Functional Division
- Understanding
- Generation
- Design Division
- Tool-using
- End-to-end
- Input Modalities $\rightarrow$ Output Modalities
(I: Image, V: Video, A: Audio, 3D: Point Cloud, T: Text, ID: Document understanding, IB: Output bounding box, IM: Output segmentation mask, IR: Output retrieved images)
- I+V+A+T $\rightarrow$ T
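
The paper's core idea, as its title states, is binding additional modalities to a shared language space via semantic alignment. Below is a minimal, hedged sketch of that idea as a CLIP-style contrastive objective: each modality encoder is trained to match paired text embeddings from a (here frozen, placeholder) language anchor. All class names, dimensions, and the `contrastive_align` helper are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Stand-in encoder projecting one modality's features into the shared language space."""

    def __init__(self, in_dim: int, embed_dim: int = 512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so cosine similarity reduces to a dot product.
        return F.normalize(self.proj(x), dim=-1)


def contrastive_align(modality_emb: torch.Tensor, text_emb: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss pulling each modality embedding toward its paired text embedding."""
    logits = modality_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    batch, text_dim = 8, 512
    # Placeholder for frozen language-encoder outputs (the "bind" anchor).
    text_emb = F.normalize(torch.randn(batch, text_dim), dim=-1)

    # One lightweight encoder per non-text modality (dimensions are illustrative).
    encoders = {
        "image": ModalityEncoder(in_dim=1024),
        "video": ModalityEncoder(in_dim=1024),
        "audio": ModalityEncoder(in_dim=768),
    }
    features = {
        "image": torch.randn(batch, 1024),
        "video": torch.randn(batch, 1024),
        "audio": torch.randn(batch, 768),
    }

    # Sum the per-modality alignment losses against the shared language space.
    total_loss = sum(contrastive_align(enc(features[name]), text_emb)
                     for name, enc in encoders.items())
    print(f"combined alignment loss: {total_loss.item():.4f}")
```

In this sketch, language acts as the common coordinate system: every other modality is only ever aligned to text, which is what lets heterogeneous inputs (I+V+A) be mapped into a space where text-side supervision and retrieval apply.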