AnyMAL
- Paper: AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model
- GitHub Link:
None
- Author Affiliation:
FAIR, Meta & Meta Reality Labs
- Functional Division
- Understanding
- Generation
- Design Division
- Tool-using
- End-to-end
- Input Modalities $\rightarrow$ Output Modalities
(I: Image, V: Video, A: Audio, 3D: Point Cloud, T: Text, ID: Document understanding, IB: Output bounding box, IM: Output segmentation mask, IR: Output retrieved images)
- I+V+A+T $\rightarrow$ T
- Model Architecture
(Input $\rightarrow$ Modality Encoder $\rightarrow$ Input Projector $\rightarrow$ LLM Backbone $\rightarrow$ Output Projector $\rightarrow$ Modality Generator $\rightarrow$ Output; see the code sketch at the end of this entry)
- Modality Encoder
I: CLIP ViT-L & ViT-G & DINOv2
V: InternVideo
A: CLAP
- Input Projector
I/V: Cross-attention
A: Linear Projector
- LLM Backbone
LLaMA-2
- Output Projector
None
- Modality Generator
None
- Datasets Scale
- Pre-training Stage
Not reported
- Instruction-tuning Stage
Not reported
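
The end-to-end architecture listed above (frozen modality encoders feeding lightweight input projectors that inject modality tokens into a frozen LLaMA-2 backbone) can be illustrated with a short PyTorch sketch. This is a minimal illustration under assumed feature dimensions and random stand-in features, not the paper's implementation; `CrossAttentionProjector`, `LinearProjector`, and `fuse_with_text` are hypothetical names, and the real system would use CLIP ViT-L / ViT-G / DINOv2, InternVideo, and CLAP as the frozen encoders.

```python
# Minimal sketch of an AnyMAL-style input pipeline:
# frozen modality encoders -> lightweight input projectors -> frozen LLM backbone.
# Dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn


class CrossAttentionProjector(nn.Module):
    """Resamples visual encoder features into a fixed number of LLM-space tokens
    via learned query tokens and cross-attention (image/video projector)."""

    def __init__(self, enc_dim: int, llm_dim: int, num_queries: int = 32, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, llm_dim))
        self.to_kv = nn.Linear(enc_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, num_heads, batch_first=True)

    def forward(self, enc_feats: torch.Tensor) -> torch.Tensor:
        # enc_feats: (batch, num_patches, enc_dim) from a frozen vision encoder
        kv = self.to_kv(enc_feats)
        q = self.queries.unsqueeze(0).expand(enc_feats.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)           # (batch, num_queries, llm_dim)
        return out


class LinearProjector(nn.Module):
    """Single linear map from audio-encoder space to LLM token space."""

    def __init__(self, enc_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(enc_dim, llm_dim)

    def forward(self, enc_feats: torch.Tensor) -> torch.Tensor:
        return self.proj(enc_feats)             # (batch, seq, llm_dim)


def fuse_with_text(modality_tokens: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    """Prepend projected modality tokens to the text token embeddings; the frozen
    LLM backbone (LLaMA-2 in the paper) would then consume the combined sequence."""
    return torch.cat([modality_tokens, text_embeds], dim=1)


if __name__ == "__main__":
    llm_dim = 4096                               # assumed LLaMA-2 hidden size
    # Stand-ins for frozen encoder outputs (real features would come from CLIP/InternVideo/CLAP).
    image_feats = torch.randn(1, 256, 1024)      # (batch, patches, vision width)
    audio_feats = torch.randn(1, 64, 512)        # (batch, frames, audio width)
    text_embeds = torch.randn(1, 16, llm_dim)    # embedded text prompt tokens

    image_proj = CrossAttentionProjector(enc_dim=1024, llm_dim=llm_dim)
    audio_proj = LinearProjector(enc_dim=512, llm_dim=llm_dim)

    seq = fuse_with_text(image_proj(image_feats), text_embeds)
    seq = fuse_with_text(audio_proj(audio_feats), seq)
    print(seq.shape)                             # torch.Size([1, 112, 4096])
```

Because the Output Projector and Modality Generator entries are None, only these input-side projectors are needed; the text output comes directly from the LLM backbone.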