Lyrics
- Paper: Lyrics: Boosting Fine-grained Language-Vision Alignment and Comprehension via Semantic-aware Visual Objects
- GitHub Link:
None
- Publisher:
arXiv
- Author Affiliation:
International Digital Economy Academy
- Functional Division
- Understanding
- Generation
- Design Division
- Tool-using
- End-to-end
- Input Modalities $\rightarrow$ Output Modalities
(I: Image, V: Video, A: Audio, 3D: Point Cloud, T: Text, ID: Document understanding, IB: Output bounding box, IM: Output segmentation mask, IR: Output retrieved images)
- I+T $\rightarrow$ T
- Model Architecture
(Input $\rightarrow$ Modality Encoder $\rightarrow$ Input Projector $\rightarrow$ LLM Backbone $\rightarrow$ Output Projector $\rightarrow$ Modality Generator $\rightarrow$ Output)
- Modality Encoder
I: CLIP ViT-L/14 & Grounding-DINO-T w/ Swin-T & SAM-HQ w/ MAE ViT-H & RAM++ w/ Swin-B
- Input Projector
MQ-Former w/ Linear Projection
- LLM Backbone
Vicuna-13B
- Output Projector
None
- Modality Generator
None
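The pipeline above (multiple visual encoders fused by an MQ-Former, then linearly projected into the LLM's embedding space) can be sketched in toy form. This is a minimal illustration, not the paper's implementation: the feature dimensions, token counts, and the single-step cross-attention used for `mq_former` are assumptions for demonstration, and the real encoders (CLIP, Grounding-DINO, SAM-HQ, RAM++) are replaced by random features.

```python
import numpy as np

rng = np.random.default_rng(0)
D_VIS, D_LLM = 64, 128  # toy feature widths (assumed, not from the paper)

def visual_encoders(image):
    # Stand-ins for the four encoders (CLIP ViT-L/14, Grounding-DINO-T,
    # SAM-HQ, RAM++): each yields a different number of visual tokens.
    return [rng.standard_normal((n_tokens, D_VIS)) for n_tokens in (16, 8, 8, 4)]

def mq_former(features, n_queries=4):
    # Toy stand-in for the MQ-Former: learnable queries cross-attend
    # over the concatenated multi-encoder visual tokens.
    tokens = np.concatenate(features, axis=0)            # (N, D_VIS)
    queries = rng.standard_normal((n_queries, D_VIS))
    scores = queries @ tokens.T / np.sqrt(D_VIS)         # scaled dot-product
    scores -= scores.max(axis=1, keepdims=True)          # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)              # softmax over tokens
    return attn @ tokens                                 # (n_queries, D_VIS)

def linear_projection(x):
    # Maps fused visual queries into the LLM backbone's embedding space.
    W = rng.standard_normal((x.shape[1], D_LLM)) / np.sqrt(x.shape[1])
    return x @ W

feats = visual_encoders(image=None)          # image omitted in this toy sketch
vis_tokens = mq_former(feats)
llm_inputs = linear_projection(vis_tokens)   # visual prefix fed to Vicuna-13B
print(llm_inputs.shape)                      # (4, 128)
```

Because the card lists no Output Projector or Modality Generator, the pipeline ends here: the projected tokens are simply prepended to the text embeddings of the LLM backbone, which emits text (I+T → T).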
- Datasets Scale
- Pre-training Stage
Not reported
- Instruction-tuning Stage
Not reported