BLIP-2
- Paper: BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
- GitHub Link: https://github.com/salesforce/LAVIS/tree/main/projects/blip2
- Publisher: ICML 2023
- Author Affiliation: Salesforce Research
- Functional Division
- Understanding
- Generation
- Design Division
- End-to-end
- Input Modalities: I+T
- Output Modalities: T
(I: Image, V: Video, A: Audio, 3D: Point Cloud, T: Text, ID: Document understanding, IB: Output bounding box, IM: Output segmentation mask, IR: Output retrieved images)
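To make the I+T → T interface concrete, here is a minimal usage sketch via the Hugging Face transformers integration of BLIP-2. The checkpoint name, local image path, and prompt are illustrative assumptions, not prescribed by the paper.

```python
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Assumed checkpoint: the OPT-2.7B variant published by Salesforce on the Hub.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("example.jpg")  # hypothetical local image path
prompt = "Question: what is shown in the image? Answer:"

# Image + text in, text out: the processor packs pixel values and token ids.
inputs = processor(images=image, text=prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```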
- Model Architecture
(Input → Modality Encoder → Input Projector → LLM Backbone → Output Projector → Modality Generator → Output)
- Modality Encoder: I: CLIP/Eva-CLIP ViT@224
- Input Projector: Q-Former w/ Linear Projector
- LLM Backbone: Flan-T5/OPT
- Output Projector: None
- Modality Generator: None
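The sketch below illustrates the pipeline above: a frozen image encoder feeds a Q-Former whose learnable queries attend to the visual features, and a linear projector maps the query outputs into the LLM embedding space. It is a minimal stand-in, not the paper's implementation: the dimensions are assumptions (2560 matches OPT-2.7B's hidden size), and a single cross-attention layer replaces the real Q-Former's interleaved BERT-style self- and cross-attention blocks.

```python
import torch
import torch.nn as nn

class QFormerSketch(nn.Module):
    def __init__(self, num_queries=32, d_qformer=768, d_vision=1024, d_llm=2560):
        super().__init__()
        # Learnable query tokens that extract visual features via cross-attention.
        self.queries = nn.Parameter(torch.randn(1, num_queries, d_qformer))
        # Stand-in for the Q-Former's cross-attention into frozen ViT features.
        self.cross_attn = nn.MultiheadAttention(
            d_qformer, num_heads=8, kdim=d_vision, vdim=d_vision, batch_first=True
        )
        # Linear projector mapping query outputs into the LLM embedding space.
        self.proj = nn.Linear(d_qformer, d_llm)

    def forward(self, image_feats):
        # image_feats: (B, num_patches, d_vision) from a frozen image encoder.
        q = self.queries.expand(image_feats.size(0), -1, -1)
        q, _ = self.cross_attn(q, image_feats, image_feats)
        # (B, num_queries, d_llm): soft visual prompts prepended to text tokens.
        return self.proj(q)

# Stand-in for frozen ViT output: batch of 2 images, 257 patch tokens, dim 1024.
vision_out = torch.randn(2, 257, 1024)
visual_prompts = QFormerSketch()(vision_out)
print(visual_prompts.shape)  # torch.Size([2, 32, 2560])
```

The design point this captures is that only the queries, cross-attention, and projector would be trained; the image encoder and LLM stay frozen, which is what keeps BLIP-2's pre-training cheap.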
- Datasets Scale
- Pre-training Stage: 129M
- Instruction-tuning Stage: Not reported