X-InstructBLIP

  • Paper: X-InstructBLIP: A Framework for aligning X-Modal instruction-aware representations to LLMs and Emergent Cross-modal Reasoning
  • GitHub Link
  • Publisher: arXiv
  • Author Affiliation: University of Pennsylvania
  • Functional Division
    • Understanding
    • Generation
  • Design Division
    • Tool-using
    • End-to-end
  • Input Modalities $\rightarrow$ Output Modalities
    (I: Image, V: Video, A: Audio, 3D: Point Cloud, T: Text, ID: Document understanding, IB: Output bounding box, IM: Output segmentation mask, IR: Output retrieved images)
    • I+A+V+3D+T $\rightarrow$ T
  • Model Architecture
    (Input $\rightarrow$ Modality Encoder $\rightarrow$ Input Projector $\rightarrow$ LLM Backbone $\rightarrow$ Output Projector $\rightarrow$ Modality Generator $\rightarrow$ Output; a minimal code sketch of this flow follows the list)
    • Modality Encoder
      • I/V: EVA-CLIP ViT-G/14
      • A: BEATs
      • 3D: ULIP-2
    • Input Projector
      • Q-Former w/ Linear Projector
    • LLM Backbone
      • Vicuna-v1.1-7B/13B
    • Output Projector
      • None
    • Modality Generator
      • None
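
The architecture above can be pictured as frozen per-modality encoders whose features are passed through per-modality Q-Formers with a linear projection into the frozen LLM's embedding space, where they are concatenated with the embedded text instruction. The snippet below is a minimal PyTorch sketch of that data flow only, not the authors' implementation: all class names (`SimpleQFormer`), dimensions, and the use of plain random tensors in place of the frozen EVA-CLIP/BEATs/ULIP-2 encoders and the Vicuna LLM are assumptions made for illustration, and the instruction-aware conditioning of the Q-Former is omitted.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not the paper's exact values).
ENC_DIM, QFORMER_DIM, LLM_DIM, NUM_QUERY_TOKENS = 1024, 768, 4096, 32


class SimpleQFormer(nn.Module):
    """Minimal Q-Former-like input projector: learnable query tokens
    cross-attend to frozen encoder features, then a linear layer maps
    the result into the LLM token-embedding space."""

    def __init__(self):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(NUM_QUERY_TOKENS, QFORMER_DIM))
        self.enc_proj = nn.Linear(ENC_DIM, QFORMER_DIM)   # match encoder width
        self.cross_attn = nn.MultiheadAttention(QFORMER_DIM, num_heads=8,
                                                batch_first=True)
        self.to_llm = nn.Linear(QFORMER_DIM, LLM_DIM)      # linear projector

    def forward(self, enc_feats: torch.Tensor) -> torch.Tensor:
        # enc_feats: (batch, num_tokens, ENC_DIM) from a frozen modality encoder
        kv = self.enc_proj(enc_feats)
        q = self.queries.unsqueeze(0).expand(enc_feats.size(0), -1, -1)
        out, _ = self.cross_attn(q, kv, kv)
        return self.to_llm(out)  # (batch, NUM_QUERY_TOKENS, LLM_DIM)


# One independent projector per modality (video frames would reuse the image path).
projectors = nn.ModuleDict({m: SimpleQFormer() for m in ["image", "audio", "pc"]})

# Frozen-encoder outputs, faked here with random tensors of plausible shapes.
encoder_outputs = {
    "image": torch.randn(1, 257, ENC_DIM),  # stand-in for EVA-CLIP patch tokens
    "audio": torch.randn(1, 196, ENC_DIM),  # stand-in for BEATs frame features
    "pc":    torch.randn(1, 512, ENC_DIM),  # stand-in for ULIP-2 point features
}

# Project every modality into LLM token space and concatenate with (fake)
# embedded instruction text; the frozen LLM would consume this sequence
# and generate the text answer.
modality_tokens = [projectors[m](feats) for m, feats in encoder_outputs.items()]
text_embeds = torch.randn(1, 16, LLM_DIM)  # placeholder for embedded instruction
llm_input = torch.cat(modality_tokens + [text_embeds], dim=1)
print(llm_input.shape)  # torch.Size([1, 112, 4096]) = 3 * 32 query tokens + 16 text tokens
```

Because the encoders and the LLM stay frozen, only the per-modality projectors carry trainable parameters in this sketch, which mirrors why aligning an additional modality in this kind of framework is comparatively cheap.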
  • Datasets Scale
    • Pre-training Stage
      • Not reported
    • Instruction-tuning Stage
      • Not reported