
T2M (Text $\rightarrow$ Multimodal)

  • Paper: NExT-GPT: Any-to-Any Multimodal LLM
  • GitHub: https://github.com/NExT-GPT/NExT-GPT
  • Publisher: ICLR 2024
  • Author Affiliation: National University of Singapore
  • Type
    • SFT
    • RLHF
  • Multi-turn
  • Input Modalities $\rightarrow$ Output Modalities
    (I: Image, V: Video, A: Audio, 3D: Point Cloud, T: Text, B: Bounding box, Tab: Table, Web: Web page)
    • T $\rightarrow$ I/V/A+T (see the illustrative sketch after this list)
  • Source
    • WebVid, CC3M, AudioCaps
  • Method
    • Auto.
  • I/V/A Scale
    • I
      • 4.9K
    • V
      • 4.9K
    • A
      • 4.9K
  • Dialog Turn
    • 1
  • Instance Scale
    • 14.7K
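The three per-modality subsets (4.9K each for image, video, and audio targets) account for the 14.7K total instances. As a minimal sketch of the T $\rightarrow$ I/V/A+T format, assuming each instance wraps a caption from the source corpora (WebVid, CC3M, AudioCaps) into a text instruction and a multimodal response, a single record might look like the hypothetical structure below. The class and field names are illustrative assumptions, not the released schema.

```python
# Hypothetical sketch of one T -> I/V/A+T instruction instance.
# Field names and structure are assumptions for illustration only,
# not the actual schema of the T2M data released with NExT-GPT.
from dataclasses import dataclass

@dataclass
class T2MInstance:
    instruction: str       # text-only input (T)
    response_text: str     # textual part of the output (+T)
    target_modality: str   # "image", "video", or "audio" (I/V/A)
    target_caption: str    # caption grounding the generated I/V/A output

# One target modality per instance; ~4.9K of each gives the ~14.7K total.
examples = [
    T2MInstance(
        instruction="Show me a photo of a golden retriever playing in snow.",
        response_text="Here is an image of a golden retriever enjoying the snow.",
        target_modality="image",
        target_caption="a golden retriever playing in the snow",
    ),
    T2MInstance(
        instruction="Can you play the sound of rain on a tin roof?",
        response_text="Sure, here is the sound of rain falling on a tin roof.",
        target_modality="audio",
        target_caption="rain falling on a tin roof",
    ),
]
```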