RLHF-V’s IT
- Paper: RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback
- GitHub Link
- Publisher:
arXiv
- Author Affiliation:
Tsinghua University
- Type
  - SFT
  - RLHF
- Multi-turn
  - ✔
  - ✖
- Input Modalities $\rightarrow$ Output Modalities
  (I: Image, V: Video, A: Audio, 3D: Point Cloud, T: Text, B: Bounding box, Tab: Table, Web: Web page)
  - I+T $\rightarrow$ T
- Source
Collected human preference
- Method
Manual annotation
- I/V/A Scale
- I
Not reported
- V
Not reported
- A
Not reported
- Dialog Turn
Not reported
- Instance Scale
1.4K
This post is licensed under CC BY 4.0 by the author.