🥇 UniGenBench Leaderboard (Chinese Long)
📚 UniGenBench is a unified benchmark for text-to-image (T2I) generation that combines diverse prompt themes with a comprehensive suite of fine-grained evaluation criteria.
🔧 You can use the official GitHub repo to evaluate your model on UniGenBench.
😊 We release all images generated by the T2I models evaluated on UniGenBench in UniGenBench-Eval-Images. Feel free to use any evaluation model that is convenient and suitable for you to assess and compare the performance of your models.
📝 To add your own model to the leaderboard, please send an email to Yibin Wang; we will then help with the evaluation and update the leaderboard.
[Leaderboard table: one row of per-dimension scores (dated 2025-12) whose model name and column headers are missing from this excerpt; see the full leaderboard for the complete table.]