🥇 UniGenBench Leaderboard (Chinese Long)
📚 UniGenBench is a unified benchmark for T2I generation that integrates diverse prompt themes with a comprehensive suite of fine-grained evaluation criteria.
🔧 You can use the official GitHub repo to evaluate your model on UniGenBench.
😊 We release all images generated by the T2I models evaluated on UniGenBench in UniGenBench-Eval-Images (see the download sketch below). Feel free to use any evaluation model that suits you to assess and compare the performance of your models.
📝 To add your own model to the leaderboard, please send an email to Yibin Wang; we will then help with the evaluation and update the leaderboard.
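If you want to fetch the released evaluation images programmatically, the snippet below is a minimal sketch using `huggingface_hub`, assuming UniGenBench-Eval-Images is hosted as a dataset on the Hugging Face Hub; the `repo_id` shown is a placeholder and should be replaced with the actual dataset id linked above.

```python
# Minimal sketch: download the released UniGenBench evaluation images.
# The repo_id below is a placeholder assumption -- replace it with the
# actual UniGenBench-Eval-Images dataset id from the leaderboard page.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="<org>/UniGenBench-Eval-Images",  # hypothetical dataset id
    repo_type="dataset",
    local_dir="./unigenbench_eval_images",
)
print(f"Evaluation images downloaded to: {local_path}")
```

Once downloaded, you can score these images with whichever evaluation model you prefer and compare the results against your own model's outputs.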
(Leaderboard table omitted here: each row lists a model's release date, an open-source flag, its overall score, and its per-dimension sub-scores across UniGenBench's fine-grained evaluation criteria.)