From 705d7fac01c9e5d34c90cc9638e8ae4dd9116cdc Mon Sep 17 00:00:00 2001
From: echo840 <87795401+echo840@users.noreply.github.com>
Date: Fri, 26 Apr 2024 10:39:14 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 5f452aa..1cb053c 100644
--- a/README.md
+++ b/README.md
@@ -30,6 +30,8 @@ You can find the results of Large Multimodal Models in **[OCRBench Leaderboard](
 # Evaluation
 The test code for evaluating models in the paper can be found in [scripts](./scripts). Before conducting the evaluation, you need to configure the model weights and environment based on the official code link provided in the scripts. If you want to evaluate other models, please edit the "TODO" things in [example](./example.py).
+You can also use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) and [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for evaluation.
+
 Example evaluation scripts:
 ```python