diff --git a/README.md b/README.md
index 8f282b9..d11a387 100644
--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ You can find the results of Large Multimodal Models in **[OCRBench Leaderboard](
 # Evaluation
-The test code for evaluating models in the paper can be found in [scripts](./scripts). If you want to evaluate other models, please edit the "TODO" things in [example](./example.py).
+The test code for evaluating models in the paper can be found in [scripts](./scripts). Before running the evaluation, configure the model weights and environment by following the official code link provided in each script. If you want to evaluate other models, please edit the "TODO" items in [example](./example.py).
 Example evaluation scripts:
 ```python