# Evaluation
The test code for evaluating the models in the paper can be found in [scripts](./scripts). Before running an evaluation, configure the model weights and environment following the official code link provided in each script. To evaluate other models, edit the sections marked "TODO" in [example](./example.py).
You can also use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) and [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for evaluation.
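The scoring behind these evaluations is a simple case-insensitive substring match: a prediction counts as correct if any ground-truth answer appears in the model output. A minimal sketch of that rule (the function name and data layout below are illustrative, not the repo's actual API; see [example](./example.py) for the real evaluation code):

```python
# Sketch of OCRBench-style scoring: a prediction is correct if any
# ground-truth answer appears, case-insensitively, in the model output.
# Names and data layout here are illustrative only.

def is_correct(prediction, answers):
    pred = prediction.lower().strip()
    return any(str(ans).lower().strip() in pred for ans in answers)

samples = [
    {"answers": ["OCRBench"], "prediction": "The text reads: ocrbench"},
    {"answers": ["42"], "prediction": "I cannot read the image."},
]
score = sum(is_correct(s["prediction"], s["answers"]) for s in samples)
print(f"accuracy: {score / len(samples):.2f}")  # 1 of 2 samples match
```

Substring matching keeps the metric tolerant of verbose model outputs ("The text reads: ..."), at the cost of occasionally crediting answers embedded in longer, unrelated strings.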
Example evaluation scripts:
```python