Update README.md
@@ -21,10 +21,6 @@
| OCRBench Images | [OCRBench Images](https://drive.google.com/file/d/1a3VRJx3V3SdOmPr7499Ky0Ug8AwqGUHO/view?usp=drive_link) | This file only contains the images used in OCRBench. |
| Test Results | [Test Results](https://drive.google.com/drive/folders/15XlHCuNTavI1Ihqm4G7u3J34BHpkaqyE?usp=drive_link) | This file contains the result files for the test models. |
# Related Dataset

| Data | Link | Description |
| --- | --- | --- |
| EST-VQA Dataset | [Link](https://github.com/xinke-wang/EST-VQA) | On the General Value of Evidence, and Bilingual Scene-Text Visual Question Answering. |
# OCRBench
@@ -47,6 +43,14 @@
```
python ./scripts/monkey.py --image_folder ./OCRBench_Images --OCRBench_file ./OC
```
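The hunk above runs a model over the OCRBench images and writes a result file for scoring. As a rough illustration of how such results might be scored (this is a hedged sketch, not the repository's actual evaluation code; the record fields `answers` and `predict` are assumptions about the result-file schema), a simple contains-style match could look like:

```python
# Hypothetical sketch of OCRBench-style scoring.
# Assumption: each result record holds ground-truth strings under "answers"
# and the model output under "predict" -- these field names are illustrative,
# not the repository's real schema.

def score(records):
    """Count a record correct if any ground-truth answer appears
    (case-insensitively) inside the model's prediction."""
    correct = 0
    for rec in records:
        pred = rec["predict"].strip().lower()
        if any(gt.strip().lower() in pred for gt in rec["answers"]):
            correct += 1
    return correct / len(records) if records else 0.0

sample = [
    {"answers": ["OCRBench"], "predict": "The text reads OCRBench."},
    {"answers": ["hello"], "predict": "goodbye"},
]
print(score(sample))  # 0.5
```

A substring match is forgiving toward verbose model outputs; a stricter exact-match comparison would only be a one-line change in the condition.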
# Other Related Multilingual Datasets

| Data | Link | Description |
| --- | --- | --- |
| EST-VQA Dataset (CVPR 2020, English and Chinese) | [Link](https://github.com/xinke-wang/EST-VQA) | On the General Value of Evidence, and Bilingual Scene-Text Visual Question Answering. |
| Swahili Dataset (ICDAR 2024) | [Link](https://arxiv.org/abs/2405.11437) | The First Swahili Language Scene Text Detection and Recognition Dataset. |
| Urdu Dataset (ICDAR 2024) | [Link](https://arxiv.org/abs/2405.12533) | Dataset and Benchmark for Urdu Natural Scenes Text Detection, Recognition and Visual Question Answering. |
| MTVQA (9 languages) | [Link](https://arxiv.org/abs/2405.11985) | MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. |
# Citation

If you wish to refer to the baseline results published here, please use the following BibTeX entries:

```BibTeX