[New Feature] Support SAM 2.1 (#59)

* Support SAM 2.1

* Refine the config and checkpoint paths

* Update the README
Author: Ren Tianhe
Date: 2024-10-10 14:55:50 +08:00 (committed by GitHub)
parent e899ad99e8
commit 82e503604f
340 changed files with 39100 additions and 608 deletions


````diff
@@ -9,8 +9,8 @@ The `vos_inference.py` script can be used to generate predictions for semi-super
 After installing SAM 2 and its dependencies, it can be used as follows ([DAVIS 2017 dataset](https://davischallenge.org/davis2017/code.html) as an example). This script saves the prediction PNG files to the `--output_mask_dir`.
 ```bash
 python ./tools/vos_inference.py \
-  --sam2_cfg sam2_hiera_b+.yaml \
-  --sam2_checkpoint ./checkpoints/sam2_hiera_base_plus.pt \
+  --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
+  --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
   --base_video_dir /path-to-davis-2017/JPEGImages/480p \
   --input_mask_dir /path-to-davis-2017/Annotations/480p \
   --video_list_file /path-to-davis-2017/ImageSets/2017/val.txt \
````
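The new `--sam2_checkpoint` path expects the SAM 2.1 weights under `./checkpoints/`. As a minimal sketch of fetching them, assuming the download URL follows the SAM 2.1 release layout (verify it against the upstream repo, or use the repo's `checkpoints/download_ckpts.sh` script instead):

```bash
# Sketch: download the SAM 2.1 base-plus checkpoint referenced by --sam2_checkpoint.
# The URL below is an assumption based on the SAM 2.1 release; verify it upstream.
mkdir -p ./checkpoints
wget -O ./checkpoints/sam2.1_hiera_base_plus.pt \
  https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt
```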
````diff
@@ -21,8 +21,8 @@ python ./tools/vos_inference.py \
 To evaluate on the SA-V dataset with per-object PNG files for the object masks, we need to **add the `--per_obj_png_file` flag** as follows (using SA-V val as an example). This script will also save per-object PNG files for the output masks under the `--per_obj_png_file` flag.
 ```bash
 python ./tools/vos_inference.py \
-  --sam2_cfg sam2_hiera_b+.yaml \
-  --sam2_checkpoint ./checkpoints/sam2_hiera_base_plus.pt \
+  --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
+  --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
   --base_video_dir /path-to-sav-val/JPEGImages_24fps \
   --input_mask_dir /path-to-sav-val/Annotations_6fps \
   --video_list_file /path-to-sav-val/sav_val.txt \
````
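After a per-object run, a quick sanity check of the outputs can help; the sketch below assumes the common per-object convention of one subdirectory per object ID (`<output_mask_dir>/<video>/<object_id>/<frame>.png`) and uses a hypothetical `./outputs/sav_val_pred_pngs` output directory:

```bash
# Sketch: inspect per-object PNG outputs (hypothetical output dir, assumed layout).
OUT=./outputs/sav_val_pred_pngs
find "$OUT" -mindepth 2 -maxdepth 2 -type d | head   # per-object subdirectories
find "$OUT" -name '*.png' | wc -l                    # total mask files written
```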
````diff
@@ -33,4 +33,4 @@ python ./tools/vos_inference.py \
 Then, we can use the evaluation tools or servers for each dataset to get the performance of the prediction PNG files above.
-**Note: a limitation of the `vos_inference.py` script above is that currently it only supports VOS datasets where all objects to track already appear on frame 0 in each video** (and therefore it doesn't apply to some datasets such as [LVOS](https://lingyihongfd.github.io/lvos.github.io/) that have objects only appearing in the middle of a video).
+Note: by default, the `vos_inference.py` script above assumes that all objects to track already appear on frame 0 in each video (as is the case in DAVIS, MOSE or SA-V). **For VOS datasets that don't have all objects to track appearing in the first frame (such as LVOS or YouTube-VOS), please add the `--track_object_appearing_later_in_video` flag when using `vos_inference.py`**.
````
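For illustration, a sketch of such a run using the `--track_object_appearing_later_in_video` flag from the note above; all dataset paths and the output directory are placeholders, and the LVOS directory names are assumptions to check against the actual dataset layout:

```bash
# Sketch: run on a dataset where objects can first appear mid-video (e.g. LVOS).
# Paths below are placeholders; adjust them to the actual LVOS layout.
python ./tools/vos_inference.py \
  --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
  --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
  --base_video_dir /path-to-lvos/JPEGImages \
  --input_mask_dir /path-to-lvos/Annotations \
  --video_list_file /path-to-lvos/val.txt \
  --output_mask_dir ./outputs/lvos_val_pred_pngs \
  --track_object_appearing_later_in_video
```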