Update links after renaming the repo from segment-anything-2 to sam2 (#341)

This PR updates repo links after we renamed the repo from `segment-anything-2` to `sam2`. It also changes `NAME` in setup.py to `SAM-2`, which is already the name used in the pip setup, since Python package names don't allow whitespace.
Author: Ronghang Hu
Date: 2024-09-30 20:27:44 -07:00
Committed by: GitHub
Parent: 05d9e57fb3
Commit: 98fcb164bf
9 changed files with 28 additions and 28 deletions


@@ -48,7 +48,7 @@ I got `ImportError: cannot import name '_C' from 'sam2'`
 This is usually because you haven't run the `pip install -e ".[notebooks]"` step above or the installation failed. Please install SAM 2 first, and see the other issues if your installation fails.
-In some systems, you may need to run `python setup.py build_ext --inplace` in the SAM 2 repo root as suggested in https://github.com/facebookresearch/segment-anything-2/issues/77.
+In some systems, you may need to run `python setup.py build_ext --inplace` in the SAM 2 repo root as suggested in https://github.com/facebookresearch/sam2/issues/77.
 </details>
 <details>
@@ -59,7 +59,7 @@ I got `MissingConfigException: Cannot find primary config 'configs/sam2.1/sam2.1
 This is usually because you haven't run the `pip install -e .` step above, so `sam2` isn't in your Python's `sys.path`. Please run this installation step. In case it still fails after the installation step, you may try manually adding the root of this repo to `PYTHONPATH` via
 ```bash
-export SAM2_REPO_ROOT=/path/to/segment-anything-2 # path to this repo
+export SAM2_REPO_ROOT=/path/to/sam2 # path to this repo
 export PYTHONPATH="${SAM2_REPO_ROOT}:${PYTHONPATH}"
 ```
 to manually add `sam2_configs` into your Python's `sys.path`.
@@ -84,7 +84,7 @@ from sam2.modeling import sam2_base
 print(sam2_base.__file__)
 ```
-and check whether the content in the printed local path of `sam2/modeling/sam2_base.py` matches the latest one in https://github.com/facebookresearch/segment-anything-2/blob/main/sam2/modeling/sam2_base.py (e.g. whether your local file has `no_obj_embed_spatial`) to identify if you're still using a previous installation.
+and check whether the content in the printed local path of `sam2/modeling/sam2_base.py` matches the latest one in https://github.com/facebookresearch/sam2/blob/main/sam2/modeling/sam2_base.py (e.g. whether your local file has `no_obj_embed_spatial`) to identify if you're still using a previous installation.
 </details>
@@ -123,7 +123,7 @@ This usually happens because you have multiple versions of dependencies (PyTorch
 In particular, if you have a lower PyTorch version than 2.3.1, it's recommended to upgrade to PyTorch 2.3.1 or higher first. Otherwise, the installation script will try to upgrade to the latest PyTorch using `pip`, which could sometimes lead to duplicated PyTorch installation if you have previously installed another PyTorch version using `conda`.
-We have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/segment-anything-2/issues/22, https://github.com/facebookresearch/segment-anything-2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0.
+We have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/sam2/issues/22, https://github.com/facebookresearch/sam2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0.
 </details>
 <details>
@@ -168,7 +168,7 @@ You may see error log of:
 > unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2022 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
 This is probably because your versions of CUDA and Visual Studio are incompatible (see also https://stackoverflow.com/questions/78515942/cuda-compatibility-with-visual-studio-2022-version-17-10 for a discussion on Stack Overflow).<br>
-You may be able to fix this by adding the `-allow-unsupported-compiler` argument to `nvcc` after L48 in the [setup.py](https://github.com/facebookresearch/segment-anything-2/blob/main/setup.py).<br>
+You may be able to fix this by adding the `-allow-unsupported-compiler` argument to `nvcc` after L48 in the [setup.py](https://github.com/facebookresearch/sam2/blob/main/setup.py).<br>
 After adding the argument, `get_extensions()` will look like this:
 ```python
 def get_extensions():
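Both FAQ entries in this hunk come down to checking where Python actually resolves a package from. A minimal, self-contained sketch of that diagnostic, using the stdlib `json` package as a stand-in for `sam2` and a hypothetical `repo_root` argument that mimics the `PYTHONPATH` workaround:

```python
import importlib.util
import sys


def locate_package(name, repo_root=None):
    """Report the file Python would import `name` from.

    Prepending `repo_root` to sys.path has the same effect as the
    `export PYTHONPATH=...` workaround described in the FAQ.
    """
    if repo_root is not None:
        sys.path.insert(0, repo_root)
    spec = importlib.util.find_spec(name)
    return None if spec is None else spec.origin


# With a stdlib package standing in for `sam2`:
print(locate_package("json"))            # path of the resolved json/__init__.py
print(locate_package("not_installed"))   # None when the package can't be found
```

Comparing the printed path against your checkout (as the FAQ does with `sam2/modeling/sam2_base.py`) shows whether a stale installation is shadowing the working tree.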


@@ -26,9 +26,9 @@
 SAM 2 needs to be installed first before use. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. You can install SAM 2 on a GPU machine using:
 ```bash
-git clone https://github.com/facebookresearch/segment-anything-2.git
-cd segment-anything-2 && pip install -e .
+git clone https://github.com/facebookresearch/sam2.git
+cd sam2 && pip install -e .
 ```
 If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.
@@ -86,9 +86,9 @@ with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
 masks, _, _ = predictor.predict(<input_prompts>)
 ```
-Please refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases.
+Please refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases.
-SAM 2 also supports automatic mask generation on images just like SAM. Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images.
+SAM 2 also supports automatic mask generation on images just like SAM. Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images.
 ### Video prediction
@@ -113,7 +113,7 @@ with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
 ...
 ```
-Please refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.
+Please refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.
 ## Load from 🤗 Hugging Face


@@ -25,7 +25,7 @@ export const RESEARCH_BY_META_AI = 'By Meta FAIR';
 export const DEMO_FRIENDLY_NAME = 'Segment Anything 2 Demo';
 export const VIDEO_WATERMARK_TEXT = `Modified with ${DEMO_FRIENDLY_NAME}`;
 export const PROJECT_GITHUB_URL =
-  'https://github.com/facebookresearch/segment-anything-2';
+  'https://github.com/facebookresearch/sam2';
 export const AIDEMOS_URL = 'https://aidemos.meta.com';
 export const ABOUT_URL = 'https://ai.meta.com/sam2';
 export const EMAIL_ADDRESS = 'segment-anything@meta.com';


@@ -34,7 +34,7 @@
 "id": "4290fb06-a63f-4624-a70c-f7c9aae4b5d5",
 "metadata": {},
 "source": [
-"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/automatic_mask_generator_example.ipynb\">\n",
+"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb\">\n",
 "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
 "</a>"
 ]
@@ -82,10 +82,10 @@
 "    print(\"CUDA is available:\", torch.cuda.is_available())\n",
 "    import sys\n",
 "    !{sys.executable} -m pip install opencv-python matplotlib\n",
-"    !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/segment-anything-2.git'\n",
+"    !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/sam2.git'\n",
 "\n",
 "    !mkdir -p images\n",
-"    !wget -P images https://raw.githubusercontent.com/facebookresearch/segment-anything-2/main/notebooks/images/cars.jpg\n",
+"    !wget -P images https://raw.githubusercontent.com/facebookresearch/sam2/main/notebooks/images/cars.jpg\n",
 "\n",
 "    !mkdir -p ../checkpoints/\n",
 "    !wget -P ../checkpoints/ https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt"
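One aside on the install cells in this hunk: they invoke pip as `!{sys.executable} -m pip ...` rather than a bare `pip`, which guarantees the packages land in the environment of the interpreter the notebook kernel is actually running. The same pattern outside a notebook (a sketch, assuming `pip` is available in the environment):

```python
import subprocess
import sys

# `sys.executable -m pip` targets this exact interpreter's environment,
# not whichever `pip` happens to be first on PATH.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```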


@@ -33,7 +33,7 @@
 "id": "ee822903-7739-4c1b-941a-b292b6e89bcf",
 "metadata": {},
 "source": [
-"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/image_predictor_example.ipynb\">\n",
+"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb\">\n",
 "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
 "</a>"
 ]
@@ -51,7 +51,7 @@
 "id": "07fabfee",
 "metadata": {},
 "source": [
-"If running locally using jupyter, first install `sam2` in your environment using the [installation instructions](https://github.com/facebookresearch/segment-anything-2#installation) in the repository.\n",
+"If running locally using jupyter, first install `sam2` in your environment using the [installation instructions](https://github.com/facebookresearch/sam2#installation) in the repository.\n",
 "\n",
 "If running from Google Colab, set `using_colab=True` below and run the cell. In Colab, be sure to select 'GPU' under 'Edit'->'Notebook Settings'->'Hardware accelerator'. Note that it's recommended to use **A100 or L4 GPUs when running in Colab** (T4 GPUs might also work, but could be slow and might run out of memory in some cases)."
 ]
@@ -81,11 +81,11 @@
 "    print(\"CUDA is available:\", torch.cuda.is_available())\n",
 "    import sys\n",
 "    !{sys.executable} -m pip install opencv-python matplotlib\n",
-"    !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/segment-anything-2.git'\n",
+"    !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/sam2.git'\n",
 "\n",
 "    !mkdir -p images\n",
-"    !wget -P images https://raw.githubusercontent.com/facebookresearch/segment-anything-2/main/notebooks/images/truck.jpg\n",
-"    !wget -P images https://raw.githubusercontent.com/facebookresearch/segment-anything-2/main/notebooks/images/groceries.jpg\n",
+"    !wget -P images https://raw.githubusercontent.com/facebookresearch/sam2/main/notebooks/images/truck.jpg\n",
+"    !wget -P images https://raw.githubusercontent.com/facebookresearch/sam2/main/notebooks/images/groceries.jpg\n",
 "\n",
 "    !mkdir -p ../checkpoints/\n",
 "    !wget -P ../checkpoints/ https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt"


@@ -37,7 +37,7 @@
 "id": "a887b90f-6576-4ef8-964e-76d3a156ccb6",
 "metadata": {},
 "source": [
-"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/video_predictor_example.ipynb\">\n",
+"<a target=\"_blank\" href=\"https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb\">\n",
 "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
 "</a>"
 ]
@@ -55,7 +55,7 @@
 "id": "8491a127-4c01-48f5-9dc5-f148a9417fdf",
 "metadata": {},
 "source": [
-"If running locally using jupyter, first install `sam2` in your environment using the [installation instructions](https://github.com/facebookresearch/segment-anything-2#installation) in the repository.\n",
+"If running locally using jupyter, first install `sam2` in your environment using the [installation instructions](https://github.com/facebookresearch/sam2#installation) in the repository.\n",
 "\n",
 "If running from Google Colab, set `using_colab=True` below and run the cell. In Colab, be sure to select 'GPU' under 'Edit'->'Notebook Settings'->'Hardware accelerator'. Note that it's recommended to use **A100 or L4 GPUs when running in Colab** (T4 GPUs might also work, but could be slow and might run out of memory in some cases)."
 ]
@@ -85,7 +85,7 @@
 "    print(\"CUDA is available:\", torch.cuda.is_available())\n",
 "    import sys\n",
 "    !{sys.executable} -m pip install opencv-python matplotlib\n",
-"    !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/segment-anything-2.git'\n",
+"    !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/sam2.git'\n",
 "\n",
 "    !mkdir -p videos\n",
 "    !wget -P videos https://dl.fbaipublicfiles.com/segment_anything_2/assets/bedroom.zip\n",
@@ -1047,7 +1047,7 @@
 "id": "e023f91f-0cc5-4980-ae8e-a13c5749112b",
 "metadata": {},
 "source": [
-"Note that in addition to clicks or boxes, SAM 2 also supports directly using a **mask prompt** as input via the `add_new_mask` method in the `SAM2VideoPredictor` class. This can be helpful in e.g. semi-supervised VOS evaluations (see [tools/vos_inference.py](https://github.com/facebookresearch/segment-anything-2/blob/main/tools/vos_inference.py) for an example)."
+"Note that in addition to clicks or boxes, SAM 2 also supports directly using a **mask prompt** as input via the `add_new_mask` method in the `SAM2VideoPredictor` class. This can be helpful in e.g. semi-supervised VOS evaluations (see [tools/vos_inference.py](https://github.com/facebookresearch/sam2/blob/main/tools/vos_inference.py) for an example)."
 ]
 },
 {


@@ -329,7 +329,7 @@ def fill_holes_in_mask_scores(mask, max_area):
 f"{e}\n\nSkipping the post-processing step due to the error above. You can "
 "still use SAM 2 and it's OK to ignore the error above, although some post-processing "
 "functionality may be limited (which doesn't affect the results in most cases; see "
-"https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).",
+"https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).",
 category=UserWarning,
 stacklevel=2,
 )


@@ -108,7 +108,7 @@ class SAM2Transforms(nn.Module):
 f"{e}\n\nSkipping the post-processing step due to the error above. You can "
 "still use SAM 2 and it's OK to ignore the error above, although some post-processing "
 "functionality may be limited (which doesn't affect the results in most cases; see "
-"https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).",
+"https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).",
 category=UserWarning,
 stacklevel=2,
 )


@@ -8,10 +8,10 @@ import os
 from setuptools import find_packages, setup
 # Package metadata
-NAME = "SAM 2"
+NAME = "SAM-2"
 VERSION = "1.0"
 DESCRIPTION = "SAM 2: Segment Anything in Images and Videos"
-URL = "https://github.com/facebookresearch/segment-anything-2"
+URL = "https://github.com/facebookresearch/sam2"
 AUTHOR = "Meta AI"
 AUTHOR_EMAIL = "segment-anything@meta.com"
 LICENSE = "Apache 2.0"
@@ -79,7 +79,7 @@ CUDA_ERROR_MSG = (
 "Failed to build the SAM 2 CUDA extension due to the error above. "
 "You can still use SAM 2 and it's OK to ignore the error above, although some "
 "post-processing functionality may be limited (which doesn't affect the results in most cases; "
-"(see https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).\n"
+"(see https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).\n"
 )
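One note on the `NAME = "SAM-2"` change in this hunk: pip package names can't contain whitespace, and package indexes additionally compare names under PEP 503 normalization, under which `SAM-2` and `SAM_2` refer to the same project. A minimal sketch of that normalization rule:

```python
import re


def normalize_name(name):
    """PEP 503 name normalization: collapse runs of '-', '_' and '.'
    into a single '-', then lowercase."""
    return re.sub(r"[-_.]+", "-", name).lower()


print(normalize_name("SAM-2"))               # sam-2
print(normalize_name("SAM_2"))               # sam-2
print(normalize_name("Segment.Anything_2"))  # segment-anything-2
```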