improving warning message and adding further tips for installation (#204)
INSTALL.md — 22 changes
@@ -5,6 +5,7 @@
 - Linux with Python ≥ 3.10, PyTorch ≥ 2.3.1 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. Install them together at https://pytorch.org to ensure this.
 * Note older versions of Python or PyTorch may also work. However, the versions above are strongly recommended to provide all features such as `torch.compile`.
 - [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. This should typically be CUDA 12.1 if you follow the default installation command.
+- If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.
 
 Then, install SAM 2 from the root of this repository via
 ```bash
@@ -22,11 +23,13 @@ This would also skip the post-processing step at runtime (removing small holes a
 
 By default, we allow the installation to proceed even if the SAM 2 CUDA extension fails to build. (In this case, the build errors are hidden unless using `-v` for verbose output in `pip install`.)
 
-If you see a message like `Skipping the post-processing step due to the error above` at runtime or `Failed to build the SAM 2 CUDA extension due to the error above` during installation, it indicates that the SAM 2 CUDA extension failed to build in your environment. In this case, you can still use SAM 2 for both image and video applications, but the post-processing step (removing small holes and sprinkles in the output masks) will be skipped. This shouldn't affect the results in most cases.
+If you see a message like `Skipping the post-processing step due to the error above` at runtime or `Failed to build the SAM 2 CUDA extension due to the error above` during installation, it indicates that the SAM 2 CUDA extension failed to build in your environment. In this case, **you can still use SAM 2 for both image and video applications**. The post-processing step (removing small holes and sprinkles in the output masks) will be skipped, but this shouldn't affect the results in most cases.
 
 If you would like to enable this post-processing step, you can reinstall SAM 2 on a GPU machine with environment variable `SAM2_BUILD_ALLOW_ERRORS=0` to force building the CUDA extension (and raise errors if it fails to build), as follows
 
 ```bash
-pip uninstall -y SAM-2; rm -f sam2/*.so; SAM2_BUILD_ALLOW_ERRORS=0 pip install -v -e ".[demo]"
+pip uninstall -y SAM-2 && \
+rm -f ./sam2/*.so && \
+SAM2_BUILD_ALLOW_ERRORS=0 pip install -v -e ".[demo]"
 ```
 
 Note that PyTorch needs to be installed first before building the SAM 2 CUDA extension. It's also necessary to install [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. (This should typically be CUDA 12.1 if you follow the default installation command.) After installing the CUDA toolkits, you can check its version via `nvcc --version`.
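As a quick sanity check for the version match mentioned above, the CUDA release reported by `nvcc --version` can be compared against `torch.version.cuda`. A minimal sketch of parsing the `nvcc` output (the sample string below is an abridged, hypothetical example of what `nvcc --version` prints):

```python
import re

def parse_nvcc_release(nvcc_output: str):
    """Extract the CUDA release (e.g. '12.1') from `nvcc --version` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    return match.group(1) if match else None

# Abridged, hypothetical sample of `nvcc --version` output:
sample = "Cuda compilation tools, release 12.1, V12.1.105"
print(parse_nvcc_release(sample))  # 12.1
```

The returned string can then be compared against `torch.version.cuda` (which is `None` on CPU-only PyTorch builds); a mismatch is a common reason the extension fails to compile.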
@@ -101,6 +104,21 @@ In particular, if you have a lower PyTorch version than 2.3.1, it's recommended
 We have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/segment-anything-2/issues/22, https://github.com/facebookresearch/segment-anything-2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0.
 </details>
 
+<details>
+<summary>
+I got `CUDA error: no kernel image is available for execution on the device`
+</summary>
+<br/>
+
+A possible cause could be that the CUDA kernel is somehow not compiled towards your GPU's CUDA [capability](https://developer.nvidia.com/cuda-gpus). This could happen if the installation is done in an environment different from the runtime (e.g. in a slurm system).
+
+You can try pulling the latest code from the SAM 2 repo and running the following
+```
+export TORCH_CUDA_ARCH_LIST="9.0 8.0 8.6 8.9 7.0 7.2 7.5 6.0"
+```
+to manually specify the CUDA capability in the compilation target that matches your GPU.
+</details>
+
 <details>
 <summary>
 I got `RuntimeError: No available kernel. Aborting execution.` (or similar errors)
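As background to the `TORCH_CUDA_ARCH_LIST` tip above: PyTorch's extension builder translates each capability in that list into an `nvcc` `-gencode` flag. The sketch below is a simplified illustration of that mapping, not PyTorch's actual parser (which also accepts `+PTX` suffixes and named architectures):

```python
def arch_list_to_gencode(arch_list: str):
    """Map a TORCH_CUDA_ARCH_LIST-style string (space-separated capabilities,
    e.g. "8.0 8.6") to nvcc -gencode flags. Simplified illustration only."""
    flags = []
    for cap in arch_list.split():
        num = cap.replace(".", "")  # "8.6" -> "86"
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
    return flags

print(arch_list_to_gencode("8.0 8.6"))
```

Your own GPU's capability can be queried via `torch.cuda.get_device_capability()`, e.g. `(8, 6)` corresponds to the `8.6` entry.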
README.md — 11 changes
@@ -19,8 +19,9 @@ SAM 2 needs to be installed first before use. The code requires `python>=3.10`,
 ```bash
 git clone https://github.com/facebookresearch/segment-anything-2.git
-cd segment-anything-2; pip install -e .
+cd segment-anything-2 && pip install -e .
 ```
+
+If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.
 
 To use the SAM 2 predictor and run the example notebooks, `jupyter` and `matplotlib` are required and can be installed by:
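The install commands above chain with `&&` rather than `;`, which matters for error handling: `;` runs the next command unconditionally, while `&&` stops the chain as soon as a command fails. A quick illustration:

```shell
# With ';' the second command runs even though the first one fails:
false ; echo "ran anyway"              # prints: ran anyway

# With '&&' the chain stops at the first failure:
false && echo "never printed" || true  # prints nothing

# So `cd segment-anything-2 && pip install -e .` will not run pip
# in the wrong directory if the cd fails.
```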
@@ -29,8 +30,9 @@ pip install -e ".[demo]"
 ```
 
 Note:
-1. It's recommended to create a new Python environment for this installation and install PyTorch 2.3.1 (or higher) via `pip` following https://pytorch.org/. If you have a PyTorch version lower than 2.3.1 in your current environment, the installation command above will try to upgrade it to the latest PyTorch version using `pip`.
+1. It's recommended to create a new Python environment via [Anaconda](https://www.anaconda.com/) for this installation and install PyTorch 2.3.1 (or higher) via `pip` following https://pytorch.org/. If you have a PyTorch version lower than 2.3.1 in your current environment, the installation command above will try to upgrade it to the latest PyTorch version using `pip`.
 2. The step above requires compiling a custom CUDA kernel with the `nvcc` compiler. If it isn't already available on your machine, please install the [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) with a version that matches your PyTorch CUDA version.
+3. If you see a message like `Failed to build the SAM 2 CUDA extension` during installation, you can ignore it and still use SAM 2 (some post-processing functionality may be limited, but it doesn't affect the results in most cases).
 
 Please see [`INSTALL.md`](./INSTALL.md) for FAQs on potential issues and solutions.
@@ -41,8 +43,9 @@ Please see [`INSTALL.md`](./INSTALL.md) for FAQs on potential issues and solutio
 First, we need to download a model checkpoint. All the model checkpoints can be downloaded by running:
 
 ```bash
-cd checkpoints
-./download_ckpts.sh
+cd checkpoints && \
+./download_ckpts.sh && \
+cd ..
 ```
 
 or individually from:
@@ -189,7 +189,15 @@ def load_video_frames(
     if isinstance(video_path, str) and os.path.isdir(video_path):
         jpg_folder = video_path
     else:
-        raise NotImplementedError("Only JPEG frames are supported at this moment")
+        raise NotImplementedError(
+            "Only JPEG frames are supported at this moment. For video files, you may use "
+            "ffmpeg (https://ffmpeg.org/) to extract frames into a folder of JPEG files, such as \n"
+            "```\n"
+            "ffmpeg -i <your_video>.mp4 -q:v 2 -start_number 0 <output_dir>/'%05d.jpg'\n"
+            "```\n"
+            "where `-q:v` generates high-quality JPEG frames and `-start_number 0` asks "
+            "ffmpeg to start the JPEG file from 00000.jpg."
+        )
 
     frame_names = [
         p
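The `-start_number 0` / `%05d.jpg` convention suggested in the new error message yields zero-padded names such as `00000.jpg`. A hedged sketch of ordering such frame files numerically (an illustration of the idea, not the loader's exact code):

```python
import os

frame_files = ["00010.jpg", "00002.jpg", "00000.jpg", "00001.jpg"]

# Sort by the integer value of the file stem rather than lexicographically,
# so frame 2 orders before frame 10 even without zero padding.
ordered = sorted(frame_files, key=lambda p: int(os.path.splitext(p)[0]))
print(ordered)  # ['00000.jpg', '00001.jpg', '00002.jpg', '00010.jpg']
```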
@@ -245,8 +253,9 @@ def fill_holes_in_mask_scores(mask, max_area):
     except Exception as e:
         # Skip the post-processing step on removing small holes if the CUDA kernel fails
         warnings.warn(
-            f"{e}\n\nSkipping the post-processing step due to the error above. "
-            "Consider building SAM 2 with CUDA extension to enable post-processing (see "
+            f"{e}\n\nSkipping the post-processing step due to the error above. You can "
+            "still use SAM 2 and it's OK to ignore the error above, although some post-processing "
+            "functionality may be limited (which doesn't affect the results in most cases; see "
             "https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).",
             category=UserWarning,
             stacklevel=2,
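Because this message is emitted via `warnings.warn` with `category=UserWarning`, callers who have confirmed the limitation is acceptable can capture or silence it with the standard `warnings` machinery. A minimal sketch with a stand-in function (not SAM 2's actual call site):

```python
import warnings

def post_process():
    # Stand-in for the failing post-processing path above.
    warnings.warn(
        "Skipping the post-processing step due to the error above.",
        category=UserWarning,
        stacklevel=2,
    )
    return "mask"

# Capture the warning instead of letting it print:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = post_process()

print(result)                         # mask
print(caught[0].category.__name__)    # UserWarning
```

Using `warnings.filterwarnings("ignore", category=UserWarning)` would silence it entirely; `stacklevel=2` makes the reported location the caller rather than the library internals.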
@@ -105,8 +105,9 @@ class SAM2Transforms(nn.Module):
         except Exception as e:
             # Skip the post-processing step if the CUDA kernel fails
             warnings.warn(
-                f"{e}\n\nSkipping the post-processing step due to the error above. "
-                "Consider building SAM 2 with CUDA extension to enable post-processing (see "
+                f"{e}\n\nSkipping the post-processing step due to the error above. You can "
+                "still use SAM 2 and it's OK to ignore the error above, although some post-processing "
+                "functionality may be limited (which doesn't affect the results in most cases; see "
                 "https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).",
                 category=UserWarning,
                 stacklevel=2,
setup.py — 71 changes
@@ -6,7 +6,6 @@
 import os
 
 from setuptools import find_packages, setup
-from torch.utils.cpp_extension import BuildExtension, CUDAExtension
 
 # Package metadata
 NAME = "SAM 2"
@@ -50,7 +49,8 @@ BUILD_ALLOW_ERRORS = os.getenv("SAM2_BUILD_ALLOW_ERRORS", "1") == "1"
 CUDA_ERROR_MSG = (
     "{}\n\n"
     "Failed to build the SAM 2 CUDA extension due to the error above. "
-    "You can still use SAM 2, but some post-processing functionality may be limited "
-    "(see https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).\n"
+    "You can still use SAM 2 and it's OK to ignore the error above, although some "
+    "post-processing functionality may be limited (which doesn't affect the results in most cases; "
+    "see https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).\n"
 )
@@ -60,6 +60,8 @@ def get_extensions():
         return []
 
     try:
+        from torch.utils.cpp_extension import CUDAExtension
+
         srcs = ["sam2/csrc/connected_components.cu"]
         compile_args = {
             "cxx": [],
@@ -81,29 +83,46 @@ def get_extensions():
     return ext_modules
 
 
-class BuildExtensionIgnoreErrors(BuildExtension):
-
-    def finalize_options(self):
-        try:
-            super().finalize_options()
-        except Exception as e:
-            print(CUDA_ERROR_MSG.format(e))
-            self.extensions = []
-
-    def build_extensions(self):
-        try:
-            super().build_extensions()
-        except Exception as e:
-            print(CUDA_ERROR_MSG.format(e))
-            self.extensions = []
-
-    def get_ext_filename(self, ext_name):
-        try:
-            return super().get_ext_filename(ext_name)
-        except Exception as e:
-            print(CUDA_ERROR_MSG.format(e))
-            self.extensions = []
-            return "_C.so"
+try:
+    from torch.utils.cpp_extension import BuildExtension
+
+    class BuildExtensionIgnoreErrors(BuildExtension):
+
+        def finalize_options(self):
+            try:
+                super().finalize_options()
+            except Exception as e:
+                print(CUDA_ERROR_MSG.format(e))
+                self.extensions = []
+
+        def build_extensions(self):
+            try:
+                super().build_extensions()
+            except Exception as e:
+                print(CUDA_ERROR_MSG.format(e))
+                self.extensions = []
+
+        def get_ext_filename(self, ext_name):
+            try:
+                return super().get_ext_filename(ext_name)
+            except Exception as e:
+                print(CUDA_ERROR_MSG.format(e))
+                self.extensions = []
+                return "_C.so"
+
+    cmdclass = {
+        "build_ext": (
+            BuildExtensionIgnoreErrors.with_options(no_python_abi_suffix=True)
+            if BUILD_ALLOW_ERRORS
+            else BuildExtension.with_options(no_python_abi_suffix=True)
+        )
+    }
+except Exception as e:
+    cmdclass = {}
+    if BUILD_ALLOW_ERRORS:
+        print(CUDA_ERROR_MSG.format(e))
+    else:
+        raise e
 
 
 # Setup configuration
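The restructuring above wraps the `BuildExtension` import and the `cmdclass` definition in a `try`/`except` gated by `SAM2_BUILD_ALLOW_ERRORS`, so `setup.py` degrades gracefully when torch's build tooling is unavailable. The pattern in isolation (the failing module name below is a hypothetical stand-in for the torch import):

```python
import os

ALLOW_ERRORS = os.getenv("SAM2_BUILD_ALLOW_ERRORS", "1") == "1"

def make_cmdclass(allow_errors: bool):
    """Return setuptools cmdclass overrides, or {} when the optional
    build tooling can't be imported and errors are allowed."""
    try:
        # Hypothetical stand-in for
        # `from torch.utils.cpp_extension import BuildExtension`:
        import nonexistent_build_tooling  # noqa: F401
        return {"build_ext": nonexistent_build_tooling.BuildExtension}
    except Exception as e:
        if allow_errors:
            print(f"Skipping CUDA extension build: {e}")
            return {}
        raise

print(make_cmdclass(True))  # {} -- import failed, but errors are allowed
```

With `allow_errors=False` the import error propagates instead, which is what `SAM2_BUILD_ALLOW_ERRORS=0` achieves in the real script.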
@@ -124,11 +143,5 @@ setup(
     extras_require=EXTRA_PACKAGES,
     python_requires=">=3.10.0",
     ext_modules=get_extensions(),
-    cmdclass={
-        "build_ext": (
-            BuildExtensionIgnoreErrors.with_options(no_python_abi_suffix=True)
-            if BUILD_ALLOW_ERRORS
-            else BuildExtension.with_options(no_python_abi_suffix=True)
-        )
-    },
+    cmdclass=cmdclass,
 )