SAM2.1 checkpoints + training code + Demo
This commit is contained in:
Haitham Khedr
2024-09-28 08:20:56 -07:00
parent 7e1596c0b6
commit aa9b8722d0
325 changed files with 38174 additions and 223 deletions

2
demo/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
data/uploads
data/posters

173
demo/README.md Normal file
View File

@@ -0,0 +1,173 @@
# SAM 2 Demo
Welcome to the SAM 2 Demo! This project consists of a frontend built with React, TypeScript, and Vite, and a backend service using Python Flask and Strawberry GraphQL. Both components can run in Docker containers or locally on MPS (Metal Performance Shaders) or CPU. Note, however, that running the backend service on MPS or CPU may result in significantly slower performance (lower FPS).
## Prerequisites
Before you begin, ensure you have the following installed on your system:
- Docker and Docker Compose
- [OPTIONAL] Node.js and Yarn for running frontend locally
- [OPTIONAL] Anaconda for running backend locally
### Installing Docker
To install Docker, follow these steps:
1. Go to the [Docker website](https://www.docker.com/get-started)
2. Follow the installation instructions for your operating system.
### [OPTIONAL] Installing Node.js and Yarn
To install Node.js and Yarn, follow these steps:
1. Go to the [Node.js website](https://nodejs.org/en/download/).
2. Follow the installation instructions for your operating system.
3. Once Node.js is installed, open a terminal or command prompt and run the following command to install Yarn:
```bash
npm install -g yarn
```
### [OPTIONAL] Installing Anaconda
To install Anaconda, follow these steps:
1. Go to the [Anaconda website](https://www.anaconda.com/products/distribution).
2. Follow the installation instructions for your operating system.
## Quick Start
To get both the frontend and backend running quickly using Docker, you can use the following command:
```bash
docker compose up --build
```
> [!WARNING]
> On macOS, Docker containers only support running on CPU. MPS is not supported through Docker. If you want to run the demo backend service on MPS, you will need to run it locally (see "Running the Backend Locally" below).
This will build and start both services. You can access them at:
- **Frontend:** [http://localhost:7262](http://localhost:7262)
- **Backend:** [http://localhost:7263/graphql](http://localhost:7263/graphql)
## Running Backend with MPS Support
MPS (Metal Performance Shaders) is not supported with Docker. To use MPS, you need to run the backend on your local machine.
### Setting Up Your Environment
1. **Create Conda environment**
Create a new conda environment for this project by running the following command, or reuse your existing SAM 2 conda environment:
```bash
conda create --name sam2-demo python=3.10 --yes
```
This will create a new environment named `sam2-demo` with Python 3.10 as the interpreter.
2. **Activate the Conda environment:**
```bash
conda activate sam2-demo
```
3. **Install ffmpeg**
```bash
conda install -c conda-forge ffmpeg
```
4. **Install SAM 2 demo dependencies:**
Install project dependencies by running the following command in the SAM 2 checkout root directory:
```bash
pip install -e '.[interactive-demo]'
```
### Running the Backend Locally
Download the SAM 2 checkpoints:
```bash
(cd ./checkpoints && ./download_ckpts.sh)
```
Use the following command to start the backend with MPS support:
```bash
cd demo/backend/server/
```
```bash
PYTORCH_ENABLE_MPS_FALLBACK=1 \
APP_ROOT="$(pwd)/../../../" \
APP_URL=http://localhost:7263 \
MODEL_SIZE=base_plus \
DATA_PATH="$(pwd)/../../data" \
DEFAULT_VIDEO_PATH=gallery/05_default_juggle.mp4 \
gunicorn \
--worker-class gthread app:app \
--workers 1 \
--threads 2 \
--bind 0.0.0.0:7263 \
--timeout 60
```
Options for the `MODEL_SIZE` argument are "tiny", "small", "base_plus" (default), and "large".
> [!WARNING]
> Running the backend service on MPS devices can cause fatal crashes with the Gunicorn worker due to insufficient MPS memory. Try switching to CPU devices by setting the `SAM2_DEMO_FORCE_CPU_DEVICE=1` environment variable.
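The backend reads `MODEL_SIZE` from the environment and only the four values above are meaningful. As a sanity check, such a selection can be validated before launching the server; the helper below is a hypothetical sketch (not part of the demo code) mirroring that environment-variable handling:

```python
import os

VALID_MODEL_SIZES = {"tiny", "small", "base_plus", "large"}

def resolve_model_size(default: str = "base_plus") -> str:
    """Read MODEL_SIZE from the environment, falling back to the default."""
    size = os.getenv("MODEL_SIZE", default)
    if size not in VALID_MODEL_SIZES:
        raise ValueError(
            f"unknown MODEL_SIZE {size!r}; expected one of {sorted(VALID_MODEL_SIZES)}"
        )
    return size
```

Failing fast on a typo here is cheaper than discovering a missing checkpoint at inference time.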
### Starting the Frontend
If you wish to run the frontend separately (useful for development), follow these steps:
1. **Navigate to demo frontend directory:**
```bash
cd demo/frontend
```
2. **Install dependencies:**
```bash
yarn install
```
3. **Start the development server:**
```bash
yarn dev --port 7262
```
This will start the frontend development server on [http://localhost:7262](http://localhost:7262).
## Docker Tips
- To rebuild the Docker containers (useful if you've made changes to the Dockerfile or dependencies):
```bash
docker compose up --build
```
- To stop the Docker containers:
```bash
docker compose down
```
## Contributing
Contributions are welcome! Please read our contributing guidelines to get started.
## License
See the LICENSE file for details.
---
By following these instructions, you should have a fully functional development environment for both the frontend and backend of the SAM 2 Demo. Happy coding!

136
demo/backend/server/app.py Normal file
View File

@@ -0,0 +1,136 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import logging
from typing import Any, Generator
from app_conf import (
GALLERY_PATH,
GALLERY_PREFIX,
POSTERS_PATH,
POSTERS_PREFIX,
UPLOADS_PATH,
UPLOADS_PREFIX,
)
from data.loader import preload_data
from data.schema import schema
from data.store import set_videos
from flask import Flask, make_response, Request, request, Response, send_from_directory
from flask_cors import CORS
from inference.data_types import PropagateDataResponse, PropagateInVideoRequest
from inference.multipart import MultipartResponseBuilder
from inference.predictor import InferenceAPI
from strawberry.flask.views import GraphQLView
logger = logging.getLogger(__name__)
app = Flask(__name__)
cors = CORS(app, supports_credentials=True)
videos = preload_data()
set_videos(videos)
inference_api = InferenceAPI()
@app.route("/healthy")
def healthy() -> Response:
return make_response("OK", 200)
@app.route(f"/{GALLERY_PREFIX}/<path:path>", methods=["GET"])
def send_gallery_video(path: str) -> Response:
try:
return send_from_directory(
GALLERY_PATH,
path,
)
except Exception:
raise ValueError("resource not found")
@app.route(f"/{POSTERS_PREFIX}/<path:path>", methods=["GET"])
def send_poster_image(path: str) -> Response:
try:
return send_from_directory(
POSTERS_PATH,
path,
)
except Exception:
raise ValueError("resource not found")
@app.route(f"/{UPLOADS_PREFIX}/<path:path>", methods=["GET"])
def send_uploaded_video(path: str) -> Response:
try:
return send_from_directory(
UPLOADS_PATH,
path,
)
except Exception:
raise ValueError("resource not found")
# TODO: Protect route with ToS permission check
@app.route("/propagate_in_video", methods=["POST"])
def propagate_in_video() -> Response:
data = request.json
args = {
"session_id": data["session_id"],
"start_frame_index": data.get("start_frame_index", 0),
}
boundary = "frame"
frame = gen_track_with_mask_stream(boundary, **args)
return Response(frame, mimetype="multipart/x-savi-stream; boundary=" + boundary)
def gen_track_with_mask_stream(
boundary: str,
session_id: str,
start_frame_index: int,
) -> Generator[bytes, None, None]:
with inference_api.autocast_context():
request = PropagateInVideoRequest(
type="propagate_in_video",
session_id=session_id,
start_frame_index=start_frame_index,
)
for chunk in inference_api.propagate_in_video(request=request):
yield MultipartResponseBuilder.build(
boundary=boundary,
headers={
"Content-Type": "application/json; charset=utf-8",
"Frame-Current": "-1",
# Total frames minus the reference frame
"Frame-Total": "-1",
"Mask-Type": "RLE[]",
},
body=chunk.to_json().encode("UTF-8"),
).get_message()
class MyGraphQLView(GraphQLView):
def get_context(self, request: Request, response: Response) -> Any:
return {"inference_api": inference_api}
# Add GraphQL route to Flask app.
app.add_url_rule(
"/graphql",
view_func=MyGraphQLView.as_view(
"graphql_view",
schema=schema,
# Disable GET queries
# https://strawberry.rocks/docs/operations/deployment
# https://strawberry.rocks/docs/integrations/flask
allow_queries_via_get=False,
),
)
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)

View File

@@ -0,0 +1,55 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import logging
import os
from pathlib import Path
logger = logging.getLogger(__name__)
APP_ROOT = os.getenv("APP_ROOT", "/opt/sam2")
API_URL = os.getenv("API_URL", "http://localhost:7263")
MODEL_SIZE = os.getenv("MODEL_SIZE", "base_plus")
logger.info(f"using model size {MODEL_SIZE}")
FFMPEG_NUM_THREADS = int(os.getenv("FFMPEG_NUM_THREADS", "1"))
# Path for all data used in API
DATA_PATH = Path(os.getenv("DATA_PATH", "/data"))
# Max duration an uploaded video can have in seconds. The default is 10
# seconds.
MAX_UPLOAD_VIDEO_DURATION = float(os.environ.get("MAX_UPLOAD_VIDEO_DURATION", "10"))
# If set, it will define which video is returned by the default video query for
# desktop
DEFAULT_VIDEO_PATH = os.getenv("DEFAULT_VIDEO_PATH")
# Prefix for gallery videos
GALLERY_PREFIX = "gallery"
# Path where all gallery videos are stored
GALLERY_PATH = DATA_PATH / GALLERY_PREFIX
# Prefix for uploaded videos
UPLOADS_PREFIX = "uploads"
# Path where all uploaded videos are stored
UPLOADS_PATH = DATA_PATH / UPLOADS_PREFIX
# Prefix for video posters (1st frame of video)
POSTERS_PREFIX = "posters"
# Path where all posters are stored
POSTERS_PATH = DATA_PATH / POSTERS_PREFIX
# Make sure any of those paths exist
os.makedirs(DATA_PATH, exist_ok=True)
os.makedirs(GALLERY_PATH, exist_ok=True)
os.makedirs(UPLOADS_PATH, exist_ok=True)
os.makedirs(POSTERS_PATH, exist_ok=True)

View File

@@ -0,0 +1,154 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from dataclasses import dataclass
from typing import Iterable, List, Optional
import strawberry
from app_conf import API_URL
from data.resolver import resolve_videos
from dataclasses_json import dataclass_json
from strawberry import relay
@strawberry.type
class Video(relay.Node):
"""Core type for video."""
code: relay.NodeID[str]
path: str
poster_path: Optional[str]
width: int
height: int
@strawberry.field
def url(self) -> str:
return f"{API_URL}/{self.path}"
@strawberry.field
def poster_url(self) -> str:
return f"{API_URL}/{self.poster_path}"
@classmethod
def resolve_nodes(
cls,
*,
info: strawberry.Info,
node_ids: Iterable[str],
required: bool = False,
):
return resolve_videos(node_ids, required)
@strawberry.type
class RLEMask:
"""Core type for Onevision GraphQL RLE mask."""
size: List[int]
counts: str
order: str
@strawberry.type
class RLEMaskForObject:
"""Type for RLE mask associated with a specific object id."""
object_id: int
rle_mask: RLEMask
@strawberry.type
class RLEMaskListOnFrame:
"""Type for a list of object-associated RLE masks on a specific video frame."""
frame_index: int
rle_mask_list: List[RLEMaskForObject]
@strawberry.input
class StartSessionInput:
path: str
@strawberry.type
class StartSession:
session_id: str
@strawberry.input
class PingInput:
session_id: str
@strawberry.type
class Pong:
success: bool
@strawberry.input
class CloseSessionInput:
session_id: str
@strawberry.type
class CloseSession:
success: bool
@strawberry.input
class AddPointsInput:
session_id: str
frame_index: int
clear_old_points: bool
object_id: int
labels: List[int]
points: List[List[float]]
@strawberry.input
class ClearPointsInFrameInput:
session_id: str
frame_index: int
object_id: int
@strawberry.input
class ClearPointsInVideoInput:
session_id: str
@strawberry.type
class ClearPointsInVideo:
success: bool
@strawberry.input
class RemoveObjectInput:
session_id: str
object_id: int
@strawberry.input
class PropagateInVideoInput:
session_id: str
start_frame_index: int
@strawberry.input
class CancelPropagateInVideoInput:
session_id: str
@strawberry.type
class CancelPropagateInVideo:
success: bool
@strawberry.type
class SessionExpiration:
session_id: str
expiration_time: int
max_expiration_time: int
ttl: int

View File

@@ -0,0 +1,92 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import shutil
import subprocess
from glob import glob
from pathlib import Path
from typing import Dict, Optional
import imagesize
from app_conf import GALLERY_PATH, POSTERS_PATH, POSTERS_PREFIX
from data.data_types import Video
from tqdm import tqdm
def preload_data() -> Dict[str, Video]:
"""
Preload data including gallery videos and their posters.
"""
# Dictionaries for videos and datasets on the backend.
# Note that since Python 3.7, dictionaries preserve their insert order, so
# when looping over its `.values()`, elements inserted first also appear first.
# https://stackoverflow.com/questions/39980323/are-dictionaries-ordered-in-python-3-6
all_videos = {}
video_path_pattern = os.path.join(GALLERY_PATH, "**/*.mp4")
video_paths = glob(video_path_pattern, recursive=True)
for p in tqdm(video_paths):
video = get_video(p, GALLERY_PATH)
all_videos[video.code] = video
return all_videos
def get_video(
filepath: os.PathLike,
absolute_path: Path,
file_key: Optional[str] = None,
generate_poster: bool = True,
width: Optional[int] = None,
height: Optional[int] = None,
verbose: Optional[bool] = False,
) -> Video:
"""
Get video object given
"""
# Use absolute_path to include the parent directory in the video
video_path = os.path.relpath(filepath, absolute_path.parent)
poster_path = None
if generate_poster:
poster_id = os.path.splitext(os.path.basename(filepath))[0]
poster_filename = f"{str(poster_id)}.jpg"
poster_path = f"{POSTERS_PREFIX}/{poster_filename}"
# Extract the first frame from video
poster_output_path = os.path.join(POSTERS_PATH, poster_filename)
ffmpeg = shutil.which("ffmpeg")
subprocess.call(
[
ffmpeg,
"-y",
"-i",
str(filepath),
"-pix_fmt",
"yuv420p",
"-frames:v",
"1",
"-update",
"1",
"-strict",
"unofficial",
str(poster_output_path),
],
stdout=None if verbose else subprocess.DEVNULL,
stderr=None if verbose else subprocess.DEVNULL,
)
# Extract video width and height from poster. This is important to optimize
# rendering previews in the mosaic video preview.
width, height = imagesize.get(poster_output_path)
return Video(
code=video_path,
path=video_path if file_key is None else file_key,
poster_path=poster_path,
width=width,
height=height,
)
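The poster extraction above shells out to ffmpeg with a fixed set of flags. Pulling the command construction into a pure helper makes it easy to inspect and test without running ffmpeg; a hypothetical refactor mirroring the flags used in `get_video`:

```python
from typing import List

def build_poster_cmd(ffmpeg: str, video_path: str, poster_path: str) -> List[str]:
    """Build the ffmpeg invocation that grabs the first video frame as a JPEG poster."""
    return [
        ffmpeg,
        "-y",                   # overwrite an existing poster
        "-i", video_path,       # input video
        "-pix_fmt", "yuv420p",  # widely supported pixel format
        "-frames:v", "1",       # keep only the first video frame
        "-update", "1",         # write a single image rather than an image sequence
        "-strict", "unofficial",
        poster_path,
    ]
```

The returned list can then be passed to `subprocess.call` exactly as the loader does.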

View File

@@ -0,0 +1,18 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from typing import Iterable
def resolve_videos(node_ids: Iterable[str], required: bool = False):
"""
Resolve videos given node ids.
"""
from data.store import get_videos
all_videos = get_videos()
return [
all_videos[nid] if required else all_videos.get(nid, None) for nid in node_ids
]
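`resolve_videos` supports two lookup modes: with `required=True` a missing node id raises `KeyError`, while the lenient default resolves it to `None`. The contract in isolation, over a generic store (names are illustrative):

```python
from typing import Dict, Iterable, List, Optional

def resolve_items(
    store: Dict[str, str], node_ids: Iterable[str], required: bool = False
) -> List[Optional[str]]:
    """Strict lookup raises on a missing id; lenient lookup returns None instead."""
    return [store[nid] if required else store.get(nid) for nid in node_ids]
```

Relay resolvers use the lenient mode to report partial results rather than failing the whole query.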

View File

@@ -0,0 +1,357 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import hashlib
import os
import shutil
import tempfile
from pathlib import Path
from typing import Iterable, List, Optional, Tuple, Union
import av
import strawberry
from app_conf import (
DATA_PATH,
DEFAULT_VIDEO_PATH,
MAX_UPLOAD_VIDEO_DURATION,
UPLOADS_PATH,
UPLOADS_PREFIX,
)
from data.data_types import (
AddPointsInput,
CancelPropagateInVideo,
CancelPropagateInVideoInput,
ClearPointsInFrameInput,
ClearPointsInVideo,
ClearPointsInVideoInput,
CloseSession,
CloseSessionInput,
RemoveObjectInput,
RLEMask,
RLEMaskForObject,
RLEMaskListOnFrame,
StartSession,
StartSessionInput,
Video,
)
from data.loader import get_video
from data.store import get_videos
from data.transcoder import get_video_metadata, transcode, VideoMetadata
from inference.data_types import (
AddPointsRequest,
CancelPropagateInVideoRequest,
ClearPointsInFrameRequest,
ClearPointsInVideoRequest,
CloseSessionRequest,
RemoveObjectRequest,
StartSessionRequest,
)
from inference.predictor import InferenceAPI
from strawberry import relay
from strawberry.file_uploads import Upload
@strawberry.type
class Query:
@strawberry.field
def default_video(self) -> Video:
"""
Return the default video.
The default video can be set with the DEFAULT_VIDEO_PATH environment
variable. It will return the video that matches this path. If no video
is found, it will return the first video.
"""
all_videos = get_videos()
# Find the video that matches the default path and return that as
# default video.
for _, v in all_videos.items():
if v.path == DEFAULT_VIDEO_PATH:
return v
# Fallback is returning the first video
return next(iter(all_videos.values()))
@relay.connection(relay.ListConnection[Video])
def videos(
self,
) -> Iterable[Video]:
"""
Return all available videos.
"""
all_videos = get_videos()
return all_videos.values()
@strawberry.type
class Mutation:
@strawberry.mutation
def upload_video(
self,
file: Upload,
start_time_sec: Optional[float] = None,
duration_time_sec: Optional[float] = None,
) -> Video:
"""
Receive a video file and store it under the local uploads path.
"""
max_time = MAX_UPLOAD_VIDEO_DURATION
filepath, file_key, vm = process_video(
file,
max_time=max_time,
start_time_sec=start_time_sec,
duration_time_sec=duration_time_sec,
)
video = get_video(
filepath,
UPLOADS_PATH,
file_key=file_key,
width=vm.width,
height=vm.height,
generate_poster=False,
)
return video
@strawberry.mutation
def start_session(
self, input: StartSessionInput, info: strawberry.Info
) -> StartSession:
inference_api: InferenceAPI = info.context["inference_api"]
request = StartSessionRequest(
type="start_session",
path=f"{DATA_PATH}/{input.path}",
)
response = inference_api.start_session(request=request)
return StartSession(session_id=response.session_id)
@strawberry.mutation
def close_session(
self, input: CloseSessionInput, info: strawberry.Info
) -> CloseSession:
inference_api: InferenceAPI = info.context["inference_api"]
request = CloseSessionRequest(
type="close_session",
session_id=input.session_id,
)
response = inference_api.close_session(request)
return CloseSession(success=response.success)
@strawberry.mutation
def add_points(
self, input: AddPointsInput, info: strawberry.Info
) -> RLEMaskListOnFrame:
inference_api: InferenceAPI = info.context["inference_api"]
request = AddPointsRequest(
type="add_points",
session_id=input.session_id,
frame_index=input.frame_index,
object_id=input.object_id,
points=input.points,
labels=input.labels,
clear_old_points=input.clear_old_points,
)
response = inference_api.add_points(request)
return RLEMaskListOnFrame(
frame_index=response.frame_index,
rle_mask_list=[
RLEMaskForObject(
object_id=r.object_id,
rle_mask=RLEMask(counts=r.mask.counts, size=r.mask.size, order="F"),
)
for r in response.results
],
)
@strawberry.mutation
def remove_object(
self, input: RemoveObjectInput, info: strawberry.Info
) -> List[RLEMaskListOnFrame]:
inference_api: InferenceAPI = info.context["inference_api"]
request = RemoveObjectRequest(
type="remove_object", session_id=input.session_id, object_id=input.object_id
)
response = inference_api.remove_object(request)
return [
RLEMaskListOnFrame(
frame_index=res.frame_index,
rle_mask_list=[
RLEMaskForObject(
object_id=r.object_id,
rle_mask=RLEMask(
counts=r.mask.counts, size=r.mask.size, order="F"
),
)
for r in res.results
],
)
for res in response.results
]
@strawberry.mutation
def clear_points_in_frame(
self, input: ClearPointsInFrameInput, info: strawberry.Info
) -> RLEMaskListOnFrame:
inference_api: InferenceAPI = info.context["inference_api"]
request = ClearPointsInFrameRequest(
type="clear_points_in_frame",
session_id=input.session_id,
frame_index=input.frame_index,
object_id=input.object_id,
)
response = inference_api.clear_points_in_frame(request)
return RLEMaskListOnFrame(
frame_index=response.frame_index,
rle_mask_list=[
RLEMaskForObject(
object_id=r.object_id,
rle_mask=RLEMask(counts=r.mask.counts, size=r.mask.size, order="F"),
)
for r in response.results
],
)
@strawberry.mutation
def clear_points_in_video(
self, input: ClearPointsInVideoInput, info: strawberry.Info
) -> ClearPointsInVideo:
inference_api: InferenceAPI = info.context["inference_api"]
request = ClearPointsInVideoRequest(
type="clear_points_in_video",
session_id=input.session_id,
)
response = inference_api.clear_points_in_video(request)
return ClearPointsInVideo(success=response.success)
@strawberry.mutation
def cancel_propagate_in_video(
self, input: CancelPropagateInVideoInput, info: strawberry.Info
) -> CancelPropagateInVideo:
inference_api: InferenceAPI = info.context["inference_api"]
request = CancelPropagateInVideoRequest(
type="cancel_propagate_in_video",
session_id=input.session_id,
)
response = inference_api.cancel_propagate_in_video(request)
return CancelPropagateInVideo(success=response.success)
def get_file_hash(video_path_or_file) -> str:
if isinstance(video_path_or_file, str):
with open(video_path_or_file, "rb") as in_f:
result = hashlib.sha256(in_f.read()).hexdigest()
else:
video_path_or_file.seek(0)
result = hashlib.sha256(video_path_or_file.read()).hexdigest()
return result
def _get_start_sec_duration_sec(
start_time_sec: Union[float, None],
duration_time_sec: Union[float, None],
max_time: float,
) -> Tuple[float, float]:
default_seek_t = int(os.environ.get("VIDEO_ENCODE_SEEK_TIME", "0"))
if start_time_sec is None:
start_time_sec = default_seek_t
if duration_time_sec is not None:
duration_time_sec = min(duration_time_sec, max_time)
else:
duration_time_sec = max_time
return start_time_sec, duration_time_sec
def process_video(
file: Upload,
max_time: float,
start_time_sec: Optional[float] = None,
duration_time_sec: Optional[float] = None,
) -> Tuple[str, str, VideoMetadata]:
"""
Process a video upload, including trimming and transcoding.
Returns the file path, the upload file key, and the output video metadata as a tuple.
"""
with tempfile.TemporaryDirectory() as tempdir:
in_path = f"{tempdir}/in.mp4"
out_path = f"{tempdir}/out.mp4"
with open(in_path, "wb") as in_f:
in_f.write(file.read())
try:
video_metadata = get_video_metadata(in_path)
except av.InvalidDataError:
raise Exception("not valid video file")
if video_metadata.num_video_streams == 0:
raise Exception("video container does not contain a video stream")
if video_metadata.width is None or video_metadata.height is None:
raise Exception("video container does not contain width or height metadata")
if video_metadata.duration_sec in (None, 0):
raise Exception("video container does time duration metadata")
start_time_sec, duration_time_sec = _get_start_sec_duration_sec(
max_time=max_time,
start_time_sec=start_time_sec,
duration_time_sec=duration_time_sec,
)
# Transcode video to make sure videos returned to the app are all in
# the same format, duration, resolution, fps.
transcode(
in_path,
out_path,
video_metadata,
seek_t=start_time_sec,
duration_time_sec=duration_time_sec,
)
os.remove(in_path) # don't need original video now
out_video_metadata = get_video_metadata(out_path)
if out_video_metadata.num_video_frames == 0:
raise Exception(
"transcode produced empty video; check seek time or your input video"
)
filepath = None
file_key = None
with open(out_path, "rb") as file_data:
file_hash = get_file_hash(file_data)
file_data.seek(0)
file_key = UPLOADS_PREFIX + "/" + f"{file_hash}.mp4"
filepath = os.path.join(UPLOADS_PATH, f"{file_hash}.mp4")
assert filepath is not None and file_key is not None
shutil.move(out_path, filepath)
return filepath, file_key, out_video_metadata
schema = strawberry.Schema(
query=Query,
mutation=Mutation,
)

View File

@@ -0,0 +1,28 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from typing import Dict
from data.data_types import Video
ALL_VIDEOS: Dict[str, Video] = {}
def set_videos(videos: Dict[str, Video]) -> None:
"""
Set the videos available in the backend. The data is kept in-memory, but a future change could replace the
in-memory storage with a database backend. This would also be more efficient when querying videos given a
dataset name etc.
"""
global ALL_VIDEOS
ALL_VIDEOS = videos
def get_videos() -> Dict[str, Video]:
"""
Return the videos available in the backend.
"""
global ALL_VIDEOS
return ALL_VIDEOS

View File

@@ -0,0 +1,186 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import ast
import math
import os
import shutil
import subprocess
from dataclasses import dataclass
from typing import Optional
import av
from app_conf import FFMPEG_NUM_THREADS
from dataclasses_json import dataclass_json
TRANSCODE_VERSION = 1
@dataclass_json
@dataclass
class VideoMetadata:
duration_sec: Optional[float]
video_duration_sec: Optional[float]
container_duration_sec: Optional[float]
fps: Optional[float]
width: Optional[int]
height: Optional[int]
num_video_frames: int
num_video_streams: int
video_start_time: float
def transcode(
in_path: str,
out_path: str,
in_metadata: Optional[VideoMetadata],
seek_t: float,
duration_time_sec: float,
):
codec = os.environ.get("VIDEO_ENCODE_CODEC", "libx264")
crf = int(os.environ.get("VIDEO_ENCODE_CRF", "23"))
fps = int(os.environ.get("VIDEO_ENCODE_FPS", "24"))
max_w = int(os.environ.get("VIDEO_ENCODE_MAX_WIDTH", "1280"))
max_h = int(os.environ.get("VIDEO_ENCODE_MAX_HEIGHT", "720"))
verbose = ast.literal_eval(os.environ.get("VIDEO_ENCODE_VERBOSE", "False"))
normalize_video(
in_path=in_path,
out_path=out_path,
max_w=max_w,
max_h=max_h,
seek_t=seek_t,
max_time=duration_time_sec,
in_metadata=in_metadata,
codec=codec,
crf=crf,
fps=fps,
verbose=verbose,
)
def get_video_metadata(path: str) -> VideoMetadata:
with av.open(path) as cont:
num_video_streams = len(cont.streams.video)
width, height, fps = None, None, None
video_duration_sec = 0
container_duration_sec = float((cont.duration or 0) / av.time_base)
video_start_time = 0.0
rotation_deg = 0
num_video_frames = 0
if num_video_streams > 0:
video_stream = cont.streams.video[0]
assert video_stream.time_base is not None
# for rotation, see: https://github.com/PyAV-Org/PyAV/pull/1249
rotation_deg = video_stream.side_data.get("DISPLAYMATRIX", 0)
num_video_frames = video_stream.frames
video_start_time = float(video_stream.start_time * video_stream.time_base)
width, height = video_stream.width, video_stream.height
fps = float(video_stream.guessed_rate)
fps_avg = video_stream.average_rate
if video_stream.duration is not None:
video_duration_sec = float(
video_stream.duration * video_stream.time_base
)
if fps is None:
fps = float(fps_avg)
if not math.isnan(rotation_deg) and int(rotation_deg) in (
90,
-90,
270,
-270,
):
width, height = height, width
duration_sec = max(container_duration_sec, video_duration_sec)
return VideoMetadata(
duration_sec=duration_sec,
container_duration_sec=container_duration_sec,
video_duration_sec=video_duration_sec,
video_start_time=video_start_time,
fps=fps,
width=width,
height=height,
num_video_streams=num_video_streams,
num_video_frames=num_video_frames,
)
def normalize_video(
in_path: str,
out_path: str,
max_w: int,
max_h: int,
seek_t: float,
max_time: float,
in_metadata: Optional[VideoMetadata],
codec: str = "libx264",
crf: int = 23,
fps: int = 24,
verbose: bool = False,
):
if in_metadata is None:
in_metadata = get_video_metadata(in_path)
assert in_metadata.num_video_streams > 0, "no video stream present"
w, h = in_metadata.width, in_metadata.height
assert w is not None, "width not available"
assert h is not None, "height not available"
# rescale to max_w:max_h if needed & preserve aspect ratio
r = w / h
if r < 1:
h = min(max_h, h)
w = h * r
else:
w = min(max_w, w)
h = w / r
# h264 cannot encode w/ odd dimensions
w = int(w)
h = int(h)
if w % 2 != 0:
w += 1
if h % 2 != 0:
h += 1
ffmpeg = shutil.which("ffmpeg")
cmd = [
ffmpeg,
"-threads",
f"{FFMPEG_NUM_THREADS}", # global threads
"-ss",
f"{seek_t:.2f}",
"-t",
f"{max_time:.2f}",
"-i",
in_path,
"-threads",
f"{FFMPEG_NUM_THREADS}", # decode (or filter..?) threads
"-vf",
f"fps={fps},scale={w}:{h},setsar=1:1",
"-c:v",
codec,
"-crf",
f"{crf}",
"-pix_fmt",
"yuv420p",
"-threads",
f"{FFMPEG_NUM_THREADS}", # encode threads
out_path,
"-y",
]
if verbose:
print(" ".join(cmd))
subprocess.call(
cmd,
stdout=None if verbose else subprocess.DEVNULL,
stderr=None if verbose else subprocess.DEVNULL,
)
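`normalize_video` scales frames to fit within the size limits while preserving aspect ratio, then rounds both sides up to even numbers because H.264 cannot encode odd dimensions. That arithmetic, extracted into a standalone sketch:

```python
def fit_even_dimensions(w: int, h: int, max_w: int = 1280, max_h: int = 720) -> tuple:
    """Scale (w, h) to fit within (max_w, max_h), keep aspect ratio, round up to even."""
    r = w / h
    if r < 1:  # portrait: clamp height
        h = min(max_h, h)
        w = h * r
    else:      # landscape: clamp width
        w = min(max_w, w)
        h = w / r
    w, h = int(w), int(h)
    # h264 cannot encode frames with odd dimensions
    if w % 2:
        w += 1
    if h % 2:
        h += 1
    return w, h
```

The computed `w:h` pair then feeds the `scale={w}:{h}` filter in the ffmpeg command.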

View File

@@ -0,0 +1,191 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from dataclasses import dataclass
from typing import Dict, List, Optional, Union
from dataclasses_json import dataclass_json
from torch import Tensor
@dataclass_json
@dataclass
class Mask:
size: List[int]
counts: str
@dataclass_json
@dataclass
class BaseRequest:
type: str
@dataclass_json
@dataclass
class StartSessionRequest(BaseRequest):
type: str
path: str
session_id: Optional[str] = None
@dataclass_json
@dataclass
class SaveSessionRequest(BaseRequest):
type: str
session_id: str
@dataclass_json
@dataclass
class LoadSessionRequest(BaseRequest):
type: str
session_id: str
@dataclass_json
@dataclass
class RenewSessionRequest(BaseRequest):
type: str
session_id: str
@dataclass_json
@dataclass
class CloseSessionRequest(BaseRequest):
type: str
session_id: str
@dataclass_json
@dataclass
class AddPointsRequest(BaseRequest):
type: str
session_id: str
frame_index: int
clear_old_points: bool
object_id: int
labels: List[int]
points: List[List[float]]
@dataclass_json
@dataclass
class AddMaskRequest(BaseRequest):
type: str
session_id: str
frame_index: int
object_id: int
mask: Mask
@dataclass_json
@dataclass
class ClearPointsInFrameRequest(BaseRequest):
type: str
session_id: str
frame_index: int
object_id: int
@dataclass_json
@dataclass
class ClearPointsInVideoRequest(BaseRequest):
type: str
session_id: str
@dataclass_json
@dataclass
class RemoveObjectRequest(BaseRequest):
type: str
session_id: str
object_id: int
@dataclass_json
@dataclass
class PropagateInVideoRequest(BaseRequest):
type: str
session_id: str
start_frame_index: int
@dataclass_json
@dataclass
class CancelPropagateInVideoRequest(BaseRequest):
type: str
session_id: str
@dataclass_json
@dataclass
class StartSessionResponse:
session_id: str
@dataclass_json
@dataclass
class SaveSessionResponse:
session_id: str
@dataclass_json
@dataclass
class LoadSessionResponse:
session_id: str
@dataclass_json
@dataclass
class RenewSessionResponse:
session_id: str
@dataclass_json
@dataclass
class CloseSessionResponse:
success: bool
@dataclass_json
@dataclass
class ClearPointsInVideoResponse:
success: bool
@dataclass_json
@dataclass
class PropagateDataValue:
object_id: int
mask: Mask
@dataclass_json
@dataclass
class PropagateDataResponse:
frame_index: int
results: List[PropagateDataValue]
@dataclass_json
@dataclass
class RemoveObjectResponse:
results: List[PropagateDataResponse]
@dataclass_json
@dataclass
class CancelPorpagateResponse:
success: bool
@dataclass_json
@dataclass
class InferenceSession:
start_time: float
last_use_time: float
session_id: str
state: Dict[str, Dict[str, Union[Tensor, Dict[int, Tensor]]]]
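These request and response types rely on `dataclasses_json` for serialization. An equivalent stdlib-only round trip for one of them looks like this (a sketch, assuming a plain field-by-field JSON mapping; the class and function names are stand-ins):

```python
import json
from dataclasses import asdict, dataclass
from typing import List

@dataclass
class AddPointsRequestSketch:
    """Stdlib stand-in for the dataclass_json-decorated AddPointsRequest."""
    type: str
    session_id: str
    frame_index: int
    clear_old_points: bool
    object_id: int
    labels: List[int]
    points: List[List[float]]

def to_json(req: AddPointsRequestSketch) -> str:
    # asdict flattens the dataclass into plain dicts/lists for json.dumps
    return json.dumps(asdict(req))

def from_json(payload: str) -> AddPointsRequestSketch:
    return AddPointsRequestSketch(**json.loads(payload))
```

Keeping the wire format a flat mirror of the dataclass fields is what lets the frontend and backend share one schema.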

View File

@@ -0,0 +1,48 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from typing import Dict, Union
class MultipartResponseBuilder:
message: bytes
def __init__(self, boundary: str) -> None:
self.message = b"--" + boundary.encode("utf-8") + b"\r\n"
@classmethod
def build(
cls, boundary: str, headers: Dict[str, str], body: Union[str, bytes]
) -> "MultipartResponseBuilder":
builder = cls(boundary=boundary)
for k, v in headers.items():
builder.__append_header(key=k, value=v)
if isinstance(body, bytes):
builder.__append_body(body)
elif isinstance(body, str):
builder.__append_body(body.encode("utf-8"))
else:
raise ValueError(
f"body needs to be of type bytes or str but got {type(body)}"
)
return builder
def get_message(self) -> bytes:
return self.message
def __append_header(self, key: str, value: str) -> "MultipartResponseBuilder":
self.message += key.encode("utf-8") + b": " + value.encode("utf-8") + b"\r\n"
return self
def __close_header(self) -> "MultipartResponseBuilder":
self.message += b"\r\n"
return self
def __append_body(self, body: bytes) -> "MultipartResponseBuilder":
self.__append_header(key="Content-Length", value=str(len(body)))
self.__close_header()
self.message += body
return self
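The builder above emits one chunk of a multipart stream: boundary line, caller headers with `Content-Length` appended, a blank line, then the body. A self-contained sketch of the same assembly (the `frame` boundary and header values are illustrative, not fixed API values):

```python
from typing import Dict

def build_part(boundary: str, headers: Dict[str, str], body: bytes) -> bytes:
    # Same layout as MultipartResponseBuilder.build(): boundary marker,
    # caller headers, Content-Length, blank line, then the raw body.
    msg = b"--" + boundary.encode("utf-8") + b"\r\n"
    for k, v in {**headers, "Content-Length": str(len(body))}.items():
        msg += k.encode("utf-8") + b": " + v.encode("utf-8") + b"\r\n"
    return msg + b"\r\n" + body

chunk = build_part("frame", {"Content-Type": "application/json"}, b"{}")
```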

View File

@@ -0,0 +1,427 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import contextlib
import logging
import os
import uuid
from pathlib import Path
from threading import Lock
from typing import Any, Dict, Generator, List
import numpy as np
import torch
from app_conf import APP_ROOT, MODEL_SIZE
from inference.data_types import (
AddMaskRequest,
AddPointsRequest,
CancelPorpagateResponse,
CancelPropagateInVideoRequest,
ClearPointsInFrameRequest,
ClearPointsInVideoRequest,
ClearPointsInVideoResponse,
CloseSessionRequest,
CloseSessionResponse,
Mask,
PropagateDataResponse,
PropagateDataValue,
PropagateInVideoRequest,
RemoveObjectRequest,
RemoveObjectResponse,
StartSessionRequest,
StartSessionResponse,
)
from pycocotools.mask import decode as decode_masks, encode as encode_masks
from sam2.build_sam import build_sam2_video_predictor
logger = logging.getLogger(__name__)
class InferenceAPI:
def __init__(self) -> None:
super(InferenceAPI, self).__init__()
self.session_states: Dict[str, Any] = {}
self.score_thresh = 0
if MODEL_SIZE == "tiny":
checkpoint = Path(APP_ROOT) / "checkpoints/sam2.1_hiera_tiny.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_t.yaml"
elif MODEL_SIZE == "small":
checkpoint = Path(APP_ROOT) / "checkpoints/sam2.1_hiera_small.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_s.yaml"
elif MODEL_SIZE == "large":
checkpoint = Path(APP_ROOT) / "checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
else: # base_plus (default)
checkpoint = Path(APP_ROOT) / "checkpoints/sam2.1_hiera_base_plus.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_b+.yaml"
# select the device for computation
force_cpu_device = os.environ.get("SAM2_DEMO_FORCE_CPU_DEVICE", "0") == "1"
if force_cpu_device:
logger.info("forcing CPU device for SAM 2 demo")
if torch.cuda.is_available() and not force_cpu_device:
device = torch.device("cuda")
elif torch.backends.mps.is_available() and not force_cpu_device:
device = torch.device("mps")
else:
device = torch.device("cpu")
logger.info(f"using device: {device}")
if device.type == "cuda":
# turn on tfloat32 for Ampere GPUs (https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices)
if torch.cuda.get_device_properties(0).major >= 8:
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
elif device.type == "mps":
logger.warning(
"\nSupport for MPS devices is preliminary. SAM 2 is trained with CUDA and might "
"give numerically different outputs and sometimes degraded performance on MPS. "
"See e.g. https://github.com/pytorch/pytorch/issues/84936 for a discussion."
)
self.device = device
self.predictor = build_sam2_video_predictor(
model_cfg, checkpoint, device=device
)
self.inference_lock = Lock()
def autocast_context(self):
if self.device.type == "cuda":
return torch.autocast("cuda", dtype=torch.bfloat16)
else:
return contextlib.nullcontext()
def start_session(self, request: StartSessionRequest) -> StartSessionResponse:
with self.autocast_context(), self.inference_lock:
session_id = str(uuid.uuid4())
# for MPS devices, we offload the video frames to CPU by default to avoid
# memory fragmentation in MPS (which sometimes crashes the entire process)
offload_video_to_cpu = self.device.type == "mps"
inference_state = self.predictor.init_state(
request.path,
offload_video_to_cpu=offload_video_to_cpu,
)
self.session_states[session_id] = {
"canceled": False,
"state": inference_state,
}
return StartSessionResponse(session_id=session_id)
def close_session(self, request: CloseSessionRequest) -> CloseSessionResponse:
is_successful = self.__clear_session_state(request.session_id)
return CloseSessionResponse(success=is_successful)
def add_points(self, request: AddPointsRequest) -> PropagateDataResponse:
with self.autocast_context(), self.inference_lock:
session = self.__get_session(request.session_id)
inference_state = session["state"]
frame_idx = request.frame_index
obj_id = request.object_id
points = request.points
labels = request.labels
clear_old_points = request.clear_old_points
# add new prompts and instantly get the output on the same frame
frame_idx, object_ids, masks = self.predictor.add_new_points_or_box(
inference_state=inference_state,
frame_idx=frame_idx,
obj_id=obj_id,
points=points,
labels=labels,
clear_old_points=clear_old_points,
normalize_coords=False,
)
masks_binary = (masks > self.score_thresh)[:, 0].cpu().numpy()
rle_mask_list = self.__get_rle_mask_list(
object_ids=object_ids, masks=masks_binary
)
return PropagateDataResponse(
frame_index=frame_idx,
results=rle_mask_list,
)
def add_mask(self, request: AddMaskRequest) -> PropagateDataResponse:
"""
Add a new mask on a specific video frame.
- mask is a numpy array of shape [H_im, W_im] (containing 1 for foreground and 0 for background).
Note: providing an input mask would overwrite any previous input points on this frame.
"""
with self.autocast_context(), self.inference_lock:
session_id = request.session_id
frame_idx = request.frame_index
obj_id = request.object_id
rle_mask = {
"counts": request.mask.counts,
"size": request.mask.size,
}
mask = decode_masks(rle_mask)
logger.info(
f"add mask on frame {frame_idx} in session {session_id}: {obj_id=}, {mask.shape=}"
)
session = self.__get_session(session_id)
inference_state = session["state"]
frame_idx, obj_ids, video_res_masks = self.predictor.add_new_mask(
inference_state=inference_state,
frame_idx=frame_idx,
obj_id=obj_id,
mask=torch.tensor(mask > 0),
)
masks_binary = (video_res_masks > self.score_thresh)[:, 0].cpu().numpy()
rle_mask_list = self.__get_rle_mask_list(
object_ids=obj_ids, masks=masks_binary
)
return PropagateDataResponse(
frame_index=frame_idx,
results=rle_mask_list,
)
def clear_points_in_frame(
self, request: ClearPointsInFrameRequest
) -> PropagateDataResponse:
"""
Remove all input points in a specific frame.
"""
with self.autocast_context(), self.inference_lock:
session_id = request.session_id
frame_idx = request.frame_index
obj_id = request.object_id
logger.info(
f"clear inputs on frame {frame_idx} in session {session_id}: {obj_id=}"
)
session = self.__get_session(session_id)
inference_state = session["state"]
frame_idx, obj_ids, video_res_masks = (
self.predictor.clear_all_prompts_in_frame(
inference_state, frame_idx, obj_id
)
)
masks_binary = (video_res_masks > self.score_thresh)[:, 0].cpu().numpy()
rle_mask_list = self.__get_rle_mask_list(
object_ids=obj_ids, masks=masks_binary
)
return PropagateDataResponse(
frame_index=frame_idx,
results=rle_mask_list,
)
def clear_points_in_video(
self, request: ClearPointsInVideoRequest
) -> ClearPointsInVideoResponse:
"""
Remove all input points in all frames throughout the video.
"""
with self.autocast_context(), self.inference_lock:
session_id = request.session_id
logger.info(f"clear all inputs across the video in session {session_id}")
session = self.__get_session(session_id)
inference_state = session["state"]
self.predictor.reset_state(inference_state)
return ClearPointsInVideoResponse(success=True)
def remove_object(self, request: RemoveObjectRequest) -> RemoveObjectResponse:
"""
Remove an object id from the tracking state.
"""
with self.autocast_context(), self.inference_lock:
session_id = request.session_id
obj_id = request.object_id
logger.info(f"remove object in session {session_id}: {obj_id=}")
session = self.__get_session(session_id)
inference_state = session["state"]
new_obj_ids, updated_frames = self.predictor.remove_object(
inference_state, obj_id
)
results = []
for frame_index, video_res_masks in updated_frames:
masks = (video_res_masks > self.score_thresh)[:, 0].cpu().numpy()
rle_mask_list = self.__get_rle_mask_list(
object_ids=new_obj_ids, masks=masks
)
results.append(
PropagateDataResponse(
frame_index=frame_index,
results=rle_mask_list,
)
)
return RemoveObjectResponse(results=results)
def propagate_in_video(
self, request: PropagateInVideoRequest
) -> Generator[PropagateDataResponse, None, None]:
"""
Propagate existing input points in all frames to track the object across video.
"""
session_id = request.session_id
start_frame_idx = request.start_frame_index
propagation_direction = "both"
max_frame_num_to_track = None
# Note that as this method is a generator, we also need to use autocast_context
# in caller to this method to ensure that it's called under the correct context
# (we've added `autocast_context` to `gen_track_with_mask_stream` in app.py).
with self.autocast_context(), self.inference_lock:
logger.info(
f"propagate in video in session {session_id}: "
f"{propagation_direction=}, {start_frame_idx=}, {max_frame_num_to_track=}"
)
try:
session = self.__get_session(session_id)
session["canceled"] = False
inference_state = session["state"]
if propagation_direction not in ["both", "forward", "backward"]:
raise ValueError(
f"invalid propagation direction: {propagation_direction}"
)
# First doing the forward propagation
if propagation_direction in ["both", "forward"]:
for outputs in self.predictor.propagate_in_video(
inference_state=inference_state,
start_frame_idx=start_frame_idx,
max_frame_num_to_track=max_frame_num_to_track,
reverse=False,
):
if session["canceled"]:
return
frame_idx, obj_ids, video_res_masks = outputs
masks_binary = (
(video_res_masks > self.score_thresh)[:, 0].cpu().numpy()
)
rle_mask_list = self.__get_rle_mask_list(
object_ids=obj_ids, masks=masks_binary
)
yield PropagateDataResponse(
frame_index=frame_idx,
results=rle_mask_list,
)
# Then doing the backward propagation (reverse in time)
if propagation_direction in ["both", "backward"]:
for outputs in self.predictor.propagate_in_video(
inference_state=inference_state,
start_frame_idx=start_frame_idx,
max_frame_num_to_track=max_frame_num_to_track,
reverse=True,
):
if session["canceled"]:
return
frame_idx, obj_ids, video_res_masks = outputs
masks_binary = (
(video_res_masks > self.score_thresh)[:, 0].cpu().numpy()
)
rle_mask_list = self.__get_rle_mask_list(
object_ids=obj_ids, masks=masks_binary
)
yield PropagateDataResponse(
frame_index=frame_idx,
results=rle_mask_list,
)
finally:
# Log upon completion (so that e.g. we can see if two propagations happen in parallel).
# Using `finally` here to log even when the tracking is aborted with GeneratorExit.
logger.info(
f"propagation ended in session {session_id}; {self.__get_session_stats()}"
)
def cancel_propagate_in_video(
self, request: CancelPropagateInVideoRequest
) -> CancelPorpagateResponse:
session = self.__get_session(request.session_id)
session["canceled"] = True
return CancelPorpagateResponse(success=True)
def __get_rle_mask_list(
self, object_ids: List[int], masks: np.ndarray
) -> List[PropagateDataValue]:
"""
Return a list of data values, i.e. list of object/mask combos.
"""
return [
self.__get_mask_for_object(object_id=object_id, mask=mask)
for object_id, mask in zip(object_ids, masks)
]
def __get_mask_for_object(
self, object_id: int, mask: np.ndarray
) -> PropagateDataValue:
"""
Create a data value for an object/mask combo.
"""
mask_rle = encode_masks(np.array(mask, dtype=np.uint8, order="F"))
mask_rle["counts"] = mask_rle["counts"].decode()
return PropagateDataValue(
object_id=object_id,
mask=Mask(
size=mask_rle["size"],
counts=mask_rle["counts"],
),
)
def __get_session(self, session_id: str):
session = self.session_states.get(session_id, None)
if session is None:
raise RuntimeError(
f"Cannot find session {session_id}; it might have expired"
)
return session
def __get_session_stats(self):
"""Get a statistics string for live sessions and their GPU usage."""
# print both the session ids and their video frame numbers
live_session_strs = [
f"'{session_id}' ({session['state']['num_frames']} frames, "
f"{len(session['state']['obj_ids'])} objects)"
for session_id, session in self.session_states.items()
]
session_stats_str = (
f"live sessions: [{', '.join(live_session_strs)}], GPU memory: "
f"{torch.cuda.memory_allocated() // 1024**2} MiB used and "
f"{torch.cuda.memory_reserved() // 1024**2} MiB reserved"
f" (max over time: {torch.cuda.max_memory_allocated() // 1024**2} MiB used "
f"and {torch.cuda.max_memory_reserved() // 1024**2} MiB reserved)"
)
return session_stats_str
def __clear_session_state(self, session_id: str) -> bool:
session = self.session_states.pop(session_id, None)
if session is None:
logger.warning(
f"cannot close session {session_id} as it does not exist (it might have expired); "
f"{self.__get_session_stats()}"
)
return False
else:
logger.info(f"removed session {session_id}; {self.__get_session_stats()}")
return True
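Each handler above converts predictor logits to binary masks with `(masks > self.score_thresh)[:, 0]` before RLE encoding, i.e. thresholding at 0 and dropping the singleton channel dimension. A toy sketch of that step (shapes and values are illustrative):

```python
import numpy as np

score_thresh = 0
# Toy stand-in for predictor output logits: [num_objects, 1, H, W].
masks = np.array([[[[-2.0, 1.5],
                    [0.5, -0.1]]]])
masks_binary = (masks > score_thresh)[:, 0]  # -> [num_objects, H, W] booleans
```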

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

7
demo/frontend/.babelrc Normal file
View File

@@ -0,0 +1,7 @@
{
"env": {
"production": {
"plugins": ["babel-plugin-strip-invariant"]
}
}
}

View File

@@ -0,0 +1,32 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
storybook-static
.env
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
# Test results
/coverage/
/test-results/
/playwright-report/
/playwright/.cache/

View File

@@ -0,0 +1,3 @@
node_modules/
dist/
env.d.ts

View File

@@ -0,0 +1,63 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
module.exports = {
root: true,
env: {browser: true, es2020: true},
extends: [
'eslint:recommended',
'plugin:@typescript-eslint/recommended',
'plugin:react/recommended',
'plugin:react-hooks/recommended',
'plugin:import/recommended',
'plugin:prettier/recommended',
],
ignorePatterns: ['dist', '.eslintrc.cjs'],
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaVersion: 'latest',
sourceType: 'module',
project: ['./tsconfig.json', './tsconfig.node.json'],
tsconfigRootDir: __dirname,
},
plugins: ['react-refresh'],
settings: {
react: {
version: 'detect',
},
'import/resolver': {
typescript: {},
node: {},
},
},
rules: {
'no-console': 'warn',
curly: 'warn',
'react/jsx-no-useless-fragment': 'warn',
'@typescript-eslint/no-unused-vars': [
'warn',
{
argsIgnorePattern: '^_',
},
],
'react-refresh/only-export-components': [
'warn',
{
allowConstantExport: true,
},
],
'react/react-in-jsx-scope': 'off',
},
};

32
demo/frontend/.gitignore vendored Normal file
View File

@@ -0,0 +1,32 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
storybook-static
.env
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
# Test results
/coverage/
/test-results/
/playwright-report/
/playwright/.cache/

View File

@@ -0,0 +1,2 @@
node_modules/
dist/

View File

@@ -0,0 +1,9 @@
{
"arrowParens": "avoid",
"bracketSameLine": true,
"bracketSpacing": false,
"singleQuote": true,
"tabWidth": 2,
"trailingComma": "all",
"endOfLine": "auto"
}

View File

@@ -0,0 +1 @@
{}

View File

@@ -0,0 +1,26 @@
# Stage 1: Build Stage
FROM node:22.9.0 AS build
WORKDIR /app
# Copy package.json and yarn.lock
COPY package.json ./
COPY yarn.lock ./
# Install dependencies
RUN yarn install --frozen-lockfile
# Copy source code
COPY . .
# Build the application
RUN yarn build
# Stage 2: Production Stage
FROM nginx:latest
# Copy built files from the build stage to the production image
COPY --from=build /app/dist /usr/share/nginx/html
# Container startup command for the web server (nginx in this case)
CMD ["nginx", "-g", "daemon off;"]

14
demo/frontend/index.html Normal file
View File

@@ -0,0 +1,14 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, shrink-to-fit=no" />
<title>SAM 2 Demo | By Meta FAIR</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>

View File

@@ -0,0 +1,99 @@
{
"name": "frontend-vite",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"merge-schemas": "tsx schemas/merge-schemas",
"relay": "yarn merge-schemas && relay-compiler",
"build": "tsc && vite build",
"lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0",
"preview": "vite preview --open"
},
"dependencies": {
"@carbon/icons-react": "^11.34.1",
"@heroicons/react": "^2.0.18",
"@monaco-editor/react": "^4.6.0",
"@stylexjs/stylex": "^0.6.1",
"graphql": "^16.8.1",
"immer": "^10.0.3",
"immutability-helper": "^3.1.1",
"jotai": "^2.6.1",
"jotai-immer": "^0.3.0",
"localforage": "^1.10.0",
"monaco-editor": "^0.48.0",
"mp4box": "^0.5.2",
"pts": "^0.12.8",
"react": "^18.2.0",
"react-daisyui": "^4.1.0",
"react-device-detect": "^2.2.3",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.3",
"react-error-boundary": "^4.0.11",
"react-photo-album": "^2.3.0",
"react-pts-canvas": "^0.5.2",
"react-relay": "^16.2.0",
"react-router-dom": "^6.15.0",
"relay-runtime": "^16.2.0",
"serialize-error": "^11.0.3",
"use-immer": "^0.9.0",
"use-resize-observer": "^9.1.0"
},
"devDependencies": {
"@graphql-tools/load-files": "^7.0.0",
"@graphql-tools/merge": "^9.0.4",
"@tailwindcss/typography": "^0.5.9",
"@types/dom-webcodecs": "^0.1.11",
"@types/invariant": "^2.2.37",
"@types/node": "^20.14.10",
"@types/react": "^18.2.47",
"@types/react-dom": "^18.2.7",
"@types/react-relay": "^16.0.6",
"@types/relay-runtime": "^14.1.13",
"@typescript-eslint/eslint-plugin": "^6.18.1",
"@typescript-eslint/parser": "^6.18.1",
"@vitejs/plugin-react": "^4.2.1",
"autoprefixer": "^10.4.15",
"babel-plugin-relay": "^16.2.0",
"babel-plugin-strip-invariant": "^1.0.0",
"daisyui": "^3.6.3",
"eslint": "^8.48.0",
"eslint-config-prettier": "^9.0.0",
"eslint-import-resolver-alias": "^1.1.2",
"eslint-import-resolver-typescript": "^3.6.3",
"eslint-plugin-import": "^2.28.1",
"eslint-plugin-prettier": "^5.1.3",
"eslint-plugin-react": "^7.33.2",
"eslint-plugin-react-hooks": "^4.6.0",
"eslint-plugin-react-refresh": "^0.4.3",
"invariant": "^2.2.4",
"postcss": "^8.4.28",
"postinstall-postinstall": "^2.1.0",
"prettier": "^3.0.3",
"relay-compiler": "^16.2.0",
"sass": "^1.66.1",
"strip-ansi": "^7.1.0",
"tailwindcss": "^3.3.3",
"tsx": "^4.16.2",
"typescript": ">=4.3.5 <5.4.0",
"vite": "^5.0.11",
"vite-plugin-babel": "^1.2.0",
"vite-plugin-relay": "^2.0.0",
"vite-plugin-stylex-dev": "^0.5.2"
},
"resolutions": {
"wrap-ansi": "7.0.0"
},
"relay": {
"src": "./src/",
"schema": "./schema.graphql",
"language": "typescript",
"eagerEsModules": true,
"exclude": [
"**/node_modules/**",
"**/__mocks__/**",
"**/__generated__/**"
]
}
}

View File

@@ -0,0 +1,23 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
export default {
plugins: {
'postcss-import': {},
'tailwindcss/nesting': {},
tailwindcss: {},
autoprefixer: {},
},
};

View File

@@ -0,0 +1,212 @@
input AddPointsInput {
sessionId: String!
frameIndex: Int!
clearOldPoints: Boolean!
objectId: Int!
labels: [Int!]!
points: [[Float!]!]!
}
type CancelPropagateInVideo {
success: Boolean!
}
input CancelPropagateInVideoInput {
sessionId: String!
}
input ClearPointsInFrameInput {
sessionId: String!
frameIndex: Int!
objectId: Int!
}
type ClearPointsInVideo {
success: Boolean!
}
input ClearPointsInVideoInput {
sessionId: String!
}
type CloseSession {
success: Boolean!
}
input CloseSessionInput {
sessionId: String!
}
type Mutation {
startSession(input: StartSessionInput!): StartSession!
closeSession(input: CloseSessionInput!): CloseSession!
addPoints(input: AddPointsInput!): RLEMaskListOnFrame!
clearPointsInFrame(input: ClearPointsInFrameInput!): RLEMaskListOnFrame!
clearPointsInVideo(input: ClearPointsInVideoInput!): ClearPointsInVideo!
removeObject(input: RemoveObjectInput!): [RLEMaskListOnFrame!]!
cancelPropagateInVideo(
input: CancelPropagateInVideoInput!
): CancelPropagateInVideo!
createDeletionId: String!
acceptTos: Boolean!
acceptTermsOfService: String!
uploadVideo(
file: Upload!
startTimeSec: Float = null
durationTimeSec: Float = null
): Video!
uploadSharedVideo(file: Upload!): SharedVideo!
uploadAnnotations(file: Upload!): Boolean!
}
input PingInput {
sessionId: String!
}
type Pong {
success: Boolean!
}
type Query {
ping(input: PingInput!): Pong!
defaultVideo: Video!
videos(
"""
Returns the items in the list that come before the specified cursor.
"""
before: String = null
"""
Returns the items in the list that come after the specified cursor.
"""
after: String = null
"""
Returns the first n items from the list.
"""
first: Int = null
"""
Returns the last n items from the list.
"""
last: Int = null
): VideoConnection!
sharedVideo(path: String!): SharedVideo!
}
type RLEMask {
size: [Int!]!
counts: String!
order: String!
}
type RLEMaskForObject {
objectId: Int!
rleMask: RLEMask!
}
type RLEMaskListOnFrame {
frameIndex: Int!
rleMaskList: [RLEMaskForObject!]!
}
input RemoveObjectInput {
sessionId: String!
objectId: Int!
}
type StartSession {
sessionId: String!
}
input StartSessionInput {
path: String!
}
"""
The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as `"4"`) or integer (such as `4`) input value will be accepted as an ID.
"""
scalar GlobalID
@specifiedBy(url: "https://relay.dev/graphql/objectidentification.htm")
"""
An object with a Globally Unique ID
"""
interface Node {
"""
The Globally Unique ID of this object
"""
id: GlobalID!
}
"""
Information to aid in pagination.
"""
type PageInfo {
"""
When paginating forwards, are there more items?
"""
hasNextPage: Boolean!
"""
When paginating backwards, are there more items?
"""
hasPreviousPage: Boolean!
"""
When paginating backwards, the cursor to continue.
"""
startCursor: String
"""
When paginating forwards, the cursor to continue.
"""
endCursor: String
}
type SharedVideo {
path: String!
url: String!
}
scalar Upload
type Video implements Node {
"""
The Globally Unique ID of this object
"""
id: GlobalID!
path: String!
posterPath: String
width: Int!
height: Int!
url: String!
posterUrl: String!
}
"""
A connection to a list of items.
"""
type VideoConnection {
"""
Pagination data for this connection
"""
pageInfo: PageInfo!
"""
Contains the nodes in this connection
"""
edges: [VideoEdge!]!
}
"""
An edge in a connection.
"""
type VideoEdge {
"""
A cursor for use in pagination
"""
cursor: String!
"""
The item at the end of the edge
"""
node: Video!
}
schema {
query: Query
mutation: Mutation
}
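Clients drive this schema with ordinary GraphQL requests. As a hedged example, an `addPoints` payload against the types above might be assembled like this (the session id, coordinates, and values are illustrative only):

```python
import json

# Hypothetical request body for the addPoints mutation defined above.
query = """
mutation AddPoints($input: AddPointsInput!) {
  addPoints(input: $input) {
    frameIndex
    rleMaskList { objectId rleMask { counts size } }
  }
}
"""
variables = {
    "input": {
        "sessionId": "abc123",       # illustrative session id
        "frameIndex": 0,
        "clearOldPoints": True,
        "objectId": 1,
        "labels": [1],               # 1 = foreground, 0 = background
        "points": [[210.0, 350.0]],  # (x, y) pixel coordinates
    }
}
payload = json.dumps({"query": query, "variables": variables})
```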

View File

@@ -0,0 +1,105 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
input AddPointsInput {
sessionId: String!
frameIndex: Int!
clearOldPoints: Boolean!
objectId: Int!
labels: [Int!]!
points: [[Float!]!]!
}
type CancelPropagateInVideo {
success: Boolean!
}
input CancelPropagateInVideoInput {
sessionId: String!
}
input ClearPointsInFrameInput {
sessionId: String!
frameIndex: Int!
objectId: Int!
}
type ClearPointsInVideo {
success: Boolean!
}
input ClearPointsInVideoInput {
sessionId: String!
}
type CloseSession {
success: Boolean!
}
input CloseSessionInput {
sessionId: String!
}
type Mutation {
startSession(input: StartSessionInput!): StartSession!
closeSession(input: CloseSessionInput!): CloseSession!
addPoints(input: AddPointsInput!): RLEMaskListOnFrame!
clearPointsInFrame(input: ClearPointsInFrameInput!): RLEMaskListOnFrame!
clearPointsInVideo(input: ClearPointsInVideoInput!): ClearPointsInVideo!
removeObject(input: RemoveObjectInput!): [RLEMaskListOnFrame!]!
cancelPropagateInVideo(
input: CancelPropagateInVideoInput!
): CancelPropagateInVideo!
}
input PingInput {
sessionId: String!
}
type Pong {
success: Boolean!
}
type Query {
ping(input: PingInput!): Pong!
}
type RLEMask {
size: [Int!]!
counts: String!
order: String!
}
type RLEMaskForObject {
objectId: Int!
rleMask: RLEMask!
}
type RLEMaskListOnFrame {
frameIndex: Int!
rleMaskList: [RLEMaskForObject!]!
}
input RemoveObjectInput {
sessionId: String!
objectId: Int!
}
type StartSession {
sessionId: String!
}
input StartSessionInput {
path: String!
}

View File

@@ -0,0 +1,33 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {loadFilesSync} from '@graphql-tools/load-files';
import {mergeTypeDefs} from '@graphql-tools/merge';
import fs from 'fs';
import {print} from 'graphql';
import path from 'path';
import * as prettier from 'prettier';
import {fileURLToPath} from 'url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const loadedFiles = loadFilesSync(`${__dirname}/*.graphql`);
const typeDefs = mergeTypeDefs(loadedFiles);
const printedTypeDefs = print(typeDefs);
const prettyTypeDefs = await prettier.format(printedTypeDefs, {
parser: 'graphql',
});
fs.writeFileSync('schema.graphql', prettyTypeDefs);

View File

@@ -0,0 +1,143 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as `"4"`) or integer (such as `4`) input value will be accepted as an ID.
"""
scalar GlobalID
@specifiedBy(url: "https://relay.dev/graphql/objectidentification.htm")
type Mutation {
createDeletionId: String!
acceptTos: Boolean!
acceptTermsOfService: String!
uploadVideo(
file: Upload!
startTimeSec: Float = null
durationTimeSec: Float = null
): Video!
uploadSharedVideo(file: Upload!): SharedVideo!
uploadAnnotations(file: Upload!): Boolean!
}
"""
An object with a Globally Unique ID
"""
interface Node {
"""
The Globally Unique ID of this object
"""
id: GlobalID!
}
"""
Information to aid in pagination.
"""
type PageInfo {
"""
When paginating forwards, are there more items?
"""
hasNextPage: Boolean!
"""
When paginating backwards, are there more items?
"""
hasPreviousPage: Boolean!
"""
When paginating backwards, the cursor to continue.
"""
startCursor: String
"""
When paginating forwards, the cursor to continue.
"""
endCursor: String
}
type Query {
defaultVideo: Video!
videos(
"""
Returns the items in the list that come before the specified cursor.
"""
before: String = null
"""
Returns the items in the list that come after the specified cursor.
"""
after: String = null
"""
Returns the first n items from the list.
"""
first: Int = null
"""
Returns the last n items from the list.
"""
last: Int = null
): VideoConnection!
sharedVideo(path: String!): SharedVideo!
}
type SharedVideo {
path: String!
url: String!
}
scalar Upload
type Video implements Node {
"""
The Globally Unique ID of this object
"""
id: GlobalID!
path: String!
posterPath: String
width: Int!
height: Int!
url: String!
posterUrl: String!
}
"""
A connection to a list of items.
"""
type VideoConnection {
"""
Pagination data for this connection
"""
pageInfo: PageInfo!
"""
Contains the nodes in this connection
"""
edges: [VideoEdge!]!
}
"""
An edge in a connection.
"""
type VideoEdge {
"""
A cursor for use in pagination
"""
cursor: String!
"""
The item at the end of the edge
"""
node: Video!
}

33
demo/frontend/src/App.tsx Normal file
View File

@@ -0,0 +1,33 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import SAM2DemoApp from '@/demo/SAM2DemoApp';
import SettingsContextProvider from '@/settings/SettingsContextProvider';
import {RouterProvider, createBrowserRouter} from 'react-router-dom';
export default function App() {
const router = createBrowserRouter([
{
path: '*',
element: (
<SettingsContextProvider>
<SAM2DemoApp />
</SettingsContextProvider>
),
},
]);
return <RouterProvider router={router} />;
}

@@ -0,0 +1,374 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
@tailwind base;
@tailwind components;
@tailwind utilities;
.tab {
display: flex;
padding: 0px 0px;
margin-right: 6px;
align-items: center;
height: 100%;
}
@layer base {
@font-face {
font-family: 'Inter';
src: url(/fonts/Inter-VariableFont.ttf) format('truetype-variations');
}
}
body {
font-family: 'Inter', sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
body,
html,
#root {
height: 100%;
@media screen and (max-width: 768px) {
overflow: hidden;
}
}
:root {
--segEv-font: 'Inter', system-ui, -apple-system, BlinkMacSystemFont,
'Segoe UI', Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue',
sans-serif;
--perspective: 4000px;
color-scheme: dark;
}
h1,
h2,
h3,
h4,
h5,
h6 {
font-family: 'Inter', sans-serif;
}
.prose .display h1 {
@apply text-4xl text-gray-800 font-medium leading-tight;
}
.prose .display h2 {
@apply text-gray-800 font-medium leading-tight;
font-size: 2.5rem;
}
.prose h1 {
@apply text-3xl text-gray-800 font-medium leading-tight mt-2 mb-4;
letter-spacing: 0.016rem;
}
.prose h2 {
@apply text-2xl text-gray-800 font-medium leading-tight my-2;
letter-spacing: 0.01rem;
}
.prose h3 {
@apply text-xl text-gray-800 font-medium leading-tight my-2;
letter-spacing: 0.005rem;
}
.prose h4 {
@apply text-lg text-gray-800 font-medium leading-tight my-2;
}
.prose h5 {
@apply text-xl text-gray-700 font-normal leading-normal my-2;
letter-spacing: 0.005rem;
}
.prose h6 {
@apply text-base text-gray-700 font-normal leading-normal my-2;
}
.prose p {
@apply text-sm text-gray-700 font-normal leading-normal;
@apply leading-snug;
}
.prose ol,
.prose ul {
@apply text-sm text-gray-700 font-normal leading-normal;
padding-right: 2rem;
}
.dark-mode h1,
.dark-mode h2,
.dark-mode h3,
.dark-mode h4,
.dark-mode h5,
.dark-mode p,
.dark-mode ol,
.dark-mode ul,
.dark-mode p *,
.dark-mode ol *,
.dark-mode ul * {
@apply text-white;
}
.dark-mode h4,
.dark-mode h6,
.dark-mode li::marker,
.dark-mode a {
@apply text-gray-200;
}
.flex-grow-2 {
flex-grow: 2;
}
.flex-grow-3 {
flex-grow: 3;
}
.flex-grow-4 {
flex-grow: 4;
}
.flex-grow-5 {
flex-grow: 5;
}
.nav-title {
font-family: var(--segEv-font);
}
.segment-active {
animation: segment-highlight 2s linear infinite;
stroke-dasharray: 5, 10;
stroke-width: 4px;
}
@keyframes segment-highlight {
to {
stroke-dashoffset: 60;
}
}
.segment-select {
animation: segment-dotted 2s linear infinite;
stroke-dasharray: 3, 5;
stroke-width: 3px;
}
@keyframes segment-dotted {
to {
stroke-dashoffset: 24;
}
}
/**
* Daisy UI customizations
*/
.btn {
@apply normal-case rounded-md;
}
.comp_summary h1,
.comp_summary h2,
.comp_summary h3 {
@apply mb-4;
}
.disabled {
opacity: 0.4;
pointer-events: none;
}
.absolute-center {
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
}
@screen lg {
.drawer .grid {
grid-template-columns: max-content 1fr;
}
}
.fade-in {
transition: opacity 0.5s;
opacity: 1 !important;
}
.react-photo-gallery--gallery > div {
gap: 0.25rem;
}
.sticker {
filter: drop-shadow(0.25rem 0.25rem 5px #fff)
drop-shadow(-0.25rem 0.25rem 5px #fff)
drop-shadow(0.25rem -0.25rem 5px #fff)
drop-shadow(-0.25rem -0.25rem 5px #fff);
transition: filter 0.3s ease-out;
}
.sticker:hover,
.sticker-select {
filter: drop-shadow(0.25rem 0.25rem 1px #2962d9)
drop-shadow(-0.25rem 0.25rem 1px #2962d9)
drop-shadow(0.25rem -0.25rem 1px #2962d9)
drop-shadow(-0.25rem -0.25rem 1px #2962d9);
}
/* keyframe animations */
.mask-path,
.reveal {
opacity: 0;
animation: reveal 0.4s ease-in forwards;
}
.slow-reveal {
animation: reveal 1s ease-in;
}
.reveal-then-conceal {
opacity: 0;
animation: reveal-then-conceal 1.5s ease-in-out forwards;
}
@keyframes reveal {
from {
opacity: 0;
}
to {
opacity: 1;
}
}
@keyframes reveal-then-conceal {
from {
opacity: 0;
}
50% {
opacity: 1;
}
to {
opacity: 0;
}
}
.background-animate {
background-size: 400%;
animation: pulse 3s ease infinite;
}
@keyframes pulse {
0%,
100% {
background-position: 0% 50%;
}
50% {
background-position: 100% 50%;
}
}
/* Fix for Safari and Mobile Safari:
Extracted Tailwind progress-bar styles and applied
them to a <div> instead of a <progress> element */
.loading-bar {
position: relative;
width: 100%;
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
overflow: hidden;
height: 0.5rem;
border-radius: 1rem;
border-radius: var(--rounded-box, 1rem);
vertical-align: baseline;
background-color: hsl(var(--n) / var(--tw-bg-opacity));
--tw-bg-opacity: 0.2;
&::after {
--tw-bg-opacity: 1;
background-color: hsl(var(--n) / var(--tw-bg-opacity));
content: '';
position: absolute;
top: 0px;
bottom: 0px;
left: -40%;
width: 33.333333%;
border-radius: 1rem;
border-radius: var(--rounded-box, 1rem);
animation: loading 5s infinite ease-in-out;
}
}
@keyframes loading {
50% {
left: 107%;
}
}
@keyframes inAnimation {
0% {
opacity: 0;
max-height: 0px;
}
50% {
opacity: 1;
}
100% {
opacity: 1;
max-height: 600px;
}
}
@keyframes outAnimation {
0% {
opacity: 1;
max-height: 600px;
}
50% {
opacity: 0;
}
100% {
opacity: 0;
max-height: 0px;
}
}
@keyframes ellipsisAnimation {
0% {
content: '';
}
25% {
content: '.';
}
50% {
content: '..';
}
75% {
content: '...';
}
}
.ellipsis::after {
content: '';
animation: ellipsisAnimation 1.5s infinite;
}

@@ -0,0 +1,284 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {cloneFrame} from '@/common/codecs/WebCodecUtils';
import {FileStream} from '@/common/utils/FileUtils';
import {
createFile,
DataStream,
MP4ArrayBuffer,
MP4File,
MP4Sample,
MP4VideoTrack,
} from 'mp4box';
import {isAndroid, isChrome, isEdge, isWindows} from 'react-device-detect';
export type ImageFrame = {
bitmap: VideoFrame;
timestamp: number;
duration: number;
};
export type DecodedVideo = {
width: number;
height: number;
frames: ImageFrame[];
numFrames: number;
fps: number;
};
function decodeInternal(
identifier: string,
onReady: (mp4File: MP4File) => Promise<void>,
onProgress: (decodedVideo: DecodedVideo) => void,
): Promise<DecodedVideo> {
return new Promise((resolve, reject) => {
const imageFrames: ImageFrame[] = [];
const globalSamples: MP4Sample[] = [];
let decoder: VideoDecoder;
let track: MP4VideoTrack | null = null;
const mp4File = createFile();
mp4File.onError = reject;
mp4File.onReady = async info => {
if (info.videoTracks.length > 0) {
track = info.videoTracks[0];
} else {
// The video does not have a video track, so check whether an
// "otherTracks" entry is available. Note, I couldn't find any documentation
// about "otherTracks" in WebCodecs [1], but it was available in the
// info for MP4V-ES, which isn't supported by Chrome [2].
// However, we'll still try to get the track and then throw an error
// further down in the VideoDecoder.isConfigSupported if the codec is
// not supported by the browser.
//
// [1] https://www.w3.org/TR/webcodecs/
// [2] https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Video_codecs#mp4v-es
track = info.otherTracks[0];
}
if (track == null) {
reject(new Error(`${identifier} does not contain a video track`));
return;
}
const timescale = track.timescale;
const edits = track.edits;
let frame_n = 0;
decoder = new VideoDecoder({
// Be careful with any await in this function. The VideoDecoder will
// not await output and continue calling it with decoded frames.
async output(inputFrame) {
if (track == null) {
reject(new Error(`${identifier} does not contain a video track`));
return;
}
const saveTrack = track;
// If the track has edits, we'll need to check that only frames are
// returned that are within the edit list. This can happen for
// trimmed videos that have not been transcoded and therefore the
// video track contains more frames than those visually rendered when
// playing back the video.
if (edits != null && edits.length > 0) {
const cts = Math.round(
(inputFrame.timestamp * timescale) / 1_000_000,
);
if (cts < edits[0].media_time) {
inputFrame.close();
return;
}
}
// Workaround for Chrome where the decoding stops at ~17 frames unless
// the VideoFrame is closed. So, the workaround here is to create a
// new VideoFrame and close the decoded VideoFrame.
// The frame has to be cloned, or otherwise some frames at the end of the
// video will be black. Note, the default VideoFrame.clone doesn't work
// here; instead we use the frame-cloning approach found here:
// https://webcodecs-blogpost-demo.glitch.me/
if (
(isAndroid && isChrome) ||
(isWindows && isChrome) ||
(isWindows && isEdge)
) {
const clonedFrame = await cloneFrame(inputFrame);
inputFrame.close();
inputFrame = clonedFrame;
}
const sample = globalSamples[frame_n];
if (sample != null) {
const duration = (sample.duration * 1_000_000) / sample.timescale;
imageFrames.push({
bitmap: inputFrame,
timestamp: inputFrame.timestamp,
duration,
});
// Sort frames in order of timestamp. This is needed because Safari
// can return decoded frames out of order.
imageFrames.sort((a, b) => (a.timestamp > b.timestamp ? 1 : -1));
// Update progress on first frame and then every 100th frame
if (onProgress != null && frame_n % 100 === 0) {
onProgress({
width: saveTrack.track_width,
height: saveTrack.track_height,
frames: imageFrames,
numFrames: saveTrack.nb_samples,
fps:
(saveTrack.nb_samples / saveTrack.duration) *
saveTrack.timescale,
});
}
}
frame_n++;
if (saveTrack.nb_samples === frame_n) {
// Sort frames in order of timestamp. This is needed because Safari
// can return decoded frames out of order.
imageFrames.sort((a, b) => (a.timestamp > b.timestamp ? 1 : -1));
resolve({
width: saveTrack.track_width,
height: saveTrack.track_height,
frames: imageFrames,
numFrames: saveTrack.nb_samples,
fps:
(saveTrack.nb_samples / saveTrack.duration) *
saveTrack.timescale,
});
}
},
error(error) {
reject(error);
},
});
let description;
const trak = mp4File.getTrackById(track.id);
const entries = trak?.mdia?.minf?.stbl?.stsd?.entries;
if (entries == null) {
return;
}
for (const entry of entries) {
if (entry.avcC || entry.hvcC) {
const stream = new DataStream(undefined, 0, DataStream.BIG_ENDIAN);
if (entry.avcC) {
entry.avcC.write(stream);
} else if (entry.hvcC) {
entry.hvcC.write(stream);
}
description = new Uint8Array(stream.buffer, 8); // Remove the box header.
break;
}
}
const configuration: VideoDecoderConfig = {
codec: track.codec,
codedWidth: track.track_width,
codedHeight: track.track_height,
description,
};
const supportedConfig =
await VideoDecoder.isConfigSupported(configuration);
if (supportedConfig.supported === true) {
decoder.configure(configuration);
mp4File.setExtractionOptions(track.id, null, {
nbSamples: Infinity,
});
mp4File.start();
} else {
reject(
new Error(
`Decoder config failed: config ${JSON.stringify(
supportedConfig.config,
)} is not supported`,
),
);
return;
}
};
mp4File.onSamples = async (
_id: number,
_user: unknown,
samples: MP4Sample[],
) => {
for (const sample of samples) {
globalSamples.push(sample);
decoder.decode(
new EncodedVideoChunk({
type: sample.is_sync ? 'key' : 'delta',
timestamp: (sample.cts * 1_000_000) / sample.timescale,
duration: (sample.duration * 1_000_000) / sample.timescale,
data: sample.data,
}),
);
}
await decoder.flush();
decoder.close();
};
onReady(mp4File);
});
}
export function decode(
file: File,
onProgress: (decodedVideo: DecodedVideo) => void,
): Promise<DecodedVideo> {
return decodeInternal(
file.name,
async (mp4File: MP4File) => {
const reader = new FileReader();
reader.onload = function () {
const result = this.result as MP4ArrayBuffer;
if (result != null) {
result.fileStart = 0;
mp4File.appendBuffer(result);
}
mp4File.flush();
};
reader.readAsArrayBuffer(file);
},
onProgress,
);
}
export function decodeStream(
fileStream: FileStream,
onProgress: (decodedVideo: DecodedVideo) => void,
): Promise<DecodedVideo> {
return decodeInternal(
'stream',
async (mp4File: MP4File) => {
let part = await fileStream.next();
while (part.done === false) {
const result = part.value.data.buffer as MP4ArrayBuffer;
if (result != null) {
result.fileStart = part.value.range.start;
mp4File.appendBuffer(result);
}
mp4File.flush();
part = await fileStream.next();
}
},
onProgress,
);
}
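The edit-list trimming check in `decodeInternal` converts a `VideoFrame` timestamp (microseconds) back into the track's timescale before comparing it against the first edit's `media_time`. A standalone sketch of that conversion — the function names here are illustrative, not from the source above:

```typescript
// Convert a frame timestamp in microseconds to composition time units (cts)
// in the track's timescale, matching the edit-list check in decodeInternal.
function toCts(timestampMicros: number, timescale: number): number {
  return Math.round((timestampMicros * timescale) / 1_000_000);
}

// A decoded frame is dropped when its cts falls before the first edit's
// media_time, i.e. it belongs to the trimmed-off head of the track.
function isTrimmedOut(
  timestampMicros: number,
  timescale: number,
  edits: {media_time: number}[],
): boolean {
  return (
    edits.length > 0 && toCts(timestampMicros, timescale) < edits[0].media_time
  );
}
```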


@@ -0,0 +1,139 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {ImageFrame} from '@/common/codecs/VideoDecoder';
import {MP4ArrayBuffer, createFile} from 'mp4box';
// The selection of timescale and seconds/key-frame value are
// explained in the following docs: https://github.com/vjeux/mp4-h264-re-encode
const TIMESCALE = 90000;
const SECONDS_PER_KEY_FRAME = 2;
export function encode(
width: number,
height: number,
numFrames: number,
framesGenerator: AsyncGenerator<ImageFrame, unknown>,
progressCallback?: (progress: number) => void,
): Promise<MP4ArrayBuffer> {
return new Promise((resolve, reject) => {
let encodedFrameIndex = 0;
let nextKeyFrameTimestamp = 0;
let trackID: number | null = null;
const durations: number[] = [];
const outputFile = createFile();
const encoder = new VideoEncoder({
output(chunk, metaData) {
const uint8 = new Uint8Array(chunk.byteLength);
chunk.copyTo(uint8);
const description = metaData?.decoderConfig?.description;
if (trackID === null) {
trackID = outputFile.addTrack({
width: width,
height: height,
timescale: TIMESCALE,
avcDecoderConfigRecord: description,
});
}
const shiftedDuration = durations.shift();
if (shiftedDuration != null) {
outputFile.addSample(trackID, uint8, {
duration: getScaledDuration(shiftedDuration),
is_sync: chunk.type === 'key',
});
encodedFrameIndex++;
progressCallback?.(encodedFrameIndex / numFrames);
}
if (encodedFrameIndex === numFrames) {
resolve(outputFile.getBuffer());
}
},
error(error) {
reject(error);
return;
},
});
const setConfigurationAndEncodeFrames = async () => {
// The codec value was taken from the following implementation and seems
// reasonable for our use case for now:
// https://github.com/vjeux/mp4-h264-re-encode/blob/main/mp4box.html#L103
// Additional details about codecs can be found here:
// - https://developer.mozilla.org/en-US/docs/Web/Media/Formats/codecs_parameter
// - https://www.w3.org/TR/webcodecs-codec-registry/#video-codec-registry
//
// The following setting is a good compromise between output video file
// size and quality. The latencyMode "realtime" is needed for Safari,
// which otherwise will produce 20x larger files when in quality
// latencyMode. Chrome does a really good job with file size even when
// latencyMode is set to quality.
const configuration: VideoEncoderConfig = {
codec: 'avc1.4d0034',
width: roundToNearestEven(width),
height: roundToNearestEven(height),
bitrate: 14_000_000,
alpha: 'discard',
bitrateMode: 'variable',
latencyMode: 'realtime',
};
const supportedConfig =
await VideoEncoder.isConfigSupported(configuration);
if (supportedConfig.supported === true) {
encoder.configure(configuration);
} else {
throw new Error(
`Unsupported video encoder config ${JSON.stringify(supportedConfig)}`,
);
}
for await (const frame of framesGenerator) {
const {bitmap, duration, timestamp} = frame;
durations.push(duration);
let keyFrame = false;
if (timestamp >= nextKeyFrameTimestamp) {
await encoder.flush();
keyFrame = true;
nextKeyFrameTimestamp = timestamp + SECONDS_PER_KEY_FRAME * 1e6;
}
encoder.encode(bitmap, {keyFrame});
bitmap.close();
}
await encoder.flush();
encoder.close();
};
setConfigurationAndEncodeFrames();
});
}
function getScaledDuration(rawDuration: number) {
return rawDuration / (1_000_000 / TIMESCALE);
}
function roundToNearestEven(dim: number) {
const rounded = Math.round(dim);
if (rounded % 2 === 0) {
return rounded;
} else {
return rounded + (rounded > dim ? -1 : 1);
}
}
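The two helpers at the end of this file are small pure functions; the following standalone sketch re-implements the same math (plus the 2-second key-frame deadline used in the encode loop) so it can be exercised outside the browser — `nextKeyFrameDeadline` is an illustrative name, not a function in the source:

```typescript
const TIMESCALE = 90000; // 90 kHz MP4 timescale
const SECONDS_PER_KEY_FRAME = 2;

// H.264 requires even coded dimensions; odd rounded values are nudged
// toward the original (unrounded) size.
function roundToNearestEven(dim: number): number {
  const rounded = Math.round(dim);
  if (rounded % 2 === 0) {
    return rounded;
  }
  return rounded + (rounded > dim ? -1 : 1);
}

// Convert a frame duration in microseconds into 90 kHz timescale units.
function getScaledDuration(rawDurationMicros: number): number {
  return rawDurationMicros / (1_000_000 / TIMESCALE);
}

// A key frame is forced whenever a frame's timestamp (microseconds) passes
// the scheduled deadline; the next deadline then moves 2 s further out.
function nextKeyFrameDeadline(timestampMicros: number): number {
  return timestampMicros + SECONDS_PER_KEY_FRAME * 1e6;
}
```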


@@ -0,0 +1,50 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// https://github.com/w3c/webcodecs/issues/88
// https://issues.chromium.org/issues/40725065
// https://webcodecs-blogpost-demo.glitch.me/
export async function cloneFrame(frame: VideoFrame): Promise<VideoFrame> {
const {
codedHeight,
codedWidth,
colorSpace,
displayHeight,
displayWidth,
format,
timestamp,
} = frame;
const rect = {x: 0, y: 0, width: codedWidth, height: codedHeight};
const data = new ArrayBuffer(frame.allocationSize({rect}));
try {
await frame.copyTo(data, {rect});
} catch (error) {
// The VideoFrame#copyTo on x64 builds on macOS fails. The workaround here
// is to clone the frame.
// https://stackoverflow.com/questions/77898766/inconsistent-behavior-of-webcodecs-copyto-method-across-different-browsers-an
return frame.clone();
}
return new VideoFrame(data, {
codedHeight,
codedWidth,
colorSpace,
displayHeight,
displayWidth,
duration: frame.duration ?? undefined,
format: format!,
timestamp,
visibleRect: frame.visibleRect ?? undefined,
});
}


@@ -0,0 +1,74 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ChangeVideoModal from '@/common/components/gallery/ChangeVideoModal';
import {DEMO_SHORT_NAME} from '@/demo/DemoConfig';
import {spacing} from '@/theme/tokens.stylex';
import {ImageCopy} from '@carbon/icons-react';
import stylex from '@stylexjs/stylex';
import {Button} from 'react-daisyui';
const styles = stylex.create({
container: {
position: 'relative',
backgroundColor: '#000',
padding: spacing[5],
paddingVertical: spacing[6],
display: 'flex',
flexDirection: 'column',
gap: spacing[4],
},
});
export default function MobileFirstClickBanner() {
return (
<div {...stylex.props(styles.container)}>
<div className="flex text-white text-lg">
Click an object in the video to start
</div>
<div className="text-sm text-[#A7B3BF]">
<p>
You&apos;ll be able to use {DEMO_SHORT_NAME} to make fun edits to any
video by tracking objects and applying visual effects. To start, click
any object in the video.
</p>
</div>
<div className="flex items-center">
<ChangeVideoModal
videoGalleryModalTrigger={MobileVideoGalleryModalTrigger}
showUploadInGallery={true}
/>
</div>
</div>
);
}
type MobileVideoGalleryModalTriggerProps = {
onClick: () => void;
};
function MobileVideoGalleryModalTrigger({
onClick,
}: MobileVideoGalleryModalTriggerProps) {
return (
<Button
color="ghost"
startIcon={<ImageCopy size={20} />}
onClick={onClick}
className="text-white p-0">
Change video
</Button>
);
}


@@ -0,0 +1,41 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {PropsWithChildren} from 'react';
type Props = PropsWithChildren<{
className?: string;
message: string;
position?: 'left' | 'top' | 'right' | 'bottom';
}>;
/**
* This is a custom Tooltip component because React Daisy UI does not have an
* option to *only* show tooltip on large devices.
*/
export default function Tooltip({
children,
className = '',
message,
position = 'top',
}: Props) {
return (
<div
className={`lg:tooltip tooltip-${position} ${className}`}
data-tip={message}>
{children}
</div>
);
}


@@ -0,0 +1,49 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar';
import useVideo from '@/common/components/video/editor/useVideo';
import {activeTrackletObjectIdAtom, labelTypeAtom} from '@/demo/atoms';
import {Add} from '@carbon/icons-react';
import {useSetAtom} from 'jotai';
export default function AddObjectButton() {
const video = useVideo();
const setActiveTrackletId = useSetAtom(activeTrackletObjectIdAtom);
const setLabelType = useSetAtom(labelTypeAtom);
const {enqueueMessage} = useMessagesSnackbar();
async function addObject() {
enqueueMessage('addObjectClick');
const tracklet = await video?.createTracklet();
if (tracklet != null) {
setActiveTrackletId(tracklet.id);
setLabelType('positive');
}
}
return (
<div
onClick={addObject}
className="group flex justify-start mx-4 px-4 bg-transparent text-white !rounded-xl border-none cursor-pointer">
<div className="flex gap-6 items-center">
<div className=" group-hover:bg-graydark-700 border border-white relative h-12 w-12 md:w-20 md:h-20 shrink-0 rounded-lg flex items-center justify-center">
<Add size={36} className="group-hover:text-white text-gray-300" />
</div>
<div className="font-medium text-base">Add another object</div>
</div>
</div>
);
}


@@ -0,0 +1,81 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import useRestartSession from '@/common/components/session/useRestartSession';
import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar';
import useVideo from '@/common/components/video/editor/useVideo';
import {isPlayingAtom, isStreamingAtom, labelTypeAtom} from '@/demo/atoms';
import {Reset} from '@carbon/icons-react';
import stylex from '@stylexjs/stylex';
import {useAtomValue, useSetAtom} from 'jotai';
import {useState} from 'react';
import {Button, Loading} from 'react-daisyui';
const styles = stylex.create({
container: {
display: 'flex',
alignItems: 'center',
},
});
type Props = {
onRestart: () => void;
};
export default function ClearAllPointsInVideoButton({onRestart}: Props) {
const [isLoading, setIsLoading] = useState<boolean>(false);
const isPlaying = useAtomValue(isPlayingAtom);
const isStreaming = useAtomValue(isStreamingAtom);
const setLabelType = useSetAtom(labelTypeAtom);
const {clearMessage} = useMessagesSnackbar();
const {restartSession} = useRestartSession();
const video = useVideo();
async function handleRestart() {
if (video === null) {
return;
}
setIsLoading(true);
if (isPlaying) {
video.pause();
}
if (isStreaming) {
await video.abortStreamMasks();
}
const isSuccessful = await video.clearPointsInVideo();
if (!isSuccessful) {
await restartSession();
}
video.frame = 0;
setLabelType('positive');
onRestart();
clearMessage();
setIsLoading(false);
}
return (
<div {...stylex.props(styles.container)}>
<Button
color="ghost"
onClick={handleRestart}
className="!px-4 !rounded-full font-medium text-white hover:bg-black"
startIcon={isLoading ? <Loading size="sm" /> : <Reset size={20} />}>
Start over
</Button>
</div>
);
}


@@ -0,0 +1,38 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import PrimaryCTAButton from '@/common/components/button/PrimaryCTAButton';
import useVideo from '@/common/components/video/editor/useVideo';
import {ChevronRight} from '@carbon/icons-react';
type Props = {
onSessionClose: () => void;
};
export default function CloseSessionButton({onSessionClose}: Props) {
const video = useVideo();
function handleCloseSession() {
video?.closeSession();
video?.logAnnotations();
onSessionClose();
}
return (
<PrimaryCTAButton onClick={handleCloseSession} endIcon={<ChevronRight />}>
Good to go
</PrimaryCTAButton>
);
}


@@ -0,0 +1,49 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ChangeVideo from '@/common/components/gallery/ChangeVideoModal';
import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar';
import {DEMO_SHORT_NAME} from '@/demo/DemoConfig';
import {useEffect, useRef} from 'react';
export default function FirstClickView() {
const isFirstClickMessageShown = useRef(false);
const {enqueueMessage} = useMessagesSnackbar();
useEffect(() => {
if (!isFirstClickMessageShown.current) {
isFirstClickMessageShown.current = true;
enqueueMessage('firstClick');
}
}, [enqueueMessage]);
return (
<div className="w-full h-full flex flex-col p-8">
<div className="grow flex flex-col gap-6">
<h2 className="text-2xl">Click an object in the video to start</h2>
<p className="!text-gray-60">
You&apos;ll be able to use {DEMO_SHORT_NAME} to make fun edits to any
video by tracking objects and applying visual effects.
</p>
<p className="!text-gray-60">
To start, click any object in the video.
</p>
</div>
<div className="flex items-center">
<ChangeVideo />
</div>
</div>
);
}


@@ -0,0 +1,30 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {InformationFilled} from '@carbon/icons-react';
export default function LimitNotice() {
return (
<div className="mt-6 gap-3 mx-6 flex items-center text-gray-400">
<div>
<InformationFilled size={32} />
</div>
<div className="text-sm leading-snug">
In this demo, you can track up to 3 objects, even though the SAM 2 model
does not have a limit.
</div>
</div>
);
}


@@ -0,0 +1,78 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ClearAllPointsInVideoButton from '@/common/components/annotations/ClearAllPointsInVideoButton';
import ObjectThumbnail from '@/common/components/annotations/ObjectThumbnail';
import {OBJECT_TOOLBAR_INDEX} from '@/common/components/toolbar/ToolbarConfig';
import {BaseTracklet} from '@/common/tracker/Tracker';
import {activeTrackletObjectIdAtom, trackletObjectsAtom} from '@/demo/atoms';
import {spacing} from '@/theme/tokens.stylex';
import stylex from '@stylexjs/stylex';
import {useAtomValue, useSetAtom} from 'jotai';
const styles = stylex.create({
container: {
display: 'flex',
padding: spacing[5],
borderTop: '1px solid #DEE3E9',
},
trackletsContainer: {
flexGrow: 1,
display: 'flex',
alignItems: 'center',
gap: spacing[5],
},
});
type Props = {
showActiveObject: () => void;
onTabChange: (newIndex: number) => void;
};
export default function MobileObjectsList({
showActiveObject,
onTabChange,
}: Props) {
const tracklets = useAtomValue(trackletObjectsAtom);
const setActiveTrackletId = useSetAtom(activeTrackletObjectIdAtom);
function handleSelectObject(tracklet: BaseTracklet) {
setActiveTrackletId(tracklet.id);
showActiveObject();
}
return (
<div {...stylex.props(styles.container)}>
<div {...stylex.props(styles.trackletsContainer)}>
{tracklets.map(tracklet => {
const {id, color, thumbnail} = tracklet;
return (
<ObjectThumbnail
key={id}
color={color}
thumbnail={thumbnail}
onClick={() => {
handleSelectObject(tracklet);
}}
/>
);
})}
</div>
<ClearAllPointsInVideoButton
onRestart={() => onTabChange(OBJECT_TOOLBAR_INDEX)}
/>
</div>
);
}


@@ -0,0 +1,51 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import MobileObjectsToolbarHeader from '@/common/components/annotations/MobileObjectsToolbarHeader';
import ObjectsToolbarBottomActions from '@/common/components/annotations/ObjectsToolbarBottomActions';
import {getObjectLabel} from '@/common/components/annotations/ObjectUtils';
import ToolbarObject from '@/common/components/annotations/ToolbarObject';
import MobileFirstClickBanner from '@/common/components/MobileFirstClickBanner';
import {activeTrackletObjectAtom, isFirstClickMadeAtom} from '@/demo/atoms';
import {useAtomValue} from 'jotai';
type Props = {
onTabChange: (newIndex: number) => void;
};
export default function MobileObjectsToolbar({onTabChange}: Props) {
const activeTracklet = useAtomValue(activeTrackletObjectAtom);
const isFirstClickMade = useAtomValue(isFirstClickMadeAtom);
if (!isFirstClickMade) {
return <MobileFirstClickBanner />;
}
return (
<div className="w-full">
<MobileObjectsToolbarHeader />
{activeTracklet != null && (
<ToolbarObject
label={getObjectLabel(activeTracklet)}
tracklet={activeTracklet}
isActive={true}
isMobile={true}
/>
)}
<ObjectsToolbarBottomActions onTabChange={onTabChange} />
</div>
);
}


@@ -0,0 +1,36 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ToolbarProgressChip from '@/common/components/toolbar/ToolbarProgressChip';
import {isStreamingAtom, streamingStateAtom} from '@/demo/atoms';
import {useAtomValue} from 'jotai';
export default function MobileObjectsToolbarHeader() {
const isStreaming = useAtomValue(isStreamingAtom);
const streamingState = useAtomValue(streamingStateAtom);
return (
<div className="w-full flex gap-4 items-center px-5 py-5">
<div className="grow text-sm text-white">
<ToolbarProgressChip />
{streamingState === 'full'
? 'Review your selected objects across the video, and continue to edit if needed. Once everything looks good, press “Next” to continue.'
: isStreaming
? 'Watch the video closely for any places where your objects aren’t tracked correctly. You can also stop tracking to make additional edits.'
: 'Edit your object selection with a few more clicks if needed. Press “Track objects” to track your objects throughout the video.'}
</div>
</div>
);
}


@@ -0,0 +1,116 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import PointsToggle from '@/common/components/annotations/PointsToggle';
import useVideo from '@/common/components/video/editor/useVideo';
import useReportError from '@/common/error/useReportError';
import {
activeTrackletObjectIdAtom,
isPlayingAtom,
isStreamingAtom,
} from '@/demo/atoms';
import {
AddFilled,
Select_02,
SubtractFilled,
TrashCan,
} from '@carbon/icons-react';
import {useAtom, useAtomValue} from 'jotai';
import {useState} from 'react';
import type {ButtonProps} from 'react-daisyui';
import {Button} from 'react-daisyui';
type Props = {
objectId: number;
active: boolean;
};
function CustomButton({className = '', ...props}: ButtonProps) {
return (
<Button
size="sm"
color="ghost"
className={`font-medium border-none hover:bg-black px-2 h-10 ${className}`}
{...props}>
{props.children}
</Button>
);
}
export default function ObjectActions({objectId, active}: Props) {
const [isRemovingObject, setIsRemovingObject] = useState<boolean>(false);
const [activeTrackId, setActiveTrackletId] = useAtom(
activeTrackletObjectIdAtom,
);
const isStreaming = useAtomValue(isStreamingAtom);
const isPlaying = useAtomValue(isPlayingAtom);
const video = useVideo();
const reportError = useReportError();
async function handleRemoveObject(
event: React.MouseEvent<HTMLButtonElement>,
) {
try {
event.stopPropagation();
setIsRemovingObject(true);
if (isStreaming) {
await video?.abortStreamMasks();
}
if (isPlaying) {
video?.pause();
}
await video?.deleteTracklet(objectId);
} catch (error) {
reportError(error);
} finally {
setIsRemovingObject(false);
if (activeTrackId === objectId) {
setActiveTrackletId(null);
}
}
}
return (
<div>
{active && (
<div className="text-sm mt-1 leading-snug text-gray-400 hidden md:block ml-2 md:mb-4">
Select <AddFilled size={14} className="inline" /> to add areas to the
object and <SubtractFilled size={14} className="inline" /> to remove
areas from the object in the video. Click on an existing point to
delete it.
</div>
)}
<div className="flex justify-between items-center md:mt-2 mt-0">
{active ? (
<PointsToggle />
) : (
<>
<CustomButton startIcon={<Select_02 size={24} />}>
Edit selection
</CustomButton>
<CustomButton
loading={isRemovingObject}
onClick={handleRemoveObject}
startIcon={!isRemovingObject && <TrashCan size={24} />}>
<span className="hidden md:inline">Clear</span>
</CustomButton>
</>
)}
</div>
</div>
);
}


@@ -0,0 +1,46 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {BLUE_PINK_FILL_BR} from '@/theme/gradientStyle';
type Props = {
showPlus?: boolean;
onClick?: () => void;
};
export default function ObjectPlaceholder({showPlus = true, onClick}: Props) {
return (
<div
className={`relative ${BLUE_PINK_FILL_BR} h-12 w-12 md:h-20 md:w-20 shrink-0 rounded-lg`}
onClick={onClick}>
{showPlus && (
<div className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2">
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
viewBox="0 0 16 16"
fill="none">
<path
d="M16 7H9V0H7V7H0V9H7V16H9V9H16V7Z"
fill="#667788"
fillOpacity={1}
/>
</svg>
</div>
)}
</div>
);
}


@@ -0,0 +1,37 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
type Props = {
thumbnail: string | null;
color: string;
onClick?: () => void;
};
export default function ObjectThumbnail({thumbnail, color, onClick}: Props) {
return (
<div
className="relative h-12 w-12 md:w-20 md:h-20 shrink-0 p-2 rounded-lg bg-contain bg-no-repeat bg-center"
style={{
backgroundColor: color,
}}
onClick={onClick}>
<div
className="w-full h-full bg-contain bg-no-repeat bg-center"
style={{
backgroundImage: thumbnail == null ? 'none' : `url(${thumbnail})`,
}}></div>
</div>
);
}


@@ -0,0 +1,20 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {BaseTracklet} from '@/common/tracker/Tracker';
export function getObjectLabel(tracklet: BaseTracklet) {
return `Object ${tracklet.id + 1}`;
}


@@ -0,0 +1,72 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import AddObjectButton from '@/common/components/annotations/AddObjectButton';
import FirstClickView from '@/common/components/annotations/FirstClickView';
import LimitNotice from '@/common/components/annotations/LimitNotice';
import ObjectsToolbarBottomActions from '@/common/components/annotations/ObjectsToolbarBottomActions';
import ObjectsToolbarHeader from '@/common/components/annotations/ObjectsToolbarHeader';
import {getObjectLabel} from '@/common/components/annotations/ObjectUtils';
import ToolbarObject from '@/common/components/annotations/ToolbarObject';
import {
activeTrackletObjectAtom,
activeTrackletObjectIdAtom,
isAddObjectEnabledAtom,
isFirstClickMadeAtom,
isTrackletObjectLimitReachedAtom,
trackletObjectsAtom,
} from '@/demo/atoms';
import {useAtomValue, useSetAtom} from 'jotai';
type Props = {
onTabChange: (newIndex: number) => void;
};
export default function ObjectsToolbar({onTabChange}: Props) {
const tracklets = useAtomValue(trackletObjectsAtom);
const activeTracklet = useAtomValue(activeTrackletObjectAtom);
const setActiveTrackletId = useSetAtom(activeTrackletObjectIdAtom);
const isFirstClickMade = useAtomValue(isFirstClickMadeAtom);
const isObjectLimitReached = useAtomValue(isTrackletObjectLimitReachedAtom);
const isAddObjectEnabled = useAtomValue(isAddObjectEnabledAtom);
if (!isFirstClickMade) {
return <FirstClickView />;
}
return (
<div className="flex flex-col h-full">
<ObjectsToolbarHeader />
<div className="grow w-full overflow-y-auto">
{tracklets.map(tracklet => {
return (
<ToolbarObject
key={tracklet.id}
label={getObjectLabel(tracklet)}
tracklet={tracklet}
isActive={activeTracklet?.id === tracklet.id}
onClick={() => {
setActiveTrackletId(tracklet.id);
}}
/>
);
})}
{isAddObjectEnabled && <AddObjectButton />}
{isObjectLimitReached && <LimitNotice />}
</div>
<ObjectsToolbarBottomActions onTabChange={onTabChange} />
</div>
);
}


@@ -0,0 +1,52 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ClearAllPointsInVideoButton from '@/common/components/annotations/ClearAllPointsInVideoButton';
import CloseSessionButton from '@/common/components/annotations/CloseSessionButton';
import TrackAndPlayButton from '@/common/components/button/TrackAndPlayButton';
import ToolbarBottomActionsWrapper from '@/common/components/toolbar/ToolbarBottomActionsWrapper';
import {
EFFECT_TOOLBAR_INDEX,
OBJECT_TOOLBAR_INDEX,
} from '@/common/components/toolbar/ToolbarConfig';
import {streamingStateAtom} from '@/demo/atoms';
import {useAtomValue} from 'jotai';
type Props = {
onTabChange: (newIndex: number) => void;
};
export default function ObjectsToolbarBottomActions({onTabChange}: Props) {
const streamingState = useAtomValue(streamingStateAtom);
const isTrackingEnabled =
streamingState !== 'none' && streamingState !== 'full';
function handleSwitchToEffectsTab() {
onTabChange(EFFECT_TOOLBAR_INDEX);
}
return (
<ToolbarBottomActionsWrapper>
<ClearAllPointsInVideoButton
onRestart={() => onTabChange(OBJECT_TOOLBAR_INDEX)}
/>
{isTrackingEnabled && <TrackAndPlayButton />}
{streamingState === 'full' && (
<CloseSessionButton onSessionClose={handleSwitchToEffectsTab} />
)}
</ToolbarBottomActionsWrapper>
);
}


@@ -0,0 +1,43 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ToolbarHeaderWrapper from '@/common/components/toolbar/ToolbarHeaderWrapper';
import {isStreamingAtom, streamingStateAtom} from '@/demo/atoms';
import {useAtomValue} from 'jotai';
export default function ObjectsToolbarHeader() {
const isStreaming = useAtomValue(isStreamingAtom);
const streamingState = useAtomValue(streamingStateAtom);
return (
<ToolbarHeaderWrapper
title={
streamingState === 'full'
? 'Review tracked objects'
: isStreaming
? 'Tracking objects'
: 'Select objects'
}
description={
streamingState === 'full'
? 'Review your selected objects across the video, and continue to edit if needed. Once everything looks good, press “Next” to continue.'
: isStreaming
? 'Watch the video closely for any places where your objects aren’t tracked correctly. You can also stop tracking to make additional edits.'
: 'Adjust the selection of your object, or add additional objects. Press “Track objects” to track your objects throughout the video.'
}
className="mb-8"
/>
);
}


@@ -0,0 +1,44 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {labelTypeAtom} from '@/demo/atoms';
import {AddFilled, SubtractFilled} from '@carbon/icons-react';
import {useAtom} from 'jotai';
export default function PointsToggle() {
const [labelType, setLabelType] = useAtom(labelTypeAtom);
const isPositive = labelType === 'positive';
const buttonStyle = (selected: boolean) =>
`btn-md bg-graydark-800 !text-white md:px-2 lg:px-4 py-0.5 ${selected ? `border border-white hover:bg-graydark-800` : `border-graydark-700 hover:bg-graydark-700`}`;
return (
<div className="flex items-center w-full md:ml-2">
<div className="join group grow gap-[1px]">
<button
className={`w-1/2 btn join-item text-white ${buttonStyle(isPositive)}`}
onClick={() => setLabelType('positive')}>
<AddFilled size={24} className="text-blue-500" /> Add
</button>
<button
className={`w-1/2 btn join-item text-red-700 ${buttonStyle(!isPositive)}`}
onClick={() => setLabelType('negative')}>
<SubtractFilled size={24} className="text-red-400" />
Remove
</button>
</div>
</div>
);
}


@@ -0,0 +1,40 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import GradientBorder from '@/common/components/button/GradientBorder';
import type {ReactNode} from 'react';
type Props = {
disabled?: boolean;
endIcon?: ReactNode;
} & React.DOMAttributes<HTMLButtonElement>;
export default function PrimaryCTAButton({
children,
disabled,
endIcon,
...props
}: Props) {
return (
<GradientBorder disabled={disabled}>
<button
className={`btn ${disabled ? 'btn-disabled' : ''} !rounded-full !bg-black !text-white !border-none`}
{...props}>
{children}
{endIcon != null && endIcon}
</button>
</GradientBorder>
);
}


@@ -0,0 +1,88 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ObjectActions from '@/common/components/annotations/ObjectActions';
import ObjectPlaceholder from '@/common/components/annotations/ObjectPlaceholder';
import ObjectThumbnail from '@/common/components/annotations/ObjectThumbnail';
import ToolbarObjectContainer from '@/common/components/annotations/ToolbarObjectContainer';
import useVideo from '@/common/components/video/editor/useVideo';
import {BaseTracklet} from '@/common/tracker/Tracker';
import emptyFunction from '@/common/utils/emptyFunction';
import {activeTrackletObjectIdAtom} from '@/demo/atoms';
import {useSetAtom} from 'jotai';
type Props = {
label: string;
tracklet: BaseTracklet;
isActive: boolean;
isMobile?: boolean;
onClick?: () => void;
onThumbnailClick?: () => void;
};
export default function ToolbarObject({
label,
tracklet,
isActive,
isMobile = false,
onClick,
onThumbnailClick = emptyFunction,
}: Props) {
const video = useVideo();
const setActiveTrackletId = useSetAtom(activeTrackletObjectIdAtom);
async function handleCancelNewObject() {
try {
await video?.deleteTracklet(tracklet.id);
} catch (error) {
reportError(error);
} finally {
setActiveTrackletId(null);
}
}
if (!tracklet.isInitialized) {
return (
<ToolbarObjectContainer
alignItems="center"
isActive={isActive}
title="New object"
subtitle="No object is currently selected. Click an object in the video."
thumbnail={<ObjectPlaceholder showPlus={false} />}
isMobile={isMobile}
onClick={onClick}
onCancel={handleCancelNewObject}
/>
);
}
return (
<ToolbarObjectContainer
isActive={isActive}
onClick={onClick}
title={label}
subtitle=""
thumbnail={
<ObjectThumbnail
thumbnail={tracklet.thumbnail}
color={tracklet.color}
onClick={onThumbnailClick}
/>
}
isMobile={isMobile}>
<ObjectActions objectId={tracklet.id} active={isActive} />
</ToolbarObjectContainer>
);
}


@@ -0,0 +1,123 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {spacing} from '@/theme/tokens.stylex';
import {Close} from '@carbon/icons-react';
import stylex from '@stylexjs/stylex';
import {PropsWithChildren, ReactNode} from 'react';
const sharedStyles = stylex.create({
container: {
display: 'flex',
overflow: 'hidden',
cursor: 'pointer',
flexShrink: 0,
borderTop: 'none',
backgroundColor: {
'@media screen and (max-width: 768px)': '#000',
},
paddingHorizontal: {
default: spacing[8],
'@media screen and (max-width: 768px)': spacing[5],
},
paddingBottom: {
default: spacing[8],
'@media screen and (max-width: 768px)': 10,
},
},
activeContainer: {
background: '#000',
borderRadius: 16,
marginHorizontal: 16,
padding: {
default: spacing[4],
'@media screen and (max-width: 768px)': spacing[5],
},
marginBottom: {
default: spacing[8],
'@media screen and (max-width: 768px)': 0,
},
},
itemsCenter: {
alignItems: 'center',
},
rightColumn: {
marginStart: {
default: spacing[4],
'@media screen and (max-width: 768px)': 0,
},
flexGrow: 1,
alignItems: 'center',
},
});
type ToolbarObjectContainerProps = PropsWithChildren<{
alignItems?: 'top' | 'center';
isActive: boolean;
title: string;
subtitle: string;
thumbnail: ReactNode;
isMobile: boolean;
onCancel?: () => void;
onClick?: () => void;
}>;
export default function ToolbarObjectContainer({
alignItems = 'top',
children,
isActive,
title,
subtitle,
thumbnail,
isMobile,
onClick,
onCancel,
}: ToolbarObjectContainerProps) {
if (isMobile) {
return (
<div
onClick={onClick}
{...stylex.props(sharedStyles.container, sharedStyles.itemsCenter)}>
<div {...stylex.props(sharedStyles.rightColumn)}>{children}</div>
</div>
);
}
return (
<div
onClick={onClick}
{...stylex.props(
sharedStyles.container,
isActive && sharedStyles.activeContainer,
alignItems === 'center' && sharedStyles.itemsCenter,
)}>
{thumbnail}
<div {...stylex.props(sharedStyles.rightColumn)}>
<div className="text-md font-semibold ml-2">{title}</div>
{subtitle.length > 0 && (
<div className="text-sm text-gray-400 leading-5 mt-2 ml-2">
{subtitle}
</div>
)}
{children}
</div>
{onCancel != null && (
<div className="items-start self-stretch" onClick={onCancel}>
<Close size={32} />
</div>
)}
</div>
);
}


@@ -0,0 +1,173 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import useSelectedFrameHelper from '@/common/components/video/filmstrip/useSelectedFrameHelper';
import {BaseTracklet, DatalessMask} from '@/common/tracker/Tracker';
import {spacing, w} from '@/theme/tokens.stylex';
import stylex from '@stylexjs/stylex';
import {useMemo} from 'react';
const styles = stylex.create({
container: {
display: 'flex',
alignItems: 'center',
gap: spacing[4],
width: '100%',
},
trackletNameContainer: {
width: w[12],
textAlign: 'center',
fontSize: '10px',
color: 'white',
},
swimlaneContainer: {
flexGrow: 1,
position: 'relative',
display: 'flex',
height: 12,
marginVertical: '0.25rem' /* 4px */,
'@media screen and (max-width: 768px)': {
marginVertical: 0,
},
},
swimlane: {
position: 'absolute',
left: 0,
top: '50%',
width: '100%',
height: 1,
transform: 'translate3d(0, -50%, 0)',
opacity: 0.4,
},
segment: {
position: 'absolute',
top: '50%',
height: 1,
transform: 'translate3d(0, -50%, 0)',
},
segmentationPoint: {
position: 'absolute',
top: '50%',
transform: 'translate3d(0, -50%, 0)',
borderRadius: '50%',
cursor: 'pointer',
width: 12,
height: 12,
'@media screen and (max-width: 768px)': {
width: 8,
height: 8,
},
},
});
type SwimlaneSegment = {
left: number;
width: number;
};
type Props = {
tracklet: BaseTracklet;
onSelectFrame: (tracklet: BaseTracklet, index: number) => void;
};
function getSwimlaneSegments(masks: DatalessMask[]): SwimlaneSegment[] {
if (masks.length === 0) {
return [];
}
const swimlaneSegments: SwimlaneSegment[] = [];
let left = -1;
for (let frameIndex = 0; frameIndex < masks.length; ++frameIndex) {
const isEmpty = masks[frameIndex]?.isEmpty ?? true;
if (left === -1 && !isEmpty) {
left = frameIndex;
} else if (left !== -1 && (isEmpty || frameIndex === masks.length - 1)) {
swimlaneSegments.push({
left,
width: frameIndex - left + 1,
});
left = -1;
}
}
return swimlaneSegments;
}
export default function TrackletSwimlane({tracklet, onSelectFrame}: Props) {
const selection = useSelectedFrameHelper();
const segments = useMemo(() => {
return getSwimlaneSegments(tracklet.masks);
}, [tracklet.masks]);
const framesWithPoints = tracklet.points.reduce<number[]>(
(frames, pts, frameIndex) => {
if (pts != null && pts.length > 0) {
frames.push(frameIndex);
}
return frames;
},
[],
);
if (selection === null) {
return null;
}
return (
<div {...stylex.props(styles.container)}>
<div {...stylex.props(styles.trackletNameContainer)}>
Object {tracklet.id + 1}
</div>
<div {...stylex.props(styles.swimlaneContainer)}>
<div
{...stylex.props(styles.swimlane)}
style={{
backgroundColor: tracklet.color,
}}
/>
{segments.map(segment => {
return (
<div
key={segment.left}
{...stylex.props(styles.segment)}
style={{
backgroundColor: tracklet.color,
left: selection.toPosition(segment.left),
width: selection.toPosition(segment.width),
}}
/>
);
})}
{framesWithPoints.map(index => {
return (
<div
key={`frame${index}`}
onClick={() => {
onSelectFrame?.(tracklet, index);
}}
{...stylex.props(styles.segmentationPoint)}
style={{
left: Math.floor(selection.toPosition(index) - 4),
backgroundColor: tracklet.color,
}}
/>
);
})}
</div>
</div>
);
}


@@ -0,0 +1,55 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import TrackletSwimlane from '@/common/components/annotations/TrackletSwimlane';
import useTracklets from '@/common/components/annotations/useTracklets';
import useVideo from '@/common/components/video/editor/useVideo';
import {BaseTracklet} from '@/common/tracker/Tracker';
import {m, spacing} from '@/theme/tokens.stylex';
import stylex from '@stylexjs/stylex';
const styles = stylex.create({
container: {
marginTop: m[3],
height: 75,
paddingHorizontal: spacing[4],
'@media screen and (max-width: 768px)': {
height: 25,
},
},
});
export default function TrackletsAnnotation() {
const video = useVideo();
const tracklets = useTracklets();
function handleSelectFrame(_tracklet: BaseTracklet, index: number) {
if (video !== null) {
video.frame = index;
}
}
return (
<div {...stylex.props(styles.container)}>
{tracklets.map(tracklet => (
<TrackletSwimlane
key={tracklet.id}
tracklet={tracklet}
onSelectFrame={handleSelectFrame}
/>
))}
</div>
);
}


@@ -0,0 +1,21 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {trackletObjectsAtom} from '@/demo/atoms';
import {useAtomValue} from 'jotai';
export default function useTracklets() {
return useAtomValue(trackletObjectsAtom);
}


@@ -0,0 +1,73 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import stylex from '@stylexjs/stylex';
import {gradients} from '@/theme/tokens.stylex';
enum GradientTypes {
fullGradient = 'fullGradient',
bluePinkGradient = 'bluePinkGradient',
}
type Props = {
gradientType?: GradientTypes;
disabled?: boolean;
rounded?: boolean;
className?: string;
} & React.DOMAttributes<HTMLDivElement>;
const styles = stylex.create({
animationHover: {
':hover': {
backgroundPosition: '300% 100%',
},
},
fullGradient: {
border: '2px solid transparent',
background: gradients['rainbow'],
backgroundSize: '100% 400%',
transition: 'background 0.35s ease-in-out',
},
bluePinkGradient: {
border: '2px solid transparent',
background: gradients['rainbow'],
},
});
export default function GradientBorder({
gradientType = GradientTypes.fullGradient,
disabled,
rounded = true,
className = '',
children,
}: Props) {
const gradient = (name: GradientTypes) => {
if (name === GradientTypes.fullGradient) {
return styles.fullGradient;
} else if (name === GradientTypes.bluePinkGradient) {
return styles.bluePinkGradient;
}
};
return (
<div
className={`${stylex(gradient(gradientType), !disabled && styles.animationHover)} ${disabled ? 'opacity-30' : ''} ${rounded ? 'rounded-full' : ''} ${className}`}>
{children}
</div>
);
}
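Conditional class strings like the one in `GradientBorder` are easy to get wrong: interpolating `cond && 'cls'` directly into a template literal stringifies `false` or `undefined` into the DOM class list. A minimal sketch of a joiner that drops falsy parts (the `cx` helper is hypothetical, not part of this codebase):

```typescript
// Hypothetical helper: joins class names while dropping falsy values, so a
// `cond && 'cls'` expression never leaks the string "false" into the DOM.
export function cx(...parts: Array<string | false | null | undefined>): string {
  return parts.filter(Boolean).join(' ');
}

// Example: only truthy entries survive.
const disabled = false;
const rounded = true;
const className = cx('btn', disabled && 'opacity-30', rounded && 'rounded-full');
// className === 'btn rounded-full'
```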


@@ -0,0 +1,94 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {OBJECT_TOOLBAR_INDEX} from '@/common/components/toolbar/ToolbarConfig';
import Tooltip from '@/common/components/Tooltip';
import useVideo from '@/common/components/video/editor/useVideo';
import {isPlayingAtom, streamingStateAtom, toolbarTabIndex} from '@/demo/atoms';
import {PauseFilled, PlayFilledAlt} from '@carbon/icons-react';
import {useAtomValue} from 'jotai';
import {useCallback, useEffect} from 'react';
export default function PlaybackButton() {
const tabIndex = useAtomValue(toolbarTabIndex);
const streamingState = useAtomValue(streamingStateAtom);
const isPlaying = useAtomValue(isPlayingAtom);
const video = useVideo();
const isDisabled =
tabIndex === OBJECT_TOOLBAR_INDEX &&
streamingState !== 'none' &&
streamingState !== 'full';
const handlePlay = useCallback(() => {
video?.play();
}, [video]);
const handlePause = useCallback(() => {
video?.pause();
}, [video]);
const handleClick = useCallback(() => {
if (isDisabled) {
return;
}
if (isPlaying) {
handlePause();
} else {
handlePlay();
}
}, [isDisabled, isPlaying, handlePlay, handlePause]);
useEffect(() => {
const handleKey = (event: KeyboardEvent) => {
const callback = {
KeyK: handleClick,
}[event.code];
if (callback != null) {
event.preventDefault();
callback();
}
};
document.addEventListener('keydown', handleKey);
return () => {
document.removeEventListener('keydown', handleKey);
};
}, [handleClick]);
return (
<Tooltip message={`${isPlaying ? 'Pause' : 'Play'} (k)`}>
<button
disabled={isDisabled}
className={`group !rounded-full !w-10 !h-10 flex items-center justify-center ${getButtonStyles(isDisabled)}`}
onClick={handleClick}>
{isPlaying ? (
<PauseFilled size={18} />
) : (
<PlayFilledAlt
size={18}
className={!isDisabled ? 'group-hover:text-green-500' : ''}
/>
)}
</button>
</Tooltip>
);
}
function getButtonStyles(isDisabled: boolean): string {
if (isDisabled) {
return '!bg-gray-600 !text-graydark-700';
}
return `!text-black bg-white`;
}


@@ -0,0 +1,40 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import GradientBorder from '@/common/components/button/GradientBorder';
import type {ReactNode} from 'react';
type Props = {
disabled?: boolean;
endIcon?: ReactNode;
} & React.DOMAttributes<HTMLButtonElement>;
export default function PrimaryCTAButton({
children,
disabled,
endIcon,
...props
}: Props) {
return (
<GradientBorder disabled={disabled}>
<button
className={`btn ${disabled ? 'btn-disabled' : ''} !rounded-full !bg-black !text-white !border-none`}
{...props}>
{children}
{endIcon}
</button>
</GradientBorder>
);
}


@@ -0,0 +1,27 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import useScreenSize from '@/common/screen/useScreenSize';
import type {ReactNode} from 'react';
import type {ButtonProps} from 'react-daisyui';
import {Button} from 'react-daisyui';
type Props = ButtonProps & {startIcon: ReactNode};
export default function ResponsiveButton(props: Props) {
const {isMobile} = useScreenSize();
return <Button {...props}>{!isMobile && props.children}</Button>;
}


@@ -0,0 +1,123 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import PrimaryCTAButton from '@/common/components/button/PrimaryCTAButton';
import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar';
import useFunctionThrottle from '@/common/components/useFunctionThrottle';
import useVideo from '@/common/components/video/editor/useVideo';
import {
areTrackletObjectsInitializedAtom,
isStreamingAtom,
sessionAtom,
streamingStateAtom,
} from '@/demo/atoms';
import {ChevronRight} from '@carbon/icons-react';
import {useAtom, useAtomValue, useSetAtom} from 'jotai';
import {useCallback, useEffect} from 'react';
export default function TrackAndPlayButton() {
const video = useVideo();
const [isStreaming, setIsStreaming] = useAtom(isStreamingAtom);
const streamingState = useAtomValue(streamingStateAtom);
const areObjectsInitialized = useAtomValue(areTrackletObjectsInitializedAtom);
const setSession = useSetAtom(sessionAtom);
const {enqueueMessage} = useMessagesSnackbar();
const {isThrottled, maxThrottles, throttle} = useFunctionThrottle(250, 4);
const isTrackAndPlayDisabled =
streamingState === 'aborting' || streamingState === 'requesting';
useEffect(() => {
function onStreamingStarted() {
setIsStreaming(true);
}
video?.addEventListener('streamingStarted', onStreamingStarted);
function onStreamingCompleted() {
enqueueMessage('trackAndPlayComplete');
setIsStreaming(false);
}
video?.addEventListener('streamingCompleted', onStreamingCompleted);
return () => {
video?.removeEventListener('streamingStarted', onStreamingStarted);
video?.removeEventListener('streamingCompleted', onStreamingCompleted);
};
}, [video, setIsStreaming, enqueueMessage]);
const handleTrackAndPlay = useCallback(() => {
if (isTrackAndPlayDisabled) {
return;
}
if (maxThrottles && isThrottled) {
enqueueMessage('trackAndPlayThrottlingWarning');
}
// Throttling is only applied while streaming because we should
// only throttle after a user has aborted inference. This way,
// a user can still quickly abort a stream if they notice the
// inferred mask is misaligned.
throttle(
() => {
if (!isStreaming) {
enqueueMessage('trackAndPlayClick');
video?.streamMasks();
setSession(previousSession =>
previousSession == null
? previousSession
: {...previousSession, ranPropagation: true},
);
} else {
video?.abortStreamMasks();
}
},
{enableThrottling: isStreaming},
);
}, [
isTrackAndPlayDisabled,
isThrottled,
isStreaming,
maxThrottles,
video,
setSession,
enqueueMessage,
throttle,
]);
useEffect(() => {
const handleKey = (event: KeyboardEvent) => {
const callback = {
KeyK: handleTrackAndPlay,
}[event.code];
if (callback != null) {
event.preventDefault();
callback();
}
};
document.addEventListener('keydown', handleKey);
return () => {
document.removeEventListener('keydown', handleKey);
};
}, [handleTrackAndPlay]);
return (
<PrimaryCTAButton
disabled={isThrottled || !areObjectsInitialized}
onClick={handleTrackAndPlay}
endIcon={isStreaming ? undefined : <ChevronRight size={20} />}>
{isStreaming ? 'Cancel Tracking' : 'Track objects'}
</PrimaryCTAButton>
);
}
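As the comment in `handleTrackAndPlay` explains, throttling is applied only while streaming, so a user who aborts inference is briefly rate-limited but an initial click always goes through. The real `useFunctionThrottle` implementation is not shown in this diff; the conditional-throttle behavior can be sketched roughly as:

```typescript
// Assumption-laden sketch of conditional throttling: calls run immediately
// unless throttling is enabled and the cooldown window is still open.
function makeConditionalThrottle(cooldownMs: number, now: () => number = Date.now) {
  let lastRun = -Infinity;
  return (fn: () => void, opts: {enableThrottling: boolean}): boolean => {
    if (opts.enableThrottling && now() - lastRun < cooldownMs) {
      return false; // throttled: drop the call
    }
    lastRun = now();
    fn();
    return true;
  };
}

// With throttling disabled every call runs; enabled, rapid calls are dropped.
let t = 0;
const throttle = makeConditionalThrottle(250, () => t);
let runs = 0;
throttle(() => runs++, {enableThrottling: false}); // runs
throttle(() => runs++, {enableThrottling: true});  // dropped (cooldown open)
t = 300;
throttle(() => runs++, {enableThrottling: true});  // runs again
```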


@@ -0,0 +1,36 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {loader} from '@monaco-editor/react';
import Logger from '@/common/logger/Logger';
import * as monaco from 'monaco-editor';
import editorWorker from 'monaco-editor/esm/vs/editor/editor.worker?worker';
import tsWorker from 'monaco-editor/esm/vs/language/typescript/ts.worker?worker';
self.MonacoEnvironment = {
getWorker(_, label) {
if (label === 'typescript' || label === 'javascript') {
return new tsWorker();
}
return new editorWorker();
},
};
loader.config({monaco});
loader.init().then(monacoInstance => {
Logger.debug('initialized monaco', monacoInstance);
});
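The `MonacoEnvironment.getWorker` hook above routes TypeScript and JavaScript to the dedicated TS worker and everything else to the base editor worker. The selection rule, extracted as a plain function for illustration (the function name is ours, not Monaco's):

```typescript
// Worker-selection rule: TS/JS labels get the TypeScript worker, every other
// language label falls back to the generic editor worker.
function workerKindForLabel(label: string): 'ts' | 'editor' {
  return label === 'typescript' || label === 'javascript' ? 'ts' : 'editor';
}

workerKindForLabel('typescript'); // 'ts'
workerKindForLabel('css');        // 'editor'
```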


@@ -0,0 +1,61 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {backgroundEffects} from '@/common/components/effects/EffectsUtils';
import EffectVariantBadge from '@/common/components/effects/EffectVariantBadge';
import ToolbarActionIcon from '@/common/components/toolbar/ToolbarActionIcon';
import ToolbarSection from '@/common/components/toolbar/ToolbarSection';
import useVideoEffect from '@/common/components/video/editor/useVideoEffect';
import {EffectIndex} from '@/common/components/video/effects/Effects';
import {activeBackgroundEffectAtom} from '@/demo/atoms';
import {useAtomValue} from 'jotai';
export default function BackgroundEffects() {
const setEffect = useVideoEffect();
const activeEffect = useAtomValue(activeBackgroundEffectAtom);
return (
<ToolbarSection title="Background" borderBottom={false}>
{backgroundEffects.map(backgroundEffect => {
return (
<ToolbarActionIcon
variant="toggle"
key={backgroundEffect.title}
icon={backgroundEffect.Icon}
title={backgroundEffect.title}
isActive={activeEffect.name === backgroundEffect.effectName}
badge={
activeEffect.name === backgroundEffect.effectName && (
<EffectVariantBadge
label={`${activeEffect.variant + 1}/${activeEffect.numVariants}`}
/>
)
}
onClick={() => {
if (activeEffect.name === backgroundEffect.effectName) {
setEffect(backgroundEffect.effectName, EffectIndex.BACKGROUND, {
variant:
(activeEffect.variant + 1) % activeEffect.numVariants,
});
} else {
setEffect(backgroundEffect.effectName, EffectIndex.BACKGROUND);
}
}}
/>
);
})}
</ToolbarSection>
);
}
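Both `BackgroundEffects` and `HighlightEffects` cycle through an effect's variants when the already-active effect is clicked again, using modular arithmetic so the count wraps back to the first variant. The rule in isolation:

```typescript
// Variant cycling: clicking the active effect advances to the next variant
// and wraps back to 0 after the last one.
function nextVariant(current: number, numVariants: number): number {
  return (current + 1) % numVariants;
}

nextVariant(0, 3); // 1
nextVariant(2, 3); // 0 (wraps)
```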


@@ -0,0 +1,41 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {right, top} from '@/theme/tokens.stylex';
import stylex from '@stylexjs/stylex';
const styles = stylex.create({
variantBadge: {
position: 'absolute',
top: top[1],
right: right[1],
backgroundColor: '#280578',
color: '#D2D2FF',
fontVariantNumeric: 'tabular-nums',
paddingHorizontal: 4,
paddingVertical: 1,
fontSize: 9,
borderRadius: 6,
fontWeight: 'bold',
},
});
type Props = {
label: string;
};
export default function VariantBadge({label}: Props) {
return <div {...stylex.props(styles.variantBadge)}>{label}</div>;
}


@@ -0,0 +1,93 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {CarouselContainerShadow} from '@/common/components/effects/EffectsCarouselShadow';
import {DemoEffect} from '@/common/components/effects/EffectsUtils';
import useVideoEffect from '@/common/components/video/editor/useVideoEffect';
import type {EffectIndex} from '@/common/components/video/effects/Effects';
import {Effects} from '@/common/components/video/effects/Effects';
import {color, fontSize, spacing} from '@/theme/tokens.stylex';
import stylex from '@stylexjs/stylex';
type Props = {
label: string;
effects: DemoEffect[];
activeEffect: keyof Effects;
index: EffectIndex;
};
const styles = stylex.create({
container: {
display: 'flex',
flexDirection: 'column',
gap: spacing[2],
width: '100%',
},
label: {
fontSize: fontSize['xs'],
color: '#A6ACB2',
textAlign: 'center',
},
carouselContainer: {
position: 'relative',
borderRadius: '8px',
overflow: 'hidden',
width: '100%',
height: '120px',
backgroundColor: color['gray-700'],
},
});
export default function EffectsCarousel({
label,
effects,
activeEffect,
index: effectIndex,
}: Props) {
const setEffect = useVideoEffect();
return (
<div {...stylex.props(styles.container)}>
<div {...stylex.props(styles.label)}>{label}</div>
<div {...stylex.props(styles.carouselContainer)}>
<CarouselContainerShadow isTop={true} />
<div className="carousel carousel-vertical w-full h-full text-white">
<div className={`carousel-item h-6`} />
{effects.map(({effectName, Icon, title}, index) => {
const isActive = activeEffect === effectName;
return (
<div
key={index}
className={`carousel-item flex items-center h-6 gap-2 px-4`}
onClick={() => setEffect(effectName, effectIndex)}>
<Icon
color={isActive ? '#FB73A5' : undefined}
size={18}
fontWeight={10}
/>
<div
className={`text-sm ${isActive ? 'text-[#FB73A5] font-bold' : 'font-medium'}`}>
{title}
</div>
</div>
);
})}
<div className={`carousel-item h-6`} />
</div>
<CarouselContainerShadow isTop={false} />
</div>
</div>
);
}


@@ -0,0 +1,46 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {spacing} from '@/theme/tokens.stylex';
import stylex from '@stylexjs/stylex';
const styles = stylex.create({
container: {
position: 'absolute',
width: '100%',
height: spacing[8],
pointerEvents: 'none',
},
});
type CarouselContainerShadowProps = {
isTop: boolean;
};
const edgeColor = 'rgba(55, 62, 65, 1)';
const transitionColor = 'rgba(55, 62, 65, 0.2)';
export function CarouselContainerShadow({isTop}: CarouselContainerShadowProps) {
return (
<div
{...stylex.props(styles.container)}
style={{
background: `linear-gradient(${isTop ? `${edgeColor}, ${transitionColor}` : `${transitionColor}, ${edgeColor}`})`,
top: isTop ? 0 : undefined,
bottom: isTop ? undefined : 0,
}}
/>
);
}


@@ -0,0 +1,48 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import BackgroundEffects from '@/common/components/effects/BackgroundEffects';
import EffectsToolbarBottomActions from '@/common/components/effects/EffectsToolbarBottomActions';
import EffectsToolbarHeader from '@/common/components/effects/EffectsToolbarHeader';
import HighlightEffects from '@/common/components/effects/HighlightEffects';
import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar';
import {useEffect, useRef} from 'react';
type Props = {
onTabChange: (newIndex: number) => void;
};
export default function EffectsToolbar({onTabChange}: Props) {
const isEffectsMessageShown = useRef(false);
const {enqueueMessage} = useMessagesSnackbar();
useEffect(() => {
if (!isEffectsMessageShown.current) {
isEffectsMessageShown.current = true;
enqueueMessage('effectsMessage');
}
}, [enqueueMessage]);
return (
<div className="flex flex-col h-full">
<EffectsToolbarHeader />
<div className="grow overflow-y-auto">
<HighlightEffects />
<BackgroundEffects />
</div>
<EffectsToolbarBottomActions onTabChange={onTabChange} />
</div>
);
}


@@ -0,0 +1,46 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import PrimaryCTAButton from '@/common/components/button/PrimaryCTAButton';
import RestartSessionButton from '@/common/components/session/RestartSessionButton';
import ToolbarBottomActionsWrapper from '@/common/components/toolbar/ToolbarBottomActionsWrapper';
import {
MORE_OPTIONS_TOOLBAR_INDEX,
OBJECT_TOOLBAR_INDEX,
} from '@/common/components/toolbar/ToolbarConfig';
import {ChevronRight} from '@carbon/icons-react';
type Props = {
onTabChange: (newIndex: number) => void;
};
export default function EffectsToolbarBottomActions({onTabChange}: Props) {
function handleSwitchToMoreOptionsTab() {
onTabChange(MORE_OPTIONS_TOOLBAR_INDEX);
}
return (
<ToolbarBottomActionsWrapper>
<RestartSessionButton
onRestartSession={() => onTabChange(OBJECT_TOOLBAR_INDEX)}
/>
<PrimaryCTAButton
onClick={handleSwitchToMoreOptionsTab}
endIcon={<ChevronRight />}>
Next
</PrimaryCTAButton>
</ToolbarBottomActionsWrapper>
);
}


@@ -0,0 +1,62 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ToolbarHeaderWrapper from '@/common/components/toolbar/ToolbarHeaderWrapper';
import useVideoEffect from '@/common/components/video/editor/useVideoEffect';
import {
EffectIndex,
effectPresets,
} from '@/common/components/video/effects/Effects';
import {BLUE_PINK_FILL} from '@/theme/gradientStyle';
import {MagicWandFilled} from '@carbon/icons-react';
import {useCallback, useRef} from 'react';
import {Button} from 'react-daisyui';
export default function EffectsToolbarHeader() {
const preset = useRef(0);
const setEffect = useVideoEffect();
const handleTogglePreset = useCallback(() => {
preset.current++;
const [background, highlight] =
effectPresets[preset.current % effectPresets.length];
setEffect(background.name, EffectIndex.BACKGROUND, {
variant: background.variant,
});
setEffect(highlight.name, EffectIndex.HIGHLIGHT, {
variant: highlight.variant,
});
}, [setEffect]);
return (
<ToolbarHeaderWrapper
title="Add effects"
description="Apply visual effects to your selected objects and the background. Keep clicking the same effect for different variations."
bottomSection={
<div className="flex mt-1">
<Button
color="ghost"
size="md"
className={`font-medium bg-black !rounded-full hover:!bg-gradient-to-br ${BLUE_PINK_FILL} border-none`}
endIcon={<MagicWandFilled size={20} className="text-white " />}
onClick={handleTogglePreset}>
Surprise Me
</Button>
</div>
}
className="pb-4"
/>
);
}


@@ -0,0 +1,76 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {Effects} from '@/common/components/video/effects/Effects';
import type {CarbonIconType} from '@carbon/icons-react';
import {
AppleDash,
Asterisk,
Barcode,
CenterCircle,
ColorPalette,
ColorSwitch,
Development,
Erase,
FaceWink,
Humidity,
Image,
Overlay,
TextFont,
} from '@carbon/icons-react';
export type DemoEffect = {
title: string;
Icon: CarbonIconType;
effectName: keyof Effects;
};
export const backgroundEffects: DemoEffect[] = [
{title: 'Original', Icon: Image, effectName: 'Original'},
{title: 'Erase', Icon: Erase, effectName: 'EraseBackground'},
{
title: 'Gradient',
Icon: ColorPalette,
effectName: 'Gradient',
},
{
title: 'Pixelate',
Icon: Development,
effectName: 'Pixelate',
},
{title: 'Desaturate', Icon: ColorSwitch, effectName: 'Desaturate'},
{title: 'Text', Icon: TextFont, effectName: 'BackgroundText'},
{title: 'Blur', Icon: Humidity, effectName: 'BackgroundBlur'},
{title: 'Outline', Icon: AppleDash, effectName: 'Sobel'},
];
export const highlightEffects: DemoEffect[] = [
{title: 'Original', Icon: Image, effectName: 'Cutout'},
{title: 'Erase', Icon: Erase, effectName: 'EraseForeground'},
{title: 'Gradient', Icon: ColorPalette, effectName: 'VibrantMask'},
{title: 'Pixelate', Icon: Development, effectName: 'PixelateMask'},
{
title: 'Overlay',
Icon: Overlay,
effectName: 'Overlay',
},
{title: 'Emoji', Icon: FaceWink, effectName: 'Replace'},
{title: 'Burst', Icon: Asterisk, effectName: 'Burst'},
{title: 'Spotlight', Icon: CenterCircle, effectName: 'Scope'},
];
export const moreEffects: DemoEffect[] = [
{title: 'Noisy', Icon: Barcode, effectName: 'NoisyMask'},
];


@@ -0,0 +1,64 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import EffectVariantBadge from '@/common/components/effects/EffectVariantBadge';
import ToolbarActionIcon from '@/common/components/toolbar/ToolbarActionIcon';
import ToolbarSection from '@/common/components/toolbar/ToolbarSection';
import useVideoEffect from '@/common/components/video/editor/useVideoEffect';
import {EffectIndex} from '@/common/components/video/effects/Effects';
import {
activeHighlightEffectAtom,
activeHighlightEffectGroupAtom,
} from '@/demo/atoms';
import {useAtomValue} from 'jotai';
export default function HighlightEffects() {
const setEffect = useVideoEffect();
const activeEffect = useAtomValue(activeHighlightEffectAtom);
const activeEffectsGroup = useAtomValue(activeHighlightEffectGroupAtom);
return (
<ToolbarSection title="Selected Objects" borderBottom={true}>
{activeEffectsGroup.map(highlightEffect => {
return (
<ToolbarActionIcon
variant="toggle"
key={highlightEffect.title}
icon={highlightEffect.Icon}
title={highlightEffect.title}
isActive={activeEffect.name === highlightEffect.effectName}
badge={
activeEffect.name === highlightEffect.effectName && (
<EffectVariantBadge
label={`${activeEffect.variant + 1}/${activeEffect.numVariants}`}
/>
)
}
onClick={() => {
if (activeEffect.name === highlightEffect.effectName) {
setEffect(highlightEffect.effectName, EffectIndex.HIGHLIGHT, {
variant:
(activeEffect.variant + 1) % activeEffect.numVariants,
});
} else {
setEffect(highlightEffect.effectName, EffectIndex.HIGHLIGHT);
}
}}
/>
);
})}
</ToolbarSection>
);
}


@@ -0,0 +1,115 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import EffectsCarousel from '@/common/components/effects/EffectsCarousel';
import {backgroundEffects} from '@/common/components/effects/EffectsUtils';
import useVideoEffect from '@/common/components/video/editor/useVideoEffect';
import {
EffectIndex,
effectPresets,
} from '@/common/components/video/effects/Effects';
import {ListBoxes, MagicWand, MagicWandFilled} from '@carbon/icons-react';
import {useCallback, useRef, useState} from 'react';
import {Button} from 'react-daisyui';
import EffectsToolbarBottomActions from '@/common/components/effects/EffectsToolbarBottomActions';
import ToolbarProgressChip from '@/common/components/toolbar/ToolbarProgressChip';
import {
activeBackgroundEffectAtom,
activeHighlightEffectAtom,
activeHighlightEffectGroupAtom,
} from '@/demo/atoms';
import {BLUE_PINK_FILL} from '@/theme/gradientStyle';
import {useAtomValue} from 'jotai';
type Props = {
onTabChange: (newIndex: number) => void;
};
export default function MobileEffectsToolbar({onTabChange}: Props) {
const preset = useRef(0);
const setEffect = useVideoEffect();
const [showEffectsCarousels, setShowEffectsCarousels] = useState<boolean>();
const activeBackground = useAtomValue(activeBackgroundEffectAtom);
const activeHighlight = useAtomValue(activeHighlightEffectAtom);
const activeHighlightEffectsGroup = useAtomValue(
activeHighlightEffectGroupAtom,
);
const handleTogglePreset = useCallback(() => {
preset.current++;
const [background, highlight] =
effectPresets[preset.current % effectPresets.length];
setEffect(background.name, EffectIndex.BACKGROUND, {
variant: background.variant,
});
setEffect(highlight.name, EffectIndex.HIGHLIGHT, {
variant: highlight.variant,
});
}, [setEffect]);
return (
<div className="w-full">
{showEffectsCarousels ? (
<div className="flex gap-2 px-2 py-4 items-center p-6">
<Button
color="ghost"
className="mt-6 !px-2 !text-[#FB73A5]"
startIcon={<MagicWand size={20} />}
onClick={handleTogglePreset}
/>
<EffectsCarousel
label="Highlights"
effects={activeHighlightEffectsGroup}
activeEffect={activeHighlight.name}
index={EffectIndex.HIGHLIGHT}
/>
<EffectsCarousel
label="Background"
effects={backgroundEffects}
activeEffect={activeBackground.name}
index={EffectIndex.BACKGROUND}
/>
</div>
) : (
<div className="flex flex-col gap-6 p-6">
<div className="text-sm text-white">
<ToolbarProgressChip />
Apply visual effects to your selected objects and the background.
</div>
<div className="grid grid-cols-2 gap-2">
<Button
color="ghost"
endIcon={<MagicWandFilled size={20} />}
className={`font-bold bg-black !rounded-full !bg-gradient-to-br ${BLUE_PINK_FILL} border-none text-white`}
onClick={handleTogglePreset}>
Surprise Me
</Button>
<Button
color="ghost"
className={`font-bold bg-black !rounded-full border-none text-white`}
startIcon={<ListBoxes size={20} />}
onClick={() => setShowEffectsCarousels(true)}>
More effects
</Button>
</div>
</div>
)}
<EffectsToolbarBottomActions onTabChange={onTabChange} />
</div>
);
}


@@ -0,0 +1,54 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {moreEffects} from '@/common/components/effects/EffectsUtils';
import EffectVariantBadge from '@/common/components/effects/EffectVariantBadge';
import ToolbarActionIcon from '@/common/components/toolbar/ToolbarActionIcon';
import ToolbarSection from '@/common/components/toolbar/ToolbarSection';
import useVideoEffect from '@/common/components/video/editor/useVideoEffect';
import {EffectIndex} from '@/common/components/video/effects/Effects';
import {activeHighlightEffectAtom} from '@/demo/atoms';
import {useAtomValue} from 'jotai';
export default function MoreFunEffects() {
const setEffect = useVideoEffect();
const activeEffect = useAtomValue(activeHighlightEffectAtom);
return (
<ToolbarSection title="Selected Objects" borderBottom={true}>
{moreEffects.map(effect => {
return (
<ToolbarActionIcon
variant="toggle"
key={effect.title}
icon={effect.Icon}
title={effect.title}
isActive={activeEffect.name === effect.effectName}
badge={
activeEffect.name === effect.effectName && (
<EffectVariantBadge
label={`${activeEffect.variant + 1}/${activeEffect.numVariants}`}
/>
)
}
onClick={() => {
setEffect(effect.effectName, EffectIndex.HIGHLIGHT);
}}
/>
);
})}
</ToolbarSection>
);
}


@@ -0,0 +1,83 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import type {VideoGalleryTriggerProps} from '@/common/components/gallery/DemoVideoGalleryModal';
import DemoVideoGalleryModal from '@/common/components/gallery/DemoVideoGalleryModal';
import useVideo from '@/common/components/video/editor/useVideo';
import Logger from '@/common/logger/Logger';
import {isStreamingAtom, uploadingStateAtom, VideoData} from '@/demo/atoms';
import {useAtomValue, useSetAtom} from 'jotai';
import {ComponentType, useCallback} from 'react';
import {useNavigate} from 'react-router-dom';
type Props = {
videoGalleryModalTrigger?: ComponentType<VideoGalleryTriggerProps>;
showUploadInGallery?: boolean;
onChangeVideo?: () => void;
};
export default function ChangeVideoModal({
videoGalleryModalTrigger: VideoGalleryModalTriggerComponent,
showUploadInGallery = true,
onChangeVideo,
}: Props) {
const isStreaming = useAtomValue(isStreamingAtom);
const setUploadingState = useSetAtom(uploadingStateAtom);
const video = useVideo();
const navigate = useNavigate();
const handlePause = useCallback(() => {
video?.pause();
}, [video]);
function handlePauseOrAbortVideo() {
if (isStreaming) {
video?.abortStreamMasks();
} else {
handlePause();
}
}
function handleSwitchVideos(video: VideoData) {
// Retain any search parameter
navigate(
{
pathname: location.pathname,
search: location.search,
},
{
state: {
video,
},
},
);
onChangeVideo?.();
}
function handleUploadVideoError(error: Error) {
setUploadingState('error');
Logger.error(error);
}
return (
<DemoVideoGalleryModal
trigger={VideoGalleryModalTriggerComponent}
showUploadInGallery={showUploadInGallery}
onOpen={handlePauseOrAbortVideo}
onSelect={handleSwitchVideos}
onUploadVideoError={handleUploadVideoError}
/>
);
}

View File

@@ -0,0 +1,32 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ResponsiveButton from '@/common/components/button/ResponsiveButton';
import type {VideoGalleryTriggerProps} from '@/common/components/gallery/DemoVideoGalleryModal';
import {ImageCopy} from '@carbon/icons-react';
export default function DefaultVideoGalleryModalTrigger({
onClick,
}: VideoGalleryTriggerProps) {
return (
<ResponsiveButton
color="ghost"
className="hover:!bg-black"
startIcon={<ImageCopy size={20} />}
onClick={onClick}>
Change video
</ResponsiveButton>
);
}

View File

@@ -0,0 +1,209 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {DemoVideoGalleryQuery} from '@/common/components/gallery/__generated__/DemoVideoGalleryQuery.graphql';
import VideoGalleryUploadVideo from '@/common/components/gallery/VideoGalleryUploadPhoto';
import VideoPhoto from '@/common/components/gallery/VideoPhoto';
import useScreenSize from '@/common/screen/useScreenSize';
import {VideoData} from '@/demo/atoms';
import {DEMO_SHORT_NAME} from '@/demo/DemoConfig';
import {fontSize, fontWeight, spacing} from '@/theme/tokens.stylex';
import stylex from '@stylexjs/stylex';
import {useMemo} from 'react';
import PhotoAlbum, {Photo, RenderPhotoProps} from 'react-photo-album';
import {graphql, useLazyLoadQuery} from 'react-relay';
import {useLocation, useNavigate} from 'react-router-dom';
const styles = stylex.create({
container: {
display: 'flex',
flexDirection: 'column',
marginHorizontal: spacing[1],
height: '100%',
lineHeight: 1.2,
paddingTop: spacing[8],
},
headerContainer: {
marginBottom: spacing[8],
fontWeight: fontWeight['medium'],
fontSize: fontSize['2xl'],
'@media screen and (max-width: 768px)': {
marginTop: spacing[0],
marginBottom: spacing[8],
marginHorizontal: spacing[4],
fontSize: fontSize['xl'],
},
},
albumContainer: {
flex: '1 1 0%',
width: '100%',
overflowY: 'auto',
},
});
type Props = {
showUploadInGallery?: boolean;
onSelect?: (video: VideoPhotoData) => void;
onUpload: (video: VideoData) => void;
onUploadStart?: () => void;
onUploadError?: (error: Error) => void;
};
type VideoPhotoData = Photo &
VideoData & {
poster: string;
isUploadOption: boolean;
};
export default function DemoVideoGallery({
showUploadInGallery = false,
onSelect,
onUpload,
onUploadStart,
onUploadError,
}: Props) {
const navigate = useNavigate();
const location = useLocation();
const {isMobile: isMobileScreenSize} = useScreenSize();
const data = useLazyLoadQuery<DemoVideoGalleryQuery>(
graphql`
query DemoVideoGalleryQuery {
videos {
edges {
node {
id
path
posterPath
url
posterUrl
height
width
}
}
}
}
`,
{},
);
const allVideos: VideoPhotoData[] = useMemo(() => {
return data.videos.edges.map(video => {
return {
src: video.node.url,
path: video.node.path,
poster: video.node.posterPath,
posterPath: video.node.posterPath,
url: video.node.url,
posterUrl: video.node.posterUrl,
width: video.node.width,
height: video.node.height,
isUploadOption: false,
} as VideoPhotoData;
});
}, [data.videos.edges]);
const shareableVideos: VideoPhotoData[] = useMemo(() => {
const filteredVideos = [...allVideos];
if (showUploadInGallery) {
const uploadOption = {
src: '',
width: 1280,
height: 720,
poster: '',
isUploadOption: true,
} as VideoPhotoData;
filteredVideos.unshift(uploadOption);
}
return filteredVideos;
}, [allVideos, showUploadInGallery]);
const renderPhoto = ({
photo: video,
imageProps,
}: RenderPhotoProps<VideoPhotoData>) => {
const {style} = imageProps;
const {url, posterUrl} = video;
return video.isUploadOption ? (
<VideoGalleryUploadVideo
style={style}
onUpload={handleUploadVideo}
onUploadError={onUploadError}
onUploadStart={onUploadStart}
/>
) : (
<VideoPhoto
src={url}
poster={posterUrl}
style={style}
onClick={() => {
navigate(location.pathname, {
state: {
video,
},
});
onSelect?.(video);
}}
/>
);
};
function handleUploadVideo(video: VideoData) {
navigate(location.pathname, {
state: {
video,
},
});
onUpload?.(video);
}
const descriptionStyle = 'text-sm md:text-base text-gray-400 leading-snug';
return (
<div {...stylex.props(styles.container)}>
<div {...stylex.props(styles.albumContainer)}>
<div className="pt-0 md:px-16 md:pt-8 md:pb-8">
<div {...stylex.props(styles.headerContainer)}>
<h3 className="mb-2">
Select a video to try{' '}
<span className="hidden md:inline">
with the {DEMO_SHORT_NAME}
</span>
</h3>
<p className={descriptionStyle}>
You'll be able to download what you make.
</p>
</div>
<PhotoAlbum<VideoPhotoData>
layout="rows"
photos={shareableVideos}
targetRowHeight={isMobileScreenSize ? 120 : 200}
rowConstraints={{
singleRowMaxHeight: isMobileScreenSize ? 120 : 240,
maxPhotos: 3,
}}
renderPhoto={renderPhoto}
spacing={4}
/>
</div>
</div>
</div>
);
}

View File

@@ -0,0 +1,148 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import DefaultVideoGalleryModalTrigger from '@/common/components/gallery/DefaultVideoGalleryModalTrigger';
import {
frameIndexAtom,
sessionAtom,
uploadingStateAtom,
VideoData,
} from '@/demo/atoms';
import {spacing} from '@/theme/tokens.stylex';
import {Close} from '@carbon/icons-react';
import stylex from '@stylexjs/stylex';
import {useSetAtom} from 'jotai';
import {ComponentType, useCallback, useRef} from 'react';
import {Modal} from 'react-daisyui';
import DemoVideoGallery from './DemoVideoGallery';
const styles = stylex.create({
container: {
position: 'relative',
minWidth: '85vw',
minHeight: '85vh',
overflow: 'hidden',
color: '#fff',
boxShadow: '0 0 100px 50px #000',
borderRadius: 16,
border: '2px solid transparent',
background:
'linear-gradient(#1A1C1F, #1A1C1F) padding-box, linear-gradient(to right bottom, #FB73A5,#595FEF,#94EAE2,#FCCB6B) border-box',
},
closeButton: {
position: 'absolute',
top: 0,
right: 0,
padding: spacing[3],
zIndex: 10,
cursor: 'pointer',
':hover': {
opacity: 0.7,
},
},
galleryContainer: {
position: 'absolute',
top: spacing[4],
left: 0,
right: 0,
bottom: 0,
overflowY: 'auto',
},
});
export type VideoGalleryTriggerProps = {
onClick: () => void;
};
type Props = {
trigger?: ComponentType<VideoGalleryTriggerProps>;
showUploadInGallery?: boolean;
onOpen?: () => void;
onSelect?: (video: VideoData, isUpload?: boolean) => void;
onUploadVideoError?: (error: Error) => void;
};
export default function DemoVideoGalleryModal({
trigger: VideoGalleryModalTrigger = DefaultVideoGalleryModalTrigger,
showUploadInGallery = false,
onOpen,
onSelect,
onUploadVideoError,
}: Props) {
const modalRef = useRef<HTMLDialogElement | null>(null);
const setFrameIndex = useSetAtom(frameIndexAtom);
const setUploadingState = useSetAtom(uploadingStateAtom);
const setSession = useSetAtom(sessionAtom);
function openModal() {
const modal = modalRef.current;
if (modal != null) {
modal.style.display = 'grid';
modal.showModal();
}
}
function closeModal() {
const modal = modalRef.current;
if (modal != null) {
modal.close();
modal.style.display = 'none';
}
}
const handleSelect = useCallback(
async (video: VideoData, isUpload?: boolean) => {
closeModal();
setFrameIndex(0);
onSelect?.(video, isUpload);
setUploadingState('default');
setSession(null);
},
[setFrameIndex, onSelect, setUploadingState, setSession],
);
function handleUploadVideoStart() {
setUploadingState('uploading');
closeModal();
}
function handleOpenVideoGalleryModal() {
onOpen?.();
openModal();
}
return (
<>
<VideoGalleryModalTrigger onClick={handleOpenVideoGalleryModal} />
<Modal ref={modalRef} {...stylex.props(styles.container)}>
<div onClick={closeModal} {...stylex.props(styles.closeButton)}>
<Close size={28} />
</div>
<Modal.Body>
<div {...stylex.props(styles.galleryContainer)}>
<DemoVideoGallery
showUploadInGallery={showUploadInGallery}
onSelect={video => handleSelect(video)}
onUpload={video => handleSelect(video, true)}
onUploadStart={handleUploadVideoStart}
onUploadError={onUploadVideoError}
/>
</div>
</Modal.Body>
</Modal>
</>
);
}

View File

@@ -0,0 +1,102 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import useUploadVideo from '@/common/components/gallery/useUploadVideo';
import useScreenSize from '@/common/screen/useScreenSize';
import {VideoData} from '@/demo/atoms';
import {MAX_UPLOAD_FILE_SIZE} from '@/demo/DemoConfig';
import {BLUE_PINK_FILL_BR} from '@/theme/gradientStyle';
import {RetryFailed, Upload} from '@carbon/icons-react';
import {CSSProperties, ReactNode} from 'react';
import {Loading} from 'react-daisyui';
type Props = {
style: CSSProperties;
onUpload: (video: VideoData) => void;
onUploadStart?: () => void;
onUploadError?: (error: Error) => void;
};
export default function VideoGalleryUploadVideo({
style,
onUpload,
onUploadStart,
onUploadError,
}: Props) {
const {getRootProps, getInputProps, isUploading, error} = useUploadVideo({
onUpload,
onUploadStart,
onUploadError,
});
const {isMobile} = useScreenSize();
return (
<div className={`cursor-pointer ${BLUE_PINK_FILL_BR}`} style={style}>
<span {...getRootProps()}>
<input {...getInputProps()} />
<div className="relative w-full h-full">
<div className="absolute top-1/2 left-1/2 -translate-x-1/2 -translate-y-1/2">
{isUploading && (
<IconWrapper
icon={
<Loading
size={isMobile ? 'md' : 'lg'}
className="text-white"
/>
}
title="Uploading..."
/>
)}
{error !== null && (
<IconWrapper
icon={<RetryFailed color="white" size={isMobile ? 24 : 32} />}
title={error}
/>
)}
{!isUploading && error === null && (
<IconWrapper
icon={<Upload color="white" size={isMobile ? 24 : 32} />}
title={
<>
Upload{' '}
<div className="text-xs opacity-70">
Max {MAX_UPLOAD_FILE_SIZE}
</div>
</>
}
/>
)}
</div>
</div>
</span>
</div>
);
}
type IconWrapperProps = {
icon: ReactNode;
title: ReactNode | string;
};
function IconWrapper({icon, title}: IconWrapperProps) {
return (
<>
<div className="flex justify-center">{icon}</div>
<div className="mt-1 text-sm md:text-lg text-white font-medium text-center leading-tight">
{title}
</div>
</>
);
}

View File

@@ -0,0 +1,112 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import Logger from '@/common/logger/Logger';
import stylex from '@stylexjs/stylex';
import {
CSSProperties,
MouseEventHandler,
useCallback,
useEffect,
useRef,
} from 'react';
const styles = stylex.create({
background: {
backgroundRepeat: 'no-repeat',
backgroundSize: 'cover',
backgroundPosition: 'center',
cursor: 'pointer',
},
video: {
width: '100%',
height: '100%',
},
});
type Props = {
onClick: MouseEventHandler<HTMLVideoElement> | undefined;
src: string;
poster: string;
style: CSSProperties;
};
export default function VideoPhoto({src, poster, style, onClick}: Props) {
const videoRef = useRef<HTMLVideoElement | null>(null);
const playPromiseRef = useRef<Promise<void> | null>(null);
const play = useCallback(() => {
const video = videoRef.current;
// Only play video if it is not already playing
if (video != null && video.paused) {
// This quirky way of handling video play/pause in the browser is needed
// due to the async nature of the video play API:
// https://developer.chrome.com/blog/play-request-was-interrupted/
const playPromise = video.play();
playPromise.catch(error => {
Logger.error('Failed to play video', error);
});
playPromiseRef.current = playPromise;
}
}, []);
const pause = useCallback(() => {
// Only pause video if it is playing
const playPromise = playPromiseRef.current;
if (playPromise != null) {
playPromise
.then(() => {
videoRef.current?.pause();
})
.catch(error => {
Logger.error('Failed to pause video', error);
})
.finally(() => {
playPromiseRef.current = null;
});
}
}, []);
useEffect(() => {
return () => {
pause();
};
}, [pause]);
return (
<div
style={{
...style,
backgroundImage: `url(${poster})`,
}}
{...stylex.props(styles.background)}>
<video
ref={videoRef}
{...stylex.props(styles.video)}
preload="none"
playsInline
loop
muted
title="Gallery Video"
poster={poster}
onMouseEnter={play}
onMouseLeave={pause}
onClick={onClick}>
<source src={src} type="video/mp4" />
Sorry, your browser does not support embedded videos.
</video>
</div>
);
}

View File

@@ -0,0 +1,303 @@
/**
* @generated SignedSource<<db7e183e1996cf656749b4e33c2424e6>>
* @lightSyntaxTransform
* @nogrep
*/
/* tslint:disable */
/* eslint-disable */
// @ts-nocheck
import { ConcreteRequest, Query } from 'relay-runtime';
import { FragmentRefs } from "relay-runtime";
export type DemoVideoGalleryModalQuery$variables = Record<PropertyKey, never>;
export type DemoVideoGalleryModalQuery$data = {
readonly " $fragmentSpreads": FragmentRefs<"DatasetsDropdown_datasets" | "VideoGallery_videos">;
};
export type DemoVideoGalleryModalQuery = {
response: DemoVideoGalleryModalQuery$data;
variables: DemoVideoGalleryModalQuery$variables;
};
const node: ConcreteRequest = (function(){
var v0 = [
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "name",
"storageKey": null
}
],
v1 = [
{
"kind": "Literal",
"name": "after",
"value": ""
},
{
"kind": "Literal",
"name": "first",
"value": 20
}
],
v2 = {
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "__typename",
"storageKey": null
};
return {
"fragment": {
"argumentDefinitions": [],
"kind": "Fragment",
"metadata": null,
"name": "DemoVideoGalleryModalQuery",
"selections": [
{
"args": null,
"kind": "FragmentSpread",
"name": "DatasetsDropdown_datasets"
},
{
"args": null,
"kind": "FragmentSpread",
"name": "VideoGallery_videos"
}
],
"type": "Query",
"abstractKey": null
},
"kind": "Request",
"operation": {
"argumentDefinitions": [],
"kind": "Operation",
"name": "DemoVideoGalleryModalQuery",
"selections": [
{
"alias": null,
"args": null,
"concreteType": "DatasetConnection",
"kind": "LinkedField",
"name": "datasets",
"plural": false,
"selections": [
{
"alias": null,
"args": null,
"concreteType": "DatasetEdge",
"kind": "LinkedField",
"name": "edges",
"plural": true,
"selections": [
{
"alias": null,
"args": null,
"concreteType": "Dataset",
"kind": "LinkedField",
"name": "node",
"plural": false,
"selections": (v0/*: any*/),
"storageKey": null
}
],
"storageKey": null
}
],
"storageKey": null
},
{
"alias": null,
"args": (v1/*: any*/),
"concreteType": "VideoConnection",
"kind": "LinkedField",
"name": "videos",
"plural": false,
"selections": [
(v2/*: any*/),
{
"alias": null,
"args": null,
"concreteType": "PageInfo",
"kind": "LinkedField",
"name": "pageInfo",
"plural": false,
"selections": [
(v2/*: any*/),
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "hasPreviousPage",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "hasNextPage",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "startCursor",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "endCursor",
"storageKey": null
}
],
"storageKey": null
},
{
"alias": null,
"args": null,
"concreteType": "VideoEdge",
"kind": "LinkedField",
"name": "edges",
"plural": true,
"selections": [
(v2/*: any*/),
{
"alias": null,
"args": null,
"concreteType": "Video",
"kind": "LinkedField",
"name": "node",
"plural": false,
"selections": [
(v2/*: any*/),
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "id",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "path",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "posterPath",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "url",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "posterUrl",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "width",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "height",
"storageKey": null
},
{
"alias": null,
"args": null,
"concreteType": "Dataset",
"kind": "LinkedField",
"name": "dataset",
"plural": false,
"selections": (v0/*: any*/),
"storageKey": null
},
{
"alias": null,
"args": null,
"concreteType": "VideoPermissions",
"kind": "LinkedField",
"name": "permissions",
"plural": false,
"selections": [
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "canShare",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "canDownload",
"storageKey": null
}
],
"storageKey": null
}
],
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "cursor",
"storageKey": null
}
],
"storageKey": null
}
],
"storageKey": "videos(after:\"\",first:20)"
},
{
"alias": null,
"args": (v1/*: any*/),
"filters": [
"datasetName"
],
"handle": "connection",
"key": "VideoGallery_videos",
"kind": "LinkedHandle",
"name": "videos"
}
]
},
"params": {
"cacheID": "e0bccf553377682e6bc283c2ce53bee5",
"id": null,
"metadata": {},
"name": "DemoVideoGalleryModalQuery",
"operationKind": "query",
"text": "query DemoVideoGalleryModalQuery {\n ...DatasetsDropdown_datasets\n ...VideoGallery_videos\n}\n\nfragment DatasetsDropdown_datasets on Query {\n datasets {\n edges {\n node {\n name\n }\n }\n }\n}\n\nfragment VideoGallery_videos on Query {\n videos(first: 20, after: \"\") {\n __typename\n pageInfo {\n __typename\n hasPreviousPage\n hasNextPage\n startCursor\n endCursor\n }\n edges {\n __typename\n node {\n __typename\n id\n path\n posterPath\n url\n posterUrl\n width\n height\n dataset {\n name\n }\n permissions {\n canShare\n canDownload\n }\n }\n cursor\n }\n }\n}\n"
}
};
})();
(node as any).hash = "d09e34e2b9f2e25c2d564106de5f9c89";
export default node;

View File

@@ -0,0 +1,148 @@
/**
* @generated SignedSource<<20d31a82b5f3b251b0e42b4f0e3522b8>>
* @lightSyntaxTransform
* @nogrep
*/
/* tslint:disable */
/* eslint-disable */
// @ts-nocheck
import { ConcreteRequest, Query } from 'relay-runtime';
export type DemoVideoGalleryQuery$variables = Record<PropertyKey, never>;
export type DemoVideoGalleryQuery$data = {
readonly videos: {
readonly edges: ReadonlyArray<{
readonly node: {
readonly height: number;
readonly id: any;
readonly path: string;
readonly posterPath: string | null | undefined;
readonly posterUrl: string;
readonly url: string;
readonly width: number;
};
}>;
};
};
export type DemoVideoGalleryQuery = {
response: DemoVideoGalleryQuery$data;
variables: DemoVideoGalleryQuery$variables;
};
const node: ConcreteRequest = (function(){
var v0 = [
{
"alias": null,
"args": null,
"concreteType": "VideoConnection",
"kind": "LinkedField",
"name": "videos",
"plural": false,
"selections": [
{
"alias": null,
"args": null,
"concreteType": "VideoEdge",
"kind": "LinkedField",
"name": "edges",
"plural": true,
"selections": [
{
"alias": null,
"args": null,
"concreteType": "Video",
"kind": "LinkedField",
"name": "node",
"plural": false,
"selections": [
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "id",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "path",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "posterPath",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "url",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "posterUrl",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "height",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "width",
"storageKey": null
}
],
"storageKey": null
}
],
"storageKey": null
}
],
"storageKey": null
}
];
return {
"fragment": {
"argumentDefinitions": [],
"kind": "Fragment",
"metadata": null,
"name": "DemoVideoGalleryQuery",
"selections": (v0/*: any*/),
"type": "Query",
"abstractKey": null
},
"kind": "Request",
"operation": {
"argumentDefinitions": [],
"kind": "Operation",
"name": "DemoVideoGalleryQuery",
"selections": (v0/*: any*/)
},
"params": {
"cacheID": "4dae74153a5528f2631b59dfb0adb021",
"id": null,
"metadata": {},
"name": "DemoVideoGalleryQuery",
"operationKind": "query",
"text": "query DemoVideoGalleryQuery {\n videos {\n edges {\n node {\n id\n path\n posterPath\n url\n posterUrl\n height\n width\n }\n }\n }\n}\n"
}
};
})();
(node as any).hash = "d22ac5e58f6e4eb696651be49b410e4e";
export default node;

View File

@@ -0,0 +1,137 @@
/**
* @generated SignedSource<<76014dced98d6c8989e7322712e38963>>
* @lightSyntaxTransform
* @nogrep
*/
/* tslint:disable */
/* eslint-disable */
// @ts-nocheck
import { ConcreteRequest, Mutation } from 'relay-runtime';
export type useUploadVideoMutation$variables = {
file: any;
};
export type useUploadVideoMutation$data = {
readonly uploadVideo: {
readonly height: number;
readonly id: any;
readonly path: string;
readonly posterPath: string | null | undefined;
readonly posterUrl: string;
readonly url: string;
readonly width: number;
};
};
export type useUploadVideoMutation = {
response: useUploadVideoMutation$data;
variables: useUploadVideoMutation$variables;
};
const node: ConcreteRequest = (function(){
var v0 = [
{
"defaultValue": null,
"kind": "LocalArgument",
"name": "file"
}
],
v1 = [
{
"alias": null,
"args": [
{
"kind": "Variable",
"name": "file",
"variableName": "file"
}
],
"concreteType": "Video",
"kind": "LinkedField",
"name": "uploadVideo",
"plural": false,
"selections": [
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "id",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "height",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "width",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "url",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "path",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "posterPath",
"storageKey": null
},
{
"alias": null,
"args": null,
"kind": "ScalarField",
"name": "posterUrl",
"storageKey": null
}
],
"storageKey": null
}
];
return {
"fragment": {
"argumentDefinitions": (v0/*: any*/),
"kind": "Fragment",
"metadata": null,
"name": "useUploadVideoMutation",
"selections": (v1/*: any*/),
"type": "Mutation",
"abstractKey": null
},
"kind": "Request",
"operation": {
"argumentDefinitions": (v0/*: any*/),
"kind": "Operation",
"name": "useUploadVideoMutation",
"selections": (v1/*: any*/)
},
"params": {
"cacheID": "dcbaf1bf411627fdb9dfbb827592cfc0",
"id": null,
"metadata": {},
"name": "useUploadVideoMutation",
"operationKind": "mutation",
"text": "mutation useUploadVideoMutation(\n $file: Upload!\n) {\n uploadVideo(file: $file) {\n id\n height\n width\n url\n path\n posterPath\n posterUrl\n }\n}\n"
}
};
})();
(node as any).hash = "710e462504d76597af8695b7fc70b4cf";
export default node;

View File

@@ -0,0 +1,124 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {useUploadVideoMutation} from '@/common/components/gallery/__generated__/useUploadVideoMutation.graphql';
import Logger from '@/common/logger/Logger';
import {VideoData} from '@/demo/atoms';
import {useState} from 'react';
import {FileRejection, FileWithPath, useDropzone} from 'react-dropzone';
import {graphql, useMutation} from 'react-relay';
const ACCEPT_VIDEOS = {
'video/mp4': ['.mp4'],
'video/quicktime': ['.mov'],
};
// 70 MB default max video upload size
const MAX_FILE_SIZE_IN_MB = 70;
const MAX_VIDEO_UPLOAD_SIZE = MAX_FILE_SIZE_IN_MB * 1024 ** 2;
type Props = {
onUpload: (video: VideoData) => void;
onUploadStart?: () => void;
onUploadError?: (error: Error) => void;
};
export default function useUploadVideo({
onUpload,
onUploadStart,
onUploadError,
}: Props) {
const [error, setError] = useState<string | null>(null);
const [commit, isMutationInFlight] = useMutation<useUploadVideoMutation>(
graphql`
mutation useUploadVideoMutation($file: Upload!) {
uploadVideo(file: $file) {
id
height
width
url
path
posterPath
posterUrl
}
}
`,
);
const {getRootProps, getInputProps} = useDropzone({
accept: ACCEPT_VIDEOS,
multiple: false,
maxFiles: 1,
onDrop: (
acceptedFiles: FileWithPath[],
fileRejections: FileRejection[],
) => {
setError(null);
// Check if any of the files (only 1 file allowed) was rejected. A
// rejected file carries an error code (e.g., 'file-too-large'), which is
// used to render an appropriate message.
if (fileRejections.length > 0 && fileRejections[0].errors.length > 0) {
const code = fileRejections[0].errors[0].code;
if (code === 'file-too-large') {
setError(
`File too large. Try a video under ${MAX_FILE_SIZE_IN_MB} MB`,
);
return;
}
}
if (acceptedFiles.length === 0) {
setError('File not accepted. Please try again.');
return;
}
if (acceptedFiles.length > 1) {
setError('Too many files. Please try again with 1 file.');
return;
}
onUploadStart?.();
const file = acceptedFiles[0];
commit({
variables: {
file,
},
uploadables: {
file,
},
onCompleted: response => onUpload(response.uploadVideo),
onError: error => {
Logger.error(error);
onUploadError?.(error);
setError('Upload failed.');
},
});
},
onError: error => {
Logger.error(error);
setError('File not supported.');
},
maxSize: MAX_VIDEO_UPLOAD_SIZE,
});
return {
getRootProps,
getInputProps,
isUploading: isMutationInFlight,
error,
setError,
};
}

View File

@@ -0,0 +1,29 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
type Props = {
className?: string;
};
export function GitHubIcon({className}: Props) {
return (
<svg viewBox="0 0 24 24" aria-hidden="true" className={className}>
<path
fillRule="evenodd"
clipRule="evenodd"
d="M12 2C6.477 2 2 6.463 2 11.97c0 4.404 2.865 8.14 6.839 9.458.5.092.682-.216.682-.48 0-.236-.008-.864-.013-1.695-2.782.602-3.369-1.337-3.369-1.337-.454-1.151-1.11-1.458-1.11-1.458-.908-.618.069-.606.069-.606 1.003.07 1.531 1.027 1.531 1.027.892 1.524 2.341 1.084 2.91.828.092-.643.35-1.083.636-1.332-2.22-.251-4.555-1.107-4.555-4.927 0-1.088.39-1.979 1.029-2.675-.103-.252-.446-1.266.098-2.638 0 0 .84-.268 2.75 1.022A9.607 9.607 0 0 1 12 6.82c.85.004 1.705.114 2.504.336 1.909-1.29 2.747-1.022 2.747-1.022.546 1.372.202 2.386.1 2.638.64.696 1.028 1.587 1.028 2.675 0 3.83-2.339 4.673-4.566 4.92.359.307.678.915.678 1.846 0 1.332-.012 2.407-.012 2.734 0 .267.18.577.688.48 3.97-1.32 6.833-5.054 6.833-9.458C22 6.463 17.522 2 12 2Z"></path>
</svg>
);
}

View File

@@ -0,0 +1,34 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import {Package} from '@carbon/icons-react';
import OptionButton from './OptionButton';
import useDownloadVideo from './useDownloadVideo';
export default function DownloadOption() {
const {download, state} = useDownloadVideo();
return (
<OptionButton
title="Download"
Icon={Package}
loadingProps={{
loading: state === 'started' || state === 'encoding',
label: 'Downloading...',
}}
onClick={download}
/>
);
}
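`DownloadOption` treats both the `'started'` and `'encoding'` states as in-flight when deciding whether to show the loading spinner. That check can be sketched on its own — the full state union below is an assumption about what `useDownloadVideo` exposes, inferred from the two states the component actually tests:

```typescript
// Hypothetical state union; only 'started' and 'encoding' appear in the
// component above, the remaining members are illustrative.
type DownloadState = 'idle' | 'started' | 'encoding' | 'completed' | 'error';

// The button shows "Downloading..." while the video is being fetched or encoded.
function isDownloadInFlight(state: DownloadState): boolean {
  return state === 'started' || state === 'encoding';
}
```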

@@ -0,0 +1,46 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import ChangeVideoModal from '@/common/components/gallery/ChangeVideoModal';
import type {VideoGalleryTriggerProps} from '@/common/components/gallery/DemoVideoGalleryModal';
import useScreenSize from '@/common/screen/useScreenSize';
import {ImageCopy} from '@carbon/icons-react';
import OptionButton from './OptionButton';
type Props = {
onChangeVideo: () => void;
};
export default function GalleryOption({onChangeVideo}: Props) {
return (
<ChangeVideoModal
videoGalleryModalTrigger={GalleryTrigger}
showUploadInGallery={false}
onChangeVideo={onChangeVideo}
/>
);
}
function GalleryTrigger({onClick}: VideoGalleryTriggerProps) {
const {isMobile} = useScreenSize();
return (
<OptionButton
variant="flat"
title={isMobile ? 'Gallery' : 'Browse gallery'}
Icon={ImageCopy}
onClick={onClick}
/>
);
}

@@ -0,0 +1,57 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import MoreOptionsToolbarBottomActions from '@/common/components/options/MoreOptionsToolbarBottomActions';
import ShareSection from '@/common/components/options/ShareSection';
import TryAnotherVideoSection from '@/common/components/options/TryAnotherVideoSection';
import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar';
import ToolbarHeaderWrapper from '@/common/components/toolbar/ToolbarHeaderWrapper';
import useScreenSize from '@/common/screen/useScreenSize';
import {useEffect, useRef} from 'react';
type Props = {
onTabChange: (newIndex: number) => void;
};
export default function MoreOptionsToolbar({onTabChange}: Props) {
const {isMobile} = useScreenSize();
const {clearMessage} = useMessagesSnackbar();
const didClearMessageSnackbar = useRef(false);
useEffect(() => {
if (!didClearMessageSnackbar.current) {
didClearMessageSnackbar.current = true;
clearMessage();
}
}, [clearMessage]);
return (
<div className="flex flex-col h-full">
<div className="grow">
<ToolbarHeaderWrapper
title="Nice work! What's next?"
className="pb-0 !border-b-0 !text-white"
showProgressChip={false}
/>
<ShareSection />
{!isMobile && <div className="h-[1px] bg-black mt-4 mb-8"></div>}
<TryAnotherVideoSection onTabChange={onTabChange} />
</div>
{!isMobile && (
<MoreOptionsToolbarBottomActions onTabChange={onTabChange} />
)}
</div>
);
}
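`MoreOptionsToolbar` clears the snackbar exactly once on mount, guarding with a ref so that re-runs of the effect (for example, if `clearMessage` changes identity between renders) are no-ops. The same run-once pattern can be sketched outside React — a minimal, framework-free analogue of the `didClearMessageSnackbar` ref guard:

```typescript
// Wraps a callback so it fires on the first invocation only,
// mirroring the ref-guarded useEffect above.
function once<T extends (...args: unknown[]) => void>(fn: T): T {
  let called = false;
  return ((...args: unknown[]) => {
    if (!called) {
      called = true;
      fn(...args);
    }
  }) as T;
}
```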

Some files were not shown because too many files have changed in this diff.