From f776d247b9f006b3c2bba1ec83503017ee2bdee6 Mon Sep 17 00:00:00 2001
From: rentainhe <596106517@qq.com>
Date: Tue, 29 Oct 2024 10:43:26 +0800
Subject: [PATCH] refine README

---
 README.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/README.md b/README.md
index 401333f..20438f3 100644
--- a/README.md
+++ b/README.md
@@ -19,8 +19,6 @@ In this repo, we've supported the following demo with **simple implementations**
 
 Grounded SAM 2 does not introduce significant methodological changes compared to [Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks](https://arxiv.org/abs/2401.14159). Both approaches leverage the capabilities of open-world models to address complex visual tasks. Consequently, we try to **simplify the code implementation** in this repository, aiming to enhance user convenience.
 
-[![Video Name](./assets/grounded_sam_2_intro.jpg)](https://github.com/user-attachments/assets/f0fb0022-779a-49fb-8f46-3a18a8b4e893)
-
 ## Latest updates
 
 - `2024/10/24`: Support [SAHI (Slicing Aided Hyper Inference)](https://docs.ultralytics.com/guides/sahi-tiled-inference/) on Grounded SAM 2 (with Grounding DINO 1.5) which may be helpful for inferencing high resolution image with dense small objects (e.g. **4K** images).
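For readers unfamiliar with the SAHI update mentioned in the patched README, the idea behind slicing-aided inference can be sketched as follows: split a high-resolution image into overlapping tiles, detect on each tile, shift boxes back to full-image coordinates, then merge duplicates across tile seams. This is a minimal self-contained sketch, not the actual Grounded SAM 2 / SAHI implementation; the tiling and NMS parameters (`slice_size`, `overlap`, `iou_thresh`) and the detection format are illustrative assumptions.

```python
def make_slices(width, height, slice_size=512, overlap=0.2):
    """Return (x0, y0, x1, y1) windows that tile the image with overlap."""
    step = max(1, int(slice_size * (1 - overlap)))
    xs = list(range(0, max(width - slice_size, 0) + 1, step))
    ys = list(range(0, max(height - slice_size, 0) + 1, step))
    # Make sure the right and bottom edges are covered.
    if xs[-1] + slice_size < width:
        xs.append(width - slice_size)
    if ys[-1] + slice_size < height:
        ys.append(height - slice_size)
    return [(x, y, min(x + slice_size, width), min(y + slice_size, height))
            for y in ys for x in xs]

def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def merge_detections(dets, iou_thresh=0.5):
    """Greedy NMS over (box, score) pairs in full-image coordinates."""
    kept = []
    for box, score in sorted(dets, key=lambda d: d[1], reverse=True):
        if all(iou(box, kb) < iou_thresh for kb, _ in kept):
            kept.append((box, score))
    return kept

def sliced_inference(image_size, detect, slice_size=512, overlap=0.2):
    """Run a per-tile detector and merge results.

    `detect(window)` is a hypothetical stand-in for a real detector such
    as Grounding DINO; it returns (box, score) pairs in tile-local
    coordinates, which are shifted back to the full image here.
    """
    width, height = image_size
    all_dets = []
    for x0, y0, x1, y1 in make_slices(width, height, slice_size, overlap):
        for (bx0, by0, bx1, by1), score in detect((x0, y0, x1, y1)):
            all_dets.append(((bx0 + x0, by0 + y0, bx1 + x0, by1 + y0), score))
    return merge_detections(all_dets)
```

The overlap between tiles is what lets small objects near a seam be detected whole in at least one tile; the final NMS pass then removes the duplicate boxes that the same object produces in neighboring tiles.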