Electronic data

  • 2410.04960v3

    Accepted author manuscript, 4.88 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

On Efficient Variants of Segment Anything Model: A Survey

Research output: Contribution to Journal/Magazine › Journal article › peer-review

E-pub ahead of print

Standard

On Efficient Variants of Segment Anything Model: A Survey. / Sun, X.; Liu, J.; Shen, H. et al.
In: International Journal of Computer Vision, 30.07.2025.

Research output: Contribution to Journal/MagazineJournal articlepeer-review

Harvard

Sun, X, Liu, J, Shen, H, Zhu, X & Hu, P 2025, 'On Efficient Variants of Segment Anything Model: A Survey', International Journal of Computer Vision. https://doi.org/10.1007/s11263-025-02539-8

APA

Sun, X., Liu, J., Shen, H., Zhu, X., & Hu, P. (2025). On Efficient Variants of Segment Anything Model: A Survey. International Journal of Computer Vision. Advance online publication. https://doi.org/10.1007/s11263-025-02539-8

Vancouver

Sun X, Liu J, Shen H, Zhu X, Hu P. On Efficient Variants of Segment Anything Model: A Survey. International Journal of Computer Vision. 2025 Jul 30. Epub 2025 Jul 30. doi: 10.1007/s11263-025-02539-8

Author

Sun, X.; Liu, J.; Shen, H. et al. / On Efficient Variants of Segment Anything Model: A Survey. In: International Journal of Computer Vision. 2025.

Bibtex

@article{7ddd7bd51040410bb605c2f9ee4a79d8,
title = "On Efficient Variants of Segment Anything Model: A Survey",
abstract = "The Segment Anything Model (SAM) is a foundational model for image segmentation tasks, known for its strong generalization across diverse applications. However, its impressive performance comes with significant computational and resource demands, making it challenging to deploy in resource-limited environments such as edge devices. To address this, a variety of SAM variants have been proposed to enhance efficiency while keeping accuracy. This survey provides the first comprehensive review of these efficient SAM variants. We begin by exploring the motivations driving this research. We then present core techniques used in SAM and model acceleration. This is followed by a detailed exploration of SAM acceleration strategies, categorized by approach, and a discussion of several future research directions. Finally, we offer a unified and extensive evaluation of these methods across various hardware, assessing their efficiency and accuracy on representative benchmarks, and providing a clear comparison of their overall performance. To complement this survey, we summarize the papers and codes related to efficient SAM variants at https://github.com/Image-and-Video-Computing-Group/On-Efficient-Variants-of-Segment-Anything-Model.",
author = "X. Sun and J. Liu and H. Shen and X. Zhu and P. Hu",
note = "Export Date: 18 August 2025; Cited By: 0",
year = "2025",
month = jul,
day = "30",
doi = "10.1007/s11263-025-02539-8",
language = "English",
journal = "International Journal of Computer Vision",
issn = "0920-5691",
publisher = "Springer Netherlands",
}

RIS

TY - JOUR

T1 - On Efficient Variants of Segment Anything Model

T2 - A Survey

AU - Sun, X.

AU - Liu, J.

AU - Shen, H.

AU - Zhu, X.

AU - Hu, P.

N1 - Export Date: 18 August 2025; Cited By: 0

PY - 2025/7/30

Y1 - 2025/7/30

N2 - The Segment Anything Model (SAM) is a foundational model for image segmentation tasks, known for its strong generalization across diverse applications. However, its impressive performance comes with significant computational and resource demands, making it challenging to deploy in resource-limited environments such as edge devices. To address this, a variety of SAM variants have been proposed to enhance efficiency while keeping accuracy. This survey provides the first comprehensive review of these efficient SAM variants. We begin by exploring the motivations driving this research. We then present core techniques used in SAM and model acceleration. This is followed by a detailed exploration of SAM acceleration strategies, categorized by approach, and a discussion of several future research directions. Finally, we offer a unified and extensive evaluation of these methods across various hardware, assessing their efficiency and accuracy on representative benchmarks, and providing a clear comparison of their overall performance. To complement this survey, we summarize the papers and codes related to efficient SAM variants at https://github.com/Image-and-Video-Computing-Group/On-Efficient-Variants-of-Segment-Anything-Model.

AB - The Segment Anything Model (SAM) is a foundational model for image segmentation tasks, known for its strong generalization across diverse applications. However, its impressive performance comes with significant computational and resource demands, making it challenging to deploy in resource-limited environments such as edge devices. To address this, a variety of SAM variants have been proposed to enhance efficiency while keeping accuracy. This survey provides the first comprehensive review of these efficient SAM variants. We begin by exploring the motivations driving this research. We then present core techniques used in SAM and model acceleration. This is followed by a detailed exploration of SAM acceleration strategies, categorized by approach, and a discussion of several future research directions. Finally, we offer a unified and extensive evaluation of these methods across various hardware, assessing their efficiency and accuracy on representative benchmarks, and providing a clear comparison of their overall performance. To complement this survey, we summarize the papers and codes related to efficient SAM variants at https://github.com/Image-and-Video-Computing-Group/On-Efficient-Variants-of-Segment-Anything-Model.

U2 - 10.1007/s11263-025-02539-8

DO - 10.1007/s11263-025-02539-8

M3 - Journal article

JO - International Journal of Computer Vision

JF - International Journal of Computer Vision

SN - 0920-5691

ER -