
Electronic data

  • 2410.04960v3

    Accepted author manuscript, 4.88 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:


On Efficient Variants of Segment Anything Model: A Survey

Research output: Contribution to Journal/Magazine › Journal article › peer-review

E-pub ahead of print
  • X. Sun
  • J. Liu
  • H. Shen
  • X. Zhu
  • P. Hu
Journal publication date: 30/07/2025
Journal: International Journal of Computer Vision
Publication status: E-pub ahead of print
Early online date: 30/07/2025
Original language: English

Abstract

The Segment Anything Model (SAM) is a foundational model for image segmentation tasks, known for its strong generalization across diverse applications. However, its impressive performance comes with significant computational and resource demands, making it challenging to deploy in resource-limited environments such as edge devices. To address this, a variety of SAM variants have been proposed to enhance efficiency while maintaining accuracy. This survey provides the first comprehensive review of these efficient SAM variants. We begin by exploring the motivations driving this research. We then present the core techniques used in SAM and in model acceleration. This is followed by a detailed exploration of SAM acceleration strategies, categorized by approach, and a discussion of several future research directions. Finally, we offer a unified and extensive evaluation of these methods across various hardware, assessing their efficiency and accuracy on representative benchmarks, and providing a clear comparison of their overall performance. To complement this survey, we summarize the papers and code related to efficient SAM variants at https://github.com/Image-and-Video-Computing-Group/On-Efficient-Variants-of-Segment-Anything-Model.

Bibliographic note

Export Date: 18 August 2025; Cited By: 0