Semantic change detection (SCD) involves the simultaneous extraction of changed regions and their corresponding semantic classifications (pre- and post-change) in remote sensing images (RSIs). Despite recent advances in vision foundation models (VFMs), the fast segment anything model (FastSAM) has shown insufficient performance in SCD. In this article, we propose a novel VFM-based architecture for SCD, designated VFM-ReSCD. The architecture integrates a side adapter (SA) to fine-tune the FastSAM network, enabling zero-shot transfer to novel image distributions and tasks and facilitating the extraction of spatial features from very high-resolution (VHR) RSIs. Moreover, we introduce a recurrent neural network (RNN) to model semantic correlation and capture feature changes. We evaluated the proposed method on two benchmark RSI datasets; extensive experiments show that it achieves state-of-the-art (SOTA) performance, outperforming existing CNN-based approaches.
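To illustrate the general pattern described above, the following PyTorch sketch combines a frozen backbone (a stand-in for the FastSAM encoder), a small trainable side adapter, and a GRU that relates the bi-temporal features per pixel. All module names, layer choices, and dimensions here are hypothetical simplifications for exposition, not the authors' implementation:

```python
# Illustrative sketch only: frozen backbone + trainable side adapter + RNN
# over bi-temporal features. Not the actual VFM-ReSCD implementation.
import torch
import torch.nn as nn

class SideAdapterSCD(nn.Module):
    def __init__(self, feat_dim=32, hidden_dim=32, num_classes=6):
        super().__init__()
        # Stand-in for a frozen FastSAM-style encoder (kept fixed).
        self.backbone = nn.Conv2d(3, feat_dim, 3, padding=1)
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Lightweight trainable side adapter on a residual side path.
        self.adapter = nn.Conv2d(feat_dim, feat_dim, 1)
        # GRU over the two temporal states at each pixel.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Conv2d(hidden_dim, num_classes, 1)

    def encode(self, x):
        with torch.no_grad():
            f = self.backbone(x)
        return f + self.adapter(f)  # backbone features + adapter refinement

    def forward(self, x_t1, x_t2):
        f1, f2 = self.encode(x_t1), self.encode(x_t2)
        b, c, h, w = f1.shape
        # Treat (t1, t2) as a length-2 sequence for every pixel.
        seq = torch.stack([f1, f2], dim=1)            # (B, 2, C, H, W)
        seq = seq.permute(0, 3, 4, 1, 2).reshape(b * h * w, 2, c)
        out, _ = self.rnn(seq)                        # (B*H*W, 2, hidden)
        out = out[:, -1].reshape(b, h, w, -1).permute(0, 3, 1, 2)
        return self.head(out)                         # per-pixel class logits

model = SideAdapterSCD()
logits = model(torch.randn(1, 3, 16, 16), torch.randn(1, 3, 16, 16))
print(logits.shape)  # torch.Size([1, 6, 16, 16])
```

Only the adapter, RNN, and classification head receive gradients, so fine-tuning stays cheap while the frozen backbone supplies general-purpose spatial features.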