
Electronic data

  • Accepted author manuscript, 460 KB, application/zip

    Available under license: CC BY-NC-SA: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License


Model Leeching: An Extraction Attack Targeting LLMs

Research output: Contribution to conference (without ISBN/ISSN) - Conference paper, peer-reviewed

Published
Publication date: 20/10/2023
Original language: English
Event: Conference on Applied Machine Learning for Information Security - 1000 Wilson Boulevard, 30th Floor, Arlington, United States
Duration: 19/10/2023 - 20/10/2023
https://www.camlis.org/

Conference

Conference: Conference on Applied Machine Learning for Information Security
Abbreviated title: CAMLIS
Country/Territory: United States
City: Arlington
Period: 19/10/23 - 20/10/23
Internet address: https://www.camlis.org/

Abstract

Model Leeching is a novel extraction attack targeting Large Language Models (LLMs), capable of distilling task-specific knowledge from a target LLM into a reduced-parameter model. We demonstrate the effectiveness of our attack by extracting task capability from ChatGPT-3.5-Turbo, achieving 73% Exact Match (EM) similarity and SQuAD EM and F1 accuracy scores of 75% and 87%, respectively, for only $50 in API cost. We further demonstrate the feasibility of adversarial attack transferability by using a model extracted via Model Leeching to stage ML attacks against a target LLM, yielding an 11% increase in attack success rate when applied to ChatGPT-3.5-Turbo.
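The EM and F1 scores quoted above are the standard SQuAD evaluation metrics. As a minimal sketch (not the authors' code), they can be computed per question-answer pair like this, following the usual SQuAD normalization of lower-casing, stripping punctuation and articles, and collapsing whitespace:

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Apply SQuAD-style answer normalization:
    lower-case, drop punctuation, drop articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Averaged over a dataset, these per-example scores give the corpus-level EM and F1 percentages; comparing the extracted model's answers against the target LLM's answers (rather than the gold labels) gives the EM *similarity* figure.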