Research output: Contribution to Journal/Magazine › Conference article › peer-review
TY - JOUR
T1 - Standardization of Behavioral Use Clauses is Necessary for the Adoption of Responsible Licensing of AI
AU - McDuff, Daniel
AU - Korjakow, Tim
AU - Cambo, Scott
AU - Benjamin, Jesse Josua
AU - Lee, Jenny
AU - Jernite, Yacine
AU - Ferrandis, Carlos Muñoz
AU - Gokaslan, Aaron
AU - Tarkowski, Alek
AU - Lindley, Joseph
AU - Cooper, A. Feder
AU - Contractor, Danish
PY - 2024/7/21
Y1 - 2024/7/21
N2 - Growing concerns over negligent or malicious uses of AI have increased the appetite for tools that help manage the risks of the technology. In 2018, licenses with behavioral-use clauses (commonly referred to as Responsible AI Licenses) were proposed to give developers a framework for releasing AI assets while specifying restrictions on use to mitigate negative applications. As of the end of 2023, on the order of 40,000 software and model repositories have adopted responsible AI licenses. Notable models licensed with behavioral use clauses include BLOOM (language), LLaMA2 (language), Stable Diffusion (image), and GRID (robotics). This paper explores why and how these licenses have been adopted, and why and how they have been adapted to fit particular use cases. We use a mixed-methods methodology of qualitative interviews, clustering of license clauses, and quantitative analysis of license adoption. Based on this evidence, we take the position that responsible AI licenses need standardization to avoid confusing users or diluting their impact. At the same time, customization of behavioral restrictions is also appropriate in some contexts (e.g., medical domains). We advocate for “standardized customization” that can meet users' needs and can be supported via tooling.
AB - Growing concerns over negligent or malicious uses of AI have increased the appetite for tools that help manage the risks of the technology. In 2018, licenses with behavioral-use clauses (commonly referred to as Responsible AI Licenses) were proposed to give developers a framework for releasing AI assets while specifying restrictions on use to mitigate negative applications. As of the end of 2023, on the order of 40,000 software and model repositories have adopted responsible AI licenses. Notable models licensed with behavioral use clauses include BLOOM (language), LLaMA2 (language), Stable Diffusion (image), and GRID (robotics). This paper explores why and how these licenses have been adopted, and why and how they have been adapted to fit particular use cases. We use a mixed-methods methodology of qualitative interviews, clustering of license clauses, and quantitative analysis of license adoption. Based on this evidence, we take the position that responsible AI licenses need standardization to avoid confusing users or diluting their impact. At the same time, customization of behavioral restrictions is also appropriate in some contexts (e.g., medical domains). We advocate for “standardized customization” that can meet users' needs and can be supported via tooling.
M3 - Conference article
AN - SCOPUS:85203786628
VL - 235
SP - 35255
EP - 35266
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
SN - 1938-7228
T2 - 41st International Conference on Machine Learning, ICML 2024
Y2 - 21 July 2024 through 27 July 2024
ER -