Understanding Secret Use of Large Language Models

CHI 2021

Advancements in Large Language Models (LLMs) have decentralized the responsibility for transparency in AI usage. Specifically, LLM users are now encouraged or required to disclose the use of LLM-generated content for various types of real-world tasks. However, an emerging phenomenon, users' secret use of LLMs, raises challenges in ensuring that end users adhere to transparency requirements. Our study used a mixed-methods approach, combining an exploratory survey (125 real-world secret use cases reported) and a controlled experiment among 300 users, to investigate the contexts and causes behind the secret use of LLMs. We found that such secretive behavior is often triggered by certain tasks, transcending demographic and personality differences among users. Task type was found to affect users' intention to use LLMs secretly, primarily by influencing perceived external judgment of LLM usage. Our results yield important insights for future work on designing interventions to encourage more transparent disclosure of the use of LLMs or other AI technologies.

Chenxinran Elise Shen
PhD Student

I am interested in Virtual Reality (VR) and Artificial Intelligence (AI). My enthusiasm lies in exploring how VR and AI-mediated technologies can influence an individual’s perception and behavior, and the potential of this influence to enhance communication among people.