DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature

Bibliographic Details
Title: DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature
Authors: Mitchell, Eric; Lee, Yoonho; Khazatsky, Alexander; Manning, Christopher D.; Finn, Chelsea
Publication Information: 2023-01-26; 2023-07-23
Publication Type: Electronic Resource
Abstract: The increasing fluency and widespread usage of large language models (LLMs) highlight the desirability of corresponding tools aiding detection of LLM-generated text. In this paper, we identify a property of the structure of an LLM's probability function that is useful for such detection. Specifically, we demonstrate that text sampled from an LLM tends to occupy negative curvature regions of the model's log probability function. Leveraging this observation, we then define a new curvature-based criterion for judging if a passage is generated from a given LLM. This approach, which we call DetectGPT, does not require training a separate classifier, collecting a dataset of real or generated passages, or explicitly watermarking generated text. It uses only log probabilities computed by the model of interest and random perturbations of the passage from another generic pre-trained language model (e.g., T5). We find DetectGPT is more discriminative than existing zero-shot methods for model sample detection, notably improving detection of fake news articles generated by 20B parameter GPT-NeoX from 0.81 AUROC for the strongest zero-shot baseline to 0.95 AUROC for DetectGPT. See https://ericmitchell.ai/detectgpt for code, data, and other project information.
Comment: ICML 2023
Index Terms: Computer Science - Computation and Language; Computer Science - Artificial Intelligence; text
URL: http://arxiv.org/abs/2301.11305
Availability: Open access content.
Other Numbers: COO oai:arXiv.org:2301.11305; 1381598066
Original Source: CORNELL UNIV
From OAIster®, provided by the OCLC Cooperative.
Document Code: edsoai.on1381598066
Database: OAIster
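The abstract's curvature-based criterion can be illustrated with a minimal sketch. DetectGPT compares the log probability of the original passage against the average log probability of randomly perturbed variants (produced by a model such as T5); machine-generated text tends to sit at a local maximum, so the original scores noticeably higher than its perturbations. The sketch below assumes the log probabilities have already been computed by the model of interest; the function name and example values are illustrative, not from the paper's released code.

```python
import statistics

def detectgpt_score(logp_original, logp_perturbed):
    """Normalized perturbation discrepancy: how much more likely the
    original passage is than its perturbed variants under the scored
    model. Large positive values suggest the passage was sampled from
    that model (a negative-curvature region of its log probability)."""
    mu = statistics.mean(logp_perturbed)
    sigma = statistics.stdev(logp_perturbed)
    return (logp_original - mu) / sigma

# Hypothetical log probabilities for one passage and four perturbations.
score = detectgpt_score(-120.0, [-135.0, -140.0, -132.0, -138.0])
print(score)  # ≈ 4.64: the original is much more likely than its perturbations
```

In practice, a threshold on this score (calibrated on held-out text) decides whether the passage is flagged as model-generated; no classifier training or watermarking is required.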