Large Language Models for Anomaly Detection in Computational Workflows: From Supervised Fine-Tuning to In-Context Learning

Bibliographic Details
Published in: SC24: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-17
Main Authors: Jin, Hongwei, Papadimitriou, George, Raghavan, Krishnan, Zuk, Pawel, Balaprakash, Prasanna, Wang, Cong, Mandal, Anirban, Deelman, Ewa
Format: Conference Proceeding
Language: English
Published: IEEE, 17.11.2024
Description
Summary: Anomaly detection in computational workflows is critical for ensuring system reliability and security. However, traditional rule-based methods struggle to detect novel anomalies. This paper leverages large language models (LLMs) for workflow anomaly detection by exploiting their ability to learn complex data patterns. Two approaches are investigated: (1) supervised fine-tuning (SFT), where pretrained LLMs are fine-tuned on labeled data for sentence classification to identify anomalies, and (2) in-context learning (ICL), where prompts containing task descriptions and examples guide LLMs in few-shot anomaly detection without fine-tuning. The paper evaluates the performance, efficiency, and generalization of SFT models, and explores zero-shot and few-shot ICL prompts as well as interpretability enhancement via chain-of-thought prompting. Experiments across multiple workflow datasets demonstrate the promising potential of LLMs for effective anomaly detection in complex executions.
DOI: 10.1109/SC41406.2024.00098
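
The two approaches described in the summary can be illustrated with a minimal, hypothetical sketch. It assumes a Hugging Face-style sequence-classification model for SFT and a plain few-shot prompt string for ICL; the model name, feature fields, labels, and prompt wording are illustrative assumptions, not the authors' implementation or data.

    # Minimal sketch of the two approaches in the abstract (illustrative only;
    # model name, feature fields, and prompts are assumptions, not the paper's code).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # --- (1) Supervised fine-tuning (SFT): treat a workflow-job record as a
    #     "sentence" and classify it as normal vs. anomalous. ---
    MODEL = "distilbert-base-uncased"  # hypothetical base model
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

    record = "job=genome_merge runtime=512s cpu=0.97 io_wait=41% exit_code=0"
    inputs = tokenizer(record, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits   # fine-tuning on labeled records would precede inference
    pred = logits.argmax(dim=-1).item()   # 0 = normal, 1 = anomalous (label convention assumed)

    # --- (2) In-context learning (ICL): a few-shot prompt with a task description,
    #     labeled examples, and a chain-of-thought instruction; no fine-tuning. ---
    few_shot_prompt = """You are given workflow execution records. Label each as NORMAL or ANOMALOUS.

    Record: job=align_reads runtime=98s cpu=0.91 io_wait=3% exit_code=0
    Label: NORMAL

    Record: job=align_reads runtime=2310s cpu=0.12 io_wait=88% exit_code=0
    Label: ANOMALOUS

    Record: {record}
    Think step by step about runtime, CPU use, and I/O wait, then give the label.
    Label:"""

    # The prompt would be sent to an instruction-tuned LLM (API or local model); omitted here.
    print(few_shot_prompt.format(record=record))

The two snippets correspond to the paper's two settings only at a schematic level: the SFT path requires a labeled training pass before the shown inference step, while the ICL path relies entirely on the prompt and the chosen examples.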