arXiv:2511.01144

AthenaBench: A Dynamic Benchmark for Evaluating LLMs in Cyber Threat Intelligence

Published on Nov 3 · Submitted by Tanvirul Alam on Nov 4

Abstract

AthenaBench, an enhanced benchmark for evaluating LLMs in CTI, reveals limitations in the reasoning capabilities of current models, especially on tasks such as threat actor attribution and risk mitigation.

AI-generated summary

Large Language Models (LLMs) have demonstrated strong capabilities in natural language reasoning, yet their application to Cyber Threat Intelligence (CTI) remains limited. CTI analysis involves distilling large volumes of unstructured reports into actionable knowledge, a process where LLMs could substantially reduce analyst workload. CTIBench introduced a comprehensive benchmark for evaluating LLMs across multiple CTI tasks. In this work, we extend CTIBench by developing AthenaBench, an enhanced benchmark that includes an improved dataset creation pipeline, duplicate removal, refined evaluation metrics, and a new task focused on risk mitigation strategies. We evaluate twelve LLMs, including state-of-the-art proprietary models such as GPT-5 and Gemini-2.5 Pro, alongside seven open-source models from the LLaMA and Qwen families. While proprietary LLMs achieve stronger results overall, their performance remains subpar on reasoning-intensive tasks, such as threat actor attribution and risk mitigation, with open-source models trailing even further behind. These findings highlight fundamental limitations in the reasoning capabilities of current LLMs and underscore the need for models explicitly tailored to CTI workflows and automation.
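One of the pipeline improvements mentioned above is duplicate removal. As a rough illustration (not the authors' actual pipeline), deduplication of benchmark items is often done by hashing normalized text and keeping the first occurrence; the record format and the "question" field name below are hypothetical.

    import hashlib

    def normalize(text: str) -> str:
        # Lowercase and collapse whitespace so trivially reworded
        # copies hash to the same key.
        return " ".join(text.lower().split())

    def dedupe(records):
        # Keep the first occurrence of each normalized question;
        # assumes each record is a dict with a "question" field.
        seen, unique = set(), []
        for rec in records:
            key = hashlib.sha256(normalize(rec["question"]).encode()).hexdigest()
            if key not in seen:
                seen.add(key)
                unique.append(rec)
        return unique

Exact-hash deduplication like this only catches verbatim or whitespace-level copies; the paper's pipeline may well use a stronger notion of similarity.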

Community

Paper submitter

This is the updated version of our NeurIPS 2024 work, CTIBench. It includes an additional task, an improved data-construction pipeline, and more recent datasets with refined evaluation.

Code and dataset: https://github.com/Athena-Software-Group/athenabench

Please use this version of the dataset going forward.
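A minimal way to get started, assuming the dataset ships as JSON files in the repository above (the directory and file names here are illustrative; check the repo for the actual layout):

    import json
    from pathlib import Path

    # After: git clone https://github.com/Athena-Software-Group/athenabench
    repo = Path("athenabench")

    # Hypothetical file name; substitute the real one from the repo.
    with open(repo / "data" / "cti_tasks.json") as f:
        examples = json.load(f)

    print(f"Loaded {len(examples)} examples")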

