---
license: cc-by-4.0
configs:
  - config_name: analysis_questions
    data_files: analysis_questions.csv
    default: true
  - config_name: multiple_choice
    data_files: multiple_choice.csv
task_categories:
  - time-series-forecasting
---

# TSAIA: Time Series AI Assistant

[Paper](https://arxiv.org/abs/2509.01822) | [Code](https://github.com/USC-Melady/TSAIA)

## About

The rapid advancement of Large Language Models (LLMs) has sparked growing interest in their application to time series analysis tasks. However, their ability to perform complex reasoning over temporal data in real-world application domains remains underexplored. To move toward this goal, a first step is to establish a rigorous benchmark dataset for evaluation. In this work, we introduce the TSAIA Benchmark, a first attempt to evaluate LLMs as time-series AI assistants. To ensure both scientific rigor and practical relevance, we surveyed over 20 academic publications and identified 33 real-world task formulations. The benchmark encompasses a broad spectrum of challenges, ranging from constraint-aware forecasting to anomaly detection with threshold calibration: tasks that require compositional reasoning and multi-step time series analysis. The question generator is designed to be dynamic and extensible, supporting continuous expansion as new datasets or task types are introduced. Given the heterogeneous nature of the tasks, we adopt task-specific success criteria and tailored inference-quality metrics to ensure meaningful evaluation for each task. We apply this benchmark to assess eight state-of-the-art LLMs under a unified evaluation protocol. Our analysis reveals limitations in current models' ability to assemble complex time series analysis workflows, underscoring the need for specialized methodologies for domain-specific adaptation.


## 📚 Dataset Structure

The dataset comprises two subsets:

- `analysis_questions`: 904 samples
- `multiple_choice`: 150 samples
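
For a quick look at the raw tables, both CSVs can be read directly with pandas (a minimal sketch; the `hf://` paths assume the files sit at the repository root, as the config above indicates, and require the `huggingface_hub` package to be installed):

```python
import pandas as pd

# Read the raw CSV files straight from the Hugging Face Hub.
analysis = pd.read_csv("hf://datasets/Melady/TSAIA/analysis_questions.csv")
mc = pd.read_csv("hf://datasets/Melady/TSAIA/multiple_choice.csv")

print(len(analysis))  # 904 samples
print(len(mc))        # 150 samples
```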

Fields in `analysis_questions`:

- `question_id`: Unique identifier for each question
- `question_type`: Type of question (e.g., `easy_stock-future price`)
- `prompt`: Natural language description of the task
- `data_str`: Embedded time series data (typically stock prices)
- `executor_variables`: Definitions of variables available for model execution
- `ground_truth_data`: Reference answer or target output
- `context`: Contextual information for the task
- `constraint`: Constraints on output format or variable naming

Fields in `multiple_choice`:

- `question_id`: Unique identifier for each question
- `question_type`: Type of question
- `prompt`: Natural language description of the task
- `options`: A list of multiple-choice options
- `answer`: The correct option(s)
- `data_info`: Description of the data
- `answer_info`: Description of the answer
- `executor_variables`: Definitions of variables available for model execution
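
Alternatively, both subsets can be loaded through the `datasets` library using the config names defined in the metadata above (a minimal sketch; single-CSV configs load as a single `train` split by default):

```python
from datasets import load_dataset

# "analysis_questions" is the default config; each config loads as one train split.
analysis = load_dataset("Melady/TSAIA", "analysis_questions", split="train")
mc = load_dataset("Melady/TSAIA", "multiple_choice", split="train")

# Inspect the fields of the first analysis question.
sample = analysis[0]
print(sample["question_id"], sample["question_type"])
print(sample["prompt"][:200])
```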

## ⚡ Sample Usage

This section provides a quick-start guide for using the TSAIA dataset with its associated code. For more comprehensive instructions and troubleshooting, please refer to the [GitHub repository](https://github.com/USC-Melady/TSAIA).

### 1. Clone the Repository

```bash
git clone https://github.com/USC-Melady/TSAIA.git
cd TSAIA
```

### 2. Set up Conda Environment

```bash
conda env create -f agent_environment.yml
conda activate agentenv
```

### 3. Configure API Keys

Create a secure API key file `my_api_keys.py` by running:

```bash
printf '%s\n' \
  '# my_api_keys.py  --  DO NOT COMMIT TO GIT' \
  '# Fill in your API Token' \
  '' \
  'OPENAI_API_KEY = "your_openai_key_here"' \
  'DEEPSEEK_API_KEY = "your_deepseek_key_here"' \
  'GEMINI_API_KEY = "your_gemini_key_here"' \
  'QWEN_API_KEY = "your_qwen_key_here"' \
  'CODESTRAL_API_KEY = "your_codestral_key_here"' \
  'MISTRAL_API_KEY = "your_mistral_key_here"' \
  'CLAUDE_API_KEY = "your_claude_key_here"' \
  'LLAMA_API_KEY = "your_llama_key_here"' \
  > my_api_keys.py && chmod 600 my_api_keys.py
```

⚠️ **Important:** Replace the placeholders (`your_..._key_here`) in `my_api_keys.py` with your actual API keys. Ensure this file is never committed to git.

### 4. Download Raw Dataset

This command downloads the raw dataset from our Hugging Face repository ([Melady/TSAIA](https://huggingface.co/datasets/Melady/TSAIA)) into the directory `./data/raw_data`.

```bash
git clone https://huggingface.co/datasets/Melady/TSAIA ./data/raw_data
```

### 5. Generate Pickle Files

Running `get_pkl.py` converts the raw dataset downloaded in the previous step into two `.pkl` files and stores them under the `./data` directory.

```bash
python get_pkl.py
```
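
To sanity-check the conversion, the generated files can be opened with the standard `pickle` module (a minimal sketch; the exact structure of the pickled objects is determined by `get_pkl.py`, so the record layout is an assumption based on the CSV schema):

```python
import pickle

# Load the questions produced by get_pkl.py.
with open("data/analysis_questions.pkl", "rb") as f:
    questions = pickle.load(f)

# 904 analysis questions are expected; each record should mirror the CSV fields.
print(f"Loaded {len(questions)} analysis questions")
print(questions[0])
```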

## 🔬 Running Experiments

Activate your conda environment (`agentenv`) before running experiments. The following two commands use the GPT-4o model. Ensure that you have set a valid OpenAI API key for `OPENAI_API_KEY` in your `my_api_keys.py` file.

- **Analysis Questions:**

  ```bash
  python3 -u static_query_codeact.py --question_path data/analysis_questions.pkl --model_id gpt-4o
  ```

- **Multiple Choice Questions:**

  ```bash
  python3 -u static_query_CodeAct_mc.py --question_path data/multiple_choice.pkl --model_id gpt-4o
  ```

## 📄 License

This dataset is licensed under the CC BY 4.0 license. You are free to use and distribute it, provided appropriate credit is given.

## 🤝 Citation and Contribution

If you find this dataset helpful or use it in your research or projects, please cite it accordingly.

```bibtex
@article{ye2025llm,
  title={When LLM Meets Time Series: Can LLMs Perform Multi-Step Time Series Reasoning and Inference},
  author={Ye, Wen and Liu, Jinbo and Cao, Defu and Yang, Wei and Liu, Yan},
  journal={arXiv preprint arXiv:2509.01822},
  year={2025}
}
```

Contributions are welcome! Feel free to submit pull requests or open issues to suggest improvements or add new task samples.