---
license: apache-2.0
datasets:
  - StudyPal/education
language:
  - hr
  - en
base_model:
  - Qwen/Qwen2.5-32B
library_name: transformers
tags:
  - education
  - croatian
  - qwen2
  - fine-tuned
  - study-assistant
---

# StudyPal-LLM-1.0

A fine-tuned Croatian educational assistant based on Qwen/Qwen2.5-32B, designed to help students with learning and study materials.

## Model Details

### Model Description

StudyPal-LLM-1.0 is a large language model fine-tuned specifically for educational purposes in Croatian. The model excels at generating educational content, answering study questions, creating flashcards, and providing learning assistance.

- **Developed by:** aerodynamics21
- **Model type:** Causal language model
- **Language(s):** Croatian (primary), English (secondary)
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen2.5-32B
- **Parameters:** 32.8B

### Model Sources

- **Repository:** https://huggingface.co/aerodynamics21/StudyPal-LLM-1.0

## Uses

### Direct Use

This model is designed for educational applications:

- Generating study materials in Croatian
- Creating flashcards and quiz questions
- Providing explanations of complex topics
- Assisting with homework and learning

### Usage Examples

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 32B model in half precision across available GPUs
model = AutoModelForCausalLM.from_pretrained(
    "aerodynamics21/StudyPal-LLM-1.0",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("aerodynamics21/StudyPal-LLM-1.0")

# Generate educational content ("Explain the concept of photosynthesis:")
prompt = "Objasni koncept fotosinteze:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
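
Because the base model comes from the Qwen2.5 family, the tokenizer typically ships a chat template. Assuming this fine-tune keeps it, chat-style prompting would look like the sketch below (the Croatian messages are illustrative, and `model`/`tokenizer` are reused from the snippet above):

```python
# Chat-style prompting via the tokenizer's chat template (assumed present).
messages = [
    # "You are StudyPal, a study assistant."
    {"role": "system", "content": "Ti si StudyPal, asistent za učenje."},
    # "Make three flashcards about photosynthesis."
    {"role": "user", "content": "Napravi tri kartice za učenje o fotosintezi."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```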

### API Usage

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/aerodynamics21/StudyPal-LLM-1.0"
your_token = "hf_..."  # your Hugging Face access token
headers = {"Authorization": f"Bearer {your_token}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# "Create a quiz about Croatian history:"
output = query({"inputs": "Stvori kviz o hrvatskoj povijesti:"})
print(output)
```
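
The serverless Inference API also accepts generation settings in an optional `parameters` field of the payload; a short sketch reusing `query` from above (parameter values are illustrative):

```python
# Optional generation settings travel in the "parameters" field.
output = query({
    "inputs": "Stvori kviz o hrvatskoj povijesti:",
    "parameters": {"max_new_tokens": 200, "temperature": 0.7},
})
```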

## Training Details

### Training Data

The model was fine-tuned on the StudyPal/education dataset, a Croatian educational corpus containing:
- Educational conversations and Q&A pairs
- Flashcard datasets
- Quiz and summary materials
- Croatian academic content
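
The exact schema of StudyPal/education is not documented in this card. Purely as an illustration of what a chat-format Q&A record could look like (a hypothetical layout, not the actual schema):

```python
# Hypothetical record layout; the real StudyPal/education schema may differ.
example = {
    "messages": [
        # "What is photosynthesis?"
        {"role": "user", "content": "Što je fotosinteza?"},
        # "Photosynthesis is the process by which plants convert light energy..."
        {"role": "assistant", "content": "Fotosinteza je proces kojim biljke pretvaraju svjetlosnu energiju..."},
    ]
}
```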

### Training Procedure

- **Base model:** Qwen/Qwen2.5-32B
- **Training method:** LoRA (Low-Rank Adaptation); see the configuration sketch below
- **Training framework:** Transformers + PEFT
- **Hardware:** RunPod GPU instance
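
The fine-tuning hyperparameters are not published in this card. A minimal PEFT sketch of the stated LoRA setup, where the rank, alpha, dropout, and target modules are illustrative assumptions rather than the actual configuration:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model; dtype/device settings are a practical assumption.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative LoRA hyperparameters, not the published training config.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the LoRA adapters train
```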

## Evaluation

The model demonstrates strong performance in:
- Croatian language comprehension and generation
- Educational content creation
- Study material generation
- Academic question answering

## Bias, Risks, and Limitations

- Primary focus on Croatian educational content
- May reflect biases present in training data
- Best suited for educational contexts
- Performance may vary on non-educational tasks

## Citation

```bibtex
@misc{studypal-llm-1.0,
  title={StudyPal-LLM-1.0: A Croatian Educational Assistant},
  author={aerodynamics21},
  year={2025},
  url={https://huggingface.co/aerodynamics21/StudyPal-LLM-1.0}
}
```

## Model Card Authors

aerodynamics21

## Model Card Contact

For questions about this model, please open a discussion on the model's Hugging Face repository.