🧠 Exaone-Bang-Merged

Exaone-Bang-Merged is a model created by fine-tuning LGAI-EXAONE/EXAONE-Deep-2.4B with LoRA on a Korean board game dataset and then merging the adapter back into the base model.

이 λͺ¨λΈμ€ 주둜 λ³΄λ“œκ²Œμž„ κ·œμΉ™ μ•ˆλ‚΄, κ²Œμž„λ³„ μ „λž΅ μ„€λͺ…, μΉ΄λ“œ 효과 해석 λ“±μ˜ μ§ˆμ˜μ‘λ‹΅ νƒœμŠ€ν¬μ— μ΅œμ ν™”λ˜μ–΄ 있으며, Exaone의 κ³ ν’ˆμ§ˆ μ–Έμ–΄ λŠ₯λ ₯을 λ°”νƒ•μœΌλ‘œ λͺ…ν™•ν•˜κ³  κ°„κ²°ν•œ 응닡을 μ œκ³΅ν•©λ‹ˆλ‹€.


πŸ“¦ λͺ¨λΈ 정보

  • Base Model: LGAI-EXAONE/EXAONE-Deep-2.4B
  • Fine-tuning method: LoRA (PEFT)
  • Merge method: base + adapter merged with merge_and_unload() (see the sketch below)
  • Language: Korean
  • Intended use: board game rule Q&A, card explanations, game guides

πŸ”§ Usage Example

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "minjeongHuggingFace/exaone-bang-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example prompt: "Please explain the basic rules of the board game."
prompt = "λ³΄λ“œκ²Œμž„μ˜ 기본 κ·œμΉ™μ„ μ„€λͺ…ν•΄μ€˜."
result = pipe(prompt, max_new_tokens=256)
print(result[0]["generated_text"])
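
Since the checkpoint is stored in F16 and the model has roughly 2.4B parameters, loading it in half precision on a GPU is usually preferable. A minimal sketch (device_map="auto" requires the accelerate package; add trust_remote_code=True if your transformers version does not include the EXAONE architecture):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "minjeongHuggingFace/exaone-bang-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # checkpoint is stored in F16
    device_map="auto",          # place layers on the available GPU(s)
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("μΉ΄λ“œ 효과λ₯Ό μ„€λͺ…ν•΄μ€˜.", max_new_tokens=128)[0]["generated_text"])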