🧠 Exaone-Bang-Merged
Exaone-Bang-Merged is a model built on LGAI-EXAONE/EXAONE-Deep-2.4B. It was fine-tuned with LoRA on a Korean board-game dataset, and the base model and adapter were then merged into a single checkpoint.

The model is optimized mainly for question-answering tasks such as explaining board-game rules, per-game strategies, and card effects, and it draws on Exaone's high-quality language ability to provide clear and concise responses.
📦 Model Information
- Base model: LGAI-EXAONE/EXAONE-Deep-2.4B
- Fine-tuning method: LoRA (PEFT)
- Merge method: base + adapter merged via merge_and_unload()
- Language: Korean
- Intended use: board-game rule Q&A, card explanations, game guides
🧠 Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "minjeongHuggingFace/exaone-bang-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Illustrative Korean prompt: "Please explain the basic rules of Halli Galli."
prompt = "할리갈리 게임의 기본 규칙을 설명해 주세요."
result = pipe(prompt, max_new_tokens=256)
print(result[0]["generated_text"])
```