phi3-full-resume-enhancer
This is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct for resume enhancement and professional writing.
Model Description
This model transforms unstructured, informal resumes into professional, well-formatted resumes with:
- Quantified achievements
- Action-oriented language
- Professional formatting
- Enhanced skill descriptions
- Structured sections
Training Data
The model was fine-tuned on 5 high-quality examples demonstrating the transformation of casual, unstructured resumes into professional formats across a range of roles (Software Developer, Marketing Professional, Data Analyst, Project Manager, Software Engineer).
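The training examples themselves are not published with the adapter. The snippet below is a hypothetical sketch of how a single example could be laid out using the Phi-3 chat template; the resume content and variable names are illustrative assumptions, not the actual training data.

# Hypothetical layout of one training example using the Phi-3 chat template.
# The resume content here is illustrative only; the real training data is not published.
raw_resume = "Jane Roe\njane@email.com\nWork:\n- did marketing stuff at ACME\nSkills: SEO, email"
enhanced_resume = (
    "Jane Roe | jane@email.com\n"
    "PROFESSIONAL SUMMARY\n"
    "Marketing professional with hands-on campaign experience at ACME...\n"
)
training_text = (
    "<|system|>\nYou are an expert resume writer. Transform the following resume "
    "into a professional format.<|end|>\n"
    f"<|user|>\n{raw_resume}<|end|>\n"
    f"<|assistant|>\n{enhanced_resume}<|end|>"
)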
How to Use
Installation
pip install transformers peft torch accelerate
Basic Usage
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
# Load model and tokenizer
base_model_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_model_id = "aditismile/resume_enhnaced"
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_model_id)
model.eval()
# Prepare input
resume_text = """John Doe
john@email.com
Summary: Developer with some experience
Work:
- Coded stuff at Company XYZ
- Fixed bugs
Skills: Python, JavaScript"""
prompt = f"""<|system|>
You are an expert resume writer. Transform the following resume into a professional format.<|end|>
<|user|>
{resume_text}<|end|>
<|assistant|>
"""
# Generate enhanced resume
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=800,
    temperature=0.7,
    top_p=0.9,
    do_sample=True
)
# Decode only the newly generated tokens so the prompt is not repeated
# (skip_special_tokens=True removes markers like <|assistant|>, so splitting on them would fail)
generated_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
enhanced_resume = tokenizer.decode(generated_tokens, skip_special_tokens=True).strip()
print(enhanced_resume)
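For repeated calls, the steps above can be wrapped in a small convenience function. This is a usage sketch that reuses the tokenizer and model loaded earlier; enhance_resume is a name introduced here for illustration and is not part of the released model.

def enhance_resume(resume_text: str, max_new_tokens: int = 800) -> str:
    """Build the Phi-3 chat prompt, generate, and return only the newly generated text."""
    prompt = (
        "<|system|>\nYou are an expert resume writer. "
        "Transform the following resume into a professional format.<|end|>\n"
        f"<|user|>\n{resume_text}<|end|>\n<|assistant|>\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        temperature=0.7,
        top_p=0.9,
        do_sample=True
    )
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(enhance_resume("Jane Roe\njane@email.com\nWork:\n- ran email campaigns at ACME"))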
Training Details
- Base Model: microsoft/Phi-3-mini-4k-instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Examples: 5 full resume transformations
- Max Length: 2048 tokens
- Epochs: 5
- Learning Rate: 2e-4
- LoRA Rank: 16
- LoRA Alpha: 32
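The training script itself is not included in the repository. The snippet below sketches a peft/transformers configuration consistent with the hyperparameters listed above; target_modules, dropout, batch size, and any other setting not listed in this card are assumptions.

from peft import LoraConfig, get_peft_model
from transformers import TrainingArguments

# Sketch of a LoRA setup matching the listed hyperparameters.
lora_config = LoraConfig(
    r=16,                                   # LoRA rank (listed above)
    lora_alpha=32,                          # LoRA alpha (listed above)
    lora_dropout=0.05,                      # assumption: not stated in this card
    target_modules=["qkv_proj", "o_proj"],  # assumption: Phi-3 attention projections
    task_type="CAUSAL_LM"
)
# peft_model = get_peft_model(base_model, lora_config)

training_args = TrainingArguments(
    output_dir="phi3-resume-lora",          # assumption: output path
    num_train_epochs=5,                     # listed above
    learning_rate=2e-4,                     # listed above
    per_device_train_batch_size=1,          # assumption: tiny dataset
    fp16=True                               # assumption: matches float16 inference
)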
Limitations
- Trained on a small dataset (5 examples); output quality may benefit from additional training data
- Best suited for technical and professional roles
- May hallucinate quantified metrics that are not present in the original resume
- Designed for English resumes only
Intended Use
This model is intended for:
- Resume enhancement and professional writing assistance
- Career coaching tools
- Job application preparation
- Educational purposes in resume writing
Citation
If you use this model, please cite:
@misc{phi3_full_resume_enhancer,
  author       = {aditismile},
  title        = {Phi-3 Resume Enhancement Model},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/aditismile/resume_enhnaced}}
}
License
This model inherits the license of microsoft/Phi-3-mini-4k-instruct. The fine-tuned adapters are released under the MIT license.
Contact
For questions or feedback, please open an issue on the model repository.