- L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-wise LSQ
  Paper • 2402.04902 • Published • 5
- QWHA: Quantization-Aware Walsh-Hadamard Adaptation for Parameter-Efficient Fine-Tuning on Large Language Models
  Paper • 2509.17428 • Published • 9
- LRAgent: Efficient KV Cache Sharing for Multi-LoRA LLM Agents
  Paper • 2602.01053 • Published • 6
Hyesung Jeon (hjeon2k)
AI & ML interests: None yet