| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/peft/issues/2415
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2415/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2415/comments
|
https://api.github.com/repos/huggingface/peft/issues/2415/events
|
https://github.com/huggingface/peft/issues/2415
| 2,905,929,237
|
I_kwDOIf9iDM6tNPYV
| 2,415
|
size mismatch for lm_head when finetuning Qwen2.5
|
{
"login": "minmie",
"id": 40080081,
"node_id": "MDQ6VXNlcjQwMDgwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/40080081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minmie",
"html_url": "https://github.com/minmie",
"followers_url": "https://api.github.com/users/minmie/followers",
"following_url": "https://api.github.com/users/minmie/following{/other_user}",
"gists_url": "https://api.github.com/users/minmie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minmie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minmie/subscriptions",
"organizations_url": "https://api.github.com/users/minmie/orgs",
"repos_url": "https://api.github.com/users/minmie/repos",
"events_url": "https://api.github.com/users/minmie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minmie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-03-10T02:45:29
| 2025-03-10T02:45:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
transformers version: 4.49.0
Platform: Linux-6.6.0-72.0.0.64.oe2403.x86_64-x86_64-with-glibc2.38
Python version: 3.10.16
Huggingface_hub version: 0.29.1
Safetensors version: 0.5.3
Accelerate version: 1.4.0
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (GPU?): 2.2.2+cu121 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?:
Using GPU in script?:
GPU type: NVIDIA L4
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
I load an adapter for Qwen/Qwen2.5-0.5B using the following code and an error occurs:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"
# peft_model_id = args.output_dir
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
peft_model_id,
device_map="auto",
torch_dtype=torch.float16
)
```
The error info is as follows:
```python
Sliding Window Attention is enabled but not implemented for `sdpa`; unexpected results may be encountered.
Traceback (most recent call last):
File "/home/chenjq/.pycharm_helpers/pydev/pydevd.py", line 1500, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/chenjq/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/chenjq/pythonWork/nlp/test14.py", line 11, in <module>
model = AutoPeftModelForCausalLM.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/auto.py", line 130, in from_pretrained
return cls._target_peft_class.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 581, in from_pretrained
load_result = model.load_adapter(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 1239, in load_adapter
load_result = set_peft_model_state_dict(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 451, in set_peft_model_state_dict
load_result = model.load_state_dict(peft_model_state_dict, strict=False)
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
size mismatch for base_model.model.lm_head.modules_to_save.default.weight: copying a param with shape torch.Size([151936, 896]) from checkpoint, the shape in current model is torch.Size([151665, 896]).
Process finished with exit code 1
```
However, if I use the following code to load the model, everything works fine:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model_name ='/home/models/qwen/Qwen2.5-0.5B'
adapter_model_name = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"
model = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(model, adapter_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
```
Some info from [here](https://github.com/huggingface/transformers/issues/36550#issuecomment-2708336059) that may help:
Hi everyone! I did some research and found out that the error occurs because len(tokenizer) (151665) and the embedding size (151936) of Qwen/Qwen2.5-0.5B do not match. _BaseAutoPeftModel.from_pretrained resizes the base model embeddings to match the tokenizer ([here](https://github.com/huggingface/peft/blob/8edaae9460e4b76bce9431dc187402178ff7b689/src/peft/auto.py#L137)) and as a result, it is unable to load the saved weights. I think a possible solution might be to only resize the base model embeddings if the tokenizer size differs from the base tokenizer size. What do you think?
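A minimal sketch of the mismatch described above (model and tokenizer names taken from this report; this is not PEFT-internal code):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

embedding_rows = model.get_input_embeddings().weight.shape[0]
print(len(tokenizer), embedding_rows)
# 151665 vs 151936: AutoPeftModelForCausalLM resizes the embeddings down to len(tokenizer),
# so the saved lm_head weight of shape [151936, 896] no longer fits the resized model.
```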
The adapter was trained using the following code:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from peft import LoraConfig
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
dataset = load_dataset("trl-lib/Capybara", split="train")
dataset = dataset.select(range(500))
MODEL_ID = 'Qwen/Qwen2.5-0.5B'
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
target_modules="all-linear",
modules_to_save=["lm_head", "embed_token"],
task_type="CAUSAL_LM",
)
args = SFTConfig(
output_dir="Qwen2.5-0.5B-SFT-Capybara", # directory to save and repository id
num_train_epochs=1, # number of training epochs
per_device_train_batch_size=4, # batch size per device during training
gradient_accumulation_steps=4, # number of steps before performing a backward/update pass
gradient_checkpointing=True, # use gradient checkpointing to save memory
optim="adamw_torch_fused", # use fused adamw optimizer
logging_steps=10, # log every 10 steps
save_strategy="epoch", # save checkpoint every epoch
bf16=True, # use bfloat16 precision
tf32=True, # use tf32 precision
learning_rate=2e-4, # learning rate, based on QLoRA paper
max_grad_norm=0.3, # max gradient norm based on QLoRA paper
warmup_ratio=0.03, # warmup ratio based on QLoRA paper
lr_scheduler_type="constant", # use constant learning rate scheduler
push_to_hub=False, # push model to hub
# report_to="tensorboard", # report metrics to tensorboard
)
trainer = SFTTrainer(
MODEL_ID,
train_dataset=dataset,
args=args,
peft_config=peft_config
)
trainer.train()
print('end')
```
### Expected behavior
Hope the model can predict normally.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2415/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2413
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2413/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2413/comments
|
https://api.github.com/repos/huggingface/peft/issues/2413/events
|
https://github.com/huggingface/peft/issues/2413
| 2,901,962,025
|
I_kwDOIf9iDM6s-G0p
| 2,413
|
`LoraConfig` multiple properties should be unified
|
{
"login": "Qubitium",
"id": 417764,
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qubitium",
"html_url": "https://github.com/Qubitium",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 9
| 2025-03-07T04:14:24
| 2025-03-10T14:59:51
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
@BenjaminBossan I am trying to add dynamic LoRA support to both vLLM and SGLang, as LoraConfig already supports this dynamic control via the following variables:
- `rank_pattern`: regex matching that controls which modules get a different `r`/`rank` value
- `exclude_modules`: regex controlling which modules are excluded from LoRA completely
- `alpha_pattern`: regex matching for `alpha` overrides; exactly the same as `rank_pattern`, but a separate property
Nothing is wrong with them individually, but together they become unnecessarily detached and have a negative impact not only on code cost but also on dynamic control efficiency.
GPTQModel uses a single `dynamic`: Dict[str, Dict[str, Any]], where the `str` key is a regex with an optional `+:` (positive) or `-:` (negative) prefix.
The dict value is the property override in key: value format.
Example as applied to PEFT (proposal):
```
# implicit +: prefix if not used
# prefixes are stripped before the regex is applied
"mlp\.down_proj": { "r": 128 } # implicit positive
"+:mlp\.down_proj": { "r": 256 } # explicit positive
"-:mlp\.gate_proj": {} # negative
```
This simple control allows 3 states:
- Positive match == override any property values in the base config (LoraConfig).
- Negative match == skip this module for LoRA (no LoraConfig at all).
- No match == no module matched, so the base LoraConfig is used.
This single control replaces all existing PEFT controls with the same functionality while allowing ALL properties to be dynamically overridden (if necessary) without any additional APIs/LoraConfig vars. As it stands, you need to add code and logic for every LoraConfig property that participates in dynamic override/control.
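For illustration only, a rough sketch (neither PEFT nor GPTQModel code; all names are made up) of how such a `dynamic` mapping could be resolved for a given module name:
```python
import re
from typing import Any, Dict, Optional

def resolve_dynamic(module_name: str, base: Dict[str, Any],
                    dynamic: Dict[str, Dict[str, Any]]) -> Optional[Dict[str, Any]]:
    """Return the effective config for a module, or None if LoRA should be skipped."""
    for raw_pattern, overrides in dynamic.items():
        negative = raw_pattern.startswith("-:")
        pattern = raw_pattern.removeprefix("+:").removeprefix("-:")
        if re.search(pattern, module_name):
            if negative:
                return None                 # negative match: no LoRA for this module
            return {**base, **overrides}    # positive match: override base values
    return dict(base)                       # no match: plain base LoraConfig

base_cfg = {"r": 16, "lora_alpha": 32}
dynamic = {r"mlp\.down_proj": {"r": 128}, r"-:mlp\.gate_proj": {}}
print(resolve_dynamic("model.layers.0.mlp.down_proj", base_cfg, dynamic))  # r overridden to 128
print(resolve_dynamic("model.layers.0.mlp.gate_proj", base_cfg, dynamic))  # None -> skipped
```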
Basically I want the PEFT LoraConfig to be the clean standard for vLLM and SGLang when it comes to dynamic control. Having a unified `dynamic` override system makes everyone's life much easier and at the same time eliminates the need to write new code each time a new LoraConfig property comes into play.
Let me know what you think. I am willing to spend time working on it. You can also reach me at qubitium@modelcloud.ai and on [X: qubitium](https://x.com/qubitium). I would really love to chat with you for 15 minutes or so to ping-pong this idea.
CC: @SunMarc @MekkCyber
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2413/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2412
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2412/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2412/comments
|
https://api.github.com/repos/huggingface/peft/issues/2412/events
|
https://github.com/huggingface/peft/issues/2412
| 2,901,275,403
|
I_kwDOIf9iDM6s7fML
| 2,412
|
Lora_B weight becomes 0 when using AutoModel
|
{
"login": "makcedward",
"id": 36614806,
"node_id": "MDQ6VXNlcjM2NjE0ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/36614806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makcedward",
"html_url": "https://github.com/makcedward",
"followers_url": "https://api.github.com/users/makcedward/followers",
"following_url": "https://api.github.com/users/makcedward/following{/other_user}",
"gists_url": "https://api.github.com/users/makcedward/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makcedward/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makcedward/subscriptions",
"organizations_url": "https://api.github.com/users/makcedward/orgs",
"repos_url": "https://api.github.com/users/makcedward/repos",
"events_url": "https://api.github.com/users/makcedward/events{/privacy}",
"received_events_url": "https://api.github.com/users/makcedward/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-03-06T19:45:29
| 2025-03-06T19:45:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
transformers version: 4.49.0
peft version: 0.14.0
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel, AutoModelForCausalLM
from peft import PeftModel
base_model_id = "meta-llama/Llama-3.2-1B"
adapter_id = "makcedward/Llama-3.2-1B-Instruct-LoRA-Adapter"
auto_model = PeftModel.from_pretrained(
AutoModel.from_pretrained(
base_model_id,
),
adapter_id
)
auto_casual_model = PeftModel.from_pretrained(
AutoModelForCausalLM.from_pretrained(
base_model_id,
),
adapter_id
)
print("Auto Model")
print(auto_model.base_model.model.layers[0].self_attn.q_proj.lora_A.default.weight)
# tensor([[-0.0168, 0.0056, -0.0009, ..., 0.0149, -0.0161, -0.0064],
print(auto_model.base_model.model.layers[0].self_attn.q_proj.lora_B.default.weight)
# tensor([[0., 0., 0., ..., 0., 0., 0.],
print("AutoModelForCausalLM")
print(auto_casual_model.base_model.model.model.layers[0].self_attn.q_proj.lora_A.default.weight)
# tensor([[ 1.5867e-02, 2.7307e-02, -1.8503e-02, ..., -1.2035e-02,
print(auto_casual_model.base_model.model.model.layers[0].self_attn.q_proj.lora_B.default.weight)
# tensor([[-7.1123e-04, -4.3834e-03, -1.7415e-03, ..., 4.3514e-03,
```
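A hedged diagnostic (adapter repo name from the report; the key-prefix explanation is an assumption to verify, not a confirmed root cause): if the adapter was saved from a `...ForCausalLM` wrapper, its saved keys carry one more `model.` level than a bare `AutoModel` exposes, so the non-strict load can silently skip them and `lora_B` stays at its zero initialization.
```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Download the saved adapter weights and inspect the key prefixes.
path = hf_hub_download("makcedward/Llama-3.2-1B-Instruct-LoRA-Adapter", "adapter_model.safetensors")
weights = load_file(path)
print(sorted(weights)[:4])
# Keys saved from an AutoModelForCausalLM run typically look like
# "base_model.model.model.layers.0....", while a bare AutoModel (LlamaModel) exposes
# "base_model.model.layers.0...." one level shallower, so nothing matches on load.
```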
### Expected behavior
Able to load LoRA weights by using AutoModel
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2412/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2410
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2410/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2410/comments
|
https://api.github.com/repos/huggingface/peft/issues/2410/events
|
https://github.com/huggingface/peft/issues/2410
| 2,899,373,069
|
I_kwDOIf9iDM6s0OwN
| 2,410
|
running forward loop using get_peft_model disables requires_grad on output
|
{
"login": "Hamidreza3252",
"id": 27887474,
"node_id": "MDQ6VXNlcjI3ODg3NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/27887474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hamidreza3252",
"html_url": "https://github.com/Hamidreza3252",
"followers_url": "https://api.github.com/users/Hamidreza3252/followers",
"following_url": "https://api.github.com/users/Hamidreza3252/following{/other_user}",
"gists_url": "https://api.github.com/users/Hamidreza3252/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hamidreza3252/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hamidreza3252/subscriptions",
"organizations_url": "https://api.github.com/users/Hamidreza3252/orgs",
"repos_url": "https://api.github.com/users/Hamidreza3252/repos",
"events_url": "https://api.github.com/users/Hamidreza3252/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hamidreza3252/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-03-06T05:12:42
| 2025-03-06T15:35:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I would like to report a recent issue I have been facing, but I am not sure whether it is a bug or I am doing something wrong in the process. The steps to reproduce are simple: the issue happens when I try to convert the **Qwen2-VL-2B-Instruct** model into a PEFT model using the `get_peft_model` method. Simply load the model using the sample code at https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct and convert it to a PEFT model using a typical **8bit** LoraConfig with just `target_modules=["q_proj", "v_proj"]`. Then run a forward call on the model with a dummy input, such as `input_ids = torch.zeros((4, 1247)).to(device)`. When I inspect `requires_grad` on the `logits` attribute of the output, it is False, meaning I cannot run backward from that output. This issue has been puzzling me for a while, and I would appreciate it if you could help me with a solution or advise how to address it properly.
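A few hedged sanity checks for the behavior described above (none of this is a confirmed fix; `peft_model` and `device` are placeholders standing in for the setup in this report):
```python
import torch

device = "cuda"  # placeholder for the device used in the report
# peft_model = get_peft_model(base_model, lora_config)  # the wrapped model described above

# token ids must be an integer tensor
input_ids = torch.zeros((4, 1247), dtype=torch.long, device=device)

# LoRA adds trainable parameters, so this count should be non-zero
print(sum(p.numel() for p in peft_model.parameters() if p.requires_grad), "trainable params")

# rule out a surrounding torch.no_grad() / torch.inference_mode() context
with torch.enable_grad():
    out = peft_model(input_ids=input_ids)
print(out.logits.requires_grad)
```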
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2410/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2407
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2407/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2407/comments
|
https://api.github.com/repos/huggingface/peft/issues/2407/events
|
https://github.com/huggingface/peft/issues/2407
| 2,895,061,583
|
I_kwDOIf9iDM6sjyJP
| 2,407
|
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
|
{
"login": "maxliang114514",
"id": 196797831,
"node_id": "U_kgDOC7rlhw",
"avatar_url": "https://avatars.githubusercontent.com/u/196797831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxliang114514",
"html_url": "https://github.com/maxliang114514",
"followers_url": "https://api.github.com/users/maxliang114514/followers",
"following_url": "https://api.github.com/users/maxliang114514/following{/other_user}",
"gists_url": "https://api.github.com/users/maxliang114514/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxliang114514/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxliang114514/subscriptions",
"organizations_url": "https://api.github.com/users/maxliang114514/orgs",
"repos_url": "https://api.github.com/users/maxliang114514/repos",
"events_url": "https://api.github.com/users/maxliang114514/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxliang114514/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 6
| 2025-03-04T18:09:43
| 2025-03-10T11:17:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**When I attempted to swap out the LoRA configuration in QLoRA (see qlora.py in _https://github.com/artidoro/qlora_) for VeRA, I ran into the following error:**
Traceback (most recent call last):
File "qvera.py", line 859, in <module>
train()
File "qvera.py", line 821, in train
train_result = trainer.train()
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 2654, in training_step
loss = self.compute_loss(model, inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 2679, in compute_loss
outputs = model(**inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/peft_model.py", line 1644, in forward
return self.base_model(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/tuners/tuners_utils.py", line 197, in forward
return self.model.forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 806, in forward
outputs = self.model(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 685, in forward
layer_outputs = torch.utils.checkpoint.checkpoint(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 107, in forward
outputs = run_function(*args)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 681, in custom_forward
return module(*inputs, output_attentions, None)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 305, in forward
query_states = self.q_proj(hidden_states)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/tuners/vera/layer.py", line 287, in forward
result = result + lambda_b * F.linear(lambda_d * F.linear(dropout(x), sliced_A), sliced_B)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
**However, with the original settings, everything was trainable. My GPU specs are as follows:**
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.135 Driver Version: 550.135 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:02:00.0 Off | N/A |
| 22% 19C P8 11W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 2080 Ti Off | 00000000:03:00.0 Off | N/A |
| 22% 19C P8 21W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA GeForce RTX 2080 Ti Off | 00000000:82:00.0 Off | N/A |
| 22% 20C P8 17W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA GeForce RTX 2080 Ti Off | 00000000:83:00.0 Off | N/A |
| 22% 19C P8 8W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
**Is this an issue specific to Vera's unique characteristics? Given the scarcity of resources on Vera, I'd greatly appreciate any help with this problem, thank you!**
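For context, VeRA's defining trait is that its random projection matrices are shared across layers, so when the model is sharded over several GPUs the shared `vera_A`/`vera_B` can sit on a different device than a given layer's activations, which would explain the `cuda:0` vs `cuda:3` mismatch above. A hedged workaround sketch (model id is a placeholder; not a confirmed fix) is to keep the whole model on a single device instead of sharding it:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # expose only one GPU before CUDA is initialized

import torch
from transformers import AutoModelForCausalLM

# Placeholder model id; the point is device_map={"": 0} instead of spreading layers over GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    torch_dtype=torch.float16,
    device_map={"": 0},
)
```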
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2407/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2405
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2405/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2405/comments
|
https://api.github.com/repos/huggingface/peft/issues/2405/events
|
https://github.com/huggingface/peft/issues/2405
| 2,890,200,666
|
I_kwDOIf9iDM6sRPZa
| 2,405
|
SafetensorError when Merging LoRA Weights
|
{
"login": "Nothern-ai",
"id": 143473220,
"node_id": "U_kgDOCI06RA",
"avatar_url": "https://avatars.githubusercontent.com/u/143473220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nothern-ai",
"html_url": "https://github.com/Nothern-ai",
"followers_url": "https://api.github.com/users/Nothern-ai/followers",
"following_url": "https://api.github.com/users/Nothern-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/Nothern-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nothern-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nothern-ai/subscriptions",
"organizations_url": "https://api.github.com/users/Nothern-ai/orgs",
"repos_url": "https://api.github.com/users/Nothern-ai/repos",
"events_url": "https://api.github.com/users/Nothern-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nothern-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-03-03T05:22:05
| 2025-03-03T10:11:44
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Original Working Environment: Python 3.8, transformers==4.46.0.dev0, safetensors==0.4.4, peft==0.12.0, trl==0.10.1
New Environment with Issue: transformers==4.45.2, safetensors==0.4.4, peft==0.12.0, trl==0.10.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
When migrating from the original environment to a new machine with slightly different package versions, I encountered an error during the model merging process.
My workflow involves:
1. Saving the LoRA weights
2. Merging these weights with the base model
The error occurs specifically when loading the safetensors files after merging.
Reproduction steps:
1. No training needed; directly save the LoRA weights (this step succeeds)
2. Attempt to merge the saved weights with the original model
3. The merge fails with error 1 shown under Expected behavior below
```python
# train_critic.py
import os
import time
import shutil
import argparse
import torch
import torch.distributed as dist
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GenerationConfig,
BitsAndBytesConfig,
)
from datasets import load_dataset
from trl import DPOTrainer, DPOConfig
from peft import LoraConfig, PeftModel
import wandb
from datetime import datetime
def print_rank_0(message):
if dist.get_rank() == 0:
print(message)
def main():
# ------------- Parse Arguments -------------
parser = argparse.ArgumentParser()
parser.add_argument("--epoch", type=int, required=True, help="Current outer training iteration (which round)")
parser.add_argument("--pref_dir", type=str, required=True, help="Folder for storing the preference dataset")
parser.add_argument("--weights_dir", type=str, required=True, help="Folder for saving and loading weights")
parser.add_argument("--train_epochs", type=int, default=1, help="Number of epochs to run in this DPO fine-tuning")
parser.add_argument("--beta", type=float, default=0.2, help="Beta hyperparameter for DPO")
parser.add_argument("--learning_rate", type=float, default=5e-6, help="Learning rate")
parser.add_argument("--batch_size", type=int, default=1, help="Batch Size")
args = parser.parse_args()
# ------------- Distributed Initialization -------------
local_rank = int(os.environ.get("LOCAL_RANK", -1))
if local_rank >= 0:
torch.cuda.set_device(local_rank)
dist.init_process_group(
backend='nccl',
init_method='env://',
world_size=int(os.environ.get("WORLD_SIZE", 1)),
rank=int(os.environ.get("RANK", 0))
)
print_rank_0(f"CUDA_VISIBLE_DEVICES: {os.environ.get('CUDA_VISIBLE_DEVICES')}")
print_rank_0(f"LOCAL_RANK: {os.environ.get('LOCAL_RANK')}")
print_rank_0(f"WORLD_SIZE: {os.environ.get('WORLD_SIZE')}")
# ------------- config -------------
epoch = args.epoch
weights_dir = args.weights_dir
pref_dir = args.pref_dir
batch_size = args.batch_size
base_model_path = "meta-llama/Llama-3.1-8B-Instruct"
print("base_model_path:", base_model_path)
data_path = os.path.join(pref_dir, f"critic_{epoch}.jsonl")
output_model_path = os.path.join(weights_dir, f"critic_{epoch}")
os.makedirs(output_model_path, exist_ok=True)
print_rank_0(f"Loading base model from: {base_model_path}")
model = AutoModelForCausalLM.from_pretrained(
base_model_path,
torch_dtype=torch.bfloat16,
device_map={'': torch.cuda.current_device()}
# device_map={'': torch.cuda.current_device()} if local_rank >= 0 else "auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_path, use_fast=False)
model.generation_config = GenerationConfig(
max_new_tokens=512,
temperature=0.7,
do_sample=True,
)
# padding_side/pad_token
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.padding_side = 'right'
tokenizer.pad_token = '[PAD]'
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.eos_token_id
with torch.no_grad():
model.resize_token_embeddings(len(tokenizer))
print_rank_0(f"Loading dataset from: {data_path}")
dataset = load_dataset('json', data_files=data_path)['train']
def convert_format(example):
messages = example['messages']
formatted = "<|begin_of_text|>"
# system
system_msg = messages[0]
formatted += f"<|start_header_id|>system<|end_header_id|>\n\n{system_msg['content']}<|eot_id|>"
# user
user_msg = messages[1]
formatted += f"<|start_header_id|>user<|end_header_id|>\n\n{user_msg['content']}<|eot_id|>"
# assistant
formatted += "<|start_header_id|>assistant<|end_header_id|>\n\n"
chosen_response = example['chosen'] + tokenizer.eos_token
rejected_response = example['rejected'] + tokenizer.eos_token
return {
"prompt": formatted,
"chosen": chosen_response,
"rejected": rejected_response
}
train_dataset = dataset.map(
convert_format,
remove_columns=dataset.column_names,
load_from_cache_file=False
)
base_lr = args.learning_rate
scaled_lr = base_lr * dist.get_world_size() * batch_size
warmup_steps = 100
dpo_config = DPOConfig(
beta=args.beta,
warmup_steps=warmup_steps,
weight_decay=0.01,
learning_rate=scaled_lr,
rpo_alpha=1.0,
# lr_scheduler_type="cosine",
output_dir=output_model_path,
num_train_epochs=args.train_epochs,
per_device_train_batch_size=batch_size,
fp16=False,
bf16=True,
logging_steps=10,
save_strategy="no",
save_total_limit=1,
report_to="none",
ddp_backend='nccl',
remove_unused_columns=False,
dataloader_drop_last=True,
max_length=2048,
max_prompt_length=2048,
local_rank=local_rank,
)
# LoRA
peft_config = LoraConfig(
r=256,
lora_alpha=32,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_dropout=0.0,
bias="none",
task_type="CAUSAL_LM",
)
trainer = DPOTrainer(
model=model,
args=dpo_config,
train_dataset=train_dataset,
tokenizer=tokenizer,
peft_config=peft_config,
)
trainer.train()
# ------------- merge LoRA -------------
if dist.get_rank() == 0:
lora_weights_path = os.path.join(output_model_path, "lora_weights")
trainer.model.save_pretrained(lora_weights_path)
# print("lora weight saved")
# trainer.model.save_pretrained(lora_weights_path, safe_serialization=False)
print("lora weight saved")
base_merged_model = AutoModelForCausalLM.from_pretrained(
base_model_path,
device_map=None,
low_cpu_mem_usage=False,
)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token = '[PAD]'
base_merged_model.config.pad_token_id = tokenizer.pad_token_id
base_merged_model.config.eos_token_id = tokenizer.eos_token_id
with torch.no_grad():
base_merged_model.resize_token_embeddings(len(tokenizer))
peft_model = PeftModel.from_pretrained(
base_merged_model,
lora_weights_path,
device_map=None,
)
merged_model = peft_model.merge_and_unload()
# save
print_rank_0(f"Saving merged model to: {output_model_path}")
merged_model.save_pretrained(output_model_path)
print_rank_0("Model saved successfully")
tokenizer.save_pretrained(output_model_path)
# delete lora weights
shutil.rmtree(lora_weights_path)
dist.barrier(device_ids=[local_rank] if local_rank >= 0 else None)
print_rank_0("DPO Training complete.")
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
When I skip saving the LoRA weights and merge them directly, the merge operation succeeds:
```python
peft_model = trainer.model
merged_model = peft_model.merge_and_unload()
print_rank_0(f"Saving merged model to: {output_model_path}")
merged_model.save_pretrained(output_model_path)
tokenizer.save_pretrained(output_model_path)
print_rank_0("Merged model saved successfully")
```
However, attempting to load the merged safetensors weights later with AutoModelForCausalLM.from_pretrained results in error 2.
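A hedged diagnostic sketch (the path is a placeholder), since `MetadataIncompleteBuffer` usually indicates a truncated or partially written `.safetensors` file: loop over the saved shards and report which one fails to open.
```python
import glob
from safetensors import safe_open

# Placeholder path; point this at the merged model output directory.
for shard in sorted(glob.glob("/path/to/merged_model/*.safetensors")):
    try:
        with safe_open(shard, framework="pt") as f:
            n = len(f.keys())
        print("ok ", shard, n, "tensors")
    except Exception as exc:
        print("bad", shard, exc)
```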
### Expected behavior
Error 1 (save LoRA weights and merge):
> 100%|██████████| 1/1 [00:01<00:00, 1.91s/it]
> 100%|██████████| 1/1 [00:01<00:00, 1.92s/it]
> /home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/utils/save_and_load.py:232: UserWarning: Setting `save_embedding_layers` to `True` as the embedding layer has been resized during finetuning.
> warnings.warn(
> lora weight saved
>
> Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]
> Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:02, 1.28it/s]
> Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.32it/s]
> Loading checkpoint shards: 75%|███████▌ | 3/4 [00:02<00:00, 1.31it/s]
> Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.74it/s]
> Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.55it/s]
> [rank0]: Traceback (most recent call last):
> [rank0]: File "/users/w/ac/train/train_critic.py", line 249, in <module>
> [rank0]: main()
> [rank0]: File "/users/w/ac/train/train_critic.py", line 225, in main
> [rank0]: peft_model = PeftModel.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/peft_model.py", line 545, in from_pretrained
> [rank0]: model.load_adapter(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/peft_model.py", line 1113, in load_adapter
> [rank0]: adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 486, in load_peft_weights
> [rank0]: adapters_weights = safe_load_file(filename, device=device)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/safetensors/torch.py", line 311, in load_file
> [rank0]: with safe_open(filename, framework="pt", device=device) as f:
> [rank0]: safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
> E0302 21:17:38.377842 2650981 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 2651079) of binary: /home//miniconda3/envs/py39env/bin/python
> Traceback (most recent call last):
> File "/home//miniconda3/envs/py39env/bin/torchrun", line 8, in <module>
> sys.exit(main())
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
> return f(*args, **kwargs)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 919, in main
> run(args)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 910, in run
> elastic_launch(
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
> return launch_agent(self._config, self._entrypoint, list(args))
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
> raise ChildFailedError(
> torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
Error 2 (directly merge, then load the model after the merge):
> CUDA_VISIBLE_DEVICES: 1
> LOCAL_RANK: 0
> WORLD_SIZE: 1
> base_model_path: /train/runs/301_wd/weights/_1
> Loading base model from: /train/runs/301_wd/weights/_1
>
> Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
> Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
> [rank0]: Traceback (most recent call last):
> [rank0]: File "/train/train_.py", line 216, in <module>
> [rank0]: main()
> [rank0]: File "/train/train_.py", line 91, in main
> [rank0]: model = AutoModelForCausalLM.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
> [rank0]: return model_class.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 4014, in from_pretrained
> [rank0]: ) = cls._load_pretrained_model(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 4482, in _load_pretrained_model
> [rank0]: state_dict = load_state_dict(shard_file, is_quantized=is_quantized)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 549, in load_state_dict
> [rank0]: with safe_open(checkpoint_file, framework="pt") as f:
> [rank0]: safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
> E0302 20:39:06.398025 2565872 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 2566031) of binary: /home//miniconda3/envs/py39env/bin/python
> Traceback (most recent call last):
> File "/home//miniconda3/envs/py39env/bin/torchrun", line 8, in <module>
> sys.exit(main())
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
> return f(*args, **kwargs)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 919, in main
> run(args)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 910, in run
> elastic_launch(
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
> return launch_agent(self._config, self._entrypoint, list(args))
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
> raise ChildFailedError(
> torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
> ============================================================
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2405/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2400
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2400/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2400/comments
|
https://api.github.com/repos/huggingface/peft/issues/2400/events
|
https://github.com/huggingface/peft/issues/2400
| 2,881,481,036
|
I_kwDOIf9iDM6rv-lM
| 2,400
|
processing_class and tokenizer arguments on SFTTrainer()
|
{
"login": "ErikKankaTrea",
"id": 18656607,
"node_id": "MDQ6VXNlcjE4NjU2NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18656607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErikKankaTrea",
"html_url": "https://github.com/ErikKankaTrea",
"followers_url": "https://api.github.com/users/ErikKankaTrea/followers",
"following_url": "https://api.github.com/users/ErikKankaTrea/following{/other_user}",
"gists_url": "https://api.github.com/users/ErikKankaTrea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErikKankaTrea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErikKankaTrea/subscriptions",
"organizations_url": "https://api.github.com/users/ErikKankaTrea/orgs",
"repos_url": "https://api.github.com/users/ErikKankaTrea/repos",
"events_url": "https://api.github.com/users/ErikKankaTrea/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErikKankaTrea/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-26T12:48:33
| 2025-02-27T03:39:02
| 2025-02-27T03:39:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi!!!
I got an unexpected error on my side when running the example train.py with deepspeed [(link)](https://github.com/huggingface/peft/tree/main/examples/sft).
The argument "**tokenizer**" should now be "**processing_class**".
Could anyone please let me know whether, for the example provided (link above), changing the argument name on SFTTrainer() for passing the tokenizer is enough?
I am worried that if I switch the argument names, the example scripts will stop making sense.
Thanks in advance!
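For what it's worth, a minimal sketch of the rename (the other arguments are placeholders from the example script; recent trl/transformers versions accept `processing_class` where older ones took `tokenizer`):
```python
trainer = SFTTrainer(
    model=model,                  # placeholders: built as in the example script
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,   # was: tokenizer=tokenizer
    peft_config=peft_config,
)
```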
|
{
"login": "ErikKankaTrea",
"id": 18656607,
"node_id": "MDQ6VXNlcjE4NjU2NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18656607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErikKankaTrea",
"html_url": "https://github.com/ErikKankaTrea",
"followers_url": "https://api.github.com/users/ErikKankaTrea/followers",
"following_url": "https://api.github.com/users/ErikKankaTrea/following{/other_user}",
"gists_url": "https://api.github.com/users/ErikKankaTrea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErikKankaTrea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErikKankaTrea/subscriptions",
"organizations_url": "https://api.github.com/users/ErikKankaTrea/orgs",
"repos_url": "https://api.github.com/users/ErikKankaTrea/repos",
"events_url": "https://api.github.com/users/ErikKankaTrea/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErikKankaTrea/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2400/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2394
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2394/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2394/comments
|
https://api.github.com/repos/huggingface/peft/issues/2394/events
|
https://github.com/huggingface/peft/issues/2394
| 2,874,191,172
|
I_kwDOIf9iDM6rUK1E
| 2,394
|
TP + DP training error
|
{
"login": "iMountTai",
"id": 35353688,
"node_id": "MDQ6VXNlcjM1MzUzNjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/35353688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iMountTai",
"html_url": "https://github.com/iMountTai",
"followers_url": "https://api.github.com/users/iMountTai/followers",
"following_url": "https://api.github.com/users/iMountTai/following{/other_user}",
"gists_url": "https://api.github.com/users/iMountTai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iMountTai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iMountTai/subscriptions",
"organizations_url": "https://api.github.com/users/iMountTai/orgs",
"repos_url": "https://api.github.com/users/iMountTai/repos",
"events_url": "https://api.github.com/users/iMountTai/events{/privacy}",
"received_events_url": "https://api.github.com/users/iMountTai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 7
| 2025-02-24T08:30:53
| 2025-02-27T16:50:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft: 0.14.1.dev0
transformers: 4.50.dev0
accelerate: 1.4.0.dev0
python: 3.11
linux
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
After adding the LoRA module to the model, an error occurred:
`NotImplementedError: ColwiseParallel currently only support nn.Linear and nn.Embedding`
### Expected behavior
LoRA module training with TP
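A small illustration of the likely cause, under the assumption that the TP plan targets the attention projections (a tiny test model is used here purely for demonstration): after `get_peft_model`, the module sitting at the targeted path is a PEFT `lora.Linear` wrapper rather than a plain `nn.Linear`, which `ColwiseParallel` does not know how to shard.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
model = get_peft_model(model, LoraConfig(target_modules=["q_proj"]))

layer = model.base_model.model.model.decoder.layers[0].self_attn.q_proj
print(type(layer))  # a peft lora.Linear wrapper, not torch.nn.Linear
```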
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2394/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2390
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2390/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2390/comments
|
https://api.github.com/repos/huggingface/peft/issues/2390/events
|
https://github.com/huggingface/peft/issues/2390
| 2,866,034,838
|
I_kwDOIf9iDM6q1DiW
| 2,390
|
Bug: Using 2 LoRA configs with `target_modules='all-linear'` leads to nested LoRA layers
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 4838806417,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTkQ",
"url": "https://api.github.com/repos/huggingface/peft/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 4838806434,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTog",
"url": "https://api.github.com/repos/huggingface/peft/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 0
| 2025-02-20T12:34:35
| 2025-03-04T16:16:16
| 2025-03-04T16:16:16
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)
config0 = LoraConfig(target_modules="all-linear")
config1 = LoraConfig(target_modules="all-linear")
model = get_peft_model(model, config0)#, adapter_name="default")
model.add_adapter("adapter1", config1)
print(model.base_model.model.model.decoder.layers[0].self_attn.k_proj)
```
prints:
```
lora.Linear(
(base_layer): lora.Linear(
(base_layer): Linear(in_features=16, out_features=16, bias=True)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=16, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=16, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(lora_dropout): ModuleDict(
(default): Identity()
)
(lora_A): ModuleDict(
(default): lora.Linear(
(base_layer): Linear(in_features=16, out_features=8, bias=False)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=16, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=8, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(lora_B): ModuleDict(
(default): lora.Linear(
(base_layer): Linear(in_features=8, out_features=16, bias=False)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=8, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=16, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
```
### Expected behavior
Instead of getting nested LoRA layers, the linear layers belonging to a LoRA layer should not be targeted by `all-linear`.
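Until this is fixed, a hedged workaround that continues the reproduction snippet above is to give the second adapter an explicit module list (names valid for the tiny OPT test model) so `all-linear` cannot match the `lora_A`/`lora_B` sub-layers created by the first adapter:
```python
# Workaround sketch for the snippet above: explicit targets instead of "all-linear".
config1 = LoraConfig(target_modules=["q_proj", "k_proj", "v_proj", "out_proj", "fc1", "fc2"])
model.add_adapter("adapter1", config1)
print(model.base_model.model.model.decoder.layers[0].self_attn.k_proj)  # no nesting now
```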
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2390/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2388
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2388/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2388/comments
|
https://api.github.com/repos/huggingface/peft/issues/2388/events
|
https://github.com/huggingface/peft/issues/2388
| 2,863,639,986
|
I_kwDOIf9iDM6qr62y
| 2,388
|
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported.
|
{
"login": "samuellimabraz",
"id": 115582014,
"node_id": "U_kgDOBuOkPg",
"avatar_url": "https://avatars.githubusercontent.com/u/115582014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuellimabraz",
"html_url": "https://github.com/samuellimabraz",
"followers_url": "https://api.github.com/users/samuellimabraz/followers",
"following_url": "https://api.github.com/users/samuellimabraz/following{/other_user}",
"gists_url": "https://api.github.com/users/samuellimabraz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuellimabraz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuellimabraz/subscriptions",
"organizations_url": "https://api.github.com/users/samuellimabraz/orgs",
"repos_url": "https://api.github.com/users/samuellimabraz/repos",
"events_url": "https://api.github.com/users/samuellimabraz/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuellimabraz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-02-19T15:09:17
| 2025-03-06T16:30:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
## Context
I'm finetuning the Qwen2.5-VL model with swift for data extraction using LoRA. I'm not sure what the correct way is to save and upload the adapter so that it can be reloaded correctly.
In short, I followed these steps:
```python
# load model
model, processor = get_model_tokenizer(
'Qwen/Qwen2.5-VL-3B-Instruct',
torch_dtype=torch.bfloat16,
use_hf=True,
attn_impl="flash_attn",
)
# get lora
...
model = Swift.prepare_model(model, lora_config)
# training config and run
...
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=template.data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
template=template,
callbacks= [
EarlyStoppingCallback(
early_stopping_patience=6,
early_stopping_threshold=0.001
)
]
)
stats = trainer.train()
# push adapter
model.push_to_hub(f"tech4humans/{model_name}", private=True)
```
Debugging showed that the PEFT model was loaded with the class `PeftModelForCausalLM`.
## Problem
Then, when I tried to reload the adapter, I got an error from PEFT:
```python
from transformers import Qwen2_5_VLForConditionalGeneration
model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct", device_map="auto")
model.load_adapter("tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned")
```
```python
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/model.py in _create_new_module(lora_config, adapter_name, target, **kwargs)
345 if new_module is None:
346 # no module could be matched
--> 347 raise ValueError(
348 f"Target module {target} is not supported. Currently, only the following modules are supported: "
349 "`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, ".
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel(
(patch_embed): Qwen2_5_VisionPatchEmbed(
(proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False)
)
(rotary_pos_emb): Qwen2_5_VisionRotaryEmbedding()
(blocks): ModuleList(
(0-31): 32 x Qwen2_5_VLVisionBlock(
(norm1): Qwen2RMSNorm((1280,), eps=1e-06)
(norm2): Qwen2RMSNorm((1280,), eps=1e-06)
(attn): Qwen2_5_VLVisionSdpaAttention(
(qkv): Linear(in_features=1280, out_features=3840, bias=True)
(proj): Linear(in_features=1280, out_features=1280, bias=True)
)
(mlp): Qwen2_5_VLMLP(
(gate_proj): Linear(in_features=1280, out_features=3420, bias=True)
(up_proj): Linear(in_features=1280, out_features=3420, bias=True)
(down_proj): Linear(in_features=3420, out_features=1280, bias=True)
(act_fn): SiLU()
)
)
)
(merger): Qwen2_5_VLPatchMerger(
(ln_q): Qwen2RMSNorm((1280,), eps=1e-06)
(mlp): Sequential(
(0): Linear(in_features=5120, out_features=5120, bias=True)
(1): GELU(approximate='none')
(2): Linear(in_features=5120, out_features=2048, bias=True)
)
)
) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, `transformers.pytorch_utils.Conv1D`, `torch.nn.MultiheadAttention.`.
```
## System info
```
transformers 4.50.0.dev0
peft 0.14.1.dev0
ms-swift 3.2.0.dev0
Python 3.10.12
CUDA Version: 12.6
```
Am I missing something or doing something wrong? Any pointers would be appreciated. Thanks!
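For reference, this is the loading pattern I expected to work (a sketch reusing the names from above, not a confirmed fix):
```python
from transformers import Qwen2_5_VLForConditionalGeneration
from peft import PeftModel

base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", device_map="auto"
)
# Wrap the full model with the saved adapter; ideally only the layers listed in
# adapter_config.json should be wrapped, not the vision tower as a whole.
model = PeftModel.from_pretrained(base, "tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned")
```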
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2388/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2381
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2381/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2381/comments
|
https://api.github.com/repos/huggingface/peft/issues/2381/events
|
https://github.com/huggingface/peft/issues/2381
| 2,857,556,037
|
I_kwDOIf9iDM6qUthF
| 2,381
|
Bug when deleting adapters of a model with modules_to_save
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 4838806417,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTkQ",
"url": "https://api.github.com/repos/huggingface/peft/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2025-02-17T11:22:34
| 2025-02-20T12:35:13
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
All PEFT versions.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model
model_id = "facebook/opt-125m"
config = LoraConfig(task_type="SEQ_CLS")
model = AutoModelForSequenceClassification.from_pretrained(model_id)
adapter_to_delete = "delete_me"
model = get_peft_model(model, config)
model.add_adapter(adapter_to_delete, config)
# sanity check
assert "delete_me" in model.base_model.model.score.modules_to_save
model.delete_adapter(adapter_to_delete)
assert "delete_me" not in model.base_model.model.score.modules_to_save
```
### Expected behavior
When adding, say, a LoRA adapter with `modules_to_save` and then deleting that adapter, the LoRA part is correctly removed, but the `modules_to_save` part is not. Expected behavior: deleting the adapter should also remove its `modules_to_save` entry, so the second assert above should pass.
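A minimal workaround sketch until this is fixed, continuing the reproduction above, is to drop the stale entry from the `ModuleDict` by hand:
```python
# After delete_adapter, the ModulesToSaveWrapper still holds a copy for the
# deleted adapter; remove it manually.
wrapper = model.base_model.model.score
if adapter_to_delete in wrapper.modules_to_save:
    del wrapper.modules_to_save[adapter_to_delete]
assert adapter_to_delete not in wrapper.modules_to_save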
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2381/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2379
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2379/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2379/comments
|
https://api.github.com/repos/huggingface/peft/issues/2379/events
|
https://github.com/huggingface/peft/issues/2379
| 2,854,940,754
|
I_kwDOIf9iDM6qKvBS
| 2,379
|
prompt_tuning_peft tutorial raises cache layer error
|
{
"login": "jakerobers",
"id": 1840629,
"node_id": "MDQ6VXNlcjE4NDA2Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1840629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakerobers",
"html_url": "https://github.com/jakerobers",
"followers_url": "https://api.github.com/users/jakerobers/followers",
"following_url": "https://api.github.com/users/jakerobers/following{/other_user}",
"gists_url": "https://api.github.com/users/jakerobers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakerobers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakerobers/subscriptions",
"organizations_url": "https://api.github.com/users/jakerobers/orgs",
"repos_url": "https://api.github.com/users/jakerobers/repos",
"events_url": "https://api.github.com/users/jakerobers/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakerobers/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-02-15T00:10:11
| 2025-02-19T10:21:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Following the prompt tuning guide leads to an error when executing in a local environment:
- https://huggingface.co/learn/cookbook/en/prompt_tuning_peft
When executing, an exception is raised when calling `model.generate()` with the prompt-tuned model. Everything up to that point seems to be working as expected (i.e. the `peft_outputs_prompt` and `peft_outputs_sentences` directories containing the prompt-tunings have checkpoints).
Having a look at the stacktrace, it looks like `model_kwargs["past_key_values"]` is being referenced in `peft/peft_model.py`. I'm curious if this is possibly related to https://github.com/huggingface/peft/issues/1962.
```
Traceback (most recent call last):
File "/main.py", line 148, in <module>
loaded_model_prompt_outputs = get_outputs(loaded_model_prompt, input_prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./main.py", line 17, in get_outputs
outputs = model.generate(
^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/peft/peft_model.py", line 1140, in generate
outputs = self.base_model.generate(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/transformers/generation/utils.py", line 2255, in generate
result = self._sample(
^^^^^^^^^^^^^
File "lib/python3.11/site-packages/transformers/generation/utils.py", line 3247, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/peft/peft_model.py", line 1169, in prepare_inputs_for_generation
if model_kwargs["past_key_values"][0][0].shape[-2] >= model_kwargs["input_ids"].shape[1]:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
File "lib/python3.11/site-packages/transformers/cache_utils.py", line 390, in __getitem__
raise KeyError(f"Cache only has {len(self)} layers, attempted to access layer with index {layer_idx}")
KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'
```
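For illustration, a minimal sketch of why that indexing fails (assuming the cache object is a freshly created `DynamicCache` from `transformers.cache_utils`, which is what the traceback suggests):
```python
from transformers.cache_utils import DynamicCache

cache = DynamicCache()   # new cache, no layers appended yet
print(len(cache))        # 0
layer = cache[0]         # raises KeyError: "Cache only has 0 layers, ..."
```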
cc @BenjaminBossan since you have some context around how `past_key_values` [works with transformers](https://github.com/huggingface/peft/pull/2096/files)
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
This is the code provided in the article https://huggingface.co/learn/cookbook/en/prompt_tuning_peft, condensed into a single script.
```
#!/usr/bin/env python
# TODO: https://huggingface.co/learn/cookbook/en/prompt_tuning_peft
# TODO: https://huggingface.co/docs/peft/en/package_reference/prompt_tuning
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "bigscience/bloomz-560m"
# model_name="bigscience/bloom-1b1"
NUM_VIRTUAL_TOKENS = 4
NUM_EPOCHS = 6
tokenizer = AutoTokenizer.from_pretrained(model_name)
foundational_model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
def get_outputs(model, inputs, max_new_tokens=100):
outputs = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
max_new_tokens=max_new_tokens,
# temperature=0.2,
# top_p=0.95,
# do_sample=True,
repetition_penalty=1.5, # Avoid repetition.
early_stopping=True, # The model can stop before reaching max_length
eos_token_id=tokenizer.eos_token_id,
)
return outputs
input_prompt = tokenizer("I want you to act as a motivational coach. ", return_tensors="pt")
foundational_outputs_prompt = get_outputs(foundational_model, input_prompt, max_new_tokens=50)
print(tokenizer.batch_decode(foundational_outputs_prompt, skip_special_tokens=True))
import os
from IPython.display import display
# os.environ["TOKENIZERS_PARALLELISM"] = "false"
from datasets import load_dataset
dataset_prompt = "fka/awesome-chatgpt-prompts"
# Create the Dataset to create prompts.
#
data_prompt = load_dataset(dataset_prompt)
data_prompt = data_prompt.map(lambda samples: tokenizer(samples["prompt"]), batched=True)
train_sample_prompt = data_prompt["train"].select(range(50))
display(train_sample_prompt)
print(train_sample_prompt[:1])
dataset_sentences = load_dataset("Abirate/english_quotes")
data_sentences = dataset_sentences.map(lambda samples: tokenizer(samples["quote"]), batched=True)
train_sample_sentences = data_sentences["train"].select(range(25))
train_sample_sentences = train_sample_sentences.remove_columns(["author", "tags"])
display(train_sample_sentences)
print(train_sample_sentences[:1])
from peft import get_peft_model, PromptTuningConfig, TaskType, PromptTuningInit
generation_config = PromptTuningConfig(
task_type=TaskType.CAUSAL_LM, # This type indicates the model will generate text.
prompt_tuning_init=PromptTuningInit.RANDOM, # The added virtual tokens are initialized with random numbers
num_virtual_tokens=NUM_VIRTUAL_TOKENS, # Number of virtual tokens to be added and trained.
tokenizer_name_or_path=model_name, # The pre-trained model.
)
peft_model_prompt = get_peft_model(foundational_model, generation_config)
print(peft_model_prompt.print_trainable_parameters())
peft_model_sentences = get_peft_model(foundational_model, generation_config)
print(peft_model_sentences.print_trainable_parameters())
from transformers import TrainingArguments
def create_training_arguments(path, learning_rate=0.0035, epochs=6):
training_args = TrainingArguments(
output_dir=path, # Where the model predictions and checkpoints will be written
use_cpu=True, # This is necessary for CPU clusters.
auto_find_batch_size=True, # Find a suitable batch size that will fit into memory automatically
learning_rate=learning_rate, # Higher learning rate than full Fine-Tuning
num_train_epochs=epochs,
)
return training_args
import os
working_dir = "./"
# It is best to store the models in separate folders.
# Create the name of the directories where to store the models.
output_directory_prompt = os.path.join(working_dir, "peft_outputs_prompt")
output_directory_sentences = os.path.join(working_dir, "peft_outputs_sentences")
# Just creating the directories if they do not exist.
if not os.path.exists(working_dir):
os.mkdir(working_dir)
if not os.path.exists(output_directory_prompt):
os.mkdir(output_directory_prompt)
if not os.path.exists(output_directory_sentences):
os.mkdir(output_directory_sentences)
training_args_prompt = create_training_arguments(output_directory_prompt, 0.003, NUM_EPOCHS)
training_args_sentences = create_training_arguments(output_directory_sentences, 0.003, NUM_EPOCHS)
from transformers import Trainer, DataCollatorForLanguageModeling
def create_trainer(model, training_args, train_dataset):
trainer = Trainer(
model=model, # We pass in the PEFT version of the foundation model, bloomz-560M
args=training_args, # The args for the training.
train_dataset=train_dataset, # The dataset used to train the model.
data_collator=DataCollatorForLanguageModeling(
tokenizer, mlm=False
), # mlm=False indicates not to use masked language modeling
)
return trainer
trainer_prompt = create_trainer(peft_model_prompt, training_args_prompt, train_sample_prompt)
trainer_prompt.train()
trainer_sentences = create_trainer(peft_model_sentences, training_args_sentences, train_sample_sentences)
trainer_sentences.train()
trainer_prompt.model.save_pretrained(output_directory_prompt)
trainer_sentences.model.save_pretrained(output_directory_sentences)
from peft import PeftModel
loaded_model_prompt = PeftModel.from_pretrained(
foundational_model,
output_directory_prompt,
# device_map='auto',
is_trainable=False,
)
loaded_model_prompt_outputs = get_outputs(loaded_model_prompt, input_prompt)
print(tokenizer.batch_decode(loaded_model_prompt_outputs, skip_special_tokens=True))
loaded_model_prompt.load_adapter(output_directory_sentences, adapter_name="quotes")
loaded_model_prompt.set_adapter("quotes")
input_sentences = tokenizer("The two things that matter most are ", return_tensors="pt")  # NOTE: not defined earlier in this condensed script; placeholder input assumed here
loaded_model_sentences_outputs = get_outputs(loaded_model_prompt, input_sentences)
print(tokenizer.batch_decode(loaded_model_sentences_outputs, skip_special_tokens=True))
# Notes:
# - https://github.com/huggingface/peft/issues/1962
# - https://github.com/huggingface/peft/issues/869#issuecomment-2263322623
```
### Expected behavior
The `loaded_model_prompt` should be able to execute `generate` and return a prompt-tuned response.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2379/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2377
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2377/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2377/comments
|
https://api.github.com/repos/huggingface/peft/issues/2377/events
|
https://github.com/huggingface/peft/issues/2377
| 2,853,540,672
|
I_kwDOIf9iDM6qFZNA
| 2,377
|
Contributing new model merging method to PEFT
|
{
"login": "SpeeeedLee",
"id": 132431571,
"node_id": "U_kgDOB-S-0w",
"avatar_url": "https://avatars.githubusercontent.com/u/132431571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SpeeeedLee",
"html_url": "https://github.com/SpeeeedLee",
"followers_url": "https://api.github.com/users/SpeeeedLee/followers",
"following_url": "https://api.github.com/users/SpeeeedLee/following{/other_user}",
"gists_url": "https://api.github.com/users/SpeeeedLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SpeeeedLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SpeeeedLee/subscriptions",
"organizations_url": "https://api.github.com/users/SpeeeedLee/orgs",
"repos_url": "https://api.github.com/users/SpeeeedLee/repos",
"events_url": "https://api.github.com/users/SpeeeedLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/SpeeeedLee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-02-14T12:17:46
| 2025-02-14T15:57:51
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Hi all,
I noticed that several model merging methods, such as TIES and DARE, have been implemented in this library, as mentioned [here](https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/model_merging.md).
I was wondering if there is a way for me to contribute a recently accepted model merging method to this repo.
I would really appreciate any guidance or suggestions on how to proceed.
Thanks in advance!
### Motivation
Enhance the diversity of model merging supported in this library.
### Your contribution
I can submit a PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2377/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2368
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2368/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2368/comments
|
https://api.github.com/repos/huggingface/peft/issues/2368/events
|
https://github.com/huggingface/peft/issues/2368
| 2,838,153,330
|
I_kwDOIf9iDM6pKshy
| 2,368
|
[FSDP] After training embed_tokens in modules_to_save model has hallucinations
|
{
"login": "DmitryDiTy",
"id": 90377536,
"node_id": "MDQ6VXNlcjkwMzc3NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/90377536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DmitryDiTy",
"html_url": "https://github.com/DmitryDiTy",
"followers_url": "https://api.github.com/users/DmitryDiTy/followers",
"following_url": "https://api.github.com/users/DmitryDiTy/following{/other_user}",
"gists_url": "https://api.github.com/users/DmitryDiTy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DmitryDiTy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DmitryDiTy/subscriptions",
"organizations_url": "https://api.github.com/users/DmitryDiTy/orgs",
"repos_url": "https://api.github.com/users/DmitryDiTy/repos",
"events_url": "https://api.github.com/users/DmitryDiTy/events{/privacy}",
"received_events_url": "https://api.github.com/users/DmitryDiTy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 17
| 2025-02-07T13:23:07
| 2025-02-14T08:23:35
| 2025-02-14T08:21:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
### Libs
```
absl-py==2.1.0
accelerate==1.3.0
aiohappyeyeballs==2.4.4
aiohttp==3.11.10
aiosignal==1.3.2
annotated-types==0.7.0
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1733250440834/work
async-timeout==5.0.1
attrs==24.3.0
beartype==0.14.1
bert-score==0.3.13
better-abc==0.0.3
certifi==2024.12.14
charset-normalizer==3.4.0
circuitsvis @ git+https://github.com/callummcdougall/CircuitsVis.git@1e6129d08cae7af9242d9ab5d3ed322dd44b4dd3#subdirectory=python
click==8.1.7
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1733502965406/work
contourpy==1.3.1
cycler==0.12.1
datasets==3.2.0
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1734158947252/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1733236420667/work
dill==0.3.8
docker-pycreds==0.4.0
einops==0.8.0
evaluate==0.4.3
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1733208806608/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1733569351617/work
fancy-einsum==0.0.3
filelock==3.16.1
fonttools==4.55.6
frozenlist==1.5.0
fsspec==2024.9.0
gitdb==4.0.11
GitPython==3.1.43
huggingface-hub==0.27.0
idna==3.10
importlib-metadata==5.2.0
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1719845459717/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1732896932739/work
ipywidgets==8.1.5
jaxtyping==0.2.36
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1733300866624/work
Jinja2==3.1.4
joblib==1.4.2
jupyter_client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1733440914442/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1727163409502/work
jupyterlab_widgets==3.0.13
kiwisolver==1.4.8
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.10.0
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1733416936468/work
mdurl==0.1.2
mpmath==1.3.0
multidict==6.1.0
multiprocess==0.70.16
nest_asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1733325553580/work
networkx==3.4.2
nltk==3.9.1
numpy==1.26.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1733203243479/work
pandas==2.2.3
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1733271261340/work
peft==0.14.0
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1733301927746/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1733327343728/work
pillow==11.1.0
platformdirs @ file:///home/conda/feedstock_root/build_artifacts/platformdirs_1733232627818/work
prompt_toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1733302527033/work
propcache==0.2.1
protobuf==5.29.1
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1729847040822/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1733302279685/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl#sha256=92c32ff62b5fd8cf325bec5ab90d7be3d2a8ca8c8a3813ff487a8d2002630d1f
pure_eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1733569405015/work
pyarrow==18.1.0
pydantic==2.10.3
pydantic_core==2.27.1
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1733221634316/work
pyparsing==3.2.1
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1733215673016/work
pytz==2024.2
PyYAML==6.0.2
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1728642224099/work
regex==2024.11.6
requests==2.32.3
rich==13.9.4
rouge_score==0.1.2
safetensors==0.4.5
scikit-learn==1.6.1
scipy==1.15.1
sentence-transformers==3.3.1
sentencepiece==0.2.0
sentry-sdk==2.19.2
setproctitle==1.3.4
six @ file:///home/conda/feedstock_root/build_artifacts/six_1733380938961/work
smmap==5.0.1
stack_data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1733569443808/work
sympy==1.13.1
threadpoolctl==3.5.0
tokenizers==0.21.0
torch==2.5.1
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1732615898999/work
tqdm==4.67.1
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1733367359838/work
transformer-lens==2.10.0
transformers==4.48.2
triton==3.1.0
trl==0.14.0
typeguard==4.4.1
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1733188668063/work
tzdata==2024.2
urllib3==2.2.3
wandb==0.19.1
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1733231326287/work
widgetsnbextension==4.0.13
xxhash==3.5.0
yarl==1.18.3
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1732827521216/work
```
### Cuda
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA RTX 6000 Ada Gene... Off | 00000000:01:00.0 Off | Off |
| 30% 40C P8 27W / 300W | 43531MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA RTX 6000 Ada Gene... Off | 00000000:25:00.0 Off | Off |
| 30% 34C P8 23W / 300W | 3021MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA RTX 6000 Ada Gene... Off | 00000000:41:00.0 Off | Off |
| 30% 37C P8 29W / 300W | 6MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA RTX 6000 Ada Gene... Off | 00000000:61:00.0 Off | Off |
| 30% 40C P8 30W / 300W | 10881MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 4 NVIDIA RTX 6000 Ada Gene... Off | 00000000:81:00.0 Off | Off |
| 30% 34C P8 24W / 300W | 1319MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 5 NVIDIA RTX 6000 Ada Gene... Off | 00000000:A1:00.0 Off | Off |
| 40% 59C P2 71W / 300W | 5763MiB / 49140MiB | 6% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 6 NVIDIA RTX 6000 Ada Gene... Off | 00000000:C1:00.0 Off | Off |
| 30% 47C P2 91W / 300W | 43307MiB / 49140MiB | 74% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
```
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
## Context
I am training my model for text generation (completion-only LM) on my own dataset (long dialogues with system/user/assistant turns). I added new tokens to my model and tokenizer using:
```python
tokenizer.add_tokens(
[
AddedToken("<|start_thinking|>", normalized=False, special=False),
AddedToken("<|end_thinking|>", normalized=False, special=False),
AddedToken("<tool_response>", normalized=False, special=False),
AddedToken("</tool_response>", normalized=False, special=False),
AddedToken("<|start_response|>", normalized=False, special=False),
AddedToken("<|end_response|>", normalized=False, special=False),
]
)
model.resize_token_embeddings(len(tokenizer))
```
and I saved the extended model before training.
After that I wanted to train my extended model with PEFT + TRL + FSDP.
The model I used as the base:
```
Qwen2ForCausalLM(
(model): Qwen2Model(
(embed_tokens): Embedding(151671, 3584)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2Attention(
(q_proj): Linear(in_features=3584, out_features=3584, bias=True)
(k_proj): Linear(in_features=3584, out_features=512, bias=True)
(v_proj): Linear(in_features=3584, out_features=512, bias=True)
(o_proj): Linear(in_features=3584, out_features=3584, bias=False)
)
(mlp): Qwen2MLP(
(gate_proj): Linear(in_features=3584, out_features=18944, bias=False)
(up_proj): Linear(in_features=3584, out_features=18944, bias=False)
(down_proj): Linear(in_features=18944, out_features=3584, bias=False)
(act_fn): SiLU()
)
(input_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
(post_attention_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
)
)
(norm): Qwen2RMSNorm((3584,), eps=1e-06)
(rotary_emb): Qwen2RotaryEmbedding()
)
(lm_head): Linear(in_features=3584, out_features=151671, bias=False)
)
```
## Code
### Accelerate config
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Training script
```python
import warnings
warnings.filterwarnings("ignore")
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0, 1, 2, 3'
os.environ['TOKENIZERS_PARALLELISM'] = 'true'
import wandb
import numpy as np
import torch
import json
from typing import List, Optional, Union, Any, Literal
from datasets import load_dataset, Dataset
import evaluate
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
EarlyStoppingCallback,
DataCollatorForLanguageModeling,
AddedToken,
)
from peft import (
LoraConfig,
get_peft_model,
TaskType,
PeftModelForCausalLM
)
from trl import (
SFTConfig,
SFTTrainer,
DataCollatorForCompletionOnlyLM
)
from special_utils import DataCollatorForMultiCompletionOnlyLM, CustomLossTrainer
##################################
# Enviroments and configurations #
##################################
CHECKPOINT_PATH = None
DATA_CACHE_DIR = "/home/raid/datasets/"
MODEL_CACHE_DIR = "/home/raid/hf_cache/"
MODEL_PATH = "/home/raid/models/extended_qwen"
METRICS_CACHE = "/home/raid/metrics_cache"
MAX_PROMPT_LENGTH = 5000
LR = 1e-5
STEP_SIZE = 10
BATCH_SIZE = 2
GA_SIZE = 4
TRAIN_EPOCHS = 1
REPORT_TO = ['none', 'wandb'][0]
LORA_R = 48
LORA_ALPHA = 96
TARGET_MODULES = [
"self_attn.q_proj",
"self_attn.k_proj",
"self_attn.v_proj",
"self_attn.o_proj",
"mlp.gate_proj",
"mlp.up_proj",
"mlp.down_proj",
]
MODULES_TO_SAVE = [
"embed_tokens",
"lm_head"
]
REVISION_NAME = f"TEST_qwen-tp-({LR})LR-({BATCH_SIZE})BATCH_SIZE-({GA_SIZE})GA_SIZE-({TRAIN_EPOCHS})TRAIN_EPOCHS-({LORA_R})LORA_R-({LORA_ALPHA})LORA_ALPHA"
LOGS_PATH = f"/home/raid/models/{REVISION_NAME}/logs"
print(REVISION_NAME)
def main():
#####################
# Model & Tokenizer #
#####################
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
# cache_dir=MODEL_CACHE_DIR,
torch_dtype=torch.bfloat16,
use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
# cache_dir=MODEL_CACHE_DIR,
)
tokenizer.padding_side = 'right'
### FREEZING ###
for param in model.parameters():
param.requires_grad = False
print(tokenizer.added_tokens_decoder)
###########
# Dataset #
###########
dataset = load_dataset(
"my/dataset",
"train",
cache_dir=DATA_CACHE_DIR
)
def prepare_texts(example):
example['text'] = tokenizer.apply_chat_template(
conversation=json.loads(example['conversation']),
tools=json.loads(example['tools']),
tokenize=False
)
return example
dataset = dataset.map(prepare_texts)
dataset_vvalid = Dataset.from_dict(dataset['train'][:100]) # For tests
print(dataset)
########
# PEFT #
########
lora_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
r=LORA_R,
lora_alpha=LORA_ALPHA,
target_modules=TARGET_MODULES,
modules_to_save=MODULES_TO_SAVE,
lora_dropout=0.1,
bias="none",
)
##################
# Trainer & Args #
##################
bertscore = evaluate.load(
"bertscore",
cache_dir=METRICS_CACHE
)
rouge = evaluate.load(
"rouge",
cache_dir=METRICS_CACHE
)
def preprocess_logits_for_metrics(logits, labels):
pred_ids = torch.argmax(logits, dim=-1)
return pred_ids, labels
def compute_metrics(eval_pred):
pred_ids = torch.tensor(eval_pred.predictions[0])
label_ids = torch.tensor(eval_pred.label_ids)
preds = tokenizer.batch_decode(torch.where(label_ids == -100, tokenizer.eos_token_id, pred_ids), skip_special_tokens=True)
labels = tokenizer.batch_decode(torch.where(label_ids == -100, tokenizer.eos_token_id, label_ids), skip_special_tokens=True)
if not os.path.exists(LOGS_PATH):
os.makedirs(LOGS_PATH, exist_ok=True)
with open(LOGS_PATH + "/data", "w") as f:
f.write(json.dumps([preds, labels]))
print("PREDS:", preds[0], "###")
print("LABELS:", labels[0], "###")
bertscore_results = bertscore.compute(
predictions=preds,
references=labels,
lang='en'
)
rouge_results = rouge.compute(
predictions=preds,
references=labels,
)
return {
"bert_score_f1": np.mean(bertscore_results['f1']),
"bert_score_recall": np.mean(bertscore_results['recall']),
"bert_score_precision": np.mean(bertscore_results['precision']),
"rouge1": rouge_results['rouge1'],
'rouge2': rouge_results['rouge2'],
'rougeL': rouge_results['rougeL'],
}
data_collator = DataCollatorForMultiCompletionOnlyLM(
tokenizer=tokenizer,
response_template="<|im_start|>assistant\n",
end_response_template="<|im_end|>",
mlm=False
)
special_token_ids = [151665, 151666, 151667, 151668, 151669, 151670]
special_token_weight = 1.2
training_args = SFTConfig(
## SFT Arguments ##
max_seq_length=MAX_PROMPT_LENGTH,
## Standard Arguments ##
do_train=True,
do_eval=True,
output_dir=f"/home/raid/checkpoints/{REVISION_NAME}",
overwrite_output_dir=True,
eval_strategy="steps",
eval_steps=STEP_SIZE,
torch_empty_cache_steps=STEP_SIZE,
num_train_epochs=TRAIN_EPOCHS,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,
gradient_accumulation_steps=GA_SIZE,
optim="adamw_torch",
save_steps=STEP_SIZE,
save_total_limit=4,
logging_steps=STEP_SIZE,
learning_rate=LR,
lr_scheduler_type="cosine",
bf16=True,
gradient_checkpointing=True,
gradient_checkpointing_kwargs = {"use_reentrant": True},
load_best_model_at_end=True,
metric_for_best_model="eval_rougeL",
greater_is_better=True,
report_to=REPORT_TO,
run_name=REVISION_NAME,
resume_from_checkpoint=True if CHECKPOINT_PATH else False,
)
trainer = CustomLossTrainer(
model=model,
args=training_args,
peft_config=lora_config,
train_dataset=dataset_vvalid,#dataset['train'],
eval_dataset=dataset_vvalid,#dataset['valid'],
processing_class=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
preprocess_logits_for_metrics=preprocess_logits_for_metrics,
callbacks=[EarlyStoppingCallback(early_stopping_patience=100)],
special_token_ids=special_token_ids,
special_token_weight=special_token_weight,
)
print("MODEL DTYPE: ", trainer.model.dtype)
# handle PEFT+FSDP case
trainer.model.print_trainable_parameters()
if getattr(trainer.accelerator.state, "fsdp_plugin", None):
from peft.utils.other import fsdp_auto_wrap_policy
fsdp_plugin = trainer.accelerator.state.fsdp_plugin
fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
# Training
if CHECKPOINT_PATH is not None:
trainer.train(resume_from_checkpoint=CHECKPOINT_PATH)
else:
trainer.train()
if trainer.is_fsdp_enabled:
trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
trainer.save_model(f"/home/raid/models/{REVISION_NAME}/adapter")
if __name__ == "__main__":
main()
```
### Custom Collator & Trainer (special_utils.py)
```python
import torch
from transformers import DataCollatorForLanguageModeling
from typing import List, Optional, Union, Any, Literal
from trl import SFTTrainer
import numpy as np
import warnings  # needed for the warnings.warn calls in the collator below
# Adding weights to new tokens
class CustomLossTrainer(SFTTrainer):
def __init__(self, *args, special_token_ids, special_token_weight=1.2, **kwargs):
super().__init__(*args, **kwargs)
self.special_token_ids = special_token_ids
self.special_token_weight = special_token_weight
self.weights = None
def _init_weights(self, model):
self.weights = torch.ones(model.config.vocab_size, device=model.device)
for token_id in self.special_token_ids:
self.weights[token_id] = self.special_token_weight
self.cross_entropy = torch.nn.CrossEntropyLoss(weight=self.weights)
def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
if self.weights is None:
self._init_weights(model)
labels = inputs.pop("labels").to(model.device)
outputs = model(**inputs)
logits = outputs.get("logits").to(model.device)
loss = self.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
if return_outputs:
return loss, outputs
return loss
# For Completion with many different instruction templates
class DataCollatorForMultiCompletionOnlyLM(DataCollatorForLanguageModeling):
def __init__(
self,
response_template: Union[str, list[int]],
end_response_template: Union[str, list[int]],
instruction_template: Optional[Union[str, list[int]]] = None,
*args,
mlm: bool = False,
ignore_index: int = -100,
padding_free: bool = False,
**kwargs,
):
super().__init__(*args, mlm=mlm, **kwargs)
self.instruction_template = instruction_template
if isinstance(instruction_template, str):
# The user provides a string, must tokenize
self.instruction_token_ids = self.tokenizer.encode(self.instruction_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.instruction_token_ids = instruction_template
self.response_template = response_template
if isinstance(response_template, str):
# The user provides a string, must tokenize
self.response_token_ids = self.tokenizer.encode(self.response_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.response_token_ids = response_template
self.end_response_template = end_response_template
if isinstance(end_response_template, str):
# The user provides a string, must tokenize
self.end_response_token_ids = self.tokenizer.encode(self.end_response_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.end_response_token_ids = end_response_template
if not self.mlm and self.instruction_template and self.tokenizer.pad_token_id == self.tokenizer.eos_token_id:
warnings.warn(
"The pad_token_id and eos_token_id values of this tokenizer are identical. "
"If you are planning for multi-turn training, "
"it can result in the model continuously generating questions and answers without eos token. "
"To avoid this, set the pad_token_id to a different value.",
UserWarning,
)
self.ignore_index = ignore_index
self.padding_free = padding_free
def torch_call(self, examples: list[Union[list[int], Any, dict[str, Any]]]) -> dict[str, Any]:
batch = super().torch_call(examples)
for i in range(len(examples)):
batch["labels"][i] = torch.where(batch["labels"][i] == 0, 999999, batch["labels"][i])
response_token_ids_start_ids = []
for idx in np.where(batch["labels"][i] == self.response_token_ids[0])[0]:
# `response_token_ids` is `'### Response:\n'`, here we are just making sure that the token IDs match
if (
self.response_token_ids
== batch["labels"][i][idx : idx + len(self.response_token_ids)].tolist()
):
response_token_ids_start_ids.append(idx)
if len(response_token_ids_start_ids) == 0:
warnings.warn(
f"Could not find response key `{self.response_template}` in the following instance: "
f"{self.tokenizer.decode(batch['input_ids'][i])}. This instance will be ignored in loss "
"calculation. Note, if this happens often, consider increasing the `max_seq_length`.",
UserWarning,
)
batch["labels"][i, :] = self.ignore_index
else:
response_token_ids_end_ids = [response_token_ids_start_idx + len(self.response_token_ids) for response_token_ids_start_idx in response_token_ids_start_ids]
end_response_token_ids_idxs = []
for idx in np.where(batch["labels"][i] == self.end_response_token_ids[0])[0]:
# `response_token_ids` is `'### Response:\n'`, here we are just making sure that the token IDs match
if (
self.end_response_token_ids
== batch["labels"][i][idx : idx + len(self.end_response_token_ids)].tolist()
):
end_response_token_ids_idxs.append(idx)
if len(end_response_token_ids_idxs) == 0:
warnings.warn(
f"Could not find end response key `{self.response_template}` in the following instance: "
f"{self.tokenizer.decode(batch['input_ids'][i])}. This instance will be ignored in loss "
"calculation. Note, if this happens often, consider increasing the `max_seq_length`.",
UserWarning,
)
batch["labels"][i, :] = self.ignore_index
assistant_end_idxs = []
for assistant_start_idx in response_token_ids_end_ids:
for assistant_end_idx in end_response_token_ids_idxs:
if assistant_start_idx < assistant_end_idx:
assistant_end_idxs.append(assistant_end_idx)
break
assert len(response_token_ids_end_ids) == len(assistant_end_idxs), "Error, need count assistant replics == count after assistant end suffixes"
mask = torch.ones_like(batch['labels'][i, :]) * -1
mask = torch.where(batch['labels'][i, :] == self.ignore_index, 1, mask)
for start_id, end_id in zip(response_token_ids_end_ids, assistant_end_idxs):
mask[start_id : end_id + 1] = 1
labels = mask * batch['labels'][i, :]
batch['labels'][i, :] = torch.where(labels < 0, self.ignore_index, labels)
batch["labels"][i] = torch.where(batch["labels"][i] == 999999, 0, batch["labels"][i])
if self.padding_free:
# remove padding, `attention_mask` and add `position_ids`
attn_mask = batch.pop("attention_mask")
batch["input_ids"] = batch["input_ids"][attn_mask.bool()].unsqueeze(0)
batch["position_ids"] = attn_mask.cumsum(1)[attn_mask.bool()].unsqueeze(0) - 1
batch["labels"] = batch["labels"][attn_mask.bool()].unsqueeze(0)
batch["labels"][batch["position_ids"] == 0] = self.ignore_index
# Calculate cumulative sequence lengths for queries and keys to prevent graph breaks during further computations.
flattened_position_ids = batch["position_ids"].flatten()
indices_q = torch.arange(
flattened_position_ids.size(0), device=flattened_position_ids.device, dtype=torch.int32
)
batch["cu_seq_lens_q"] = torch.cat(
(
indices_q[flattened_position_ids == 0],
torch.tensor(
flattened_position_ids.size(), device=flattened_position_ids.device, dtype=torch.int32
),
)
)
batch["cu_seq_lens_k"] = batch["cu_seq_lens_q"]
# Determine maximum sequence lengths to prevent graph breaks during further computations.
batch["max_length_k"] = flattened_position_ids.max().item() + 1
batch["max_length_q"] = batch["max_length_k"]
return batch
```
## During training
To be as sure as possible that the error is not in the training process itself, I additionally saved the validation examples to a separate file and logged the metrics.
Metrics from wandb:

I inspected the decoded texts saved for validation, and everything looked fine.
## After training
After the training process I tried to load the model to check autoregressive inference:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_CACHE_DIR = "/home/raid/hf_cache"
DATA_CACHE_DIR = "/home/raid/datasets"
MODEL_PATH = "/home/raid/models/extended_qwen"
lora_path = "/home/raid/models/tool-plannings/qwen-tp-(1e-05)LR-(2)BATCH_SIZE-(4)GA_SIZE-(6)TRAIN_EPOCHS-(48)LORA_R-(96)LORA_ALPHA/adapter"
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
torch_dtype=torch.bfloat16,
use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
)
from peft import PeftModelForCausalLM
model = PeftModelForCausalLM.from_pretrained(
model,
lora_path # This contains adapter_model.safetensors, adapter_config.json, etc.
)
model
```
```
PeftModelForCausalLM(
(base_model): LoraModel(
(model): Qwen2ForCausalLM(
(model): Qwen2Model(
(embed_tokens): ModulesToSaveWrapper(
(original_module): Embedding(151671, 3584)
(modules_to_save): ModuleDict(
(default): Embedding(151671, 3584)
)
)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2Attention(
(q_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=3584, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(k_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=512, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=512, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(v_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=512, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=512, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(o_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=3584, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(mlp): Qwen2MLP(
(gate_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=18944, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=18944, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(up_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=18944, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=18944, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(down_proj): lora.Linear(
(base_layer): Linear(in_features=18944, out_features=3584, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=18944, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(act_fn): SiLU()
)
(input_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
(post_attention_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
)
)
(norm): Qwen2RMSNorm((3584,), eps=1e-06)
(rotary_emb): Qwen2RotaryEmbedding()
)
(lm_head): ModulesToSaveWrapper(
(original_module): Linear(in_features=3584, out_features=151671, bias=False)
(modules_to_save): ModuleDict(
(default): Linear(in_features=3584, out_features=151671, bias=False)
)
)
)
)
)
```
And during inference I got output like this:
```python
outputs = model.generate(
**inputs_tokens,
max_new_tokens=20,
)[0]
print(tokenizer.decode(outputs, skip_special_tokens=False))
```
```
...ngle stepA journey of a thousand miles'.<|im_end|>
<|im_start|>assistant # here start new tokens
write write write write write write write write write write write write write write write write write write write...
```
## Problem
I thought there was a mistake in saving the adapter, so instead of saving it I tried to merge the model and adapter immediately after training, in a script like this:
```python
merged_model = trainer.model.merge_and_unload(safe_merge=True)
merged_model.save_pretrained(f"/home/raid/models/{REVISION_NAME}")
```
and I encountered this error:
```
MODEL DTYPE: torch.bfloat16
trainable params: 1,107,362,816 || all params: 8,720,162,304 || trainable%: 12.6989
{'train_runtime': 79.4632, 'train_samples_per_second': 1.258, 'train_steps_per_second': 0.038, 'train_loss': 108.3709716796875, 'epoch': 0.92}
100%|██████████████████████████████████████████████████████████████| 3/3 [01:19<00:00, 26.51s/it]
[rank2]: Traceback (most recent call last):
[rank2]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank2]: main()
[rank2]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank2]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank2]: return self._unload_and_optionally_merge(
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank2]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank2]: delta_weight = self.get_delta_weight(active_adapter)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank2]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank2]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank1]: main()
[rank1]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank1]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank1]: return self._unload_and_optionally_merge(
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank1]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank1]: delta_weight = self.get_delta_weight(active_adapter)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank1]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank1]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank0]: main()
[rank0]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank0]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank0]: return self._unload_and_optionally_merge(
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank0]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank0]: delta_weight = self.get_delta_weight(active_adapter)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank0]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank0]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank3]: Traceback (most recent call last):
[rank3]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank3]: main()
[rank3]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank3]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank3]: return self._unload_and_optionally_merge(
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank3]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank3]: delta_weight = self.get_delta_weight(active_adapter)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank3]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank3]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
```
Besides that, I tried to load the adapter manually with a safetensors script, something like this:
```python
from safetensors import safe_open

# Remap the raw adapter keys to the names the PEFT-wrapped model expects
lora_state_dict = {}
with safe_open(lora_path, framework="pt", device="cpu") as f:
    for key in f.keys():
        new_key = key.replace("lora_A.", "lora_A.default.").replace("lora_B.", "lora_B.default.")
        new_key = new_key.replace("embed_tokens.weight", "embed_tokens.original_module.weight")
        new_key = new_key.replace("lm_head.weight", "lm_head.modules_to_save.default.weight")
        lora_state_dict[new_key] = f.get_tensor(key)

m, u = model.load_state_dict(lora_state_dict, strict=False)  # m: missing keys, u: unexpected keys
```
This let me load the adapter into the model, but I was still getting catastrophic hallucinations like:
```
...<|im_start|>assistant
# generated spaces
```
I assume the error lies in the adapter merge step and may be related to floating-point precision (bf16/fp16) or something similar.
P.S. I also tried training the model with fp16 and hit the same problem.
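To narrow this down, here is a minimal diagnostic sketch (assuming the PEFT-wrapped model is available as `model`, e.g. `trainer.model`, and the adapter is named `default`) that prints the base weight and LoRA factor shapes per layer, so the 1024-vs-7168 mismatch raised by `merge_and_unload` can be traced to a specific module before merging:
```python
# Diagnostic sketch: list LoRA shapes per wrapped layer before merging.
# Assumptions: `model` is the PEFT model and the active adapter is named "default".
for name, module in model.named_modules():
    if hasattr(module, "lora_A") and "default" in module.lora_A:
        base_w = module.base_layer.weight
        a_w = module.lora_A["default"].weight
        b_w = module.lora_B["default"].weight
        print(
            f"{name}: base={tuple(base_w.shape)} "
            f"lora_A={tuple(a_w.shape)} lora_B={tuple(b_w.shape)} "
            f"scaling={module.scaling['default']}"
        )
```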
### Expected behavior
Generation should work correctly after merging the adapter into my model.
|
{
"login": "DmitryDiTy",
"id": 90377536,
"node_id": "MDQ6VXNlcjkwMzc3NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/90377536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DmitryDiTy",
"html_url": "https://github.com/DmitryDiTy",
"followers_url": "https://api.github.com/users/DmitryDiTy/followers",
"following_url": "https://api.github.com/users/DmitryDiTy/following{/other_user}",
"gists_url": "https://api.github.com/users/DmitryDiTy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DmitryDiTy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DmitryDiTy/subscriptions",
"organizations_url": "https://api.github.com/users/DmitryDiTy/orgs",
"repos_url": "https://api.github.com/users/DmitryDiTy/repos",
"events_url": "https://api.github.com/users/DmitryDiTy/events{/privacy}",
"received_events_url": "https://api.github.com/users/DmitryDiTy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2368/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2367
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2367/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2367/comments
|
https://api.github.com/repos/huggingface/peft/issues/2367/events
|
https://github.com/huggingface/peft/issues/2367
| 2,838,045,820
|
I_kwDOIf9iDM6pKSR8
| 2,367
|
Some weights of MistralForSequenceClassification were not initialized from the model checkpoint at mistralai/Mistral-7B-Instruct-v0.3 and are newly initialized: ['score.weight']
|
{
"login": "amritansh6",
"id": 46628209,
"node_id": "MDQ6VXNlcjQ2NjI4MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/46628209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amritansh6",
"html_url": "https://github.com/amritansh6",
"followers_url": "https://api.github.com/users/amritansh6/followers",
"following_url": "https://api.github.com/users/amritansh6/following{/other_user}",
"gists_url": "https://api.github.com/users/amritansh6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amritansh6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amritansh6/subscriptions",
"organizations_url": "https://api.github.com/users/amritansh6/orgs",
"repos_url": "https://api.github.com/users/amritansh6/repos",
"events_url": "https://api.github.com/users/amritansh6/events{/privacy}",
"received_events_url": "https://api.github.com/users/amritansh6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2025-02-07T12:29:22
| 2025-02-10T11:01:57
| 2025-02-10T11:01:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
I have been trying to fine-tune Mistral 7B v0.3 for a downstream task using LoRA, and I get the following warning while running inference.
```python
base_model = AutoModelForSequenceClassification.from_pretrained(
model_id, use_auth_token="hf_***",
num_labels=2,
problem_type="single_label_classification"
)
base_model.config.pad_token_id = tokenizer.pad_token_id
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="SEQ_CLS",
modules_to_save=["score"]
)
model_with_lora = get_peft_model(base_model, lora_config)
model_with_lora.print_trainable_parameters()
training_args = TrainingArguments(
output_dir="./results_4",
evaluation_strategy="epoch",
save_strategy="steps",
save_steps=0.1,
logging_dir="./logs",
learning_rate=5e-5,
per_device_train_batch_size=2,
num_train_epochs=2,
weight_decay=0.01,
report_to="wandb",
save_total_limit=2,
logging_steps=10,
)
trainer = Trainer(
model=model_with_lora,
args=training_args,
train_dataset=hf_dataset,
eval_dataset=hf_eval_dataset,
tokenizer=tokenizer,
compute_metrics=None,
)
```
This is my training script. When loading the model for inference I get the following warning:
Some weights of MistralForSequenceClassification were not initialized from the model checkpoint at mistralai/Mistral-7B-Instruct-v0.3 and are newly initialized: ['score.weight']
Can someone check this?
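For reference, this is roughly how the model is loaded for inference (a minimal sketch; `./results_4/final` is a placeholder for the actual adapter directory). As far as I understand, `PeftModel.from_pretrained` should restore the trained `score` module saved via `modules_to_save` over the freshly initialized head:
```python
# Inference-loading sketch; "./results_4/final" stands in for the saved adapter directory.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_model = AutoModelForSequenceClassification.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    num_labels=2,
    problem_type="single_label_classification",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
base_model.config.pad_token_id = tokenizer.pad_token_id

# The warning about score.weight comes from this base-model load; loading the PEFT
# adapter afterwards should restore the trained `score` weights from modules_to_save.
model = PeftModel.from_pretrained(base_model, "./results_4/final")
model.eval()
```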
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
base_model = AutoModelForSequenceClassification.from_pretrained(
model_id, use_auth_token="hf_***",
num_labels=2,
problem_type="single_label_classification"
)
base_model.config.pad_token_id = tokenizer.pad_token_id
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="SEQ_CLS",
modules_to_save=["score"]
)
model_with_lora = get_peft_model(base_model, lora_config)
model_with_lora.print_trainable_parameters()
training_args = TrainingArguments(
output_dir="./results_4",
evaluation_strategy="epoch",
save_strategy="steps",
save_steps=0.1,
logging_dir="./logs",
learning_rate=5e-5,
per_device_train_batch_size=2,
num_train_epochs=2,
weight_decay=0.01,
report_to="wandb",
save_total_limit=2,
logging_steps=10,
)
trainer = Trainer(
model=model_with_lora,
args=training_args,
train_dataset=hf_dataset,
eval_dataset=hf_eval_dataset,
tokenizer=tokenizer,
compute_metrics=None,
)
```
### Expected behavior
Ideally, this warning should not appear.
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2367/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2364
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2364/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2364/comments
|
https://api.github.com/repos/huggingface/peft/issues/2364/events
|
https://github.com/huggingface/peft/issues/2364
| 2,835,746,171
|
I_kwDOIf9iDM6pBg17
| 2,364
|
docs: broken links to boft
|
{
"login": "makelinux",
"id": 2335185,
"node_id": "MDQ6VXNlcjIzMzUxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2335185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makelinux",
"html_url": "https://github.com/makelinux",
"followers_url": "https://api.github.com/users/makelinux/followers",
"following_url": "https://api.github.com/users/makelinux/following{/other_user}",
"gists_url": "https://api.github.com/users/makelinux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makelinux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makelinux/subscriptions",
"organizations_url": "https://api.github.com/users/makelinux/orgs",
"repos_url": "https://api.github.com/users/makelinux/repos",
"events_url": "https://api.github.com/users/makelinux/events{/privacy}",
"received_events_url": "https://api.github.com/users/makelinux/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-06T14:48:16
| 2025-02-07T10:14:44
| 2025-02-07T10:14:44
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
Snippet:
Take a look at the following step-by-step guides on how to finetune a model with BOFT:
[Dreambooth finetuning with BOFT](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_dreambooth)
[Controllable generation finetuning with BOFT (ControlNet)](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_controlnet)
### Expected behavior
Perhaps the links should lead to:
https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md
https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2364/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2362
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2362/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2362/comments
|
https://api.github.com/repos/huggingface/peft/issues/2362/events
|
https://github.com/huggingface/peft/issues/2362
| 2,833,885,059
|
I_kwDOIf9iDM6o6aeD
| 2,362
|
Import error
|
{
"login": "ikamensh",
"id": 23004004,
"node_id": "MDQ6VXNlcjIzMDA0MDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/23004004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikamensh",
"html_url": "https://github.com/ikamensh",
"followers_url": "https://api.github.com/users/ikamensh/followers",
"following_url": "https://api.github.com/users/ikamensh/following{/other_user}",
"gists_url": "https://api.github.com/users/ikamensh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikamensh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikamensh/subscriptions",
"organizations_url": "https://api.github.com/users/ikamensh/orgs",
"repos_url": "https://api.github.com/users/ikamensh/repos",
"events_url": "https://api.github.com/users/ikamensh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikamensh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-05T20:19:35
| 2025-02-05T20:38:50
| 2025-02-05T20:38:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Successfully installed accelerate-1.3.0 aiohappyeyeballs-2.4.4 aiohttp-3.11.11 aiosignal-1.3.2 bitsandbytes-0.45.1 datasets-3.2.0 dill-0.3.8 frozenlist-1.5.0 huggingface_hub-0.28.1 multidict-6.1.0 multiprocess-0.70.16 pandas-2.2.3 peft-0.14.0 propcache-0.2.1 pyarrow-19.0.0 pytz-2025.1 regex-2024.11.6 safetensors-0.5.2 tokenizers-0.13.3 tqdm-4.67.1 transformers-4.30.2 tzdata-2025.1 xxhash-3.5.0 yarl-1.18.3
root@77c297c83b18:/workspace# python qlora.py
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1086, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[...]
File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 212, in <module>
from peft import PeftModel
File "/usr/local/lib/python3.11/dist-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/lib/python3.11/dist-packages/peft/auto.py", line 32, in <module>
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
File "/usr/local/lib/python3.11/dist-packages/peft/mapping.py", line 25, in <module>
from .mixed_model import PeftMixedModel
File "/usr/local/lib/python3.11/dist-packages/peft/mixed_model.py", line 29, in <module>
from .peft_model import PeftModel
File "/usr/local/lib/python3.11/dist-packages/peft/peft_model.py", line 37, in <module>
from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'Cache' from 'transformers' (/usr/local/lib/python3.11/dist-packages/transformers/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/qlora.py", line 17, in <module>
from transformers import (
File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1076, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1088, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'Cache' from 'transformers' (/usr/local/lib/python3.11/dist-packages/transformers/__init__.py)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
`pip install peft==0.14.0 transformers==4.30.2` on Linux with Python 3.11, then run the following:
```python
from transformers import (
LlamaForCausalLM,
LlamaTokenizer,
Trainer,
TrainingArguments,
DataCollatorForLanguageModeling,
)
```
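A quick sanity check of the version mismatch (assumption: peft 0.14.0 imports `transformers.Cache`, which transformers 4.30.2 does not provide, so upgrading transformers rather than patching peft is the likely fix):
```python
# Sanity check: does the installed transformers expose the Cache class that peft imports?
import transformers

print(transformers.__version__)           # 4.30.2 in the failing environment
print(hasattr(transformers, "Cache"))     # False on 4.30.2, True on recent releases
```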
### Expected behavior
The imports should work (or the failure should occur outside of peft).
|
{
"login": "ikamensh",
"id": 23004004,
"node_id": "MDQ6VXNlcjIzMDA0MDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/23004004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikamensh",
"html_url": "https://github.com/ikamensh",
"followers_url": "https://api.github.com/users/ikamensh/followers",
"following_url": "https://api.github.com/users/ikamensh/following{/other_user}",
"gists_url": "https://api.github.com/users/ikamensh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikamensh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikamensh/subscriptions",
"organizations_url": "https://api.github.com/users/ikamensh/orgs",
"repos_url": "https://api.github.com/users/ikamensh/repos",
"events_url": "https://api.github.com/users/ikamensh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikamensh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2362/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2359
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2359/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2359/comments
|
https://api.github.com/repos/huggingface/peft/issues/2359/events
|
https://github.com/huggingface/peft/issues/2359
| 2,829,346,186
|
I_kwDOIf9iDM6opGWK
| 2,359
|
Inconsistent documentation
|
{
"login": "makelinux",
"id": 2335185,
"node_id": "MDQ6VXNlcjIzMzUxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2335185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makelinux",
"html_url": "https://github.com/makelinux",
"followers_url": "https://api.github.com/users/makelinux/followers",
"following_url": "https://api.github.com/users/makelinux/following{/other_user}",
"gists_url": "https://api.github.com/users/makelinux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makelinux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makelinux/subscriptions",
"organizations_url": "https://api.github.com/users/makelinux/orgs",
"repos_url": "https://api.github.com/users/makelinux/repos",
"events_url": "https://api.github.com/users/makelinux/events{/privacy}",
"received_events_url": "https://api.github.com/users/makelinux/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 5
| 2025-02-04T07:25:29
| 2025-03-06T15:03:57
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
The content of https://huggingface.co/docs/peft/index is not synchronised with the ToC.
"How-to guides" has already become "PEFT method guides".
The "PEFT method guides" are located under the `task_guides` directory.

### Expected behavior
Consistent documentation.
Clear, unambiguous names.
Links should match their titles and content.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2359/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2355
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2355/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2355/comments
|
https://api.github.com/repos/huggingface/peft/issues/2355/events
|
https://github.com/huggingface/peft/issues/2355
| 2,823,704,539
|
I_kwDOIf9iDM6oTk_b
| 2,355
|
dataclass config handling
|
{
"login": "moghadas76",
"id": 23231913,
"node_id": "MDQ6VXNlcjIzMjMxOTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23231913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moghadas76",
"html_url": "https://github.com/moghadas76",
"followers_url": "https://api.github.com/users/moghadas76/followers",
"following_url": "https://api.github.com/users/moghadas76/following{/other_user}",
"gists_url": "https://api.github.com/users/moghadas76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moghadas76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moghadas76/subscriptions",
"organizations_url": "https://api.github.com/users/moghadas76/orgs",
"repos_url": "https://api.github.com/users/moghadas76/repos",
"events_url": "https://api.github.com/users/moghadas76/events{/privacy}",
"received_events_url": "https://api.github.com/users/moghadas76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-31T14:48:29
| 2025-03-10T15:04:18
| 2025-03-10T15:04:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] torchtune==0.5.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.52 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] easy-torch 1.3.2 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.52 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-forecasting 1.2.0 pypi_0 pypi
[conda] pytorch-lightning 2.2.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi
[conda] torch-summary 1.4.5 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.3.0.post0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchtune 0.5.0 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
See PR
### Expected behavior
See PR
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2355/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2354
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2354/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2354/comments
|
https://api.github.com/repos/huggingface/peft/issues/2354/events
|
https://github.com/huggingface/peft/issues/2354
| 2,823,156,387
|
I_kwDOIf9iDM6oRfKj
| 2,354
|
Commented PeftConfig
|
{
"login": "moghadas76",
"id": 23231913,
"node_id": "MDQ6VXNlcjIzMjMxOTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23231913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moghadas76",
"html_url": "https://github.com/moghadas76",
"followers_url": "https://api.github.com/users/moghadas76/followers",
"following_url": "https://api.github.com/users/moghadas76/following{/other_user}",
"gists_url": "https://api.github.com/users/moghadas76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moghadas76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moghadas76/subscriptions",
"organizations_url": "https://api.github.com/users/moghadas76/orgs",
"repos_url": "https://api.github.com/users/moghadas76/repos",
"events_url": "https://api.github.com/users/moghadas76/events{/privacy}",
"received_events_url": "https://api.github.com/users/moghadas76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-31T11:33:50
| 2025-03-10T15:04:20
| 2025-03-10T15:04:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
The line `# from .config import PeftConfig, PeftType, PromptLearningConfig, TaskType` is commented out in `./peft/utils/__init__.py`.
Why?
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
`from peft.utils import PeftConfig`
### Expected behavior
Being able to access `PeftConfig` from `peft.utils`.
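For what it's worth, the config classes are still importable from the package root in peft 0.14.0; a minimal check:
```python
# Works on peft 0.14.0: the config classes are re-exported at the package root.
from peft import PeftConfig, PeftType, PromptLearningConfig, TaskType

print(PeftConfig, PeftType, PromptLearningConfig, TaskType)
```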
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2354/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2348
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2348/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2348/comments
|
https://api.github.com/repos/huggingface/peft/issues/2348/events
|
https://github.com/huggingface/peft/issues/2348
| 2,811,752,952
|
I_kwDOIf9iDM6nl_H4
| 2,348
|
Incorrect Magnitude Calculation for DoRA Linear Layers (Violates DoRA Paper Methodology)
|
{
"login": "arcteryox",
"id": 195980235,
"node_id": "U_kgDOC65ryw",
"avatar_url": "https://avatars.githubusercontent.com/u/195980235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arcteryox",
"html_url": "https://github.com/arcteryox",
"followers_url": "https://api.github.com/users/arcteryox/followers",
"following_url": "https://api.github.com/users/arcteryox/following{/other_user}",
"gists_url": "https://api.github.com/users/arcteryox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arcteryox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arcteryox/subscriptions",
"organizations_url": "https://api.github.com/users/arcteryox/orgs",
"repos_url": "https://api.github.com/users/arcteryox/repos",
"events_url": "https://api.github.com/users/arcteryox/events{/privacy}",
"received_events_url": "https://api.github.com/users/arcteryox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2025-01-26T19:43:50
| 2025-01-30T18:56:52
| 2025-01-30T18:41:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### **Description**
The current `DoraLinearLayer` incorrectly computes weight magnitude norms **per input channel** instead of **per output channel**, violating the methodology outlined in the [DoRA paper (Section 3.1)](https://arxiv.org/abs/2402.09353). This leads to degraded performance for linear layers (e.g., in LLMs).
---
### **Issue Details**
#### **Affected Code**:
`peft/tuners/lora/dora.py` → `DoraLinearLayer.get_weight_norm`
```python
def get_weight_norm(self, weight, lora_weight, scaling):
weight = transpose(weight, self.fan_in_fan_out) # ❌ Transposes to [in_features, out_features]
weight = weight + scaling * lora_weight
weight_norm = torch.linalg.norm(weight, dim=1) # Norm over input channels (dim=1)
return weight_norm
```
#### **Problem**:
- For a linear layer with weight shape `[out_features, in_features]`, transposing to `[in_features, out_features]` causes `dim=1` to represent **input channels**, not output channels.
- This contradicts the DoRA paper’s requirement to compute magnitude **per output channel** (rows of the weight matrix).
---
### **Steps to Reproduce**
1. Initialize a DoRA-linear layer:
```python
base_layer = nn.Linear(10, 5) # out_features=5, in_features=10
dora_layer = DoraLinearLayer(fan_in_fan_out=False)
```
2. Check weight norm dimensions:
```python
weight = base_layer.weight # Shape [5, 10]
lora_weight = torch.randn(5, 10) # Simulate LoRA delta
norm = dora_layer.get_weight_norm(weight, lora_weight, scaling=1.0)
print(norm.shape) # Outputs [10] (input channels) instead of [5] (output channels)
```
---
### **Expected vs Actual Behavior**
| Expected (Per Paper) | Actual (Current Code) |
|-----------------------|-----------------------|
| Norms computed over **output channels** (`out_features`). | Norms computed over **input channels** (`in_features`). |
---
### **Proposed Fix**
Remove the transpose and compute norms over `dim=1` directly:
```python
def get_weight_norm(self, weight, lora_weight, scaling):
# Remove transpose - work directly with [out_features, in_features]
weight = weight + scaling * lora_weight
weight_norm = torch.linalg.norm(weight, dim=1) # ✅ Norm over output channels (dim=1)
return weight_norm
```
#### **Impact of Fix**:
- Aligns with DoRA paper’s methodology for linear layers.
- Convolutional layers (e.g., `DoraConv2dLayer`) are unaffected and already correct.
---
### **Additional Context**
1. **Paper Reference**:
- Section 3.1 defines magnitude as the L2 norm of **rows** (output channels) for linear layers.
- Example: For weight matrix `W ∈ ℝ^{d×k}`, magnitude `m_j = ||W_j||_2` (row-wise norm).
2. **Why This Matters**:
- Magnitude scaling is critical for DoRA’s ability to decouple direction and magnitude updates.
- Incorrect scaling invalidates the method’s theoretical guarantees and reduces performance (e.g., on LLM fine-tuning tasks).
---
### **Verification**
After applying the fix:
```python
print(norm.shape) # Now outputs [5] (correct for out_features=5)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
### **Steps to Reproduce**
1. Initialize a DoRA-linear layer:
```python
base_layer = nn.Linear(10, 5) # out_features=5, in_features=10
dora_layer = DoraLinearLayer(fan_in_fan_out=False)
```
2. Check weight norm dimensions:
```python
weight = base_layer.weight # Shape [5, 10]
lora_weight = torch.randn(5, 10) # Simulate LoRA delta
norm = dora_layer.get_weight_norm(weight, lora_weight, scaling=1.0)
print(norm.shape) # Outputs [10] (input channels) instead of [5] (output channels)
```
### Expected behavior
### **Expected vs Actual Behavior**
| Expected (Per Paper) | Actual (Current Code) |
|-----------------------|-----------------------|
| Norms computed over **output channels** (`out_features`). | Norms computed over **input channels** (`in_features`). |
|
{
"login": "arcteryox",
"id": 195980235,
"node_id": "U_kgDOC65ryw",
"avatar_url": "https://avatars.githubusercontent.com/u/195980235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arcteryox",
"html_url": "https://github.com/arcteryox",
"followers_url": "https://api.github.com/users/arcteryox/followers",
"following_url": "https://api.github.com/users/arcteryox/following{/other_user}",
"gists_url": "https://api.github.com/users/arcteryox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arcteryox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arcteryox/subscriptions",
"organizations_url": "https://api.github.com/users/arcteryox/orgs",
"repos_url": "https://api.github.com/users/arcteryox/repos",
"events_url": "https://api.github.com/users/arcteryox/events{/privacy}",
"received_events_url": "https://api.github.com/users/arcteryox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2348/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2344
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2344/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2344/comments
|
https://api.github.com/repos/huggingface/peft/issues/2344/events
|
https://github.com/huggingface/peft/issues/2344
| 2,807,348,808
|
I_kwDOIf9iDM6nVL5I
| 2,344
|
FSDP2 and peft
|
{
"login": "psinger",
"id": 1677826,
"node_id": "MDQ6VXNlcjE2Nzc4MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1677826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psinger",
"html_url": "https://github.com/psinger",
"followers_url": "https://api.github.com/users/psinger/followers",
"following_url": "https://api.github.com/users/psinger/following{/other_user}",
"gists_url": "https://api.github.com/users/psinger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psinger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psinger/subscriptions",
"organizations_url": "https://api.github.com/users/psinger/orgs",
"repos_url": "https://api.github.com/users/psinger/repos",
"events_url": "https://api.github.com/users/psinger/events{/privacy}",
"received_events_url": "https://api.github.com/users/psinger/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2025-01-23T16:20:47
| 2025-03-03T15:04:06
| 2025-03-03T15:04:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey, sorry if this is the wrong place. Feel free to move it to discussion.
I am trying to get peft working with FSDP2 and am wondering if someone else has attempted that already.
The issue is that I'm always getting errors along the lines of:
`RuntimeError: aten.mm.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!`
Happy for any pointers.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2344/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2342
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2342/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2342/comments
|
https://api.github.com/repos/huggingface/peft/issues/2342/events
|
https://github.com/huggingface/peft/issues/2342
| 2,806,843,497
|
I_kwDOIf9iDM6nTQhp
| 2,342
|
CI: Add gptqmodel to the CI
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5192585063,
"node_id": "LA_kwDOIf9iDM8AAAABNYCPZw",
"url": "https://api.github.com/repos/huggingface/peft/labels/wip",
"name": "wip",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 4
| 2025-01-23T12:57:29
| 2025-02-28T10:35:25
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This issue is to track the TODO from [this comment](https://github.com/huggingface/peft/pull/2247#pullrequestreview-2569656574). Once optimum 1.24.0 and transformers 4.49.0 are released, we should enable gptqmodel in the CI (and remove auto-gptq).
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2342/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2339
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2339/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2339/comments
|
https://api.github.com/repos/huggingface/peft/issues/2339/events
|
https://github.com/huggingface/peft/issues/2339
| 2,802,697,166
|
I_kwDOIf9iDM6nDcPO
| 2,339
|
Peft version upgrade from 0.4.0 to 0.14.0 results in "No module named \u0027peft.utils.config\u0027" error
|
{
"login": "incchar",
"id": 184541983,
"node_id": "U_kgDOCv_jHw",
"avatar_url": "https://avatars.githubusercontent.com/u/184541983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/incchar",
"html_url": "https://github.com/incchar",
"followers_url": "https://api.github.com/users/incchar/followers",
"following_url": "https://api.github.com/users/incchar/following{/other_user}",
"gists_url": "https://api.github.com/users/incchar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/incchar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/incchar/subscriptions",
"organizations_url": "https://api.github.com/users/incchar/orgs",
"repos_url": "https://api.github.com/users/incchar/repos",
"events_url": "https://api.github.com/users/incchar/events{/privacy}",
"received_events_url": "https://api.github.com/users/incchar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-21T20:00:07
| 2025-03-02T15:03:46
| 2025-03-02T15:03:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Hello,
I'm migrating my sagemaker endpoint from the `huggingface-pytorch-inference:2.1.0-transformers4.37.0-gpu-py310-cu118-ubuntu20.04` image (which is being deprecated) to the `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` image, which is supported.
This new image does not support peft 0.4.0, so we have upgraded to peft 0.14.0 and to a compatible diffusers version. The sagemaker endpoint deploys correctly with these new versions, but once it's run, we receive the following error:
`No module named \u0027peft.utils.config\u0027`
I dug around and found that there's no usage of peft.utils.config in our inference code. The only related code I could find is in peft itself: https://github.com/huggingface/peft/blob/main/src/peft/config.py. However, in this code, it looks like utils.config does not exist at all.
Here's what I'm currently using:
diffusers==0.32.2
peft==0.14.0
Is the peft library somehow breaking itself by looking for a peft.utils.config that doesn't exist? Have I missed a step that would create the utils.config file? Or is there another hidden dependency using peft.utils.config?
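One workaround that could be tried (purely hypothetical, assuming the old module path is referenced by something pickled or cached from the peft 0.4.0 era) is to alias the removed module before anything imports it:
```python
# Hypothetical shim, not an official fix: recreate the removed peft.utils.config module
# so that code (or pickled objects) written against peft 0.4.0 can still resolve it.
import sys
import types

from peft import PeftConfig, PeftType, PromptLearningConfig, TaskType
from peft.config import PeftConfigMixin

_shim = types.ModuleType("peft.utils.config")
_shim.PeftConfig = PeftConfig
_shim.PeftConfigMixin = PeftConfigMixin
_shim.PeftType = PeftType
_shim.PromptLearningConfig = PromptLearningConfig
_shim.TaskType = TaskType
sys.modules["peft.utils.config"] = _shim
```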
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
Create a sagemaker endpoint using the new `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` huggingface DLC image.
Use a requirements.txt that looks like the following:
diffusers==0.32.2
peft==0.14.0
Observe that all requests to the sagemaker endpoint respond with 500 errors.
### Expected behavior
The Sagemaker endpoint should continue to process requests as it did before the version upgrade (using peft 0.4.0)
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2339/timeline
| null |
completed
| false
|