The code runs fine on CPU but errors out on GPU. Here are the pitfalls I hit:

Pitfall 1: if you want to pin the process to a specific GPU, select it before `import torch` (e.g. via the CUDA_VISIBLE_DEVICES environment variable); setting it after torch has initialized CUDA has no effect.

model = LlamaForCausalLM.from_pretrained(model_path, trust_remote_code=True).to(device)

Error: RuntimeError('Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)')
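A minimal sketch of the fix for Pitfall 1. The GPU id "6" here is just an example; substitute the card you actually want:

```python
import os

# Must be set BEFORE `import torch`: once torch has initialized CUDA,
# changing CUDA_VISIBLE_DEVICES no longer has any effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "6"  # expose only physical GPU 6

import torch

# The single visible card is now addressed as cuda:0 inside the process,
# so there is no way to accidentally mix cuda:6 and cuda:0 tensors.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```

With only one card visible, every `.to(device)` call lands on the same device, which avoids the mixed cuda:6/cuda:0 error above.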

Pitfall 2: both the model and input_ids need `.to(device)`; the tokenizer itself does not.
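For example, using a toy `nn.Module` as a stand-in for the LLaMA model (the move-to-device pattern is identical):

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Toy stand-in for LlamaForCausalLM: any nn.Module moves the same way
model = nn.Embedding(100, 8).to(device)

# The tokenizer object itself stays on CPU; only the tensors it returns
# (e.g. input_ids) need to be moved to the model's device.
input_ids = torch.tensor([[1, 2, 3]]).to(device)

out = model(input_ids)  # no cross-device error: everything is on `device`
```

If you forget the `.to(device)` on `input_ids`, you get exactly the "Expected all tensors to be on the same device" error shown above.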

Pitfall 3: don't use device_map="auto" here, or the weights get spread across multiple cards (and possibly offloaded to CPU/disk) instead of living on one card. Even if you then call `.to(device)` on both the model and the inputs, it still errors:

Error: You can't move a model that has some modules offloaded to cpu or disk.

You can check which device (CPU, or which GPU) each parameter sits on, though in my case this didn't actually reveal much:
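One way to do that check (a sketch; a small `nn.Linear` stands in for the loaded LLaMA model here):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the loaded model

# Print where every parameter lives (cpu, cuda:0, cuda:6, ...)
for name, p in model.named_parameters():
    print(name, p.device)

# A healthy single-device model has exactly one entry in this set;
# with device_map="auto" you may see several devices here.
devices = {p.device for p in model.parameters()}
print(devices)
```

If the set contains more than one device, that is the sharding from Pitfall 3 at work.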

Pitfall 4: for custom_llama, AutoModelForCausalLM doesn't work; you have to use LlamaForCausalLM.
