Image used:
modelscope/ms-swift/swift_lora_qwen2:v1
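If you are not starting from this prebuilt image, the environment can also be set up manually; a minimal sketch, assuming a machine with a working CUDA-enabled PyTorch install:
pip install ms-swift -U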
Dataset and model download:
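swift sft fetches the model and datasets from ModelScope automatically on first launch; to pre-download the base model explicitly, a sketch using the modelscope CLI:
modelscope download --model Qwen/Qwen2.5-7B-Instruct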
Dataset contents:
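The two alpaca-gpt4 datasets follow the standard Alpaca schema; the record below only illustrates the field layout, it is not a verbatim sample:
{"instruction": "<user task>", "input": "<optional extra context>", "output": "<reference answer>"}
The swift/self-cognition records additionally carry {{NAME}} and {{AUTHOR}} placeholders that swift fills from --model_name and --model_author, which is how the model learns to identify itself as swift-robot; the #500 suffix in the launch command below samples 500 records from each dataset.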
Launch command:
CUDA_VISIBLE_DEVICES=0 \
swift sft \
--model Qwen/Qwen2.5-7B-Instruct \
--train_type lora \
--dataset 'AI-ModelScope/alpaca-gpt4-data-zh#500' \
'AI-ModelScope/alpaca-gpt4-data-en#500' \
'swift/self-cognition#500' \
--torch_dtype bfloat16 \
--num_train_epochs 1 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--learning_rate 1e-4 \
--lora_rank 8 \
--lora_alpha 32 \
--target_modules all-linear \
--gradient_accumulation_steps 16 \
--eval_steps 50 \
--save_steps 50 \
--save_total_limit 5 \
--logging_steps 5 \
--max_length 2048 \
--output_dir output \
--system 'You are a helpful assistant.' \
--warmup_ratio 0.05 \
--dataloader_num_workers 4 \
--model_author swift \
--model_name swift-robot
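With --per_device_train_batch_size 1 and --gradient_accumulation_steps 16, the effective batch size per optimizer step is 16. For a faster run on several GPUs, a hypothetical data-parallel variant (NPROC_PER_NODE is the process-per-GPU launcher variable used by ms-swift; every other flag stays exactly as above):
CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=4 \
swift sft \
...(remaining flags as in the single-GPU command above)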
GPU memory usage:
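Memory usage during training can be watched from a second terminal with standard NVIDIA tooling, for example:
watch -n 1 nvidia-smi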
LoRA training process:
Validation set:
Visualization:
Learning rate:
Inference:
CUDA_VISIBLE_DEVICES=0 \
swift infer \
--adapters output/vx-xxx/checkpoint-xxx \
--stream true \
--temperature 0 \
--max_new_tokens 2048
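Here --temperature 0 makes decoding greedy and deterministic, --stream true prints tokens as they are generated, and vx-xxx/checkpoint-xxx stands for whichever checkpoint directory the training run actually produced under output/.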
Chatting with Qwen:
GPU memory needed when chatting with Qwen:
merge-lora:
CUDA_VISIBLE_DEVICES=0 \
swift infer \
--adapters output/vx-xxx/checkpoint-xxx \
--stream true \
--merge_lora true \
--infer_backend vllm \
--max_model_len 8192 \
--temperature 0 \
--max_new_tokens 2048
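--merge_lora true folds the adapter into the base weights before inference, which lets the vLLM backend serve plain full weights. To persist a merged model to disk instead of merging on the fly, a sketch using swift export (assuming the same checkpoint path; the merged weights land in a sibling checkpoint-xxx-merged directory):
CUDA_VISIBLE_DEVICES=0 \
swift export \
--adapters output/vx-xxx/checkpoint-xxx \
--merge_lora true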
TensorBoard:
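Training curves (loss, learning rate, eval metrics) can be inspected by pointing TensorBoard at the run's log directory, assuming swift wrote its event files under the versioned output folder:
tensorboard --logdir output/vx-xxx/runs --port 6006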