Table of Contents
- Setting Up a Cloud GPU Environment and Deploying a Large Language Model
- Creating an Instance (Renting Cloud GPU)
- Choosing a Server (One RTX 3090)
- Choosing an Image Version
- Connecting to the Instance
- Verifying the Environment and GPU Information
- Installing the Large Language Model
- 1. Choosing a Model
- 2. Downloading and Installing the Model
- Enabling Download Acceleration
- Installing git-lfs
- Downloading the Model
- Installing Dependencies
- 3. Testing the Model
- Test Code for the Deployed Model
- Running the Test
- Debugging Error 1
- Debugging Error 2
- Debugging Error 3
- Debugging Error 4
- Successful Run
Setting Up a Cloud GPU Environment and Deploying a Large Language Model
A step-by-step walkthrough of the following article:
"Building a DBA Personal Knowledge Base with pgvector, Part 1: Environment Setup and Model Deployment"
https://pgfans.cn/a/3605
Creating an Instance (Renting Cloud GPU)
Choosing a Server (One RTX 3090)
https://www.autodl.com/market/list
Choosing an Image Version
Select: PyTorch 1.7.0, with CUDA 11.0 and Python 3.8
Connecting to the Instance
Connect through the Jupyter Lab web interface.
Verifying the Environment and GPU Information
Connect via SSH in a terminal to work in the environment.
Check the environment and GPU information with: nvidia-smi
+--------------------------------------------------AutoDL--------------------------------------------------------+
Directory layout:
╔═════════════════╦═════════════╦═══════╦═══════════════════════════════════════════════════════════════════╗
║Directory        ║Name         ║Speed  ║Notes                                                              ║
╠═════════════════╬═════════════╬═══════╬═══════════════════════════════════════════════════════════════════╣
║/                ║System disk  ║Normal ║Data survives instance shutdown; can hold code, etc. Saved along   ║
║                 ║             ║       ║with saved images.                                                 ║
║/root/autodl-tmp ║Data disk    ║Fast   ║Data survives instance shutdown; good for data with high read/write║
║                 ║             ║       ║IO demands. Not saved along with saved images.                     ║
╚═════════════════╩═════════════╩═══════╩═══════════════════════════════════════════════════════════════════╝
CPU : 15 cores
RAM : 60 GB
GPU : NVIDIA GeForce RTX 3090, 1
Disk: system disk /                : 1% 47M/30G
      data disk   /root/autodl-tmp : 1% 8.0K/50G
+----------------------------------------------------------------------------------------------------------------+
*Notes:
1. The system disk is small; store large data on the data disk or a network disk. Resetting the system does not affect data on the data disk or network disk.
2. To clean up the system disk, see: https://www.autodl.com/docs/qa1/
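The note above matters later: the model weights are about 30 GB, which fits on the 50 GB data disk but not comfortably on the 30 GB system disk. Before a large download it is worth checking free space programmatically. A minimal sketch using only the standard library (the paths and the 30 GB figure are taken from this article; the helper name is my own):

```python
import shutil

def has_room(path: str, needed_gb: float) -> bool:
    """Return True if the filesystem holding `path` has at least `needed_gb` GB free."""
    usage = shutil.disk_usage(path)      # named tuple: (total, used, free), in bytes
    free_gb = usage.free / (1024 ** 3)
    return free_gb >= needed_gb

# On an AutoDL instance you would check the data disk before cloning the model:
#   has_room("/root/autodl-tmp", 30)
print(has_room("/", 0))  # trivially True: every filesystem has >= 0 GB free
```

If this returns False for the target directory, pick the data disk (or clean up the system disk per the link above) before downloading.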
root@autodl-container-616f40a3b3-41cb82d9:~# nvidia-smi
Tue Sep 24 05:29:35 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.107.02 Driver Version: 550.107.02 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3090 On | 00000000:A0:00.0 Off | N/A |
| 30% 30C P8 20W / 350W | 1MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
root@autodl-container-616f40a3b3-41cb82d9:~#
Installing the Large Language Model
1. Choosing a Model
Visit the ModelScope community: https://www.modelscope.cn/home
Choose a model that supports Chinese.
Here we pick the Llama3-Chinese build:
https://www.modelscope.cn/models/seanzhang/Llama3-Chinese
2. Downloading and Installing the Model
Download it to the data disk, /root/autodl-tmp.
Change into the data disk, /root/autodl-tmp:
root@autodl-container-616f40a3b3-41cb82d9:~# cd /root/autodl-tmp
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp# pwd
/root/autodl-tmp
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp#
Enabling Download Acceleration
Command:
source /etc/network_turbo
Example:
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp# source /etc/network_turbo
Acceleration enabled
Note: for academic use only; stability is not guaranteed
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp#
Reference:
https://www.autodl.com/docs/network_turbo/
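According to the AutoDL documentation linked above, the acceleration works by exporting proxy environment variables into the current shell, which tools such as git, curl, and pip then honor; unsetting the variables cancels it. A small sketch of that mechanism in Python (the proxy URL is a made-up placeholder, and the exact variable names are an assumption based on the docs):

```python
import os

def set_proxy(url: str) -> None:
    """Mimic what sourcing the acceleration script does for the current process:
    export the proxy variables that git/curl/pip honor."""
    os.environ["http_proxy"] = url
    os.environ["https_proxy"] = url

def unset_proxy() -> None:
    """Cancel the acceleration by removing the proxy variables."""
    os.environ.pop("http_proxy", None)
    os.environ.pop("https_proxy", None)

set_proxy("http://proxy.example:12798")   # hypothetical proxy address
print(os.environ["https_proxy"])          # the exported value
unset_proxy()
print("http_proxy" in os.environ)         # False after unsetting
```

Note that variables exported by `source` only affect the current shell session; a new terminal needs the command run again.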
Installing git-lfs
Commands:
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
apt install git-lfs
Example:
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp# curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
Detected operating system as Ubuntu/bionic.
Checking for curl...
Detected curl...
Checking for gpg...
Detected gpg...
Detected apt version as 1.6.12ubuntu0.1
Running apt-get update... done.
Installing apt-transport-https... done.
Installing /etc/apt/sources.list.d/github_git-lfs.list...done.
Importing packagecloud gpg key... Packagecloud gpg key imported to /etc/apt/keyrings/github_git-lfs-archive-keyring.gpg
done.
Running apt-get update... done.
The repository is setup! You can now install packages.
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp# apt install git-lfs
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  git-lfs
0 upgraded, 1 newly installed, 0 to remove and 176 not upgraded.
Need to get 7,168 kB of archives.
After this operation, 15.6 MB of additional disk space will be used.
Get:1 https://packagecloud.io/github/git-lfs/ubuntu bionic/main amd64 git-lfs amd64 3.2.0 [7,168 kB]
Fetched 7,168 kB in 2s (3,008 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package git-lfs.
(Reading database ... 42112 files and directories currently installed.)
Preparing to unpack .../git-lfs_3.2.0_amd64.deb ...
Unpacking git-lfs (3.2.0) ...
Setting up git-lfs (3.2.0) ...
Git LFS initialized.
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp#
Downloading the Model
Download commands:
Download Llama3-Chinese (Merged Model)
From ModelScope:
git lfs install
git clone https://www.modelscope.cn/seanzhang/Llama3-Chinese.git
From HuggingFace:
git lfs install
git clone https://huggingface.co/zhichen/Llama3-Chinese
Reference:
https://www.modelscope.cn/models/seanzhang/Llama3-Chinese
Example:
Create the download directory:
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp# mkdir LLM
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp# cd LLM
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# pwd
/root/autodl-tmp/LLM
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
Download the model (about 30 GB). A first attempt, run from /root on the system disk, fails partway through with a write error, apparently because the 30 GB system disk cannot hold the weights:
root@autodl-container-a6464993b0-bb9a1a4f:~# git clone https://www.modelscope.cn/seanzhang/Llama3-Chinese.git
Cloning into 'Llama3-Chinese'...
remote: Enumerating objects: 43, done.
remote: Counting objects: 100% (43/43), done.
remote: Compressing objects: 100% (41/41), done.
remote: Total 43 (delta 11), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (43/43), done.
error: unable to write file model-00003-of-00005.safetensors
Filtering content: 100% (5/5), 11.30 GiB | 12.75 MiB/s, done.
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'
root@autodl-container-a6464993b0-bb9a1a4f:~#
The model is about 30 GB; the download takes roughly 10-20 minutes. Retrying under the data disk (/root/autodl-tmp/LLM) succeeds:
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# git clone https://www.modelscope.cn/seanzhang/Llama3-Chinese.git
Cloning into 'Llama3-Chinese'...
remote: Enumerating objects: 43, done.
remote: Counting objects: 100% (43/43), done.
remote: Compressing objects: 100% (41/41), done.
remote: Total 43 (delta 11), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (43/43), done.
Filtering content: 100% (5/5), 14.95 GiB | 76.56 MiB/s, done.
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
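The two clone logs above report transfer rates of 12.75 MiB/s and 76.56 MiB/s, and transfer time is simply size divided by rate. A quick sketch of the arithmetic behind the "10-20 minutes" estimate (the helper is illustrative, not part of the original article):

```python
def eta_minutes(size_gib: float, rate_mib_per_s: float) -> float:
    """Estimated transfer time in minutes for size_gib GiB at rate_mib_per_s MiB/s."""
    return size_gib * 1024 / rate_mib_per_s / 60

# ~30 GiB of model weights at the two rates seen in the logs above:
print(round(eta_minutes(30, 76.56)))   # ~7 minutes on the fast run
print(round(eta_minutes(30, 12.75)))   # ~40 minutes at the slower rate
```

Actual wall-clock time is somewhat longer, since git-lfs also unpacks and checks out the files after transfer.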
Check the size:
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# ls -l
total 4
drwxr-xr-x 4 root root 4096 Sep 24 05:44 Llama3-Chinese
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# du -h ./ --max-depth=1
30G ./Llama3-Chinese
30G ./
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
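The `du -h --max-depth=1` check above can be approximated in pure Python, which is handy inside a notebook. A minimal sketch (note: `du` counts disk blocks while this sums apparent file sizes, so the numbers will differ slightly):

```python
import os

def dir_size_bytes(root: str) -> int:
    """Total size of all regular files under `root`, similar to `du -sb`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):          # skip broken symlinks
                total += os.path.getsize(path)
    return total

# dir_size_bytes("/root/autodl-tmp/LLM") would report roughly the 30 GB shown above.
```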
Installing Dependencies
Command:
pip install transformers
Example:
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# pip install transformers
Looking in indexes: http://mirrors.aliyun.com/pypi/simple
Collecting transformers
  Downloading http://mirrors.aliyun.com/pypi/packages/75/35/07c9879163b603f0e464b0f6e6e628a2340cfc7cdc5ca8e7d52d776710d4/transformers-4.44.2-py3-none-any.whl (9.5 MB) |████████████████████████████████| 9.5 MB 168.6 MB/s
Requirement already satisfied: tqdm>=4.27 in /root/miniconda3/lib/python3.8/site-packages (from transformers) (4.61.2)
Requirement already satisfied: requests in /root/miniconda3/lib/python3.8/site-packages (from transformers) (2.25.1)
Collecting regex!=2019.12.17
  Downloading http://mirrors.aliyun.com/pypi/packages/75/d1/ea4e9b22e2b19463d0def76418e21316b9a8acc88ce6b764353834015ee0/regex-2024.9.11-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (785 kB) |████████████████████████████████| 785 kB 165.6 MB/s
Collecting filelock
  Downloading http://mirrors.aliyun.com/pypi/packages/b9/f8/feced7779d755758a52d1f6635d990b8d98dc0a29fa568bbe0625f18fdf3/filelock-3.16.1-py3-none-any.whl (16 kB)
Collecting pyyaml>=5.1
  Downloading http://mirrors.aliyun.com/pypi/packages/fd/7f/2c3697bba5d4aa5cc2afe81826d73dfae5f049458e44732c7a0938baa673/PyYAML-6.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (746 kB) |████████████████████████████████| 746 kB 164.5 MB/s
Collecting huggingface-hub<1.0,>=0.23.2
  Downloading http://mirrors.aliyun.com/pypi/packages/5f/f1/15dc793cb109a801346f910a6b350530f2a763a6e83b221725a0bcc1e297/huggingface_hub-0.25.1-py3-none-any.whl (436 kB) |████████████████████████████████| 436 kB 21.5 MB/s
Requirement already satisfied: numpy>=1.17 in /root/miniconda3/lib/python3.8/site-packages (from transformers) (1.21.2)
Collecting safetensors>=0.4.1
  Downloading http://mirrors.aliyun.com/pypi/packages/53/62/1d6ffba0a2bc0e6b9f5b50d421493276d9fac5ef49670d06f7b66ea73500/safetensors-0.4.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (436 kB) |████████████████████████████████| 436 kB 167.5 MB/s
Requirement already satisfied: packaging>=20.0 in /root/miniconda3/lib/python3.8/site-packages (from transformers) (21.0)
Collecting tokenizers<0.20,>=0.19
  Downloading http://mirrors.aliyun.com/pypi/packages/18/0d/ee99f50407788149bc9eddae6af0b4016865d67fb687730d151683b13b80/tokenizers-0.19.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB) |████████████████████████████████| 3.6 MB 169.9 MB/s
Collecting fsspec>=2023.5.0
  Downloading http://mirrors.aliyun.com/pypi/packages/1d/a0/6aaea0c2fbea2f89bfd5db25fb1e3481896a423002ebe4e55288907a97a3/fsspec-2024.9.0-py3-none-any.whl (179 kB) |████████████████████████████████| 179 kB 176.1 MB/s
Requirement already satisfied: typing-extensions>=3.7.4.3 in /root/miniconda3/lib/python3.8/site-packages (from huggingface-hub<1.0,>=0.23.2->transformers) (3.10.0.2)
Requirement already satisfied: pyparsing>=2.0.2 in /root/miniconda3/lib/python3.8/site-packages (from packaging>=20.0->transformers) (2.4.7)
Requirement already satisfied: certifi>=2017.4.17 in /root/miniconda3/lib/python3.8/site-packages (from requests->transformers) (2021.5.30)
Requirement already satisfied: idna<3,>=2.5 in /root/miniconda3/lib/python3.8/site-packages (from requests->transformers) (2.10)
Requirement already satisfied: chardet<5,>=3.0.2 in /root/miniconda3/lib/python3.8/site-packages (from requests->transformers) (4.0.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/miniconda3/lib/python3.8/site-packages (from requests->transformers) (1.26.6)
Installing collected packages: pyyaml, fsspec, filelock, huggingface-hub, tokenizers, safetensors, regex, transformers
Successfully installed filelock-3.16.1 fsspec-2024.9.0 huggingface-hub-0.25.1 pyyaml-6.0.2 regex-2024.9.11 safetensors-0.4.5 tokenizers-0.19.1 transformers-4.44.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
3. Testing the Model
Test Code for the Deployed Model
Test code:
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the locally downloaded model and its tokenizer
model_id = "/root/autodl-tmp/LLM/Llama3-Chinese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# A Chinese prompt asking how long Lu Xun lived, given his birth and death dates
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": """鲁迅(1881年9月25日—1936年10月19日),原名周樟寿,后改名周树人,字豫山,后改字豫才,浙江绍兴人。中国著名文学家、思想家、革命家、教育家、美术家、书法家、民主战士,新文化运动的重要参与者,中国现代文学的奠基人之一.请问鲁迅活了多少岁?"""},
]

# Render the chat messages into the model's prompt format and move them to the GPU
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# Decode only the newly generated tokens (everything after the prompt)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
Source: https://pgfans.cn/a/3605
You can also use the Inference example from the Llama3-Chinese model page.
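In the test code, `apply_chat_template` renders the message list into the model's chat prompt string before tokenization. As an illustrative sketch (not the library's actual implementation), a Llama-3-style template expands roughly like this, using the special tokens of the Llama 3 prompt format:

```python
def render_llama3_chat(messages, add_generation_prompt=True):
    """Roughly what a Llama-3-style chat template expands to, in string form."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Leave the prompt open at an assistant turn so the model continues it.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

demo = render_llama3_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "请问鲁迅活了多少岁?"},
])
print(demo)
```

The `add_generation_prompt=True` argument in the test code corresponds to the trailing open assistant header: the model's generated tokens continue from that point, which is why the code slices `outputs[0][input_ids.shape[-1]:]` to recover only the answer.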
Running the Test
Open a Notebook from the launcher page.
Paste the test code into the window.
Click Run in the Notebook.
Debugging Error 1
Running the code as-is produces the following error.
Error details:
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
/tmp/ipykernel_2751/1093059028.py in <module>
----> 1 from transformers import AutoTokenizer, AutoModelForCausalLM
      2 model_id = "/root/autodl-tmp/LLM/Llama3-Chinese"
      3 tokenizer = AutoTokenizer.from_pretrained(model_id)
      4 model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
      5 messages = [

~/miniconda3/lib/python3.8/site-packages/transformers/__init__.py in <module>
     24
     25 # Check the dependencies satisfy the minimal versions required.
---> 26 from . import dependency_versions_check
     27 from .utils import (
     28     OptionalDependencyNotAvailable,

~/miniconda3/lib/python3.8/site-packages/transformers/dependency_versions_check.py in <module>
     14
     15 from .dependency_versions_table import deps
---> 16 from .utils.versions import require_version, require_version_core
     17
     18

~/miniconda3/lib/python3.8/site-packages/transformers/utils/__init__.py in <module>
     32     replace_return_docstrings,
     33 )
---> 34 from .generic import (
     35     ContextManagers,
     36     ExplicitEnum,

~/miniconda3/lib/python3.8/site-packages/transformers/utils/generic.py in <module>
    460
    461 if is_torch_available():
--> 462     import torch.utils._pytree as _torch_pytree
    463
    464     def _model_output_flatten(output: ModelOutput) -> Tuple[List[Any], "_torch_pytree.Context"]:

ModuleNotFoundError: No module named 'torch.utils._pytree'
The error points to a missing module: transformers 4.44.2 imports torch.utils._pytree, which does not exist in the preinstalled PyTorch 1.7.0. The transformers and PyTorch versions are incompatible.
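Before importing transformers, you can check whether the interpreter can resolve the submodule that blew up here. A small sketch using the standard library's `importlib.util.find_spec` (the helper name is my own; on this instance, `importable("torch.utils._pytree")` would be the relevant check):

```python
import importlib.util

def importable(name: str) -> bool:
    """True if `name` can be resolved without actually importing it.
    find_spec raises ModuleNotFoundError when a parent package is missing,
    so that case is treated as 'not importable' too."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False

print(importable("json"))                 # True: stdlib module always present
print(importable("no_such_module_xyz"))   # False: cannot be resolved
```

Under PyTorch 1.7.0, `importable("torch.utils._pytree")` returns False, which is exactly the condition the traceback above reports.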
Debugging Error 2
Upgrade PyTorch.
Use pip to upgrade PyTorch to a newer version (here, 2.3.1):
pip install --upgrade torch
pip install --upgrade torch==<version>
Example:
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# pip install --upgrade torch==2.3.1
Looking in indexes: http://mirrors.aliyun.com/pypi/simple
Collecting torch==2.3.1
  Downloading http://mirrors.aliyun.com/pypi/packages/c0/7e/309d63c6330a0b821a6f55e06dcef6704a7ab8b707534a4923837570624e/torch-2.3.1-cp38-cp38-manylinux1_x86_64.whl (779.1 MB) |████████████████████████████████| 779.1 MB 11.8 MB/s
Collecting nvidia-nccl-cu12==2.20.5
  Downloading http://mirrors.aliyun.com/pypi/packages/4b/2a/0a131f572aa09f741c30ccd45a8e56316e8be8dfc7bc19bf0ab7cfef7b19/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB) |████████████████████████████████| 176.2 MB 5.1 MB/s
Requirement already satisfied: filelock in /root/miniconda3/lib/python3.8/site-packages (from torch==2.3.1) (3.16.1)
Collecting nvidia-cudnn-cu12==8.9.2.26
  Downloading http://mirrors.aliyun.com/pypi/packages/ff/74/a2e2be7fb83aaedec84f391f082cf765dfb635e7caa9b49065f73e4835d8/nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB) |████████████████████████████████| 731.7 MB 5.1 MB/s
Collecting nvidia-cuda-nvrtc-cu12==12.1.105
  Downloading http://mirrors.aliyun.com/pypi/packages/b6/9f/c64c03f49d6fbc56196664d05dba14e3a561038a81a638eeb47f4d4cfd48/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB) |████████████████████████████████| 23.7 MB 15.5 MB/s
Requirement already satisfied: fsspec in /root/miniconda3/lib/python3.8/site-packages (from torch==2.3.1) (2024.9.0)
Requirement already satisfied: jinja2 in /root/miniconda3/lib/python3.8/site-packages (from torch==2.3.1) (3.0.1)
Collecting nvidia-cufft-cu12==11.0.2.54
  Downloading http://mirrors.aliyun.com/pypi/packages/86/94/eb540db023ce1d162e7bea9f8f5aa781d57c65aed513c33ee9a5123ead4d/nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB) |████████████████████████████████| 121.6 MB 15.3 MB/s
Collecting sympy
  Downloading http://mirrors.aliyun.com/pypi/packages/99/ff/c87e0622b1dadea79d2fb0b25ade9ed98954c9033722eb707053d310d4f3/sympy-1.13.3-py3-none-any.whl (6.2 MB) |████████████████████████████████| 6.2 MB 5.3 MB/s
Collecting networkx
  Downloading http://mirrors.aliyun.com/pypi/packages/a8/05/9d4f9b78ead6b2661d6e8ea772e111fc4a9fbd866ad0c81906c11206b55e/networkx-3.1-py3-none-any.whl (2.1 MB) |████████████████████████████████| 2.1 MB 8.3 MB/s
Collecting nvidia-cuda-cupti-cu12==12.1.105
  Downloading http://mirrors.aliyun.com/pypi/packages/7e/00/6b218edd739ecfc60524e585ba8e6b00554dd908de2c9c66c1af3e44e18d/nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB) |████████████████████████████████| 14.1 MB 4.9 MB/s
Collecting nvidia-curand-cu12==10.3.2.106
  Downloading http://mirrors.aliyun.com/pypi/packages/44/31/4890b1c9abc496303412947fc7dcea3d14861720642b49e8ceed89636705/nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB) |████████████████████████████████| 56.5 MB 5.4 MB/s
Collecting typing-extensions>=4.8.0
  Downloading http://mirrors.aliyun.com/pypi/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting nvidia-cusolver-cu12==11.4.5.107
  Downloading http://mirrors.aliyun.com/pypi/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB) |████████████████████████████████| 124.2 MB 4.0 MB/s
Collecting nvidia-cusparse-cu12==12.1.0.106
  Downloading http://mirrors.aliyun.com/pypi/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB) |████████████████████████████████| 196.0 MB 5.6 MB/s
Collecting nvidia-cublas-cu12==12.1.3.1
  Downloading http://mirrors.aliyun.com/pypi/packages/37/6d/121efd7382d5b0284239f4ab1fc1590d86d34ed4a4a2fdb13b30ca8e5740/nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB) |████████████████████████████████| 410.6 MB 6.8 MB/s
Collecting triton==2.3.1
  Downloading http://mirrors.aliyun.com/pypi/packages/d3/55/45b3882019a8d69ad73b5b2bd1714cb2d6653b39e7376b7ac5accf745760/triton-2.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (168.0 MB) |████████████████████████████████| 168.0 MB 4.3 MB/s
Collecting nvidia-nvtx-cu12==12.1.105
  Downloading http://mirrors.aliyun.com/pypi/packages/da/d3/8057f0587683ed2fcd4dbfbdfdfa807b9160b809976099d36b8f60d08f03/nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB) |████████████████████████████████| 99 kB 9.7 MB/s
Collecting nvidia-cuda-runtime-cu12==12.1.105
  Downloading http://mirrors.aliyun.com/pypi/packages/eb/d5/c68b1d2cdfcc59e72e8a5949a37ddb22ae6cade80cd4a57a84d4c8b55472/nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB) |████████████████████████████████| 823 kB 9.3 MB/s
Collecting nvidia-nvjitlink-cu12
  Downloading http://mirrors.aliyun.com/pypi/packages/a8/48/a9775d377cb95585fb188b469387f58ba6738e268de22eae2ad4cedb2c41/nvidia_nvjitlink_cu12-12.6.68-py3-none-manylinux2014_x86_64.whl (19.7 MB) |████████████████████████████████| 19.7 MB 5.8 MB/s
Requirement already satisfied: MarkupSafe>=2.0 in /root/miniconda3/lib/python3.8/site-packages (from jinja2->torch==2.3.1) (2.0.1)
Collecting mpmath<1.4,>=1.1.0
  Downloading http://mirrors.aliyun.com/pypi/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl (536 kB) |████████████████████████████████| 536 kB 6.2 MB/s
Installing collected packages: nvidia-nvjitlink-cu12, nvidia-cusparse-cu12, nvidia-cublas-cu12, mpmath, typing-extensions, triton, sympy, nvidia-nvtx-cu12, nvidia-nccl-cu12, nvidia-cusolver-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, networkx, torch
  Attempting uninstall: typing-extensions
    Found existing installation: typing-extensions 3.10.0.2
    Uninstalling typing-extensions-3.10.0.2:
      Successfully uninstalled typing-extensions-3.10.0.2
  Attempting uninstall: torch
    Found existing installation: torch 1.7.0+cu110
    Uninstalling torch-1.7.0+cu110:
      Successfully uninstalled torch-1.7.0+cu110
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.8.1+cu110 requires torch==1.7.0, but you have torch 2.3.1 which is incompatible.
Successfully installed mpmath-1.3.0 networkx-3.1 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.6.68 nvidia-nvtx-cu12-12.1.105 sympy-1.13.3 torch-2.3.1 triton-2.3.1 typing-extensions-4.12.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# python -c "import torch; print(torch.__version__)"
2.3.1+cu121
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
Run the Notebook code again.
Error details:
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
/tmp/ipykernel_1677/1093059028.py in <module>
      2 model_id = "/root/autodl-tmp/LLM/Llama3-Chinese"
      3 tokenizer = AutoTokenizer.from_pretrained(model_id)
----> 4 model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
      5 messages = [
      6     {"role": "system", "content": "You are a helpful assistant."},

~/miniconda3/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    562         elif type(config) in cls._model_mapping.keys():
    563             model_class = _get_model_class(config, cls._model_mapping)
--> 564             return model_class.from_pretrained(
    565                 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    566             )

~/miniconda3/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
   3316             )
   3317         elif not is_accelerate_available():
-> 3318             raise ImportError(
   3319                 "Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`"
   3320             )

ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`
As the message suggests, run pip install accelerate.
Debugging Error 3
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# pip install accelerate
Looking in indexes: http://mirrors.aliyun.com/pypi/simple
Collecting accelerate
  Downloading http://mirrors.aliyun.com/pypi/packages/b0/5e/80cee674cdbe529ef008721d7eebb50ae5def4314211d82123aa23e828f8/accelerate-0.34.2-py3-none-any.whl (324 kB) |████████████████████████████████| 324 kB 11.0 MB/s
Requirement already satisfied: huggingface-hub>=0.21.0 in /root/miniconda3/lib/python3.8/site-packages (from accelerate) (0.25.1)
Requirement already satisfied: torch>=1.10.0 in /root/miniconda3/lib/python3.8/site-packages (from accelerate) (2.3.1)
Requirement already satisfied: safetensors>=0.4.3 in /root/miniconda3/lib/python3.8/site-packages (from accelerate) (0.4.5)
Requirement already satisfied: packaging>=20.0 in /root/miniconda3/lib/python3.8/site-packages (from accelerate) (21.0)
Requirement already satisfied: pyyaml in /root/miniconda3/lib/python3.8/site-packages (from accelerate) (6.0.2)
Requirement already satisfied: numpy<3.0.0,>=1.17 in /root/miniconda3/lib/python3.8/site-packages (from accelerate) (1.21.2)
Collecting psutil
  Downloading http://mirrors.aliyun.com/pypi/packages/19/74/f59e7e0d392bc1070e9a70e2f9190d652487ac115bb16e2eff6b22ad1d24/psutil-6.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (290 kB) |████████████████████████████████| 290 kB 5.8 MB/s
Requirement already satisfied: filelock in /root/miniconda3/lib/python3.8/site-packages (from huggingface-hub>=0.21.0->accelerate) (3.16.1)
Requirement already satisfied: tqdm>=4.42.1 in /root/miniconda3/lib/python3.8/site-packages (from huggingface-hub>=0.21.0->accelerate) (4.61.2)
Requirement already satisfied: requests in /root/miniconda3/lib/python3.8/site-packages (from huggingface-hub>=0.21.0->accelerate) (2.25.1)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /root/miniconda3/lib/python3.8/site-packages (from huggingface-hub>=0.21.0->accelerate) (4.12.2)
Requirement already satisfied: fsspec>=2023.5.0 in /root/miniconda3/lib/python3.8/site-packages (from huggingface-hub>=0.21.0->accelerate) (2024.9.0)
Requirement already satisfied: pyparsing>=2.0.2 in /root/miniconda3/lib/python3.8/site-packages (from packaging>=20.0->accelerate) (2.4.7)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (10.3.2.106)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (12.1.0.106)
Requirement already satisfied: triton==2.3.1 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (2.3.1)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (8.9.2.26)
Requirement already satisfied: jinja2 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (3.0.1)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (12.1.105)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (12.1.3.1)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (11.0.2.54)
Requirement already satisfied: nvidia-nccl-cu12==2.20.5 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (2.20.5)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (11.4.5.107)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (12.1.105)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (12.1.105)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (12.1.105)
Requirement already satisfied: sympy in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (1.13.3)
Requirement already satisfied: networkx in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.10.0->accelerate) (3.1)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /root/miniconda3/lib/python3.8/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch>=1.10.0->accelerate) (12.6.68)
Requirement already satisfied: MarkupSafe>=2.0 in /root/miniconda3/lib/python3.8/site-packages (from jinja2->torch>=1.10.0->accelerate) (2.0.1)
Requirement already satisfied: chardet<5,>=3.0.2 in /root/miniconda3/lib/python3.8/site-packages (from requests->huggingface-hub>=0.21.0->accelerate) (4.0.0)
Requirement already satisfied: idna<3,>=2.5 in /root/miniconda3/lib/python3.8/site-packages (from requests->huggingface-hub>=0.21.0->accelerate) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/miniconda3/lib/python3.8/site-packages (from requests->huggingface-hub>=0.21.0->accelerate) (1.26.6)
Requirement already satisfied: certifi>=2017.4.17 in /root/miniconda3/lib/python3.8/site-packages (from requests->huggingface-hub>=0.21.0->accelerate) (2021.5.30)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /root/miniconda3/lib/python3.8/site-packages (from sympy->torch>=1.10.0->accelerate) (1.3.0)
Installing collected packages: psutil, accelerate
Successfully installed accelerate-0.34.2 psutil-6.0.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
Run again.
Error details:
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
/tmp/ipykernel_1983/1093059028.py in <module>
     10     请问鲁迅活了多少岁?"""},
     11 ]
---> 12 input_ids = tokenizer.apply_chat_template(
     13     messages, add_generation_prompt=True, return_tensors="pt"
     14 ).to(model.device)

~/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in apply_chat_template(self, conversation, tools, documents, chat_template, add_generation_prompt, tokenize, padding, truncation, max_length, return_tensors, return_dict, return_assistant_tokens_mask, tokenizer_kwargs, **kwargs)
   1792
   1793         # Compilation function uses a cache to avoid recompiling the same template
-> 1794         compiled_template = self._compile_jinja_template(chat_template)
   1795
   1796         if isinstance(conversation, (list, tuple)) and (

~/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in _compile_jinja_template(self, chat_template)
   1918
   1919         if version.parse(jinja2.__version__) < version.parse("3.1.0"):
-> 1920             raise ImportError(
   1921                 "apply_chat_template requires jinja2>=3.1.0 to be installed. Your version is " f"{jinja2.__version__}."
   1922             )

ImportError: apply_chat_template requires jinja2>=3.1.0 to be installed. Your version is 3.0.1.
The error message shows that the apply_chat_template method in the installed transformers library requires jinja2 version 3.1.0 or later, while the currently installed jinja2 is 3.0.1. To resolve this, update jinja2 to at least version 3.1.0.
Update jinja2 with:
pip install --upgrade jinja2
This command checks the currently installed jinja2 and upgrades it to the latest stable release, if one is available.
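The check in the traceback compares parsed versions rather than raw strings, because naive string comparison gets versions wrong. A minimal sketch of why, for plain x.y.z version strings (this is a simplification of the `version.parse` used by transformers, with no pre-release handling):

```python
def vtuple(v: str) -> tuple:
    """Parse a plain 'x.y.z' version string into a tuple that compares numerically."""
    return tuple(int(part) for part in v.split("."))

print(vtuple("3.0.1") < vtuple("3.1.0"))   # True: the installed jinja2 is too old
print("3.10.0" < "3.9.0")                  # True(!): naive string comparison is wrong
```

This is why the fix is a version upgrade rather than a code change: once jinja2 reports 3.1.0 or later, the comparison passes and the template compiles.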
Debugging Error 4
Update the jinja2 library:
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# pip install --upgrade jinja2
Looking in indexes: http://mirrors.aliyun.com/pypi/simple
Requirement already satisfied: jinja2 in /root/miniconda3/lib/python3.8/site-packages (3.0.1)
Collecting jinja2
  Downloading http://mirrors.aliyun.com/pypi/packages/31/80/3a54838c3fb461f6fec263ebf3a3a41771bd05190238de3486aae8540c36/jinja2-3.1.4-py3-none-any.whl (133 kB) |████████████████████████████████| 133 kB 4.4 MB/s
Requirement already satisfied: MarkupSafe>=2.0 in /root/miniconda3/lib/python3.8/site-packages (from jinja2) (2.0.1)
Installing collected packages: jinja2
  Attempting uninstall: jinja2
    Found existing installation: Jinja2 3.0.1
    Uninstalling Jinja2-3.0.1:
      Successfully uninstalled Jinja2-3.0.1
Successfully installed jinja2-3.1.4
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
Successful Run
Run again.
The code now runs successfully.
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
鲁迅活了55岁。(Lu Xun lived to the age of 55.)
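The model's answer can be sanity-checked with ordinary date arithmetic from the birth and death dates given in the prompt (1881-09-25 to 1936-10-19):

```python
from datetime import date

def age_at(born: date, died: date) -> int:
    """Completed years between two dates: subtract one year if the final
    birthday had not yet occurred by the end date."""
    return died.year - born.year - ((died.month, died.day) < (born.month, born.day))

# Lu Xun's dates, as stated in the test prompt
print(age_at(date(1881, 9, 25), date(1936, 10, 19)))  # 55
```

His September birthday had already passed by October 19, so the completed age is 1936 − 1881 = 55, matching the model's answer.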