Installing git and deploying CosyVoice on Ubuntu (2)

The previous section got things partly working. This section is mainly about getting it to actually run.

1. The first error

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ python webui.py
Traceback (most recent call last):
  File "webui.py", line 17, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$

Solution:

pip install gradio


Forgot something: the dependencies should really be installed from the requirements file.

Ha.

pip install -r requirements.txt 

 

--extra-index-url https://download.pytorch.org/whl/cu118
conformer==0.3.2
deepspeed==0.14.2; sys_platform == 'linux'
diffusers==0.27.2
gdown==5.1.0
gradio==4.32.2
grpcio==1.57.0
grpcio-tools==1.57.0
huggingface-hub==0.23.5
hydra-core==1.3.2
HyperPyYAML==1.2.2
inflect==7.3.1
librosa==0.10.2
lightning==2.2.4
matplotlib==3.7.5
modelscope==1.15.0
networkx==3.1
omegaconf==2.3.0
onnx==1.16.0
onnxruntime-gpu==1.16.0; sys_platform == 'linux'
onnxruntime==1.16.0; sys_platform == 'darwin' or sys_platform == 'windows'
openai-whisper==20231117
protobuf==4.25
pydantic==2.7.0
rich==13.7.1
soundfile==0.12.1
tensorboard==2.14.0
torch==2.0.1
torchaudio==2.0.2
uvicorn==0.30.0
wget==3.2
fastapi==0.111.0
fastapi-cli==0.0.4
WeTextProcessing==1.0.3

A long wait.

Too slow, so I grabbed the wheel with a download manager instead.

Then install the downloaded file:

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ pip install /home/duyicheng/Downloads/torch-2.0.1+cu118-cp38-cp38-linux_x86_64_1.whl
Looking in indexes: https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
ERROR: torch-2.0.1+cu118-cp38-cp38-linux_x86_64_1.whl is not a supported wheel on this platform.
(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$

It errored out.
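The message means pip thinks this wheel's tags don't match the local interpreter; quite possibly the download manager appended `_1` to the filename, which turns the platform tag into `linux_x86_64_1` and makes the wheel unrecognizable. To see which tags pip will actually accept on this machine, here is a minimal sketch using the `packaging` library (it ships as a pip dependency, so it is normally importable); the exact output differs per machine:

```python
# Minimal sketch: list the wheel tags this interpreter accepts.
# Helps diagnose "not a supported wheel on this platform" errors.
from packaging.tags import sys_tags

for tag in list(sys_tags())[:10]:
    print(tag)  # e.g. cp38-cp38-manylinux_2_17_x86_64 on a setup like this one
```

If `cp38-cp38-linux_x86_64` shows up in that list but the filename ends in something else, renaming the wheel back to its original name is usually enough.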

sudo apt install nvidia-cuda-toolkit

Another long wait.

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0

 

If you're not sure, you can install online:

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu118

My CUDA is version 12.0.

So:

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu120
Looking in indexes: https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple, https://download.pytorch.org/whl/cu120
Collecting torch
  Downloading https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/a9/71/45aac46b75742e08d2d6f9fc2b612223b5e36115b8b2ed673b23c21b5387/torch-2.4.1-cp38-cp38-manylinux1_x86_64.whl (797.1 MB)
     ━━━━━━━━━━━━━━━━━━━╸━━━━━━━━━━━━━━━━━━━━ 393.5/797.1 MB 11.1 MB/s eta 0:00:37

Modified the requirements list:

--extra-index-url https://download.pytorch.org/whl/cu120
conformer==0.3.2
deepspeed==0.14.2
diffusers==0.27.2
gdown==5.1.0
gradio==4.32.2
grpcio==1.57.0
grpcio-tools==1.57.0
huggingface-hub==0.23.5
hydra-core==1.3.2
HyperPyYAML==1.2.2
inflect==7.3.1
librosa==0.10.2
lightning==2.2.4
matplotlib==3.7.5
modelscope==1.15.0
networkx==3.1
omegaconf==2.3.0
onnx==1.16.0
onnxruntime-gpu==1.16.0
openai-whisper==20231117
protobuf==4.25
pydantic==2.7.0
rich==13.7.1
soundfile==0.12.1
tensorboard==2.14.0
torch==2.0.1
torchaudio==2.0.2
uvicorn==0.30.0
wget==3.2
fastapi==0.111.0
fastapi-cli==0.0.4
WeTextProcessing==1.0.3

There is one warning, a compatibility issue related to CUDA 12.0; it looks like it can be ignored for now. I'll test first and come back to it if things don't work.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.19.1 requires torch==2.4.1, but you have torch 2.0.1 which is incompatible.
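To confirm which versions actually ended up installed after that warning, a quick check with the standard library is enough (a sketch; importlib.metadata is part of Python 3.8+):

```python
# Minimal sketch: print installed versions to compare against the
# torch/torchvision conflict pip reported above.
from importlib.metadata import version

for pkg in ("torch", "torchvision", "torchaudio"):
    try:
        print(pkg, version(pkg))
    except Exception:
        print(pkg, "not installed")
```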

Test again.

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ python webui.py
2024-11-06 17:48:34,901 - modelscope - INFO - PyTorch version 2.0.1 Found.
2024-11-06 17:48:34,902 - modelscope - INFO - Loading ast index from /home/duyicheng/.cache/modelscope/ast_indexer
2024-11-06 17:48:34,902 - modelscope - INFO - No valid ast index found from /home/duyicheng/.cache/modelscope/ast_indexer, generating ast index from prebuilt!
2024-11-06 17:48:35,021 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 87f1fcfbd39133baac40066388797472 and a total number of 980 components indexed
transformer is not installed, please install it if you want to use related modules
failed to import ttsfrd, use WeTextProcessing instead
2024-11-06 17:48:44,066 DEBUG Starting new HTTPS connection (1): www.modelscope.cn:443
2024-11-06 17:48:44,260 DEBUG https://www.modelscope.cn:443 "GET /api/v1/models/pretrained_models/CosyVoice-300M/revisions HTTP/11" 404 152
2024-11-06 17:48:44,262 - modelscope - ERROR - The request model: pretrained_models/CosyVoice-300M does not exist!
Traceback (most recent call last):
  File "webui.py", line 184, in <module>
    cosyvoice = CosyVoice(args.model_dir)
  File "/home/duyicheng/gitee/CosyVoice/cosyvoice/cli/cosyvoice.py", line 30, in __init__
    model_dir = snapshot_download(model_dir)
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/modelscope/hub/snapshot_download.py", line 94, in snapshot_download
    revision_detail = _api.get_valid_revision_detail(
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/modelscope/hub/api.py", line 499, in get_valid_revision_detail
    all_branches_detail, all_tags_detail = self.get_model_branches_and_tags_details(
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/modelscope/hub/api.py", line 579, in get_model_branches_and_tags_details
    handle_http_response(r, logger, cookies, model_id)
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/modelscope/hub/errors.py", line 117, in handle_http_response
    raise HTTPError(http_error_msg, response=response)
requests.exceptions.HTTPError: The request model: pretrained_models/CosyVoice-300M does not exist!
(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ 

The models aren't installed. Write a small .py file to download them; if modelscope isn't available, run pip install modelscope first.

# Download the models via the ModelScope SDK
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')

Run this script:

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ python downmodels.py
2024-11-07 07:49:34,382 - modelscope - INFO - PyTorch version 2.0.1 Found.
2024-11-07 07:49:34,383 - modelscope - INFO - Loading ast index from /home/duyicheng/.cache/modelscope/ast_indexer
2024-11-07 07:49:34,453 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 87f1fcfbd39133baac40066388797472 and a total number of 980 components indexed
transformer is not installed, please install it if you want to use related modules
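Downloading all five models takes a while. If the run gets interrupted, a small variant of downmodels.py (just a sketch, using the same model IDs and the same pretrained_models/<name> layout that webui.py expects) skips targets that already contain files, so a re-run doesn't revisit finished models:

```python
# Sketch: same downloads as downmodels.py, but skip already-populated targets.
import os
from modelscope import snapshot_download

MODELS = {
    'iic/CosyVoice-300M': 'pretrained_models/CosyVoice-300M',
    'iic/CosyVoice-300M-25Hz': 'pretrained_models/CosyVoice-300M-25Hz',
    'iic/CosyVoice-300M-SFT': 'pretrained_models/CosyVoice-300M-SFT',
    'iic/CosyVoice-300M-Instruct': 'pretrained_models/CosyVoice-300M-Instruct',
    'iic/CosyVoice-ttsfrd': 'pretrained_models/CosyVoice-ttsfrd',
}

for model_id, local_dir in MODELS.items():
    if os.path.isdir(local_dir) and os.listdir(local_dir):
        print(f'{local_dir} already present, skipping')
        continue
    snapshot_download(model_id, local_dir=local_dir)
    print(f'downloaded {model_id} -> {local_dir}')
```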

A long wait.

The log says transformer isn't installed? Let's look at that in a moment.

 

`transformer` refers to a deep learning model architecture first proposed by Vaswani et al. in the 2017 paper "Attention Is All You Need". It has been remarkably successful in natural language processing (NLP) and is now widely used across many domains, including but not limited to text generation, machine translation, sentiment analysis, and question answering.

### Key features
1. **Self-attention mechanism**:
   - **Parallelization**: unlike traditional recurrent neural networks (RNNs), the Transformer can be fully parallelized, which greatly speeds up training.
   - **Long-range dependencies**: self-attention lets the model capture long-range dependencies in the input sequence more effectively.

2. **Positional encoding**:
   - Because the Transformer has no built-in notion of time or order, positional encodings are added to the input embeddings to provide information about each position in the sequence.

3. **Multi-head attention**:
   - Multi-head attention lets the model attend to different information in different representation subspaces in parallel, increasing its expressive power.

4. **Feed-forward network**:
   - Each Transformer layer contains a feed-forward network that applies a non-linear transformation to its input.

### Application areas
1. **Natural language processing (NLP)**:
   - **Machine translation**: Transformer models have delivered significant gains across many language pairs.
   - **Text generation**: models such as GPT-3 and BERT produce high-quality text.
   - **Sentiment analysis**: identifying and classifying the sentiment expressed in text.
   - **Question answering**: e.g. BERT's results on the SQuAD dataset.

2. **Computer vision (CV)**:
   - **Image classification**: ViT (Vision Transformer) matches or beats traditional convolutional neural networks (CNNs).
   - **Object detection**: DETR (DEtection TRansformer) performs strongly on detection tasks.

3. **Speech processing**:
   - **Speech recognition**: the Conformer model has brought notable gains in ASR.
   - **Speech synthesis**: used to generate high-quality speech.

### Implementation libraries
- **Hugging Face Transformers**: a very popular library providing many pretrained Transformer models for a wide range of tasks and languages.
- **PyTorch** and **TensorFlow**: both frameworks provide tools for implementing Transformer models.

### Example code
Here is a simple example of text classification with Hugging Face's `transformers` library:

```python
from transformers import pipeline

# Load a pretrained text-classification model
classifier = pipeline("sentiment-analysis")

# Input text
text = "I love using transformers for NLP tasks!"

# Run sentiment analysis
result = classifier(text)

print(result)
```


### Summary
Through its self-attention mechanism and parallelizability, the Transformer performs strongly across many tasks. It has driven major breakthroughs in natural language processing and shows broad promise in computer vision and speech processing as well.

pip install transformers

 

export PYTHONPATH=third_party/Matcha-TTS on non-Windows platforms; set PYTHONPATH=third_party\Matcha-TTS on Windows. What a trap.
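If the export keeps getting forgotten between shells, a runtime alternative (just a sketch, equivalent to the environment variable, and assuming the script is launched from the CosyVoice repository root) is to prepend the path before any cosyvoice import:

```python
# Sketch: runtime equivalent of `export PYTHONPATH=third_party/Matcha-TTS`.
# Must run before importing anything from cosyvoice.
import sys
sys.path.insert(0, 'third_party/Matcha-TTS')
```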

Another error:

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ python webui.py
2024-11-07 08:34:18,477 - modelscope - INFO - PyTorch version 2.0.1 Found.
2024-11-07 08:34:18,478 - modelscope - INFO - Loading ast index from /home/duyicheng/.cache/modelscope/ast_indexer
2024-11-07 08:34:18,546 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 87f1fcfbd39133baac40066388797472 and a total number of 980 components indexed
Traceback (most recent call last):
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1778, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/transformers/modeling_utils.py", line 48, in <module>
    from .loss.loss_utils import LOSS_MAPPING
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/transformers/loss/loss_utils.py", line 19, in <module>
    from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/transformers/loss/loss_deformable_detr.py", line 4, in <module>
    from ..image_transforms import center_to_corners_format
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/transformers/image_transforms.py", line 22, in <module>
    from .image_utils import (
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/transformers/image_utils.py", line 58, in <module>
    from torchvision.transforms import InterpolationMode
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torchvision/__init__.py", line 10, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils  # usort:skip
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torchvision/_meta_registrations.py", line 4, in <module>
    import torch._custom_ops
ModuleNotFoundError: No module named 'torch._custom_ops'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "webui.py", line 25, in <module>
    from cosyvoice.cli.cosyvoice import CosyVoice
  File "/home/duyicheng/gitee/CosyVoice/cosyvoice/cli/cosyvoice.py", line 18, in <module>
    from modelscope import snapshot_download
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/modelscope/__init__.py", line 115, in <module>
    fix_transformers_upgrade()
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/modelscope/utils/automodel_utils.py", line 44, in fix_transformers_upgrade
    from transformers import PreTrainedModel
  File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1766, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1780, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
No module named 'torch._custom_ops'

Update:

pip install --upgrade torch torchvision torchaudio

Errored again.

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ python webui.py
Traceback (most recent call last):
  File "webui.py", line 19, in <module>
    import torch
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/__init__.py", line 290, in <module>
    from torch._C import *  # noqa: F403
ImportError: /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so: undefined symbol: ncclCommRegister
(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$

Reference (CSDN blog post): 【ImportError】from torch._C import * # noqa: F403; ImportError: xxx: undefined symbol: iJIT_NotifyEvent

After following the steps in that post, it errored again.

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ python webui.py
Traceback (most recent call last):
  File "webui.py", line 19, in <module>
    import torch
  File "/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/__init__.py", line 290, in <module>
    from torch._C import *  # noqa: F403
ImportError: /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so: undefined symbol: ncclCommRegister

More changes:

Based on the error you provided, `libtorch_cuda.so` is missing the `ncclCommRegister` symbol, which usually means the NCCL library is not installed correctly or its version does not match. Here are some detailed steps to resolve it:

### 1. Confirm the NCCL library is installed
First, make sure NCCL is installed. You can check with:

```bash
dpkg -l | grep libnccl
```


If it is not installed, install it with:

```bash
sudo apt-get update
sudo apt-get install libnccl2 libnccl-dev
```


### 2. Confirm the NCCL library path
Make sure the NCCL library path is set correctly. Check with:

```bash
ldconfig -p | grep nccl
```


If the path is wrong or missing, add it to `LD_LIBRARY_PATH` manually:

```bash
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
```


### 3. Reinstall PyTorch
Sometimes reinstalling PyTorch resolves dependency issues. Try uninstalling and reinstalling it:

```bash
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu120
```


### 4. Install PyTorch with Conda
If you are using Anaconda, try installing PyTorch through Conda so the dependencies stay consistent:

```bash
conda remove pytorch torchvision torchaudio
conda install pytorch torchvision torchaudio cudatoolkit=12.0 -c pytorch
```


### 5. Check the CUDA version
Make sure your CUDA version is compatible with your PyTorch version. Check it with:

```bash
nvcc --version
```


### 6. Check the `libtorch_cuda.so` file
Make sure `libtorch_cuda.so` exists and is not corrupted. Check with:

```bash
ls -l /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so
```


### 7. Check the NCCL library path
Make sure the NCCL library path is correct. Check which NCCL `libtorch_cuda.so` links against:

```bash
ldd /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so | grep nccl
```


If the `nccl` library path is wrong or missing, add it to `LD_LIBRARY_PATH` manually:

```bash
export LD_LIBRARY_PATH=/usr/local/cuda-12.0/lib64:/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
```


### 8. Recreate the virtual environment
If none of the above works, try recreating the virtual environment and reinstalling all dependencies:

```bash
conda create -n new_cosyvoice python=3.8
conda activate new_cosyvoice
conda install pytorch torchvision torchaudio cudatoolkit=12.0 -c pytorch
pip install -r requirements.txt  # assuming you have a requirements.txt file
```


### 9. Check the PyTorch installation details
If the problem persists, inspect the PyTorch installation to see whether other dependencies are missing or misconfigured:

```bash
pip show torch
```


### 10. Install NCCL manually
If none of the above helps, try downloading NCCL from NVIDIA's website and installing it following their instructions.

### 11. Check the PyTorch version
Make sure the PyTorch build you are using supports CUDA 12.0. Check the version with:

```python
import torch
print(torch.__version__)
```


If the versions don't match, install a PyTorch build that supports CUDA 12.0.

Hopefully these steps resolve the problem. If it persists, provide more error details and environment information for further diagnosis.

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ dpkg -l | grep libnccl
(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ ldconfig -p | grep nccl
(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ conda install pytorch torchvision torchaudio cudatoolkit=12.0 -c pytorch
Channels:
 - pytorch
 - defaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

  - cudatoolkit=12.0*

Current channels:

  - https://conda.anaconda.org/pytorch
  - https://repo.anaconda.com/pkgs/main
  - https://repo.anaconda.com/pkgs/r

To search for alternate channels that may provide the conda package you're
looking for, navigate to https://anaconda.org and use the search bar at the top of the page.

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$
(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ pip show torch
Name: torch
Version: 2.4.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-nccl-cu12, nvidia-nvtx-cu12, sympy, triton, typing-extensions
Required-by: conformer, deepspeed, lightning, openai-whisper, pytorch-lightning, torchaudio, torchmetrics, torchvision

Check #7 from the list above is where the issue shows up:

(cosyvoice) duyicheng@duyicheng-computer:~/gitee/CosyVoice$ ldd /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so | grep nccl
        libnccl.so.2 => /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/../../nvidia/nccl/lib/libnccl.so.2 (0x00007e4264a00000)
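Another data point worth collecting here (a diagnostic sketch, not a fix): check whether the libnccl.so.2 that ldd resolved actually exports `ncclCommRegister`. That symbol is a fairly recent NCCL API (it appeared around NCCL 2.19, if I recall correctly), so an older bundled libnccl would explain the missing symbol. The path below is copied from the ldd output above:

```python
# Sketch: does the linked libnccl.so.2 export ncclCommRegister, and what version is it?
import ctypes

NCCL_PATH = ('/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/'
             'site-packages/nvidia/nccl/lib/libnccl.so.2')

lib = ctypes.CDLL(NCCL_PATH)
try:
    lib.ncclCommRegister  # attribute lookup fails if the symbol is absent
    print('ncclCommRegister found: this NCCL is new enough for the current torch build')
except AttributeError:
    print('ncclCommRegister missing: this libnccl.so.2 is too old for the current torch build')

# NCCL also exposes its version via ncclGetVersion(int*),
# encoded roughly as major*10000 + minor*100 + patch.
ver = ctypes.c_int()
if lib.ncclGetVersion(ctypes.byref(ver)) == 0:
    print('NCCL version code:', ver.value)
```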

 

Based on the `ldd` output you provided, `libtorch_cuda.so` links against `/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/../../nvidia/nccl/lib/libnccl.so.2`. So the NCCL library path is correct, yet the `undefined symbol: ncclCommRegister` error still occurs, which is likely caused by an NCCL version mismatch or some other dependency problem.

### Troubleshooting steps

#### 1. Check the NCCL library version
Make sure the installed NCCL version is compatible with PyTorch. You can check the NCCL version with:

```bash
nccl-info -v
```


If the `nccl-info` tool is not present, you can install the NCCL packages with:

```bash
sudo apt-get install libnccl-dev
```


#### 2. Reinstall the NCCL library
Try reinstalling NCCL to make sure you have the latest version:

```bash
sudo apt-get update
sudo apt-get install --reinstall libnccl2 libnccl-dev
```


#### 3. Check PyTorch/NCCL compatibility
Make sure your PyTorch version is compatible with the NCCL version. You can install NCCL via conda with:

```bash
conda install -c nvidia nccl
```


#### 4. Check the CUDA version
Make sure your CUDA version is compatible with both PyTorch and NCCL. Check it with:

```bash
nvcc --version
```


#### 5. Reinstall PyTorch
Try reinstalling PyTorch, making sure you install a build that supports CUDA 12.0:

```bash
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu120
```


#### 6. Check environment variables
Make sure `LD_LIBRARY_PATH` includes the NCCL library path. You can check and set it with:

```bash
echo $LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/../../nvidia/nccl/lib:$LD_LIBRARY_PATH
```


#### 7. Check the PyTorch installation
Make sure nothing was missed when PyTorch was installed. You can check the PyTorch and CUDA versions from Python:

```python
import torch
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA version: {torch.version.cuda}")
print(f"Is CUDA available: {torch.cuda.is_available()}")
```


#### 8. Check the `libtorch_cuda.so` file
Make sure `libtorch_cuda.so` exists and is not corrupted:

```bash
ls -l /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so
```


#### 9. Check the NCCL library path
Make sure the NCCL library path is correct:

```bash
ldd /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so | grep nccl
```


If the `nccl` library path is wrong or missing, add it to `LD_LIBRARY_PATH` manually:

```bash
export LD_LIBRARY_PATH=/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/lib/../../nvidia/nccl/lib:$LD_LIBRARY_PATH
```


#### 10. Recreate the virtual environment
If none of the above works, try recreating the virtual environment and reinstalling all dependencies:

```bash
conda create -n new_cosyvoice python=3.8
conda activate new_cosyvoice
conda install pytorch torchvision torchaudio cudatoolkit=12.0 -c pytorch
pip install -r requirements.txt  # assuming you have a requirements.txt file
```


Hopefully these steps resolve the problem. If it persists, provide more error details and environment information for further diagnosis.

Out of ideas, so I downgraded torch instead.

pip uninstall torch torchvision torchaudio
pip install torch==2.0.1 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu120

The requirements list after the downgrade:

--extra-index-url https://download.pytorch.org/whl/cu120
conformer==0.3.2
deepspeed==0.14.2
diffusers==0.27.2
gdown==5.1.0
gradio==4.32.2
grpcio==1.57.0
grpcio-tools==1.57.0
huggingface-hub==0.23.5
hydra-core==1.3.2
HyperPyYAML==1.2.2
inflect==7.3.1
librosa==0.10.2
lightning==2.2.4
matplotlib==3.7.5
modelscope==1.15.0
networkx==3.1
omegaconf==2.3.0
onnx==1.16.0
onnxruntime-gpu==1.16.0
openai-whisper==20231117
protobuf==4.25
pydantic==2.7.0
rich==13.7.1
soundfile==0.12.1
tensorboard==2.14.0
torch==2.0.1
torchvision==0.15.2
torchaudio==2.0.2
uvicorn==0.30.0
wget==3.2
fastapi==0.111.0
fastapi-cli==0.0.4
WeTextProcessing==1.0.3
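Before running the test script, a quick sanity check that the pins took effect (a sketch; with the list above it should print 2.0.1 / 0.15.2 / 2.0.2 and, on a working CUDA setup, True):

```python
# Sketch: verify the downgraded stack imports cleanly and sees the GPU.
import torch
import torchvision
import torchaudio

print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('torchaudio:', torchaudio.__version__)
print('CUDA available:', torch.cuda.is_available())
```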

Run the test code below with python cs.py. It succeeded:

from cosyvoice.cli.cosyvoice import CosyVoice
from cosyvoice.utils.file_utils import load_wav
import torchaudio

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=True, load_onnx=False, fp16=True)
# sft usage
print(cosyvoice.list_avaliable_spks())
# change stream=True for chunk stream inference
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
    torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], 22050)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-25Hz')  # or change to pretrained_models/CosyVoice-300M for 50Hz inference
# zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], 22050)
# cross_lingual usage
prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
    torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], 22050)
# vc usage
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
    torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], 22050)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, support <laughter></laughter><strong></strong>[laughter][breath]
for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
    torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], 22050)

The audio files were generated, as shown in the figure.

Run the provided webui.py.

It prints the following:

python3 webui.py --port 8888 --model_dir pretrained_models/CosyVoice-300M
2024-11-07 10:12:11,500 - modelscope - INFO - PyTorch version 2.0.1 Found.
2024-11-07 10:12:11,500 - modelscope - INFO - Loading ast index from /home/duyicheng/.cache/modelscope/ast_indexer
2024-11-07 10:12:11,569 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 87f1fcfbd39133baac40066388797472 and a total number of 980 components indexed
/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torchvision/datapoints/__init__.py:12: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)
/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/torchvision/transforms/v2/__init__.py:54: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)
failed to import ttsfrd, use WeTextProcessing instead
/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/diffusers/models/lora.py:393: FutureWarning: `LoRACompatibleLinear` is deprecated and will be removed in version 1.0.0. Use of `LoRACompatibleLinear` is deprecated. Please switch to PEFT backend by installing PEFT: `pip install peft`.
  deprecate("LoRACompatibleLinear", "1.0.0", deprecation_message)
2024-11-07 10:12:28,980 INFO input frame rate=50
2024-11-07 10:12:34.343806583 [W:onnxruntime:, session_state.cc:1162 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-11-07 10:12:34.343849370 [W:onnxruntime:, session_state.cc:1164 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
2024-11-07 10:12:35,290 WETEXT INFO found existing fst: /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/zh_tn_tagger.fst
2024-11-07 10:12:35,290 INFO found existing fst: /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/zh_tn_tagger.fst
2024-11-07 10:12:35,290 WETEXT INFO                     /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/zh_tn_verbalizer.fst
2024-11-07 10:12:35,290 INFO                     /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/zh_tn_verbalizer.fst
2024-11-07 10:12:35,290 WETEXT INFO skip building fst for zh_normalizer ...
2024-11-07 10:12:35,290 INFO skip building fst for zh_normalizer ...
2024-11-07 10:12:36,132 WETEXT INFO found existing fst: /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/en_tn_tagger.fst
2024-11-07 10:12:36,132 INFO found existing fst: /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/en_tn_tagger.fst
2024-11-07 10:12:36,132 WETEXT INFO                     /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/en_tn_verbalizer.fst
2024-11-07 10:12:36,132 INFO                     /home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/tn/en_tn_verbalizer.fst
2024-11-07 10:12:36,132 WETEXT INFO skip building fst for en_normalizer ...
2024-11-07 10:12:36,132 INFO skip building fst for en_normalizer ...
2024-11-07 10:12:48,711 DEBUG load_ssl_context verify=True cert=None trust_env=True http2=False
2024-11-07 10:12:48,712 DEBUG load_ssl_context verify=True cert=None trust_env=True http2=False
2024-11-07 10:12:48,713 DEBUG Using selector: EpollSelector
2024-11-07 10:12:48,713 DEBUG load_verify_locations cafile='/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/certifi/cacert.pem'
2024-11-07 10:12:48,714 DEBUG load_verify_locations cafile='/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/certifi/cacert.pem'
2024-11-07 10:12:48,778 DEBUG connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2024-11-07 10:12:48,789 DEBUG connect_tcp.started host='checkip.amazonaws.com' port=443 local_address=None timeout=3 socket_options=None
/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/gradio/components/base.py:190: UserWarning: 'scale' value should be an integer. Using 0.5 will cause issues.
  warnings.warn(
/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/gradio/components/base.py:190: UserWarning: 'scale' value should be an integer. Using 0.25 will cause issues.
  warnings.warn(
/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/gradio/layouts/column.py:55: UserWarning: 'scale' value should be an integer. Using 0.25 will cause issues.
  warnings.warn(
2024-11-07 10:12:48,914 DEBUG connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x73142afe62b0>
2024-11-07 10:12:48,919 DEBUG start_tls.started ssl_context=<ssl.SSLContext object at 0x731439a92540> server_hostname='checkip.amazonaws.com' timeout=3
2024-11-07 10:12:49,026 DEBUG connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x73142af86370>
2024-11-07 10:12:49,026 DEBUG start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x73142afe6760>
2024-11-07 10:12:49,026 DEBUG start_tls.started ssl_context=<ssl.SSLContext object at 0x731439a92340> server_hostname='api.gradio.app' timeout=3
2024-11-07 10:12:49,042 DEBUG send_request_headers.started request=<Request [b'GET']>
2024-11-07 10:12:49,051 DEBUG send_request_headers.complete
2024-11-07 10:12:49,051 DEBUG send_request_body.started request=<Request [b'GET']>
2024-11-07 10:12:49,051 DEBUG send_request_body.complete
2024-11-07 10:12:49,051 DEBUG receive_response_headers.started request=<Request [b'GET']>
Running on local URL:  http://0.0.0.0:8888
2024-11-07 10:12:49,103 DEBUG load_ssl_context verify=True cert=None trust_env=True http2=False
2024-11-07 10:12:49,104 DEBUG load_verify_locations cafile='/home/duyicheng/anaconda3/envs/cosyvoice/lib/python3.8/site-packages/certifi/cacert.pem'
2024-11-07 10:12:49,131 DEBUG receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'', [(b'Date', b'Thu, 07 Nov 2024 02:12:49 GMT'), (b'Content-Type', b'text/plain;charset=UTF-8'), (b'Content-Length', b'14'), (b'Connection', b'keep-alive'), (b'Server', b'nginx'), (b'Vary', b'Origin'), (b'Vary', b'Access-Control-Request-Method'), (b'Vary', b'Access-Control-Request-Headers')])
2024-11-07 10:12:49,132 INFO HTTP Request: GET https://checkip.amazonaws.com/ "HTTP/1.1 200 "
2024-11-07 10:12:49,132 DEBUG receive_response_body.started request=<Request [b'GET']>
2024-11-07 10:12:49,132 DEBUG receive_response_body.complete
2024-11-07 10:12:49,132 DEBUG response_closed.started
2024-11-07 10:12:49,132 DEBUG response_closed.complete
2024-11-07 10:12:49,132 DEBUG close.started
2024-11-07 10:12:49,133 DEBUG close.complete
2024-11-07 10:12:49,136 DEBUG Starting new HTTPS connection (1): huggingface.co:443
2024-11-07 10:12:49,159 DEBUG connect_tcp.started host='localhost' port=8888 local_address=None timeout=None socket_options=None
2024-11-07 10:12:49,160 DEBUG connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x73142c381310>
2024-11-07 10:12:49,160 DEBUG send_request_headers.started request=<Request [b'GET']>
2024-11-07 10:12:49,160 DEBUG send_request_headers.complete
2024-11-07 10:12:49,160 DEBUG send_request_body.started request=<Request [b'GET']>
2024-11-07 10:12:49,160 DEBUG send_request_body.complete
2024-11-07 10:12:49,161 DEBUG receive_response_headers.started request=<Request [b'GET']>
2024-11-07 10:12:49,162 DEBUG receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 07 Nov 2024 02:12:49 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2024-11-07 10:12:49,163 INFO HTTP Request: GET http://localhost:8888/startup-events "HTTP/1.1 200 OK"
2024-11-07 10:12:49,163 DEBUG receive_response_body.started request=<Request [b'GET']>
2024-11-07 10:12:49,163 DEBUG receive_response_body.complete
2024-11-07 10:12:49,163 DEBUG response_closed.started
2024-11-07 10:12:49,163 DEBUG response_closed.complete
2024-11-07 10:12:49,163 DEBUG close.started
2024-11-07 10:12:49,163 DEBUG close.complete
2024-11-07 10:12:49,165 DEBUG load_ssl_context verify=False cert=None trust_env=True http2=False
2024-11-07 10:12:49,167 DEBUG connect_tcp.started host='localhost' port=8888 local_address=None timeout=3 socket_options=None
2024-11-07 10:12:49,167 DEBUG connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x73142c391430>
2024-11-07 10:12:49,167 DEBUG send_request_headers.started request=<Request [b'HEAD']>
2024-11-07 10:12:49,168 DEBUG send_request_headers.complete
2024-11-07 10:12:49,168 DEBUG send_request_body.started request=<Request [b'HEAD']>
2024-11-07 10:12:49,168 DEBUG send_request_body.complete
2024-11-07 10:12:49,168 DEBUG receive_response_headers.started request=<Request [b'HEAD']>
2024-11-07 10:12:49,204 DEBUG receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 07 Nov 2024 02:12:49 GMT'), (b'server', b'uvicorn'), (b'content-length', b'17055'), (b'content-type', b'text/html; charset=utf-8')])
2024-11-07 10:12:49,205 INFO HTTP Request: HEAD http://localhost:8888/ "HTTP/1.1 200 OK"
2024-11-07 10:12:49,205 DEBUG receive_response_body.started request=<Request [b'HEAD']>
2024-11-07 10:12:49,205 DEBUG receive_response_body.complete
2024-11-07 10:12:49,205 DEBUG response_closed.started
2024-11-07 10:12:49,206 DEBUG response_closed.complete
2024-11-07 10:12:49,206 DEBUG close.started
2024-11-07 10:12:49,206 DEBUG close.complete

To create a public link, set `share=True` in `launch()`.
2024-11-07 10:12:49,210 DEBUG Starting new HTTPS connection (1): huggingface.co:443
2024-11-07 10:12:49,489 DEBUG start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x73142afb9a30>
2024-11-07 10:12:49,490 DEBUG send_request_headers.started request=<Request [b'GET']>
2024-11-07 10:12:49,490 DEBUG send_request_headers.complete
2024-11-07 10:12:49,490 DEBUG send_request_body.started request=<Request [b'GET']>
2024-11-07 10:12:49,490 DEBUG send_request_body.complete
2024-11-07 10:12:49,490 DEBUG receive_response_headers.started request=<Request [b'GET']>
2024-11-07 10:12:49,713 DEBUG receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 07 Nov 2024 02:12:49 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2024-11-07 10:12:49,714 INFO HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2024-11-07 10:12:49,714 DEBUG receive_response_body.started request=<Request [b'GET']>
2024-11-07 10:12:49,714 DEBUG receive_response_body.complete
2024-11-07 10:12:49,714 DEBUG response_closed.started
2024-11-07 10:12:49,714 DEBUG response_closed.complete
2024-11-07 10:12:49,714 DEBUG close.started
2024-11-07 10:12:49,714 DEBUG close.complete
IMPORTANT: You are using gradio version 4.32.2, however version 4.44.1 is available, please upgrade.

A summary of that output:

Judging from the log, the application started and is running at the local URL `http://0.0.0.0:8888`. Key points from the log:

- **PyTorch version detected**: PyTorch 2.0.1 was found.
- **Model index loaded**: the model index file was loaded from cache, version 1.15.0, with 980 components indexed.
- **Warnings**:
  - The `torchvision` `datapoints` and `transforms.v2` namespaces are still in Beta and may change based on user feedback.
  - `LoRACompatibleLinear` is deprecated; the PEFT backend is recommended.
  - `gradio` `scale` values should be integers; floating-point values may cause issues.
- **ONNX Runtime warning**: some nodes were not assigned to the preferred execution provider, which may affect performance.
- **FST files**: existing FST files were found, so the build step was skipped.
- **SSL context**: SSL contexts and certificates were loaded successfully.
- **HTTP requests**: several HTTP requests succeeded, including the IP check and the Gradio version check.
- **Gradio app started**: the app launched and is serving on the local URL.

### Suggestions
1. **Handle the warnings**:
   - For the `torchvision` Beta warning, consider submitting feedback or checking the linked issues for upcoming changes.
   - For the `LoRACompatibleLinear` deprecation, install PEFT and update the code accordingly.
   - For the `gradio` `scale` warning, make sure every `scale` argument is an integer.

2. **Performance**:
   - Review the ONNX Runtime warnings; if performance suffers, try adjusting node assignment or using a different execution provider.

3. **Security**:
   - Make sure the SSL certificates and context are configured correctly to keep communication secure.

4. **Public link**:
   - To create a public link, set `share=True` in `launch()` (a sketch follows below).

If you have other specific questions or need further help, just ask.
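On the `share=True` point, a minimal gradio sketch (not CosyVoice's actual webui.py, whose launch call has its own arguments) showing where the flag goes:

```python
# Sketch: where share=True fits in a gradio app. CosyVoice's webui.py has its
# own Blocks/launch code; this only illustrates the flag.
import gradio as gr

def echo(text):
    return text

demo = gr.Interface(fn=echo, inputs='text', outputs='text')
# share=True asks gradio to create a temporary public *.gradio.live link.
demo.launch(server_name='0.0.0.0', server_port=8888, share=True)
```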

Open the web page.

There's clearly a lot left to do. This is a freshly installed system with no audio player yet, so this is where I'll stop for now. My GPU is pretty weak anyway.
