[Large Models] LLM CPU Inference with llama.cpp

  • llama.cpp
  • Installing llama.cpp
  • Memory/Disk Requirements
  • Quantization
  • Test inference
    • Download the model
    • Run the test
  • References

llama.cpp

  • Description

    The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

    • Plain C/C++ implementation without any dependencies
    • Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
    • AVX, AVX2 and AVX512 support for x86 architectures
    • 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
    • Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP)
    • Vulkan, SYCL, and (partial) OpenCL backend support
    • CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
  • Official repository
    https://github.com/ggerganov/llama.cpp

  • Supported platforms:

     Mac OS
     Linux
     Windows (via CMake)
     Docker
     FreeBSD
  • Supported models:

    • Typically finetunes of the base models below are supported as well.

    LLaMA 🦙
    LLaMA 2 🦙🦙
    Mistral 7B
    Mixtral MoE
    Falcon
    Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2
    Vigogne (French)
    Koala
    Baichuan 1 & 2 + derivations
    Aquila 1 & 2
    Starcoder models
    Refact
    Persimmon 8B
    MPT
    Bloom
    Yi models
    StableLM models
    Deepseek models
    Qwen models
    PLaMo-13B
    Phi models
    GPT-2
    Orion 14B
    InternLM2
    CodeShell
    Gemma
    Mamba
    Xverse
    Command-R

    • Multimodal models:

    LLaVA 1.5 models, LLaVA 1.6 models
    BakLLaVA
    Obsidian
    ShareGPT4V
    MobileVLM 1.7B/3B models
    Yi-VL

Installing llama.cpp

  • Download the code:
    git clone https://github.com/ggerganov/llama.cpp
  • Build
    On Linux or macOS:
    cd llama.cpp
    make

    For other build methods, see the official repository: https://github.com/ggerganov/llama.cpp
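
    Optionally, two build tweaks can help on CPU; these reflect Makefile options from roughly this era of llama.cpp, so treat them as a sketch and verify against your checkout:
    make -j$(nproc)          # parallel build across all available cores
    make LLAMA_OPENBLAS=1    # rebuild linked against OpenBLAS for faster prompt processing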

Memory/Disk Requirements

(Image in the original post: the memory/disk requirements table from the llama.cpp README.)
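
For orientation, the llama.cpp README of this period listed figures along these lines (disk space roughly equals the model file size, with RAM needs of the same order):

    Model    Original size    Quantized size (Q4_0)
    7B       13 GB            3.9 GB
    13B      24 GB            7.8 GB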

Quantization

(Image in the original post: the quantization table from the llama.cpp README, comparing file size and perplexity across quantization types.)
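
As a sketch of how such quantized GGUF files are produced (the tool names match this vintage of llama.cpp; the paths and source model are illustrative):

    # Convert a Hugging Face checkpoint to an f16 GGUF file
    python3 convert-hf-to-gguf.py /path/to/hf-model --outfile model-f16.gguf
    # Quantize to Q4_K_M, the variant used later in this post
    ./quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M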

Test inference

Download the model

For fast model downloads, see: "Downloading LLM models from Hugging Face at high speed without a VPN".
Here I download qwen/Qwen1.5-1.8B-Chat-GGUF for testing.

huggingface-cli download --resume-download qwen/Qwen1.5-1.8B-Chat-GGUF --local-dir qwen/Qwen1.5-1.8B-Chat-GGUF
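
If huggingface.co itself is slow or unreachable, a common workaround is to point the CLI at a mirror before downloading (hf-mirror.com is a community-run mirror; availability is not guaranteed):

export HF_ENDPOINT=https://hf-mirror.com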

Run the test

cd ./llama.cpp
./main -m /your/path/qwen/Qwen1.5-1.8B-Chat-GGUF/qwen1_5-1_8b-chat-q4_k_m.gguf -n 512 --color -i -cml -f ./prompts/chat-with-qwen.txt
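
The flags, briefly (as documented in main's help text around this build):

  • -m: path to the GGUF model file
  • -n 512: generate at most 512 tokens
  • --color: colorize output to distinguish the prompt from generated text
  • -i: interactive mode
  • -cml: use the ChatML prompt format, which Qwen chat models expect
  • -f: file providing the initial prompt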

To change the prompt, edit ./prompts/chat-with-qwen.txt.
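
As the startup log below shows, the stock prompt file amounts to a single system message:

You are a helpful assistant.

Replacing that line is enough to give the assistant a different system prompt.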

Model loading output:

llama.cpp# ./main -m /mnt/data/llm/Qwen1.5-1.8B-Chat-GGUF/qwen1_5-1_8b-chat-q4_k_m.gguf -n 512 --color -i -cml -f ./prompts/chat-with-qwen.txt
Log start
main: build = 2527 (ad3a0505)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1711760850
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /mnt/data/llm/Qwen1.5-1.8B-Chat-GGUF/qwen1_5-1_8b-chat-q4_k_m.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = Qwen1.5-1.8B-Chat-AWQ-fp16
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 24
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 2048
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 5504
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 16
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  10:                qwen2.use_parallel_residual bool             = true
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  14:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  15:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  16:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  18:                    tokenizer.chat_template str              = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv  19:               general.quantization_version u32              = 2
llama_model_loader: - kv  20:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q5_0:   12 tensors
llama_model_loader: - type q8_0:   12 tensors
llama_model_loader: - type q4_K:  133 tensors
llama_model_loader: - type q6_K:   13 tensors
llm_load_vocab: special tokens definition check successful ( 293/151936 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 5504
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 1B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 1.84 B
llm_load_print_meta: model size       = 1.13 GiB (5.28 BPW)
llm_load_print_meta: general.name     = Qwen1.5-1.8B-Chat-AWQ-fp16
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors:        CPU buffer size =  1155.67 MiB
...................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =    96.00 MiB
llama_new_context_with_model: KV self size  =   96.00 MiB, K (f16):   48.00 MiB, V (f16):   48.00 MiB
llama_new_context_with_model:        CPU  output buffer size =   296.75 MiB
llama_new_context_with_model:        CPU compute buffer size =   300.75 MiB
llama_new_context_with_model: graph nodes  = 868
llama_new_context_with_model: graph splits = 1

system_info: n_threads = 4 / 4 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
main: interactive mode on.
Reverse prompt: '<|im_start|>user
'
sampling:
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = 512, n_keep = 10

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

system
You are a helpful assistant.
user>
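
A quick sanity check on the log: the KV cache size follows from the metadata above. The K cache takes n_layer × n_ctx × n_embd_k_gqa × 2 bytes (f16) = 24 × 512 × 2048 × 2 B = 48 MiB, and the V cache the same, matching the reported KV self size of 96 MiB.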

Input: What's AI?

Sample output:
(Image in the original post: screenshot of the model's generated answer.)

References

  • https://github.com/ggerganov/llama.cpp
