Deploying a DeepSeek Coding Model on a Low-End Machine (Edge-Device Deployment)
Contents
- Deploying a DeepSeek Coding Model on a Low-End Machine (Edge-Device Deployment)
- Preface
- 1. Python Code
- 2. Building a Simple Front End for the API
- Summary
Preface
Many of you probably want to try deploying DeepSeek yourselves. The deployment itself is not hard; setting up the environment is the tricky part. If you don't want to run on a GPU, you can copy the code below and run it pretty much anywhere. For the environment, I used Docker; see my earlier post "DeepSeek部署WSL版", which only sets up an environment for DeepSeek. With no cloud resources, there is no way I could deploy the full-size model myself. Give me a few servers, though, and I could: my distributed-systems skills are solid, and I'm good at squeezing the most out of limited resources.
1. Python Code
The code below simply exposes the API locally on port 6708; send requests to 127.0.0.1:6708/chat. Note that this endpoint is not written very rigorously, and the model's config files and weights must be downloaded first. If you want per-user conversation records, you will need to hook up a database; otherwise it gets messy. I'd also suggest taking a 1.3B model and fine-tuning it on your own text, which is enough for some small business use cases.
import torch
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model (weights are downloaded on first run)
tokenizer = AutoTokenizer.from_pretrained(
    "deepseek-ai/deepseek-coder-1.3b-instruct", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-1.3b-instruct",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).cuda()
model = model.eval()

# Shared conversation history (all clients append to this one list)
messages = []

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],       # allow all origins
    allow_credentials=True,
    allow_methods=["*"],       # allow all HTTP methods
    allow_headers=["*"],       # allow all request headers
)

@app.post("/chat")
async def chat(user_input: str):
    messages.append({"role": "user", "content": user_input})
    try:
        # Build the prompt from the chat history and run generation
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        # tokenizer.eos_token_id is the id of the <|EOT|> token
        outputs = model.generate(
            inputs,
            max_new_tokens=2048,
            do_sample=False,
            top_k=50,
            top_p=0.95,
            num_return_sequences=1,
            eos_token_id=tokenizer.eos_token_id,
        )
        response = tokenizer.decode(
            outputs[0][len(inputs[0]):], skip_special_tokens=True
        )
        print(messages)
        return {"response": response}
    except Exception as e:
        return {"error": f"Failed to generate response: {str(e)}"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=6708)
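The endpoint above keeps a single global `messages` list, so every client shares one conversation. As noted, proper per-user records call for a database, but the idea can be sketched with an in-memory store first. The `session_id` key and the helper functions here are my own additions for illustration, not part of the original API:

```python
from collections import defaultdict

# One history list per session id; a real deployment would persist
# this in a database instead of process memory.
histories = defaultdict(list)

def get_history(session_id: str) -> list:
    """Return the mutable chat history for one session."""
    return histories[session_id]

def add_user_message(session_id: str, content: str) -> list:
    """Append a user turn and return that session's full history."""
    history = get_history(session_id)
    history.append({"role": "user", "content": content})
    return history

# Two sessions stay independent:
a = add_user_message("alice", "write a quicksort")
b = add_user_message("bob", "explain decorators")
print(len(a), len(b))
```

In the FastAPI handler you would accept a `session_id` query parameter and pass `get_history(session_id)` to `tokenizer.apply_chat_template` instead of the global list.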
2. Building a Simple Front End for the API
A front-end friend wrote this code; it's loosely cribbed from KIMI's UI (if this infringes anything, please let me know). Swap in your own images; running the code as-is will throw errors because the image files are missing.
<!DOCTYPE html>
<html lang="zh-CN">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" href="./images/icon.png" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>你好 - chat</title>
    <link rel="stylesheet" href="./index.css" />
    <script src="./axios/axios.js"></script>
  </head>
  <body>
    <main class="main">
      <div class="main-tip">
        <div class="main-tip-txt">问候~</div>
      </div>
      <div class="main-list" id="message-list">
        <!-- messages are rendered here dynamically -->
      </div>
      <div class="main-bottom">
        <input
          type="text"
          id="input-value"
          class="rounded-lg bg-gray text-black"
          placeholder="请输入内容..."
        />
        <button id="send-button" class="rounded-lg">
          <img src="./images/send.png" alt="" />
        </button>
      </div>
    </main>
    <script>
      // grab the message-list container
      const messageList = document.getElementById("message-list");
      // grab the input element
      const input = document.getElementById("input-value");
      // grab the send button
      const sendButton = document.getElementById("send-button");
      // initial message list
      const messages = [
        {
          isUser: false,
          text: "Hi,很高兴遇见你!你可以随时把网址🔗或者文件📃发给我,我来帮你看看",
          avatar: "./images/avatar_share.png", // AI avatar
        },
      ];
      let inputValue;
      // send a message to the backend
      const sendMessage = async () => {
        inputValue = input.value.trim();
        if (inputValue === "") return; // don't send empty input
        // record the user message
        messages.push({
          isUser: true,
          text: inputValue,
          avatar: "./images/user.png", // user avatar
        });
        // build the message element
        const messageDiv = document.createElement("div");
        messageDiv.className = "flex justify-end";
        messageDiv.id = `message-${inputValue}`;
        messageDiv.style.minHeight = "48px";
        // avatar
        const avatarImg = document.createElement("img");
        avatarImg.src = "./images/user.png";
        avatarImg.alt = "User";
        avatarImg.className = "avatar avatar-right";
        const textDiv = document.createElement("div");
        textDiv.className = "rounded-lg bg-blue text-white";
        textDiv.textContent = inputValue;
        messageDiv.appendChild(textDiv);
        messageDiv.appendChild(avatarImg);
        // append to the message list
        messageList.appendChild(messageDiv);
        const response = await axios.post(
          `http://127.0.0.1:6708/chat?user_input=${inputValue}`
        );
        if (response.status === 200) {
          // record the AI reply
          messages.push({
            isUser: false,
            text: response.data.response,
            avatar: "./images/avatar_share.png", // AI avatar
          });
          // build the message element
          const messageDiv = document.createElement("div");
          messageDiv.className = "flex justify-start";
          messageDiv.id = `message-${response.data.response}`;
          messageDiv.style.minHeight = "48px";
          // avatar
          const avatarImg = document.createElement("img");
          avatarImg.src = "./images/avatar_share.png";
          avatarImg.alt = "AI";
          avatarImg.className = "avatar";
          const textDiv = document.createElement("div");
          textDiv.className = "rounded-lg bg-gray text-black";
          textDiv.textContent = response.data.response;
          messageDiv.appendChild(avatarImg);
          messageDiv.appendChild(textDiv);
          // append to the message list
          messageList.appendChild(messageDiv);
        }
        // clear the input box
        input.value = "";
        inputValue = "";
        // scroll to the bottom of the list
        messageList.scrollTop = messageList.scrollHeight;
      };
      // wire up events
      sendButton.addEventListener("click", sendMessage);
      input.addEventListener("keyup", (event) => {
        if (event.key === "Enter") {
          sendMessage();
        }
      });
      // initial render
      messageList.innerHTML = "";
      messages.forEach((msg, index) => {
        const messageDiv = document.createElement("div");
        messageDiv.className = `flex ${msg.isUser ? "justify-end" : "justify-start"}`;
        messageDiv.id = `message-${index}`;
        messageDiv.style.minHeight = "48px";
        const avatarImg = document.createElement("img");
        avatarImg.src = msg.avatar;
        avatarImg.alt = msg.isUser ? "User" : "AI";
        avatarImg.className = `avatar ${msg.isUser ? "avatar-right" : ""}`;
        const textDiv = document.createElement("div");
        textDiv.className = `rounded-lg ${msg.isUser ? "bg-blue text-white" : "bg-gray text-black"}`;
        textDiv.textContent = msg.text;
        // put avatar and text in the right order for each sender
        if (msg.isUser) {
          messageDiv.appendChild(textDiv);
          messageDiv.appendChild(avatarImg);
        } else {
          messageDiv.appendChild(avatarImg);
          messageDiv.appendChild(textDiv);
        }
        messageList.appendChild(messageDiv);
      });
    </script>
  </body>
</html>
Usage log (screenshot)
Demo result (screenshot)
Summary
Working through this deployment, I got a feel for what DeepSeek Coder 1.3B can actually do. For anything above 7B, don't bother unless you have at least 32 GB of VRAM; my card only has 16 GB, though it can still host a 6B model. A model of this size can just about handle simple programming tasks, but don't expect more.
I'll cover fine-tuning in a later post. The 1.3B model needs about 3.6 GB of VRAM (I've measured this myself); to deploy a much stronger model, you'd want around 80 GB of GPU VRAM and plenty of CPU RAM as well.
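The 3.6 GB figure squares with a back-of-the-envelope estimate: bfloat16 weights take 2 bytes per parameter, plus extra for the KV cache and CUDA context. A rough calculator (the 30% overhead factor is my own assumption, not a measured constant):

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: int = 2,
                     overhead: float = 0.3) -> float:
    """Rough VRAM estimate: weight memory scaled up for runtime overhead."""
    weights_gb = params_billion * 1e9 * bytes_per_param / 1024**3
    return weights_gb * (1 + overhead)

print(round(estimate_vram_gb(1.3), 1))  # 1.3B in bf16: roughly 3 GB
print(round(estimate_vram_gb(7.0), 1))  # 7B in bf16: already near 17 GB
```

This is why a 7B model is a tight fit on a 16 GB card even before any fine-tuning, which multiplies the requirement further with gradients and optimizer states.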