Building a Desktop Facial Motion Capture App with react-webcam, three.js and Electron

Series Contents

  1. React: loading a glTF 3D model with three.js | three.js primer
  2. React + three.js: binding a 3D model's skeleton
  3. React + three.js: controlling a 3D model's facial expressions
  4. React + three.js: syncing facial motion capture to a 3D model's expressions
  5. Building a desktop facial motion capture app with react-webcam, three.js and Electron

Example project (GitHub): https://github.com/couchette/simple-facial-expression-sync-desktop-app-demo

Table of Contents

  • Series Contents
  • Preface
  • I. Preparation
    • 1. Create the project
    • 2. Install dependencies
    • 3. Local asset configuration
      • Download the AI model
      • Download the model configuration files
      • Copy the transcoder files
      • Download the 3D model
      • Configure the CSP to allow wasm
      • Update the webpack config
  • II. Implementation
    • 1. Create the detector
    • 2. Implement the WebCam component
    • 3. Create the 3D render container
    • 4. Compose the three into a page
  • III. Final result
  • Summary


Preface

Building a desktop-grade facial motion capture application is a challenging but rewarding task. This article shows how to combine react-webcam, three.js and Electron to build a capable desktop facial motion capture app with relatively little effort.

We will walk through how these three technologies fit together, from setting up the environment to implementing face tracking and rendering, and end with a fully working desktop app. Whether you want to understand how these pieces work under the hood or just want to build your own motion capture app quickly, this article aims to give you complete, hands-on guidance.


I. Preparation

1. Create the project

Following the earlier article on implementing auto-updates with GitHub and the electron-react-boilerplate cross-platform template, create the project using electron-react-boilerplate as the template. Here we use the fork below as the template project; it adds an Electron setup script (for configuring mirror addresses) and some homemade components. If you are up for it, you can also build a template project of your own.

https://github.com/couchette/electron-react-boilerplate

2. Install dependencies

npm i
npm i react-webcam antd three @mediapipe/tasks-vision
npm i -D copy-webpack-plugin

3. Local asset configuration

Download the AI model

Download the model from the address below into the assets/ai_models directory:

https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task

Download the model configuration files

Download the wasm directory from the address below into the assets/fileset_resolver directory:

https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.0/wasm

Copy the transcoder files

Copy the node_modules/three/examples/jsm/libs/basis directory into the assets directory.

Download the 3D model

Download the 3D face model from the address below into the assets/models directory:

https://github.com/mrdoob/three.js/blob/master/examples/models/gltf/facecap.glb
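
After the four download/copy steps above, the assets directory should look roughly like this (a sketch; the exact wasm file names depend on the version you downloaded):

assets/
├── ai_models/face_landmarker.task
├── fileset_resolver/wasm/...
├── basis/                  (copied from node_modules/three/examples/jsm/libs/basis)
└── models/facecap.glb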

Configure the CSP to allow wasm

Modify src/renderer/index.ejs as follows. (This will trigger a CSP security warning; I'm currently not aware of another way to use wasm inside electron-react-boilerplate — if you know how to configure this properly, please tell me in the comments, many thanks.)

<!doctype html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta
      http-equiv="Content-Security-Policy"
      content="script-src 'self' 'unsafe-inline' 'unsafe-eval'; worker-src 'self' blob:;"
    />
    <title>emberface</title>
  </head>
  <body>
    <div id="root"></div>
  </body>
</html>
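
A possibly stricter alternative (untested here, and dependent on the Chromium version bundled with your Electron build): Chromium also understands the 'wasm-unsafe-eval' CSP keyword, which allows WebAssembly compilation without enabling full 'unsafe-eval' for JavaScript. If your Electron is recent enough, a policy like this may be worth trying:

<meta
  http-equiv="Content-Security-Policy"
  content="script-src 'self' 'unsafe-inline' 'wasm-unsafe-eval'; worker-src 'self' blob:;"
/>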

Update the webpack config

Modify the .erb/configs/webpack.config.base.ts file as follows:

/**
 * Base webpack config used across other specific configs
 */

import path from 'path';
import webpack from 'webpack';
import TsconfigPathsPlugins from 'tsconfig-paths-webpack-plugin';
import CopyPlugin from 'copy-webpack-plugin';
import webpackPaths from './webpack.paths';
import { dependencies as externals } from '../../release/app/package.json';

const configuration: webpack.Configuration = {
  externals: [...Object.keys(externals || {})],

  stats: 'errors-only',

  module: {
    rules: [
      {
        test: /\.[jt]sx?$/,
        exclude: /node_modules/,
        use: {
          loader: 'ts-loader',
          options: {
            // Remove this line to enable type checking in webpack builds
            transpileOnly: true,
            compilerOptions: {
              module: 'esnext',
            },
          },
        },
      },
    ],
  },

  output: {
    path: webpackPaths.srcPath,
    // https://github.com/webpack/webpack/issues/1114
    library: {
      type: 'commonjs2',
    },
  },

  /**
   * Determine the array of extensions that should be used to resolve modules.
   */
  resolve: {
    extensions: ['.js', '.jsx', '.json', '.ts', '.tsx'],
    modules: [webpackPaths.srcPath, 'node_modules'],
    // There is no need to add aliases here, the paths in tsconfig get mirrored
    plugins: [new TsconfigPathsPlugins()],
  },

  plugins: [
    // Copy the local AI model, wasm fileset, basis transcoder and 3D model
    // into the renderer output so they can be fetched at runtime
    new CopyPlugin({
      patterns: [
        {
          from: path.join(webpackPaths.rootPath, 'assets', 'models'),
          to: path.join(webpackPaths.distRendererPath, 'models'),
        },
        {
          from: path.join(webpackPaths.rootPath, 'assets', 'ai_models'),
          to: path.join(webpackPaths.distRendererPath, 'ai_models'),
        },
        {
          from: path.join(webpackPaths.rootPath, 'assets', 'basis'),
          to: path.join(webpackPaths.distRendererPath, 'basis'),
        },
        {
          from: path.join(webpackPaths.rootPath, 'assets', 'fileset_resolver'),
          to: path.join(webpackPaths.distRendererPath, 'fileset_resolver'),
        },
      ],
    }),

    new webpack.EnvironmentPlugin({
      NODE_ENV: 'production',
    }),
  ],
};

export default configuration;
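
With this configuration in place, the copied assets are served as URLs relative to the renderer bundle, which is exactly how the components below reference them. A quick sanity check (hypothetical; paste into the renderer's devtools console after a build):

fetch('ai_models/face_landmarker.task').then((res) =>
  console.log('AI model reachable:', res.ok),
);
fetch('models/facecap.glb').then((res) =>
  console.log('3D model reachable:', res.ok),
);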

II. Implementation

1. Create the detector

Create a Detector singleton class in src/renderer/components/Detector.js that wraps model initialization and inference:

import { FaceLandmarker, FilesetResolver } from '@mediapipe/tasks-vision';

export default class Detector {
  constructor() {}

  // Static accessor for the singleton instance
  static async getInstance() {
    if (!Detector.instance) {
      Detector.instance = new Detector();
      Detector.instance.results = null;
      Detector.instance.video = null;
      const filesetResolver = await FilesetResolver.forVisionTasks(
        // 'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.0/wasm',
        'fileset_resolver/wasm',
      );
      Detector.instance.faceLandmarker = await FaceLandmarker.createFromOptions(
        filesetResolver,
        {
          baseOptions: {
            modelAssetPath:
              // 'https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task',
              'ai_models/face_landmarker.task',
            delegate: 'GPU',
          },
          outputFaceBlendshapes: true,
          outputFacialTransformationMatrixes: true,
          runningMode: 'VIDEO',
          numFaces: 1,
        },
      );
    }
    return Detector.instance;
  }

  bindVideo(video) {
    this.video = video;
  }

  detect() {
    return this.faceLandmarker.detectForVideo(this.video, Date.now());
  }

  async detectLoop() {
    this.results = this.detect();
    // Yield back to the event loop before the next iteration; recursing
    // synchronously here would grow the call stack without bound
    await new Promise((resolve) => setTimeout(resolve, 0));
    this.detectLoop();
  }

  async run() {
    if (!this.video.srcObject) throw new Error('Video bind is null');
    this.detectLoop();
  }
}
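
A minimal usage sketch (assuming a playing video element; videoEl here is hypothetical — in this app it comes from the WebCam component below):

const detector = await Detector.getInstance(); // loads the wasm fileset + model on first call
detector.bindVideo(videoEl); // videoEl: a playing HTMLVideoElement (hypothetical)
const results = detector.detect(); // synchronous detection for the current frame
console.log(results.faceBlendshapes, results.facialTransformationMatrixes);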

2. Implement the WebCam component

Implement the webcam functionality with react-webcam in src/renderer/components/WebCam.js:

import React, { useState, useRef, useEffect } from 'react';
import { Button, Select, Layout } from 'antd';
import Webcam from 'react-webcam';

const { Option } = Select;
const { Content } = Layout;

function WebCam({
  isShowCanvas,
  canvasHeight,
  canvasWidth,
  stream,
  setStream,
  videoRef,
  loaded,
  setLoaded,
}) {
  const [isStartCamButtonLoading, setIsStartCamButtonLoading] = useState(false);
  const [selectedDevice, setSelectedDevice] = useState(null);
  const [devices, setDevices] = useState([]);

  const handleVideoLoad = (videoNode) => {
    const video = videoNode.target;
    videoRef.current = videoNode.target;
    if (video.readyState !== 4) return;
    if (loaded) return;
    setLoaded(true);
    setIsStartCamButtonLoading(false);
    console.log('loaded');
  };

  // Enumerate the available video input devices
  const getVideoDevices = async () => {
    const devicesInfo = await navigator.mediaDevices.enumerateDevices();
    const videoDevices = devicesInfo.filter(
      (device) => device.kind === 'videoinput',
    );
    setDevices(videoDevices);
  };

  // Switch the active camera device
  const handleDeviceChange = (deviceId) => {
    setSelectedDevice(deviceId);
  };

  // Start the camera
  const startCamera = async () => {
    setIsStartCamButtonLoading(true);
    if (!loaded) {
      const constraints = {
        video: {
          deviceId: selectedDevice ? { exact: selectedDevice } : undefined,
        },
      };
      const tempStream = await navigator.mediaDevices.getUserMedia(constraints);
      setStream(tempStream);
    }
  };

  // Stop the camera
  const stopCamera = () => {
    if (stream) {
      stream.getTracks().forEach((track) => track.stop());
      setStream(null);
      setSelectedDevice(null);
      setLoaded(false);
    }
  };

  // Fetch the device list once on mount
  React.useEffect(() => {
    getVideoDevices();
  }, []);

  return (
    <Content
      style={{
        display: 'flex',
        flexDirection: 'column',
        alignItems: 'center',
        justifyContent: 'center',
      }}
    >
      <Select
        placeholder="Select a camera"
        style={{ width: '150px', marginBottom: 16 }}
        onChange={handleDeviceChange}
      >
        {devices.map((device) => (
          <Option key={device.deviceId} value={device.deviceId}>
            {device.label || `Camera ${device.deviceId}`}
          </Option>
        ))}
      </Select>
      <div
        style={{
          display: 'flex',
          flexDirection: 'row',
          alignContent: 'center',
          justifyContent: 'center',
        }}
      >
        <Button onClick={startCamera} loading={isStartCamButtonLoading}>
          Start camera
        </Button>
        <Button onClick={stopCamera} style={{ marginLeft: 8 }}>
          Stop camera
        </Button>
      </div>
      <div
        style={{
          height: `${canvasHeight}px`,
          width: `${canvasWidth}px`,
          margin: '10px',
          position: 'relative',
        }}
      >
        {stream && (
          <Webcam
            style={{
              visibility: isShowCanvas ? 'visible' : 'hidden',
              position: 'absolute',
              top: '0',
              bottom: '0',
              left: '0',
              right: '0',
            }}
            width={canvasWidth}
            height={canvasHeight}
            videoConstraints={{
              deviceId: selectedDevice ? { exact: selectedDevice } : undefined,
            }}
            onLoadedData={handleVideoLoad}
          />
        )}
      </div>
    </Content>
  );
}

export default WebCam;
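
The component is fully controlled by its parent; its props contract, summarized (a sketch — the prop names must match exactly, including loaded/setLoaded):

<WebCam
  isShowCanvas={false}  // keep producing frames but hide the preview
  canvasHeight={480}
  canvasWidth={640}
  stream={stream}       // MediaStream state lifted into the parent
  setStream={setStream}
  videoRef={videoRef}   // receives the underlying <video> element
  loaded={loaded}       // flips to true once video.readyState === 4
  setLoaded={setLoaded}
/>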

3. Create the 3D render container

Create the 3D model rendering component in src/renderer/components/ThreeContainer.js:

import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { RoomEnvironment } from 'three/addons/environments/RoomEnvironment.js';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { KTX2Loader } from 'three/addons/loaders/KTX2Loader.js';
import { MeshoptDecoder } from 'three/addons/libs/meshopt_decoder.module.js';
import { GUI } from 'three/addons/libs/lil-gui.module.min.js';
import { useRef, useEffect, useState } from 'react';

// Mediapipe blendshape names -> facecap.glb morph target names
const blendshapesMap = {
  // '_neutral': '',
  browDownLeft: 'browDown_L',
  browDownRight: 'browDown_R',
  browInnerUp: 'browInnerUp',
  browOuterUpLeft: 'browOuterUp_L',
  browOuterUpRight: 'browOuterUp_R',
  cheekPuff: 'cheekPuff',
  cheekSquintLeft: 'cheekSquint_L',
  cheekSquintRight: 'cheekSquint_R',
  eyeBlinkLeft: 'eyeBlink_L',
  eyeBlinkRight: 'eyeBlink_R',
  eyeLookDownLeft: 'eyeLookDown_L',
  eyeLookDownRight: 'eyeLookDown_R',
  eyeLookInLeft: 'eyeLookIn_L',
  eyeLookInRight: 'eyeLookIn_R',
  eyeLookOutLeft: 'eyeLookOut_L',
  eyeLookOutRight: 'eyeLookOut_R',
  eyeLookUpLeft: 'eyeLookUp_L',
  eyeLookUpRight: 'eyeLookUp_R',
  eyeSquintLeft: 'eyeSquint_L',
  eyeSquintRight: 'eyeSquint_R',
  eyeWideLeft: 'eyeWide_L',
  eyeWideRight: 'eyeWide_R',
  jawForward: 'jawForward',
  jawLeft: 'jawLeft',
  jawOpen: 'jawOpen',
  jawRight: 'jawRight',
  mouthClose: 'mouthClose',
  mouthDimpleLeft: 'mouthDimple_L',
  mouthDimpleRight: 'mouthDimple_R',
  mouthFrownLeft: 'mouthFrown_L',
  mouthFrownRight: 'mouthFrown_R',
  mouthFunnel: 'mouthFunnel',
  mouthLeft: 'mouthLeft',
  mouthLowerDownLeft: 'mouthLowerDown_L',
  mouthLowerDownRight: 'mouthLowerDown_R',
  mouthPressLeft: 'mouthPress_L',
  mouthPressRight: 'mouthPress_R',
  mouthPucker: 'mouthPucker',
  mouthRight: 'mouthRight',
  mouthRollLower: 'mouthRollLower',
  mouthRollUpper: 'mouthRollUpper',
  mouthShrugLower: 'mouthShrugLower',
  mouthShrugUpper: 'mouthShrugUpper',
  mouthSmileLeft: 'mouthSmile_L',
  mouthSmileRight: 'mouthSmile_R',
  mouthStretchLeft: 'mouthStretch_L',
  mouthStretchRight: 'mouthStretch_R',
  mouthUpperUpLeft: 'mouthUpperUp_L',
  mouthUpperUpRight: 'mouthUpperUp_R',
  noseSneerLeft: 'noseSneer_L',
  noseSneerRight: 'noseSneer_R',
  // '': 'tongueOut'
};

function ThreeContainer({ videoHeight, videoWidth, detectorRef }) {
  const containerRef = useRef(null);
  const isContainerRunning = useRef(false);

  useEffect(() => {
    if (!isContainerRunning.current && containerRef.current) {
      isContainerRunning.current = true;
      init();
    }

    async function init() {
      const renderer = new THREE.WebGLRenderer({ antialias: true });
      renderer.setPixelRatio(window.devicePixelRatio);
      renderer.setSize(window.innerWidth, window.innerHeight);
      renderer.toneMapping = THREE.ACESFilmicToneMapping;
      const element = renderer.domElement;
      containerRef.current.appendChild(element);

      const camera = new THREE.PerspectiveCamera(
        60,
        window.innerWidth / window.innerHeight,
        1,
        100,
      );
      camera.position.z = 5;

      const scene = new THREE.Scene();
      scene.scale.x = -1;

      const environment = new RoomEnvironment(renderer);
      const pmremGenerator = new THREE.PMREMGenerator(renderer);
      scene.background = new THREE.Color(0x666666);
      scene.environment = pmremGenerator.fromScene(environment).texture;

      // const controls = new OrbitControls(camera, renderer.domElement);

      // Face
      let face, eyeL, eyeR;
      const eyeRotationLimit = THREE.MathUtils.degToRad(30);

      const ktx2Loader = new KTX2Loader()
        .setTranscoderPath('/basis/')
        .detectSupport(renderer);

      new GLTFLoader()
        .setKTX2Loader(ktx2Loader)
        .setMeshoptDecoder(MeshoptDecoder)
        .load('models/facecap.glb', (gltf) => {
          const mesh = gltf.scene.children[0];
          scene.add(mesh);

          const head = mesh.getObjectByName('mesh_2');
          head.material = new THREE.MeshNormalMaterial();

          face = mesh.getObjectByName('mesh_2');
          eyeL = mesh.getObjectByName('eyeLeft');
          eyeR = mesh.getObjectByName('eyeRight');

          // GUI
          const gui = new GUI();
          gui.close();

          const influences = head.morphTargetInfluences;
          for (const [key, value] of Object.entries(
            head.morphTargetDictionary,
          )) {
            gui
              .add(influences, value, 0, 1, 0.01)
              .name(key.replace('blendShape1.', ''))
              .listen(influences);
          }

          renderer.setAnimationLoop(() => {
            animation();
          });
        });

      // const texture = new THREE.VideoTexture(video);
      // texture.colorSpace = THREE.SRGBColorSpace;

      const geometry = new THREE.PlaneGeometry(1, 1);
      const material = new THREE.MeshBasicMaterial({
        // map: texture,
        depthWrite: false,
      });
      const videomesh = new THREE.Mesh(geometry, material);
      scene.add(videomesh);

      const transform = new THREE.Object3D();

      function animation() {
        if (detectorRef.current !== null) {
          // console.log(detectorRef.current);
          const results = detectorRef.current.detect();

          if (results.facialTransformationMatrixes.length > 0) {
            const facialTransformationMatrixes =
              results.facialTransformationMatrixes[0].data;

            transform.matrix.fromArray(facialTransformationMatrixes);
            transform.matrix.decompose(
              transform.position,
              transform.quaternion,
              transform.scale,
            );

            const object = scene.getObjectByName('grp_transform');

            object.position.x = transform.position.x;
            object.position.y = transform.position.z + 40;
            object.position.z = -transform.position.y;

            object.rotation.x = transform.rotation.x;
            object.rotation.y = transform.rotation.z;
            object.rotation.z = -transform.rotation.y;
          }

          if (results.faceBlendshapes.length > 0) {
            const faceBlendshapes = results.faceBlendshapes[0].categories;

            // Morph values do not exist on the eye meshes, so we map the
            // eye blendshape scores into rotation values
            const eyeScore = {
              leftHorizontal: 0,
              rightHorizontal: 0,
              leftVertical: 0,
              rightVertical: 0,
            };

            for (const blendshape of faceBlendshapes) {
              const categoryName = blendshape.categoryName;
              const score = blendshape.score;

              const index =
                face.morphTargetDictionary[blendshapesMap[categoryName]];

              if (index !== undefined) {
                face.morphTargetInfluences[index] = score;
              }

              // There are two blendshapes for movement on each axis (up/down, in/out).
              // Add one and subtract the other to get a final score in the -1 to 1 range.
              switch (categoryName) {
                case 'eyeLookInLeft':
                  eyeScore.leftHorizontal += score;
                  break;
                case 'eyeLookOutLeft':
                  eyeScore.leftHorizontal -= score;
                  break;
                case 'eyeLookInRight':
                  eyeScore.rightHorizontal -= score;
                  break;
                case 'eyeLookOutRight':
                  eyeScore.rightHorizontal += score;
                  break;
                case 'eyeLookUpLeft':
                  eyeScore.leftVertical -= score;
                  break;
                case 'eyeLookDownLeft':
                  eyeScore.leftVertical += score;
                  break;
                case 'eyeLookUpRight':
                  eyeScore.rightVertical -= score;
                  break;
                case 'eyeLookDownRight':
                  eyeScore.rightVertical += score;
                  break;
              }
            }

            eyeL.rotation.z = eyeScore.leftHorizontal * eyeRotationLimit;
            eyeR.rotation.z = eyeScore.rightHorizontal * eyeRotationLimit;
            eyeL.rotation.x = eyeScore.leftVertical * eyeRotationLimit;
            eyeR.rotation.x = eyeScore.rightVertical * eyeRotationLimit;
          }
        }

        videomesh.scale.x = videoWidth / 100;
        videomesh.scale.y = videoHeight / 100;

        renderer.render(scene, camera);

        // controls.update();
      }

      window.addEventListener('resize', function () {
        camera.aspect = window.innerWidth / window.innerHeight;
        camera.updateProjectionMatrix();
        renderer.setSize(window.innerWidth, window.innerHeight);
      });
    }
  }, []);

  return (
    <div
      ref={containerRef}
      style={{
        position: 'relative',
        top: '0',
        left: '0',
        width: '100vw',
        height: '100vh',
      }}
    ></div>
  );
}

export default ThreeContainer;
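
If the model loads but the face does not animate, one quick check is whether the MediaPipe blendshape names actually resolve to morph targets on the head mesh. A hypothetical debugging snippet to run inside the GLTF load callback (head and blendshapesMap as above):

// Log any MediaPipe blendshape that has no matching facecap morph target
for (const [mpName, morphName] of Object.entries(blendshapesMap)) {
  if (head.morphTargetDictionary[morphName] === undefined) {
    console.warn(`no morph target for ${mpName} -> ${morphName}`);
  }
}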

4. Compose the three into a page

The full page code is as follows:

import React, { useEffect, useState, useRef } from 'react';
import { Page } from './Page';
import ThreeContainer from '../components/ThreeContainer';
import WebCam from '../components/WebCam';
import Detector from '../components/Detector';

function PageContent() {
  const [stream, setStream] = useState(null);
  const videoRef = useRef(null);
  const [isWebCamLoaded, setIsWebCamLoaded] = useState(false);
  const detectorRef = useRef(null);

  useEffect(() => {
    async function init() {
      Detector.getInstance().then((rtn) => {
        detectorRef.current = rtn;
        detectorRef.current.bindVideo(videoRef.current);
      });
    }
    if (isWebCamLoaded) {
      init();
    }
  }, [isWebCamLoaded]);

  return (
    <div
      style={{
        position: 'relative',
        top: '0',
        left: '0',
        bottom: '0',
        right: '0',
      }}
    >
      <ThreeContainer
        id="background"
        videoHeight={window.innerHeight}
        videoWidth={window.innerWidth}
        detectorRef={detectorRef}
      />
      <div
        id="UI"
        style={{
          position: 'absolute',
          top: '0',
          left: '0',
          bottom: '0',
          right: '0',
        }}
      >
        <WebCam
          isShowCanvas={false}
          canvasHeight={480}
          canvasWidth={640}
          stream={stream}
          setStream={setStream}
          videoRef={videoRef}
          loaded={isWebCamLoaded} // note: the prop must be named `loaded` to match WebCam's signature
          setLoaded={setIsWebCamLoaded}
        />
      </div>
    </div>
  );
}

export function App() {
  return (
    <Page>
      <PageContent />
    </Page>
  );
}

export default App;
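
To summarize how the three pieces cooperate at runtime (a sketch of the control flow, not extra code to add):

// 1. WebCam: getUserMedia -> <video> starts playing -> videoRef.current is set
// 2. onLoadedData fires with readyState === 4 -> setIsWebCamLoaded(true)
// 3. The useEffect above runs: Detector.getInstance() -> bindVideo(videoRef.current)
// 4. ThreeContainer's animation loop calls detectorRef.current.detect() every frame
//    and applies the resulting blendshape scores to the facecap morph targets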

III. Final result

Run the project with npm start; the result is shown below.
[Demo animation: the facecap model mirroring the user's facial expressions in real time]


Summary

This article showed how to combine react-webcam, three.js and Electron to build a desktop facial motion capture app. I hope you found it helpful; if you spot any mistakes or omissions, or have suggestions, please leave a comment.
