Data Collection Tools: Canal

This article shows how to capture MySQL data with canal in TCP mode and in DataHub (Kafka-compatible) mode.

1. Download canal

https://aliyun-datahub.oss-cn-hangzhou.aliyuncs.com/tools/canal.deployer-1.1.5-SNAPSHOT.tar.gz
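A typical way to unpack it (the tarball has no top-level directory, so extract into a fresh folder; adjust the file name to the version you downloaded):

mkdir canal
tar -zxvf canal.deployer-1.1.5-SNAPSHOT.tar.gz -C canal
ls canal   # bin  conf  lib  (logs is created on first start)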

 

Canal works much like MySQL master-slave replication: it impersonates a replica and pulls the binlog from the master.
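Because canal connects as a replica, the source MySQL needs ROW-format binlog and an account with replication privileges. A minimal sketch, assuming the flink/flink account that the instance.properties below uses (the server_id value is arbitrary; adjust everything to your environment):

# my.cnf on the MySQL host
[mysqld]
log-bin=mysql-bin
binlog-format=ROW
server_id=1

-- run in MySQL as an administrator
CREATE USER 'flink'@'%' IDENTIFIED BY 'flink';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'flink'@'%';
FLUSH PRIVILEGES;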

2. TCP mode

a. canal.properties

Open it and have a look; nothing needs to change for TCP mode.

#################################################
#########               common argument         #############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
# canal.user = canal
# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441

canal.zkServers =
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ
canal.serverMode = tcp
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024
## memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########               destinations            #############
#################################################
canal.destinations = example  ## multiple destinations can be listed, separated by commas
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
#########             MQ Properties      #############
##################################################
canal.mq.flat.message = true
canal.mq.database.hash = true
canal.mq.parallel.thread.size = 8
canal.mq.canal.batch.size = 50
canal.mq.canal.fetch.timeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.access.channel = local

# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=

##################################################
#########                    Kafka                   #############
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

canal.mq.kafka.kerberos.enable = false
canal.mq.kafka.kerberos.krb5.file = "../conf/kerberos/krb5.conf"
canal.mq.kafka.kerberos.jaas.file = "../conf/kerberos/jaas.conf"

##################################################
#########                   RocketMQ         #############
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false

##################################################
#########                   RabbitMQ         #############
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =

b. example/instance.properties

canal.instance.master.address=192.168.140.1:3306  ### change this to your own MySQL address

#################################################
## mysql serverId , v1.0.26+ will autoGen
# canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# position info
canal.instance.master.address=192.168.140.1:3306  ### change this to your own MySQL address
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername=flink
canal.instance.dbPassword=flink
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################

Start the server: bin/startup.sh

Run jps and confirm that a CanalLauncher process is running.

At this point, the canal server side is fully configured.
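If the process is not there, check the logs (paths assume the default deployer layout):

tail -f logs/canal/canal.log        # server-level log
tail -f logs/example/example.log    # instance-level log, one per destination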

c. Developing a canal client

Maven dependency:

<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>1.1.2</version>
</dependency>

Client code:

package com.tbea;

import com.alibaba.fastjson.JSONObject;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;
import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;

import java.net.InetSocketAddress;
import java.util.List;

public class CanalClient {

    public static void main(String[] args) throws InterruptedException, InvalidProtocolBufferException {
        // Get a connection to the canal server (destination "example", no credentials)
        CanalConnector canalConnector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("192.168.140.129", 11111), "example", "", "");
        // Poll for new data forever
        while (true) {
            // Connect and subscribe to the target database
            canalConnector.connect();
            canalConnector.subscribe("flinkcdc.*");
            // Pull a batch of up to 100 entries (get() also acknowledges the batch)
            Message message = canalConnector.get(100);
            List<CanalEntry.Entry> entries = message.getEntries();
            if (entries.size() <= 0) {
                System.out.println("No data in this batch, sleeping for a second...");
                Thread.sleep(1000);
            } else {
                for (CanalEntry.Entry entry : entries) {
                    // 1. Table name
                    String tableName = entry.getHeader().getTableName();
                    // 2. Entry type
                    CanalEntry.EntryType entryType = entry.getEntryType();
                    // 3. Serialized payload
                    ByteString storeValue = entry.getStoreValue();
                    // 4. Only ROWDATA entries carry row changes (others are transaction markers)
                    if (CanalEntry.EntryType.ROWDATA.equals(entryType)) {
                        // 5. Deserialize the payload
                        CanalEntry.RowChange rowChange = CanalEntry.RowChange.parseFrom(storeValue);
                        // 6. Event type of this change (INSERT/UPDATE/DELETE/...)
                        CanalEntry.EventType eventType = rowChange.getEventType();
                        // 7. The changed rows
                        List<CanalEntry.RowData> rowDatasList = rowChange.getRowDatasList();
                        // 8. Print the before/after image of every row
                        for (CanalEntry.RowData rowData : rowDatasList) {
                            JSONObject beforeData = new JSONObject();
                            JSONObject afterData = new JSONObject();
                            for (CanalEntry.Column column : rowData.getBeforeColumnsList()) {
                                beforeData.put(column.getName(), column.getValue());
                            }
                            for (CanalEntry.Column column : rowData.getAfterColumnsList()) {
                                afterData.put(column.getName(), column.getValue());
                            }
                            System.out.println("Table:" + tableName
                                    + ",EventType:" + eventType
                                    + ",Before:" + beforeData
                                    + ",After:" + afterData);
                        }
                    } else {
                        System.out.println("Skipping entry type " + entryType);
                    }
                }
            }
        }
    }
}

Run the class, then make some changes in MySQL; each change is printed to the console.

At this point we can capture every MySQL change event in real time; where to deliver the data next can be implemented as needed.
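One caveat before building a real sink: get(100) acknowledges each batch automatically, so a crash between fetch and write loses that batch. The same CanalConnector API also exposes getWithoutAck/ack/rollback for at-least-once delivery. A minimal sketch, where process() is a hypothetical stand-in for your own write logic:

Message message = canalConnector.getWithoutAck(100);
long batchId = message.getId();
try {
    process(message.getEntries());    // hypothetical: deliver entries to your sink
    canalConnector.ack(batchId);      // confirm only after the write succeeds
} catch (Exception e) {
    canalConnector.rollback(batchId); // the batch will be redelivered on the next fetch
}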

3. Kafka mode

a. canal.properties

Changes: canal.serverMode = kafka

Kafka address: kafka.bootstrap.servers = 192.168.140.128:9092

#################################################
#########               common argument         #############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
# canal.user = canal
# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441

canal.zkServers =
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ
canal.serverMode = kafka
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024
## memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########               destinations            #############
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
#########             MQ Properties      #############
##################################################
canal.mq.flat.message = true
canal.mq.database.hash = true
canal.mq.parallel.thread.size = 8
canal.mq.canal.batch.size = 50
canal.mq.canal.fetch.timeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.access.channel = local

# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=

##################################################
#########                    Kafka                   #############
##################################################
kafka.bootstrap.servers = 192.168.140.128:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

canal.mq.kafka.kerberos.enable = false
canal.mq.kafka.kerberos.krb5.file = "../conf/kerberos/krb5.conf"
canal.mq.kafka.kerberos.jaas.file = "../conf/kerberos/jaas.conf"

##################################################
#########                   RocketMQ         #############
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false

##################################################
#########                   RabbitMQ         #############
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =

b. example/instance.properties

Changes: canal.instance.master.address=192.168.140.1:3306

canal.instance.dbUsername=flink

canal.instance.dbPassword=flink

canal.mq.topic=mysql_binlogs

#################################################
## mysql serverId , v1.0.26+ will autoGen
# canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# position info
canal.instance.master.address=192.168.140.1:3306
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername=flink
canal.instance.dbPassword=flink
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=mysql_binlogs
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################

c. Start the server

bin/startup.sh

d. Verify that data is being produced

kafka-console-consumer.sh --bootstrap-server 192.168.140.128:9092 --topic mysql_binlogs --from-beginning
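With canal.mq.flat.message = true, each change event arrives as one flat JSON record. As a rough illustration, an INSERT into a hypothetical flinkcdc.user table would produce something like the following (values invented; the field set follows canal's FlatMessage format):

{"data":[{"id":"1","name":"zhangsan"}],"database":"flinkcdc","es":1660000000000,"id":1,"isDdl":false,"mysqlType":{"id":"int(11)","name":"varchar(255)"},"old":null,"pkNames":["id"],"sql":"","sqlType":{"id":4,"name":12},"table":"user","ts":1660000000123,"type":"INSERT"}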

4. DataHub's Kafka-compatible mode

How is this implemented? Alibaba Cloud documents it in "What is the Canal plugin and how to use it" (数据总线 DataHub, Alibaba Cloud Help Center).

From that page, the key piece we need is the ip:port of DataHub's Kafka-compatible endpoint.
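In other words, the Kafka-mode setup above should carry over almost unchanged: point the Kafka section at DataHub's Kafka-compatible endpoint instead of a self-hosted broker. A rough sketch of the intended wiring (endpoint and names below are placeholders; per the help page, DataHub's Kafka protocol also requires SASL_SSL/PLAIN authentication with an Aliyun AccessKey pair, so consult it for the exact producer properties your canal build supports):

# canal.properties
canal.serverMode = kafka
kafka.bootstrap.servers = <datahub-kafka-endpoint>:<port>

# example/instance.properties -- DataHub exposes Kafka topics as <project>.<topic>
canal.mq.topic = <your_project>.<your_topic>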
