Flink from Beginner to Practice (3): Real-Time Data Capture - Flink MySQL CDC

Table of Contents

  • Series Index
  • 1. Overview
    • 1.1 Version Compatibility
    • 1.2 Dependencies
  • 2. Implementation
    • 2.1 Basic Usage
    • 2.2 More Configuration Options
    • 2.3 Custom Deserializer
    • 2.4 Flink SQL Approach
  • 3. Pitfalls
    • 3.1 The MySQL server has a timezone offset (0 seconds ahead of UTC) which does not match the configured timezone Asia/Shanghai.
  • References

Series Index

Flink from Beginner to Practice (1): Flink Basics and Deployment
Flink from Beginner to Practice (2): Flink DataStream API
Flink from Beginner to Practice (3): Real-Time Data Capture - Flink MySQL CDC

1. Overview

1.1 Version Compatibility

Pay attention to the MySQL version you are running; this demo uses MySQL 8.0.
Flink CDC supports change data capture for many databases, not just MySQL.
The versions also have to match: this demo uses Flink 1.18 together with Flink CDC 3.0.

1.2 Dependencies

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients</artifactId>
    <version>1.18.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>1.18.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-base</artifactId>
    <version>1.18.0</version>
</dependency>
<dependency>
    <groupId>com.ververica</groupId>
    <artifactId>flink-connector-mysql-cdc</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.27</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-runtime</artifactId>
    <version>1.18.0</version>
</dependency>

2. Implementation

2.1 Basic Usage

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import com.ververica.cdc.connectors.mysql.source.MySqlSource;

/**
 * Flink MySQL CDC
 * Every time the job starts, it captures all existing data once (initial snapshot),
 * then keeps reading changes from the binlog.
 */
public class FlinkCDC01 {
    public static void main(String[] args) throws Exception {
        MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
                .hostname("192.168.56.10")
                .port(3306)
                .databaseList("testdb")      // databases to monitor; multiple entries and regular expressions are supported
                .tableList("testdb.access")  // tables to monitor, in db.table form; multiple entries and regular expressions are supported
                .username("root")
                .password("root")
                .deserializer(new JsonDebeziumDeserializationSchema()) // converts SourceRecord to JSON String
                .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // enable checkpointing
        env.enableCheckpointing(3000);

        env.fromSource(mySqlSource, WatermarkStrategy.noWatermarks(), "MySQL Source")
                // run the source as a single parallel task
                .setParallelism(1)
                .print()
                .setParallelism(1); // use parallelism 1 for the sink to keep message order

        env.execute("Print MySQL Snapshot + Binlog");
    }
}

The output is JSON data:

{
  "before": null,
  "after": {
    "id": 1,
    "name": "1"
  },
  "source": {
    "version": "1.9.7.Final",
    "connector": "mysql",
    "name": "mysql_binlog_source",
    "ts_ms": 1707353812000,
    "snapshot": "false",
    "db": "testdb",          // database name
    "sequence": null,
    "table": "access",       // table name
    "server_id": 1,
    "gtid": null,
    "file": "binlog.000005",
    "pos": 374,
    "row": 0,
    "thread": 9,
    "query": null
  },
  "op": "c",                 // operation: c = create, u = update, d = delete, r = read (snapshot)
  "ts_ms": 1707353812450,
  "transaction": null
}
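
If downstream logic needs individual fields rather than the raw string, the JSON events can be parsed in a map function. The following is only a sketch, and it assumes jackson-databind is added as an extra dependency (it is not in the dependency list above):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.functions.MapFunction;

// Summarizes each change event: which table, which operation, and the row image after the change.
public class ChangeEventSummary implements MapFunction<String, String> {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public String map(String json) throws Exception {
        JsonNode event = MAPPER.readTree(json);
        String table = event.get("source").get("table").asText();
        String op = event.get("op").asText(); // c / u / d / r
        JsonNode after = event.get("after");  // null for delete events
        return table + " " + op + " -> " + after;
    }
}

It would be applied with .map(new ChangeEventSummary()) between fromSource(...) and print() in the job above.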

2.2 More Configuration Options

https://ververica.github.io/flink-cdc-connectors/master/content/connectors/mysql-cdc%28ZH%29.html

The configuration option scan.startup.mode specifies the startup mode of the MySQL CDC consumer; the valid values are listed below, and a builder sketch follows the list:
initial (default): performs an initial snapshot of the monitored tables on first startup, then continues reading from the latest binlog position.
earliest-offset: skips the snapshot phase and starts reading from the earliest binlog offset that is still available.
latest-offset: never performs a snapshot of the monitored tables on first startup; the connector reads only from the end of the binlog, i.e. it only sees changes made after the connector started.
specific-offset: skips the snapshot phase and starts reading from a specified binlog offset, given either as a binlog file name plus position, or as a GTID set if GTIDs are enabled on the cluster.
timestamp: skips the snapshot phase and starts reading binlog events from a specified timestamp.
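
In the DataStream API these modes are set on the builder via StartupOptions. A minimal sketch, assuming the same connection settings as in section 2.1 (check the option names against the documentation linked above):

import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.connectors.mysql.table.StartupOptions;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;

MySqlSource<String> source = MySqlSource.<String>builder()
        .hostname("192.168.56.10")
        .port(3306)
        .databaseList("testdb")
        .tableList("testdb.access")
        .username("root")
        .password("root")
        .deserializer(new JsonDebeziumDeserializationSchema())
        .startupOptions(StartupOptions.latest())                      // latest-offset: read the binlog tail only
        // .startupOptions(StartupOptions.initial())                  // default: snapshot + binlog
        // .startupOptions(StartupOptions.earliest())                 // earliest-offset
        // .startupOptions(StartupOptions.timestamp(1707350400000L))  // from a timestamp in milliseconds
        .build();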

2.3 Custom Deserializer

import com.ververica.cdc.debezium.DebeziumDeserializationSchema;
import io.debezium.data.Envelope;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.List;

public class DomainDeserializationSchema implements DebeziumDeserializationSchema<String> {

    @Override
    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) throws Exception {
        // the topic has the form <server-name>.<database>.<table>
        String topic = sourceRecord.topic();
        String[] split = topic.split("\\.");
        System.out.println("database: " + split[1]);
        System.out.println("table: " + split[2]);

        Struct value = (Struct) sourceRecord.value();

        // row image before the change
        Struct before = value.getStruct("before");
        System.out.println("before: " + before);
        if (before != null) {
            // all fields of the before image
            List<Field> fields = before.schema().fields();
            for (Field field : fields) {
                System.out.println("before field: " + field.name() + " value: " + before.get(field));
            }
        }

        // row image after the change
        Struct after = value.getStruct("after");
        System.out.println("after: " + after);
        if (after != null) {
            // all fields of the after image
            List<Field> fields = after.schema().fields();
            for (Field field : fields) {
                System.out.println("after field: " + field.name() + " value: " + after.get(field));
            }
        }

        // operation type (CREATE / UPDATE / DELETE / READ)
        Envelope.Operation operation = Envelope.operationFor(sourceRecord);
        System.out.println("operation: " + operation);

        // emit the deserialized result (placeholder string here)
        collector.collect("aaaaaaaaaaaaa");
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO; // produced type
    }
}

MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
        .hostname("192.168.56.10")
        .port(3306)
        .databaseList("testdb")      // databases to monitor; multiple entries are supported
        .tableList("testdb.access")  // tables to monitor; multiple entries are supported
        .username("root")
        .password("root")
        .deserializer(new DomainDeserializationSchema()) // custom deserializer
        .build();

2.4 Flink SQL Approach

For CDC the SQL approach is used less often; the DataStream API is the more common choice in practice. For completeness, a sketch of the SQL route is shown below.
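
A minimal sketch of driving the mysql-cdc connector through Flink SQL from Java. It assumes the same testdb.access table (columns id and name) and that flink-table-api-java-bridge plus a table planner are on the classpath in addition to the dependencies from section 1.2; the WITH options follow the mysql-cdc connector documentation linked in section 2.2:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class FlinkCDC02Sql {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(3000);
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Register testdb.access as a CDC source table (assumed schema: id INT, name STRING).
        tEnv.executeSql(
                "CREATE TABLE access_cdc (" +
                "  id INT," +
                "  name STRING," +
                "  PRIMARY KEY (id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'mysql-cdc'," +
                "  'hostname' = '192.168.56.10'," +
                "  'port' = '3306'," +
                "  'username' = 'root'," +
                "  'password' = 'root'," +
                "  'database-name' = 'testdb'," +
                "  'table-name' = 'access'," +
                "  'server-time-zone' = 'Asia/Shanghai'" +
                ")");

        // Initial snapshot plus continuous changelog, printed as a changelog stream.
        tEnv.executeSql("SELECT * FROM access_cdc").print();
    }
}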

3. Pitfalls

3.1 The MySQL server has a timezone offset (0 seconds ahead of UTC) which does not match the configured timezone Asia/Shanghai.

2024-02-08 08:36:33 INFO 5217 --- [lt-dispatcher-6] o.a.f.r.executiongraph.ExecutionGraph : Source: MySQL Source -> Sink: Print to Std. Out (1/1) (e2371dabd0c952a5dfa7c053cbde80c3_cbc357ccb763df2852fee8c4fc7d55f2_0_2) switched from CREATED to SCHEDULED.
2024-02-08 08:36:33 INFO 5217 --- [lt-dispatcher-8] o.a.f.r.r.s.FineGrainedSlotManager : Received resource requirements from job 369b1c979674a0444f679dd13264ea88: [ResourceRequirement{resourceProfile=ResourceProfile{UNKNOWN}, numberOfRequiredSlots=1}]
2024-02-08 08:36:33 INFO 5218 --- [lt-dispatcher-6] o.a.flink.runtime.jobmaster.JobMaster : Trying to recover from a global failure.
org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator for 'Source: MySQL Source -> Sink: Print to Std. Out' (operator cbc357ccb763df2852fee8c4fc7d55f2).
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:624)
    at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$QuiesceableContext.failJob(RecreateOnResetOperatorCoordinator.java:248)
    at org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext.failJob(SourceCoordinatorContext.java:395)
    at org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:225)
    at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.resetAndStart(RecreateOnResetOperatorCoordinator.java:416)
    at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.lambda$resetToCheckpoint$7(RecreateOnResetOperatorCoordinator.java:156)
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
    at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:792)
    at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2153)
    at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.resetToCheckpoint(RecreateOnResetOperatorCoordinator.java:143)
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.resetToCheckpoint(OperatorCoordinatorHolder.java:284)
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreStateToCoordinators(CheckpointCoordinator.java:2044)
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedStateInternal(CheckpointCoordinator.java:1719)
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedStateToAll(CheckpointCoordinator.java:1647)
    at org.apache.flink.runtime.scheduler.SchedulerBase.restoreState(SchedulerBase.java:434)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.restartTasks(DefaultScheduler.java:419)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.lambda$null$2(DefaultScheduler.java:379)
    at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:719)
    at java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:701)
    at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRunAsync$4(PekkoRpcActor.java:451)
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRunAsync(PekkoRpcActor.java:451)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:218)
    at org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:85)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:168)
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33)
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29)
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547)
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545)
    at org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229)
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590)
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557)
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280)
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241)
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1067)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1703)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:172)
Caused by: org.apache.flink.table.api.ValidationException: The MySQL server has a timezone offset (0 seconds ahead of UTC) which does not match the configured timezone Asia/Shanghai. Specify the right server-time-zone to avoid inconsistencies for time-related fields.
    at com.ververica.cdc.connectors.mysql.MySqlValidator.checkTimeZone(MySqlValidator.java:184)
    at com.ververica.cdc.connectors.mysql.MySqlValidator.validate(MySqlValidator.java:72)
    at com.ververica.cdc.connectors.mysql.source.MySqlSource.createEnumerator(MySqlSource.java:197)
    at org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:221)
    ... 42 common frames omitted

Check the MySQL time zone settings:
show variables like '%time_zone%';
Solution:

SET time_zone = 'Asia/Shanghai';
SET @@global.time_zone = 'Asia/Shanghai';
# check again
SELECT @@global.time_zone;
show variables like '%time_zone%';

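Alternatively, instead of changing the MySQL server settings, the mismatch can be fixed on the connector side by telling the source which time zone the server session actually uses. A sketch based on the builder from section 2.1 (the server time zone option is described in the mysql-cdc documentation; verify the value against what show variables reports):

MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
        .hostname("192.168.56.10")
        .port(3306)
        .databaseList("testdb")
        .tableList("testdb.access")
        .username("root")
        .password("root")
        // Must match the time zone the MySQL server session actually uses, otherwise the
        // validator throws the ValidationException shown above. Here the server reported
        // an offset of 0 seconds ahead of UTC; use "Asia/Shanghai" if the server is set to that zone.
        .serverTimeZone("UTC")
        .deserializer(new JsonDebeziumDeserializationSchema())
        .build();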

References

Source code: https://github.com/ververica/flink-cdc-connectors
Documentation: https://ververica.github.io/flink-cdc-connectors/master/content/overview/cdc-connectors.html
Website: https://ververica.github.io/flink-cdc-connectors/
