Hudi Flink SQL Source Code Debugging and Learning (1)

Preface

With the goal of learning the hudi-flink source code, this post debugs the code from my earlier article "Hudi Flink SQL代码示例及本地调试" and records the main steps of the debugging process together with the corresponding source code snippets.

Versions

  • Flink 1.15.4
  • Hudi 0.13.0

Goal

In the article "Hudi Flink SQL代码示例及本地调试" I mentioned that the Table API entry point looks much like the DataStream API one: the DataStream API entry points are the sink and source methods of HoodiePipeline, and those two methods call HoodieTableFactory.createDynamicTableSink and HoodieTableFactory.createDynamicTableSource respectively. So how does the Table API code reach createDynamicTableSink / createDynamicTableSource step by step? And once a HoodieTableSink is returned, how does the data actually get written? The main entry of Hudi's write logic appears to be the method body of HoodieTableSink.getSinkRuntimeProvider. I had never worked these questions out before, so the goal this time is to figure out:
1. The main code path from the Table API entry point to createDynamicTableSink returning a HoodieTableSink;
2. Where the method body of HoodieTableSink.getSinkRuntimeProvider is invoked to carry out the subsequent Hudi write logic.

Related classes:

  • HoodiePipeline (DataStream API)
  • HoodieTableFactory
  • HoodieTableSink
  • DataStreamSinkProviderAdapter (functional interface)
  • TableEnvironmentImpl
  • BatchPlanner
  • PlannerBase
  • FactoryUtil
  • BatchExecSink
  • CommonExecSink

DataStream API

The questions above are actually easy to answer from the DataStream API code, so let's first look at the DataStream API code that writes to Hudi. The full code is in the article "Flink Hudi DataStream API代码示例".

DataStream<RowData> dataStream = env.fromElements(
    GenericRowData.of(1, StringData.fromString("hudi1"), 1.1, 1000L, StringData.fromString("2023-04-07")),
    GenericRowData.of(2, StringData.fromString("hudi2"), 2.2, 2000L, StringData.fromString("2023-04-08"))
);

HoodiePipeline.Builder builder = HoodiePipeline.builder(targetTable)
    .column("id int")
    .column("name string")
    .column("price double")
    .column("ts bigint")
    .column("dt string")
    .pk("id")
    .partition("dt")
    .options(options);

builder.sink(dataStream, false);

HoodiePipeline.Builder.sink

    public DataStreamSink<?> sink(DataStream<RowData> input, boolean bounded) {
      TableDescriptor tableDescriptor = getTableDescriptor();
      return HoodiePipeline.sink(input, tableDescriptor.getTableId(), tableDescriptor.getResolvedCatalogTable(), bounded);
    }

HoodiePipeline.sink

  private static DataStreamSink<?> sink(DataStream<RowData> input, ObjectIdentifier tablePath, ResolvedCatalogTable catalogTable, boolean isBounded) {
    FactoryUtil.DefaultDynamicTableContext context = Utils.getTableContext(tablePath, catalogTable, Configuration.fromMap(catalogTable.getOptions()));
    HoodieTableFactory hoodieTableFactory = new HoodieTableFactory();
    return ((DataStreamSinkProvider) hoodieTableFactory.createDynamicTableSink(context)
        .getSinkRuntimeProvider(new SinkRuntimeProviderContext(isBounded)))
        .consumeDataStream(input);
  }

HoodiePipeline.sink already contains the answer:
1. HoodieTableFactory.createDynamicTableSink returns a HoodieTableSink.
2. HoodieTableSink.getSinkRuntimeProvider returns a DataStreamSinkProviderAdapter.
3. DataStreamSinkProviderAdapter.consumeDataStream invokes the method body of HoodieTableSink.getSinkRuntimeProvider, which carries out the subsequent Hudi write logic. The dataStream here is the DataStream<RowData> we created at the beginning of the program (see the sketch right after this list).
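
For readability, here is the same call chain split into the three steps above. The class and method names come from the Hudi 0.13.0 / Flink 1.15 snippets quoted in this article, but the wrapper method itself is only a hypothetical re-arrangement of HoodiePipeline.sink, not code that exists in Hudi.

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.datastream.DataStreamSink;
    import org.apache.flink.table.connector.sink.DataStreamSinkProvider;
    import org.apache.flink.table.data.RowData;
    import org.apache.flink.table.factories.DynamicTableFactory;
    import org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext;
    import org.apache.hudi.table.HoodieTableFactory;
    import org.apache.hudi.table.HoodieTableSink;

    class SinkChainSketch {
      static DataStreamSink<?> sink(DataStream<RowData> input, DynamicTableFactory.Context context, boolean isBounded) {
        HoodieTableFactory factory = new HoodieTableFactory();
        // step 1: the factory returns a HoodieTableSink
        HoodieTableSink tableSink = (HoodieTableSink) factory.createDynamicTableSink(context);
        // step 2: getSinkRuntimeProvider returns the DataStreamSinkProviderAdapter (the lambda body is not executed yet)
        DataStreamSinkProvider provider =
            (DataStreamSinkProvider) tableSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(isBounded));
        // step 3: consumeDataStream runs the lambda body, which builds the actual Hudi write pipeline
        return provider.consumeDataStream(input);
      }
    }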

HoodieTableSink.getSinkRuntimeProvider

getSinkRuntimeProvider returns a DataStreamSinkProviderAdapter, and the lambda expression dataStream -> {...} is the concrete implementation of DataStreamSinkProviderAdapter.consumeDataStream(dataStream).

  @Override
  public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
    return (DataStreamSinkProviderAdapter) dataStream -> {
      // setup configuration
      long ckpTimeout = dataStream.getExecutionEnvironment().getCheckpointConfig().getCheckpointTimeout();
      conf.setLong(FlinkOptions.WRITE_COMMIT_ACK_TIMEOUT, ckpTimeout);
      // set up default parallelism
      OptionsInference.setupSinkTasks(conf, dataStream.getExecutionConfig().getParallelism());

      RowType rowType = (RowType) schema.toSinkRowDataType().notNull().getLogicalType();

      // bulk_insert mode
      final String writeOperation = this.conf.get(FlinkOptions.OPERATION);
      if (WriteOperationType.fromValue(writeOperation) == WriteOperationType.BULK_INSERT) {
        return Pipelines.bulkInsert(conf, rowType, dataStream);
      }

      // Append mode
      if (OptionsResolver.isAppendMode(conf)) {
        DataStream<Object> pipeline = Pipelines.append(conf, rowType, dataStream, context.isBounded());
        if (OptionsResolver.needsAsyncClustering(conf)) {
          return Pipelines.cluster(conf, rowType, pipeline);
        } else {
          return Pipelines.dummySink(pipeline);
        }
      }

      DataStream<Object> pipeline;
      // bootstrap
      final DataStream<HoodieRecord> hoodieRecordDataStream =
          Pipelines.bootstrap(conf, rowType, dataStream, context.isBounded(), overwrite);
      // write pipeline
      pipeline = Pipelines.hoodieStreamWrite(conf, hoodieRecordDataStream);
      // compaction
      if (OptionsResolver.needsAsyncCompaction(conf)) {
        // use synchronous compaction for bounded source.
        if (context.isBounded()) {
          conf.setBoolean(FlinkOptions.COMPACTION_ASYNC_ENABLED, false);
        }
        return Pipelines.compact(conf, pipeline);
      } else {
        return Pipelines.clean(conf, pipeline);
      }
    };
  }

DataStreamSinkProviderAdapter is in fact a functional interface, i.e. an interface that contains exactly one abstract method. A lambda expression can be assigned to a functional interface, which instantiates the interface.

public interface DataStreamSinkProviderAdapter extends DataStreamSinkProvider {
  DataStreamSink<?> consumeDataStream(DataStream<RowData> dataStream);

  @Override
  default DataStreamSink<?> consumeDataStream(ProviderContext providerContext, DataStream<RowData> dataStream) {
    return consumeDataStream(dataStream);
  }
}

For functional interfaces and lambda expressions, refer to the following two articles:
https://it.sohu.com/a/682888110_100123073
https://blog.csdn.net/Speechless_/article/details/123746047
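
As a self-contained illustration (plain Java, not Hudi code), the sketch below shows the mechanism: assigning a lambda instantiates a single-abstract-method interface, and the lambda body only runs when that abstract method is invoked. This is exactly why the body inside HoodieTableSink.getSinkRuntimeProvider runs only when consumeDataStream is called later. The interface and method names here are made up for the demo.

    public class FunctionalInterfaceDemo {

      @FunctionalInterface
      interface Provider {             // plays the role of DataStreamSinkProviderAdapter
        String consume(String input);  // plays the role of consumeDataStream(dataStream)
      }

      public static void main(String[] args) {
        // Creating the lambda does NOT execute its body
        // (just like getSinkRuntimeProvider merely returning the adapter)
        Provider provider = input -> {
          System.out.println("lambda body running for: " + input);
          return "sink(" + input + ")";
        };
        // Invoking the abstract method executes the body
        // (just like provider.consumeDataStream(dataStream) later on)
        System.out.println(provider.consume("dataStream"));
      }
    }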

Table API

Now that the DataStream API call path is clear, let's compare it with the rough call path of the Table API. The debugging entry point is:

tableEnv.executeSql(String.format("insert into %s values (1,'hudi',10,100,'2023-05-28')", tableName));
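
For reference while stepping through the debugger, below is a minimal, self-contained sketch of the Table API entry point. The table name, schema and 'path' option are placeholders rather than the exact code from the companion article; it assumes Flink 1.15.4 with the Hudi 0.13.0 Flink bundle on the classpath.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class HudiTableApiSketch {
      public static void main(String[] args) {
        // batch mode, so the planner resolved later is BatchPlanner
        TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
        String tableName = "t1"; // placeholder table name
        tableEnv.executeSql("create table " + tableName + " (\n"
            + "  id int, name string, price double, ts bigint, dt string,\n"
            + "  primary key (id) not enforced\n"
            + ") partitioned by (dt) with (\n"
            + "  'connector' = 'hudi',\n"
            + "  'path' = 'file:///tmp/hudi_" + tableName + "'\n" // placeholder path
            + ")");
        // the statement this walkthrough starts from
        tableEnv.executeSql(String.format(
            "insert into %s values (1,'hudi',10,100,'2023-05-28')", tableName));
      }
    }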

Overall call flow

1. tableEnv.executeSql -> TableEnvironmentImpl.executeSql -> executeInternal(Operation operation) -> executeInternal(List<ModifyOperation> operations) -> this.translate -> (PlannerBase) this.planner.translate

2.1. PlannerBase.translate -> PlannerBase.translateToRel -> getTableSink(catalogSink.getContextResolvedTable, dynamicOptions) -> FactoryUtil.createDynamicTableSink -> HoodieTableFactory.createDynamicTableSink

2.2. PlannerBase.translate -> (BatchPlanner) translateToPlan(execGraph) -> (ExecNodeBase) node.translateToPlan -> (BatchExecSink) translateToPlanInternal -> (CommonExecSink) createSinkTransformation -> (HoodieTableSink) getSinkRuntimeProvider -> (CommonExecSink) applySinkProvider -> provider.consumeDataStream

Detailed code

TableEnvironmentImpl

(TableEnvironmentImpl)executeSql

    public TableResult executeSql(String statement) {
        List<Operation> operations = this.getParser().parse(statement);
        if (operations.size() != 1) {
            throw new TableException("Unsupported SQL query! executeSql() only accepts a single SQL statement of type CREATE TABLE, DROP TABLE, ALTER TABLE, CREATE DATABASE, DROP DATABASE, ALTER DATABASE, CREATE FUNCTION, DROP FUNCTION, ALTER FUNCTION, CREATE CATALOG, DROP CATALOG, USE CATALOG, USE [CATALOG.]DATABASE, SHOW CATALOGS, SHOW DATABASES, SHOW TABLES, SHOW [USER] FUNCTIONS, SHOW PARTITIONSCREATE VIEW, DROP VIEW, SHOW VIEWS, INSERT, DESCRIBE, LOAD MODULE, UNLOAD MODULE, USE MODULES, SHOW [FULL] MODULES.");
        } else {
            // Key step: executeInternal
            return this.executeInternal((Operation) operations.get(0));
        }
    }

executeInternal(Operation operation)

    public TableResultInternal executeInternal(Operation operation) {
        if (operation instanceof ModifyOperation) {
            // Key step: executeInternal
            return this.executeInternal(Collections.singletonList((ModifyOperation) operation));
        } else if (operation instanceof StatementSetOperation) {
            return this.executeInternal(((StatementSetOperation) operation).getOperations());

executeInternal(List<ModifyOperation> operations)

    public TableResultInternal executeInternal(List<ModifyOperation> operations) {
        // Key step: translate
        List<Transformation<?>> transformations = this.translate(operations);
        List<String> sinkIdentifierNames = this.extractSinkIdentifierNames(operations);
        TableResultInternal result = this.executeInternal(transformations, sinkIdentifierNames);
        if ((Boolean) this.tableConfig.get(TableConfigOptions.TABLE_DML_SYNC)) {
            try {
                result.await();
            } catch (ExecutionException | InterruptedException var6) {
                result.getJobClient().ifPresent(JobClient::cancel);
                throw new TableException("Fail to wait execution finish.", var6);
            }
        }
        return result;
    }

translate
The planner here is BatchPlanner, because we set batch mode via EnvironmentSettings.inBatchMode().

    protected List<Transformation<?>> translate(List<ModifyOperation> modifyOperations) {
        // The planner here is BatchPlanner, because batch mode is set via EnvironmentSettings.inBatchMode()
        // Key step: PlannerBase.translate
        return this.planner.translate(modifyOperations);
    }

BatchPlanner

PlannerBase.translate (PlannerBase is the parent class of BatchPlanner)

  override def translate(
      modifyOperations: util.List[ModifyOperation]): util.List[Transformation[_]] = {
    beforeTranslation()
    if (modifyOperations.isEmpty) {
      return List.empty[Transformation[_]]
    }
    // Key step: translateToRel
    val relNodes = modifyOperations.map(translateToRel)
    val optimizedRelNodes = optimize(relNodes)
    val execGraph = translateToExecNodeGraph(optimizedRelNodes, isCompiled = false)
    // Key step: translateToPlan
    val transformations = translateToPlan(execGraph)
    afterTranslation()
    transformations
  }

PlannerBase.translateToRel

  private[flink] def translateToRel(modifyOperation: ModifyOperation): RelNode = {
    val dataTypeFactory = catalogManager.getDataTypeFactory
    modifyOperation match {
      case s: UnregisteredSinkModifyOperation[_] =>
        val input = getRelBuilder.queryOperation(s.getChild).build()
        val sinkSchema = s.getSink.getTableSchema
        // validate query schema and sink schema, and apply cast if possible
        val query = validateSchemaAndApplyImplicitCast(
          input,
          catalogManager.getSchemaResolver.resolve(sinkSchema.toSchema),
          null,
          dataTypeFactory,
          getTypeFactory)
        LogicalLegacySink.create(
          query,
          s.getSink,
          "UnregisteredSink",
          ConnectorCatalogTable.sink(s.getSink, !isStreamingMode))

      case collectModifyOperation: CollectModifyOperation =>
        val input = getRelBuilder.queryOperation(modifyOperation.getChild).build()
        DynamicSinkUtils.convertCollectToRel(
          getRelBuilder,
          input,
          collectModifyOperation,
          getTableConfig,
          getFlinkContext.getClassLoader)

      case catalogSink: SinkModifyOperation =>
        val input = getRelBuilder.queryOperation(modifyOperation.getChild).build()
        val dynamicOptions = catalogSink.getDynamicOptions
        // Key step: getTableSink
        getTableSink(catalogSink.getContextResolvedTable, dynamicOptions).map {
          case (table, sink: TableSink[_]) =>
            // Legacy tables can't be anonymous
            val identifier = catalogSink.getContextResolvedTable.getIdentifier
            // check the logical field type and physical field type are compatible
            val queryLogicalType = FlinkTypeFactory.toLogicalRowType(input.getRowType)
            // validate logical schema and physical schema are compatible
            validateLogicalPhysicalTypesCompatible(table, sink, queryLogicalType)
            // validate TableSink
            validateTableSink(catalogSink, identifier, sink, table.getPartitionKeys)
            // validate query schema and sink schema, and apply cast if possible
            val query = validateSchemaAndApplyImplicitCast(
              input,
              table.getResolvedSchema,
              identifier.asSummaryString,
              dataTypeFactory,
              getTypeFactory)
            val hints = new util.ArrayList[RelHint]
            if (!dynamicOptions.isEmpty) {
              hints.add(RelHint.builder("OPTIONS").hintOptions(dynamicOptions).build)
            }
            LogicalLegacySink.create(
              query,
              hints,
              sink,
              identifier.toString,
              table,
              catalogSink.getStaticPartitions.toMap)

          case (table, sink: DynamicTableSink) =>
            DynamicSinkUtils.convertSinkToRel(getRelBuilder, input, catalogSink, sink)
        } match {
          case Some(sinkRel) => sinkRel
          case None =>
            throw new TableException(s"Sink '${catalogSink.getContextResolvedTable}' does not exists")
        }

PlannerBase.getTableSink

  private def getTableSink(
      contextResolvedTable: ContextResolvedTable,
      dynamicOptions: JMap[String, String]): Option[(ResolvedCatalogTable, Any)] = {
    contextResolvedTable.getTable[CatalogBaseTable] match {
      case connectorTable: ConnectorCatalogTable[_, _] =>
        val resolvedTable = contextResolvedTable.getResolvedTable[ResolvedCatalogTable]
        toScala(connectorTable.getTableSink) match {
          case Some(sink) => Some(resolvedTable, sink)
          case None => None
        }

      case regularTable: CatalogTable =>
        val resolvedTable = contextResolvedTable.getResolvedTable[ResolvedCatalogTable]
        ...
        if (
          !contextResolvedTable.isAnonymous &&
          TableFactoryUtil.isLegacyConnectorOptions(
            catalogManager.getCatalog(objectIdentifier.getCatalogName).orElse(null),
            tableConfig,
            isStreamingMode,
            objectIdentifier,
            resolvedTable.getOrigin,
            isTemporary)
        ) {
          ...
        } else {
          ...
          // Key step: FactoryUtil.createDynamicTableSink
          val tableSink = FactoryUtil.createDynamicTableSink(
            factory,
            objectIdentifier,
            tableToFind,
            Collections.emptyMap(),
            getTableConfig,
            getFlinkContext.getClassLoader,
            isTemporary)
          Option(resolvedTable, tableSink)
        }

      case _ => None
    }

FactoryUtil.createDynamicTableSink

Based on 'connector' = 'hudi', the factory that gets discovered is org.apache.hudi.table.HoodieTableFactory, and HoodieTableFactory.createDynamicTableSink is then called.

    public static DynamicTableSink createDynamicTableSink(
            @Nullable DynamicTableSinkFactory preferredFactory,
            ObjectIdentifier objectIdentifier,
            ResolvedCatalogTable catalogTable,
            Map<String, String> enrichmentOptions,
            ReadableConfig configuration,
            ClassLoader classLoader,
            boolean isTemporary) {
        final DefaultDynamicTableContext context =
                new DefaultDynamicTableContext(
                        objectIdentifier,
                        catalogTable,
                        enrichmentOptions,
                        configuration,
                        classLoader,
                        isTemporary);
        try {
            // 'connector'='hudi' -> org.apache.hudi.table.HoodieTableFactory
            final DynamicTableSinkFactory factory =
                    preferredFactory != null
                            ? preferredFactory
                            : discoverTableFactory(DynamicTableSinkFactory.class, context);
            // Key step: HoodieTableFactory.createDynamicTableSink
            return factory.createDynamicTableSink(context);
        } catch (Throwable t) {
            throw new ValidationException(
                    String.format(
                            "Unable to create a sink for writing table '%s'.\n\n"
                                    + "Table options are:\n\n"
                                    + "%s",
                            objectIdentifier.asSummaryString(),
                            catalogTable.getOptions().entrySet().stream()
                                    .map(e -> stringifyOption(e.getKey(), e.getValue()))
                                    .sorted()
                                    .collect(Collectors.joining("\n"))),
                    t);
        }
    }
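
For reference, discoverTableFactory resolves the factory through the Java ServiceLoader SPI: each connector jar registers its factories in META-INF/services/org.apache.flink.table.factories.Factory, and Flink matches the table's 'connector' option against Factory.factoryIdentifier(). The sketch below is a simplified, hypothetical stand-in for that lookup, not the actual FactoryUtil code.

    import java.util.ServiceLoader;

    import org.apache.flink.table.factories.DynamicTableSinkFactory;
    import org.apache.flink.table.factories.Factory;

    class FactoryDiscoverySketch {
      static DynamicTableSinkFactory discoverSinkFactory(String connector, ClassLoader classLoader) {
        for (Factory factory : ServiceLoader.load(Factory.class, classLoader)) {
          if (factory instanceof DynamicTableSinkFactory
              && connector.equals(factory.factoryIdentifier())) {
            // for 'connector'='hudi' this would be org.apache.hudi.table.HoodieTableFactory
            return (DynamicTableSinkFactory) factory;
          }
        }
        throw new IllegalStateException("No DynamicTableSinkFactory found for connector: " + connector);
      }
    }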

HoodieTableFactory.createDynamicTableSink

This answers the first question.

  public DynamicTableSink createDynamicTableSink(Context context) {
    Configuration conf = FlinkOptions.fromMap(context.getCatalogTable().getOptions());
    checkArgument(!StringUtils.isNullOrEmpty(conf.getString(FlinkOptions.PATH)),
        "Option [path] should not be empty.");
    setupTableOptions(conf.getString(FlinkOptions.PATH), conf);
    ResolvedSchema schema = context.getCatalogTable().getResolvedSchema();
    sanityCheck(conf, schema);
    setupConfOptions(conf, context.getObjectIdentifier(), context.getCatalogTable(), schema);
    // Key step: HoodieTableSink
    return new HoodieTableSink(conf, schema);
  }

BatchExecSink

Back in PlannerBase.translate: it subsequently calls translateToPlan, and execGraph.getRootNodes returns a BatchExecSink (to see why it is a BatchExecSink, look at the translateToExecNodeGraph method called in PlannerBase.translate). Since BatchExecSink is a subclass of BatchExecNode, node.translateToPlan is executed.

BatchPlanner.translateToPlan

  override protected def translateToPlan(execGraph: ExecNodeGraph): util.List[Transformation[_]] = {
    beforeTranslation()
    val planner = createDummyPlanner()
    val transformations = execGraph.getRootNodes.map {
      // BatchExecSink
      // Key step: ExecNodeBase.translateToPlan
      case node: BatchExecNode[_] => node.translateToPlan(planner)
      case _ =>
        throw new TableException(
          "Cannot generate BoundedStream due to an invalid logical plan. " +
            "This is a bug and should not happen. Please file an issue.")
    }
    afterTranslation()
    transformations
  }

BatchExecSink

public class BatchExecSink extends CommonExecSink implements BatchExecNode<Object> { ...

public abstract class CommonExecSink extends ExecNodeBase<Object>
        implements MultipleTransformationTranslator<Object> { ...

ExecNodeBase.translateToPlan

    public final Transformation<T> translateToPlan(Planner planner) {
        if (transformation == null) {
            transformation =
                    // Key step: BatchExecSink.translateToPlanInternal
                    translateToPlanInternal(
                            (PlannerBase) planner,
                            ExecNodeConfig.of(
                                    ((PlannerBase) planner).getTableConfig(),
                                    persistedConfig,
                                    isCompiled));
            if (this instanceof SingleTransformationTranslator) {
                if (inputsContainSingleton()) {
                    transformation.setParallelism(1);
                    transformation.setMaxParallelism(1);
                }
            }
        }
        return transformation;
    }

BatchExecSink.translateToPlanInternal

    protected Transformation<Object> translateToPlanInternal(PlannerBase planner, ExecNodeConfig config) {
        final Transformation<RowData> inputTransform =
                (Transformation<RowData>) getInputEdges().get(0).translateToPlan(planner);
        // org.apache.hudi.table.HoodieTableSink
        final DynamicTableSink tableSink = tableSinkSpec.getTableSink(planner.getFlinkContext());
        // Key step: CommonExecSink.createSinkTransformation
        return createSinkTransformation(planner.getExecEnv(), config, inputTransform, tableSink, -1, false);
    }

CommonExecSink.createSinkTransformation

The tableSink here is a HoodieTableSink; its getSinkRuntimeProvider method is called and returns the runtimeProvider (without executing the method body yet).

protected Transformation<Object> createSinkTransformation(
        StreamExecutionEnvironment streamExecEnv,
        ExecNodeConfig config,
        Transformation<RowData> inputTransform,
        // the tableSink here is a HoodieTableSink
        DynamicTableSink tableSink,
        int rowtimeFieldIndex,
        boolean upsertMaterialize) {
    final ResolvedSchema schema = tableSinkSpec.getContextResolvedTable().getResolvedSchema();
    final SinkRuntimeProvider runtimeProvider =
            // Key step: HoodieTableSink.getSinkRuntimeProvider
            tableSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(isBounded));
    final RowType physicalRowType = getPhysicalRowType(schema);
    final int[] primaryKeys = getPrimaryKeyIndices(physicalRowType, schema);
    final int sinkParallelism = deriveSinkParallelism(inputTransform, runtimeProvider);
    final int inputParallelism = inputTransform.getParallelism();
    final boolean inputInsertOnly = inputChangelogMode.containsOnly(RowKind.INSERT);
    final boolean hasPk = primaryKeys.length > 0;
    ...
    return (Transformation<Object>)
            // Key step: CommonExecSink.applySinkProvider
            applySinkProvider(
                    sinkTransform,
                    streamExecEnv,
                    runtimeProvider,
                    rowtimeFieldIndex,
                    sinkParallelism,
                    config);
}

CommonExecSink.applySinkProvider

A dataStream is first created via new DataStream<>(env, sinkTransformation); then provider.consumeDataStream is executed, which invokes the method body of HoodieTableSink.getSinkRuntimeProvider. The provider here is the DataStreamSinkProviderAdapter returned by HoodieTableSink.getSinkRuntimeProvider.

    private Transformation<?> applySinkProvider(
            Transformation<RowData> inputTransform,
            StreamExecutionEnvironment env,
            SinkRuntimeProvider runtimeProvider,
            int rowtimeFieldIndex,
            int sinkParallelism,
            ExecNodeConfig config) {
        TransformationMetadata sinkMeta = createTransformationMeta(SINK_TRANSFORMATION, config);
        if (runtimeProvider instanceof DataStreamSinkProvider) {
            Transformation<RowData> sinkTransformation =
                    applyRowtimeTransformation(inputTransform, rowtimeFieldIndex, sinkParallelism, config);
            // create the dataStream
            final DataStream<RowData> dataStream = new DataStream<>(env, sinkTransformation);
            final DataStreamSinkProvider provider = (DataStreamSinkProvider) runtimeProvider;
            // Key step: provider.consumeDataStream
            return provider.consumeDataStream(createProviderContext(config), dataStream).getTransformation();
        } else if (runtimeProvider instanceof TransformationSinkProvider) {
            ...

provider.consumeDataStream (already covered above under DataStreamSinkProviderAdapter)

It invokes the method body (the lambda expression) of HoodieTableSink.getSinkRuntimeProvider, which carries out the subsequent Hudi write logic.
This answers the second question.

  default DataStreamSink<?> consumeDataStream(ProviderContext providerContext, DataStream<RowData> dataStream) {
    return consumeDataStream(dataStream);
  }

Summary

This article is only a brief record of my process of debugging the Hudi Flink SQL source code; it does not analyze the source code in depth (my own level is not there yet). The main purpose was to work out the main code path from the Table API entry point to createDynamicTableSink returning a HoodieTableSink, and where the method body of HoodieTableSink.getSinkRuntimeProvider is invoked to carry out the subsequent Hudi write logic, which should make later analysis and study of the Hudi source code easier.

New things learned while writing this article: functional interfaces and how lambda expressions implement them.
