264. Java — Collecting JSON Data from Kafka Topic A, Parsing It into Individual Records, and Writing Them to Kafka Topic B

1. Purpose

Because Hive here runs in a single-machine environment, parsing the huge volume of raw JSON inside Hive would be far too slow. The JSON must therefore be parsed into individual fields, one CSV record per message, before it ever reaches Hive.

2. Creating the Spring Boot Project in IDEA

3. Project Files

3.1 pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.hurys</groupId>
    <artifactId>hurrys-jw-kafka</artifactId>
    <version>1.0.0</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <java.version>1.8</java.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <spring-boot.version>2.6.13</spring-boot.version>
    </properties>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-dependencies</artifactId>
                <version>${spring-boot.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-log4j2</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.83</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <version>${spring-boot.version}</version>
                <executions>
                    <execution>
                        <id>repackage</id>
                        <goals>
                            <goal>repackage</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

3.2 application.yml

kafka:
  servers: 192.168.10.12:9092

server:
  port: 9830

spring:
  application:
    name: jw-kafka
  kafka:
    bootstrap-servers: ${kafka.servers}
    consumer:
      group-id: jw-kafka
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      auto-offset-reset: earliest
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

3.3 log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<!-- status controls log4j2's own internal logging; it can be omitted. Set it to "trace" to see log4j2's detailed internal output. -->
<!-- monitorInterval: log4j2 re-reads this file and reconfigures itself automatically, at the given interval in seconds -->
<Configuration status="OFF" monitorInterval="600">
    <!-- Level priority: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL.
         To see DEBUG output, change the ThresholdFilter in <RollingFile name="RollingFileInfo">
         from INFO to DEBUG and set <root level="DEBUG">. -->
    <Properties>
        <!-- Pattern: %d is the date, %thread the thread name, %-5level the level padded to 5 chars,
             %msg the log message, %n a newline; %logger{36} would cap the logger name at 36 chars -->
        <property name="LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %class{36} %M() @%L - %msg%n"/>
        <!-- Where log files are stored -->
        <property name="FILE_PATH" value="/home/hurys-log/jw-kafka"/>
        <property name="FILE_DAY_PATH" value="/home/hurys-log/jw-kafka/%d{yyyy-MM}/%d{yyyy-MM-dd}"/>
    </Properties>

    <Appenders>
        <!-- Console output -->
        <Console name="Console" target="SYSTEM_OUT">
            <!-- Accept events at this level and above (onMatch), reject everything else (onMismatch) -->
            <ThresholdFilter level="DEBUG" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
        </Console>

        <!-- General log file; whenever a file exceeds the size limit it is compressed and
             archived under a year-month/day folder -->
        <RollingFile name="RollingFileInfo" fileName="${FILE_PATH}/info.log"
                     filePattern="${FILE_DAY_PATH}/INFO-%d{yyyy-MM-dd}_%i.log.gz">
            <ThresholdFilter level="DEBUG" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <!-- interval: how often to roll over; the default is one unit of the date pattern -->
                <TimeBasedTriggeringPolicy modulate="true" interval="1"/>
                <!-- Per-file size limit; pairs with the %i counter in filePattern -->
                <SizeBasedTriggeringPolicy size="100MB"/>
            </Policies>
            <!-- Without DefaultRolloverStrategy, at most 7 files per folder are kept before overwriting -->
            <DefaultRolloverStrategy max="20">
                <!-- maxDepth is 3 because the path from basePath to the log file is
                     %d{yyyy-MM}/%d{yyyy-MM-dd}/INFO-%d{yyyy-MM-dd}_%i.log.gz -->
                <Delete basePath="${FILE_PATH}" maxDepth="3">
                    <!-- age must match filePattern's granularity: the pattern goes down to dd,
                         so use Nd here (ND would have no effect). Also keep the number > 2,
                         or recently written files may still be in use and fail to delete. -->
                    <IfLastModified age="30d"/>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingFile>

        <!-- WARN and above, archived and compressed the same way -->
        <RollingFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log"
                     filePattern="${FILE_DAY_PATH}/WARN-%d{yyyy-MM-dd}_%i.log.gz">
            <ThresholdFilter level="WARN" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <TimeBasedTriggeringPolicy modulate="true" interval="1"/>
                <SizeBasedTriggeringPolicy size="100MB"/>
            </Policies>
            <DefaultRolloverStrategy max="15"/>
        </RollingFile>

        <!-- ERROR and above, archived and compressed the same way -->
        <RollingFile name="RollingFileError" fileName="${FILE_PATH}/error.log"
                     filePattern="${FILE_DAY_PATH}/ERROR-%d{yyyy-MM-dd}_%i.log.gz">
            <ThresholdFilter level="ERROR" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <TimeBasedTriggeringPolicy interval="1"/>
                <SizeBasedTriggeringPolicy size="100MB"/>
            </Policies>
            <DefaultRolloverStrategy max="15"/>
        </RollingFile>
    </Appenders>

    <!-- Logger elements assign log levels to specific packages; an appender only takes
         effect once a logger references it -->
    <loggers>
        <!-- Filter out noisy DEBUG output from MyBatis and Spring.
             With additivity="false", a child logger writes only to its own appenders,
             not to its parent's. -->
        <logger name="org.mybatis" level="error" additivity="false">
            <AppenderRef ref="Console"/>
        </logger>
        <Logger name="org.springframework" level="error" additivity="false">
            <AppenderRef ref="Console"/>
        </Logger>
        <root level="INFO">
            <appender-ref ref="Console"/>
            <appender-ref ref="RollingFileInfo"/>
            <appender-ref ref="RollingFileWarn"/>
            <appender-ref ref="RollingFileError"/>
        </root>
    </loggers>
</Configuration>

3.4 KafkaConstants

package com.hurys.kafka.constant;

public interface KafkaConstants {

    /**
     * Static queue data
     */
    String TOPIC_INTERNAL_DATA_STATIC_QUEUE = "topic_internal_data_static_queue";

    /**
     * Dynamic queue data
     */
    String TOPIC_INTERNAL_DATA_DYNAMIC_QUEUE = "topic_internal_data_dynamic_queue";

    // The listener below also references the constants that follow; their values
    // are assumed to follow the same naming pattern as the two topics above.

    /**
     * Turn-ratio data
     */
    String TOPIC_INTERNAL_DATA_TURN_RATIO = "topic_internal_data_turn_ratio";

    /**
     * Area data
     */
    String TOPIC_INTERNAL_DATA_AREA = "topic_internal_data_area";

    /**
     * Statistics data
     */
    String TOPIC_INTERNAL_DATA_STATISTICS = "topic_internal_data_statistics";

    /**
     * Event resource data
     */
    String TOPIC_INTERNAL_DATA_EVENT_RESOURCE = "topic_internal_data_event_resource";

    /**
     * Event data
     */
    String TOPIC_INTERNAL_DATA_EVENT = "topic_internal_data_event";
}

3.5 JsonUtil

package com.hurys.kafka.util;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.serializer.SerializerFeature;

public class JsonUtil {

    /**
     * Serializes an object to a JSON string without dropping null-valued fields.
     *
     * @param object the object to serialize
     * @return the resulting JSON string
     */
    public static String objectToJson(Object object) {
        return JSON.toJSONString(object, SerializerFeature.WriteMapNullValue);
    }
}

3.6 KafkaApplication

package com.hurys.kafka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class KafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaApplication.class, args);
    }
}

3.7 KafkaServiceListener

package com.hurys.kafka.listener;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;
import com.hurys.kafka.constant.KafkaConstants;
import com.hurys.kafka.util.JsonUtil;
import lombok.extern.log4j.Log4j2;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

import javax.annotation.Resource;
import java.util.List;

/**
 * Kafka consumer service
 *
 * @author wangjing
 * @Date 2024/09/09
 */
@Service
@Log4j2
public class KafkaServiceListener {

    @Resource
    private KafkaTemplate<String, String> kafkaTemplate;

    // 1. Turn-ratio data
    @KafkaListener(topics = KafkaConstants.TOPIC_INTERNAL_DATA_TURN_RATIO)
    public void processData(String message) {
        try {
            JSONObject jsonObject = JSON.parseObject(message);
            // System.out.println("raw data" + JsonUtil.objectToJson(jsonObject));

            // Extract the radar header fields
            String device_no = jsonObject.getString("deviceNo");
            String source_device_type = jsonObject.getString("sourceDeviceType");
            String sn = jsonObject.getString("sn");
            String model = jsonObject.getString("model");
            String createTime = jsonObject.getString("createTime");
            String create_time = createTime.substring(0, 19);

            JSONObject data = jsonObject.getJSONObject("data");
            String cycle = data.getString("cycle");
            String volume_sum = data.getString("volumeSum");
            String speed_avg = data.getString("speedAvg");
            String volume_left = data.getString("volumeLeft");
            String speed_left = data.getString("speedLeft");
            String volume_straight = data.getString("volumeStraight");
            String speed_straight = data.getString("speedStraight");
            String volume_right = data.getString("volumeRight");
            String speed_right = data.getString("speedRight");
            String volume_turn = data.getString("volumeTurn");
            String speed_turn = data.getString("speedTurn");

            String outputLine = device_no + "," + source_device_type + "," + sn + "," + model + ","
                    + create_time + "," + cycle + "," + volume_sum + "," + speed_avg + ","
                    + volume_left + "," + speed_left + "," + volume_straight + "," + speed_straight + ","
                    + volume_right + "," + speed_right + "," + volume_turn + "," + speed_turn;
            // System.out.println("outputLine 1: " + outputLine);
            kafkaTemplate.send("topic_db_data_turn_ratio", outputLine);
        } catch (Exception e) {
            log.error("process turn_ratio error", e);
        }
    }

    // 2. Static queue data
    @KafkaListener(topics = KafkaConstants.TOPIC_INTERNAL_DATA_STATIC_QUEUE)
    public void processData2(String message) {
        try {
            JSONObject jsonObject = JSON.parseObject(message);
            // Extract the radar header fields
            String device_no = jsonObject.getString("deviceNo");
            String source_device_type = jsonObject.getString("sourceDeviceType");
            String sn = jsonObject.getString("sn");
            String model = jsonObject.getString("model");
            String createTime = jsonObject.getString("createTime");
            String create_time = createTime.substring(0, 19);

            JSONObject data = jsonObject.getJSONObject("data");
            List<JSONObject> queueList = data.getJSONArray("queueList").toJavaList(JSONObject.class);
            for (JSONObject queueItem : queueList) {
                String lane_no = queueItem.getString("laneNo");
                String lane_type = queueItem.getString("laneType");
                String queue_count = queueItem.getString("queueCount");
                String queue_len = queueItem.getString("queueLen");
                String queue_head = queueItem.getString("queueHead");
                String queue_tail = queueItem.getString("queueTail");

                String outputLine = device_no + "," + source_device_type + "," + sn + "," + model + ","
                        + create_time + "," + lane_no + "," + lane_type + "," + queue_count + ","
                        + queue_len + "," + queue_head + "," + queue_tail;
                System.out.println("outputLine 2: " + outputLine);
                kafkaTemplate.send("topic_db_data_static_queue", outputLine);
            }
        } catch (Exception e) {
            log.error("process static_queue error", e);
        }
    }

    // 7. Area data
    @KafkaListener(topics = KafkaConstants.TOPIC_INTERNAL_DATA_AREA)
    public void processData7(String message) {
        try {
            JSONObject jsonObject = JSON.parseObject(message);
            // Extract the radar header fields
            String device_no = jsonObject.getString("deviceNo");
            String source_device_type = jsonObject.getString("sourceDeviceType");
            String sn = jsonObject.getString("sn");
            String model = jsonObject.getString("model");
            String createTime = jsonObject.getString("createTime");
            String create_time = createTime.substring(0, 19);

            JSONObject data = jsonObject.getJSONObject("data");
            List<JSONObject> areaStatusList = data.getJSONArray("areaStatusList").toJavaList(JSONObject.class);
            for (JSONObject areaStatus : areaStatusList) {
                String area_no = areaStatus.getString("areaNo");
                List<JSONObject> laneStatusList = areaStatus.getJSONArray("laneStatusList").toJavaList(JSONObject.class);
                for (JSONObject laneItem : laneStatusList) {
                    String lane_no = laneItem.getString("laneNo");
                    String lane_type = laneItem.getString("laneType");
                    String target_count = laneItem.getString("targetCount");
                    String space_occupancy = laneItem.getString("spaceOccupancy");
                    String pareto = laneItem.getString("pareto");
                    String speed_avg = laneItem.getString("speedAvg");
                    String speed_head = laneItem.getString("speedHead");
                    String speed_tail = laneItem.getString("speedTail");
                    String pos_head = laneItem.getString("posHead");
                    String pos_tail = laneItem.getString("posTail");
                    String average_arrival_time = laneItem.getString("averageArrivalTime");
                    String head_position = laneItem.getString("headPosition");
                    String tail_position = laneItem.getString("tailPosition");

                    String outputLine = device_no + "," + source_device_type + "," + sn + "," + model + ","
                            + create_time + "," + lane_no + "," + lane_type + "," + target_count + ","
                            + space_occupancy + "," + pareto + "," + speed_avg + "," + speed_head + ","
                            + speed_tail + "," + pos_head + "," + pos_tail + "," + area_no + ","
                            + average_arrival_time + "," + head_position + "," + tail_position;
                    // System.out.println("outputLine 7: " + outputLine);
                    kafkaTemplate.send("topic_db_data_area", outputLine);
                }
            }
        } catch (Exception e) {
            log.error("process area error", e);
        }
    }

    // 8. Statistics data
    @KafkaListener(topics = KafkaConstants.TOPIC_INTERNAL_DATA_STATISTICS)
    public void processData8(String message) {
        try {
            JSONObject jsonObject = JSON.parseObject(message);
            // Extract the radar header fields
            String device_no = jsonObject.getString("deviceNo");
            String source_device_type = jsonObject.getString("sourceDeviceType");
            String sn = jsonObject.getString("sn");
            String model = jsonObject.getString("model");
            String createTime = jsonObject.getString("createTime");
            String create_time = createTime.substring(0, 19);

            JSONObject data = jsonObject.getJSONObject("data");
            String cycle = data.getString("cycle");
            List<JSONObject> sectionList = data.getJSONArray("sectionList").toJavaList(JSONObject.class);
            for (JSONObject sectionStatus : sectionList) {
                String section_no = sectionStatus.getString("sectionNo");
                List<JSONObject> coilList = sectionStatus.getJSONArray("coilList").toJavaList(JSONObject.class);
                for (JSONObject coilItem : coilList) {
                    String lane_no = coilItem.getString("laneNo");
                    String lane_type = coilItem.getString("laneType");
                    String coil_no = coilItem.getString("coilNo");
                    String volume_sum = coilItem.getString("volumeSum");
                    String volume_person = coilItem.getString("volumePerson");
                    String volume_car_non = coilItem.getString("volumeCarNon");
                    String volume_car_small = coilItem.getString("volumeCarSmall");
                    String volume_car_middle = coilItem.getString("volumeCarMiddle");
                    String volume_car_big = coilItem.getString("volumeCarBig");
                    String speed_avg = coilItem.getString("speedAvg");
                    String speed_85 = coilItem.getString("speed85");
                    String time_occupancy = coilItem.getString("timeOccupancy");
                    String average_headway = coilItem.getString("averageHeadway");
                    String average_gap = coilItem.getString("averageGap");

                    String outputLine = device_no + "," + source_device_type + "," + sn + "," + model + ","
                            + create_time + "," + cycle + "," + lane_no + "," + lane_type + ","
                            + section_no + "," + coil_no + "," + volume_sum + "," + volume_person + ","
                            + volume_car_non + "," + volume_car_small + "," + volume_car_middle + ","
                            + volume_car_big + "," + speed_avg + "," + speed_85 + "," + time_occupancy + ","
                            + average_headway + "," + average_gap;
                    // System.out.println("outputLine 8: " + outputLine);
                    kafkaTemplate.send("topic_db_data_statistics", outputLine);
                }
            }
        } catch (Exception e) {
            log.error("process statistics error", e);
        }
    }

    // 9. Event resources
    @KafkaListener(topics = KafkaConstants.TOPIC_INTERNAL_DATA_EVENT_RESOURCE)
    public void processData9(String message) {
        try {
            JSONObject jsonObject = JSON.parseObject(message);
            // Extract the radar header fields
            String device_no = jsonObject.getString("deviceNo");
            String source_device_type = jsonObject.getString("sourceDeviceType");
            String sn = jsonObject.getString("sn");
            String model = jsonObject.getString("model");
            String createTime = jsonObject.getString("createTime");
            String create_time = createTime.substring(0, 19);

            JSONObject data = jsonObject.getJSONObject("data");
            String event_id = data.getString("eventId");
            // Take the first element of each array as text
            JSONArray pictureArray = data.getJSONArray("picture");
            String picture = pictureArray.getString(0);
            JSONArray videoArray = data.getJSONArray("video");
            String video = videoArray.getString(0);

            String outputLine = device_no + "," + source_device_type + "," + sn + "," + model + ","
                    + create_time + "," + event_id + "," + picture + "," + video;
            // System.out.println("outputLine 9: " + outputLine);
            kafkaTemplate.send("topic_db_data_event_resource", outputLine);
        } catch (Exception e) {
            log.error("process event_resource error", e);
        }
    }

    // 10. Event data
    @KafkaListener(topics = KafkaConstants.TOPIC_INTERNAL_DATA_EVENT)
    public void processData10(String message) {
        try {
            JSONObject jsonObject = JSON.parseObject(message);
            // System.out.println("raw data" + JsonUtil.objectToJson(jsonObject));

            // Extract the radar header fields
            String device_no = jsonObject.getString("deviceNo");
            String source_device_type = jsonObject.getString("sourceDeviceType");
            String sn = jsonObject.getString("sn");
            String model = jsonObject.getString("model");
            String createTime = jsonObject.getString("createTime");
            String create_time = createTime.substring(0, 19);
            String event_id = jsonObject.getString("eventId");
            String event_type = jsonObject.getString("eventType");
            String state = jsonObject.getString("state");

            switch (event_type) {
                case "QueueOverrun":
                    // Handle QueueOverrun events
                    JSONObject data = jsonObject.getJSONObject("data");
                    String station = data.getString("station");
                    String flow = data.getString("flow");
                    List<JSONObject> queueList = data.getJSONArray("queueList").toJavaList(JSONObject.class);
                    for (JSONObject queueItem : queueList) {
                        String lane_no = queueItem.getString("laneNo");
                        String queue_len = queueItem.getString("queueLen");
                        String geography_head = queueItem.getString("geographyHead");
                        String geography_tail = queueItem.getString("geographyTail");
                        String queue_count = queueItem.getString("queueCount");
                        String speed_avg = queueItem.getString("speedAvg");
                        // Columns not carried by this event type are emitted as null
                        String event_type_detail = null;
                        String area_no = null;
                        String lane_no_original = null;
                        String target_id = null;
                        String target_type = null;
                        String speed = null;
                        String limit_speed = null;
                        String pos_x = null;
                        String pos_y = null;
                        String pos_z = null;
                        String longitude = null;
                        String latitude = null;
                        String altitude = null;
                        String area_num = null;
                        String space_occupancy = null;
                        String congestion_grade = null;
                        String congestion_length = null;
                        String length = null;
                        String width = null;
                        String height = null;
                        String vehicle_type = null;
                        String vehicle_color = null;
                        String plate_type = null;
                        String plate_color = null;
                        String plate_number = null;

                        String outputLine = device_no + "," + source_device_type + "," + sn + "," + model + ","
                                + create_time + "," + event_id + "," + event_type + "," + event_type_detail + ","
                                + state + "," + area_no + "," + station + "," + flow + "," + lane_no + ","
                                + lane_no_original + "," + target_id + "," + target_type + "," + queue_len + ","
                                + queue_count + "," + speed + "," + speed_avg + "," + limit_speed + ","
                                + pos_x + "," + pos_y + "," + pos_z + "," + geography_head + "," + geography_tail + ","
                                + longitude + "," + latitude + "," + altitude + "," + area_num + ","
                                + space_occupancy + "," + congestion_grade + "," + congestion_length + ","
                                + length + "," + width + "," + height + "," + vehicle_type + "," + vehicle_color + ","
                                + plate_type + "," + plate_color + "," + plate_number;
                        // System.out.println("outputLine 10: " + outputLine);
                        kafkaTemplate.send("topic_db_data_event", outputLine);
                    }
                    break;
                case "Debris":
                    // Handle Debris events
                    JSONObject data12 = jsonObject.getJSONObject("data");
                    String event_type_detail12 = null;
                    String area_no12 = data12.getString("areaNo");
                    String station12 = data12.getString("station");
                    String flow12 = null;
                    String lane_no12 = null;
                    String lane_no_original12 = null;
                    String target_id12 = null;
                    String target_type12 = null;
                    String queue_len12 = null;
                    String queue_count12 = null;
                    String speed12 = null;
                    String speed_avg12 = null;
                    String limit_speed12 = null;
                    String pos_x12 = data12.getString("posX");
                    String pos_y12 = data12.getString("posY");
                    String pos_z12 = data12.getString("posZ");
                    String geography_head12 = null;
                    String geography_tail12 = null;
                    String longitude12 = data12.getString("longitude");
                    String latitude12 = data12.getString("latitude");
                    String altitude12 = data12.getString("altitude");
                    String area_num12 = null;
                    String space_occupancy12 = null;
                    String congestion_grade12 = null;
                    String congestion_length12 = null;
                    String length12 = data12.getString("length");
                    String width12 = data12.getString("width");
                    String height12 = data12.getString("height");
                    String vehicle_type12 = null;
                    String vehicle_color12 = null;
                    String plate_type12 = null;
                    String plate_color12 = null;
                    String plate_number12 = null;

                    String outputLine12 = device_no + "," + source_device_type + "," + sn + "," + model + ","
                            + create_time + "," + event_id + "," + event_type + "," + event_type_detail12 + ","
                            + state + "," + area_no12 + "," + station12 + "," + flow12 + "," + lane_no12 + ","
                            + lane_no_original12 + "," + target_id12 + "," + target_type12 + "," + queue_len12 + ","
                            + queue_count12 + "," + speed12 + "," + speed_avg12 + "," + limit_speed12 + ","
                            + pos_x12 + "," + pos_y12 + "," + pos_z12 + "," + geography_head12 + ","
                            + geography_tail12 + "," + longitude12 + "," + latitude12 + "," + altitude12 + ","
                            + area_num12 + "," + space_occupancy12 + "," + congestion_grade12 + ","
                            + congestion_length12 + "," + length12 + "," + width12 + "," + height12 + ","
                            + vehicle_type12 + "," + vehicle_color12 + "," + plate_type12 + ","
                            + plate_color12 + "," + plate_number12;
                    // System.out.println("outputLine 12: " + outputLine12);
                    kafkaTemplate.send("topic_db_data_event", outputLine12);
                    break;
                default:
                    // Other event types: no handling
                    break;
            }
        } catch (Exception e) {
            log.error("process event error", e);
        }
    }
}
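The per-record transformation that every listener method applies (truncate `createTime` to second precision, then join the extracted fields into one CSV line) can be exercised in isolation, without Kafka or fastjson. The sketch below uses only the JDK, and the field values are made-up sample data:

```java
public class FlattenDemo {

    // Mirrors createTime.substring(0, 19) in the listener:
    // "yyyy-MM-dd HH:mm:ss.SSS" -> "yyyy-MM-dd HH:mm:ss"
    static String truncateToSeconds(String createTime) {
        return createTime.substring(0, 19);
    }

    // Mirrors the listener's comma concatenation of the extracted fields
    static String toCsvLine(String... fields) {
        return String.join(",", fields);
    }

    public static void main(String[] args) {
        String line = toCsvLine("device001", "radar", "SN001", "modelX",
                truncateToSeconds("2024-09-09 10:15:30.123"), "3", "12");
        System.out.println(line);
        // device001,radar,SN001,modelX,2024-09-09 10:15:30,3,12
    }
}
```

One caveat of plain comma-joining: it breaks if a field value itself contains a comma. The radar fields here are numeric or short codes, so the approach holds, but free-text fields would need a CSV escaping step.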

4. Start the KafkaApplication Task, Then Open a Consumer Window on Kafka Topic B to Verify

4.1 Start the KafkaApplication task
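Besides running it from the IDE, the service can be packaged and started from the command line. This is a sketch assuming the standard Maven layout; the jar name follows from the pom's artifactId and version, and the `repackage` goal configured above makes the jar executable:

```shell
mvn clean package
java -jar target/hurrys-jw-kafka-1.0.0.jar
```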

4.2 Open a consumer window on Kafka topic B
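For example, with the console consumer that ships with Kafka (the broker address comes from application.yml; swap in whichever output topic you want to watch):

```shell
kafka-console-consumer.sh --bootstrap-server 192.168.10.12:9092 \
    --topic topic_db_data_static_queue \
    --from-beginning
```

Each message should be one flattened CSV line produced by the listener.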

Done!

