Service Startup: Big Data Platform Setup

1. Start the ZooKeeper cluster

/home/cluster/zookeeper.sh start

/home/cluster/zookeeper.sh stop
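
Here zookeeper.sh is a site-local wrapper script, not part of the ZooKeeper distribution. A minimal sketch of what such a wrapper might look like, assuming a standard ZooKeeper install under /home/cluster/zookeeper and the hosts node88/node89/node99 (both assumptions, since the real script is not shown):

#!/usr/bin/env bash
# Hypothetical cluster-wide wrapper; adjust ZK_HOME and the host list.
ZK_HOME=/home/cluster/zookeeper
for host in node88 node89 node99; do
  ssh "$host" "$ZK_HOME/bin/zkServer.sh $1"   # start | stop | status
done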

2. Start the Hadoop (HDFS) and YARN clusters

/home/cluster/hadoop-3.3.6/sbin/start-dfs.sh

/home/cluster/hadoop-3.3.6/sbin/start-yarn.sh

/home/cluster/hadoop-3.3.6/sbin/stop-dfs.sh

/home/cluster/hadoop-3.3.6/sbin/stop-yarn.sh
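
Once both start scripts return, a quick health check with the stock Hadoop and YARN CLIs confirms the daemons are up:

jps                      # expect NameNode/DataNode plus ResourceManager/NodeManager
hdfs dfsadmin -report    # live DataNodes and remaining capacity
yarn node -list          # NodeManagers registered with the ResourceManager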

3. Start the Spark cluster

/home/cluster/spark-3.4.1-bin-hadoop3/sbin/start-all.sh

/home/cluster/spark-3.4.1-bin-hadoop3/sbin/stop-all.sh
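
To confirm the standalone daemons started (the Master web UI listens on port 8080 by default):

jps | grep -E 'Master|Worker'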

Create the Spark event log directory and expose Hadoop's native libraries (the export only lasts for the current shell; persist it in spark-env.sh if it is needed at every launch):

hdfs dfs -mkdir -p /tmp/spark-events

export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native/:$LD_LIBRARY_PATH
 

[root@node88 bin]# /home/cluster/spark-3.4.1-bin-hadoop3/bin/spark-submit --class com.example.cloud.KafkaSparkHoodie --master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m --executor-cores 1 /home/cluster/KafkaSparkHoodie.jar
23/09/27 11:37:43 INFO SparkContext: Running Spark version 3.3.3
23/09/27 11:37:44 INFO ResourceUtils: ==============================================================
23/09/27 11:37:44 INFO ResourceUtils: No custom resources configured for spark.driver.
23/09/27 11:37:44 INFO ResourceUtils: ==============================================================
23/09/27 11:37:44 INFO SparkContext: Submitted application: com.example.cloud.KafkaSparkHoodie
23/09/27 11:37:44 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 512, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/09/27 11:37:44 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
23/09/27 11:37:44 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/09/27 11:37:44 INFO SecurityManager: Changing view acls to: root
23/09/27 11:37:44 INFO SecurityManager: Changing modify acls to: root
23/09/27 11:37:44 INFO SecurityManager: Changing view acls groups to: 
23/09/27 11:37:44 INFO SecurityManager: Changing modify acls groups to: 
23/09/27 11:37:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
23/09/27 11:37:44 INFO Utils: Successfully started service 'sparkDriver' on port 41455.
23/09/27 11:37:44 INFO SparkEnv: Registering MapOutputTracker
23/09/27 11:37:44 INFO SparkEnv: Registering BlockManagerMaster
23/09/27 11:37:44 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/09/27 11:37:44 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/09/27 11:37:44 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/09/27 11:37:44 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-fc12136a-2b85-4e9a-9402-1a9cd5848849
23/09/27 11:37:44 INFO MemoryStore: MemoryStore started with capacity 93.3 MiB
23/09/27 11:37:44 INFO SparkEnv: Registering OutputCommitCoordinator
23/09/27 11:37:44 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/09/27 11:37:44 INFO SparkContext: Added JAR file:/home/cluster/KafkaSparkHoodie.jar at spark://node88:41455/jars/KafkaSparkHoodie.jar with timestamp 1695829063904
23/09/27 11:37:44 INFO FairSchedulableBuilder: Creating Fair Scheduler pools from default file: fairscheduler.xml
23/09/27 11:37:44 INFO FairSchedulableBuilder: Created pool: production, schedulingMode: FAIR, minShare: 2, weight: 1
23/09/27 11:37:44 INFO FairSchedulableBuilder: Created pool: test, schedulingMode: FIFO, minShare: 3, weight: 2
23/09/27 11:37:44 INFO FairSchedulableBuilder: Created default pool: default, schedulingMode: FIFO, minShare: 0, weight: 1
23/09/27 11:37:44 INFO Executor: Starting executor ID driver on host node88
23/09/27 11:37:44 INFO Executor: Starting executor with user classpath (userClassPathFirst = false): ''
23/09/27 11:37:44 INFO Executor: Fetching spark://node88:41455/jars/KafkaSparkHoodie.jar with timestamp 1695829063904
23/09/27 11:37:44 INFO TransportClientFactory: Successfully created connection to node88/10.10.10.88:41455 after 26 ms (0 ms spent in bootstraps)
23/09/27 11:37:44 INFO Utils: Fetching spark://node88:41455/jars/KafkaSparkHoodie.jar to /tmp/spark-76aff444-bab4-4750-bb7b-e24519621a6d/userFiles-d302459d-968c-47cc-8ba4-0b32ab4431fe/fetchFileTemp6103368611573950971.tmp
23/09/27 11:37:45 INFO Executor: Adding file:/tmp/spark-76aff444-bab4-4750-bb7b-e24519621a6d/userFiles-d302459d-968c-47cc-8ba4-0b32ab4431fe/KafkaSparkHoodie.jar to class loader
23/09/27 11:37:45 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39695.
23/09/27 11:37:45 INFO NettyBlockTransferService: Server created on node88:39695
23/09/27 11:37:45 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/09/27 11:37:45 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, node88, 39695, None)
23/09/27 11:37:45 INFO BlockManagerMasterEndpoint: Registering block manager node88:39695 with 93.3 MiB RAM, BlockManagerId(driver, node88, 39695, None)
23/09/27 11:37:45 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, node88, 39695, None)
23/09/27 11:37:45 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, node88, 39695, None)
23/09/27 11:37:45 ERROR SparkContext: Error initializing SparkContext.
java.io.FileNotFoundException: File file:/tmp/spark-events does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
    at org.apache.spark.deploy.history.EventLogFileWriter.requireLogBaseDirAsDirectory(EventLogFileWriters.scala:77)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:221)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:83)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:622)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2714)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
    at com.example.cloud.KafkaSparkHoodie$.main(KafkaSparkHoodie.scala:31)
    at com.example.cloud.KafkaSparkHoodie.main(KafkaSparkHoodie.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:984)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:191)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:214)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1072)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1081)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/09/27 11:37:45 INFO SparkUI: Stopped Spark web UI at http://node88:4040
23/09/27 11:37:45 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/09/27 11:37:45 INFO MemoryStore: MemoryStore cleared
23/09/27 11:37:45 INFO BlockManager: BlockManager stopped
23/09/27 11:37:45 INFO BlockManagerMaster: BlockManagerMaster stopped
23/09/27 11:37:45 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/09/27 11:37:45 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.io.FileNotFoundException: File file:/tmp/spark-events does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
    at org.apache.spark.deploy.history.EventLogFileWriter.requireLogBaseDirAsDirectory(EventLogFileWriters.scala:77)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:221)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:83)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:622)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2714)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
    at com.example.cloud.KafkaSparkHoodie$.main(KafkaSparkHoodie.scala:31)
    at com.example.cloud.KafkaSparkHoodie.main(KafkaSparkHoodie.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:984)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:191)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:214)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1072)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1081)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/09/27 11:37:45 INFO ShutdownHookManager: Shutdown hook called
23/09/27 11:37:45 INFO ShutdownHookManager: Deleting directory /tmp/spark-19e10066-ccc4-4643-a8c7-e0174bdd6b83
23/09/27 11:37:45 INFO ShutdownHookManager: Deleting directory /tmp/spark-76aff444-bab4-4750-bb7b-e24519621a6d
[root@node88 bin]# 
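
The submit fails because spark.eventLog.dir resolved to the local filesystem (file:/tmp/spark-events), so the directory created on HDFS above was never consulted. Two possible fixes, sketched below; the NameNode address hdfs://node88:9000 is an assumption, so check fs.defaultFS in core-site.xml first:

# Either create the directory the default config actually expects, on local disk:
mkdir -p /tmp/spark-events
# ...or point the event log at the HDFS directory in spark-defaults.conf
# (hdfs://node88:9000 is an assumed NameNode address):
cat >> /home/cluster/spark-3.4.1-bin-hadoop3/conf/spark-defaults.conf <<'EOF'
spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://node88:9000/tmp/spark-events
EOF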
 

4. Start the Flink cluster

/home/cluster/flink/bin/start-cluster.sh

/home/cluster/flink/bin/stop-cluster.sh
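
To verify the session cluster came up (the Flink web UI listens on port 8081 by default):

jps | grep -E 'StandaloneSessionClusterEntrypoint|TaskManagerRunner'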

5. Start Hive

/home/cluster/hive/bin/hive

Step 1: the metastore service

/home/cluster/hive/bin/hive --service metastore

or, backgrounded with both stdout and stderr discarded (>/dev/null must come before 2>&1 for that to work):

/home/cluster/hive/bin/hive --service metastore >/dev/null 2>&1 &

Step 2: the HiveServer2 service

/home/cluster/hive/bin/hive --service hiveserver2

or, backgrounded the same way:

/home/cluster/hive/bin/hive --service hiveserver2 >/dev/null 2>&1 &
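
Both services take a moment to initialize. To confirm they are listening (default ports: 9083 for the metastore, 10000 for HiveServer2):

ss -lntp | grep -E ':9083|:10000'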

Connect with Beeline to test; the table data ends up in HDFS.

SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
Beeline version 3.1.3 by Apache Hive
beeline> !connect jdbc:hive2://node88:10000
Connecting to jdbc:hive2://node88:10000
Enter username for jdbc:hive2://node88:10000: root
Enter password for jdbc:hive2://node88:10000: ******
Connected to: Apache Hive (version 3.1.3)
Driver: Hive JDBC (version 3.1.3)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://node88:10000> show database;
Error: Error while compiling statement: FAILED: ParseException line 1:5 cannot recognize input near 'show' 'database' '<EOF>' in ddl statement (state=42000,code=40000)
0: jdbc:hive2://node88:10000> show databases;
+----------------+
| database_name  |
+----------------+
| default        |
+----------------+
1 row selected (0.778 seconds)
0: jdbc:hive2://node88:10000> use default;
No rows affected (0.081 seconds)
0: jdbc:hive2://node88:10000> CREATE TABLE IF NOT EXISTS default.hive_demo(id INT,name STRING,ip STRING,time TIMESTAMP) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
Error: Error while compiling statement: FAILED: ParseException line 1:74 cannot recognize input near 'time' 'TIMESTAMP' ')' in column name or constraint (state=42000,code=40000)
0: jdbc:hive2://node88:10000> CREATE TABLE IF NOT EXISTS default.hive_demo(id INT,name STRING,ip STRING,t TIMESTAMP) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
No rows affected (2.848 seconds)
0: jdbc:hive2://node88:10000> show tables;
+------------+
|  tab_name  |
+------------+
| hive_demo  |
+------------+
1 row selected (0.096 seconds)
0: jdbc:hive2://node88:10000> insert into hive_demo(id,name,ip,t) values("123456","liebe","10.10.10.88",now());
Error: Error while compiling statement: FAILED: SemanticException [Error 10011]: Invalid function now (state=42000,code=10011)
0: jdbc:hive2://node88:10000> insert into hive_demo(id,name,ip,t) values("123456","liebe","10.10.10.88",unix_timestamp());
No rows affected (180.349 seconds)
0: jdbc:hive2://node88:10000> 
0: jdbc:hive2://node88:10000> 
0: jdbc:hive2://node88:10000> select * from hive_demo
. . . . . . . . . . . . . . > ;
+---------------+-----------------+---------------+--------------------------+
| hive_demo.id  | hive_demo.name  | hive_demo.ip  |       hive_demo.t        |
+---------------+-----------------+---------------+--------------------------+
| 123456        | liebe           | 10.10.10.88   | 1970-01-20 15:00:56.674  |
+---------------+-----------------+---------------+--------------------------+
1 row selected (0.278 seconds)
0: jdbc:hive2://node88:10000> 
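
Note the 1970 value in the t column: unix_timestamp() returns epoch seconds, and Hive appears to coerce the bare integer to TIMESTAMP as if it were milliseconds, which lands in January 1970. Hive's equivalent of now() is current_timestamp(); a corrected insert, runnable non-interactively with beeline -e (same sample values as above):

/home/cluster/hive/bin/beeline -u jdbc:hive2://node88:10000 -n root \
  -e "insert into default.hive_demo(id,name,ip,t) values (123456,'liebe','10.10.10.88',current_timestamp());"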
 

That completes the command-line checks.

Querying Hive data from Spring Boot

For the JDBC connection to authenticate as root, HiveServer2 needs impersonation rights; the proxy-user properties below belong in Hadoop's core-site.xml (restart HDFS so they take effect):

<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>

The application's YAML configuration:

server:
  port: 8085
  tomcat:
    max-http-form-post-size: 200MB
  servlet:
    context-path: /hive

spring:
  profiles:
    active: sql

customize:
  hive:
    url: jdbc:hive2://10.10.10.88:10000/default
    type: com.alibaba.druid.pool.DruidDataSource
    username: root
    password: 123456
    driver-class-name: org.apache.hive.jdbc.HiveDriver
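
Before starting the application, the same JDBC URL can be sanity-checked from the shell:

/home/cluster/hive/bin/beeline -u "jdbc:hive2://10.10.10.88:10000/default" -n root -p 123456 -e "select * from hive_demo;"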

6. Start the Kafka cluster

/home/cluster/kafka_2.12-3.5.1/bin/kafka-server-start.sh /home/cluster/kafka_2.12-3.5.1/config/server.properties 
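
To keep the broker alive after the shell exits, the stock -daemon flag backgrounds it (run on every broker node):

/home/cluster/kafka_2.12-3.5.1/bin/kafka-server-start.sh -daemon /home/cluster/kafka_2.12-3.5.1/config/server.properties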
 

/home/cluster/kafka_2.12-3.5.1/bin/kafka-server-stop.sh

(The stop script takes no config argument; it signals the broker process on the local host.)

Create a topic

/home/cluster/kafka_2.12-3.5.1/bin/kafka-topics.sh --create --topic mysql-flink-kafka --replication-factor 3 --partitions 3 --bootstrap-server 10.10.10.88:9092,10.10.10.89:9092,10.10.10.99:9092
Created topic mysql-flink-kafka.


/home/cluster/kafka_2.12-3.5.1/bin/kafka-topics.sh --describe --topic mysql-flink-kafka --bootstrap-server 10.10.10.88:9092,10.10.10.89:9092,10.10.10.99:9092
Topic: mysql-flink-kafka    TopicId: g5_WRWLKR3WClRDZ_Vz2oA    PartitionCount: 3    ReplicationFactor: 3    Configs: 
    Topic: mysql-flink-kafka    Partition: 0    Leader: 1    Replicas: 1,0,2    Isr: 1,0,2
    Topic: mysql-flink-kafka    Partition: 1    Leader: 0    Replicas: 0,2,1    Isr: 0,2,1
    Topic: mysql-flink-kafka    Partition: 2    Leader: 2    Replicas: 2,1,0    Isr: 2,1,0

/home/cluster/kafka_2.12-3.5.1/bin/kafka-console-consumer.sh --bootstrap-server 10.10.10.88:9092,10.10.10.89:9092,10.10.10.99:9092 --topic mysql-flink-kafka --from-beginning
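
A companion console producer gives a quick end-to-end test; anything typed into it should appear in the consumer above:

/home/cluster/kafka_2.12-3.5.1/bin/kafka-console-producer.sh --bootstrap-server 10.10.10.88:9092 --topic mysql-flink-kafka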
