Atlas Lineage Analysis with Hive/Spark

Apache Atlas Deployment and Installation

Note that you should download the Atlas source release from the official website rather than checking out a branch from Git: code checked out from a branch may not run properly. This article builds against the Atlas 2.3.0 source release:

mvn clean -DskipTests package -Pdist
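
Assuming the build succeeds, the deployable tarballs should be produced under distro/target (names below are what a 2.3.0 build typically generates; verify locally):

distro/target/apache-atlas-2.3.0-server.tar.gz
distro/target/apache-atlas-2.3.0-hive-hook.tar.gz
distro/target/apache-atlas-2.3.0-hbase-hook.tar.gz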

Deployment Prerequisites

  • Elasticsearch 7.x
  • HBase 2.x
  • Kafka 2.x
  • ZooKeeper 3.4.x
  • Hive Metastore 3.x

Atlas Configuration (atlas-application.properties)

#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########  Graph Database Configs  ##########

# Graph Database

#Configures the graph database to use.  Defaults to JanusGraph
#atlas.graphdb.backend=org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase

# Graph Storage
# Set atlas.graph.storage.backend to the correct value for your desired storage
# backend. Possible values:
#
# hbase
# cassandra
# embeddedcassandra - Should only be set by building Atlas with  -Pdist,embedded-cassandra-solr
# berkeleyje
#
# See the configuration documentation for more information about configuring the various  storage backends.
#
atlas.graph.storage.backend=hbase2
atlas.graph.storage.hbase.table=apache_atlas_janus
# atlas.graph.storage.username=
# atlas.graph.storage.password=

#Hbase
#For standalone mode , specify localhost
#for distributed mode, specify zookeeper quorum here
atlas.graph.storage.hostname=10.0.0.141:2181,10.0.0.140:2181,10.0.0.142:2181
atlas.graph.storage.hbase.regions-per-server=1

#In order to use Cassandra as a backend, comment out the hbase specific properties above, and uncomment the
#the following properties
#atlas.graph.storage.clustername=
#atlas.graph.storage.port=

# Gremlin Query Optimizer
#
# Enables rewriting gremlin queries to maximize performance. This flag is provided as
# a possible way to work around any defects that are found in the optimizer until they
# are resolved.
#atlas.query.gremlinOptimizerEnabled=true

# Delete handler
#
# This allows the default behavior of doing "soft" deletes to be changed.
#
# Allowed Values:
# org.apache.atlas.repository.store.graph.v1.SoftDeleteHandlerV1 - all deletes are "soft" deletes
# org.apache.atlas.repository.store.graph.v1.HardDeleteHandlerV1 - all deletes are "hard" deletes
#
atlas.DeleteHandlerV1.impl=org.apache.atlas.repository.store.graph.v1.HardDeleteHandlerV1

# Entity audit repository
#
# This allows the default behavior of logging entity changes to hbase to be changed.
#
# Allowed Values:
# org.apache.atlas.repository.audit.HBaseBasedAuditRepository - log entity changes to hbase
# org.apache.atlas.repository.audit.CassandraBasedAuditRepository - log entity changes to cassandra
# org.apache.atlas.repository.audit.NoopEntityAuditRepository - disable the audit repository
#
atlas.EntityAuditRepository.impl=org.apache.atlas.repository.audit.HBaseBasedAuditRepository

# if Cassandra is used as a backend for audit from the above property, uncomment and set the following
# properties appropriately. If using the embedded cassandra profile, these properties can remain
# commented out.
# atlas.EntityAuditRepository.keyspace=atlas_audit
# atlas.EntityAuditRepository.replicationFactor=1

# Graph Search Index
atlas.graph.index.search.backend=elasticsearch

#Solr
#Solr cloud mode properties
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=
atlas.graph.index.search.solr.zookeeper-connect-timeout=60000
atlas.graph.index.search.solr.zookeeper-session-timeout=60000
atlas.graph.index.search.solr.wait-searcher=false

#Solr http mode properties
#atlas.graph.index.search.solr.mode=http
#atlas.graph.index.search.solr.http-urls=http://localhost:8983/solr

# ElasticSearch support (Tech Preview)
# Comment out above solr configuration, and uncomment the following two lines. Additionally, make sure the
# hostname field is set to a comma delimited set of elasticsearch master nodes, or an ELB that fronts the masters.
#
# Elasticsearch does not provide authentication out of the box, but does provide an option with the X-Pack product
# https://www.elastic.co/products/x-pack/security
#
# Alternatively, the JanusGraph documentation provides some tips on how to secure Elasticsearch without additional
# plugins: https://docs.janusgraph.org/latest/elasticsearch.html
atlas.graph.index.search.hostname=10.0.0.79:9200,10.0.0.80:9200,10.0.0.141:9200
atlas.graph.index.search.elasticsearch.client-only=true

# Solr-specific configuration property
atlas.graph.index.search.max-result-set-size=150

#########  Import Configs  #########
#atlas.import.temp.directory=/temp/import

#########  Notification Configs  #########
atlas.notification.embedded=false
atlas.kafka.data=${sys:atlas.home}/data/kafka
atlas.kafka.zookeeper.connect=10.0.0.141:2181,10.0.0.140:2181,10.0.0.142:2181/kafka
atlas.kafka.bootstrap.servers=10.0.0.141:9092,10.0.0.80:9092,10.0.0.79:9092
atlas.kafka.zookeeper.session.timeout.ms=400
atlas.kafka.zookeeper.connection.timeout.ms=200
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.auto.commit.interval.ms=1000
atlas.kafka.hook.group.id=atlas
atlas.kafka.enable.auto.commit=false
atlas.kafka.auto.offset.reset=earliest
atlas.kafka.session.timeout.ms=30000
atlas.kafka.offsets.topic.replication.factor=1
atlas.kafka.poll.timeout.ms=1000

atlas.notification.create.topics=true
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.notification.log.failed.messages=true
atlas.notification.consumer.retry.interval=500
atlas.notification.hook.retry.interval=1000
# Enable for Kerberized Kafka clusters
#atlas.notification.kafka.service.principal=kafka/_HOST@EXAMPLE.COM
#atlas.notification.kafka.keytab.location=/etc/security/keytabs/kafka.service.keytab

## Server port configuration
#atlas.server.http.port=21000
#atlas.server.https.port=21443

#########  Security Properties  #########

# SSL config
atlas.enableTLS=false

#truststore.file=/path/to/truststore.jks
#cert.stores.credential.provider.path=jceks://file/path/to/credentialstore.jceks

#following only required for 2-way SSL
#keystore.file=/path/to/keystore.jks

# Authentication config
atlas.authentication.method.kerberos=false
atlas.authentication.method.file=true

#### ldap.type= LDAP or AD
atlas.authentication.method.ldap.type=none

#### user credentials file
atlas.authentication.method.file.filename=${sys:atlas.home}/conf/users-credentials.properties

### groups from UGI
#atlas.authentication.method.ldap.ugi-groups=true

######## LDAP properties #########
#atlas.authentication.method.ldap.url=ldap://<ldap server url>:389
#atlas.authentication.method.ldap.userDNpattern=uid={0},ou=People,dc=example,dc=com
#atlas.authentication.method.ldap.groupSearchBase=dc=example,dc=com
#atlas.authentication.method.ldap.groupSearchFilter=(member=uid={0},ou=Users,dc=example,dc=com)
#atlas.authentication.method.ldap.groupRoleAttribute=cn
#atlas.authentication.method.ldap.base.dn=dc=example,dc=com
#atlas.authentication.method.ldap.bind.dn=cn=Manager,dc=example,dc=com
#atlas.authentication.method.ldap.bind.password=<password>
#atlas.authentication.method.ldap.referral=ignore
#atlas.authentication.method.ldap.user.searchfilter=(uid={0})
#atlas.authentication.method.ldap.default.role=<default role>

######### Active directory properties #######
#atlas.authentication.method.ldap.ad.domain=example.com
#atlas.authentication.method.ldap.ad.url=ldap://<AD server url>:389
#atlas.authentication.method.ldap.ad.base.dn=(sAMAccountName={0})
#atlas.authentication.method.ldap.ad.bind.dn=CN=team,CN=Users,DC=example,DC=com
#atlas.authentication.method.ldap.ad.bind.password=<password>
#atlas.authentication.method.ldap.ad.referral=ignore
#atlas.authentication.method.ldap.ad.user.searchfilter=(sAMAccountName={0})
#atlas.authentication.method.ldap.ad.default.role=<default role>

#########  JAAS Configuration #########
#atlas.jaas.KafkaClient.loginModuleName = com.sun.security.auth.module.Krb5LoginModule
#atlas.jaas.KafkaClient.loginModuleControlFlag = required
#atlas.jaas.KafkaClient.option.useKeyTab = true
#atlas.jaas.KafkaClient.option.storeKey = true
#atlas.jaas.KafkaClient.option.serviceName = kafka
#atlas.jaas.KafkaClient.option.keyTab = /etc/security/keytabs/atlas.service.keytab
#atlas.jaas.KafkaClient.option.principal = atlas/_HOST@EXAMPLE.COM

#########  Server Properties  #########
atlas.rest.address=http://localhost:21000
# If enabled and set to true, this will run setup steps when the server starts
#atlas.server.run.setup.on.start=false

#########  Entity Audit Configs  #########
atlas.audit.hbase.tablename=apache_atlas_entity_audit
atlas.audit.zookeeper.session.timeout.ms=1000
atlas.audit.hbase.zookeeper.quorum=10.0.0.141:2181,10.0.0.140:2181,10.0.0.142:2181

#########  High Availability Configuration ########
atlas.server.ha.enabled=false
#### Enabled the configs below as per need if HA is enabled #####
#atlas.server.ids=id1
#atlas.server.address.id1=localhost:21000
#atlas.server.ha.zookeeper.connect=localhost:2181
#atlas.server.ha.zookeeper.retry.sleeptime.ms=1000
#atlas.server.ha.zookeeper.num.retries=3
#atlas.server.ha.zookeeper.session.timeout.ms=20000
## if ACLs need to be set on the created nodes, uncomment these lines and set the values ##
#atlas.server.ha.zookeeper.acl=<scheme>:<id>
#atlas.server.ha.zookeeper.auth=<scheme>:<authinfo>

######### Atlas Authorization #########
atlas.authorizer.impl=simple
atlas.authorizer.simple.authz.policy.file=atlas-simple-authz-policy.json

#########  Type Cache Implementation ########
# A type cache class which implements
# org.apache.atlas.typesystem.types.cache.TypeCache.
# The default implementation is org.apache.atlas.typesystem.types.cache.DefaultTypeCache which is a local in-memory type cache.
#atlas.TypeCache.impl=

#########  Performance Configs  #########
#atlas.graph.storage.lock.retries=10
#atlas.graph.storage.cache.db-cache-time=120000

#########  CSRF Configs  #########
atlas.rest-csrf.enabled=true
atlas.rest-csrf.browser-useragents-regex=^Mozilla.*,^Opera.*,^Chrome.*
atlas.rest-csrf.methods-to-ignore=GET,OPTIONS,HEAD,TRACE
atlas.rest-csrf.custom-header=X-XSRF-HEADER

############ KNOX Configs ################
#atlas.sso.knox.browser.useragent=Mozilla,Chrome,Opera
#atlas.sso.knox.enabled=true
#atlas.sso.knox.providerurl=https://<knox gateway ip>:8443/gateway/knoxsso/api/v1/websso
#atlas.sso.knox.publicKey=

############ Atlas Metric/Stats configs ################
# Format: atlas.metric.query.<key>.<name>
atlas.metric.query.cache.ttlInSecs=900
#atlas.metric.query.general.typeCount=
#atlas.metric.query.general.typeUnusedCount=
#atlas.metric.query.general.entityCount=
#atlas.metric.query.general.tagCount=
#atlas.metric.query.general.entityDeleted=
#
#atlas.metric.query.entity.typeEntities=
#atlas.metric.query.entity.entityTagged=
#
#atlas.metric.query.tags.entityTags=

#########  Compiled Query Cache Configuration  #########

# The size of the compiled query cache.  Older queries will be evicted from the cache
# when we reach the capacity.
#atlas.CompiledQueryCache.capacity=1000

# Allows notifications when items are evicted from the compiled query
# cache because it has become full.  A warning will be issued when
# the specified number of evictions have occurred.  If the eviction
# warning threshold <= 0, no eviction warnings will be issued.
#atlas.CompiledQueryCache.evictionWarningThrottle=0

#########  Full Text Search Configuration  #########

#Set to false to disable full text search.
#atlas.search.fulltext.enable=true

#########  Gremlin Search Configuration  #########

#Set to false to disable gremlin search.
atlas.search.gremlin.enable=false

########## Add http headers ###########
#atlas.headers.Access-Control-Allow-Origin=*
#atlas.headers.Access-Control-Allow-Methods=GET,OPTIONS,HEAD,PUT,POST
#atlas.headers.<headerName>=<headerValue>

#########  UI Configuration ########
atlas.ui.default.version=v1

Update the environment settings on the Atlas server host and make sure the HBASE_CONF_DIR environment variable is set, then start the Atlas service. Note that the first startup takes quite a while, typically around 20 minutes, because Atlas initializes its HBase tables and Elasticsearch indexes, so wait patiently after starting. Once startup completes, you can log in with the admin/admin account.
(Screenshot: Atlas login page)
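
A minimal sketch of the environment setup and startup commands, assuming Atlas was extracted to /export/server/apache-atlas-2.3.0 (install and conf paths are hypothetical):

# conf/atlas-env.sh: point Atlas at the external HBase cluster
export HBASE_CONF_DIR=/etc/hbase/conf   # hypothetical path to hbase-site.xml
export MANAGE_LOCAL_HBASE=false         # storage is an external HBase cluster
export MANAGE_LOCAL_SOLR=false          # index backend is external Elasticsearch

# start the server, then watch the log until table/index initialization finishes
cd /export/server/apache-atlas-2.3.0
bin/atlas_start.py
tail -f logs/application.log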

Apache Hive Metadata Configuration

1) Add the following configuration to hive-site.xml:

<property>
  <name>hive.exec.post.hooks</name>
  <value>org.apache.atlas.hive.hook.HiveHook</value>
</property>
<property>
  <name>hive.metastore.event.listeners</name>
  <value>org.apache.atlas.hive.hook.HiveMetastoreHook</value>
</property>

2) Extract the apache-atlas-2.3.0-hive-hook.tar.gz file, then symlink atlas-plugin-classloader-2.3.0.jar and hive-bridge-shim-2.3.0.jar from the package into the auxlib directory of the Hive installation (see the ln -s sketch after the listings below).

[hdfs@citicbank-bdp-1a-02 server]$ tree  apache-atlas-hive-hook-2.3.0
apache-atlas-hive-hook-2.3.0
├── hook
│   └── hive
│       ├── atlas-hive-plugin-impl
│       │   ├── atlas-client-common-2.3.0.jar
│       │   ├── atlas-client-v1-2.3.0.jar
│       │   ├── atlas-client-v2-2.3.0.jar
│       │   ├── atlas-common-2.3.0.jar
│       │   ├── atlas-intg-2.3.0.jar
│       │   ├── atlas-notification-2.3.0.jar
│       │   ├── commons-configuration-1.10.jar
│       │   ├── hive-bridge-2.3.0.jar
│       │   ├── jackson-annotations-2.11.3.jar
│       │   ├── jackson-core-2.11.3.jar
│       │   ├── jackson-databind-2.11.3.jar
│       │   ├── jersey-json-1.19.jar
│       │   ├── jersey-multipart-1.19.jar
│       │   ├── kafka_2.12-2.8.1.jar
│       │   └── kafka-clients-2.8.1.jar
│       ├── atlas-plugin-classloader-2.3.0.jar
│       └── hive-bridge-shim-2.3.0.jar
└── hook-bin
    └── import-hive.sh

4 directories, 18 files
[hdfs@citicbank-bdp-1a-02 hive-3.2.0]$ ls -l auxlib/
total 0
lrwxrwxrwx 1 root root 88 May 25 15:01 atlas-plugin-classloader-2.3.0.jar -> /export/server/apache-atlas-hive-hook-2.3.0/hook/hive/atlas-plugin-classloader-2.3.0.jar
lrwxrwxrwx 1 root root 80 May 25 15:01 hive-bridge-shim-2.3.0.jar -> /export/server/apache-atlas-hive-hook-2.3.0/hook/hive/hive-bridge-shim-2.3.0.jar
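
For reference, the links above can be created like this (a sketch, assuming both packages live under /export/server as in the listings):

cd /export/server/hive-3.2.0
mkdir -p auxlib
ln -s /export/server/apache-atlas-hive-hook-2.3.0/hook/hive/atlas-plugin-classloader-2.3.0.jar auxlib/
ln -s /export/server/apache-atlas-hive-hook-2.3.0/hook/hive/hive-bridge-shim-2.3.0.jar auxlib/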

3) Copy the atlas-application.properties file into the conf directory of the Hive installation, then restart the Hive Metastore (HMS).
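
The hook only reports operations that happen after it is installed. To bootstrap Atlas with metadata for databases and tables that already exist, the bundled import-hive.sh bridge (visible under hook-bin in the listing above) can be run once; a sketch, assuming atlas-application.properties is reachable on the classpath and the default admin/admin credentials:

cd /export/server/apache-atlas-hive-hook-2.3.0/hook-bin
./import-hive.sh   # prompts for the Atlas username and password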

Spark SQL Lineage Integration

1) Download the Kyuubi source code and build the lineage module:

mvn clean package -pl :kyuubi-spark-lineage_2.12 -am -DskipTests 

Or build the entire project:

mvn clean package -DskipTests -P mirror-cn -P spark-3.2 -P spark-hadoop-3.2

2) Modify the kyuubi-spark-lineage/pom.xml file to resolve dependency compatibility issues:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.14.3</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.14.3</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
      <version>2.14.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  ...
  <dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-client</artifactId>
    <version>1.19</version>
  </dependency>
  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
  </dependency>
  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
  </dependency>
  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
  </dependency>
  ...
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>${maven.plugin.scala.version}</version>
      <executions>
        <execution>
          <id>scala-compile-first</id>
          <phase>process-resources</phase>
          <goals>
            <goal>add-source</goal>
            <goal>compile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.1</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <filters>
              <filter>
                <artifact>*:*</artifact>
                <excludes>
                  <exclude>META-INF/*.SF</exclude>
                  <exclude>META-INF/*.DSA</exclude>
                  <exclude>META-INF/*.RSA</exclude>
                </excludes>
              </filter>
            </filters>
            <relocations>
              <relocation>
                <pattern>com.fasterxml.jackson.</pattern>
                <shadedPattern>com.jdcloud.bigdata.hook.shade.com.fasterxml.jackson.</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
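
The maven-shade-plugin relocation is the key compatibility fix: it moves the plugin's Jackson 2.14.x classes under a private package prefix so they cannot clash with the Jackson version bundled with Spark and the Atlas client. The com.jdcloud.bigdata.hook.shade prefix above is simply the namespace chosen here; any private prefix works.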

3) Configure the spark-defaults.conf file:

spark.sql.queryExecutionListeners=org.apache.kyuubi.plugin.lineage.SparkOperationLineageQueryExecutionListener
spark.kyuubi.plugin.lineage.dispatchers=ATLAS
spark.atlas.rest.address=http://10.0.0.79:21000
spark.atlas.client.type=rest
spark.atlas.client.username=admin
spark.atlas.client.password=admin
spark.atlas.cluster.name=primary
spark.atlas.hook.spark.column.lineage.enabled=true
spark.kyuubi.plugin.lineage.skip.parsing.permanent.view.enabled=true

Here http://10.0.0.79:21000 is the access address of the deployed Atlas service.

4) Copy the atlas-application.properties file into the conf directory of the Spark installation, then run some SQL to test lineage capture, as in the sketch below.
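
A minimal smoke test through the spark-sql CLI (table names are hypothetical; this assumes the kyuubi-spark-lineage jar built above is on Spark's classpath, e.g. placed in jars/ or passed via --jars):

spark-sql -e "
CREATE TABLE lineage_src (id INT, name STRING);
CREATE TABLE lineage_dst AS SELECT id, name FROM lineage_src;
"

After the CTAS completes, the listener should publish a lineage edge from lineage_src to lineage_dst (including column lineage) to the Atlas address configured above.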
(Screenshot: lineage test result shown in the Atlas UI)

HBase Metadata Integration

1) Add the following configuration to hbase-site.xml:
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>com.jd.bigdata.hbase.hook.HBaseAtlasCoprocessor</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>com.jd.bigdata.hbase.hook.HBaseAtlasCoprocessor</value>
</property>

2) Extract the apache-atlas-hbase-hook-2.3.0.tar.gz file, then create symbolic links to atlas-plugin-classloader-2.3.0.jar and hbase-bridge-shim-2.3.0.jar under the lib directory of the HBase installation.

tree apache-atlas-hbase-hook-2.3.0
apache-atlas-hbase-hook-2.3.0
├── hook
│   └── hbase
│       ├── atlas-hbase-plugin-impl
│       │   ├── atlas-client-common-2.3.0.jar
│       │   ├── atlas-client-v2-2.3.0.jar
│       │   ├── atlas-common-2.3.0.jar
│       │   ├── atlas-intg-2.3.0.jar
│       │   ├── atlas-notification-2.3.0.jar
│       │   ├── commons-collections-3.2.2.jar
│       │   ├── commons-configuration-1.10.jar
│       │   ├── commons-logging-1.1.3.jar
│       │   ├── hbase-bridge-2.3.0.jar
│       │   ├── jackson-annotations-2.11.3.jar
│       │   ├── jackson-core-2.11.3.jar
│       │   ├── jackson-databind-2.11.3.jar
│       │   ├── jackson-jaxrs-base-2.11.3.jar
│       │   ├── jackson-jaxrs-json-provider-2.11.3.jar
│       │   ├── jersey-bundle-1.19.jar
│       │   ├── jersey-json-1.19.jar
│       │   ├── jersey-multipart-1.19.jar
│       │   ├── jsr311-api-1.1.jar
│       │   ├── kafka_2.12-2.8.1.jar
│       │   └── kafka-clients-2.8.1.jar
│       ├── atlas-plugin-classloader-2.3.0.jar
│       └── hbase-bridge-shim-2.3.0.jar
└── hook-bin
    └── import-hbase.sh

4 directories, 23 files

3) Copy the atlas-application.properties file into the conf directory of the HBase installation, then run a DDL statement and check that the metadata is captured, as in the sketch below.
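
A quick check, with a hypothetical table name: create a table through the HBase shell and confirm that an hbase_table entity appears in Atlas shortly afterwards. Tables that existed before the hook was installed can be imported once with the bundled import-hbase.sh bridge from hook-bin:

# DDL that goes through the coprocessor hook
echo "create 'atlas_smoke_test', 'cf'" | hbase shell

# one-time import of pre-existing HBase tables
cd /export/server/apache-atlas-hbase-hook-2.3.0/hook-bin
./import-hbase.sh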
