Systematic Linux Study: the ELK Log Collection System

ELK Log Collection System Cluster Lab

Lab Environment

Hostname    IP              Interface
httpd       192.168.31.50   ens33
node1       192.168.31.51   ens33
node2       192.168.31.53   ens33

Environment Configuration

Set each host's IP address to the static IP shown in the topology, and change the hostnames accordingly.

#httpd
[root@localhost ~]# hostnamectl set-hostname httpd
[root@localhost ~]# bash
[root@httpd ~]#

#node1
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash
[root@node1 ~]# vim /etc/hosts
192.168.31.51   node1
192.168.31.53   node2

#node2
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# bash
[root@node2 ~]# vim /etc/hosts
192.168.31.51   node1
192.168.31.53   node2
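
With the hosts files in place, a quick optional sanity check (not in the original post; assumes the static IPs are already configured) confirms the nodes resolve each other by name:

# On node1: verify name resolution and connectivity to node2
ping -c 2 node2
# On node2: the reverse direction
ping -c 2 node1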

Install Elasticsearch

#node1
[root@node1 ~]# ls
elk软件包  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# mv elk软件包 elk
[root@node1 ~]# ls
elk  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# cd elk
[root@node1 elk]# ls
elasticsearch-5.5.0.rpm    kibana-5.5.1-x86_64.rpm  node-v8.2.1.tar.gz
elasticsearch-head.tar.gz  logstash-5.5.1.rpm       phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 elk]# rpm -ivh elasticsearch-5.5.0.rpm 
warning: elasticsearch-5.5.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:5.5.0-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service

#node2: install the same RPM on node2 in the same way

Configuration

node1

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: my-elk-cluster                           //cluster name
node.name: node1                                       //node name
path.data: /var/lib/elasticsearch                      //data directory
path.logs: /var/log/elasticsearch/                     //log directory
bootstrap.memory_lock: false                           //do not lock memory at startup
network.host: 0.0.0.0                                  //bind address; 0.0.0.0 means all interfaces
http.port: 9200                                        //listening port (9200)
discovery.zen.ping.unicast.hosts: ["node1", "node2"]   //cluster discovery via unicast

node2

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: my-elk-cluster                           //cluster name (must match node1)
node.name: node2                                       //node name (node2 here, not node1)
path.data: /var/lib/elasticsearch                      //data directory
path.logs: /var/log/elasticsearch/                     //log directory
bootstrap.memory_lock: false                           //do not lock memory at startup
network.host: 0.0.0.0                                  //bind address; 0.0.0.0 means all interfaces
http.port: 9200                                        //listening port (9200)
discovery.zen.ping.unicast.hosts: ["node1", "node2"]   //cluster discovery via unicast
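
The original post does not show the services being started at this point, but both nodes must be running before the head plugin or Logstash can connect. A minimal sketch (the systemctl commands come from the RPM post-install notes above; the health check uses the standard _cluster/health API):

# On node1 and node2
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
# Verify the two-node cluster formed (from either node)
curl http://192.168.31.51:9200/_cluster/health?pretty

A healthy two-node cluster should report "number_of_nodes" : 2 with a "green" or "yellow" status.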

Install the elasticsearch-head Plugin on node1

Change into the elk directory

#Installing the plugin (the compile step is slow)
[root@node1 ~]# cd elk/
[root@node1 elk]# ls
elasticsearch-5.5.0.rpm    kibana-5.5.1-x86_64.rpm  phantomjs-2.1.1-linux-x86_64.tar.bz2
elasticsearch-head.tar.gz  logstash-5.5.1.rpm       node-v8.2.1.tar.gz
[root@node1 elk]# tar xf node-v8.2.1.tar.gz
[root@node1 elk]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure && make && make install
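# (optional sanity check, not in the original transcript: node and npm
#  should report their versions if the build installed cleanly)
node -v
npm -v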
[root@node1 node-v8.2.1]# cd ~/elk
[root@node1 elk]# tar xf phantomjs-2.1.1-linux-x86_64.tar.bz2 
[root@node1 elk]# cd phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# ls
phantomjs
[root@node1 bin]# cp phantomjs /usr/local/bin/
[root@node1 bin]# cd ~/elk/
[root@node1 elk]# tar xf elasticsearch-head.tar.gz 
[root@node1 elk]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install
npm WARN deprecated fsevents@1.2.13: The v1 package contains DANGEROUS / INSECURE binaries. Upgrade to safe fsevents v2
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/karma/node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
up to date in 3.536s

Modify the Elasticsearch Configuration File

[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
http.cors.enabled: true                 //enable cross-origin access support (defaults to false)
http.cors.allow-origin: "*"             //domains allowed for cross-origin access
[root@node1 ~]# systemctl restart elasticsearch.service

# Start elasticsearch-head
cd /root/elk/elasticsearch-head
npm run start &
# Check the listener
netstat -anput | grep :9100
# Access
http://192.168.31.51:9100
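
To confirm that head can actually see data, an illustrative test document can be indexed first (the index and field names here are invented for the demo; the PUT document API itself is standard in Elasticsearch 5.x):

curl -XPUT 'http://192.168.31.51:9200/index-demo/test/1?pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'

Refreshing http://192.168.31.51:9100 should then show the index-demo index replicated across both nodes.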

Install Logstash on node1

[root@node1 elk]# rpm -ivh logstash-5.5.1.rpm 
warning: logstash-5.5.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
        package logstash-1:5.5.1-1.noarch is already installed
#Start the service and create a symlink
[root@node1 elk]# systemctl start logstash.service
[root@node1 elk]# ln -s /usr/share/logstash/bin/logstash  /usr/local/bin/
#Test 1: read from stdin, write to stdout
[root@node1 elk]# logstash -e 'input{ stdin{} }output { stdout{} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
16:03:50.250 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
16:03:50.256 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
16:03:50.330 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"9ba08544-a7a7-4706-a3cd-2e2ca163548d", :path=>"/usr/share/logstash/data/uuid"}
16:03:50.584 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:03:50.739 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
16:03:50.893 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:04:32.838 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:04:32.855 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}
#Test 2: stdout with the rubydebug codec
[root@node1 elk]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug }}'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
16:46:23.975 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
The stdin plugin is now waiting for input:
16:46:24.014 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
16:46:24.081 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:46:29.970 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:46:29.975 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}
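
For reference, typing a line such as hello world at the prompt in test 2 should print an event of roughly this shape (illustrative values; the exact timestamp and host will differ):

{
    "@timestamp" => 2023-09-11T08:46:25.123Z,
      "@version" => "1",
          "host" => "node1",
       "message" => "hello world"
}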
#Test 3: write stdin input to Elasticsearch
[root@node1 elk]# logstash -e 'input { stdin{} } output { elasticsearch{ hosts=>["192.168.31.51:9200"]} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
16:46:55.951 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.31.51:9200/]}}
16:46:55.955 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.31.51:9200/, :path=>"/"}
16:46:56.049 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x3a106333>}
16:46:56.068 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
16:46:56.204 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
16:46:56.233 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Installing elasticsearch template to _template/logstash
16:46:56.429 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x19aeba5c>]}
16:46:56.432 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:46:56.461 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
16:46:56.561 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:46:57.638 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:46:57.658 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}

Logstash Collection Configuration File Format (files live in /etc/logstash/conf.d by default)

A Logstash configuration file consists of three sections: input, output, and, when needed, filter. The standard layout is:

input {...}     //input section

filter {...}    //filter section (optional)

output {...}    //output section

Each section may contain multiple plugin blocks. For example, to read from two log files:

input {
    file { path => "/var/log/messages"           type => "syslog" }
    file { path => "/var/log/apache/access.log"  type => "apache" }
}
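
A filter section, when present, sits between input and output and transforms events in flight. A minimal sketch using the stock grok plugin to parse syslog lines (the SYSLOGLINE pattern ships with Logstash; this block is an illustration, not part of the original lab):

filter {
    grok {
        match => { "message" => "%{SYSLOGLINE}" }
    }
}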

Example: Collecting System Logs with Logstash

[root@node1 conf.d]# chmod o+r /var/log/messages
[root@node1 conf.d]# vim /etc/logstash/conf.d/system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.31.51:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[root@node1 conf.d]# systemctl restart logstash.service 
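
Before restarting, the file can be syntax-checked, and afterwards the new index can be confirmed directly in Elasticsearch (both commands are standard in the 5.x releases, though not shown in the original post):

# Syntax-check the pipeline definition
logstash -f /etc/logstash/conf.d/system.conf --config.test_and_exit
# List indices; a system-YYYY.MM.dd entry should appear
curl 'http://192.168.31.51:9200/_cat/indices?v'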

Install Kibana on node1

cd ~/elk

[root@node1 elk]# rpm -ivh kibana-5.5.1-x86_64.rpm 
warning: kibana-5.5.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-5.5.1-1                   ################################# [100%]
[root@node1 elk]# vim /etc/kibana/kibana.yml
server.port: 5601                                  //port Kibana listens on
server.host: "0.0.0.0"                             //address Kibana binds to
elasticsearch.url: "http://192.168.31.51:9200"     //connection to Elasticsearch
kibana.index: ".kibana"                            //the .kibana index Kibana creates in Elasticsearch
[root@node1 elk]# systemctl start kibana.service 
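
A quick check that Kibana is up, mirroring the netstat check used earlier for head (port 5601 comes from server.port above):

netstat -anput | grep :5601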

Access Kibana

Browse to http://192.168.31.51:5601. On first access Kibana asks for an index pattern; add the index created earlier: system-*

Enterprise Case

Collect httpd access-log data

Install Logstash on the httpd server, following the installation steps described above.

On the httpd server, Logstash acts as an agent (shipper); it is driven directly with a config file rather than left running as a standalone collector.

Write the httpd log-collection configuration file

[root@httpd ]# yum install -y httpd
[root@httpd ]# systemctl start httpd
[root@httpd ]# systemctl start logstash
[root@httpd ]# vim /etc/logstash/conf.d/httpd.conf
input {
    file {
        path => "/var/log/httpd/access_log"
        type => "access"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.31.51:9200"]
        index => "httpd-%{+YYYY.MM.dd}"
    }
}
[root@httpd ]# logstash -f  /etc/logstash/conf.d/httpd.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
21:29:34.272 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.31.51:9200/]}}
21:29:34.275 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.31.51:9200/, :path=>"/"}
21:29:34.400 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x1c254b0a>}
21:29:34.423 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
21:29:34.579 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
21:29:34.585 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x3b483278>]}
21:29:34.588 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
21:29:34.845 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
21:29:34.921 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
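With the pipeline running, access-log events flow into the cluster as soon as the site is hit. An illustrative verification (IPs taken from the environment table at the top):

# Generate a few access-log entries
curl http://192.168.31.50/
# Confirm the httpd-YYYY.MM.dd index appeared
curl 'http://192.168.31.51:9200/_cat/indices?v'

Finally, add an httpd-* index pattern in Kibana, alongside the system-* pattern created earlier, to browse the collected access logs.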
