[Logstash 03] Enterprise Log Analysis with ELK: Logstash Filter Plugins

Logstash Filter Plugins

As data travels from the source to the store, Logstash filters parse each event, identify named fields to build structure, and transform them into a common format for more powerful analysis and business value.

Logstash can dynamically transform and parse data regardless of its format or complexity.

Common Filter plugins:

  • Derive structure from unstructured data with Grok
  • Look up geographic coordinates from IP addresses with GeoIP
  • Analyze the operating system and device type of a request with useragent
  • Simplify overall processing, independent of data source, format, or schema

Official documentation

https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
https://www.elastic.co/guide/en/logstash/7.6/filter-plugins.html
Grok Plugin
Introduction to Grok

Grok is a filter plugin that helps you describe the structure of your log formats. There are over 200 built-in grok patterns that abstract concepts such as IPv6 addresses, UNIX paths, and month names.

To match log lines against a format, production environments often need to parse unstructured data into structured JSON.

For example, given the following line:

2016-09-19T18:19:00 [8.8.8.8:prd] DEBUG this is an example log message

The Grok plugin is built on regular expressions; its built-in pattern aliases can express and match the log line above, for example:

%{TIMESTAMP_ISO8601:timestamp} \[%{IPV4:ip}:%{WORD:environment}\] %{LOGLEVEL:log_level} %{GREEDYDATA:message}

which produces the following structured result:

{"timestamp": "2016-09-19T18:19:00","ip": "8.8.8.8","environment": "prd","log_level": "DEBUG","message": "this is an example log message"
} 
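To try this end to end, a minimal pipeline along the following lines can be used. This is only a sketch: the file name is made up, and it simply feeds stdin through the grok pattern above and prints the result.

# hypothetical test config, e.g. /etc/logstash/conf.d/grok_demo_stdout.conf
input {
  stdin {}                        # paste the sample log line into the terminal
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{IPV4:ip}:%{WORD:environment}\] %{LOGLEVEL:log_level} %{GREEDYDATA:message}" }
  }
}
output {
  stdout { codec => rubydebug }   # prints the extracted timestamp, ip, environment, log_level and message fields
}

Note that because the last capture is also named message, grok adds the parsed tail alongside the original line in that field (it becomes an array); rename the capture if that is not desired.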

Reference sites

https://www.elastic.co/cn/blog/do-you-grok-grok
http://grokdebug.herokuapp.com/
http://grokdebug.herokuapp.com/discover?#

Example: Nginx access log

#cat /var/log/nginx/access.log
10.0.0.100 - - [03/Aug/2022:16:34:17 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0"

A line like this can be matched with the built-in pattern %{COMBINEDAPACHELOG}.
Example: use the Kibana Grok Debugger to auto-generate the built-in grok pattern for an Nginx log line
58.250.250.21 - - [14/Jul/2020:15:07:27 +0800] "GET /wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3 HTTP/1.1" 200 330 "http://www.wangxiaochun.com/?p=117" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" "-"

(screenshot: Kibana Grok Debugger generating a pattern for the log line above)

The pattern generated above converts the log line to JSON:

%{COMBINEDAPACHELOG}

(screenshot: Grok Debugger output showing the log line parsed into JSON with %{COMBINEDAPACHELOG})

Example: use a grok pattern to format Nginx logs as JSON
[root@logstash ~]#vim /etc/logstash/conf.d/http_grok_stdout.conf
input {
  http {
    port => 6666
  }
}
filter {
  # parse the nginx log into structured JSON
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }   # parse the message field with the built-in pattern
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@logstash ~]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_grok_stdout.conf -r
[root@logstash ~]#curl -XPOST -d'58.250.250.21 - - [14/Jul/2020:15:07:27 +0800] "GET /wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3 HTTP/1.1" 200 330 "http://www.wangxiaochun.com/?p=117" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" "-"' 10.0.0.180:6666
Example: convert the Nginx access log to JSON directly
[root@ubuntu2004 ~]#cat /etc/logstash/conf.d/nginx_grok_stdout.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx-accesslog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
filter {
  # parse the nginx log into structured JSON
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Geoip Plugin

The geoip filter looks up geographic information for an IP address, such as latitude/longitude, country, and city name, which makes geographic data analysis easier.

Filebeat configuration example:
[root@kibana ~]#cat /etc/filebeat/logstash-filebeat.yml
filebeat.inputs:
- type: log
  enabled: true                    # enable this input
  paths:
    - /var/log/nginx/access.log    # log file to collect
  #json.keys_under_root: true      # default false: the log line is treated as plain text and stored in the message field; set true to store it as JSON
  #json.overwrite_keys: true       # set true to let keys defined in the JSON log replace the default message field (optional)
  tags: ["nginx-access"]
output.logstash:
  hosts: ["10.0.0.180:5044"]       # address and port of the Logstash server

[root@kibana ~]#cat /var/log/nginx/access.log
58.250.250.21 - - [14/Jul/2020:15:07:27 +0800] "GET /wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3 HTTP/1.1" 200 330 "http://www.wangxiaochun.com/?p=117" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" "-"
Logstash configuration example:
[root@logstash ~]#vim /etc/logstash/conf.d/beats_geoip_stdout.conf
input {
  beats {
    port => 5044
    #codec => "json"
  }
}
filter {
  # parse the nginx log into structured JSON
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # use the client IP extracted above as the source to look up geo information
  geoip {
    #source => "clientip"            # 7.x: field that holds the source IP
    source => "[source][address]"    # changed in 8.x
    target => "geoip"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Sample output
[root@logstash ~]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats_geoip_stdout.conf -r
{
    "user_agent" => {
        "original" => "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
    },
    "message" => "58.250.250.21 - - [14/Jul/2020:15:07:27 +0800] \"GET /wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3 HTTP/1.1\" 200 330 \"http://www.wangxiaochun.com/?p=117\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36\" \"-\"",
    "geoip" => {
        "geo" => {
            "city_name" => "Shenzhen",
            "region_name" => "Guangdong",
            "continent_code" => "AS",
            "location" => {
                "lat" => 22.5559,
                "lon" => 114.0577
            },
            "country_iso_code" => "CN",
            "region_iso_code" => "CN-GD",
            "country_name" => "China",
            "timezone" => "Asia/Shanghai"
        },
        "ip" => "58.250.250.21"
    },
    "input" => {
        "type" => "log"
    },
    "@timestamp" => 2025-01-03T08:14:38.824Z,
    "source" => {
        "address" => "58.250.250.21"
    },
    "@version" => "1",
    "url" => {
        "original" => "/wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3"
    },
    "timestamp" => "14/Jul/2020:15:07:27 +0800",
    "http" => {
        "request" => {
            "method" => "GET",
            "referrer" => "http://www.wangxiaochun.com/?p=117"
        },
        "version" => "1.1",
        "response" => {
            "body" => {
                "bytes" => 330
            },
            "status_code" => 200
        }
    },
    "tags" => [
        [0] "nginx-access",
        [1] "beats_input_codec_plain_applied"
    ],
    "event" => {
        "original" => "58.250.250.21 - - [14/Jul/2020:15:07:27 +0800] \"GET /wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3 HTTP/1.1\" 200 330 \"http://www.wangxiaochun.com/?p=117\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36\" \"-\""
    },
    "host" => {
        "name" => "kibana"
    },
    "ecs" => {
        "version" => "8.0.0"
    },
    "log" => {
        "offset" => 623,
        "file" => {
            "path" => "/var/log/nginx/access.log"
        }
    },
    "agent" => {
        "name" => "kibana",
        "id" => "a3acb99e-b483-4367-a2df-535d8a39a0fa",
        "version" => "8.8.2",
        "ephemeral_id" => "5d8aad32-46e7-4500-8fa5-d18dd314f8d2",
        "type" => "filebeat"
    }
}
Date Plugin

The date plugin parses a date string from a specified source field in the log and writes the result to a target field.

It can then replace the @timestamp field (which by default holds the time the event entered Logstash, not the time of the log itself) or any other field you specify.

match    # array; the source field name and the date format(s) used to parse it
target   # string; the target field to write to, defaults to @timestamp
timezone # string; the time zone

Official documentation

https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html

Time zone format reference

http://joda-time.sourceforge.net/timezones.html
Example: generate a new field access_time from the source field timestamp
[root@logstash ~]#cat /etc/logstash/conf.d/http_grok_date_stdout.conf
input {
  http {
    port => 6666
  }
}
filter {
  # parse the nginx log into structured JSON
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # parse the date format of the source field timestamp: 14/Jul/2020:15:07:27 +0800
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    #target => "access_time"          # write the time to a new access_time field; the source field is kept
    target => "@timestamp"            # overwrite the original @timestamp field
    timezone => "Asia/Shanghai"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Sample output
[root@logstash ~]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_grok_date_stdout.conf -r
{
    "@timestamp" => 2020-07-14T07:07:27.000Z,
    "message" => "58.250.250.21 - - [14/Jul/2020:15:07:27 +0800] \"GET /wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3 HTTP/1.1\" 200 330 \"http://www.wangxiaochun.com/?p=117\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36\" \"-\"",
    "url" => {
        "domain" => "10.0.0.180",
        "path" => "/",
        "original" => "/wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3",
        "port" => 6666
    },
    "event" => {
        "original" => "58.250.250.21 - - [14/Jul/2020:15:07:27 +0800] \"GET /wpcontent/plugins/akismet/_inc/form.js?ver=4.1.3 HTTP/1.1\" 200 330 \"http://www.wangxiaochun.com/?p=117\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36\" \"-\""
    },
    "user_agent" => {
        "original" => [
            [0] "curl/7.81.0",
            [1] "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
        ]
    },
    "host" => {
        "ip" => "10.0.0.180"
    },
    "http" => {
        "version" => [
            [0] "HTTP/1.1",
            [1] "1.1"
        ],
        "method" => "POST",
        "request" => {
            "body" => {
                "bytes" => "274"
            },
            "method" => "GET",
            "referrer" => "http://www.wangxiaochun.com/?p=117",
            "mime_type" => "application/x-www-form-urlencoded"
        },
        "response" => {
            "body" => {
                "bytes" => 330
            },
            "status_code" => 200
        }
    },
    "source" => {
        "address" => "58.250.250.21"
    },
    "timestamp" => "14/Jul/2020:15:07:27 +0800",
    "@version" => "1"
}

Example: convert UNIX time to a specified format

date {
  match => ["timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss"]
  target => "@timestamp"
  timezone => "Asia/Shanghai"
}
Useragent Plugin

The useragent plugin parses the user-agent field of a request into browser, device, and operating-system information for later analysis.

Example:

[root@logstash ~]#cat /etc/logstash/conf.d/http_grok_useragent_stdout.conf
input {
  http {
    port => 6666
  }
}
filter {
  # parse the nginx log into structured JSON
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # parse dates such as: 10/Dec/2020:10:40:10 +0800
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    target => "@timestamp"                 # overwrite the original @timestamp field
    #target => "access_time"               # write the time to a new access_time field; the source field is kept
    timezone => "Asia/Shanghai"
  }
  # extract and parse the agent field
  useragent {
    #source => "agent"                     # 7.x: field to read the data from
    source => "message"                    # 8.x: field to read the data from
    #source => "[user_agent][original]"    # 8.x: alternative field to read the data from
    target => "useragent"                  # name of the new hash-type field that holds os, device, and other details
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Sample output
[root@logstash]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_grok_useragent_stdout.conf -r
{
    "user_agent" => {
        "original" => [
            [0] "curl/7.81.0",
            [1] "Mozilla/5.0 (iPad; CPU OS 16_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1"
        ]
    },
    "message" => "10.0.0.1 - - [03/Jan/2025:16:58:13 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (iPad; CPU OS 16_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1\"",
    "useragent" => {
        "name" => "Mobile Safari",
        "device" => {
            "name" => "iPad"
        },
        "version" => "16.6",
        "os" => {
            "name" => "iOS",
            "version" => "16.6",
            "full" => "iOS 16.6"
        }
    },
    "url" => {
        "domain" => "10.0.0.180",
        "path" => "/",
        "original" => "/",
        "port" => 6666
    },
    "source" => {
        "address" => "10.0.0.1"
    },
    "http" => {
        "version" => [
            [0] "HTTP/1.1",
            [1] "1.1"
        ],
        "method" => "POST",
        "response" => {
            "status_code" => 304,
            "body" => {
                "bytes" => 0
            }
        },
        "request" => {
            "method" => "GET",
            "mime_type" => "application/x-www-form-urlencoded",
            "body" => {
                "bytes" => "197"
            }
        }
    },
    "@version" => "1",
    "@timestamp" => 2025-01-03T08:58:13.000Z,
    "event" => {
        "original" => "10.0.0.1 - - [03/Jan/2025:16:58:13 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (iPad; CPU OS 16_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1\""
    },
    "host" => {
        "ip" => "10.0.0.180"
    },
    "timestamp" => "03/Jan/2025:16:58:13 +0800"
}
Mutate Plugin

Official documentation:

https://www.elastic.co/guide/en/logstash/master/plugins-filters-mutate.html
https://www.elastic.co/guide/en/logstash/7.6/plugins-filters-mutate.html

The mutate plugin performs operations on fields such as type conversion, deletion, replacement, renaming, and updating; the following options are available:

remove_field    # remove fields
split           # split a string, similar to picking columns with awk
add_field       # add fields
convert         # type conversion; supported types: integer, integer_eu, float, float_eu, string, boolean
gsub            # string substitution
rename          # rename a field
lowercase       # convert a string to lowercase
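remove_field, split, add_field, convert, and gsub each have a dedicated example below. rename and lowercase do not, so here is a minimal sketch; the field names clientip and method are only illustrative:

filter {
  mutate {
    rename => { "clientip" => "client_ip" }   # rename the field clientip to client_ip
    lowercase => [ "method" ]                 # convert the value of the method field to lowercase
  }
}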
remove_field: remove fields

Example:

[root@logstash ~]#cat /etc/logstash/conf.d/http_grok_mutate_remove_field_stdout.conf
input {
  http {
    port => 6666
  }
}
filter {
  # parse the nginx log into structured JSON
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # parse dates such as: 10/Dec/2020:10:40:10 +0800
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    target => "@timestamp"
    #target => "access_time"
    timezone => "Asia/Shanghai"
  }
  # mutate: remove the specified fields
  mutate {
    #remove_field => ["headers", "message", "agent"]   # 7.x
    remove_field => ["timestamp", "message", "http"]   # 8.x
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Sample output
[root@logstash]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_grok_mutate_remove_field_stdout.conf -r
{
    "event" => {
        "original" => "10.0.0.1 - - [03/Jan/2025:16:58:13 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (iPad; CPU OS 16_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1\""
    },
    "url" => {
        "domain" => "10.0.0.180",
        "path" => "/",
        "original" => "/",
        "port" => 6666
    },
    "@timestamp" => 2025-01-03T08:58:13.000Z,
    "user_agent" => {
        "original" => [
            [0] "curl/7.81.0",
            [1] "Mozilla/5.0 (iPad; CPU OS 16_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1"
        ]
    },
    "host" => {
        "ip" => "10.0.0.180"
    },
    "source" => {
        "address" => "10.0.0.1"
    },
    "@version" => "1"
}
split: split strings

split in mutate splits a string on a specified delimiter character; the result becomes a list with one element per piece.

Sample input: 1000|提交订单|2020-01-08 09:10:21

Example: use split to break a string into columns

[root@logstash ~]#cat /etc/logstash/conf.d/http_grok_mutate_split_stdout.conf
input {
  http {
    port => 6666
  }
}
filter {
  # mutate: split operation
  mutate {
    # field delimiter
    split => { "message" => "|" }   # split the message field on | into multiple elements of the message list
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Sample output
# start Logstash
[root@logstash]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_grok_mutate_split_stdout.conf
{
    "message" => [
        [0] "1000",
        [1] "提交订单",
        [2] "2020-01-08 09:10:21"
    ],
    "event" => {
        "original" => "1000|提交订单|2020-01-08 09:10:21"
    },
    "user_agent" => {
        "original" => "curl/7.81.0"
    },
    "url" => {
        "domain" => "10.0.0.180",
        "path" => "/",
        "port" => 6666
    },
    "@version" => "1",
    "host" => {
        "ip" => "10.0.0.180"
    },
    "@timestamp" => 2025-01-03T09:14:03.422624536Z,
    "http" => {
        "version" => "HTTP/1.1",
        "method" => "POST",
        "request" => {
            "mime_type" => "application/x-www-form-urlencoded",
            "body" => {
                "bytes" => "37"
            }
        }
    }
}
[root@logstash]#curl -XPOST -d '1000|提交订单|2020-01-08 09:10:21' 10.0.0.180:6666/
add_field: add fields

Adds new fields derived from a specified source field; the source field still exists after the new fields are added.

Example:

[root@logstash ~]#cat /etc/logstash/conf.d/http_grok_mutate_add_field_stdout.conf
input {
  http {
    port => 6666
  }
}
filter {
  # mutate: split operation
  mutate {
    # field delimiter
    split => { "message" => "|" }
    # add fields: element 0 of the message list becomes the new field user_id
    add_field => {
      "user_id" => "%{[message][0]}"
      "action" => "%{[message][1]}"
      "time" => "%{[message][2]}"
    }
    # add a field to be used as the index name
    #add_field => { "[@metadata][target_index]" => "app-%{+YYYY.MM.dd}" }
    # remove unused fields
    remove_field => ["headers", "message"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Sample output
# start Logstash
[root@logstash ~]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_grok_mutate_add_field_stdout.conf
{
    "url" => {
        "domain" => "10.0.0.180",
        "path" => "/",
        "port" => 6666
    },
    "user_id" => "1000",
    "@version" => "1",
    "http" => {
        "request" => {
            "body" => {
                "bytes" => "37"
            },
            "mime_type" => "application/x-www-form-urlencoded"
        },
        "version" => "HTTP/1.1",
        "method" => "POST"
    },
    "user_agent" => {
        "original" => "curl/7.81.0"
    },
    "event" => {
        "original" => "1000|提交订单|2020-01-08 09:10:21"
    },
    "@timestamp" => 2025-01-03T09:21:45.406866933Z,
    "time" => "2020-01-08 09:10:21",
    "action" => "提交订单",
    "host" => {
        "ip" => "10.0.0.180"
    }
}
# submit a log line with curl; the output above appears
[root@ubuntu2004 ~]#curl -XPOST -d '1000|提交订单|2020-01-08 09:10:21' 10.0.0.180:6666/
convert: type conversion

convert in mutate converts data types; integer, float, string, and other types are supported.

Example:

[root@logstash ~]#cat /etc/logstash/conf.d/http_grok_mutate_convert_stdout.conf
input {
  http {
    port => 6666
  }
}
filter {
  # mutate: split operation
  mutate {
    # field delimiter
    split => { "message" => "|" }
    # add fields
    add_field => {
      "user_id" => "%{[message][0]}"
      "action" => "%{[message][1]}"
      "time" => "%{[message][2]}"
    }
    # remove unused fields
    remove_field => ["headers", "message"]
    # convert the newly added fields to the desired types
    convert => {
      "user_id" => "integer"
      "action" => "string"
      "time" => "string"
    }
    #convert => ["execute_time", "float"]   # this array form is also supported
    #convert => ["time", "string"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@logstash ~]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_grok_mutate_convert_stdout.conf -r
gsub: substitution

gsub performs string substitution.

filter {
  mutate {
    gsub => ["message", "\n", " "]   # replace newlines in the message field with spaces
  }
}
Conditionals

The filter block supports if conditionals.

Filebeat example:
#vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
output.logstash:
  hosts: ["10.0.0.104:5044","10.0.0.105:5044"]
  #loadbalance: true    # enable load balancing
  #worker: 2            # number of hosts * workers; start multiple worker processes
Logstash configuration:
#vim /etc/logstash/conf.d/filebeat_logstash_es.conf
input {
  beats {
    port => 5044
  }
}
filter {
  if "access" in [tags][0] {
    mutate {
      add_field => { "target_index" => "access-%{+YYYY.MM.dd}" }
    }
  } else if "error" in [tags][0] {
    mutate {
      add_field => { "target_index" => "error-%{+YYYY.MM.dd}" }
    }
  } else if "system" in [tags][0] {
    mutate {
      add_field => { "target_index" => "system-%{+YYYY.MM.dd}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.181:9200","10.0.0.182:9200","10.0.0.183:9200"]   # usually the data node addresses
    index => "%{[target_index]}"       # use the value of the target_index field as the index name
    template_overwrite => true         # overwrite the index template
  }
}

Example:

#vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    project: test-access
    env: test
output.logstash:
  hosts: ["10.0.0.104:5044","10.0.0.105:5044"]

#vim /etc/logstash/conf.d/filebeat_logstash_es.conf
input {
  beats {
    port => 5044
  }
  file {
    path => "/tmp/wang.log"
    type => "wanglog"                # custom type, can be used in conditionals
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [fields][env] == "test" {
    elasticsearch {
      hosts => ["10.0.0.101:9200","10.0.0.102:9200","10.0.0.103:9200"]
      index => "test-nginx-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "wanglog" {
    stdout {
      codec => rubydebug
    }
  }
}
