ELK: Replacing logstash with filebeat



filebeat

Collect the local JSON-format logs with filebeat and write them to standard output

  • Confirm that the log format is JSON:
    First access the web server to generate some logs, then confirm they are in JSON format, since the steps below depend on it.

  • The local JSON log demonstrated here uses the access-log format of the tomcat web server.

1: Access the tomcat web site to generate access-log entries

~]# ab -n100 -c100 http://192.168.15.16:8080/web

2: Confirm the log format; the logs will be used for statistics later

~]# tail  /usr/local/tomcat/logs/localhost_access_log.2017-04-28.txt 
{"clientip":"192.168.15.15","ClientUser":"-","authenticated":"-","AccessTime":"[28/Apr/2017:21:16:46 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"12","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.15.15","ClientUser":"-","authenticated":"-","AccessTime":"[28/Apr/2017:21:16:46 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"12","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
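Since these entries will feed statistics later, it helps to confirm programmatically that every line parses as JSON. A minimal sketch in Python, using one sample line copied from the output above (in practice you would read the log file line by line):

```python
import json
from collections import Counter

# One line copied from the tomcat access log above.
line = ('{"clientip":"192.168.15.15","ClientUser":"-","authenticated":"-",'
        '"AccessTime":"[28/Apr/2017:21:16:46 +0800]","method":"GET /webdir/ HTTP/1.0",'
        '"status":"200","SendBytes":"12","Query?string":"","partner":"-",'
        '"AgentVersion":"ApacheBench/2.3"}')

entry = json.loads(line)  # raises ValueError if the line is not valid JSON
status_counts = Counter()
status_counts[entry["status"]] += 1

print(entry["clientip"], entry["method"], status_counts["200"])
```

If `json.loads` succeeds for every line, the log is safe to feed to filebeat as structured data.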

3: Install and configure filebeat

~]# systemctl stop logstash  # stop the logstash service (if installed)
src]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.3.2-x86_64.rpm
src]# yum install filebeat-5.3.2-x86_64.rpm -y

4: Configure filebeat to collect system logs

    ~]# cd /etc/filebeat/
filebeat]# cp filebeat.yml filebeat.yml.bak  # back up the original config file

Have filebeat collect multiple system logs and write them to a local file

~]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
    - /var/log/*.log
  exclude_lines: ["^DBG","^$"]        # lines not to collect
  #include_lines: ["^ERR", "^WARN"]   # collect only these lines
  document_type: system-log-1512      # type tag inserted into every event
output.file:
  path: "/tmp"
  filename: "filebeat.txt"

5: Start the filebeat service and verify that the local file receives data

filebeat]# systemctl  start filebeat



Use Filebeat to collect the local Nginx access log and store it on a backend ES host

Filebeat official reference manual: https://www.elastic.co/products/beats

1: Install Filebeat from the rpm package

~]# ls
filebeat-5.6.13-x86_64.rpm
~]# rpm -ivh filebeat-5.6.13-x86_64.rpm

Filebeat configuration file walkthrough

~]# vim /etc/filebeat/filebeat.yml  # YAML syntax
21 paths: # which log files to collect; glob patterns are supported
22 - /var/log/*.log
23 #- c:\programdata\elasticsearch\logs\* # syntax for collecting log files on Windows

27 #exclude_lines: ["^DBG"] # do not collect lines starting with DBG

31 #include_lines: ["^ERR", "^WARN"] # collect only these lines from the log file

35 #exclude_files: [".gz$"] # exclude files, e.g. skip compressed files

39 #fields:
40 # level: debug # insert level: debug into every collected event
41 # review: 1

## multiline merging
49 #multiline.pattern: ^\[

#================================ Outputs =====================================

81 output.elasticsearch: # store the collected logs on the ES host

83 hosts: ["localhost:9200"]

86 #protocol: "https" # supports secure authentication
87 #username: "elastic"
88 #password: "changeme"

2: Edit the nginx configuration to change the log format

~]# vim /etc/nginx/nginx.conf
http {
#    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
#                    '$status $body_bytes_sent "$http_referer" '
#                    '"$http_user_agent" "$http_x_forwarded_for"';
#
#    access_log /var/log/nginx/access.log main;

    log_format access_json '{"@timestamp":"$time_iso8601",'  # define the JSON log format
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"status":"$status"}';
    access_log /var/log/nginx/access.log access_json;

~]# mkdir -p /var/log/nginx/
~]# systemctl start nginx
~]# systemctl enable nginx

3: Configure filebeat to collect the nginx access log:

~]# cd /etc/filebeat/
filebeat]# cp filebeat.yml filebeat.yml.bak  # back up the original config file

~]# vim /etc/filebeat/filebeat.yml
21 paths:
22 - /var/log/nginx/access.log

39 fields: # define the type
40 type: nginx-accesslog

82 output.elasticsearch: # store the logs filebeat collects in ES
83 # Array of hosts to connect to.
84 hosts: ["172.18.135.1:9200"]
85 index: "filebeat-nginx-accesslog" # the default index name is filebeat-%{+yyyy.MM.dd}

4: Start filebeat

~]# systemctl start filebeat
~]# systemctl enable filebeat

5: Check the index with the head plugin on the ES host

6: Display the logs in Kibana


Have filebeat collect the nginx access log and write it to redis (the nginx log format must be changed to JSON)

Filebeat can write data directly to a redis server; this section is an example of writing to redis. Filebeat can also write to elasticsearch, logstash, and other backends.

1: filebeat configuration

~]# vim /etc/filebeat/filebeat.yml
21 paths:
22 - /var/log/nginx/access.log

39 fields: # define the type; the format matters
40 type: nginx-accesslog


116 output.redis: # output the logs to redis
117 hosts: ["172.18.135.5"] # redis address
118 password: "123456" # redis password
119 key: "system-log-5612" # KEY name (user-defined)
120 db: 1 # redis db to store into
121 timeout: 5 # timeout in seconds

2: Start filebeat

~]# systemctl start filebeat
~]# systemctl enable filebeat

3: Access the default nginx web page and verify that the log format is JSON

~]# curl 172.18.135.5:80
~]# cat /var/log/nginx/access.log
{"@timestamp":"2019-03-03T16:38:24+08:00","host":"172.18.135.5","clientip":"172.18.135.5","size":3700,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"172.18.135.5","url":"/index.html","domain":"172.18.135.5","xff":"-","referer":"-","status":"200"}
{"@timestamp":"2019-03-03T16:38:25+08:00","host":"172.18.135.5","clientip":"172.18.135.5","size":3700,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"172.18.135.5","url":"/index.html","domain":"172.18.135.5","xff":"-","referer":"-","status":"200"}
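Note that `size` and `responsetime` were deliberately left unquoted in the `log_format` directive, so they decode as numbers rather than strings, which lets ES/Kibana aggregate them without mapping tricks. A quick Python check using one sample line from the output above:

```python
import json

# Sample line copied from the nginx access log above.
line = ('{"@timestamp":"2019-03-03T16:38:24+08:00","host":"172.18.135.5",'
        '"clientip":"172.18.135.5","size":3700,"responsetime":0.000,'
        '"upstreamtime":"-","upstreamhost":"-","http_host":"172.18.135.5",'
        '"url":"/index.html","domain":"172.18.135.5","xff":"-",'
        '"referer":"-","status":"200"}')

event = json.loads(line)
# Unquoted fields decode as numbers; quoted fields (e.g. status) stay strings.
print(type(event["size"]).__name__, type(event["responsetime"]).__name__)
```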

4: Verify and view the log data in redis (select the corresponding db)

Make sure the selected db matches the one filebeat writes to

172.18.135.2:6379> select 1
OK
172.18.135.2:6379[1]> RPOP system-log-5612
"{\"@timestamp\":\"2019-03-03T10:04:39.966Z\",\"beat\":{\"hostname\":\"centos77\",\"name\":\"centos77\",\"version\":\"5.6.13\"},\"fields\":{\"type\":\"nginx-accesslog\"},\"input_type\":\"log\",\"message\":\"{\\\"@timestamp\\\":\\\"2019-03-03T18:04:37+08:00\\\",\\\"host\\\":\\\"172.18.135.5\\\",\\\"clientip\\\":\\\"172.18.135.5\\\",\\\"size\\\":3700,\\\"responsetime\\\":0.000,\\\"upstreamtime\\\":\\\"-\\\",\\\"upstreamhost\\\":\\\"-\\\",\\\"http_host\\\":\\\"172.18.135.5\\\",\\\"url\\\":\\\"/index.html\\\",\\\"domain\\\":\\\"172.18.135.5\\\",\\\"xff\\\":\\\"-\\\",\\\"referer\\\":\\\"-\\\",\\\"status\\\":\\\"200\\\"}\",\"offset\":9248,\"source\":\"/var/log/nginx/access.log\",\"type\":\"log\"}"
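The value popped from redis is a filebeat envelope whose `message` field holds the original nginx JSON line as a string, so it must be decoded twice. A sketch using a trimmed-down version of the record above:

```python
import json

# Trimmed-down version of the record RPOP returned above: a filebeat envelope
# whose "message" field contains the raw nginx log line as a string.
raw = json.dumps({
    "@timestamp": "2019-03-03T10:04:39.966Z",
    "fields": {"type": "nginx-accesslog"},
    "input_type": "log",
    "message": '{"clientip":"172.18.135.5","url":"/index.html","status":"200"}',
    "source": "/var/log/nginx/access.log",
})

envelope = json.loads(raw)                # first decode: the filebeat envelope
access = json.loads(envelope["message"])  # second decode: the nginx log line
print(envelope["fields"]["type"], access["status"])
```

This is why logstash later keys its conditional on `[fields][type]`: that field lives in the outer envelope, not in the nginx line itself.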

Screenshot to add: make sure the key/db mappings line up correctly

5: Configure logstash to read the logs above from redis

~]# vim /etc/logstash/conf.d/redis-systemlog-es.conf
input {
  redis {
    host => "192.168.15.12"   # redis server address
    port => "6379"
    password => "123456"      # redis password
    db => "1"                 # redis db to read from; must match the db filebeat stores into
    key => "system-log-5612"  # KEY to fetch; must match the KEY filebeat used when writing to redis
    data_type => "list"
  }
}

output {
  if [fields][type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["192.168.15.11:9200"]   # ES host address
      index => "system-log-1512"        # index name to store into
      codec => "json"
    }
  }
}

~]# systemctl restart logstash  # restart the logstash service
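The conditional in the output block only ships events whose `fields.type` is `nginx-accesslog` to ES. The routing decision can be sketched in Python (a hypothetical helper for illustration, not logstash code; the index name is the one from the config above):

```python
def target_index(event, index="system-log-1512"):
    """Mimic the logstash conditional: return the ES index for an event,
    or None if the event should not be shipped at all."""
    if event.get("fields", {}).get("type") == "nginx-accesslog":
        return index
    return None

print(target_index({"fields": {"type": "nginx-accesslog"}}))  # system-log-1512
print(target_index({"fields": {"type": "syslog"}}))           # None
```

Events of other types silently fall through, so if the index never appears in ES, first check that the `fields.type` value in filebeat matches the string in the conditional exactly.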

6: Check the logstash service log

7: Check whether redis still has data

8: Verify in the ES head plugin that the index was created

9: Add the index in the Kibana UI

10: Verify the system logs in Kibana

11: Monitor the redis list length

In a real environment, large amounts of data may pile up in redis while logstash, for whatever reason, fails to fetch the logs in time. This consumes a large amount of memory on the redis server and can even approach the point where memory is exhausted:

12: Checking the redis log queue length shows a large backlog of logs in redis
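One way to watch the backlog is to poll LLEN on the key filebeat writes to and alert past a threshold. A hedged sketch: `check_backlog` and the threshold value are hypothetical; `client` is any object with an `llen()` method (`redis.StrictRedis` in production; a stub below keeps the sketch self-contained):

```python
def check_backlog(client, key="system-log-5612", threshold=100000):
    """Return (length, alert): alert is True when the list has grown past
    the threshold, i.e. logstash is not draining the queue fast enough."""
    length = client.llen(key)
    return length, length > threshold

# Stub standing in for redis.StrictRedis so the sketch runs without a server.
class FakeRedis:
    def __init__(self, n):
        self.n = n
    def llen(self, key):
        return self.n

print(check_backlog(FakeRedis(250000)))  # (250000, True)
```

In production you would wire this into whatever monitoring system you use, feeding it the same host, password, db, and key as the filebeat `output.redis` section.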
