## This page is for demo purposes only

For a version much closer to production, please check here.

## F5 configuration

- Refer to the F5-side configuration section of https://www.myf5.net/post/2491.htm

- Only the Response side of the request logging profile is used here. In my demo environment the Template field is filled in with:

      $CLIENT_IP $DATE_NCSA $VIRTUAL_IP $VIRTUAL_NAME $VIRTUAL_POOL_NAME $SERVER_IP $SERVER_PORT "$HTTP_PATH" "$HTTP_REQUEST" $HTTP_STATCODE $RESPONSE_SIZE $RESPONSE_MSECS "$Referer" "${User-agent}"

  The request logging profile uses the TCP protocol.

  For more variables that can be output in the profile template, see https://support.f5.com/kb/en-us/products/big-ip-aam/manuals/product/aam-concepts-12-1-0/29.html

  Note: if the virtual server has an ASM or DoS profile attached, the VIRTUAL_POOL_NAME, SERVER_IP and SERVER_PORT variables in the template above may come back empty. Open a case with F5 and reference ID 640758.
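
For reference, with the template above each request should reach Logstash as a single line roughly like the following. All of the values here are made-up examples, and the exact timestamp formatting depends on how $DATE_NCSA is rendered on your system:

```
10.1.1.100 [22/Mar/2018:10:15:30 +0800] 10.1.1.10 /Common/vs_demo_http /Common/pool_demo 192.168.1.21 80 "/index.html" "GET /index.html HTTP/1.1" 200 5120 12 "http://example.com/" "Mozilla/5.0"
```

This is the line format that the grok patterns in the Logstash configuration below are written to parse.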

## Logstash configuration file

```
input {
  tcp {
    port => 514
    type => 'f5-request'
  }
}

filter {
  if [type] == "f5-request" and [message] =~ /user1/ {
    grok {
        match => { "message" => "%{IP:clientip} \[%{HTTPDATE:timestamp}\] %{IP:virtual_ip} %{DATA:virtual_name} %{DATA:virtual_pool_name} %{DATA:server} %{NUMBER:server_port} \"%{DATA:path}\" \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response:int} %{NUMBER:bytes:int} %{NUMBER:response_ms:int} %{QS:referrer} %{QS:agent}"}
        add_field => [ "user", "user1" ]
    }
  } else if [type] == "f5-request" and [message] =~ /user2/ {
    grok {
        match => { "message" => "%{IP:clientip} \[%{HTTPDATE:timestamp}\] %{IP:virtual_ip} %{DATA:virtual_name} %{DATA:virtual_pool_name} %{DATA:server} %{NUMBER:server_port} \"%{DATA:path}\" \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response:int} %{NUMBER:bytes:int} %{NUMBER:response_ms:int} %{QS:referrer} %{QS:agent}"}
        add_field => [ "user", "user2" ]
    }
  } else if [type] == "f5-request" and [message] =~ /user3/ {
     grok {
        match => { "message" => "%{IP:clientip} \[%{HTTPDATE:timestamp}\] %{IP:virtual_ip} %{DATA:virtual_name} %{DATA:virtual_pool_name} %{DATA:server} %{NUMBER:server_port} \"%{DATA:path}\" \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response:int} %{NUMBER:bytes:int} %{NUMBER:response_ms:int} %{QS:referrer} %{QS:agent}"}
        add_field => [ "user", "user3" ]
    }
  } else if [type] == "f5-request" {
    grok {
        match => { "message" => "%{IP:clientip} \[%{HTTPDATE:timestamp}\] %{IP:virtual_ip} %{DATA:virtual_name} %{DATA:virtual_pool_name} %{DATA:server} %{NUMBER:server_port} \"%{DATA:path}\" \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response:int} %{NUMBER:bytes:int} %{NUMBER:response_ms:int} %{QS:referrer} %{QS:agent}"}
        add_field => [ "user", "anonymous" ]
    }
  }
  geoip {
        source => "clientip"
        target => "geoip"
    }
}



output {
  elasticsearch {
    hosts => ["192.168.214.130:9200"]
    index => "f5-request-%{+YYYY.MM.dd}"
  }
}
```
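
Before going further, the configuration can be syntax-checked without starting the pipeline. This is just a convenience step; the config file path below matches the one used later in this document, so adjust it if yours differs:

```
# validate the pipeline configuration and exit without starting Logstash
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/f5-request-logging.conf --config.test_and_exit
```

For troubleshooting, a temporary stdout { codec => rubydebug } output can also be added alongside the elasticsearch output so that parsed events are printed to the console.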

Explanation:

- To simulate capturing the user, the filter contains a few special-case rules: if one of those users appears in the message, a user field is added to the document stored in Elasticsearch. In a real environment you should obtain the actual user ID in whatever way fits your situation.
- The grok filters above match the request logging profile template used in this demo; if the raw log output format is changed, the grok patterns need to be adjusted accordingly.
- The geoip filter uses the default GeoIP database; no custom database is specified. On my system, specifying a custom database produced errors. If you run into problems here, search for the relevant documentation yourself (Google works, and Baidu also turns up plenty of articles on this topic).
- For the map visualizations in the Kibana dashboard to work, the data stored in Elasticsearch must contain a field of type geo_point. The demo relies on dynamic mapping to generate the Elasticsearch field mappings; the built-in logstash-* mapping template does include geo_point, but since the index name f5-request is used here, an additional custom Elasticsearch mapping template is required. Refer to the mapping template provided in the github repo (f5-request_template.json) and load it by running: curl -XPUT http://localhost:9200/_template/f5-request?pretty -d @f5-request_template.json (a rough sketch of such a template is shown after this list).
- If the relevant index was already defined in Kibana before the mapping template was changed, refresh that index in Kibana's management menu so that the field types of newly ingested data are recognized correctly.
- Replace the elasticsearch address in the output section with your own.
- The Logstash configuration file above is provided in the github repo; start it with /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/f5-request-logging.conf
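
The f5-request_template.json shipped in the github repo is the authoritative version of the mapping template mentioned above. Purely as an illustration of the idea, a minimal Elasticsearch 5.x-style index template that maps geoip.location to geo_point could look roughly like this:

```
{
  "template": "f5-request-*",
  "mappings": {
    "_default_": {
      "properties": {
        "geoip": {
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}
```

Note that on Elasticsearch 6.x and later the curl call also needs -H 'Content-Type: application/json', and the template field is replaced by index_patterns.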

To run Logstash as a service, execute the following commands:

```
/usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
systemctl enable logstash.service
systemctl restart logstash
```

After everything is configured, only the configuration files actually used for startup should be kept under /etc/logstash/conf.d/, because by default Logstash reads and merges all conf files in that directory.
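
Once the service is running, a quick way to confirm that events are actually being indexed is to check the service status and look for the daily index. The Elasticsearch address below is the demo host from the output section; substitute your own:

```
# confirm the logstash service started cleanly
systemctl status logstash
# list the f5-request daily indices in Elasticsearch (demo address; use your own)
curl 'http://192.168.214.130:9200/_cat/indices/f5-request-*?v'
```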

## Kibana configuration

The Kibana configuration is relatively simple; a bit of exploring with Google or Baidu will get you through it.

The github repo contains the JSON files for the visualizations and the dashboard already defined in this demo; they can be imported directly into Kibana.