1. Elasticsearch configuration
- elasticsearch.yml
path.data: /data01/esdata,/data02/esdata
RHEL 6.5 is not supported because of a kernel issue, so the setting below was added.
The number below was set to match the number of disks configured above.
bootstrap.system_call_filter: false
node.max_local_storage_nodes: 10
network.bind_host: * * * *
network.publish_host: * * * *
2. Logstash configuration
- logstash.yml
Settings added when installing X-Pack
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: *******
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://hostname:9200
- pipelines.yml
- pipeline.id: job1
  path.config: "path"
  pipeline.workers: 10
  queue.type: memory
- pipeline.id: job2
  path.config: "path"
  pipeline.workers: 10
  queue.type: memory
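Each entry in pipelines.yml becomes an independent pipeline with its own config path, worker count, and queue. A minimal Python sketch of that structure, with ids and paths mirroring the placeholder entries above:

```python
from dataclasses import dataclass

# Sketch of how Logstash interprets pipelines.yml: each list entry
# becomes an isolated pipeline with its own config, workers, and queue.
# The ids and "path" values are the placeholders from the notes above.
@dataclass
class Pipeline:
    pipeline_id: str
    path_config: str
    workers: int
    queue_type: str

pipelines = [
    Pipeline("job1", "path", 10, "memory"),
    Pipeline("job2", "path", 10, "memory"),
]

# queue.type "memory" keeps in-flight events in RAM (lost on crash);
# "persisted" would spill them to disk instead.
for p in pipelines:
    print(p.pipeline_id, p.workers, p.queue_type)
```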
- Configuration for job1 (uses the elastic account created when X-Pack was installed)
input {
  file {
    close_older => 0
    sincedb_path => "/dev/null"
    path => "path to the log files"
    start_position => "beginning"
    exclude => "*.gz"
    max_open_files => 9999999
    codec => multiline {
      max_lines => 9000000
      max_bytes => "100 MiB"
      pattern => "^%{TIMESTAMP_ISO8601} %{LOGLEVEL} "
      what => "previous"
      negate => true
    }
  }
}
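With `negate => true` and `what => "previous"`, any line that does not start with a timestamp is appended to the previous event, so stack traces stay attached to their log line. A rough Python sketch of that merging logic (the regex is a simplified stand-in for `TIMESTAMP_ISO8601`, and the sample log lines are made up):

```python
import re

# Simplified stand-in for %{TIMESTAMP_ISO8601} at the start of a line.
TS = re.compile(r"^\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}")

def merge_multiline(lines):
    """Mimic multiline with negate => true, what => "previous"."""
    events = []
    for line in lines:
        if TS.match(line) or not events:
            events.append(line)          # timestamped line starts a new event
        else:
            events[-1] += "\n" + line    # continuation, e.g. a stack trace
    return events

log = [
    "2024-01-01 10:00:00 ERROR something failed",
    "  at com.example.Foo(Foo.java:1)",
    "2024-01-01 10:00:01 INFO ok",
]
print(merge_multiline(log))  # two events; the trace joins the first one
```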
filter {
  mutate { strip => "message" }
  mutate { gsub => ["message", "\n", "<<ENTER>>"] }
  mutate { gsub => ["message", "\t", "<<TAB>>"] }
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglvl} %{GREEDYDATA:msg}" }
  }
}
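The filter chain strips the message, flattens newlines and tabs into `<<ENTER>>`/`<<TAB>>` markers, then greps out timestamp, level, and the rest. A hedged Python sketch of the same steps (the regex approximates the grok patterns; the sample event is invented):

```python
import re

# Approximation of the grok expression above:
# %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglvl} %{GREEDYDATA:msg}
GROK = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?)"
    r" (?P<loglvl>[A-Z]+) (?P<msg>.*)"
)

def apply_filters(message):
    message = message.strip()                     # mutate { strip }
    message = message.replace("\n", "<<ENTER>>")  # gsub "\n" -> <<ENTER>>
    message = message.replace("\t", "<<TAB>>")    # gsub "\t" -> <<TAB>>
    m = GROK.match(message)                       # grok
    return m.groupdict() if m else {"tags": ["_grokparsefailure"]}

event = apply_filters("2024-01-01 10:00:00 ERROR boom\n\tat Foo.java:1\n")
print(event["loglvl"], event["msg"])
```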
output {
  elasticsearch {
    hosts => ["hostname","hostname"]
    index => "elasticsearch index"
    user => "elastic"
    password => "*****"
  }
}
3. Kibana configuration
Nothing special here, so skipping it.