ELK Quick Start
# Setting Up the ELK Environment
Before we start, we need two machines: 192.168.56.101 and 192.168.56.102.
WinSCP and IntelliJ IDEA are recommended for editing the configuration files.
`.` denotes the root directory of the current program (elasticsearch, kibana, ...).
## 192.168.56.101
elasticsearch: /opt/elasticsearch
kibana: /opt/kibana
The install locations can be anywhere.
## 192.168.56.102
logstash: /opt/logstash
filebeat: /opt/filebeat
The install locations can be anywhere.
## System Configuration
Since Elasticsearch refuses to run as root (and Kibana and Filebeat should not run as root either), we need to adjust a few system settings.
### Creating a User and Granting Permissions
```shell
useradd elk
passwd elk
chown -R elk [directory that needs ownership]
```
### /etc/hosts
To make testing easier, add the following entries to the hosts file:
```
192.168.56.101 elastic.local.com kibana.local.com
192.168.56.102 logstash.local.com filebeat.local.com
```
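After editing, the entries can be sanity-checked with a standard Linux `getent` lookup (the fallback message here is just for illustration):

```shell
# Look up the test hostnames; prints the /etc/hosts entries if they resolve
getent hosts elastic.local.com logstash.local.com \
  || echo "names not resolvable yet - add the entries above to /etc/hosts"
```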
### /etc/security/limits.conf
Add the following configuration:
```
elk soft nofile 65536
elk hard nofile 65536
elk soft memlock unlimited
elk hard memlock unlimited
```
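After logging in again as the elk user, you can verify that the limits took effect (a quick check; the reported values depend on your session):

```shell
# Open-file limit for the current session; should report 65536 for the elk user
ulimit -n
# Max locked memory; should report unlimited for the elk user
ulimit -l
```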
### /etc/security/limits.d/90-nproc.conf
Modify the following setting:
```
* soft nproc 4096
```
Alternatively, you can add:
```
elk soft nproc 4096
```
### /etc/sysctl.conf
Add the following configuration:
```
vm.max_map_count=655360
```
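After editing, reload the kernel parameters and confirm the value Elasticsearch will check at startup (applying requires root; reading does not):

```shell
# Reload kernel parameters after editing (run as root):
#   sysctl -p
# Confirm the current value:
cat /proc/sys/vm/max_map_count
```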
# elasticsearch
## Installation
```shell
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.2-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.4.2-linux-x86_64.tar.gz
```
## Configuration
### ./config/jvm.options
- Set the heap size according to the memory available on the machine.
- UseConcMarkSweepGC is deprecated (it logs a warning on JDK 11), so we also switch to G1.
```
-Xms1g                   ->  -Xms512m
-Xmx1g                   ->  -Xmx512m
-XX:+UseConcMarkSweepGC  ->  -XX:+UseG1GC
```
### ./config/elasticsearch.yml
```yaml
cluster.name: elk
node.name: elastic1
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
# Lock the heap in memory to avoid performance loss from swapping
bootstrap.memory_lock: true
network.host: elastic.local.com
http.port: 9200
# Cluster discovery
discovery.seed_hosts: ["elastic.local.com"]
# Values must match node.name
cluster.initial_master_nodes: ["elastic1"]
# Avoid bootstrap errors
bootstrap.system_call_filter: false
```
### Setting Passwords
#### Configuration
Add the following to elasticsearch.yml, then start Elasticsearch:
```yaml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
```
#### Set the passwords
Once it is running, execute the following command; you will be prompted to set a password for every built-in user (elastic, kibana, ...):
```shell
./bin/elasticsearch-setup-passwords interactive
```
## Running
```shell
./bin/elasticsearch
# run in the background
./bin/elasticsearch -d
```
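Once the node is up, a quick smoke test from another shell (add `-u elastic:<password>` once security is enabled; the fallback message is just for illustration):

```shell
# Should return a JSON banner with the cluster name and version
curl -s http://elastic.local.com:9200 || echo "node not reachable yet"
```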
# kibana
## Installation
```shell
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-linux-x86_64.tar.gz
tar -zxvf kibana-7.4.2-linux-x86_64.tar.gz
```
## Configuration
### ./config/kibana.yml
```yaml
server.port: 5601
server.host: "kibana.local.com"
elasticsearch.hosts: ["http://elastic.local.com:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"
```
## Running
```shell
./bin/kibana
```
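Kibana 7.x has no daemon flag like Elasticsearch's `-d`; a common workaround is `nohup` (a sketch; the log file path is an assumption):

```shell
# Keep Kibana running after the shell exits; logs go to a file of your choosing
nohup ./bin/kibana > /opt/kibana/kibana.out 2>&1 &
```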
# logstash
## Configuration
### ./config/jvm.options
Likewise, we need to modify jvm.options:
- Set the heap size according to the memory available on the machine.
- UseConcMarkSweepGC is deprecated (it logs a warning on JDK 11), so we also switch to G1.
```
-Xms1g                   ->  -Xms512m
-Xmx1g                   ->  -Xmx512m
-XX:+UseConcMarkSweepGC  ->  -XX:+UseG1GC
```
### ./config/logstash.yml
```yaml
http.host: "logstash.local.com"
http.port: 9600
```
## Running
### hello world
After Logstash starts, type hello world in its console; you will see the filtered result printed back:
```shell
bin/logstash -e 'input { stdin { } } output { stdout {} }'
```
### elasticsearch
Start Elasticsearch and Kibana first. Once Logstash is up, type hello world in its console:
```shell
bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug} elasticsearch { hosts => ["elastic.local.com:9200"] index => "debug-%{+YYYY-MM-dd}"}}'
```
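You can confirm that the debug index was created (again, add `-u elastic:<password>` if security is enabled; the fallback message is illustrative):

```shell
# Lists any debug-* indices along with their doc counts
curl -s 'http://elastic.local.com:9200/_cat/indices/debug-*?v' || echo "cluster not reachable"
```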
## Using Filebeat with Logstash
### Installation
```shell
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.4.2-linux-x86_64.tar.gz
tar -zxvf filebeat-7.4.2-linux-x86_64.tar.gz
```
### Configuration
#### filebeat/filebeat.yml
Don't copy the configuration file wholesale; find the relevant entries and modify them:
```yaml
# Configure the log sources; multiple inputs can be defined
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/logs/*.log
  # Multiline settings for log entries that span more than one line
  multiline.pattern: ^\[
  multiline.negate: false
  multiline.match: before
name: filebeat.local.com
# Kibana address
setup.kibana:
  host: "kibana.local.com:5601"
# Logstash addresses; multiple hosts can be listed
output.logstash:
  hosts: ["logstash.local.com:5044"]
```
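The multiline settings join continuation lines (those not starting with `[`) into a single event. The pattern itself can be sanity-checked with grep before wiring it into Filebeat:

```shell
# Build a two-event sample: each event starts with '[' and has one continuation line
printf '%s\n' \
  '[2019-11-06 16:18:03:773] [INFO ] first event' \
  'continuation of first event' \
  '[2019-11-06 16:18:04:000] [INFO ] second event' > /tmp/multiline.demo
# Count the event-start lines matched by multiline.pattern (^\[)
grep -c '^\[' /tmp/multiline.demo   # → 2
```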
#### /opt/logs/test.log
Prepare a log file test.log with the following content and save it to /opt/logs:
```
[2019-11-06 16:18:03:773] [INFO ] [method:org.apache.zookeeper.Environment.logEnv(Environment.java:100)]
Client environment:java.io.tmpdir=E:\soft\apache-tomcat-9.0.22\temp
[2019-11-06 16:18:03:773] [INFO ] [method:org.apache.zookeeper.Environment.logEnv(Environment.java:100)]
Client environment:java.compiler=<NA>
[2019-11-06 16:18:03:773] [INFO ] [method:org.apache.zookeeper.Environment.logEnv(Environment.java:100)]
Client environment:os.name=Windows 10
[2019-11-06 16:18:03:773] [INFO ] [method:org.apache.zookeeper.Environment.logEnv(Environment.java:100)]
Client environment:os.arch=amd64
[2019-11-06 16:18:03:773] [INFO ] [method:org.apache.zookeeper.Environment.logEnv(Environment.java:100)]
Client environment:os.version=10.0
[2019-11-06 16:18:03:773] [INFO ] [method:org.apache.zookeeper.Environment.logEnv(Environment.java:100)]
Client environment:user.name=Administrator
[2019-11-06 16:18:03:773] [INFO ] [method:org.apache.zookeeper.Environment.logEnv(Environment.java:100)]
Client environment:user.home=C:\Users\Administrator
[2019-11-06 16:18:03:773] [INFO ] [method:org.apache.zookeeper.Environment.logEnv(Environment.java:100)]
Client environment:user.dir=E:\soft\apache-tomcat-9.0.22\bin
[2019-11-06 16:18:03:774] [INFO ] [method:org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:442)]
Initiating client connection, connectString=47.112.124.158:2181 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@4920186e
```
#### logstash/logstash.conf
The file name is arbitrary; it holds Logstash's input, filter, and output configuration.
> Here I save logstash.conf in the Logstash root directory.
```
input {
  beats {
    port => 5044
  }
  # Also accept log lines typed on the console
  stdin {}
}
filter {
  grok {
    patterns_dir => ['/opt/patterns']
    match => {
      "message" => '\[%{TIMESTAMP:date}\]\s+\[%{LOGLEVEL:level}\s+\]\s+\[%{WORD}:%{JAVACLASS:class}\.%{INVOKE_METHOD:method}\(%{JAVAFILE}:%{NUMBER:line}\)\](?:\s+)?%{WORD:msg}?'
    }
  }
  date {
    match => ['date', 'yyyy-MM-dd HH:mm:ss:SSS']
    locale => 'zh_CN'
  }
  mutate {
    # Remove the host field to avoid errors when it cannot be parsed
    remove_field => 'host'
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    id => 'elasticsearch1'
    hosts => ['elastic.local.com:9200']
    # user and password are unnecessary if security is not enabled
    user => 'elastic'
    password => 'elastic'
    index => 'log-%{+YYYY-MM-dd}'
  }
}
```
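The grok filter above references a `patterns_dir` of `/opt/patterns` plus two patterns, `TIMESTAMP` and `INVOKE_METHOD`, that are not in grok's built-in set, so they must be defined there. A plausible definitions file matching the sample log format (hypothetical contents; adjust to your own logs):

```
# /opt/patterns/java
TIMESTAMP %{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}:%{INT}
INVOKE_METHOD (?:<init>|[a-zA-Z$_][a-zA-Z$_0-9]*)
```

The `<init>` alternative covers constructor frames such as `ZooKeeper.<init>` in the sample log.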
### Starting
To make repeated testing easier, you can reset Filebeat's read position with `rm -rf /opt/filebeat/data/registry` (stop Filebeat before doing so).
```shell
cd /opt/logstash
./bin/logstash -f ./logstash.conf
cd /opt/filebeat
./filebeat -e
```
# Viewing the Results
As before, add the hosts entries to the machine you will use to access Kibana.
1. Open http://kibana.local.com:5601; if security is enabled, log in as the elastic user with the password you set earlier.
2. Menu >> Management >> Index Management: check that an index like log-YYYY-MM-dd exists.
3. Menu >> Management >> Index Patterns >> Create index pattern >> log-* >> Next >> @timestamp >> Create index pattern.
4. Menu >> Discover: you can now see the imported logs.
For a walkthrough, see: https://www.elastic.co/cn/webinars/getting-started-kibana?baymax=rtp&elektra=docs&storm=top-video
# Startup Order
1. elasticsearch
2. kibana
3. logstash
4. filebeat