ELK Deployment
The ELK logging platform implements centralized log collection and provides scenario-based log ingestion and querying. Logstash collects the logs, Elasticsearch stores them and serves search queries, and Kibana provides statistics and visualization.
The following describes how to deploy ELK as a cluster:
1. Preparation
1.1 Prepare the images
The following three container images are required; pull and export them in advance:
- elasticsearch:7.10.1
- kibana:7.10.1
- logstash:7.10.1
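The pull-and-export step can be scripted. A minimal sketch, assuming the build host can reach a registry serving these images; the tar file names match those used by the import commands in section 2.1:

```shell
# Pull the three images and export each one to a tar file named after the
# image (elasticsearch.tar, kibana.tar, logstash.tar).
IMAGES="elasticsearch:7.10.1 kibana:7.10.1 logstash:7.10.1"

export_images() {
  for img in $IMAGES; do
    docker pull "$img"
    # strip the ":tag" suffix to build the tar file name
    docker save -o "${img%%:*}.tar" "$img"
  done
}
```

Run export_images on a machine with registry access, then copy the resulting .tar files to the target servers.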
1.2 Prepare the servers
At least three servers are required for a highly available cluster installation; when resources are tight, they can be shared with other components.
1.3 Install Docker
Install Docker and Docker Compose as described in the basic environment installation section of this document.
1.4 Prepare the load balancer
See the Nginx deployment section.
2. Deploy the Cluster
2.1 Import the images
Copy the container images exported in the previous step to the servers in the hospital environment and import them:
docker load -i elasticsearch.tar
docker load -i kibana.tar
docker load -i logstash.tar
2.2 Configuration files
By default, ELK is installed on the data disk under /hos/:
mkdir -p /hos/elk
cd /hos/elk
Create the configuration file docker-compose.yml with the following content:
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.10.1
    container_name: elasticsearch
    restart: always
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/config:/usr/share/elasticsearch/config
      - ./elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
      - /etc/localtime:/etc/localtime
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: kibana:7.10.1
    container_name: kibana
    restart: always
    volumes:
      - ./kibana/config:/usr/share/kibana/config
      - /etc/localtime:/etc/localtime
    ports:
      - 5601:5601
  logstash:
    image: logstash:7.10.1
    container_name: logstash
    restart: always
    volumes:
      - ./logstash/logstash-springboot.conf:/usr/share/logstash/pipeline/logstash.conf
      - /etc/localtime:/etc/localtime
    ports:
      - 4560:4560
      - 5044:5044
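Before starting anything, the compose file can be syntax-checked. A quick sanity check, assuming docker-compose is on the PATH and the command is run from /hos/elk:

```shell
# Render and validate docker-compose.yml; -q suppresses the rendered output,
# so nothing is printed unless the file has errors.
validate_compose() {
  docker-compose -f docker-compose.yml config -q && echo "docker-compose.yml OK"
}
```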
Create the Elasticsearch configuration file:
mkdir -p elasticsearch/config
vim elasticsearch/config/elasticsearch.yml
Enter the following configuration:
cluster.name: "es-cluster"
network.host: 0.0.0.0
network.publish_host: 10.241.12.4
node.name: es-1
node.master: true
node.data: true
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: ["10.241.12.4","10.241.12.5","10.241.12.6"]
cluster.initial_master_nodes: ["10.241.12.4"]
# Allow cross-origin requests
http.cors.enabled: true
http.cors.allow-origin: "*"
The following parameters must be adjusted for your environment:
- network.publish_host: the IP address of the current Elasticsearch server;
- node.name: a name for this node, unique within the cluster (es-1, es-2, es-3, ...);
- discovery.seed_hosts: the addresses of all nodes in the Elasticsearch cluster;
- cluster.initial_master_nodes: the initial master node IP used for cluster bootstrap; any cluster node IP will do.
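Since elasticsearch.yml differs per node only in these parameters, the three files can be generated from one template. A sketch, assuming the example node IPs above and node names es-1 through es-3; adjust both to your environment:

```shell
# Generate one elasticsearch.yml per cluster node under node-N/config/.
# Only network.publish_host and node.name vary between nodes; everything
# else matches the example configuration above.
NODES="10.241.12.4 10.241.12.5 10.241.12.6"
SEEDS='["10.241.12.4","10.241.12.5","10.241.12.6"]'

i=1
for ip in $NODES; do
  mkdir -p "node-$i/config"
  cat > "node-$i/config/elasticsearch.yml" <<EOF
cluster.name: "es-cluster"
network.host: 0.0.0.0
network.publish_host: $ip
node.name: es-$i
node.master: true
node.data: true
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: $SEEDS
cluster.initial_master_nodes: ["10.241.12.4"]
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
  i=$((i + 1))
done
```

Copy each node-N/config/elasticsearch.yml to /hos/elk/elasticsearch/config/ on the corresponding server.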
Create the Logstash configuration file:
mkdir logstash
vim logstash/logstash-springboot.conf
Enter the following configuration:
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "10.241.12.25:9200"
    index => "%{[fields][app]}-%{+YYYY.MM.dd}"
  }
}
Change the hosts value of the elasticsearch output to the actual Elasticsearch address in your environment.
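Once Logstash is running, the json_lines TCP input can be smoke-tested by pushing one JSON line to port 4560 with nc. A sketch; 10.241.12.4 stands in for any Logstash node:

```shell
# Send one JSON-encoded log event to Logstash's TCP input. The fields.app
# value feeds the %{[fields][app]} index name pattern in the output section.
LOGSTASH_HOST="${LOGSTASH_HOST:-10.241.12.4}"

send_test_log() {
  printf '%s\n' '{"fields":{"app":"demo"},"message":"hello from elk smoke test"}' \
    | nc -w 1 "$LOGSTASH_HOST" 4560
}
```

After calling send_test_log, a demo-YYYY.MM.dd index should appear in Elasticsearch.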
Create the Kibana configuration file:
mkdir -p kibana/config
vim kibana/config/kibana.yml
Enter the following configuration:
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://10.241.12.4:9200","http://10.241.12.5:9200","http://10.241.12.6:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
Change elasticsearch.hosts to the actual Elasticsearch access addresses; for a cluster deployment, list the addresses of all Elasticsearch nodes.
2.3 Start the services
Repeat the preparation above on each of the three servers, adjusting the per-node parameters in elasticsearch.yml, then start the services on every node:
docker-compose up -d
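After the containers are up on all nodes, cluster formation can be verified from any node. A sketch, assuming curl is installed on the host:

```shell
# Ask the local node for overall cluster health. With all three nodes
# joined, status should be "green" and number_of_nodes should be 3.
ES_HOST="${ES_HOST:-localhost}"

cluster_status() {
  curl -s "http://$ES_HOST:9200/_cluster/health" \
    | grep -o '"status":"[a-z]*"'
}
```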
2.4 Configure the load balancer
Add the following TCP proxies to the nginx configuration file:
stream {
  upstream elasticsearch {
    server 10.241.12.4:9200;
    server 10.241.12.5:9200;
    server 10.241.12.6:9200;
  }
  upstream logstash {
    server 10.241.12.4:4560;
    server 10.241.12.5:4560;
    server 10.241.12.6:4560;
  }
  upstream logstash-beat {
    server 10.241.12.4:5044;
    server 10.241.12.5:5044;
    server 10.241.12.6:5044;
  }
  upstream kibana {
    server 10.241.12.4:5601;
    server 10.241.12.5:5601;
    server 10.241.12.6:5601;
  }
  server {
    listen 9200;
    proxy_pass elasticsearch;
  }
  server {
    listen 4560;
    proxy_pass logstash;
  }
  server {
    listen 5044;
    proxy_pass logstash-beat;
  }
  server {
    listen 5601;
    proxy_pass kibana;
  }
}
3. Access Verification
- Kibana: http://<load-balancer-ip>:5601
- Elasticsearch: http://<load-balancer-ip>:9200
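Both endpoints can also be checked from the command line through the load balancer. A sketch; pass the real load balancer address as the argument:

```shell
# Probe Elasticsearch and Kibana through the load balancer.
check_elk() {
  lb="$1"
  # the ES root endpoint returns cluster info as JSON
  curl -s "http://$lb:9200" | grep -q '"cluster_name"' && echo "elasticsearch OK"
  # Kibana's status API answers once the UI is up
  curl -s "http://$lb:5601/api/status" >/dev/null && echo "kibana OK"
}
```

Usage: check_elk <load-balancer-ip>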