The previous post completed a basic CI/CD pipeline, but one important piece is still missing: log analysis. The best way to know what is happening in a system is to read its logs!
Install Java and Elasticsearch 6.4.0 from the official RPM:
yum install java
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.rpm
rpm -ivh elasticsearch-6.4.0.rpm
systemctl start elasticsearch
systemctl enable elasticsearch
Alternatively, install from a yum repository (note: these repo settings target the older 2.x series). Import the yum GPG key:
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
Create the yum repo file:
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install via yum:
yum -y install elasticsearch
Restrict connections to the local host only:
vi /etc/elasticsearch/elasticsearch.yml
network.host: localhost
Start and enable the service:
systemctl start elasticsearch
systemctl enable elasticsearch
Adjust the memory Elasticsearch uses; set it to about 50% of the host's RAM.
For example, on a host with 600 MB of RAM, give Elasticsearch 300m:
vi /etc/elasticsearch/jvm.options
-Xms300m # Xms: minimum heap size
-Xmx300m # Xmx: maximum heap size
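The 50% figure can be computed directly from the host rather than by hand; a minimal sketch, assuming a Linux host with /proc/meminfo:

```shell
# Print half of the host's total memory, in MB, as ready-to-paste
# -Xms/-Xmx lines for jvm.options (assumes Linux /proc/meminfo).
half_mb=$(awk '/^MemTotal:/ {print int($2 / 2 / 1024)}' /proc/meminfo)
echo "-Xms${half_mb}m"
echo "-Xmx${half_mb}m"
```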
To allow access from outside the host, bind to all interfaces:
vi /etc/elasticsearch/elasticsearch.yml
network.bind_host: 0.0.0.0 # bind address
http.port: 9200 # bind port, default 9200
curl "http://127.0.0.1:9200/_cat/nodes"
127.0.0.1 6 96 13 0.35 0.31 0.14 mdi * iyDpTNc   <= a response like this means the node is up
Create the yum repo file:
vi /etc/yum.repos.d/kibana.repo
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install via yum:
yum -y install kibana
Restrict connections to the local host only:
vi /opt/kibana/config/kibana.yml
server.host: "localhost"
Start and enable the service:
systemctl start kibana
systemctl enable kibana
yum -y install nginx httpd-tools
Set a password for accessing Kibana:
htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
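If httpd-tools is unavailable, an equivalent entry for htpasswd.users can be generated with openssl instead; a sketch (the username and password here are placeholders):

```shell
# Generate an APR1 (htpasswd-compatible) password hash with openssl
# and print the user:hash line that would go into htpasswd.users.
user=kibanaadmin
hash=$(openssl passwd -apr1 'changeme')
echo "$user:$hash"
```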
Configure nginx:
vi /etc/nginx/nginx.conf
server {
listen 80;
server_name _;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Start and enable nginx:
systemctl start nginx
systemctl enable nginx
Disable SELinux, or run the following:
setsebool -P httpd_can_network_connect 1
Open http://localhost in a browser
Install Java and Logstash 6.4.0 from the official RPM:
yum install java
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.rpm
rpm -ivh logstash-6.4.0.rpm
/usr/share/logstash/bin/system-install
systemctl start logstash
systemctl status logstash
Alternatively, install from a yum repository (note: these repo settings target the older 2.2 series):
vi /etc/yum.repos.d/logstash.repo
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
Install via yum:
yum -y install logstash
Generate a certificate using the server's IP:
vi /etc/pki/tls/openssl.cnf
subjectAltName = IP: ELK_server_private_ip ## add this under the [ v3_ca ] section
Create the certificate:
cd /etc/pki/tls
openssl req -subj '/CN=ELK_server_dns/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
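Before copying the certificate to clients, its subject and validity period can be checked with `openssl x509`. A self-contained sketch using a throwaway certificate and a placeholder CN (not the real ELK server name):

```shell
# Generate a throwaway self-signed cert in a temp dir (placeholder CN),
# then inspect its subject with openssl x509 before distributing it.
tmp=$(mktemp -d)
openssl req -subj '/CN=elk.example.com/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout "$tmp/logstash-forwarder.key" \
  -out "$tmp/logstash-forwarder.crt" 2>/dev/null
subject=$(openssl x509 -in "$tmp/logstash-forwarder.crt" -noout -subject)
echo "$subject"
rm -rf "$tmp"
```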
Configure Logstash:
vi /etc/logstash/conf.d/02-beats-input.conf
input {
beats {
port => 5044
ssl => true
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
Configure the log filter file and its formatting. grok filters parse and structure the raw log lines; many sample patterns can be downloaded from GitHub for reference.
vi /etc/logstash/conf.d/10-syslog-filter.conf
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
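To sanity-check what the grok pattern above extracts, the same fields can be pulled from a sample syslog line with PCRE regexes. This is only a rough shell approximation of the SYSLOGHOST and program captures, and the sample line is made up:

```shell
# Extract hostname and program from a sample syslog line with grep -oP,
# approximating the SYSLOGHOST / DATA captures in the grok pattern.
line='Sep  4 12:00:01 web01 sshd[1234]: Accepted password for pellok'
host=$(echo "$line" | grep -oP '^\w+ +\d+ \d+:\d+:\d+ \K\S+')
prog=$(echo "$line" | grep -oP '^\w+ +\d+ \d+:\d+:\d+ \S+ \K[^\[:]+')
echo "host=$host prog=$prog"   # host=web01 prog=sshd
```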
Configure output of the logs to Elasticsearch:
vi /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}
Verify that the Logstash configuration files are correct:
service logstash configtest
Restart and enable the service:
systemctl restart logstash
systemctl enable logstash
Download the sample Beats dashboards for Kibana:
cd ~
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
yum install -y unzip
unzip beats-dashboards-1.1.0.zip
cd beats-dashboards-*
./load.sh
Download the Filebeat index template:
cd ~
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
Load the JSON template into Elasticsearch:
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
{
"acknowledged" : true # check that the template loaded successfully
}
Copy logstash-forwarder.crt from the ELK server to the ELK client host:
scp elk.server.tw:~/logstash-forwarder.crt .
scp logstash-forwarder.crt elk.client.tw:~/
ELK client setup:
sudo mkdir -p /etc/pki/tls/certs
sudo cp ~/logstash-forwarder.crt /etc/pki/tls/certs/
Switch to the root account, then import the GPG key:
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
Create the beats repo:
vi /etc/yum.repos.d/elastic-beats.repo
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
Install Filebeat:
yum -y install filebeat
Edit the configuration file:
vi /etc/filebeat/filebeat.yml
...
paths:
- /var/log/secure
- /var/log/messages
# - /var/log/*.log
document_type: syslog # uncomment this line
...
...
document_type: syslog # uncomment this line
...
Delete or comment out the entire elasticsearch section, and configure Logstash as the output instead:
### Logstash as output
logstash:
# The Logstash hosts
hosts: ["ELK_server_private_IP:5044"] # how Filebeat connects to the ELK server
Configure secure (TLS) communication between Filebeat and Logstash:
tls:
# List of root certificates for HTTPS server verifications
certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
Start and enable the service:
systemctl start filebeat
systemctl enable filebeat
After starting, check Filebeat's status. If it did not start, compare your configuration against the following:
filebeat:
prospectors:
-
paths:
- /var/log/secure
- /var/log/messages
# - /var/log/*.log
input_type: log
document_type: syslog
registry_file: /var/lib/filebeat/registry
output:
logstash:
hosts: ["elk_server_private_ip:5044"]
bulk_max_size: 1024
tls:
certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
files:
rotateeverybytes: 10485760 # = 10MB
filebeat -configtest filebeat.yml
If total and successful are non-zero, logs are successfully being received from the client:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
"took" : 4,
"timed_out" : false,
"_shards" : {
"total" : 15,
"successful" : 15,
"failed" : 0
},
Note: if total and hits are 0, communication is blocked and no logs are reaching the server from the client. Two things to check:
(1) Check that the Filebeat configuration on the client is correct and that Filebeat started successfully.
(2) Check the ELK server's security group and confirm that port 5044 is open.
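For the port check, a quick connectivity test can be run from the client with bash's /dev/tcp, so no extra tools are needed; a sketch (host and port here are placeholders for the real ELK server):

```shell
# Report whether a TCP port is reachable; uses bash's /dev/tcp device
# plus coreutils' timeout, so no netcat/nmap is required.
check_port() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed
}
state=$(check_port localhost 5044)
echo "logstash beats port: $state"
```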
Set up the ELK server: installing Elasticsearch, Kibana, and Logstash requires at least 2 GB of RAM
Install Filebeat on the client host
Use an online Grok debugger to work out patterns for the service's log data
Configure filebeat.yml on the client host:
filebeat:
prospectors:
-
paths:
- /home/ithome_pellok_2018/ithome_pellok_2018/log/*.log
input_type: log
fields:
service: ithome_pellok_2018
document_type: syslog
registry_file: /var/lib/filebeat/registry
output:
logstash:
hosts: ["logstash.domain.com:5044"]
bulk_max_size: 1024
shipper:
logging:
files:
path: /var/log/filebeat
rotateeverybytes: 10485760 # = 10MB
Set up the pipeline configuration in Logstash:
input {
beats {
port => 5044
}
}
filter {
if [fields][service] == "ithome_pellok_2018" {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:logTimestamp} %{DATA:logType} \[%{DATA:path}\]\[%{DATA:aa}\] %{GREEDYDATA:message}" }
}
mutate {
add_tag => ["joyi_service"]
}
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "%{[@metadata][beat]}-%{+xxxx.ww}"
document_type => "%{[@metadata][type]}"
}
}
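As with the syslog filter, the custom grok pattern above can be spot-checked against a sample line. The sample line format here is an assumption inferred from the pattern, and the regexes are only a rough approximation of TIMESTAMP_ISO8601 and DATA:

```shell
# Pull the timestamp and log level out of a sample application log line
# with grep -oP, mirroring the first two captures in the grok pattern.
line='2018-10-29T12:00:00 INFO [app][worker] job finished'
ts=$(echo "$line" | grep -oP '^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}')
level=$(echo "$line" | grep -oP '^\S+ \K\S+')
echo "ts=$ts level=$level"   # ts=2018-10-29T12:00:00 level=INFO
```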
In deployment we use supervisor, which starts several instances of the service for load balancing, so there are multiple log files; checking logs used to mean opening several windows to watch each file at once, which was very inconvenient. With ELK in place, reading logs becomes simple and easy to use. It also makes me think about how logs should be written so that they are easy to analyze: since ELK works with JSON, it is worth planning carefully what each log entry should record. Kibana reports are another topic; I am still researching how best to present them.