
Elasticsearch Cluster Deployment

Linux · February 24, 2021

High Availability

High availability (HA) is one of the factors that must be considered when designing a distributed system architecture. It generally means reducing, through design, the time during which the system cannot provide service. If the system is unavailable for 1 time unit out of every 100 time units it runs, we say its availability is 99%.

Load Balancing

Load balancing distributes traffic evenly across different nodes; each node handles part of the load, and the load can be reallocated dynamically between nodes.

High Performance

High performance means distributing traffic across different machines to make full use of multiple machines and CPUs, moving from serial to parallel computation and thereby improving overall system performance.

Core Concepts of an ES Cluster

Cluster

An Elasticsearch cluster consists of one or more nodes (Node). All nodes in a cluster share the same cluster name, which identifies the cluster.

Node

An Elasticsearch instance is a node. A single machine can run multiple instances, but in normal use each instance should be deployed on a separate machine. In the Elasticsearch configuration file, the node type is set with node.master and node.data.

node.master: whether the node is eligible to become the master node

true means the node can stand for election as master

false means the node cannot stand for election as master

node.data: whether the node stores data

Node Role Combinations

Master + data node (master + data)

The node is both eligible to become master and stores data

node.master: true
node.data: true

Data node (data)

The node is not eligible to become master and does not take part in elections; it only stores data

node.master: false
node.data: true

Client node (client)

The node will neither become master nor store data; it is mainly used for load balancing under heavy request volume

node.master: false
node.data: false

Shards

Each index has one or more shards, and each shard stores a different part of the data. Shards are divided into primary shards and replica shards; a replica shard is a copy of a primary shard. By default each primary shard has one replica, the number of replicas per index can be adjusted dynamically, and a replica shard is never placed on the same node as its primary shard.
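The replica count mentioned above can be changed at any time through the index settings API. A minimal sketch, assuming an index named my-index (a placeholder) and one of the node addresses used later in this article:

curl -X PUT "http://192.168.3.23:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 2}}'
# The number of primary shards, by contrast, is fixed when the index is created.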

Building the ES Cluster

Environment

Hostname      IP              OS
es-node1      192.168.3.23    CentOS 7
es-node2      192.168.3.24    CentOS 7
es-node3      192.168.3.26    CentOS 7

Performance Tuning

1. Set ulimit in /etc/security/limits.conf
echo 'elasticsearch  -  nofile  65535' >> /etc/security/limits.conf

2. Disable swapping
swapoff -a

Edit /etc/fstab and comment out the swap entry.
Alternatively, set vm.swappiness=1 in sysctl.conf, or set bootstrap.memory_lock: true in elasticsearch.yml.

3. After installing Elasticsearch, check the file descriptor limit on each node
GET _nodes/stats/process?filter_path=**.max_file_descriptors

4. Set virtual memory
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf

5. Set the number of threads
vim /etc/security/limits.conf
*           soft   nofile       1024000
*           hard   nofile       1024000
*           soft   nproc        1024000
*           hard   nproc        1024000
* - sigpending 256612

6. Disable transparent huge pages (system memory pages)
cat /etc/rc.local

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
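A quick sanity check that the tuning above took effect (a minimal sketch; log in again or reboot first so the limits.conf and sysctl changes apply):

ulimit -n                                        # open-file limit of the current shell
sysctl vm.max_map_count                          # should print vm.max_map_count = 262144
swapon --show                                    # no output means swap is fully disabled
cat /sys/kernel/mm/transparent_hugepage/enabled  # [never] should be the selected value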

Setup Steps

  • Make three copies of the ES 7.9.2 package and name them es-node1, es-node2, es-node3
  • Edit elasticsearch.yml for each node
  • Start each of the three nodes
  • Open a browser and visit ip:port/_cat/health?v; if node.total in the response is 3, the cluster was set up successfully (see the command-line example below)
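The same health check can be run from the command line. A sketch using es-node1's address from the configuration below (add -u elastic:<password> once X-Pack security is enabled):

curl -s "http://192.168.186.28:9200/_cat/health?v"
# A healthy three-node cluster reports status "green" and node.total "3".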

es-node1 configuration file

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
node.master: true
#
#
node.data: true
#
node.max_local_storage_nodes: 3
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 192.168.186.28
# Set a custom port for HTTP:
#
http.port: 9200
#
transport.tcp.port: 9300
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.186.28","192.168.186.29","192.168.186.30"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1","node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
bootstrap.system_call_filter: false
http.cors.allow-origin: "*"
http.cors.enabled: true
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.cors.allow-credentials: true
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12

es-node2 configuration file

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
node.master: true
#
#
node.data: true
#
node.max_local_storage_nodes: 3
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 192.168.186.29
# Set a custom port for HTTP:
#
http.port: 9201
#
transport.tcp.port: 9300
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.186.28","192.168.186.29","192.168.186.30"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1","node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
bootstrap.system_call_filter: false
http.cors.allow-origin: "*"
http.cors.enabled: true
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.cors.allow-credentials: true
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12

es-node3 configuration file

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-3
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
node.master: false
#
#
node.data: true
#
node.max_local_storage_nodes: 3
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 192.168.186.30
# Set a custom port for HTTP:
#
http.port: 9202
#
transport.tcp.port: 9500
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.186.28","192.168.186.29","192.168.186.30"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1","node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
bootstrap.system_call_filter: false
http.cors.allow-origin: "*"
http.cors.enabled: true
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.cors.allow-credentials: true
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12 

Configuration file explained:

# Cluster name; all three nodes must use the same cluster name!
cluster.name: my-application
# Node name
node.name: node-1
# Whether the node is eligible to become the master node
node.master: true
# Whether the node stores data
node.data: true
# Maximum number of Elasticsearch instances allowed to share the data path on one machine
node.max_local_storage_nodes: 3
# Bind address
network.host: 0.0.0.0
# HTTP port
http.port: 9200
# Port used for internal communication between nodes
transport.tcp.port: 9300
# New in ES 7.x: addresses of the master-eligible (seed) nodes; they can be elected master once the service starts
discovery.seed_hosts: ["localhost:9300","localhost:9400","localhost:9500"]
# New in ES 7.x: required when bootstrapping a new cluster, to elect the master
cluster.initial_master_nodes: ["node-1", "node-2","node-3"]
# Data path
path.data: /path/to/data
# Log path
path.logs: /path/to/logs

# Why this is needed: CentOS 6.x does not support SecComp, while Elasticsearch (e.g. 5.5.2) defaults bootstrap.system_call_filter to true;
# the bootstrap check then fails and ES cannot start, so disable the system call filter:
bootstrap.system_call_filter: false

# Which origins are allowed for cross-origin requests. The default "*" allows all domains; to allow only
# certain sites, a regular expression can be used, e.g. only local addresses:
# /https?:\/\/localhost(:[0-9]+)?/
http.cors.allow-origin: "*"

# Whether cross-origin requests are supported; defaults to false
http.cors.enabled: true

# Headers allowed in cross-origin requests; the default is X-Requested-With,Content-Type,Content-Length
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization

# Whether to return the Access-Control-Allow-Credentials header; if true, it is sent back to the client
http.cors.allow-credentials: true

# Enable X-Pack authentication
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
# 1. The .p12 path must be an absolute path
# 2. Make sure Elasticsearch can read the .p12 file, e.g. chmod 777 elastic-certificates.p12
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12 
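With xpack.security.enabled: true, anonymous requests are rejected, which doubles as a quick smoke test. A sketch using one node's address and the elastic password that is set later in this article:

curl -s -o /dev/null -w '%{http_code}\n' "http://192.168.186.28:9200/"
# 401 means security is active
curl -s -u elastic:VJX3cpwy6PQl3L948eTI "http://192.168.186.28:9200/_cat/nodes?v"
# the master column marks the elected master node with *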

Cerebro: an ES Cluster Monitoring Tool

Download

https://github.com/lmenezes/cerebro/

Copy the package to an ECS node that sits in the same VPC as the ES cluster to be monitored, so the networks can reach each other.

For the installation I use the RPM package:

rpm -ivh cerebro-0.9.3-1.noarch.rpm

Edit the configuration file (vim /etc/cerebro/application.conf) and point hosts at the IP and port of the ES cluster to monitor:

hosts = [
  {
    host = "http://192.168.3.23:9200"
    name = "my-application"
    headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]
  }
  # Example of a host with authentication
  {
    host = "http://some-authenticated-host:9200"
    name = "Secured Cluster"
    auth = {
      username = "elastic"
      password = "VJX3cpwy6PQl3L948eTI"
    }
  }
]

Startup

/usr/share/cerebro/bin/cerebro -Dconfig.file=/usr/share/cerebro/conf/application.conf &
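Cerebro listens on port 9000 by default (configurable in application.conf); a minimal check that it started, assuming that default port:

curl -sI "http://localhost:9000" | head -n 1
# any HTTP status line means Cerebro is up; open http://<server-ip>:9000 in a browser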

In the Cerebro web UI, enter the cluster address, e.g. http://192.168.3.23:9200

Note:

In the cluster status view, the node marked with a solid (dark) star is the current master node.

 

Kibana Login with X-Pack Security

  • 1. Generate a CA certificate in the ES root directory
    • bin/elasticsearch-certutil ca (you will be prompted for a password; pressing Enter leaves it empty (consider this carefully))
  • 2. Use the CA from step 1 to generate the .p12 keystore
    • bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
  • 3. Create a certs directory under config and copy the elastic-certificates.p12 file into it
  • 4. Update the settings below (a consolidated command sketch follows the settings block)
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
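The steps above, condensed into commands. A sketch assuming an RPM install; the target address is es-node2 from this article's environment, so adjust hosts and paths to your own setup:

cd /usr/share/elasticsearch
bin/elasticsearch-certutil ca                                  # produces elastic-stack-ca.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12      # produces elastic-certificates.p12
mkdir -p /etc/elasticsearch/certs
cp elastic-certificates.p12 /etc/elasticsearch/certs/
# distribute the same file to every other node (create the certs directory there first), e.g.:
scp /etc/elasticsearch/certs/elastic-certificates.p12 root@192.168.3.24:/etc/elasticsearch/certs/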

Note:

The settings above must be added on every node, and each node restarted. Then run /usr/share/elasticsearch/bin/elasticsearch-certutil ca on the master node to generate the CA certificate.

The process looks like this:

This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: 
Enter password for elastic-stack-ca.p12 : 

Then run:

[root@es-node3 elasticsearch]# /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files

Enter password for CA (elastic-stack-ca.p12) : 
Please enter the desired output file [elastic-certificates.p12]: 
Enter password for elastic-certificates.p12 : 

Certificates written to /usr/share/elasticsearch/elastic-certificates.p12

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
[root@es-node3 elasticsearch]# ls
bin  elastic-certificates.p12  elastic-stack-ca.p12  jdk  lib  LICENSE.txt  modules  NOTICE.txt  plugins  README.asciidoc

Note: the end result is a certificate named elastic-certificates.p12; distribute it to the other nodes under /etc/elasticsearch/certs. An additional remark: with an RPM install, the file must be placed under /etc/elasticsearch/certs.

Then set the owner and group, and grant full permissions on the file:

chown -R elasticsearch.elasticsearch /etc/elasticsearch/
chmod 777 /etc/elasticsearch/certs/elastic-certificates.p12 

Set the passwords:

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

The process is as follows:

[root@es-node3 ~]# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana_system]: 
Reenter password for [kibana_system]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
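The new credentials can be verified straight away against the cluster (a sketch; curl will prompt for the elastic password just set):

curl -u elastic "http://192.168.3.23:9200/_cat/health?v"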

Configure Kibana's authenticated login in /etc/kibana/kibana.yml as shown below; once the settings are in place, restart Kibana (a restart/verification sketch follows the file).


# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.3.23"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
# All nodes of the cluster
elasticsearch.hosts: ["http://192.168.3.23:9200","http://192.168.3.23:9201","http://192.168.3.23:9202"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
i18n.locale: "zh-CN"
elasticsearch.username: "elastic"               # admin account
elasticsearch.password: "VJX3cpwy6PQl3L948eTI"  # the password set for the elastic user
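After saving kibana.yml, restart Kibana and confirm it is reachable. A sketch for an RPM/systemd install, using Kibana's default port 5601:

systemctl restart kibana
systemctl status kibana --no-pager        # the service should be active (running)
curl -sI "http://192.168.3.23:5601" | head -n 1
# any HTTP status line means Kibana is up; log in in the browser with the elastic user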

If setting the passwords fails with an error, the following workaround can be used

Disable:

# New in ES 7.x: addresses of the master-eligible (seed) nodes; they can be elected master once the service starts

discovery.seed_hosts: ["localhost:9300","localhost:9400","localhost:9500"]

# New in ES 7.x: required when bootstrapping a new cluster, to elect the master
cluster.initial_master_nodes: ["node-1","node-2","node-3"]

Add:

discovery.type: single-node

Then restart Elasticsearch and run the password setup again.

