CentOS 7.5
Elasticsearch 7.8.0
For a Windows environment, refer instead to the separate ES (Windows) notes and the elasticsearch-env.bat file.
https://www.elastic.co/downloads/past-releases/elasticsearch-7-8-0
For security reasons, Elasticsearch does not allow running directly as the root user, so create a dedicated regular user es for it.
[whybigdata@node02 ~]$ sudo useradd es
[whybigdata@node02 ~]$ sudo passwd es
Changing password for user es.
New password:
passwd: all authentication tokens updated successfully.
[whybigdata@node02 ~]$ sudo chown -R es:es /opt/module/es-7.8.0
Grant the es user sudo rights by editing the sudoers file as root (visudo is the safe way to do this):
[whybigdata@node02 ~]$ sudo visudo
Below the line
root    ALL=(ALL)       ALL
add
es      ALL=(ALL)       ALL
[whybigdata@node02 ~]$ sudo vim /etc/security/limits.conf
Append the following at the end of the file (limit on the number of files each process may open):
es soft nofile 65536
es hard nofile 65536
[whybigdata@node02 ~]$ sudo vim /etc/security/limits.d/20-nproc.conf
Append the following at the end of the file (per-process open-file limit, plus the OS-level limit on how many processes each user may create):
es soft nofile 65536
es hard nofile 65536
*    hard    nproc     4096
Note: * stands for all Linux user names.
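These limits take effect at the next login. After logging in again as the es user they can be verified with ulimit; the values shown simply reflect the settings above:
[es@node02 ~]$ ulimit -n     # soft limit on open files for the es user
65536
[es@node02 ~]$ ulimit -Hu    # hard limit on user processes
4096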
[whybigdata@node02 ~]$ sudo vim /etc/sysctl.conf
Add the following to the file (the number of VMAs, virtual memory areas, a single process may own; the default is 65536):
vm.max_map_count=655360
[whybigdata@node02 ~]$ sudo sysctl -p
vm.max_map_count = 655360
Optionally adjust the JVM heap in config/jvm.options (min and max should stay equal); this walkthrough keeps 1 GB for both values:
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value.
##
-Xms1g
-Xmx1g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
Then edit config/elasticsearch.yml; the main parameters to change are:
cluster.name: my-application
node.name: node-1
path.data: ./data
path.logs: ./logs
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
Note:
cluster.initial_master_nodes must be configured (uncomment it even if the value matches the default); otherwise startup fails and elasticsearch.log complains that at least one of discovery.seed_hosts, discovery.seed_providers or cluster.initial_master_nodes must be configured.
Start ES:
[es@node02 es-7.8.0]$ bin/elasticsearch -d
future versions of Elasticsearch will require Java 11; your Java version from [/opt/module/jdk1.8.0_212/jre] does not meet this requirement
future versions of Elasticsearch will require Java 11; your Java version from [/opt/module/jdk1.8.0_212/jre] does not meet this requirement
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file logs/gc.log due to No such file or directory
[es@node02 es-7.8.0]$ jps
76151 Jps
76110 Elasticsearch
[es@node02 es-7.8.0]$ ps -ef | grep es
root 1 0 0 14:45 ? 00:00:03 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
dbus 592 1 0 14:45 ? 00:00:02 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 68926 1742 0 19:29 pts/0 00:00:00 su - es
es 68943 68926 0 19:29 pts/0 00:00:00 -bash
es 76110 1 21 19:46 pts/0 00:00:40 /opt/module/jdk1.8.0_212/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=SPI,JRE -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/tmp/elasticsearch-3279581942497379597 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -XX:MaxDirectMemorySize=536870912 -Des.path.home=/opt/module/es-7.8.0 -Des.path.conf=/opt/module/es-7.8.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -Des.bundled_jdk=true -cp /opt/module/es-7.8.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
es 76171 76110 0 19:46 pts/0 00:00:00 /opt/module/es-7.8.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
es 77441 68943 0 19:49 pts/0 00:00:00 ps -ef
es 77442 68943 0 19:49 pts/0 00:00:00 grep --color=auto es
Startup succeeded; opening http://node02:9200/ in a browser returns the cluster information.
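A quick check from the shell; the name and cluster_name fields follow from the elasticsearch.yml settings above (response abridged):
[es@node02 ~]$ curl http://node02:9200
{
  "name" : "node-1",
  "cluster_name" : "my-application",
  "version" : { "number" : "7.8.0", ... },
  "tagline" : "You Know, for Search"
}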
Note: do not start ES as the root user, or it will refuse to start with a "can not run elasticsearch as root" error in the log.
Next install Kibana 7.8.0:
https://www.elastic.co/downloads/past-releases/kibana-7-8-0
Go into the config directory and edit the kibana.yml file:
# Service port
server.port: 5601
# Host/IP the service binds to
server.host: "0.0.0.0"
# Elasticsearch address
elasticsearch.hosts: ["http://localhost:9200"]
# Display the Kibana UI in Chinese
i18n.locale: "zh-CN"
Go into the bin directory and start Kibana in the background:
nohup ./kibana &
Kibana web UI: http://node02:5601/
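To confirm Kibana is up without a browser, two generic checks (neither is Kibana-specific) are the listening port and the nohup log:
[es@node02 bin]$ ss -tlnp | grep 5601   # Kibana should be listening on server.port
[es@node02 bin]$ tail nohup.out         # startup log written by the nohup command above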
ES provides a plugin mechanism for extending the system. Below, online installation is demonstrated with the ICU analyzer and offline installation with the IK analyzer.
ES's default analyzer is the standard analyzer, which splits the phrase 「中华人民共和国」 into one token per character. That is not very friendly for Chinese text; the sections below show how other analyzers handle the same example.
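For reference, this behaviour can be reproduced with the _analyze API (a minimal sketch, run for example in Kibana Dev Tools):
GET _analyze
{
  "analyzer": "standard",
  "text": "中华人民共和国"
}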
Download the IK analyzer matching the ES version:
https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.8.0/elasticsearch-analysis-ik-7.8.0.zip
[es@node02 ~]$ cd /opt/module/es-7.8.0/plugins
[es@node02 plugins]$ mkdir ik-7.8.0
[es@node02 plugins]$ cd /opt/software/
[es@node02 software]$ ll
total 3192
-rw-rw-r-- 1 whybigdata whybigdata 3267201 Jan 19 10:54 elasticsearch-analysis-ik-7.8.0.tar.gz
[es@node02 software]$ tar -zxvf elasticsearch-analysis-ik-7.8.0.tar.gz -C /opt/module/es-7.8.0/plugins/ik-7.8.0/
Starting ES after extracting the .tar.gz this way fails with the following error:
[2023-01-19T11:05:00,534][ERROR][o.e.b.Bootstrap ] [node-1] Exception
java.lang.IllegalStateException: Could not load plugin descriptor for plugin directory [ik-7.8.0]
.......
[2023-01-19T11:05:00,542][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Could not load plugin descriptor for plugin directory [ik-7.8.0]
....
(Before starting ES again, remove the broken plugins/ik-7.8.0 directory; any plugin directory without a valid plugin-descriptor.properties keeps startup from succeeding.)
Online installation
[es@node02 ~]$ /opt/module/es-7.8.0/bin/elasticsearch-plugin --help
future versions of Elasticsearch will require Java 11; your Java version from [/opt/module/jdk1.8.0_212/jre] does not meet this requirement
A tool for managing installed elasticsearch plugins
Commands
--------
list - Lists installed elasticsearch plugins
install - Install a plugin
remove - removes a plugin from Elasticsearch
Non-option arguments:
command
Option Description
------ -----------
-E <KeyValuePair> Configure a setting
-h, --help Show help
-s, --silent Show minimal output
-v, --verbose Show verbose output
[es@node02 ~]$ cd /opt/module/es-7.8.0/
[es@node02 es-7.8.0]$ bin/elasticsearch-plugin list
[es@node02 es-7.8.0]$ bin/elasticsearch-plugin install analysis-icu
[es@node02 es-7.8.0]$ bin/elasticsearch-plugin remove analysis-icu
Note: after installing or removing a plugin, the ES service must be restarted for the change to take effect!
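If the analysis-icu plugin is left installed (and ES restarted), a quick check is to run the same example through the icu_analyzer it provides; a sketch, output omitted:
GET _analyze
{
  "analyzer": "icu_analyzer",
  "text": "中华人民共和国"
}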
The IK analyzer is a third-party plugin that is not available by name from the official plugin repository, so it is installed offline here.
Offline installation
[es@node02 software]$ unzip elasticsearch-analysis-ik-7.8.0.zip -d /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/
Archive: elasticsearch-analysis-ik-7.8.0.zip
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/elasticsearch-analysis-ik-7.8.0.jar
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/httpclient-4.5.2.jar
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/httpcore-4.4.4.jar
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/commons-logging-1.2.jar
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/commons-codec-1.9.jar
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/plugin-descriptor.properties
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/plugin-security.policy
creating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/main.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/quantifier.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/extra_single_word_full.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/IKAnalyzer.cfg.xml
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/surname.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/suffix.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/stopword.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/extra_main.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/extra_stopword.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/preposition.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/extra_single_word_low_freq.dic
inflating: /opt/module/es-7.8.0/plugins/analysis-ik-7.8.0/config/extra_single_word.dic
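ES must be restarted before the new plugin is loaded. Since it was started with -d and no service manager, a minimal restart looks like this (the awk filter just picks the PID from the jps output shown earlier):
[es@node02 es-7.8.0]$ kill $(jps | awk '/Elasticsearch/ {print $1}')
[es@node02 es-7.8.0]$ bin/elasticsearch -d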
The IK analyzer offers two granularities: ik_smart produces the coarsest split and ik_max_word the finest.
Example (ik_smart):
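A request such as the following (again via the _analyze API) produces the result shown below:
GET _analyze
{
  "analyzer": "ik_smart",
  "text": "中华人民共和国"
}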
Result:
{
"tokens" : [
{
"token" : "中华人民共和国",
"start_offset" : 0,
"end_offset" : 7,
"type" : "CN_WORD",
"position" : 0
}
]
}
Example (ik_max_word):
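The same text analyzed with the fine-grained analyzer:
GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国"
}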
Result:
{
"tokens" : [
{
"token" : "中华人民共和国",
"start_offset" : 0,
"end_offset" : 7,
"type" : "CN_WORD",
"position" : 0
},
{
"token" : "中华人民",
"start_offset" : 0,
"end_offset" : 4,
"type" : "CN_WORD",
"position" : 1
},
{
"token" : "中华",
"start_offset" : 0,
"end_offset" : 2,
"type" : "CN_WORD",
"position" : 2
},
{
"token" : "华人",
"start_offset" : 1,
"end_offset" : 3,
"type" : "CN_WORD",
"position" : 3
},
{
"token" : "人民共和国",
"start_offset" : 2,
"end_offset" : 7,
"type" : "CN_WORD",
"position" : 4
},
{
"token" : "人民",
"start_offset" : 2,
"end_offset" : 4,
"type" : "CN_WORD",
"position" : 5
},
{
"token" : "共和国",
"start_offset" : 4,
"end_offset" : 7,
"type" : "CN_WORD",
"position" : 6
},
{
"token" : "共和",
"start_offset" : 4,
"end_offset" : 6,
"type" : "CN_WORD",
"position" : 7
},
{
"token" : "国",
"start_offset" : 6,
"end_offset" : 7,
"type" : "CN_CHAR",
"position" : 8
}
]
}
When creating an index, the IK analyzer can be specified as the index's default analyzer:
PUT /es_db
{
"settings" : {
"index" : {
"analysis.analyzer.default.type": "ik_max_word"
}
}
}
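Once the index exists, an _analyze request against it with no analyzer specified uses that default, which is an easy way to verify the setting (a sketch, same example text as above):
GET /es_db/_analyze
{
  "text": "中华人民共和国"
}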
The end!