Linux
Download JDK 8 and extract it.
Download Elasticsearch 7.3 and extract it.
Move them into /usr and create java and elasticsearch folders there (I copied the files over directly with WinSCP; other methods work just as well).
Configure environment variables (append the following to /etc/profile):
JAVA_HOME=/usr/java/jdk1.8.0_321
JRE_HOME=/usr/java/jdk1.8.0_321/jre
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
Then export and apply them:
export JAVA_HOME JRE_HOME CLASSPATH PATH
source /etc/profile
Check:
java -version
No errors.
Configure Elasticsearch
vim /usr/elasticsearch/elasticsearch-7.3.0/config/elasticsearch.yml (adjust the path to your own installation)
node.name: node-1
network.host: 0.0.0.0  # accessible from all addresses
#
# Set a custom port for HTTP:
#
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
jvm.options (in the same config directory)
The default heap is 1 GB, so no change is needed.
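For reference, the heap defaults in that file look like this (raise both values together if more memory is needed):
-Xms1g
-Xmx1g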
Add an ES user (Elasticsearch refuses to start as root):
useradd estest
Set its password: passwd estest (a "password is too short" warning can be ignored)
Change the owner of the ES directory:
chown -R estest /usr/elasticsearch/elasticsearch-7.3.0/
Edit /etc/sysctl.conf
Append at the end: vm.max_map_count=655360  (the maximum number of memory map areas a process may have; Elasticsearch requires at least 262144)
Apply it with:
sysctl -p
Edit /etc/security/limits.conf and append:
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
Switch to the new user:
su estest
Start Elasticsearch:
/usr/elasticsearch/elasticsearch-7.3.0/bin/elasticsearch
Visit:
http://localhost:9200/
Success.
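A successful response is a small JSON document roughly like the following (abbreviated here; your values will differ):
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "7.3.0", ... },
  "tagline" : "You Know, for Search"
}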
Install Kibana
Switch back to root before extracting, otherwise it will throw errors.
Move it into /usr as well; this time put it directly there without creating a subfolder, and rename it to kibana.
chown -R estest /usr/kibana/
Set permissions:
chmod -R 777 /usr/kibana/
Edit the configuration:
vim /usr/kibana/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]
Switch user:
su estest
Start Kibana:
/usr/kibana/bin/kibana
Note: Elasticsearch 7.3 can only be connected to by Kibana 7.3; mismatched versions will fail to connect.
Windows
Here I'm using 8.0 to try out the new release.
The configuration files are modified the same way as in the Linux section.
Elasticsearch 8.0 error: received plaintext http traffic on an https channel, closing connection
Cause: ES 8 enables SSL / X-Pack security by default.
Edit the elasticsearch.yml configuration file
and set xpack.security.enabled to false.
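That is, in elasticsearch.yml:
xpack.security.enabled: false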
Visit http://localhost:9200/
Start Kibana; its configuration file is also modified the same way as on Linux.
Visit http://localhost:5601/
There are lots of new features, but I can't make sense of them yet, so I head straight to Dev Tools.
Getting started
Spring configuration
spring:
  elasticsearch:
    rest:
      uris: localhost:9200
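Depending on the Spring Boot version, this property may already auto-configure a RestHighLevelClient bean. If it does not, here is a minimal sketch of declaring the client yourself; the class name EsClientConfig and the hard-coded localhost:9200 address are assumptions matching the config above, not part of the original post.

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EsClientConfig {

    // Build a high-level REST client pointing at the node configured above;
    // destroyMethod = "close" lets Spring shut it down cleanly.
    @Bean(destroyMethod = "close")
    public RestHighLevelClient restHighLevelClient() {
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")));
    }
}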
Add the Maven dependencies
<elasticsearch.version>7.3.0</elasticsearch.version>

<!-- https://mvnrepository.com/artifact/org.elasticsearch/elasticsearch -->
<!-- Exclude the transitive elasticsearch core from the high-level client and declare it
     explicitly, so both artifacts resolve to the same 7.3.0 version. -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>${elasticsearch.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>${elasticsearch.version}</version>
</dependency>
Data modeling
@Autowired
private RestHighLevelClient client;
BulkProcessor bulkProcessor = getBulkProcessor(client);
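getBulkProcessor is the author's own helper and is not shown in the post. Below is a minimal sketch, assuming the 7.x high-level client, of what such a helper can look like; the listener bodies and the flush thresholds are illustrative choices, not the original code.

// imports needed at the top of the class file:
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

// inside the same class as the @Autowired client above:
private BulkProcessor getBulkProcessor(RestHighLevelClient client) {
    BulkProcessor.Listener listener = new BulkProcessor.Listener() {
        @Override
        public void beforeBulk(long executionId, BulkRequest request) {
            // called before each bulk request is sent (e.g. log request.numberOfActions())
        }
        @Override
        public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
            // called after a bulk request completes (check response.hasFailures() here)
        }
        @Override
        public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
            // called when a whole bulk request fails
        }
    };
    return BulkProcessor.builder(
                (request, bulkListener) -> client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener),
                listener)
            .setBulkActions(1000)                                // flush every 1000 index requests
            .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB))  // or every 5 MB of data
            .setFlushInterval(TimeValue.timeValueSeconds(5))     // or every 5 seconds
            .setConcurrentRequests(1)                            // allow one bulk to run while the next fills
            .build();
}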
Each document is built as a map; ES takes care of analyzing (tokenizing) the fields itself:
map = new HashMap(128);
// each row becomes one map: key = column name, value = column value
for (int i = 1; i <= colData.getColumnCount(); i++) {  // JDBC columns are 1-based and run up to columnCount inclusive
    c = colData.getColumnName(i);
    v = rs.getString(c);
    map.put(c, v);
}
dataList.add(map);
Once the data has been prepared as key-value maps:
// add every document to the bulkProcessor
for (HashMap hashMap2 : dataList) {
    bulkProcessor.add(new IndexRequest("index_name").source(hashMap2));  // "index_name": the target index, one per source table
}
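The loop only queues documents; the processor sends them in the background according to its flush thresholds. A short sketch of draining it once the import is finished, assuming java.util.concurrent.TimeUnit is imported (the 30-second timeout is an arbitrary choice):

bulkProcessor.flush();                          // send anything still buffered
bulkProcessor.awaitClose(30, TimeUnit.SECONDS); // wait for in-flight bulk requests (throws InterruptedException)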
Query operations
// search
SearchRequest searchRequest = new SearchRequest("index_name");
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
// pagination
searchSourceBuilder.from((pageNo - 1) * pageSize);
searchSourceBuilder.size(pageSize);
// Do the input parameters need to be split on commas first? No: ES tokenizes them itself, which is convenient.
// combined (bool) query
QueryBuilder builder = QueryBuilders.boolQuery()
        .must(QueryBuilders.termQuery("positionName", keyword))
        .must(QueryBuilders.termQuery("positionAdvantage", advantage));
// or, instead, a single-field query:
// QueryBuilder builder = QueryBuilders.matchQuery("positionAdvantage", keyword);
searchSourceBuilder.query(builder);
searchSourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS)); // query timeout
// run the search
searchRequest.source(searchSourceBuilder);
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
ArrayList<Map<String, Object>> list = new ArrayList<>();
SearchHit[] hits = searchResponse.getHits().getHits();
for (SearchHit hit : hits) {
    list.add(hit.getSourceAsMap());
}
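If the pagination response also needs the total number of matches, it can be read from the same SearchResponse; in the 7.x client getTotalHits() returns a TotalHits object, hence the .value:

long total = searchResponse.getHits().getTotalHits().value; // total documents matching the query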