
ElasticSearch Basic Operations

Component versions used in this post:

Component        Version
ElasticSearch    1.4.4
Java             1.7.0_67 (HotSpot 64-Bit)

Startup

Download the release, unpack it, and run the script bin/elasticsearch. To run ElasticSearch in the background, use bin/elasticsearch -d, which daemonizes the process (its parent becomes the init process, pid 1). How do you verify that startup succeeded? Send a request to http://localhost:9200 and you should see a JSON response like the following:

[ningg@localhost ~]$ curl -XGET http://localhost:9200/
{
  "status" : 200,
  "name" : "Silly Seal",
  "cluster_name" : "elasticsearch",
  "version" : {
	"number" : "1.4.4",
	"build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512",
	"build_timestamp" : "2015-02-19T13:05:36Z",
	"build_snapshot" : false,
	"lucene_version" : "4.10.3"
  },
  "tagline" : "You Know, for Search"
}
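Beyond the banner above, the cluster's overall state can be checked through the standard cluster-health API (assuming the same single local node):

```shell
# Ask the node for cluster-level health; "status" is green/yellow/red.
# A single-node cluster with unreplicated shards typically reports "yellow".
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
```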

A few additional notes:

Index Operations

Inserting Data

If the specified Index/Type does not exist, it is created automatically. The following command inserts a document into an Index/Type:

curl -XPUT 'http://localhost:9200/test/test/1' -d '{ "name" : "Ning Guo"}'
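As a quick sanity check (assuming the node above is still running), the document just inserted can be fetched back by its ID:

```shell
# Retrieve document 1 from index "test", type "test";
# the response echoes the document under "_source".
curl -XGET 'http://localhost:9200/test/test/1?pretty=true'
```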

Querying Data

To query for documents matching given conditions there are two operations: _count and _search.

[ningg@localhost ~]$ curl -XGET http://localhost:9200/test/_count?pretty=true
{
  "count" : 1,
  "_shards" : {
	"total" : 5,
	"successful" : 5,
	"failed" : 0
  }
}
[ningg@localhost ~]$ 
[ningg@localhost ~]$ curl -XGET http://localhost:9200/test/_search?pretty=true
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
	"total" : 5,
	"successful" : 5,
	"failed" : 0
  },
  "hits" : {
	"total" : 1,
	"max_score" : 1.0,
	"hits" : [ {
	  "_index" : "test",
	  "_type" : "test",
	  "_id" : "1",
	  "_score" : 1.0,
	  "_source":{ "name" : "Ning Guo"}
	} ]
  }
}

Other queries:
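One lightweight option (a sketch, reusing the example document inserted earlier) is the URI search, which accepts a Lucene query string in the q parameter:

```shell
# URI search: match documents whose "name" field contains the term "ning"
# (the standard analyzer lowercases "Ning Guo" into the terms "ning" and "guo").
curl -XGET 'http://localhost:9200/test/_search?q=name:ning&pretty=true'
```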

Deleting Data

How do you delete an Index, a Type, or a Document? The command below deletes the entire test index:

curl -XDELETE http://localhost:9200/test?pretty
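A single document can also be removed by ID without touching the rest of the index (assuming the test/test/1 document from above):

```shell
# Delete only document 1; the index and its type mapping remain.
curl -XDELETE 'http://localhost:9200/test/test/1?pretty=true'
```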

Monitoring

The official option from Elastic is Marvel, but it is a paid product. Can Ganglia be used instead? ElasticSearch runs on the JVM, and a JVM can expose monitoring data externally over JMX; the remaining question is whether ElasticSearch records its runtime state anywhere that JMX can see it.
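As a sketch of the JMX route (these are standard HotSpot flags, not ElasticSearch-specific settings, and the port number is an arbitrary choice), the JVM's built-in MBeans for heap, GC, and threads can be exposed by passing extra options through ES_JAVA_OPTS, which bin/elasticsearch appends to the JVM arguments:

```shell
# Expose the JVM's platform MBeans over JMX on port 9999
# (no auth/SSL: suitable only on a trusted network).
export ES_JAVA_OPTS="-Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
bin/elasticsearch -d
```

Independently of JMX, ElasticSearch also exposes its own runtime statistics over plain HTTP via the _nodes/stats API, which a Ganglia gmetric script could poll.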

Common Issues

WARN: Too many open files

Full error message:

[2015-04-14 11:18:25,797][WARN ][indices.cluster          ] [Rune] [flume-2015-04-14][2] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [flume-2015-04-14][2] failed recovery
	at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:185)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: [flume-2015-04-14][2] failed to open reader on writer
	at org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:326)
	at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:732)
	at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:231)
	at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
	... 3 more
Caused by: java.nio.file.FileSystemException: /home/storm/es/elasticsearch-1.4.4/data/elasticsearch/nodes/0/indices/flume-2015-04-14/2/index/_h3.cfe: Too many open files
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
	at java.nio.channels.FileChannel.open(FileChannel.java:287)
	at java.nio.channels.FileChannel.open(FileChannel.java:334)
	at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
	at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:172)
	at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
	at org.elasticsearch.index.store.DistributorDirectory.openInput(DistributorDirectory.java:130)
	at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
	at org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:515)
	at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:113)
	at org.apache.lucene.store.CompoundFileDirectory.readEntries(CompoundFileDirectory.java:166)
	at org.apache.lucene.store.CompoundFileDirectory.<init>(CompoundFileDirectory.java:106)
	at org.apache.lucene.index.SegmentReader.readFieldInfos(SegmentReader.java:274)
	at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
	at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
	at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:239)
	at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:104)
	at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:422)
	at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:112)
	at org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:89)
	at org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(InternalEngine.java:1569)
	at org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:313)
	... 6 more

Solution:

Brief explanation: the file /etc/security/limits.conf configures the system resources a user or group may consume, for example CPU time, memory, and the number of files that may be open simultaneously.
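A common fix (a sketch; the right limit depends on the deployment, and the user must log in again for the change to take effect) is to raise the open-file limit for the account running ElasticSearch:

```text
# /etc/security/limits.conf -- raise max open files for the ES user
# ("ningg" is the example account here; substitute your own)
ningg  soft  nofile  65535
ningg  hard  nofile  65535
```

After re-login, ulimit -n should report the new limit; the value the running node actually received can be checked via curl 'http://localhost:9200/_nodes/process?pretty=true', whose process section includes max_file_descriptors.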

For more details, see:
