
Elasticsearch - Grafana Documentation
Grafana ships with advanced support for Elasticsearch. You can run many types of
simple or complex Elasticsearch queries to visualize logs or metrics stored in Elasticsearch. You can
also annotate your graphs with log events stored in Elasticsearch.
Adding the data source
Open the side menu by clicking the Grafana icon in the top header.
In the side menu under the Dashboards link you should find a link named Data Sources.
NOTE: If this link is missing in the side menu it means that your current user does not have the Admin role for the current organization.
Click the Add new link in the top header.
Select Elasticsearch from the dropdown.
Description
The data source name. It is important that this matches the name used in Grafana v1.x if you plan to import old dashboards.
Default data source means that it will be pre-selected for new panels.
The HTTP protocol, IP, and port of your Elasticsearch server.
Proxy = access via the Grafana backend; Direct = access directly from the browser.
Proxy access means that the Grafana backend will proxy all requests from the browser and forward them to the data source. This is useful because it can eliminate CORS (Cross-Origin Resource Sharing) issues, and it removes the need to expose the data source's authentication details to the browser.
Direct access is still supported because in some cases it may be useful to reach a data source directly, depending on the use case and the topology of Grafana, the user, and the data source.
Direct access
If you select direct access you must update your Elasticsearch configuration to allow other domains to access
Elasticsearch from the browser. You do this by adding these two options to your elasticsearch.yml config file.
http.cors.enabled: true
http.cors.allow-origin: "*"
Index settings
Here you can specify a default time field and the name of your Elasticsearch index. You can use
a time pattern or a wildcard for the index name.
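For daily Logstash-style indices, for example, the index settings would look roughly like the following (the `[logstash-]YYYY.MM.DD` form is the usual Logstash naming convention; adjust to match your own indices):

```
Index name: [logstash-]YYYY.MM.DD
Pattern:    Daily
Time field: @timestamp
```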
Metric Query editor
The Elasticsearch query editor allows you to select multiple metrics and group by multiple terms or filters. Use the plus and minus icons on the right to add or remove
metrics or group-by clauses. Some metrics and group-by clauses have options; click the option text to expand the row and view or edit them.
Pipeline metrics
If you have Elasticsearch 2.x and Grafana 2.6 or above, you can use pipeline metric aggregations such as
Moving Average and Derivative. An Elasticsearch pipeline metric must be based on another metric. Use the eye icon next to a metric
to hide it from the graph; this is useful for metrics that exist in the query only to feed
a pipeline metric.
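Under the hood, a Moving Average on top of an Average metric translates into an Elasticsearch pipeline aggregation. The request body looks roughly like the following (field names, bucket keys, interval, and window size here are illustrative, not what Grafana literally emits):

```json
{
  "size": 0,
  "aggs": {
    "ts": {
      "date_histogram": { "field": "@timestamp", "interval": "1m" },
      "aggs": {
        "avg_rt":    { "avg": { "field": "response_time" } },
        "avg_rt_ma": { "moving_avg": { "buckets_path": "avg_rt", "window": 5 } }
      }
    }
  }
}
```

Note how `buckets_path` points at the sibling metric the pipeline aggregation is based on; this is why pipeline metrics in Grafana require another metric in the query.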
Templating
The Elasticsearch datasource supports two types of queries you can use to fill template variables with values.
Possible values for a field
{"find": "terms", "field": "@hostname"}
Fields filtered by type
{"find": "fields", "type": "string"}
Fields filtered by type, with filter
{"find": "fields", "type": "string", "query": "lucene query"}
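A variable filled by one of the queries above can then be referenced in a panel's Lucene query. Assuming a variable named `hostname` populated by the terms query, and a numeric `status` field, such a query might look like:

```
@hostname:$hostname AND status:[400 TO 599]
```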
Multi format / All format
Use lucene format.
Annotations

fluentd + elasticsearch + grafana setup guide
Everyone normally uses ELK, but the world will never be unified behind a single stack. Grafana now supports querying ES, so let's give it a try; after all, Grafana's UI looks much better than Kibana's. Everything here is off-the-shelf with no custom development, but you will still run into a few problems, most of which you would also hit with ELK.
First, fluentd does not tag field types by default, so everything lands in ES as strings, which limits you to doing counts. We therefore need to install the fluent-plugin-typecast plugin on every client.
On the server side, install two plugins, fluent-plugin-elasticsearch and fluent-plugin-secure-forward (strictly speaking, one would do), to support shipping data to ES.
Now for the concrete fluentd configuration. The client side collects logs with tail; the tricky part is writing the format regex so that it matches the several different formats that appear in the logs.
Also bake the hostname into the tag, so that when logs are aggregated it is easy to tell which nginx instance they came from.
As for types, pull the non-string fields out and cast them explicitly; string fields need no handling.
Here is the configuration on the fluentd client:
<source>
  type tail
  path /opt/server/log/nginx/nginx-access.log
  format /^(?<remote>[^ ]*)\s+(?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^ ]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*) +\S*)?" (?<status>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" "(?<forward_ip>[^\"]*)" (?<response_time>[^ ]*)\s+(?<upstream_time>[^ ]*)\s+(?<upstream_addr>[^ ]*)\s+(?<cache_status>[^ ]*)\s+(?<upstream_status>[^ ]*))?$/
  time_format %Y-%m-%dT%H:%M:%S%z
  types status:integer,size:integer,response_time:float,upstream_time:float,cache_status:integer,upstream_status:integer
  tag "#{Socket.gethostname}.nginx.access.log"
  pos_file /var/log/td-agent/tmp/nginx.access.log.pos
</source>

<match *.nginx.access.log>
  type forward
  flush_interval 60s
  buffer_type file
  buffer_path /opt/server/buffer/*
  host agg1.hk.
  port 24224
</match>
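Before deploying a format regex like the one above, it is worth sanity-checking the capture groups outside fluentd. Here is a trimmed-down version of the pattern in Python (Python spells named groups `(?P<name>...)` where fluentd uses `(?<name>...)`; the sample log line is invented):

```python
import re

# Simplified version of the fluentd access-log pattern: remote, host, user,
# time, request, status, and size only. The optional group around the request
# path mirrors the fluentd regex's handling of requests without a path.
pattern = re.compile(
    r'^(?P<remote>[^ ]*)\s+(?P<host>[^ ]*) (?P<user>[^ ]*) \[(?P<time>[^ ]*)\] '
    r'"(?P<method>\S+)(?: +(?P<path>[^"]*) +\S*)?" (?P<status>[^ ]*) (?P<size>[^ ]*)'
)

line = '1.2.3.4 example.com - [2016-01-01T12:00:00+0800] "GET /index.html HTTP/1.1" 200 512'
m = pattern.match(line)
print(m.group('method'), m.group('path'), m.group('status'))  # GET /index.html 200
```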
Here is the configuration on the fluentd server. The main job is to fill in the ES address; time_slice_format plays a small trick so that archived files are laid out in year/month/day directories, since dumping everything into a single directory would make files hard to find.
<match *.nginx.access.log>
  type copy
  <store>
    type file
    path /opt/server/logs/nginx-access/
    time_slice_format ./nginx-access/%Y/%m/%d/%Y%m%d%H.nginx.access
    compress gzip
    flush_interval 10m
    time_format %Y-%m-%dT%H:%M:%S%z
    buffer_path /opt/server/buffer/nginx_access_buffer
    buffer_type file
    buffer_chunk_limit 50m
  </store>
  <store>
    type elasticsearch
    host hk1.es.
    include_tag_key true
    tag_key @log_name
    logstash_format true
    flush_interval 10s
  </store>
</match>
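With `logstash_format true`, the elasticsearch output writes into daily `logstash-YYYY.MM.DD` indices, adds an `@timestamp` field, and (because of `include_tag_key` and `tag_key`) stores the fluentd tag under `@log_name`. A stored document then looks roughly like this (field values invented):

```json
{
  "@timestamp": "2016-01-01T12:00:00+08:00",
  "@log_name": "web1.nginx.access.log",
  "remote": "1.2.3.4",
  "method": "GET",
  "path": "/index.html",
  "status": 200,
  "size": 512,
  "response_time": 0.012
}
```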
I won't go into installing ES here; perhaps elsewhere. One tool worth recommending is kopf, which Zou installed as soon as he heard about it at an ES conference. It is quite handy: it shows indices, nodes, and shards, and colors the display according to cluster health.
Below is a screenshot of it running.
I won't cover installing Grafana either. I am on the 2.5 stable release, which supports ES as a data source, user authentication, and more.
By default Grafana matches with a query_string full-text query and does not support regexes, but it also lets you write custom ES queries:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html
Using that syntax you can craft your own query strings. I still prefer regex matching, though.
On my first day with regex matching I hit a problem: nothing would match the URLs. After adding a group by, png and jpg turned up at the top of the list, which prompted some digging:
the regexes failed because ES's default analyzer tokenizes on characters such as / and . .
The fix is to change the ES index template so that
certain fields are not analyzed. Below is a template I created, applied only to certain indices; you can also define it through kopf.
curl -XPUT http://hk1.es.:9200/_template/template_1 -d '
{
  "template": "logstash-*",
  "settings": {
    "index.cache.field.type": "soft",
    "index.refresh_interval": "5s",
    "index.store.compress.stored": true,
    "index.number_of_shards": "3",
    "index.query.default_field": "querystring",
    "index.routing.allocation.total_shards_per_node": "2"
  },
  "mappings": {
    "_default_": {
      "_all": { "enabled": false },
      "_source": { "compress": true },
      "properties": {
        "path": { "type": "string", "index": "not_analyzed" },
        "referer": { "type": "string", "index": "not_analyzed" },
        "agent": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'
Grafana is an open-source, feature-rich metrics dashboard and graph editor that works with data sources such as Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. At the time of writing, the latest Grafana release is 2.6.
The Grafana dashboard interface looks like this:
Graphite: a scalable real-time graphing system; its latest version is 0.9.10.
OpenTSDB: a distributed, scalable, HBase-backed platform for collecting and displaying time-series monitoring data in real time. It supports second-resolution metric collection, stores data permanently in HBase, can be used for capacity planning, and is easy to plug into an existing monitoring system. OpenTSDB can ingest, store, index, and serve metrics from large fleets of devices, making the data easier to understand through web and graphical views.
Prometheus: an open-source monitoring system for systems and services.
InfluxDB: an open-source, distributed time-series database with no external dependencies, usable for recording measurements, events, and performance data.
Grafana can serve as a replacement for Kibana. Its most praised feature is the visualization dashboard, which can pull measurements from various data sources (such as InfluxDB) and display them graphically. Grafana started as a fork of Kibana but initially did not support Elasticsearch as a data source; happily, support for Elasticsearch as a data source was added in Grafana 2.5. The Sematext data analysis site has an introduction.
Elasticsearch is usually used not for measurement data but for data recorded continuously over time, such as log or event data (think IoT). Grafana 2.5 could only display numeric types; 2.6 added table display of text data.
Installing Grafana is simple. Taking Debian as an example, run:
$ wget https://grafanarel./builds/grafana_2.6.0_amd64.deb
$ sudo apt-get install -y adduser libfontconfig
$ sudo dpkg -i grafana_2.6.0_amd64.deb
Start the server:
$ sudo service grafana-server start