Storing and Aggregating Time Series Data With Elasticsearch
In this article, see a tutorial on how to store and aggregate time series data with Elasticsearch.
When talking about time series data, the volume can be very large. The number of records grows with the granularity level: if the granularity is one minute, we get 60 records per hour for a single instance.
For example, suppose we want to store the CPU percentage of a device for each minute, and we keep data for the last 30 days.
Total no. of records = 1 (device) * 30 (days) * 24 (hours) * 60 (minutes) = 43,200 records
In a typical use case, we may need to store data for thousands of devices for more than 90 days. Just imagine the number of records we have to persist, and how we scale querying that much data with aggregations.
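To make that concrete, assuming a hypothetical fleet of 5,000 devices and 90 days of minute-level data:
Total no. of records = 5,000 (devices) * 90 (days) * 24 (hours) * 60 (minutes) = 648,000,000 records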
So, to store time series data, we have to tweak the Elasticsearch index properties to work best with aggregations and to reduce storage space.
Before going any further, pause here: disabling the _source field spares us some space, but make this decision wisely, based on the importance of the data you are storing. There are limitations; for example, without _source, you cannot retrieve the original documents, only the results of aggregations.
Let's talk about what we are trying to achieve. Suppose I have a minute-by-minute metric for my device for the last 90 days. From this data, I want the metric value for the last month as a daily average; in other words, an aggregation over all 30 days of data points.
First, let's build an optimized Elasticsearch index to store the time series data.
PUT index_name_example1
{
  "settings": {
    "analysis": {
      "normalizer": {
        "lower_case_norm": {
          "type": "custom",
          "char_filter": [],
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "type_name_example1": {
      "dynamic_templates": [
        {
          "strings": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword",
              "doc_values": true
            }
          }
        }
      ],
      "_source": {
        "enabled": false
      },
      "properties": {
        "deviceName": {
          "type": "text",
          "fields": {
            "normalize": {
              "type": "keyword",
              "normalizer": "lower_case_norm"
            }
          }
        },
        "metricName": {
          "type": "text",
          "fields": {
            "normalize": {
              "type": "keyword",
              "normalizer": "lower_case_norm"
            }
          }
        },
        "timeStamp": {
          "type": "long"
        },
        "utilization": {
          "type": "float"
        }
      }
    }
  }
}
In this index, we are considering 4 fields:
- deviceName — stores the device name
- metricName — the name of the metric we are storing
- timeStamp — the point in time at which the metric value was recorded, as epoch milliseconds
- utilization — the actual value of the metric
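With the mapping in place, a document can be indexed like this (a minimal sketch; the field values are illustrative):
POST index_name_example1/type_name_example1
{
  "deviceName": "MyDevice",
  "metricName": "cpu_utilization",
  "timeStamp": 1546300860000,
  "utilization": 72.5
}
Note that because _source is disabled, this document can still be indexed and aggregated, but its original JSON cannot be retrieved afterward.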
This wraps up the schema setup. Now let's see how to query the data to get aggregated results.
Below is the Elasticsearch query to get the daily average value of the metric over a one-month time range.
GET index_name_example1/type_name_example1/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "timeStamp": {
              "gte": 1546300800000,
              "lt": 1548979199000
            }
          }
        },
        {
          "term": {
            "deviceName.normalize": {
              "value": "mydevice"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "byday": {
      "date_histogram": {
        "field": "timeStamp",
        "interval": "1d"
      },
      "aggs": {
        "avg_utilization": {
          "avg": {
            "field": "utilization"
          }
        }
      }
    }
  },
  "size": 0
}
Basically, we are using a date_histogram aggregation to bucket the documents into daily intervals, and an avg sub-aggregation to compute the average utilization within each bucket. Setting size to 0 skips returning the raw hits, since we only need the aggregation results.
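The response then contains one bucket per day; an illustrative shape is shown below (the utilization value here is made up, and doc_count is 1,440 because a day holds 1,440 minute-level records):
{
  "aggregations": {
    "byday": {
      "buckets": [
        {
          "key_as_string": "2019-01-01T00:00:00.000Z",
          "key": 1546300800000,
          "doc_count": 1440,
          "avg_utilization": {
            "value": 68.4
          }
        }
      ]
    }
  }
}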
We can further use different intervals, such as hourly, weekly, or monthly, as shown below.
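For example, to switch the same query to a monthly roll-up, only the date_histogram fragment changes (a minimal sketch; "1h" and "1w" work the same way for hourly and weekly):
"date_histogram": {
  "field": "timeStamp",
  "interval": "1M"
}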
Based on the above process, we can store time series data in Elasticsearch with optimized storage and query it efficiently with aggregations.