Prometheus: Fill in Data for New Recording Rules


Reasons you might want to backfill data for a new recording rule:

  • wanting to test the new recording rule right away instead of waiting for it to populate
  • wanting to create an alert using the recording rule right away
  • making a dashboard using the new recording rule's data

Example Usage

  • Have a running Prometheus server with the flag --storage.tsdb.allow-overlapping-blocks enabled.
  • Create a new recording rule
  1. Run the command promtool tsdb create-blocks-from rules to fill in past data for the new rule. Provide the Prometheus server address and port with the --url flag.
  2. Validate the output of the command and make sure it contains the correct data. The output of promtool tsdb create-blocks-from rules is a set of TSDB blocks; by default they are created in 2-hour chunks.
  3. Manually move the blocks over to the running Prometheus server's data directory, the location set by --storage.tsdb.path.
  4. Wait briefly for the next compaction to run, then query the new rule and confirm the past data is there. Warning: once compaction has run, the backfilled data cannot be removed or reverted.
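The steps above can be sketched as a small script. This is a dry run that only echoes each command rather than executing it; the rule file name rules.yml, the one-hour window, and the PROM_DATA_DIR path are assumptions to adjust for your own setup.

```shell
RULES_FILE=rules.yml                  # assumed path to the recording rule file
PROM_URL=http://localhost:9090        # assumed Prometheus address
PROM_DATA_DIR=/prometheus/data        # assumed --storage.tsdb.path location

START=$(date --date="1 hour ago" +%s)   # backfill window start (unix seconds)
END=$(date +%s)                         # backfill window end

run() { echo "+ $*"; }   # dry run; change to: run() { "$@"; } to really execute

# 1. generate blocks of past data for the new rule
run promtool tsdb create-blocks-from rules \
    --start "$START" --end "$END" \
    --url "$PROM_URL" "$RULES_FILE"
# 2. inspect the generated blocks (written to ./data by default)
run promtool tsdb list -r data/
# 3. move the blocks into the running server's data directory
run mv data/ "$PROM_DATA_DIR"
```

Keeping the commands behind a run() wrapper makes it easy to review the exact invocations before letting them touch a live server's data directory.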
$ cat prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['localhost:9100']
$ prometheus --storage.tsdb.allow-overlapping-blocks
example recording rule file
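For reference, a minimal recording rule file has this shape; the group name, rule name, and expression below are hypothetical stand-ins, not the ones from the original post:

```yaml
groups:
  - name: example
    rules:
      - record: job:node_cpu_seconds:rate5m
        expr: sum by (job) (rate(node_cpu_seconds_total[5m]))
```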
see recording rules at http://localhost:9090/rules
verify that recording rule data is being collected; only a tiny amount of data exists so far, starting at 13:04
# the time to start filling in from, as a unix timestamp
$ date --date="1 hour ago" +%s
# end time: the first timestamp with no series data
# for the new recording rule
$ date --date="2021-04-04 13:04:06" +%s
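The same date command can also convert back from a unix timestamp, which is a handy sanity check on the chosen window (the timestamp below is illustrative, using UTC to keep the round trip unambiguous):

```shell
# human-readable time -> unix seconds
END=$(date -u --date="2021-04-04 13:04:06" +%s)
echo "$END"

# unix seconds -> human-readable time, to double-check the value
date -u --date="@$END" +"%Y-%m-%d %H:%M:%S"
```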
$ promtool tsdb create-blocks-from rules \
    --start 1617563353 --end 1617566646 \
    --url http://localhost:9090 \
    rules.yml   # rule file argument; the file name here is assumed
# by default the output is written to the ./data directory
$ ls data/
output of command: promtool tsdb list -r data/
$ mv data/ $PROM_DATA_DIR
level=info ts=2021-04-04T20:23:51.035Z caller=compact.go:686 component=tsdb msg="Found overlapping blocks during compaction" ulid=01F2F8WMZ71M4VVZA3JJWEH3DN
level=info ts=2021-04-04T20:23:51.043Z caller=compact.go:448 component=tsdb msg="compact blocks" count=6 mint=1617563241870 maxt=1617566601871 ulid=01F2F8WMZ71M4VVZA3JJWEH3DN sources="[01F2F87M7M94Z0B0WY18ER5VKA 01F2F87MAR7YEX8NZ0WA77YNZ4 01F2F87MBW23QN3M8NWNF2Y4HQ 01F2F87M9HKVVRGENWZMR5KYQ1 01F2F87MB6A582X8A4A9H36VHB 01F2F87MCMNBZTPMRRW9GW4XN1]" duration=28.268502ms
Backfilled recording rule data from 12:00 to 13:04
Running the recording rule query in the Prometheus graph view to confirm the data matches the backfilled data.

Under the Hood

  • A Rule Manager is created to parse the recording rule files. This is the same code that Prometheus uses to process recording rules.
  • Requests are made to the Prometheus API's QueryRange endpoint using the Prometheus Go client library. The QueryRange API evaluates the recording rule expression against existing time series data.
  • The response returned from the API contains samples with the timestamps and values for the recording rule. This is used to create new series that are written to tsdb blocks.
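To make the second point concrete, the backfiller's range query corresponds to Prometheus's query_range HTTP endpoint. The helper below builds such a request URL; the expression up and the 15s step are placeholders, and a real PromQL expression would need to be URL-encoded before use:

```shell
# Build a query_range URL of the kind the backfiller's API call targets.
# $1=server  $2=PromQL expression  $3=start  $4=end  $5=step
build_query_range_url() {
    printf '%s/api/v1/query_range?query=%s&start=%s&end=%s&step=%s\n' \
        "$1" "$2" "$3" "$4" "$5"
}

build_query_range_url http://localhost:9090 up 1617563353 1617566646 15s
```

Against a live server, piping the result to curl (e.g. curl -s "$(build_query_range_url …)") returns the raw samples that the tool turns into new series and writes out as TSDB blocks.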

Future Work

Happy Ending




Jessica G
