I have a custom Prometheus exporter that is meant for extreme "hermeticity": it has to operate at all times, even when there is no network connection, for a variety of reasons.
Normally, a main Prometheus instance scrapes the nodes running this exporter, but in case the network goes out, the team added functionality to the exporter to periodically dump its metrics to a text file, so that no crucial data is lost.
Now I have many hours of metrics from several nodes in text files, and I want to be able to query them. I checked whether the prometheus_client
package in Python had any way to query them, but the closest thing I found was parsing the text-formatted metrics into gauge/counter objects in Python; if I wanted to query them, I would have to implement something myself.
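For reference, this is roughly as far as the parsing route gets me (the dump file name is just an example):

```python
from prometheus_client.parser import text_string_to_metric_families

with open("node1_dump.prom") as f:  # example dump file name
    for family in text_string_to_metric_families(f.read()):
        for sample in family.samples:
            # sample.timestamp is None when the line carries no timestamp
            print(sample.name, sample.labels, sample.value, sample.timestamp)
```

Anything resembling PromQL on top of those samples would still be entirely up to me.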
I've searched for available solutions, but the only way I found to query Prometheus was through its HTTP API, which would require pushing the metrics into the main Prometheus instance first.
I don't have direct access to the main Prometheus instance, so I can't just write a quick script to push the metrics into it.
Finally, my question is: how can I run PromQL queries against Prometheus text-formatted metrics in a text file? Is there an existing solution, or do we have to implement something ourselves?
I believe the simplest course of action for this case is to write a small exporter that takes the metrics saved to the file and exposes them to Prometheus (using the correct timestamps).
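A minimal sketch of what I mean, assuming one dump file per node whose sample lines already end with the optional millisecond timestamp the text exposition format allows; the file path and port are placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

# Hypothetical location of one node's saved dump; each sample line is
# assumed to already end with its original timestamp (ms since epoch).
DUMP_FILE = Path("/var/lib/exporter/metrics_dump.prom")

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_response(404)
            self.end_headers()
            return
        # Serve the dump verbatim; the timestamps in the file tell
        # Prometheus when each sample was actually taken.
        body = DUMP_FILE.read_bytes()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port 9200 is arbitrary; point a Prometheus scrape job at it.
    HTTPServer(("", 9200), MetricsHandler).serve_forever()
```

One caveat: Prometheus only ingests samples whose timestamps fall within its current TSDB ingestion window, so scrape the backlog while it is still reasonably recent.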
This way you only have to reconfigure things in one place (the Prometheus scrape config), and all metrics end up stored in the same place.
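On the Prometheus side, that reconfiguration would be just one new scrape job, something along these lines (the job name and target are placeholders):

```yaml
scrape_configs:
  - job_name: "offline-dumps"     # placeholder name
    honor_timestamps: true        # keep the timestamps recorded in the files
    static_configs:
      - targets: ["node1:9200"]   # the small exporter sketched above
```

honor_timestamps already defaults to true; it is spelled out here only to make the intent explicit.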
Note: it would be even simpler to expose such files through node_exporter (its textfile collector), but as of yet it still doesn't support timestamps.