Prometheus Flask Exporter having a memory leak


I wanted to visualize some metrics to check my application's performance. I am using a pull-based approach: I expose all the relevant metrics on a "/metrics" endpoint and configured a VM pod scrape to pull every 45 seconds. But since the changes went live I am seeing a constant increase in memory utilization (a memory leak) and am therefore facing intermittent timeouts in my application. How can I avoid this in Flask? Is there a way to clean up my custom metrics periodically, or through garbage collection?
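One thing worth trying (a sketch, not a confirmed fix for your particular leak): labeled metrics in `prometheus_client` keep one child series in memory per distinct label value for the life of the process, and a labeled metric exposes a `clear()` method that drops all of its label children. If the leak comes from accumulating label children, clearing them on an interval bounds the growth. The metric name below matches the histogram from the setup; how you schedule the call (cron thread, APScheduler, etc.) is up to you:

```python
from prometheus_client import CollectorRegistry, Histogram

registry = CollectorRegistry(auto_describe=True)

hist = Histogram(
    name="method_latency_seconds",
    documentation="Latency for method execution",
    labelnames=["method"],
    registry=registry,
)

hist.labels("handler_a").observe(0.2)

# Periodically drop all label children to bound memory growth.
# Note: this also resets the accumulated buckets and counts, so
# Prometheus will see the series restart. rate()/increase() handle
# counter resets, but be aware of the trade-off.
hist.clear()
```

Also check label cardinality first: if any label value is unbounded (request IDs, user IDs, raw URLs), every new value creates a new in-memory series, which looks exactly like a slow leak.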

My Prometheus setup:

import time
from functools import wraps

from prometheus_client import Counter, Histogram, CollectorRegistry
from prometheus_flask_exporter import PrometheusMetrics

from api.flask_app_initializer import app

custom_registry = CollectorRegistry(auto_describe=True)

metrics = PrometheusMetrics(app, registry=custom_registry)

histogram_metric_method = Histogram(
    name="method_latency_seconds",
    documentation="Latency for method execution",
    labelnames=["method"],
    buckets=[0.05, 0.1, 0.2, 0.5, 0.8, 1, 2, 5],
    registry=custom_registry,
)


def method_latency(name, description):
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            start_time = time.time()
            result = f(*args, **kwargs)
            latency = time.time() - start_time
            # Record the elapsed time against the wrapped method's name.
            histogram_metric_method.labels(f.__name__).observe(latency)
            return result

        return wrapper

    return decorator

This is one decorator for calculating method latencies; similarly, I have other decorators as well (DB query latency, count of exceptions, etc.).
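For context, a sibling exception-counting decorator following the same pattern could look like this (a sketch with illustrative names, not the actual code from the question):

```python
from functools import wraps

from prometheus_client import CollectorRegistry, Counter

registry = CollectorRegistry(auto_describe=True)

# Illustrative metric; the real metric name and labels may differ.
exception_counter = Counter(
    name="method_exceptions_total",
    documentation="Number of exceptions raised per method",
    labelnames=["method"],
    registry=registry,
)


def count_exceptions(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except Exception:
            # Count the failure for this method, then re-raise.
            exception_counter.labels(f.__name__).inc()
            raise

    return wrapper
```

The key memory consideration in every such decorator is the label value: labeling by `f.__name__` is safe because the set of method names is small and fixed, whereas labeling by anything request-derived creates an unbounded number of child series.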

I am getting the metrics correctly on the endpoint, but the memory leak is causing issues.


There are 0 answers