Emit metrics from on-premise python application to Prometheus in EKS cluster


I have an EKS cluster with a Helm-deployed kube-prometheus-stack. I currently use Prometheus to monitor my RabbitMQ and Postgres pods.

I also have on-premises devices running a Python application whose metrics I would like to send to the same Prometheus server. I have tested the application against a Prometheus server on my local network, and scraping works there. However, it doesn't work with the cloud Prometheus server. These are some of the things I tried:

  1. For the kube-prometheus-stack, I added a scrape target by modifying my values.yaml file:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'My RPi'
        scrape_interval: 2m
        scrape_timeout: 1m
        static_configs:
          - targets: ['X.X.X.X:9090']

However, when I check my targets on the Prometheus UI page, I get "http://X.X.X.X:9090/metrics": context deadline exceeded. I increased the scrape_interval and scrape_timeout to account for slow responses, but that didn't fix the issue. It looks like a networking issue: the EKS cluster is not on the same network as my local devices.
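The context deadline exceeded error means Prometheus timed out waiting for any response, which points at routing rather than a slow exporter. A plain TCP probe run from a debug pod inside the cluster can separate the two cases (hypothetical helper, not part of any Prometheus tooling):

```python
import socket


def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False for X.X.X.X:9090 from inside the cluster, no amount of scrape_timeout tuning will help, because the packets never reach the device.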

In an effort to expose my Prometheus service, I changed its service type from the default ClusterIP to LoadBalancer, but the problem persists.

I also tried the PushProx approach, using an EC2 instance (which I already use as a VPN solution) as the proxy:

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'My RPi'
        proxy_url: http://my-proxy:8080/
        scrape_interval: 2m
        scrape_timeout: 1m
        static_configs:
          - targets: ['X.X.X.X:9090']

(and on the client side I set up the fqdn as X.X.X.X:9090), but I still hit the same issue.

Any leads?
