I want to add metrics to my Spark application and I use the JMX exporter to expose them to Prometheus. As a first step, I would like to see Prometheus connect to the JMX exporter successfully and scrape some existing Spark metrics. I followed this answer and executed the following command:
spark-shell --conf "spark.driver.extraJavaOptions=-javaagent:jmx_prometheus_javaagent-0.10.jar=8888:.../spark.yml"
I found a spark.yml file here
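For reference, a minimal JMX-exporter config that simply exposes every MBean looks roughly like the sketch below; the linked spark.yml may contain more specific rules:

lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  # catch-all rule: export every MBean attribute without renaming
  - pattern: ".*"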
When I go to http://localhost:8888/metrics I see a lot of metrics; here is a sample of the output:
# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 57.0
# HELP jvm_threads_daemon Daemon thread count of a JVM
# TYPE jvm_threads_daemon gauge
jvm_threads_daemon 50.0
# HELP jvm_threads_peak Peak thread count of a JVM
# TYPE jvm_threads_peak gauge
jvm_threads_peak 58.0
# HELP jvm_threads_started_total Started thread count of a JVM
# TYPE jvm_threads_started_total counter
jvm_threads_started_total 60.0
# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
# TYPE jvm_threads_deadlocked gauge
jvm_threads_deadlocked 0.0
# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
# TYPE jvm_threads_deadlocked_monitor gauge
jvm_threads_deadlocked_monitor 0.0
# HELP jmx_scrape_duration_seconds Time this JMX scrape took, in seconds.
# TYPE jmx_scrape_duration_seconds gauge
jmx_scrape_duration_seconds 0.018020101
# HELP jmx_scrape_error Non-zero if this scrape failed.
# TYPE jmx_scrape_error gauge
jmx_scrape_error 0.0
# HELP jvm_info JVM version info
# TYPE jvm_info gauge
jvm_info{version="11.0.9+11",vendor="Oracle Corporation",} 1.0
# HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
# TYPE jmx_config_reload_failure_total counter
jmx_config_reload_failure_total 0.0
# HELP jmx_config_reload_success_total Number of times configuration have successfully been reloaded.
# TYPE jmx_config_reload_success_total counter
jmx_config_reload_success_total 0.0
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 1.83810352E8
jvm_memory_bytes_used{area="nonheap",} 1.324068E8
# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_committed gauge
jvm_memory_bytes_committed{area="heap",} 5.36870912E8
jvm_memory_bytes_committed{area="nonheap",} 1.39730944E8
# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_max gauge
jvm_memory_bytes_max{area="heap",} 1.073741824E9
jvm_memory_bytes_max{area="nonheap",} -1.0
# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_used gauge
jvm_memory_pool_bytes_used{pool="CodeHeap 'non-nmethods'",} 1330816.0
jvm_memory_pool_bytes_used{pool="Metaspace",} 9.090232E7
jvm_memory_pool_bytes_used{pool="CodeHeap 'profiled nmethods'",} 2.3704192E7
jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 1.1603552E7
jvm_memory_pool_bytes_used{pool="G1 Eden Space",} 7.2351744E7
jvm_memory_pool_bytes_used{pool="G1 Old Gen",} 9.3632816E7
jvm_memory_pool_bytes_used{pool="G1 Survivor Space",} 1.7825792E7
jvm_memory_pool_bytes_used{pool="CodeHeap 'non-profiled nmethods'",} 4865920.0
# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_committed gauge
jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-nmethods'",} 2555904.0
jvm_memory_pool_bytes_committed{pool="Metaspace",} 9.490432E7
jvm_memory_pool_bytes_committed{pool="CodeHeap 'profiled nmethods'",} 2.3724032E7
jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 1.3631488E7
jvm_memory_pool_bytes_committed{pool="G1 Eden Space",} 2.71581184E8
jvm_memory_pool_bytes_committed{pool="G1 Old Gen",} 2.47463936E8
jvm_memory_pool_bytes_committed{pool="G1 Survivor Space",} 1.7825792E7
jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-profiled nmethods'",} 4915200.0
# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_max gauge
jvm_memory_pool_bytes_max{pool="CodeHeap 'non-nmethods'",} 5836800.0
jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0
jvm_memory_pool_bytes_max{pool="CodeHeap 'profiled nmethods'",} 1.22908672E8
jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9
jvm_memory_pool_bytes_max{pool="G1 Eden Space",} -1.0
jvm_memory_pool_bytes_max{pool="G1 Old Gen",} 1.073741824E9
jvm_memory_pool_bytes_max{pool="G1 Survivor Space",} -1.0
jvm_memory_pool_bytes_max{pool="CodeHeap 'non-profiled nmethods'",} 1.22912768E8
# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM
# TYPE jvm_classes_loaded gauge
jvm_classes_loaded 10829.0
# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
# TYPE jvm_classes_loaded_total counter
jvm_classes_loaded_total 10829.0
# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
# TYPE jvm_classes_unloaded_total counter
jvm_classes_unloaded_total 0.0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 23.438644
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.623251436259E9
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 412.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 10240.0
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 10.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 0.257
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 0.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.0
My prometheus.yml contains the following:
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: "spark_streaming_app"
    scrape_interval: "5s"
    static_configs:
      - targets: ['localhost:8888']
When I go to the Prometheus UI at localhost:9090/targets I can see that the prometheus target is up, whereas spark_streaming_app is down. As far as I can tell, the metrics are exposed successfully and show up at localhost:8888, but Prometheus fails to scrape them.
Any idea what I did wrong?
Prometheus is 'containerized', and localhost for a container is the container itself. Thus, there is nothing on port 8888 for Prometheus to scrape metrics from.
If you're using Docker Desktop (macOS/Windows), use host.docker.internal instead of localhost in prometheus.yml for targets running on the host. On Linux, run the Prometheus container in host network mode and no configuration change is required.
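For example, with Docker Desktop the scrape job could be changed to something like this (a sketch, assuming the JMX exporter still listens on port 8888 on the host):

  - job_name: "spark_streaming_app"
    scrape_interval: "5s"
    static_configs:
      # host.docker.internal resolves to the host machine from inside the container
      - targets: ['host.docker.internal:8888']

On Linux, starting the container with something like docker run --network host prom/prometheus makes it share the host's network namespace, so the original localhost:8888 target keeps working.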