I have two executors running on a worker node in Spark, and I want to monitor them using Java Flight Recorder. For reasons outside my control, I can't use jcmd to attach to the executors.
Using spark.executor.extraJavaOptions I can pass a static file name such as record.jfr, but then both executors on the worker node write their JFR data to the same file, which is not ideal. Also, since file-name expansion for JFR is not supported in Java 11 (https://bugs.openjdk.org/browse/JDK-8269127), I can't use the %p option.
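For reference, this is roughly what my current configuration looks like (the recording options, path, and application jar are illustrative, not my exact setup):

```shell
# Illustrative spark-submit invocation: with a static filename, both executors
# on the same worker node write to the same /tmp/record.jfr
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:StartFlightRecording=filename=/tmp/record.jfr" \
  my-app.jar
```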
Does Spark provide placeholders for the executor ID that it substitutes when it creates executors? That way I could pass the file name as, say, {{EXECUTOR_ID}}_record.jfr and get a separate recording for each executor.
The Spark documentation for spark.executor.defaultJavaOptions (https://spark.apache.org/docs/latest/configuration.html#runtime-environment) mentions that it provides placeholders for the executor ID and application ID, but I'm not sure what the placeholders actually are.
I tried -recording.jfr and {{EXECUTOR_ID}.jfr (suggested in "Spark Executor Id in JAVA_OPTS"), but those produce files literally named -recording.jfr and {{EXECUTOR_ID}.jfr respectively.
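Concretely, the placeholder attempt looked like this (again illustrative; the path and jar name are placeholders, and the {{EXECUTOR_ID}} syntax is taken from the linked answer):

```shell
# Attempted placeholder syntax: the {{EXECUTOR_ID}} token is passed through
# literally instead of being substituted, so the file is named
# "{{EXECUTOR_ID}}_record.jfr" on disk
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:StartFlightRecording=filename=/tmp/{{EXECUTOR_ID}}_record.jfr" \
  my-app.jar
```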