My Dataflow job is triggered from the Apache Beam Python SDK. It worked when the runner was the default (DirectRunner), but failed when the runner was DataflowRunner. I suspect some part of the Dataflow setup in my GCP project is not correct. Here is the error log entry:
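For context, the pipeline is launched roughly as below. This is a minimal sketch based on the standard Beam wordcount example; the project ID, bucket, and input/output paths are placeholders, not my exact values:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Switching runner to "DataflowRunner" is what triggers the failure;
# with the default DirectRunner the same pipeline completes locally.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",              # placeholder project ID
    region="us-central1",
    temp_location="gs://my-bucket/temp",   # placeholder bucket
    job_name="wordcountpy-test",
)

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://dataflow-samples/shakespeare/kinglear.txt")
     | "Split" >> beam.FlatMap(lambda line: line.split())
     | "Count" >> beam.combiners.Count.PerElement()
     | "Format" >> beam.MapTuple(lambda word, n: "%s: %d" % (word, n))
     | "Write" >> beam.io.WriteToText("gs://my-bucket/output/counts"))  # placeholder output path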
insertId: "18yr612ckq8"
labels: {
dataflow.googleapis.com/job_id: "2019-12-29_20_13_18-6351782926232365732"
dataflow.googleapis.com/job_name: "wordcountpy-test"
dataflow.googleapis.com/region: "us-central1"
}
logName: "projects/hsbc-9820327-cmbsp54-dev/logs/dataflow.googleapis.com%2Fjob-message"
receiveTimestamp: "2019-12-30T05:13:27.146833360Z"
resource: {
labels: {
job_id: "2019-12-29_20_13_18-6351782926232365732"
job_name: "wordcountpy-test"
project_id: "488006911152"
region: "us-central1"
step_id: ""
}
type: "dataflow_step"
}
severity: "ERROR"
textPayload: "Workflow failed. Causes: The Dataflow job appears to be stuck because no worker activity has been seen in the last 1h. You can get help with Cloud Dataflow at https://cloud.google.com/dataflow/support."
timestamp: "2019-12-30T05:13:25.787782564Z"