We are running Jaeger 1.29 on Kubernetes version 1.21.7, storing Jaeger's Badger data in an Azure storage account (Azure Files share). It worked fine until now: when the pod dies and a new pod spins up, the new pod cannot access the previous data and the container crash-loops. Below is the error I am getting:
kubectl logs jaeger-75cddd55bf-48nss -n istio-system
2022/02/16 04:14:54 maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
{"level":"info","ts":1644984894.1548135,"caller":"flags/service.go:117","msg":"Mounting metrics handler on admin server","route":"/metrics"}
{"level":"info","ts":1644984894.1551123,"caller":"flags/service.go:123","msg":"Mounting expvar handler on admin server","route":"/debug/vars"}
{"level":"info","ts":1644984894.1556144,"caller":"flags/admin.go:104","msg":"Mounting health check on admin server","route":"/"}
{"level":"info","ts":1644984894.1557937,"caller":"flags/admin.go:115","msg":"Starting admin HTTP server","http-addr":":14269"}
{"level":"info","ts":1644984894.1560519,"caller":"flags/admin.go:96","msg":"Admin server started","http.host-port":"[::]:14269","health-status":"unavailable"}
badger 2022/02/16 04:14:54 ERROR: Received err: file does not exist for table 6122. Cleaning up...
{"level":"fatal","ts":1644984895.000993,"caller":"./main.go:107","msg":"Failed to init storage factory","error":"file does not exist for table 6122","stacktrace":"main.main.func1\n\t./main.go:107\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/[email protected]/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/[email protected]/command.go:974\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/[email protected]/command.go:902\nmain.main\n\t./main.go:226\nruntime.main\n\truntime/proc.go:255"}
And below is the Jaeger YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
  namespace: istio-system
  labels:
    app: jaeger
spec:
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
      annotations:
        sidecar.istio.io/inject: "false"
        prometheus.io/scrape: "true"
        prometheus.io/port: "14269"
    spec:
      containers:
        - name: jaeger
          image: "docker.io/jaegertracing/all-in-one:1.29"
          env:
            - name: BADGER_EPHEMERAL
              value: "false"
            - name: SPAN_STORAGE_TYPE
              value: "badger"
            - name: BADGER_DIRECTORY_VALUE
              value: "/badger/data"
            - name: BADGER_DIRECTORY_KEY
              value: "/badger/key"
            - name: COLLECTOR_ZIPKIN_HOST_PORT
              value: ":9411"
            - name: MEMORY_MAX_TRACES
              value: "50000"
            - name: QUERY_BASE_PATH
              value: /jaeger
          livenessProbe:
            httpGet:
              path: /
              port: 14269
          readinessProbe:
            httpGet:
              path: /
              port: 14269
          volumeMounts:
            - name: azurefileshare
              mountPath: /badger
          resources:
            requests:
              cpu: 10m
      volumes:
        - name: azurefileshare
          azureFile:
            secretName: log-storage-secret  # see the secret sketch after this manifest
            shareName: jaegerfileshare
            readOnly: false
---
apiVersion: v1
kind: Service
metadata:
  name: tracing
  namespace: istio-system
  labels:
    app: jaeger
spec:
  type: ClusterIP
  ports:
    - name: http-query
      port: 80
      protocol: TCP
      targetPort: 16686
    # Note: Change port name if you add '--query.grpc.tls.enabled=true'
    - name: grpc-query
      port: 16685
      protocol: TCP
      targetPort: 16685
  selector:
    app: jaeger
---
# Jaeger implements the Zipkin API. To support swapping out the tracing backend, we use a Service named Zipkin.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: zipkin
  name: zipkin
  namespace: istio-system
spec:
  ports:
    - port: 9411
      targetPort: 9411
      name: http-query
  selector:
    app: jaeger
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger-collector
  namespace: istio-system
  labels:
    app: jaeger
spec:
  type: ClusterIP
  ports:
    - name: jaeger-collector-http
      port: 14268
      targetPort: 14268
      protocol: TCP
    - name: jaeger-collector-grpc
      port: 14250
      targetPort: 14250
      protocol: TCP
    - port: 9411
      targetPort: 9411
      name: http-zipkin
  selector:
    app: jaeger
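For reference, the azureFile volume above mounts the share using the log-storage-secret secret. A secret of that shape (the account name and key values below are placeholders, not our real ones) can be created like this:

kubectl create secret generic log-storage-secret -n istio-system \
  --from-literal=azurestorageaccountname=<storage-account-name> \
  --from-literal=azurestorageaccountkey=<storage-account-key>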
Can anyone help me resolve this issue?
I found a temporary workaround: I deleted the entire directory backing the volume. That space fills up again over time, though.
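A minimal sketch of that cleanup, assuming the Azure CLI is logged in to the account behind the share (the storage account name is a placeholder; I scale Jaeger down first so Badger is not writing while files are being deleted):

kubectl scale deployment jaeger -n istio-system --replicas=0
az storage file delete-batch --account-name <storage-account-name> --source jaegerfileshare
kubectl scale deployment jaeger -n istio-system --replicas=1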