Azure Monitor for containers already gathers basic logs from the console (stdout/stderr). Is there any reason to implement a sidecar for log shipping, especially for production workloads? Currently I am using the pattern below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidecar-logshipping
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sidecar-logshipping
  template:
    metadata:
      labels:
        app: sidecar-logshipping
    spec:
      containers:
      - name: main-container
        image: busybox
        args:
        - /bin/sh
        - -c
        - >
          i=0;
          while true;
          do
          echo "$i: $(date) dog" >> /var/log/mylogs/app.log;
          i=$((i+1));
          sleep 1;
          done
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
          requests:
            memory: "64Mi"
            cpu: "250m"
        volumeMounts:
        - name: logs
          mountPath: /var/log/mylogs
      - name: log-shipper
        image: busybox
        args: [/bin/sh, -c, 'tail -n+1 -f /var/log/mylogs/*.log']
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
          requests:
            memory: "64Mi"
            cpu: "250m"
        volumeMounts:
        - name: logs
          mountPath: /var/log/mylogs
      volumes:
      - name: logs
        emptyDir: {}
Azure Monitor collects logs and sends them to a Log Analytics workspace. It can't send logs to an ELK stack, so if you are used to those tools and want to keep using them, a Fluent Bit sidecar or a Fluentd DaemonSet-based solution is the alternative (see the sketch below). In that case, however, management of the ELK stack is on you.
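For example, a Fluent Bit sidecar could replace the busybox tail container in your Deployment. This is only a minimal sketch: the Elasticsearch host, port and index are placeholder values, and the ConfigMap must also be mounted into the pod as a volume named fluent-bit-config alongside the existing logs emptyDir.

# Sidecar container (replaces the log-shipper busybox container above);
# host, port and index below are assumptions to adjust for your cluster.
- name: fluent-bit
  image: fluent/fluent-bit:2.2
  volumeMounts:
  - name: logs
    mountPath: /var/log/mylogs
    readOnly: true
  - name: fluent-bit-config
    mountPath: /fluent-bit/etc/
---
# ConfigMap with the Fluent Bit pipeline: tail the shared log files and
# forward them to Elasticsearch.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush        5
        Log_Level    info

    [INPUT]
        Name         tail
        Path         /var/log/mylogs/*.log
        Tag          app.*

    [OUTPUT]
        Name         es
        Match        app.*
        Host         elasticsearch.logging.svc.cluster.local
        Port         9200
        Index        app-logs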
The advantage of Azure Monitor is that it consolidates your AKS logs with other Azure platform logs, providing a unified monitoring experience.
The disadvantage of Azure Monitor is cost: at very high log volumes, Log Analytics ingestion charges may become a consideration.
So you may want to use an open source ELK stack for applications that produce a high volume of logs and Azure Monitor for applications that produce a low volume of logs.
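If you stay on Azure Monitor, you can also reduce how much is ingested (and therefore the cost) through the Container insights agent's data collection settings. A minimal sketch, assuming the standard container-azm-ms-agentconfig ConfigMap in kube-system; the excluded namespaces are just examples:

apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  config-version: ver1
  log-data-collection-settings: |-
    [log_collection_settings]
      [log_collection_settings.stdout]
        enabled = true
        # Example value: exclude noisy namespaces to cut ingestion volume.
        exclude_namespaces = ["kube-system"]
      [log_collection_settings.stderr]
        enabled = true
        exclude_namespaces = ["kube-system"]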