I have a Kubernetes cluster with a Telegraf DaemonSet running on it.

If I install Telegraf directly on the master node or on the worker nodes, it can send the metrics to our Kafka endpoint. But when I run Telegraf in a pod as a DaemonSet, the metrics can't be sent out of my Kubernetes cluster. How can I forward or reroute the metrics from the DaemonSet pods so they leave the cluster the same way the node's own traffic does?
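For context, this is roughly what I understand a DaemonSet with hostNetwork: true would look like — from what I've read, this makes the pod share the node's network stack, so it would egress exactly like a locally installed agent (the names and image tag below are placeholders, not my actual manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: telegraf
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: telegraf
  template:
    metadata:
      labels:
        app: telegraf
    spec:
      hostNetwork: true                    # pod uses the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS working with hostNetwork
      containers:
      - name: telegraf
        image: telegraf:1.9
        volumeMounts:
        - name: config
          mountPath: /etc/telegraf
      volumes:
      - name: config
        configMap:
          name: telegraf-config
```

Is that the right approach here, or should pod traffic normally be able to reach an external endpoint without it?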

This is the ip a and ip r s output on my Kubernetes master node:

[root@k8s-master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:0f:6d:04 brd ff:ff:ff:ff:ff:ff
    inet 192.168.213.18/28 brd 192.168.213.31 scope global noprefixroute dynamic eth0
       valid_lft 86327sec preferred_lft 86327sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:0f:6d:07 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c8:53:b5:1b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 6a:c2:52:fd:5f:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::68c2:52ff:fefd:5f8e/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether ea:51:35:73:62:2e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::e851:35ff:fe73:622e/64 scope link
       valid_lft forever preferred_lft forever
7: veth…@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether c2:de:42:ed:c8:57 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::c0de:42ff:feed:c857/64 scope link
       valid_lft forever preferred_lft forever
8: veth…@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 7e:dd:ab:8b:f7:ef brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::7cdd:abff:fe8b:f7ef/64 scope link
       valid_lft forever preferred_lft forever
[root@k8s-master ~]# ip r s
default via 192.168.213.17 dev eth0 proto dhcp metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.0.0/24 dev cni0 proto kernel scope link src 192.168.0.1
192.168.1.0/24 via 192.168.1.0 dev flannel.1 onlink
192.168.2.0/24 via 192.168.2.0 dev flannel.1 onlink
192.168.213.16/28 dev eth0 proto kernel scope link src 192.168.213.18 metric 100
[root@k8s-master ~]#
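If I understand flannel correctly, traffic from the pod network (192.168.0.0/16 here) to destinations outside the cluster has to be masqueraded (SNAT) to the node's IP; otherwise the Kafka server sees a pod source address like 192.168.1.10 that it has no route back to, and replies never arrive. I assume this can be checked on a node with something like:

```shell
# On the node: list NAT POSTROUTING rules and look for a MASQUERADE
# rule covering the pod network (192.168.0.0/16 in my cluster)
iptables -t nat -S POSTROUTING | grep -i masq
```

Is a missing masquerade rule a plausible explanation for what I'm seeing?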

This is the same output inside the pod:

root@telegraf-…:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if…: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 2e:99:ce:82:bd:67 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.10/24 scope global eth0
       valid_lft forever preferred_lft forever
root@telegraf-…:/# ip r s
default via 192.168.1.1 dev eth0
10.244.0.0/16 via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.10
root@telegraf-…:/#
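To narrow down where the traffic is dropped, I think a small TCP probe could be run both on the node and inside the pod. This is a generic sketch (the address in the comment is the Kafka endpoint from my setup; it assumes Python is available in the container):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run on the node and inside the pod to compare, e.g.:
# tcp_reachable("10.121.63.5", 9092)
```

If this returns True on the node but False in the pod, that would confirm the problem is the pod-to-external routing rather than Telegraf itself.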

I want to send the metrics to 10.121.63.5:9092.
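For reference, the output section of my telegraf.conf looks roughly like this (the topic name here is a placeholder):

```toml
[[outputs.kafka]]
  ## Kafka broker to write metrics to
  brokers = ["10.121.63.5:9092"]
  topic = "telegraf"
```

The same output config works when Telegraf runs directly on the node.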

To sum up: if I send from k8s-master it works, but if I send from the Telegraf pod the metrics don't get through. I'm new to Kubernetes, so I'm not clear on the networking yet.
