• Basic Information:

    1. System:

      # cat /proc/version
      
      Linux version 3.10.0-514.2.2.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Tue Dec 6 23:06:41 UTC 2016
      
    2. Kubeadm Version:

      # kubeadm version
      
      kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
      
    3. Kubectl Version

      # kubectl version
      
      Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
      Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
      
    4. Docker Version

      # docker version
      
      Client:
       Version:      1.12.5
       API version:  1.24
       Go version:   go1.6.4
       Git commit:   7392c3b
       Built:        Fri Dec 16 02:23:59 2016
       OS/Arch:      linux/amd64
      
      Server:
       Version:      1.12.5
       API version:  1.24
       Go version:   go1.6.4
       Git commit:   7392c3b
       Built:        Fri Dec 16 02:23:59 2016
       OS/Arch:      linux/amd64
      
    5. Weave Images

      REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
      weaveworks/weave-npc                                     1.8.2               c91ef3f4642b        4 weeks ago         68.77 MB
      weaveworks/weave-kube                                    1.8.2               a4740ae55aae        4 weeks ago         166.7 MB
      
  • Problem

    I am deploying Kubernetes with kubeadm. Strangely, the first deployment on a fresh VM works fine: Weave comes up and kube-dns is healthy. But after running kubeadm reset and re-initializing, Weave cannot start anymore.

    • Kubectl Get Pods

      [root@192-168-1-177 pod_network]# kubectl get pods -o wide --all-namespaces
      NAMESPACE     NAME                                           READY     STATUS             RESTARTS   AGE       IP              NODE
      kube-system   dummy-2088944543-tdxck                         1/1       Running            0          59m       192.168.1.177   192-168-1-177.master
      kube-system   etcd-192-168-1-177.master                      1/1       Running            0          59m       192.168.1.177   192-168-1-177.master
      kube-system   kube-apiserver-192-168-1-177.master            1/1       Running            0          59m       192.168.1.177   192-168-1-177.master
      kube-system   kube-controller-manager-192-168-1-177.master   1/1       Running            0          59m       192.168.1.177   192-168-1-177.master
      kube-system   kube-discovery-1769846148-87pgm                1/1       Running            0          59m       192.168.1.177   192-168-1-177.master
      kube-system   kube-dns-2924299975-82sb6                      4/4       Running            0          59m       10.32.0.2       192-168-1-177.master
      kube-system   kube-proxy-8xprh                               1/1       Running            0          59m       192.168.1.177   192-168-1-177.master
      kube-system   kube-scheduler-192-168-1-177.master            1/1       Running            0          59m       192.168.1.177   192-168-1-177.master
      kube-system   weave-net-ssqtd                                1/2       CrashLoopBackOff   16         58m       192.168.1.177   192-168-1-177.master
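
      The weave-net pod is stuck in CrashLoopBackOff with 16 restarts. The container log (next section) is the most direct signal, and kubectl describe also surfaces the restart events and the last termination reason (a sketch, using the pod name from the output above):

        kubectl describe pod -n kube-system weave-net-ssqtd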
      
    • Kubectl Logs

        # kubectl logs $(kubectl get pods --all-namespaces | grep weave-net | awk '{print $2}') -n kube-system weave-npc
      
        time="2017-01-09T11:11:17Z" level=info msg="Starting Weaveworks NPC 1.8.2" 
        time="2017-01-09T11:11:17Z" level=info msg="Serving /metrics on :6781" 
        Mon Jan  9 11:11:17 2017 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
        time="2017-01-09T11:11:17Z" level=fatal msg="ipset [destroy] failed: ipset v6.29: Set cannot be destroyed: it is in use by a kernel component\n: exit status 1" 
      
  • Basic Operation

    • Kubeadm Init

      kubeadm init --api-advertise-addresses 192.168.1.177 --use-kubernetes-version v1.5.1
      
    • Apply Weave

      kubectl apply -f https://git.io/weave-kube
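
      To confirm the DaemonSet actually rolled out before exercising DNS (a sketch; assumes the weave-kube manifest of this era names the DaemonSet weave-net and labels its pods name=weave-net):

        kubectl get daemonset weave-net -n kube-system
        kubectl get pods -n kube-system -l name=weave-net -o wide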
      
    • Kubeadm Reset

      kubeadm reset
      docker rm `docker ps -a -q`
      find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v
      rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd
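
      These steps clean up containers and kubelet state but leave kernel-side networking state (the ipset sets and any iptables rules that reference them) untouched, which is exactly what trips weave-npc on the next init. A possible extra cleanup step that avoids a reboot (a sketch; assumes GNU xargs for -d, and that nothing else on the host needs the existing rules or sets to survive):

        # Flush iptables so no rule holds an ipset reference, then destroy
        # the leftover sets one per call (weave's generated names contain
        # shell metacharacters, so split strictly on newlines)
        iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
        ipset -n list | xargs -r -d '\n' -n 1 ipset destroy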
      
1 Answer

Accepted answer, by CHENJIAN:

I fixed my own problem simply by rebooting the VM.

Before the reboot:

  • Ipset List

    # ipset list
    
    Name: weave-k?Z;25^M}|1s7P3|H9i;*;MhG
    Type: hash:ip
    Revision: 1
    Header: family inet hashsize 1024 maxelem 65536
    Size in memory: 16528
    References: 0
    Members:
    
    Name: weave-iuZcey(5DeXbzgRFs8Szo]<@p
    Type: hash:ip
    Revision: 1
    Header: family inet hashsize 1024 maxelem 65536
    Size in memory: 16528
    References: 0
    Members:
    
    Name: weave-#Of<X6ofOD9U?jkdAmmuY.VL(
    Type: hash:ip
    Revision: 1
    Header: family inet hashsize 1024 maxelem 65536
    Size in memory: 16528
    References: 0
    Members:
    
    Name: felix-calico-hosts-4
    Type: hash:ip
    Revision: 1
    Header: family inet hashsize 1024 maxelem 1048576
    Size in memory: 16528
    References: 1
    Members:
    
    Name: felix-all-ipam-pools
    Type: hash:net
    Revision: 3
    Header: family inet hashsize 1024 maxelem 1048576
    Size in memory: 16784
    References: 1
    Members:
    
    Name: felix-masq-ipam-pools
    Type: hash:net
    Revision: 3
    Header: family inet hashsize 1024 maxelem 1048576
    Size in memory: 16784
    References: 1
    Members:
    
  • Ipset Destroy

    # ipset destroy
    
    ipset v6.19: Set cannot be destroyed: it is in use by a kernel component
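
    A bare ipset destroy tries to remove every set and aborts on the first one still in use: the felix-* sets above (apparently left over from an earlier Calico run) each show References: 1, meaning a kernel component still holds them, while the weave-* sets show References: 0. Destroying just the weave sets by name would likely have worked without a reboot (a sketch; assumes GNU xargs, since the generated names contain shell metacharacters and must be split on newlines only):

      ipset -n list | grep '^weave-' | xargs -r -d '\n' -n 1 ipset destroy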
    

After the reboot:

  • Ipset List

    # ipset list
    
    Name: weave-iuZcey(5DeXbzgRFs8Szo]<@p
    Type: hash:ip
    Revision: 1
    Header: family inet hashsize 1024 maxelem 65536
    Size in memory: 16544
    References: 1
    Members:
    10.32.0.2
    
    Name: weave-k?Z;25^M}|1s7P3|H9i;*;MhG
    Type: hash:ip
    Revision: 1
    Header: family inet hashsize 1024 maxelem 65536
    Size in memory: 16528
    References: 1
    Members:
    

And everything is OK.

  • Kubectl Get Pods

    # kubectl get pods -o wide --all-namespaces
    
    NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE       IP              NODE
    default       busybox                                        1/1       Running   0          6s        10.44.0.1       192-168-1-178.node
    kube-system   dummy-2088944543-05kj3                         1/1       Running   0          19m       192.168.1.177   192-168-1-177.master
    kube-system   etcd-192-168-1-177.master                      1/1       Running   0          18m       192.168.1.177   192-168-1-177.master
    kube-system   kube-apiserver-192-168-1-177.master            1/1       Running   0          18m       192.168.1.177   192-168-1-177.master
    kube-system   kube-controller-manager-192-168-1-177.master   1/1       Running   0          17m       192.168.1.177   192-168-1-177.master
    kube-system   kube-discovery-1769846148-3t242                1/1       Running   0          19m       192.168.1.177   192-168-1-177.master
    kube-system   kube-dns-2924299975-6bv1x                      4/4       Running   0          19m       10.32.0.2       192-168-1-177.master
    kube-system   kube-proxy-4jqzb                               1/1       Running   0          19m       192.168.1.177   192-168-1-177.master
    kube-system   kube-proxy-kxkxm                               1/1       Running   0          10m       192.168.1.178   192-168-1-178.node
    kube-system   kube-scheduler-192-168-1-177.master            1/1       Running   0          18m       192.168.1.177   192-168-1-177.master
    kube-system   weave-net-jgwwt                                2/2       Running   0          10m       192.168.1.178   192-168-1-178.node
    kube-system   weave-net-s4w7w                                2/2       Running   0          17m       192.168.1.177   192-168-1-177.master