Keepalived - VIP on device different from one where VRRP instance configured

I have 2 VMs with Linux and keepalived installed. Their hostnames are master and slave. Each VM has 2 network interfaces configured for different subnets:

  • master:
    • eth1 - 192.168.1.101/24
    • eth2 - 192.168.56.101/24
  • slave:
    • eth1 - 192.168.1.102/24
    • eth2 - 192.168.56.102/24

On each node I configured one vrrp_instance using interface eth1:

vrrp_instance VI_1 {
    ...
    interface eth1
    ...
}

And I assigned one VIP for each subnet - one per interface:

vrrp_instance VI_1 {
    ...
    virtual_ipaddress {
        192.168.1.250/32  dev eth1 label eth1:vip0
        192.168.56.250/32 dev eth2 label eth2:vip0
    }
    ...
}

So the full configs are:

  • master:
    vrrp_instance VI_1 {
        state MASTER
        interface eth1
        virtual_router_id 1
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass HURRDURR
        }
        virtual_ipaddress {
            192.168.1.250/32  dev eth1 label eth1:vip0
            192.168.56.250/32 dev eth2 label eth2:vip0
        }
    }
    
  • slave:
    vrrp_instance VI_1 {
        state BACKUP
        interface eth1
        virtual_router_id 1
        priority 99
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass HURRDURR
        }
        virtual_ipaddress {
            192.168.1.250/32  dev eth1 label eth1:vip0
            192.168.56.250/32 dev eth2 label eth2:vip0
        }
    }
    

My question: are there any pitfalls in a setup like this (assuming VRRP multicast is allowed on the interface given in the interface <interface name> option)?

As far as I understand, the interface <interface name> option is used only for communication between the keepalived instances: it specifies which interface keepalived uses to send the multicast traffic through which the nodes negotiate which of them should be the leader at any given moment. It should not affect the configured VIPs (provided they are configured properly).
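The election that runs over that interface can be modeled simply: per the VRRP specification, the router with the highest priority becomes MASTER, and a priority tie is broken by the higher primary IP address. The following is a minimal Python sketch of that rule (an illustrative model, not keepalived's actual code; the node names and addresses are the ones from the question):

```python
# Sketch of the VRRP master election keepalived performs over the
# "interface" link: highest priority wins, ties broken by higher IP.
from ipaddress import ip_address

def elect_master(routers):
    """routers: list of (name, priority, primary_ip) tuples.
    Returns the name of the router that becomes MASTER."""
    return max(routers, key=lambda r: (r[1], ip_address(r[2])))[0]

# The two nodes from the question, advertising on eth1:
nodes = [
    ("master", 100, "192.168.1.101"),
    ("slave",   99, "192.168.1.102"),
]
print(elect_master(nodes))       # -> master

# If the master's advertisements stop arriving (e.g. eth1 goes down),
# the slave no longer sees a higher-priority peer and takes over:
print(elect_master(nodes[1:]))   # -> slave
```

Note that this is exactly why the question matters: the election only reacts to what happens on the interface carrying the advertisements.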

1 Answer

Al Ryz (accepted answer):

I realized at least one pitfall of such a configuration: if interface eth2 fails on the master server, the VIP assigned to eth2 will not be moved to the slave, because the VRRP instance exchanges its advertisements over eth1 on both servers, and that link stays healthy.

Therefore I think such a configuration is not recommended: each VIP should be assigned to the same interface on which its VRRP instance is configured.
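As an alternative to splitting the setup into two VRRP instances, keepalived also supports interface tracking, which makes an instance step down when a tracked interface fails even though the advertisements themselves flow over another link. The sketch below (authentication block omitted for brevity; the weight value is an illustrative choice, see the keepalived.conf man page for the track_interface syntax) would drop the master's priority below the backup's 99 when eth2 goes down:

```
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 1
    priority 100
    advert_int 1
    # Reduce priority by 10 (100 -> 90 < 99) if eth2 fails, so both
    # VIPs fail over even though VRRP itself runs over eth1:
    track_interface {
        eth2 weight -10
    }
    virtual_ipaddress {
        192.168.1.250/32  dev eth1 label eth1:vip0
        192.168.56.250/32 dev eth2 label eth2:vip0
    }
}
```

That said, the two-instance layout below keeps each VIP's fate tied directly to its own interface, which is the simpler arrangement.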

Correct configuration:

  • master:

    vrrp_instance VI_1 {
        state MASTER
        interface eth1
        virtual_router_id 1
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass HURRDURR
        }
        virtual_ipaddress {
            192.168.1.250/32 dev eth1 label eth1:vip0
        }
    }

    vrrp_instance VI_2 {
        state MASTER
        interface eth2
        virtual_router_id 2
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass HURRDURR
        }
        virtual_ipaddress {
            192.168.56.250/32 dev eth2 label eth2:vip0
        }
    }
    
  • slave:

    vrrp_instance VI_1 {
        state BACKUP
        interface eth1
        virtual_router_id 1
        priority 99
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass HURRDURR
        }
        virtual_ipaddress {
            192.168.1.250/32 dev eth1 label eth1:vip0
        }
    }

    vrrp_instance VI_2 {
        state BACKUP
        interface eth2
        virtual_router_id 2
        priority 99
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass HURRDURR
        }
        virtual_ipaddress {
            192.168.56.250/32 dev eth2 label eth2:vip0
        }
    }