Could the lack of RDMA (source1, source2) be the reason for problems with the network, e.g. ifconfig displaying an eth1:avahi interface (see link)? I want to run OpenFOAM on two A8 nodes and have to run
/etc/init.d/networking restart
frequently just to bring up eth0 properly. Otherwise MPI uses the wrong IP address to communicate, e.g. the 169.254.x.x link-local address instead of the 10.0.0.x address (see the workaround sketch after the ifconfig output below).
$ ifconfig
eth0       Link encap:Ethernet  HWaddr 00:0d:3a:20:3f:33
           inet addr:10.0.0.4  Bcast:10.0.1.255  Mask:255.255.254.0
eth1       Link encap:Ethernet  HWaddr 00:15:5d:33:ff:ad
           inet6 addr: fe80::215:5dff:fe33:ffad/64 Scope:Link
eth1:avahi Link encap:Ethernet  HWaddr 00:15:5d:33:ff:ad
           inet addr:169.254.9.198  Bcast:169.254.255.255  Mask:255.255.0.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
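As a possible workaround (only a sketch, assuming Open MPI is being used and that the 10.0.0.x network on eth0 is the one MPI should take; the host names and solver name are placeholders), the TCP BTL could be pinned to eth0 instead of restarting networking every time:

# Restrict Open MPI's TCP BTL to eth0 so it ignores the 169.254.x.x
# link-local address on eth1:avahi; the subnet form "10.0.0.0/23" also works.
mpirun --host node0,node1 -np $nProcs \
       --mca btl_tcp_if_include eth0 \
       ./mySolver -parallel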
If I run mpirun with InfiniBand enabled as follows:
mpirun --host localhost --mca btl openib,self,tcp -np $nProcs
is it really using InfiniBand on the VM?
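One way to check this (again only a sketch, assuming Open MPI; the host names, verbosity level, and solver name are placeholders) would be to drop tcp from the BTL list, so the run aborts if no usable RDMA device is found, and to raise the BTL verbosity:

# Allow only the openib and self BTLs: without a working InfiniBand/RDMA device,
# Open MPI aborts instead of silently falling back to TCP.
mpirun --host node0,node1 -np $nProcs \
       --mca btl openib,self \
       --mca btl_base_verbose 30 \
       ./mySolver -parallel

# Independently, list the RDMA devices visible inside the VM:
ibv_devinfo   # from libibverbs-utils; prints device details, or reports that no IB devices were found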