I'm having a hard time finalizing a first working IPsec configuration.
I want an IPsec server that forms a network with its clients, and I want the clients to be able to communicate with each other through the server. I'm using strongSwan on both the server and the clients, and I'll have a few clients running other IPsec implementations.
Problem
So the server is reachable at 10.231.0.1 for every client, and the server can ping the clients; that part works well. But the clients cannot reach each other.
Here is the tcpdump output when I try to ping 10.231.0.2 from 10.231.0.3:
# tcpdump -n host 10.231.0.3
[..]
21:28:49.099653 ARP, Request who-has 10.231.0.2 tell 10.231.0.3, length 28
21:28:50.123649 ARP, Request who-has 10.231.0.2 tell 10.231.0.3, length 28
I thought of the farp plugin, mentioned here: https://wiki.strongswan.org/projects/strongswan/wiki/ForwardingAndSplitTunneling but the ARP request never makes it to the server, it stays local.
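For reference, in case it helps anyone reproduce this: if the farp plugin is compiled into your strongSwan build, it still has to be loaded explicitly through its charon plugin config. Something like this (path assumes a stock package layout):

```
# /etc/strongswan.d/charon/farp.conf
farp {
    load = yes
}
```

As noted above, though, since the ARP request never leaves the client, farp on the server presumably cannot answer it anyway.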
Information
Server ipsec.conf
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn ikev2-vpn
    auto=add
    compress=no
    type=tunnel
    keyexchange=ikev2
    fragmentation=yes
    forceencaps=yes
    dpdaction=clear
    dpddelay=300s
    esp=aes256-sha256-modp4096!
    ike=aes256-sha256-modp4096!
    rekey=no
    left=%any
    leftid=%any
    leftcert=server.crt
    leftsendcert=always
    leftsourceip=10.231.0.1
    leftauth=pubkey
    leftsubnet=10.231.0.0/16
    right=%any
    rightid=%any
    rightauth=pubkey
    rightsourceip=10.231.0.2-10.231.254.254
    rightsubnet=10.231.0.0/16
Client ipsec.conf
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn ikev2-vpn
    auto=route
    compress=no
    type=tunnel
    keyexchange=ikev2
    fragmentation=yes
    forceencaps=yes
    dpdaction=clear
    dpddelay=60s
    esp=aes256-sha256-modp4096!
    ike=aes256-sha256-modp4096!
    rekey=no
    right=server.url
    rightid=%any
    rightauth=pubkey
    rightsubnet=10.231.0.1/32
    left=%defaultroute
    leftid=%any
    leftauth=pubkey
    leftcert=client.crt
    leftsendcert=always
    leftsourceip=10.231.0.3
    leftsubnet=10.231.0.3/32
There should be nothing special or relevant in strongSwan's and charon's configuration files, but I can provide them if you think they could be useful.
I've taken a few shortcuts in the configuration: I'm using virtual IPs, but I'm not using a DHCP plugin or anything else to distribute the addresses. I'm setting the IP address manually on the clients like so:
ip address add 10.231.0.3/16 dev eth0
And here is the routing table on the client side (set up automatically by adding the IP, and by strongSwan for table 220):
# ip route list | grep 231
10.231.0.0/16 dev eth0 proto kernel scope link src 10.231.0.3
# ip route list table 220
10.231.0.1 via 192.168.88.1 dev eth0 proto static src 10.231.0.3
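Incidentally, I suspect the on-link 10.231.0.0/16 route in the first listing comes from the /16 prefix length on the address itself, and it is what makes the kernel ARP for peers locally instead of falling through to the IPsec policy routing. One experiment I haven't fully explored (untested sketch) would be to add only the host address:

```
# Sketch: add the virtual IP as /32 so the kernel creates no
# on-link 10.231.0.0/16 route; client-to-client traffic can then
# fall through to the IPsec policy routing (table 220) instead
ip address add 10.231.0.3/32 dev eth0
```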
I've also played with iptables and this rule:
iptables -t nat -I POSTROUTING -m policy --pol ipsec --dir out -j ACCEPT
on both the client and the server, because I understood existing MASQUERADE rules could be a problem, but that didn't change anything.
I've also set these kernel parameters through sysctl on both the client and the server:
sysctl net.ipv4.conf.default.accept_redirects=0
sysctl net.ipv4.conf.default.send_redirects=0
sysctl net.ipv4.conf.default.rp_filter=0
sysctl net.ipv4.conf.eth0.accept_redirects=0
sysctl net.ipv4.conf.eth0.send_redirects=0
sysctl net.ipv4.conf.eth0.rp_filter=0
sysctl net.ipv4.conf.all.proxy_arp=1
sysctl net.ipv4.conf.eth0.proxy_arp=1
sysctl net.ipv4.ip_forward=1
Lead 1
This could be related to the subnets declared as /32 in my client configurations. At first I declared the subnet as /16, but with that configuration I could not connect two clients at the same time: the second client's traffic selector swallowed all the traffic. So I understood I should narrow the traffic selectors, and this is how I did it, but maybe I'm wrong.
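If the /16-on-both-sides attempt failed because both clients proposed identical selectors, a possible middle ground (untested sketch, not something I've confirmed) is to keep the local selector at /32, so each client stays unique, while widening only the remote selector so that all 10.231.0.0/16 traffic is still sent to the server:

```
# Client-side sketch: unique local selector, wide remote selector
conn ikev2-vpn
    leftsubnet=10.231.0.3/32
    rightsubnet=10.231.0.0/16
```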
Lead 2
This could be related to my way of assigning IPs manually, and the mess it can introduce in the routing table. When I play with the routing table and manually assign a gateway (such as the client's public IP), the ARP requests disappear from tcpdump and I see the ICMP echo requests instead. But absolutely nothing arrives on the server.
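For what it's worth, strongSwan normally installs routes like this in table 220 itself when the negotiated remote traffic selector covers the subnet. The manual equivalent, mirroring the entry it already installed for 10.231.0.1 (sketch, reusing the 192.168.88.1 gateway from the listing above), would be roughly:

```
# Sketch: steer the whole VPN subnet through the IPsec policy table,
# mirroring the route strongSwan installed for 10.231.0.1
ip route add 10.231.0.0/16 via 192.168.88.1 dev eth0 proto static src 10.231.0.3 table 220
```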
Any thoughts on what I've done wrong?
Thanks