Assuming my iptables rules default to DROP on the INPUT and OUTPUT chains, what is the bare minimum set of rules that I must add to my chains to prevent a script running in GitHub Actions from stalling indefinitely?
I'm using (free) GitHub Actions for my open-source application's CI/CD infrastructure. When I push changes to github.com, it automatically spins up an Ubuntu 18.04 Linux server in Microsoft's cloud that checks out my repo and executes a Bash script to build my application.
For security reasons, early on in my build script I install and set up some very restrictive iptables rules that default to DROP on the INPUT and OUTPUT chains. I poke a hole in the firewall for 127.0.0.1, allow RELATED/ESTABLISHED traffic on INPUT, and only permit the _apt user to send traffic through OUTPUT.
This works great when I run the build script in a docker container on my local system. But, as I just learned, when it runs with GitHub Actions, it stalls indefinitely. Clearly, the instance itself needs to be able to communicate out to GitHub's servers in order to finish, and I appear to have broken that.
So the question is: what -j ACCEPT rules should I add to my iptables INPUT and OUTPUT chains to permit only the bare necessities for GitHub Actions executions to proceed as usual?
For reference, here's the snippet from my build script that sets up my firewall:
##################
# SETUP IPTABLES #
##################
# We setup iptables so that only the apt user (and therefore the apt command)
# can access the internet. We don't want insecure tools like `pip` to download
# unsafe code from the internet.
${SUDO} iptables-save > /tmp/iptables-save.`date "+%Y%m%d_%H%M%S"`
${SUDO} iptables -A INPUT -i lo -j ACCEPT
${SUDO} iptables -A INPUT -s 127.0.0.1/32 -j DROP
${SUDO} iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} iptables -A INPUT -j DROP
${SUDO} iptables -A OUTPUT -s 127.0.0.1/32 -d 127.0.0.1/32 -j ACCEPT
${SUDO} iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} iptables -A OUTPUT -m owner --uid-owner 100 -j ACCEPT # apt uid = 100
${SUDO} iptables -A OUTPUT -j DROP
${SUDO} ip6tables-save > /tmp/ip6tables-save.`date "+%Y%m%d_%H%M%S"`
${SUDO} ip6tables -A INPUT -i lo -j ACCEPT
${SUDO} ip6tables -A INPUT -s ::1/128 -j DROP
${SUDO} ip6tables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} ip6tables -A INPUT -j DROP
${SUDO} ip6tables -A OUTPUT -s ::1/128 -d ::1/128 -j ACCEPT
${SUDO} ip6tables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} ip6tables -A OUTPUT -m owner --uid-owner 100 -j ACCEPT
${SUDO} ip6tables -A OUTPUT -j DROP
# attempt to access the internet as root. If it works, exit 1
curl -s 1.1.1.1
if [ $? -eq 0 ]; then
    echo "ERROR: iptables isn't blocking internet access to unsafe tools. You may need to run this as root (and you should do it inside a VM)"
    exit 1
fi
This can be achieved by running your build script in a docker container and applying your iptables rules inside that container, which won't affect the host runner's connectivity.

For example, if the below script is executed in a GitHub Actions job (on the Ubuntu 18.04 GitHub shared runner), it will run the build script (docker_script.sh) in a Debian docker container that has no internet connectivity, except for the _apt user.
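A rough sketch of such a wrapper, here called run_in_docker.sh (the name, image tag, and mount path are placeholders; docker_script.sh stands in for your build script above and is assumed to install iptables and anything else it needs inside the container):

#!/bin/bash
# Hypothetical wrapper: run the build inside a docker container so the
# restrictive iptables rules apply only inside the container's network
# namespace, not to the GitHub Actions runner itself.
set -eu

# Pull the build image (see note 2 below about pinning signing keys first).
docker pull debian:stable-slim

# NET_ADMIN is required so iptables can be configured inside the container.
# The checked-out repo (the step's working directory) is mounted at /app.
docker run --rm \
  --cap-add NET_ADMIN \
  -v "$(pwd):/app" \
  -w /app \
  debian:stable-slim \
  /bin/bash /app/docker_script.sh

Because the DROP rules are applied inside the container's own network namespace, the runner itself keeps its normal connectivity to GitHub.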
Note that:
1. You have to execute the docker run command manually, rather than just specifying container: in the GitHub Actions yaml file, in order to add the NET_ADMIN capability. See also: How to run script in docker container with additional capabilities (docker exec ... --cap-add ...)
2. This is a security risk unless you pin the root signing keys before calling docker pull. See also: https://security.stackexchange.com/questions/238529/how-to-list-all-of-the-known-root-keys-in-docker-docker-content-trust
3. The above script should be executed as root. For example, prepend sudo to it in the run: key for the step in the GitHub Actions workflow, as in the sketch after this list.
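For illustration, a workflow step along these lines would invoke the wrapper as root (the workflow path, step name, and wrapper filename here are hypothetical):

# .github/workflows/build.yml (hypothetical path)
name: build

on: [push]

jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      # Run the wrapper as root (note 3 above).
      - name: Build inside firewalled container
        run: sudo ./run_in_docker.sh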