A node with only a private IP in one VPC cannot connect to a node in another VPC


Problem:

My EKS cluster, which resides in one VPC, cannot connect to a Redis service residing in another VPC on port 6379.

Things I have done so far:

I have created an EKS cluster with the following VPC configuration:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.2.0"

  name                 = "test-vpc"
  cidr                 = "10.0.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}

And created a peering connection:

resource "aws_vpc_peering_connection" "vpc_peering" {
  peer_vpc_id = "target_vpc_id" # placeholder for the target VPC's ID
  vpc_id      = module.vpc.vpc_id
  auto_accept = true # only works when both VPCs are in the same AWS account and region
  accepter {
    allow_remote_vpc_dns_resolution = true
  }

  requester {
    allow_remote_vpc_dns_resolution = true
  }
  tags = {
    Name = "VPC Peering between ${module.vpc.name} and Miso Default"
  }
}

After this, I manually added a route to the private route table of the newly created VPC with the target VPC's CIDR as the destination, and similarly added a route to the target VPC's route table with the newly created VPC's CIDR as the destination.
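
For reference, those routes can also be expressed in Terraform instead of being added by hand. A minimal sketch, assuming hypothetical placeholders for the target VPC's CIDR block and route table ID:

resource "aws_route" "to_target_vpc" {
  # The VPC module creates a single private route table because single_nat_gateway = true
  route_table_id            = module.vpc.private_route_table_ids[0]
  destination_cidr_block    = "target_vpc_cidr" # placeholder for the target VPC's CIDR
  vpc_peering_connection_id = aws_vpc_peering_connection.vpc_peering.id
}

resource "aws_route" "from_target_vpc" {
  route_table_id            = "target_vpc_route_table_id" # placeholder
  destination_cidr_block    = module.vpc.vpc_cidr_block   # 10.0.0.0/16
  vpc_peering_connection_id = aws_vpc_peering_connection.vpc_peering.id
}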

Then I realized I needed to check the network ACL of target_vpc; it has the following inbound rule, which looks fine:

Rule Number    Type          Source       Status
100            All Traffic   0.0.0.0/0    Allow
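
If that network ACL were managed in Terraform, the equivalent inbound rule would look roughly like this sketch (the ACL ID is a placeholder):

resource "aws_network_acl_rule" "allow_all_inbound" {
  network_acl_id = "target_vpc_nacl_id" # placeholder for the target VPC's network ACL
  rule_number    = 100
  egress         = false
  protocol       = "-1" # all traffic
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
}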

Then I realized I needed to add another security group rule in target_vpc allowing TCP connections on port 6379 from the NAT gateway of the newly created VPC.

After all of this, if I deploy a pod that connects to Redis in target_vpc on port 6379, the connection fails with the code CONNECTIONTIMEDOUT.

Am I missing something here? I'd appreciate any comments that might help. Thanks.

1 Answer

Answered by Deen:

I have figured out the problem: the Redis server inside target_vpc had a security group that did not allow the new VPC's CIDR block, so it was blocking the traffic.
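
For anyone hitting the same issue, the fix might look like the following Terraform sketch (the security group ID is a hypothetical placeholder). Note that traffic arriving over a peering connection keeps the pods' private source addresses, so the rule must allow the EKS VPC's CIDR rather than the NAT gateway:

resource "aws_security_group_rule" "redis_from_eks_vpc" {
  type              = "ingress"
  from_port         = 6379
  to_port           = 6379
  protocol          = "tcp"
  cidr_blocks       = [module.vpc.vpc_cidr_block] # 10.0.0.0/16, the new VPC's CIDR
  security_group_id = "redis_server_sg_id" # placeholder for the Redis server's security group
  description       = "Allow Redis from the peered EKS VPC"
}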