It works fine when:
- everything is built in the Azure portal
- AGIC is enabled under the Kubernetes service networking settings
- the self-signed SSL cert is imported
- the host table mapping is added

When deploying with Terraform:
- deploy a web page behind the Application Gateway Ingress Controller
- deploy AKS, vnet, and subnet with Terraform
- enable AGIC via the ingress_application_gateway block

So now I end up with an auto-generated vnet, Application Gateway, and public IP in the auto-generated resource group, plus a resource group A that contains the AKS cluster and the vnet holding the Application Gateway subnet.
I am thinking about peering the AKS vnet with the vnet that contains the Application Gateway subnet, but they are in the same address space. Any idea or good way to fix this?
resource "azurerm_kubernetes_cluster" "aks_cluster" {
name = "${var.aks_name}"
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
dns_prefix = "aksxxx"
kubernetes_version = var.aks_version
automatic_channel_upgrade = "stable"
private_cluster_enabled = false
node_resource_group = "${var.rg_name}-node-group"
sku_tier = "Free"
oidc_issuer_enabled = true
workload_identity_enabled = true
network_profile {
network_plugin = "azure"
dns_service_ip = "10.20.0.10"
service_cidr = "10.20.0.0/16"
}
ingress_application_gateway {
subnet_id = data.azurerm_subnet.appgwsubnet.id
#subnet_cidr = "10.225.0.0/16"
gateway_name = "appgw-ingress"
}
default_node_pool {
name = "defaultnp"
vm_size = "Standard_B2ms"
orchestrator_version = var.aks_version
# vnet_subnet_id = data.azurerm_subnet.aks_node_subnet.id
type = "VirtualMachineScaleSets"
enable_auto_scaling = true
node_count = 1
min_count = 1
max_count = 2
node_labels = {
role = "general"
}
}
identity {
type = "SystemAssigned"
#identity_ids = [azurerm_user_assigned_identity.aks_service_pricipal.id]
}
lifecycle {
ignore_changes = [default_node_pool[0].node_count]
}
depends_on = [
azurerm_role_assignment.aks_role_assignment
]
tags = {
"managed_by" = "terraform"
}
}
# =================== Node pool NSG ===================
resource "azurerm_network_security_group" "aks_nodepool_nsg" {
  name                = "nsg-${var.aks_nodepool_name}"
  location            = var.azure_region_map["az1"]
  resource_group_name = azurerm_resource_group.resource_group.name

  tags = {
    "managed_by" = "terraform"
  }
}
# =================== Node pool Vnet ===================
resource "azurerm_virtual_network" "aks_node_vnet" {
  name                = "vnet-${var.aks_nodepool_name}"
  location            = var.azure_region_map["az1"]
  resource_group_name = azurerm_resource_group.resource_group.name
  address_space       = ["10.224.0.0/12"]

  subnet {
    name           = "aks-subnet"
    address_prefix = "10.224.0.0/16"
    security_group = azurerm_network_security_group.aks_nodepool_nsg.id
  }

  subnet {
    name           = "ingress-appgateway-subnet"
    address_prefix = "10.225.0.0/16"
  }

  tags = {
    "managed_by" = "terraform"
  }
}
data "azurerm_subnet" "appgwsubnet" {
name = "ingress-appgateway-subnet"
resource_group_name = azurerm_resource_group.resource_group.name
virtual_network_name = azurerm_virtual_network.aks_node_vnet.name
#address_prefixes = ["10.225.0.0/24"]
}
Auto-generated resource group for the vnet, containing the AGIC Application Gateway
Same resource group for the AKS cluster and the vnet

Azure does not allow VNet peering between virtual networks with overlapping address spaces; every VNet involved in a peering must have a distinct address range. If there is an overlap, you have to reconfigure your network so the VNets use unique, non-overlapping CIDR blocks.
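A minimal sketch of what that could look like with the azurerm provider, assuming the Application Gateway VNet is moved to a range outside the AKS-managed VNet's default 10.224.0.0/12 (all names, ranges, and the auto-generated VNet name below are placeholders you would need to adjust):

# Placeholder example: give the Application Gateway VNet a range outside 10.224.0.0/12
resource "azurerm_virtual_network" "appgw_vnet" {
  name                = "vnet-appgw"
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
  address_space       = ["10.240.0.0/16"] # does not overlap the AKS-managed VNet

  subnet {
    name           = "ingress-appgateway-subnet"
    address_prefix = "10.240.0.0/24"
  }
}

# Look up the auto-generated AKS VNet in the node resource group
# (the VNet name is a placeholder; check the actual generated name)
data "azurerm_virtual_network" "aks_generated_vnet" {
  name                = "aks-vnet-xxxxxxxx"
  resource_group_name = azurerm_kubernetes_cluster.aks_cluster.node_resource_group
}

# Peer in both directions
resource "azurerm_virtual_network_peering" "appgw_to_aks" {
  name                      = "appgw-to-aks"
  resource_group_name       = azurerm_resource_group.resource_group.name
  virtual_network_name      = azurerm_virtual_network.appgw_vnet.name
  remote_virtual_network_id = data.azurerm_virtual_network.aks_generated_vnet.id
}

resource "azurerm_virtual_network_peering" "aks_to_appgw" {
  name                      = "aks-to-appgw"
  resource_group_name       = azurerm_kubernetes_cluster.aks_cluster.node_resource_group
  virtual_network_name      = data.azurerm_virtual_network.aks_generated_vnet.name
  remote_virtual_network_id = azurerm_virtual_network.appgw_vnet.id
}

Alternatively, if the default node pool's vnet_subnet_id points at a subnet in the same VNet that holds the Application Gateway subnet, both sides live in one VNet and no peering is needed at all.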
It looks like you are using Terraform to set up an AKS cluster and a virtual network with a subnet for the node pool and another for the Application Gateway. To fix 502 errors, you also have to make sure the Application Gateway can actually route traffic to the pods in your AKS cluster.
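With the AGIC add-on, that routing is driven by the ingress objects the controller watches, so a 502 often comes from an ingress pointing at the wrong service or port. A hedged sketch of such an ingress managed from Terraform with the kubernetes provider (host, service name, and port are placeholders for your web page deployment):

resource "kubernetes_ingress_v1" "demo_ingress" {
  metadata {
    name = "demo-ingress"
    annotations = {
      # Tell AGIC to pick up this ingress
      "kubernetes.io/ingress.class" = "azure/application-gateway"
    }
  }

  spec {
    rule {
      host = "demo.example.com" # placeholder, matches your host table mapping
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "demo-service" # placeholder, must match your web page service
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}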
My demo Terraform configuration:
This is given for demo purposes with the AKS cluster only; the ingress controller is not included, since the issue raised was about overlapping address spaces for the VNet peering.
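The original demo is not reproduced here; a minimal configuration along those lines, with a cluster placed in its own subnet and non-overlapping service and VNet ranges, might look like this (all names, locations, and CIDR ranges are placeholders):

resource "azurerm_resource_group" "demo" {
  name     = "rg-aks-demo"
  location = "East US"
}

resource "azurerm_virtual_network" "demo" {
  name                = "vnet-aks-demo"
  location            = azurerm_resource_group.demo.location
  resource_group_name = azurerm_resource_group.demo.name
  address_space       = ["10.30.0.0/16"]
}

resource "azurerm_subnet" "demo_nodes" {
  name                 = "aks-node-subnet"
  resource_group_name  = azurerm_resource_group.demo.name
  virtual_network_name = azurerm_virtual_network.demo.name
  address_prefixes     = ["10.30.1.0/24"]
}

resource "azurerm_kubernetes_cluster" "demo" {
  name                = "aks-demo"
  location            = azurerm_resource_group.demo.location
  resource_group_name = azurerm_resource_group.demo.name
  dns_prefix          = "aksdemo"

  default_node_pool {
    name           = "default"
    node_count     = 1
    vm_size        = "Standard_B2ms"
    vnet_subnet_id = azurerm_subnet.demo_nodes.id
  }

  network_profile {
    network_plugin = "azure"
    # service CIDR must not overlap the node subnet or any peered VNet
    service_cidr   = "10.40.0.0/16"
    dns_service_ip = "10.40.0.10"
  }

  identity {
    type = "SystemAssigned"
  }
}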
Output: