We are trying to enable canary deployments for our AKS-based applications using Istio traffic shifting and Helm templates. Referring to documents available on Medium and elsewhere, we tried the solution below.
Our Helm environment values file (envt.yaml) is as follows:
my-helm-chart:
  environment: dev
  #################### Production version ##########################
  deployment:
    image:
      tag: ${currentVersion}$            # current running version
    replicaCount: "#{replicacount}#"
    resources:
      memory:
        minimum: "#{memory_minimum}#"
        maximum: "#{memory_maximum}#"
      cpu:
        minimum: "#{CPU_minimum}#"
        maximum: "#{CPU_maximum}#"
    hpa:
      autoscale:
        maxReplicas: "#{autoscaleMaxReplicaCount}#"
        minReplicas: "#{autoscaleMinReplicaCount}#"
    configmap:
      data:
        JAVA_JVM_ARGS: "#{jvmSettings}#"   # quoted so YAML does not treat #{...}# as a comment
  #################### Canary version ##########################
  canarydeployment:
    enabled: true
    image:
      tag: ${releaseVersionTagDocker}$   # current build / release version
    replicaCount: "#{canaryreplicacount}#"
    resources:
      memory:
        minimum: "#{memory_minimum}#"
        maximum: "#{memory_maximum}#"
      cpu:
        minimum: "#{CPU_minimum}#"
        maximum: "#{CPU_maximum}#"
    hpa:
      autoscale:
        maxReplicas: "#{autoscaleMaxReplicaCount}#"
        minReplicas: "#{autoscaleMinReplicaCount}#"
    configmap:
      data:
        JAVA_JVM_ARGS: "#{jvmSettings}#"
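For reference, the VirtualService that does the actual traffic split consumes deployment.weight and canarydeployment.weight (set via overrideValues in the deploy task below). A rough sketch, with host and subset names as placeholders rather than our exact chart:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: {{ .Release.Name }}
spec:
  hosts:
    - {{ .Release.Name }}
  http:
    - route:
        - destination:
            host: {{ .Release.Name }}
            subset: prod                          # stable Deployment
          weight: {{ .Values.deployment.weight }}
        - destination:
            host: {{ .Release.Name }}
            subset: canary                        # canary Deployment
          weight: {{ .Values.canarydeployment.weight }}

Istio expects the weights across the two destinations to total 100.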
The Azure DevOps task for the Helm upgrade is as below:
- task: HelmDeploy@0
  name: run_helm_upgrade
  displayName: 'Deploy service'
  inputs:
    azureSubscriptionEndpoint: ${{ parameters.subscriptionEndpoint }}
    azureResourceGroup: $(aksRg)
    kubernetesCluster: $(aksName)
    namespace: ${{ parameters.helmNamespace }}
    command: upgrade
    chartType: FilePath
    chartPath: $(System.DefaultWorkingDirectory)/${{ parameters.environment }}-helm
    releaseName: ${{ parameters.appName }}
    arguments: '--install --timeout ${{ parameters.helmTimeout }} -f $(System.DefaultWorkingDirectory)/${{ parameters.environment }}-helm/environments/${{ parameters.environment }}.yaml'
    failOnStderr: false
    overrideValues: my-helm-chart.deployment.agentpool=$(targetNodePool),my-helm-chart.canarydeployment.agentpool=$(targetNodePool),my-helm-chart.deployment.image.tag=$(currentVersion),my-helm-chart.deployment.weight=$(weight),my-helm-chart.deployment.replicaCount=$(replicacount),my-helm-chart.canarydeployment.image.tag=$(releaseVersionTagDocker),my-helm-chart.canarydeployment.replicaCount=$(canaryreplicaCount),my-helm-chart.canarydeployment.weight=$(canaryweight)
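For clarity, with command: upgrade the task expands to roughly the following Helm invocation (shown for environment: dev; the overrideValues pairs are passed to Helm as --set, agentpool overrides omitted for brevity):

helm upgrade --install ${{ parameters.appName }} $(System.DefaultWorkingDirectory)/dev-helm \
  --namespace ${{ parameters.helmNamespace }} \
  --timeout ${{ parameters.helmTimeout }} \
  -f $(System.DefaultWorkingDirectory)/dev-helm/environments/dev.yaml \
  --set my-helm-chart.deployment.image.tag=$(currentVersion) \
  --set my-helm-chart.deployment.weight=$(weight) \
  --set my-helm-chart.deployment.replicaCount=$(replicacount) \
  --set my-helm-chart.canarydeployment.image.tag=$(releaseVersionTagDocker) \
  --set my-helm-chart.canarydeployment.weight=$(canaryweight) \
  --set my-helm-chart.canarydeployment.replicaCount=$(canaryreplicaCount)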
In the task above, the $(canaryweight), $(weight), $(canaryreplicaCount), and $(replicaCount) variables come from a variable group (VG), so the app team can set them there and re-run the deployment task alone.
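For example, the app team could move to the next phase from the Azure DevOps CLI and then re-queue just the deployment task (the group ID 42 and the values here are placeholders):

az pipelines variable-group variable update --group-id 42 --name canaryweight --value 10
az pipelines variable-group variable update --group-id 42 --name weight --value 90
az pipelines variable-group variable update --group-id 42 --name canaryreplicaCount --value 2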
But as per the Medium article, they mention setting the canary replica count to 0 initially, and, once the canary is serving 100% of the traffic, setting the production replica count to 0. We are not sure what the advantage of setting these replica counts to 0 is, since we already control the traffic split with the weights in the VirtualService. Does this have any advantage? If yes, how can we make it dynamic? We don't want to confuse our dev app team with the relationship between the weights and replica counts of the two deployments (canary and prod).
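Our current understanding, assuming the templates read replicaCount directly (a sketch, not our exact template), is that the weight only decides where traffic goes, while the replica count decides what is actually running:

spec:
  # replicaCount: 0 keeps zero pods for this version, freeing its
  # cluster capacity; a VirtualService weight of 0 alone still leaves
  # the pods up and consuming resources.
  replicas: {{ .Values.canarydeployment.replicaCount }}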
We are also looking for the easiest and simplest way for our developers to set these weights in the ADO YAML pipelines and achieve a comfortable canary release for a scenario like the one below.
Phase     Canary                       Production
*******   **************************   **************************
phase-1   weight: 0%,   replicas: 0    weight: 100%, replicas: 2
phase-2   weight: 10%,  replicas: 2    weight: 90%,  replicas: 2
phase-3   weight: 30%,  replicas: 2    weight: 70%,  replicas: 2
phase-4   weight: 70%,  replicas: 2    weight: 30%,  replicas: 2
phase-5   weight: 90%,  replicas: 2    weight: 10%,  replicas: 2
phase-6   weight: 100%, replicas: 2    weight: 0%,   replicas: 2
phase-7   weight: 100%, replicas: 2    weight: 0%,   replicas: 0
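One shape that might keep this simple for the developers: let them pick only the canary side as runtime parameters and derive the production weight in the pipeline, so the two weights always sum to 100. A sketch; the parameter names here are our own, not from the article:

parameters:
  - name: canaryWeight
    type: number
    default: 0
  - name: canaryReplicas
    type: number
    default: 0
  - name: prodReplicas
    type: number
    default: 2

steps:
  - bash: |
      # Azure Pipelines expressions have no arithmetic, so derive the
      # production weight in a script step and expose it as $(weight)
      echo "##vso[task.setvariable variable=weight]$((100 - ${{ parameters.canaryWeight }}))"
    displayName: 'Derive production weight'
  # The HelmDeploy@0 task above then consumes ${{ parameters.canaryWeight }},
  # ${{ parameters.canaryReplicas }}, ${{ parameters.prodReplicas }} and $(weight),
  # and a developer re-running the pipeline only ever enters the canary numbers.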