Deploy to Azure Kubernetes Service using Azure DevOps YAML Pipelines

Microservices are becoming increasingly popular, and most of them run in Kubernetes. One goal of a microservice architecture is quick and reliable deployments.

In this post, I will show how to deploy to Kubernetes (more precisely Azure Kubernetes Service (AKS)) using Helm and Azure DevOps pipelines.

Create a Kubernetes Cluster in Azure

If you are new to Kubernetes or want instructions on how to install an Azure Kubernetes Service (AKS) cluster, see my post “Azure Kubernetes Service - Getting Started”.

Create a Service Connection in Azure DevOps

Before you can deploy to AKS, you have to create a service connection so Azure DevOps can access your Azure resources. To create a new service connection go to Project settings –> Service connections and click on New service connection.

Create a new service connection

This opens a flyout where you have to select Azure Resource Manager and then click Next.

Choose a service connection type

Select Service principal (automatic) as your authentication method and click Next.

Service connection authentication method

In the next step, select a scope level, your subscription, and optionally a resource group, and provide a name. For example, you could restrict the service connection to the subscription ABC and, within this subscription, to the resource group XYZ. I want my service connection to access all resource groups, therefore I don't select one.

Configure the service connection

Click on Save and the service connection gets created. Note that the service connection name will be used in the pipeline to reference this connection.
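If you prefer scripting over clicking through the UI, you can also create the underlying service principal with the Azure CLI and enter its details in a "Service principal (manual)" connection. The name and subscription id below are placeholders:

```shell
# Create a service principal with Contributor rights on the whole subscription
# (replace <subscription-id> with your own). Azure DevOps can use the returned
# appId, password, and tenant in a manual service connection.
az ad sp create-for-rbac \
  --name AzureDevOpsServiceConnection \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>
```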

Configure the Azure DevOps YAML Pipeline to Deploy to Azure Kubernetes Service

I already created a YAML pipeline in my previous posts, which I will extend now. You can find this pipeline on GitHub.

Since I already have a Docker image on Docker Hub, I only have to add Helm charts and a couple of variables to the pipeline.

Define Variables for the Deployment

First, I add the following variables at the beginning of the pipeline:

```yaml
variables:
  AzureSubscription: 'AzureServiceConnection' # Name of the Service Connection
  ApiName: 'customerapi'
  ClusterResourceGroup: MicroserviceDemo
  ChartPackage: '$(Build.ArtifactStagingDirectory)/$(ApiName)-$(Build.BuildNumber).tgz'
  ChartPath: 'CustomerApi/CustomerApi/charts/$(ApiName)'
  HelmVersion: 3.5.0
  ImageName: 'wolfgangofner/$(ApiName):$(Build.BuildNumber)'
  K8sNamespace: '$(ApiName)-test'
  KubernetesCluster: 'microservice-aks'
```

The variables should be self-explanatory. They reference the previously created service connection, set information about the AKS cluster such as its name, resource group, and the namespace I want to use, and configure Helm. For more information about Helm, see my post “Helm - Getting Started”.

Deploy to Azure Kubernetes Service

Since I am using Helm for the deployment, I only need three tasks for the whole deployment. First, I install Helm on the build agent. I use the HelmInstaller task and provide the Helm version which I previously configured in a variable.

```yaml
- task: HelmInstaller@0
  displayName: 'Install Helm $(HelmVersion)'
  inputs:
    helmVersion: $(HelmVersion)
    checkLatestHelmVersion: false
    installKubectl: true
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
```

Next, I have to create a Helm package from my Helm chart. To do that, I use the HelmDeploy task with the package command. For this task, I have to provide the service connection, the information about my Kubernetes cluster, the path to the Helm chart, and a version. I calculate the version at the beginning of the pipeline and store it in the Build.BuildNumber variable. Therefore, I provide this variable as the chart version.

```yaml
- task: HelmDeploy@0
  displayName: 'helm package'
  inputs:
    azureSubscriptionEndpoint: $(AzureSubscription)
    azureResourceGroup: $(ClusterResourceGroup)
    kubernetesCluster: $(KubernetesCluster)
    command: 'package'
    chartPath: $(ChartPath)
    chartVersion: $(Build.BuildNumber)
    save: false
    namespace: '$(K8sNamespace)'
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
```

The last step is to install the Helm package. For this, I use HelmDeploy again, but this time with the upgrade command. Upgrade installs the package if no corresponding release exists and updates it if one already exists. Additionally, I provide the --create-namespace argument to create the Kubernetes namespace if it doesn't exist.

```yaml
- task: HelmDeploy@0
  displayName: 'Helm upgrade release'
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscription: $(AzureSubscription)
    azureResourceGroup: '$(ClusterResourceGroup)'
    kubernetesCluster: '$(KubernetesCluster)'
    useClusterAdmin: true
    namespace: '$(K8sNamespace)'
    command: 'upgrade'
    chartType: 'FilePath'
    chartPath: '$(ChartPackage)'
    releaseName: '$(ApiName)-$(K8sNamespace)'
    arguments: '--create-namespace'
```
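For reference, the two HelmDeploy tasks correspond roughly to the following Helm CLI calls. The version 1.0.42 is a placeholder for the value of Build.BuildNumber:

```shell
# Package the chart; the version comes from Build.BuildNumber in the pipeline
helm package CustomerApi/CustomerApi/charts/customerapi \
  --version 1.0.42 \
  --destination "$BUILD_ARTIFACTSTAGINGDIRECTORY"

# Install the package, or update the release if it already exists;
# --create-namespace creates the customerapi-test namespace if it is missing
helm upgrade customerapi-customerapi-test \
  "$BUILD_ARTIFACTSTAGINGDIRECTORY/customerapi-1.0.42.tgz" \
  --install --namespace customerapi-test --create-namespace
```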

That’s already everything you need to deploy to Kubernetes. Run the pipeline to test that everything works as expected.

The deployment to Kubernetes was successful

For practice, try to add the deployment to another namespace, for example, prod.
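A sketch of that exercise could look like the following task, appended after the existing upgrade task. K8sNamespaceProd is an assumed new variable, for example '$(ApiName)-prod':

```yaml
# Hypothetical second deployment that targets a prod namespace
- task: HelmDeploy@0
  displayName: 'Helm upgrade release (prod)'
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscription: $(AzureSubscription)
    azureResourceGroup: '$(ClusterResourceGroup)'
    kubernetesCluster: '$(KubernetesCluster)'
    useClusterAdmin: true
    namespace: '$(K8sNamespaceProd)'
    command: 'upgrade'
    chartType: 'FilePath'
    chartPath: '$(ChartPackage)'
    releaseName: '$(ApiName)-$(K8sNamespaceProd)'
    arguments: '--create-namespace'
```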

Test the deployed Microservice

Use the dashboard of Kubernetes (see here how to use Octant) or use the Azure portal to find the URL of the previously created microservice.
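If you prefer the command line, you can look up the external IP of the service with kubectl, assuming the cluster and namespace names used above:

```shell
# Point kubectl at the AKS cluster (requires the Azure CLI)
az aks get-credentials --resource-group MicroserviceDemo --name microservice-aks

# The EXTERNAL-IP column shows the public endpoint of the service
kubectl get service --namespace customerapi-test
```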

Find the external URL of the microservice

Open the external URL in your browser and you will see the Swagger UI of the microservice.

Swagger UI of the microservice running in AKS

The finished Pipeline

The full YAML pipeline looks as follows:

```yaml
name: CustomerApi-CI
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - CustomerApi/*

pool:
  vmImage: 'ubuntu-latest'

variables:
  AzureSubscription: 'AzureServiceConnection' # Name of the Service Connection
  ApiName: 'customerapi'
  ClusterResourceGroup: MicroserviceDemo
  ChartPackage: '$(Build.ArtifactStagingDirectory)/$(ApiName)-$(Build.BuildNumber).tgz'
  ChartPath: 'CustomerApi/CustomerApi/charts/$(ApiName)'
  HelmVersion: 3.5.0
  ImageName: 'wolfgangofner/$(ApiName):$(Build.BuildNumber)'
  K8sNamespace: '$(ApiName)-test'
  KubernetesCluster: 'microservice-aks'

stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build and push Docker image
    steps:

    - task: BuildVersioning@0
      displayName: 'Build Versioning'
      inputs:
        versionSource: 'gitversion'
        doInstallGitVersion: true
        GitVersionInstallerSource: 'choco'
        GitVersionInstallerVersion: '5.0.1'
        doUseLatestGitVersionInstallerVersion: false
        paramAssemblyVersion: '7'
        paramAssemblyFileVersion: '7'
        paramAssemblyInformationalVersion: '6'
        paramOverwriteFourthDigitWithBuildCounter: false
        paramVersionCode: '2'
        doAssemblyInfoAppendSuffix: false
        doConvertAssemblyInfoToLowerCase: true
        buildNumberVersionFormat: '3'
        buildNumberAction: 'replace'
        doReplaceAssemblyInfo: false
        doReplaceNuspec: false
        doReplaceNpm: false
        doReplaceDotNetCore: true
        filePatternDotNetCore: |
          **\*.csproj
          **\*.props
        paramDotNetCoreVersionType: '3'
        doReplaceAndroid: false
        doReplaceiOS: false
        doReplaceCustom: false
        doShowWarningsForUnmatchedRegex: false
        excludeFilePattern: |
          !**/bin/**
          !**/obj/**
          !**/node_modules/**
          !**/packages/**

    - task: Docker@1
      inputs:
        containerregistrytype: 'Container Registry'
        dockerRegistryEndpoint: 'Docker Hub'
        command: 'Build an image'
        dockerFile: '**/CustomerApi/CustomerApi/Dockerfile'
        arguments: '--build-arg BuildId=$(Build.BuildId) --build-arg PAT=$(PatMicroserviceDemoNugetsFeed)'
        imageName: '$(ImageName)'
        useDefaultContext: false
        buildContext: 'CustomerApi'
      displayName: 'Build the Docker image'

    - pwsh: |
       $id=docker images --filter "label=test=$(Build.BuildId)" -q | Select-Object -First 1
       docker create --name testcontainer $id
       docker cp testcontainer:/testresults ./testresults
       docker rm testcontainer
      displayName: 'Copy test results'

    - task: PublishTestResults@2
      inputs:
        testResultsFormat: 'VSTest'
        testResultsFiles: '**/*.trx'
        searchFolder: '$(System.DefaultWorkingDirectory)/testresults'
      displayName: 'Publish test results'

    - task: PublishCodeCoverageResults@1
      inputs:
        codeCoverageTool: 'Cobertura'
        summaryFileLocation: '$(System.DefaultWorkingDirectory)/testresults/coverage/coverage.cobertura.xml'
        reportDirectory: '$(System.DefaultWorkingDirectory)/testresults/coverage/reports'
      displayName: 'Publish code coverage results'

    - task: Docker@1
      inputs:
        containerregistrytype: 'Container Registry'
        dockerRegistryEndpoint: 'Docker Hub'
        command: 'Push an image'
        imageName: '$(ImageName)'
      condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
      displayName: 'Push the Docker image to Dockerhub'

    - task: HelmInstaller@0
      displayName: 'Install Helm $(HelmVersion)'
      inputs:
        helmVersion: $(HelmVersion)
        checkLatestHelmVersion: false
        installKubectl: true
      condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))

    - task: HelmDeploy@0
      displayName: 'helm package'
      inputs:
        azureSubscriptionEndpoint: $(AzureSubscription)
        azureResourceGroup: $(ClusterResourceGroup)
        kubernetesCluster: $(KubernetesCluster)
        command: 'package'
        chartPath: $(ChartPath)
        chartVersion: $(Build.BuildNumber)
        save: false
        namespace: '$(K8sNamespace)'
      condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))

    - task: HelmDeploy@0
      displayName: 'Helm upgrade release'
      inputs:
        connectionType: 'Azure Resource Manager'
        azureSubscription: $(AzureSubscription)
        azureResourceGroup: '$(ClusterResourceGroup)'
        kubernetesCluster: '$(KubernetesCluster)'
        useClusterAdmin: true
        namespace: '$(K8sNamespace)'
        command: 'upgrade'
        chartType: 'FilePath'
        chartPath: '$(ChartPackage)'
        releaseName: '$(ApiName)-$(K8sNamespace)'
        arguments: '--create-namespace'
```

Shortcomings of my Implementation

This implementation is more of a proof of concept than a best practice. In a real-world project, you should use different stages, for example, build, deploy-test, and deploy-prod. Right now, every build that is not a pull request deploys straight to the test namespace (and to prod, if you added that deployment as an exercise). Usually, you want some tests or checks after the test deployment before deploying to prod.

The pipeline is also getting quite long, and it would be nice to move different parts into separate files using templates.
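As a sketch of that idea (the file name and parameter below are hypothetical), the Helm steps could move into a template file that the main pipeline references:

```yaml
# templates/helm-deploy.yml (hypothetical file): reusable Helm deployment steps
parameters:
  - name: namespace
    type: string

steps:
  - task: HelmInstaller@0
    displayName: 'Install Helm $(HelmVersion)'
    inputs:
      helmVersion: $(HelmVersion)
  # ... the package and upgrade tasks go here, using ${{ parameters.namespace }}
```

The main pipeline would then include it with `- template: templates/helm-deploy.yml` and pass `namespace: $(K8sNamespace)` under `parameters`, so test and prod deployments reuse the same steps.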

I will implement all these best practices and even more over the next couple of posts.

Conclusion

Using an Azure DevOps pipeline to deploy to Kubernetes is quite simple. In this example, I showed how to use Helm to create a Helm package and then deploy it to an Azure Kubernetes Service cluster. Over the next couple of posts, I will improve the pipeline and extend its functionality to follow all best practices.

You can find the code of the demo on GitHub.

This post is part of “Microservice Series - From Zero to Hero”.

This post is licensed under CC BY 4.0 by the author.