Azure Kubernetes Service (AKS)

What is Landing Zone in Azure? How to implement it via Terraform

March 16, 2023

In Azure, a landing zone is a pre-configured environment that provides a baseline for hosting workloads. It helps organizations establish a secure, scalable, and well-managed environment for their applications and services. A landing zone typically includes a set of Azure resources such as networks, storage accounts, virtual machines, and security controls.

Implementing a landing zone in Azure can be a complex task, but it can be simplified by using Infrastructure as Code (IaC) tools like Terraform. Terraform allows you to define and manage infrastructure as code, making it easier to create, modify, and maintain your landing zone.

Here are the steps to implement a landing zone in Azure using Terraform:

  1. Define your landing zone architecture: Decide on the resources you need to include in your landing zone, such as virtual networks, storage accounts, and virtual machines. Create a Terraform module for each resource, and define the parameters and variables for each module.
  2. Create a Terraform configuration file: Create a main.tf file and define the Terraform modules you want to use. Use the Azure provider to specify your subscription and authentication details (a minimal sketch follows this list).
  3. Initialize your Terraform environment: Run the ‘terraform init’ command to initialize your Terraform environment and download any necessary plugins.
  4. Plan your deployment: Run the ‘terraform plan’ command to see a preview of the changes that will be made to your Azure environment.
  5. Apply your Terraform configuration: Run the ‘terraform apply’ command to deploy your landing zone resources to Azure.
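
To make step 2 concrete, here is a minimal sketch of a main.tf, assuming a ./modules/network module and variables that you would define yourself:
#hcl code
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

# Configure the Azure provider; authentication details can come from
# the Azure CLI, environment variables, or a service principal.
provider "azurerm" {
  features {}
}

# Reusable module from step 1 (illustrative path and inputs).
module "network" {
  source   = "./modules/network"
  location = var.location
}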

By implementing a landing zone in Azure using Terraform, you can ensure that your environment is consistent, repeatable, and secure. Terraform makes it easier to manage your infrastructure as code, so you can focus on developing and deploying your applications and services.

A landing zone can also be implemented using other automation tools such as Azure Resource Manager (ARM) templates or Azure Blueprints, but in this blog we will focus on Terraform and walk through the process in more detail.

Terraform is a widely used infrastructure-as-code (IaC) tool. It provides a declarative language that allows us to define our desired state, and then it takes care of creating and managing resources to meet that state.

To implement a landing zone using Terraform, we can follow these steps:

  1. Define the landing zone architecture: As discussed earlier, we need to define the architecture for our landing zone. This includes defining the network topology, security controls, governance policies, and management tools.
  2. Create a Terraform project: Once the landing zone architecture is defined, we can create a Terraform project to manage the infrastructure. This involves creating Terraform configuration files that define the resources to be provisioned.
  3. Define the Terraform modules: We can define Terraform modules to create reusable components of infrastructure. These modules can be used across multiple projects to ensure consistency and standardization.
  4. Configure Terraform backend: We need to configure the Terraform backend to store the state of our infrastructure. Terraform uses this state to understand the current state of our infrastructure and to make the changes needed to reach the desired state (a sketch of an Azure backend follows this list).
  5. Initialize and apply Terraform configuration: We can initialize the Terraform configuration by running the terraform init command. This command downloads the necessary provider plugins and sets up the backend. Once initialized, we can apply the Terraform configuration using the terraform apply command. This command creates or updates the resources to match the desired state.
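
For step 4, a common choice on Azure is the azurerm backend, which stores state in a blob container. A minimal sketch, assuming the storage account and container already exist (all names are illustrative):
#hcl code
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestore"
    container_name       = "tfstate"
    key                  = "landing-zone.tfstate"
  }
}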

By implementing a landing zone using Terraform, we can ensure that our infrastructure is consistent, compliant, and repeatable. We can easily provision new environments, applications, or services using the same architecture and governance policies. This can reduce the time and effort required to manage infrastructure and improve the reliability and security of our applications.

Implementing Azure Landing Zone using Terraform and Reference Architecture

Below I provide general guidance on the steps involved in implementing an Azure Landing Zone using Terraform and the Azure Reference Architecture.

Here are the general steps:

  1. Create an Azure Active Directory (AD) tenant and register an application in the tenant.
  2. Create a Terraform module for the initial deployment of the Azure Landing Zone (a partial sketch follows this list). This module should include the following:
    • A virtual network with subnets and network security groups.
    • A jumpbox virtual machine for accessing the Azure environment.
    • A storage account for storing Terraform state files.
    • An Azure Key Vault for storing secrets.
    • A set of Resource Groups that organize resources for management, data, networking, and security.
    • An Azure Policy that enforces resource compliance with standards.
  3. Implement the Reference Architecture for Azure Landing Zone using Terraform modules.
  4. Create a Terraform workspace for each environment (dev, test, prod) and deploy the Landing Zone.
  5. Set up and configure additional services in the environment using Terraform modules, such as Azure Kubernetes Service (AKS), Azure SQL Database, and Azure App Service.
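
As a partial sketch of step 2, here are two of the module's building blocks, the storage account for Terraform state and the Key Vault for secrets, assuming a resource group defined as in the example code later in this post (names are illustrative):
#hcl code
# Storage account that holds Terraform state files.
resource "azurerm_storage_account" "tfstate" {
  name                     = "lztfstatestore"
  resource_group_name      = azurerm_resource_group.landing_zone_rg.name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

data "azurerm_client_config" "current" {}

# Key Vault for storing secrets used by the landing zone.
resource "azurerm_key_vault" "landing_zone_kv" {
  name                = "lz-keyvault"
  location            = var.location
  resource_group_name = azurerm_resource_group.landing_zone_rg.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}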

Conclusion

Implementing an Azure Landing Zone using Terraform can be a powerful way to manage your cloud infrastructure. By automating the deployment of foundational resources and configuring policies and governance, you can ensure consistency, security, repeatability, and compliance across all of your Azure resources. Terraform’s infrastructure-as-code approach also makes it easy to maintain and update your Landing Zone as your needs evolve. This can reduce the time and effort required to manage your infrastructure and improve the reliability and security of your applications.

Whether you’re just getting started with Azure or looking to improve your existing cloud infrastructure, implementing an Azure Landing Zone with Terraform is definitely worth considering. With the right planning, tooling, and expertise, you can create a secure, scalable, and resilient cloud environment that meets your business needs.

Example Code

  1. Implementing Azure Landing Zone using Terraform:

Here’s an example Terraform code snippet that creates an Azure Landing Zone with a virtual network, subnets, and a network security group:

  • Define the resource group, virtual network, subnets, and network security groups using Terraform:
#hcl code
resource "azurerm_resource_group" "landing_zone_rg" {
  name     = "landing-zone-rg"
  location = var.location
}

resource "azurerm_virtual_network" "landing_zone_vnet" {
  name                = "landing-zone-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = var.location
  resource_group_name = azurerm_resource_group.landing_zone_rg.name
}

# Define the subnets as standalone resources so they can be referenced
# by the network security group associations below.
resource "azurerm_subnet" "web_subnet" {
  name                 = "web-subnet"
  resource_group_name  = azurerm_resource_group.landing_zone_rg.name
  virtual_network_name = azurerm_virtual_network.landing_zone_vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_subnet" "db_subnet" {
  name                 = "db-subnet"
  resource_group_name  = azurerm_resource_group.landing_zone_rg.name
  virtual_network_name = azurerm_virtual_network.landing_zone_vnet.name
  address_prefixes     = ["10.0.2.0/24"]
}

# NSG for the web subnet: allows inbound HTTP and SSH.
resource "azurerm_network_security_group" "nsg_web" {
  name                = "nsg-web-dev"
  location            = var.location
  resource_group_name = azurerm_resource_group.landing_zone_rg.name

  security_rule {
    name                       = "http"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "80"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "ssh"
    priority                   = 200
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

# NSG for the database subnet (rules omitted for brevity).
resource "azurerm_network_security_group" "nsg_db" {
  name                = "nsg-db-dev"
  location            = var.location
  resource_group_name = azurerm_resource_group.landing_zone_rg.name
}

# Associate each subnet with its network security group.
resource "azurerm_subnet_network_security_group_association" "web_nsg" {
  subnet_id                 = azurerm_subnet.web_subnet.id
  network_security_group_id = azurerm_network_security_group.nsg_web.id
}

resource "azurerm_subnet_network_security_group_association" "db_nsg" {
  subnet_id                 = azurerm_subnet.db_subnet.id
  network_security_group_id = azurerm_network_security_group.nsg_db.id
}

This Terraform code creates a resource group, a virtual network, two subnets (a web front end and a database back end), and the network security groups associated with each subnet. The web NSG allows inbound traffic on port 80 (HTTP) and port 22 (SSH). This is just an example, and the security rules can be customized as per the organization’s security policies.

  • Create an Azure Kubernetes Service (AKS) cluster:
#hcl code
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-dev"
  location            = azurerm_resource_group.landing_zone_rg.location
  resource_group_name = azurerm_resource_group.landing_zone_rg.name
  dns_prefix          = "aks-dev"

  default_node_pool {
    name            = "default"
    node_count      = 1
    vm_size         = "Standard_D2s_v3"
    os_disk_size_gb = 30
  }

  # A cluster identity is required; a system-assigned managed
  # identity is the simplest option.
  identity {
    type = "SystemAssigned"
  }
}

2. Implementing Azure Landing Zone using Terraform and Cloud Adoption Framework:

Cloud Adoption Framework for Azure provides a set of recommended practices for building and managing cloud-based applications. You can use Terraform to implement these best practices in your Azure environment.

Here’s an example of implementing a landing zone for a development environment using Terraform and the Cloud Adoption Framework modules:

security groups using the Azure Cloud Adoption Framework (CAF) Terraform modules:

#hcl code
provider "azurerm" {
  features {}
}

module "caf" {
  source  = "aztfmod/caf/azurerm"
  version = "5.3.0"

  naming_prefix               = "myproject"
  naming_suffix               = "dev"
  resource_group_location     = "eastus"
  resource_group_name         = "rg-networking-dev"
  diagnostics_log_analytics   = false
  diagnostics_event_hub       = false
  diagnostics_storage_account = false

  custom_tags = {
    Environment = "Dev"
  }

  # Define the virtual network
  virtual_networks = {
    my_vnet = {
      address_space = ["10.0.0.0/16"]
      dns_servers   = ["8.8.8.8", "8.8.4.4"]

      subnets = {
        frontend = {
          cidr           = "10.0.1.0/24"
          enforce_public = true
        }
        backend = {
          cidr = "10.0.2.0/24"
        }
      }

      nsgs = {
        frontend = {
          rules = [
            {
              name                       = "HTTP"
              priority                   = 100
              direction                  = "Inbound"
              access                     = "Allow"
              protocol                   = "Tcp"
              source_port_range          = "*"
              destination_port_range     = "80"
              source_address_prefix      = "*"
              destination_address_prefix = "*"
            }
          ]
        }
      }
    }
  }
}

In this example, the aztfmod/caf/azurerm module is used to create a virtual network with two subnets (frontend and backend) and a network security group (NSG) applied to the frontend subnet. The NSG has an inbound rule allowing HTTP traffic on port 80.

Note that the naming_prefix and naming_suffix variables are used to generate names for the resources created by the module. The custom_tags variable is used to apply custom tags to the resources.

This is just one example of how the Azure Cloud Adoption Framework Terraform modules can be used to create a landing zone. There are many other modules available for creating other types of resources, such as virtual machines, storage accounts, and more.

Due to its complexity and length, the full example code for implementing an Azure Landing Zone using Terraform and the Reference Architecture is too long to include in a blog article.

However, here are the high-level steps and an overview of the code structure:

  1. Define the variables and providers for Azure and Terraform.
  2. Create the Resource Group for the Landing Zone and networking resources.
  3. Create the Virtual Network and Subnets with the appropriate address spaces.
  4. Create the Network Security Groups and associate them with the appropriate Subnets.
  5. Create the Bastion Host for remote access to the Virtual Machines.
  6. Create the Azure Firewall to protect the Landing Zone resources.
  7. Create the Storage Account for Terraform state files.
  8. Create the Key Vault for storing secrets and keys.
  9. Create the Log Analytics Workspace for monitoring and logging.
  10. Create the Azure Policy Definitions and Assignments for enforcing governance (a sketch of a policy assignment follows this list).

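As an illustration of step 10, here is a hedged sketch of assigning the built-in “Allowed locations” policy at subscription scope (the allowed locations are examples only):
#hcl code
data "azurerm_subscription" "current" {}

# Look up the built-in "Allowed locations" policy definition.
data "azurerm_policy_definition" "allowed_locations" {
  display_name = "Allowed locations"
}

# Assign it at subscription scope, restricting deployments to East US.
resource "azurerm_subscription_policy_assignment" "allowed_locations" {
  name                 = "allowed-locations"
  subscription_id      = data.azurerm_subscription.current.id
  policy_definition_id = data.azurerm_policy_definition.allowed_locations.id

  parameters = jsonencode({
    listOfAllowedLocations = {
      value = ["eastus"]
    }
  })
}
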
The code structure follows the Cloud Adoption Framework (CAF) for Azure landing zones and is organized into the following directories:

  • variables: Contains the variables used by the Terraform code.
  • providers: Contains the provider configuration for Azure and Terraform.
  • resource-groups: Contains the code for creating the Resource Group and networking resources.
  • virtual-networks: Contains the code for creating the Virtual Network and Subnets.
  • network-security-groups: Contains the code for creating the Network Security Groups and associating them with the Subnets.
  • bastion: Contains the code for creating the Bastion Host.
  • firewall: Contains the code for creating the Azure Firewall.
  • storage-account: Contains the code for creating the Storage Account for Terraform state files.
  • key-vault: Contains the code for creating the Key Vault for secrets and keys.
  • log-analytics: Contains the code for creating the Log Analytics Workspace.
  • policy: Contains the code for creating the Azure Policy Definitions and Assignments.

Each directory contains a main.tf file with the Terraform code, as well as any necessary supporting files such as variables and modules.

Overall, implementing an Azure Landing Zone using Terraform and Reference Architecture requires a significant amount of planning and configuration. However, the end result is a well-architected, secure, and scalable environment that can serve as a foundation for your cloud-based workloads.

It’s important to note that the specific code required for this process will depend on your organization’s specific needs and requirements. Additionally, implementing an Azure Landing Zone can be a complex process and may require assistance from experienced Azure and Terraform professionals.

GitOps with a comparison between Flux and ArgoCD and which one is better for use in Azure AKS

March 15, 2023

GitOps has emerged as a powerful paradigm for managing Kubernetes clusters and deploying applications. Two popular tools for implementing GitOps in Kubernetes are Flux and ArgoCD. Both tools have similar functionalities, but they differ in terms of their architecture, ease of use, and integration with cloud platforms like Azure AKS. In this blog, we will compare Flux and ArgoCD and see which one is better for use in Azure AKS.

Flux:

Flux is a GitOps tool that automates the deployment of Kubernetes resources by syncing them with a Git repository. It supports progressive delivery strategies such as canary, blue-green, and A/B testing through its companion project Flagger. Flux has a simple architecture: a set of in-cluster controllers (the GitOps Toolkit) watch Git repositories and other sources for changes and reconcile the cluster to match the desired state. Flux integrates closely with Azure AKS through the AKS GitOps extension, which is built on Flux v2, and its Helm controller lets users manage Helm releases declaratively through Git.
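
For illustration, here is a hedged Terraform sketch of enabling the AKS GitOps extension and pointing it at a Git repository, assuming an azurerm_kubernetes_cluster resource named aks defined elsewhere (the repository URL and paths are placeholders):
#hcl code
# Install the Flux-based GitOps extension on the cluster.
resource "azurerm_kubernetes_cluster_extension" "flux" {
  name           = "flux"
  cluster_id     = azurerm_kubernetes_cluster.aks.id
  extension_type = "microsoft.flux"
}

# Point Flux at a Git repository and reconcile the ./apps path.
resource "azurerm_kubernetes_flux_configuration" "apps" {
  name       = "apps"
  cluster_id = azurerm_kubernetes_cluster.aks.id
  namespace  = "flux-system"

  git_repository {
    url             = "https://github.com/my-org/gitops-repo"
    reference_type  = "branch"
    reference_value = "main"
  }

  kustomizations {
    name = "apps"
    path = "./apps"
  }

  depends_on = [azurerm_kubernetes_cluster_extension.flux]
}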

ArgoCD:

ArgoCD is a GitOps tool that provides a declarative way to deploy and manage applications on Kubernetes clusters. It has a powerful UI that allows users to visualize the application state and perform rollbacks and updates. ArgoCD has a more complex architecture than Flux, consisting of an API server, a repository server, and an application controller, along with a CLI for interacting with the API server. The application controller continuously compares the live cluster state against the desired state in Git and reconciles any drift. ArgoCD can be integrated with Azure AKS, for example by installing it with Helm or through the Argo CD operator, and used to manage Kubernetes resources with GitOps.

Comparison:

Now that we have an understanding of the two tools, let’s compare them based on some key factors:

  1. Architecture: Flux has a simpler architecture than ArgoCD, which makes it easier to set up and maintain. ArgoCD’s more complex architecture allows for more advanced features, but it requires more resources to run.
  2. Ease of use: Flux is easier to use than ArgoCD, as it has fewer components and a more straightforward setup process. ArgoCD’s UI is more user-friendly than Flux, which has no built-in UI, but its larger feature set can be overwhelming for beginners.
  3. Integration with Azure AKS: Both Flux and ArgoCD can be integrated with Azure AKS, but Flux has first-party integration through the AKS GitOps extension, which is built on Flux v2 and lets you manage cluster configuration and Helm charts using GitOps.
  4. Community support: Both tools have a large and active community, with extensive documentation and support available. However, Flux has been around longer and has more users, which means it has more plugins and integrations available.

Conclusion:

In conclusion, both Flux and ArgoCD are excellent tools for implementing GitOps in Kubernetes. Flux has a simpler architecture and is easier to use, making it a good choice for beginners. ArgoCD has a more advanced feature set and a powerful UI, making it a better choice for more complex deployments. When it comes to integrating with Azure AKS, Flux has the advantage through its Helm Operator. Ultimately, the choice between Flux and ArgoCD comes down to the specific needs of your organization and your level of experience with GitOps.

Difference between workload managed identity, Pod Managed Identity and AKS Managed Identity

March 12, 2023

Azure Kubernetes Service (AKS) offers several options for managing identities within Kubernetes clusters, including AKS Managed Identity, Pod Managed Identity, and Workload Managed Identity. Here’s a comparison of these three options:

| Key Features | AKS Managed Identity | Pod Managed Identity | Workload Managed Identity |
|---|---|---|---|
| Overview | A built-in feature of AKS that allows you to assign an Azure AD identity to your entire cluster | Allows you to assign an Azure AD identity to an individual pod | Allows you to assign an Azure AD identity to a Kubernetes workload, which can represent one or more pods |
| Scope | Cluster-wide | Pod-specific | Workload-specific |
| Identity Type | Service Principal | Managed Service Identity | Managed Service Identity |
| Identity Location | Cluster | Node | Node |
| Usage | Generally used for cluster-wide permissions, such as managing Azure resources | Useful for individual pod permissions, such as accessing Azure Key Vault secrets | Useful for workload-specific permissions, such as accessing a database |
| Limitations | Limited to one identity per cluster | Limited to one identity per pod | None |
| Configuration Complexity | Requires configuration of AKS cluster and Azure AD | Requires configuration of individual pods and Azure AD | Requires configuration of Kubernetes workloads and Azure AD |

Key features comparison table

Here are a few examples of how you might use each type of identity in AKS:

AKS Managed Identity

Suppose you have an AKS cluster that needs to access Azure resources, such as an Azure Key Vault or Azure Storage account. You can use AKS Managed Identity to assign an Azure AD identity to your entire cluster, and then grant that identity permissions to access the Azure resources. This way, you don’t need to manage individual service principals or access tokens for each pod.
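
As a sketch, here is how you might express that with Terraform, assuming an AKS cluster and an RBAC-enabled Key Vault defined elsewhere in the configuration (names are illustrative):
#hcl code
# Grant the cluster's system-assigned identity read access to secrets.
resource "azurerm_role_assignment" "aks_kv_secrets" {
  scope                = azurerm_key_vault.landing_zone_kv.id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = azurerm_kubernetes_cluster.aks.identity[0].principal_id
}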

Pod Managed Identity

Suppose you have a pod in your AKS cluster that needs to access a secret in Azure Key Vault. You can use Pod Managed Identity to assign an Azure AD identity to the pod, and then grant that identity permissions to access the secret in Azure Key Vault. This way, you don’t need to manage a separate service principal for the pod, and you can ensure that the pod only has access to the resources it needs.

Workload Managed Identity

Suppose you have a Kubernetes workload in your AKS cluster that needs to access a database hosted in Azure. You can use Workload Managed Identity to assign an Azure AD identity to the workload, and then grant that identity permissions to access the database. This way, you can ensure that the workload only has access to the database, and you don’t need to manage a separate service principal for each pod in the workload.

In summary, each type of AKS identity has its own strengths and use cases. AKS Managed Identity is useful for cluster-wide permissions, Pod Managed Identity is useful for individual pod permissions, and Workload Managed Identity is useful for workload-specific permissions. By choosing the right type of identity for your needs, you can simplify identity management and ensure that your AKS workloads have secure and controlled access to Azure resources.

How is AKS workload identity different from AKS pod managed identity?

March 12, 2023

AKS workload identity and AKS pod managed identity both provide a way to manage access to Azure resources from within a Kubernetes cluster. However, there are some key differences between the two features.

Scope

AKS pod managed identity provides a managed identity for each individual pod within a Kubernetes cluster. This allows you to grant access to Azure resources at a very granular level. AKS workload identity, on the other hand, associates a single AAD identity with a Kubernetes service account, shared by every pod that runs under that service account. This provides a broader scope for access to Azure resources within the namespace.

Access management

With AKS pod managed identity, you can assign roles or permissions directly to individual pods. This provides greater flexibility for managing access to Azure resources within the cluster. With AKS workload identity, access management is done through AAD roles and role assignments. This provides a more centralized approach to managing access to Azure resources within the namespace.

Security

AKS pod managed identity eliminates the need to store secrets or access tokens within pod configurations, which can improve the security of the Kubernetes cluster. AKS workload identity also eliminates the need to store secrets or access tokens within pod configurations. However, because the AAD identity is shared by all pods that run under the same service account, there is a risk that if that identity is compromised, all of those pods could be affected.

In summary, AKS workload identity is a powerful feature of AKS that enables you to use Azure Active Directory to manage access to Azure resources from within a Kubernetes cluster. By federating a single AAD identity with a Kubernetes service account, AKS workload identity provides a centralized approach to access management. This can simplify the management of access to Azure resources and improve the security of your Kubernetes cluster.

While AKS pod managed identity and AKS workload identity both provide a way to manage access to Azure resources from within a Kubernetes cluster, they have different scopes and approaches to access management. By understanding the differences between the two features, you can choose the approach that best meets the needs of your organization.

AKS pod managed identity

March 12, 2023

Kubernetes has become one of the most popular container orchestration tools, and Azure Kubernetes Service (AKS) is a managed Kubernetes service provided by Microsoft Azure. With the increasing use of Kubernetes and AKS, there is a growing need to improve the security and management of access to cloud resources.

AKS pod managed identity is a feature of AKS that simplifies the management of access to Azure resources by creating an identity for each pod in a Kubernetes cluster. The AKS pod managed identity allows the pods to access Azure services securely without the need to manage credentials, passwords, or access tokens.

In this blog post, we’ll take a closer look at what AKS pod managed identity is, how it works, and its benefits.

What is AKS Pod Managed Identity?

AKS pod managed identity is a feature of AKS that enables the management of identities for pods in a Kubernetes cluster. When AKS pod managed identity is enabled, each pod can be bound to an Azure managed identity. That Managed Identity is then used to authenticate the pod with Azure services such as Azure Key Vault, Azure Storage, and Azure SQL Database, among others.

AKS pod managed identity eliminates the need for storing secrets and credentials within the pod’s configuration, which can improve the security of the pod and simplify the management of access to cloud resources.

How AKS Pod Managed Identity Works

AKS pod managed identity uses Azure’s Managed Identity service, which is a feature of Azure Active Directory (AAD). When a pod is created in an AKS cluster with pod managed identity enabled, a Managed Identity is automatically created for that pod.

To use AKS pod managed identity, you must first enable the feature in your AKS cluster. This can be done using the Azure CLI or through the Azure portal. Once enabled, you create an AzureIdentity resource that references a managed identity, an AzureIdentityBinding with a selector, and then label each pod that needs Azure access with that selector.

Here’s an example of a Kubernetes manifest file that uses AKS pod managed identity:

#yaml 
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    env:
    - name: AZURE_TENANT_ID
      value: "<tenant-id>"
    - name: AZURE_CLIENT_ID
      value: "<client-id>"
    - name: AZURE_CLIENT_SECRET
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: my-secret-key
  identity:
    type: ManagedIdentity

In this example, the identity section defines a Managed Identity for the pod using the type: ManagedIdentity field. The AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET environment variables are also defined, which allow the pod to authenticate with Azure services using its Managed Identity.

Once the pod is created, you can then grant it access to Azure resources by assigning it the appropriate role or permissions. This can be done using Azure’s Role-Based Access Control (RBAC) system or through other access control mechanisms provided by Azure services.

Here’s another example manifest file that demonstrates how to use AKS Pod Managed Identity:

#yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        # Matches the selector of an AzureIdentityBinding, so every
        # pod in this Deployment uses the bound managed identity.
        aadpodidbinding: my-pod-identity
    spec:
      containers:
      - name: my-app
        image: myregistry/my-app:v1
        ports:
        - containerPort: 80

In this example, the pod template carries the aadpodidbinding label, so every replica of the Deployment is bound to the managed identity through the same AzureIdentityBinding as in the previous example. No connection strings, keys, or other secrets appear in the manifest; the pods authenticate with Azure services using the bound Managed Identity.

Benefits of AKS Pod Managed Identity

AKS pod managed identity provides several benefits, including:

Improved security

AKS pod managed identity eliminates the need to store credentials or access tokens within the pod’s configuration. This reduces the risk of accidental exposure of sensitive data and improves the overall security of the pod and the cluster.

Simplified management

AKS pod managed identity simplifies the management of access to cloud resources by creating an identity for each pod in a Kubernetes cluster. This eliminates the need to manage service principals or credentials manually, which can reduce the administrative overhead and improve the efficiency of the cluster.

Greater flexibility

AKS pod managed identity provides greater flexibility by allowing you to grant access to Azure resources at a more granular level. You can assign roles or permissions directly to individual pods, which can reduce the risk of unauthorized access and improve the overall security posture of the cluster.

Easier compliance

AKS pod managed identity can make it easier to comply with regulatory requirements such as GDPR, HIPAA, and PCI DSS. By eliminating the need to store secrets and credentials within the pod’s configuration, you can reduce the risk of non-compliance and simplify the auditing process.

Better scalability

AKS pod managed identity can help improve the scalability of your Kubernetes clusters by reducing the overhead associated with managing service principals or credentials manually. This can enable you to scale your clusters more easily and efficiently, which can improve the overall performance and availability of your applications.

Conclusion

AKS pod managed identity is a powerful feature of AKS that can simplify the management of access to Azure resources, improve the security of your pods and clusters, and help you comply with regulatory requirements. By creating a Managed Identity for each pod in your Kubernetes cluster, AKS pod managed identity can eliminate the need to manage credentials and access tokens manually, which can reduce the administrative overhead and improve the efficiency of your operations.

In addition to AKS pod managed identity, Azure provides other identity and access management features such as AKS managed identity and workload identity that can help you manage access to your Azure resources securely. By using these features in conjunction with AKS pod managed identity, you can create a comprehensive identity and access management solution for your Kubernetes workloads in Azure.

References

  • Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)

AKS Workload Identity

March 11, 2023

AKS workload identity is a feature of Azure Kubernetes Service (AKS) that enables you to use Azure Active Directory (AAD) to manage access to Azure resources from within a Kubernetes cluster. In this blog post, we’ll explore how AKS workload identity works and how to use it with an example code.

How does AKS workload identity work?

AKS workload identity works by federating a Kubernetes service account with an AAD identity (an application registration or a user-assigned managed identity). Pods in the namespace that run under this service account can access Azure resources, such as storage accounts, without needing to store secrets or access tokens within the pod configuration.

The AKS cluster exposes an OIDC issuer endpoint, and a federated identity credential on the AAD side tells AAD to trust service account tokens issued by that endpoint. When a pod needs to access an Azure resource, it presents the projected service account token that Kubernetes mounts into the pod, and AAD exchanges it for an access token on behalf of the pod. This access token is then used to authenticate the pod to the Azure resource.
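
If you manage the cluster with Terraform, a hedged sketch of the pieces involved looks like this, assuming a resource group resource named rg defined elsewhere (all names and the placeholder subject are illustrative):
#hcl code
# Enable the OIDC issuer and workload identity on the cluster.
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-dev"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aks-dev"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }

  oidc_issuer_enabled       = true
  workload_identity_enabled = true
}

# The identity your pods will use.
resource "azurerm_user_assigned_identity" "app" {
  name                = "app-identity"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

# Federate the identity with a Kubernetes service account.
resource "azurerm_federated_identity_credential" "app" {
  name                = "aks-federation"
  resource_group_name = azurerm_resource_group.rg.name
  parent_id           = azurerm_user_assigned_identity.app.id
  audience            = ["api://AzureADTokenExchange"]
  issuer              = azurerm_kubernetes_cluster.aks.oidc_issuer_url
  subject             = "system:serviceaccount:<namespace>:<service-account>"
}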

How to use AKS workload identity

To use AKS workload identity, you need to have an Azure subscription, an AKS cluster, and an AAD tenant. Here are the steps to set up AKS workload identity and use it in your application:

1. Create an AAD application registration

First, you need to create an AAD application registration for your AKS cluster. This application registration will be used to create the service principal that is associated with your Kubernetes namespace.

You can create an application registration by following these steps:

  1. Go to the Azure portal and navigate to your AAD tenant.
  2. Click on “App registrations” and then click on “New registration”.
  3. Give your application a name and select “Accounts in this organizational directory only” for the supported account types.
  4. Under “Redirect URI (optional)”, select “Web” and enter a dummy URI.
  5. Click on “Register”.

Make a note of the “Application (client) ID” and “Directory (tenant) ID” for later use.
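
If you prefer to manage the registration with Terraform instead of the portal, a minimal sketch using the azuread provider (the display name is illustrative):
#hcl code
provider "azuread" {}

# The application registration whose identity the AKS workloads will use.
resource "azuread_application" "workload" {
  display_name = "aks-workload-identity-app"
}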

2. Grant permissions to the AAD application registration

Next, you need to grant permissions to the AAD application registration to access the Azure resources that you want to use in your application.

You can grant permissions by following these steps:

  1. Go to the Azure portal and navigate to the resource that you want to grant access to.
  2. Click on “Access control (IAM)” and then click on “Add role assignment”.
  3. Select the role that you want to assign and then search for the name of your AAD application registration.
  4. Select your AAD application registration from the list and then click on “Save”.

3. Create a Kubernetes service account with AKS workload identity enabled

Next, you need to create a Kubernetes service account in your namespace and associate it with the AAD application registration that you created in step 1 through a federated identity credential.

You can do this by following these steps:

  1. Create the Kubernetes namespace if it does not already exist, then create a service account in it annotated with the client ID of your application registration:
#yaml code
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <your-service-account-name>
  namespace: <your-namespace-name>
  annotations:
    azure.workload.identity/client-id: "<application-client-id>"
  2. Create a federated identity credential on the application registration that trusts the cluster’s OIDC issuer. Its subject must be system:serviceaccount:<your-namespace-name>:<your-service-account-name>, and its issuer must be the cluster’s OIDC issuer URL (see the Terraform sketch earlier in this post).

Pods that use the identity must run under this service account and carry the label azure.workload.identity/use: "true" so that the workload identity webhook injects the projected service account token and the Azure environment variables.

4. Use AKS workload identity in your application

Finally, you can use AKS workload identity in your application by running it under the annotated service account and authenticating with the federated AAD identity.

Here’s an example code snippet in C# that demonstrates how to use AKS workload identity with the Azure SDK for .NET:

#csharp code
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Storage.Blobs;

namespace AKSWorkloadIdentityExample
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // DefaultAzureCredential picks up the federated service account
            // token injected by the workload identity webhook and exchanges
            // it for an AAD access token -- no secrets are stored in the pod.
            var credential = new DefaultAzureCredential();

            // Create a BlobServiceClient for the target storage account.
            var blobServiceClient = new BlobServiceClient(
                new Uri("https://<your-storage-account-name>.blob.core.windows.net"),
                credential);

            // Use the client to retrieve the contents of a blob.
            var containerClient = blobServiceClient.GetBlobContainerClient("<your-container-name>");
            var blobClient = containerClient.GetBlobClient("<your-blob-name>");
            var response = await blobClient.DownloadContentAsync();

            Console.WriteLine(response.Value.Content.ToString());
        }
    }
}

In this example, we create a new instance of DefaultAzureCredential, which picks up the federated service account token injected by the workload identity webhook and exchanges it for an AAD access token. We then use this credential to create a BlobServiceClient for the storage account we want to access.

Next, we use the BlobServiceClient to retrieve the contents of a blob. Note that we don’t need to pass any secrets or access tokens to the client. Instead, workload identity handles authentication on our behalf, making it much easier to manage access to Azure resources from within our Kubernetes cluster.

I hope this example helps you understand how to use AKS workload identity with the Azure SDK for .NET!

Conclusion

AKS workload identity is a powerful feature of AKS that enables you to use AAD to manage access to Azure resources from within your Kubernetes cluster. By using AKS workload identity, you can avoid storing secrets or access tokens within your pod configurations, making it easier to manage security and access control in your application.

In this blog post, we’ve explored how AKS workload identity works and how to use it in your application. We’ve also seen an example code snippet that demonstrates how to use AKS workload identity with the Azure SDK for Go. Hopefully, this has given you a better understanding of how AKS workload identity can be used to simplify access control in your Kubernetes applications.
