Introduction to Azure Kubernetes Service

Microsoft’s Azure is a public cloud platform that offers bundled services including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) with comprehensive, multilayered security. Out of its numerous offerings, the Azure Kubernetes Service (AKS) is one of the most popular managed Kubernetes services. It leverages Azure’s continuous integration/continuous delivery (CI/CD) and security capabilities to automate container orchestration.

In the first article of this three-part series, we delved into the features, benefits, and use cases of AWS's managed EKS service. In this second part, we'll learn about the features, benefits, and use cases of the AKS service.

Deep dive into Azure Kubernetes Service (AKS)

Azure Kubernetes Service is a fully managed container orchestration service, built on open-source Kubernetes, used to deploy, scale, and manage container-based applications on Azure. Because it is a managed service, AKS does not require software teams to possess deep expertise in Kubernetes operations. And because AKS integrates natively with most Azure services, the platform offers a feature-rich ecosystem for quickly deploying and managing containerized workloads.

How Azure AKS works

Provisioning an AKS cluster is straightforward and can be done using one of the following four options (a programmatic sketch follows the list):

  • Azure CLI
  • Azure Portal
  • Azure PowerShell
  • Template-driven options such as Terraform or Azure Resource Manager (ARM) templates
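
For illustration, here is a minimal sketch of the same provisioning step done programmatically with the Azure SDK for Python (azure-identity and azure-mgmt-containerservice), which calls the same Resource Manager API the options above use. The subscription ID, resource group, and cluster names are placeholders.

```python
# Minimal sketch (assumption: azure-identity and azure-mgmt-containerservice are installed,
# and the resource group "demo-rg" already exists). Names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ManagedCluster,
    ManagedClusterAgentPoolProfile,
    ManagedClusterIdentity,
)

subscription_id = "<subscription-id>"
client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)

cluster = ManagedCluster(
    location="eastus",
    dns_prefix="demo-aks",
    identity=ManagedClusterIdentity(type="SystemAssigned"),
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(
            name="systempool",   # system node pool that hosts core cluster pods
            mode="System",
            count=2,
            vm_size="Standard_DS2_v2",
        )
    ],
)

# Long-running operation: returns once the control plane and worker nodes are ready.
poller = client.managed_clusters.begin_create_or_update("demo-rg", "demo-aks", cluster)
print(poller.result().provisioning_state)
```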

When the AKS cluster is set up, AKS automatically spins up the Kubernetes control plane (historically called the master node). The control plane is a single-tenant node that comes with dedicated Kubernetes control plane components, such as the scheduler, controller manager, and API server. Once the control plane is ready, AKS deploys VMs as worker nodes, within which all containers and workloads are executed.

The control plane hosts the Kubernetes API server and the storage that retains information about the cluster state. The Azure platform automatically secures the connection between the nodes and the control plane and enables interaction through the Kubernetes API. For comprehensive monitoring and troubleshooting, the control plane also captures and forwards logs to observability services such as Azure Monitor Logs.
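
As a quick illustration of interacting with the managed control plane through the Kubernetes API, the sketch below lists the cluster's worker nodes with the official Kubernetes Python client; it assumes cluster credentials have already been merged into the local kubeconfig (for example, with az aks get-credentials).

```python
# Sketch: list worker nodes via the Kubernetes API exposed by the managed control plane.
# Assumes cluster credentials are already in ~/.kube/config (e.g., via `az aks get-credentials`).
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(node.metadata.name, node.status.node_info.kubelet_version, "Ready:", ready)
```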

Components of an AKS cluster

An AKS cluster relies on the following key components to operate:

Virtual network

By default, AKS uses kubenet to automatically create an Azure virtual network (VNet) and subnet for the cluster. Worker nodes receive IP addresses from the VNet subnet, while pods are assigned addresses from a logically separate address space and reach the VNet through network address translation. Alternatively, AKS can use the Azure Container Networking Interface (CNI), which assigns every pod an IP address directly from the subnet.
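
To make the two networking modes concrete, here is a small sketch, using field names from the azure-mgmt-containerservice Python models, of how a kubenet or Azure CNI network profile could be declared at cluster creation; the CIDR values are illustrative.

```python
# Sketch: declare the cluster networking mode at creation time.
# Field names follow the azure-mgmt-containerservice models; CIDR values are illustrative.
from azure.mgmt.containerservice.models import ContainerServiceNetworkProfile

# kubenet: nodes get VNet IPs, pods use a separate, NAT-translated address range.
kubenet_profile = ContainerServiceNetworkProfile(
    network_plugin="kubenet",
    pod_cidr="10.244.0.0/16",
    service_cidr="10.0.0.0/16",
    dns_service_ip="10.0.0.10",
)

# Azure CNI: every pod receives an IP address directly from the VNet subnet.
azure_cni_profile = ContainerServiceNetworkProfile(
    network_plugin="azure",
    service_cidr="10.0.0.0/16",
    dns_service_ip="10.0.0.10",
)

# Either profile is passed as ManagedCluster(network_profile=...) during cluster creation.
```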

Ingress

An ingress defines HTTP/HTTPS routing rules that manage external users' access to cluster services. It relies on an ingress controller, the component responsible for executing the rules that the ingress defines.

In a typical AKS cluster, the ingress controller acts as an API gateway, managing user authentication and authorization. The controller also provides configurable traffic routing, reverse proxying of services, and TLS termination for the cluster. AKS supports numerous Kubernetes-conformant ingress controllers, such as NGINX, Traefik, Istio, and Contour.
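
As a hedged example of a routing rule such a controller would enforce, the sketch below creates a simple HTTP ingress with the Kubernetes Python client. It assumes an NGINX ingress controller is installed and that a Service named "web" already exists; both names are illustrative.

```python
# Sketch: an HTTP routing rule handled by the cluster's ingress controller.
# Assumes an NGINX ingress controller is installed and a Service named "web"
# listening on port 80 already exists; names are illustrative.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="web-ingress",
        annotations={"nginx.ingress.kubernetes.io/rewrite-target": "/"},
    ),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",
        rules=[
            client.V1IngressRule(
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="web",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                )
            )
        ],
    ),
)
networking.create_namespaced_ingress(namespace="default", body=ingress)
```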

Managed control plane

AKS automatically configures and deploys a single-tenant control plane in the cluster's region. This managed control plane is provided as an Azure service at no cost.

The control plane consists of the following core Kubernetes components:

  • kube-apiserver: The API server that exposes the underlying Kubernetes API objects
  • etcd: A key-value store that saves and maintains the cluster state and configuration
  • kube-scheduler: Determines which nodes run the containerized applications packaged in pods
  • kube-controller-manager: Orchestrates the workload controllers that perform node operations and pod replication

Nodes and node pools

Every cluster includes at least one node (an Azure VM) on which pods are deployed to execute the application and its underlying services. Each node runs foundational Kubernetes components, including the kubelet, kube-proxy, and the container runtime, which enable communication between workloads and resources. AKS nodes use the containerd runtime by default; Docker is used only for Windows node pools running older Kubernetes versions.

AKS features

Salient features of the AKS service include:

Managed node pools

When an AKS cluster is created, Azure automatically creates and configures the required number of nodes (VMs), grouped into a system node pool. AKS provides node pool autoscaling to automatically add or remove nodes as demand changes, eliminating the need to provision more VMs manually. When planning resources, cluster administrators can proactively choose VM sizes and types that match application resource requirements so autoscaling remains seamless.
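
The sketch below illustrates the idea with the azure-mgmt-containerservice Python SDK: it adds a hypothetical user node pool with the cluster autoscaler enabled, reusing the placeholder cluster and resource group names from the earlier sketch.

```python
# Sketch: add a user node pool with the cluster autoscaler enabled.
# Reuses the placeholder cluster "demo-aks" and resource group "demo-rg".
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import AgentPool

client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

pool = AgentPool(
    mode="User",                 # user pools run application workloads
    vm_size="Standard_DS3_v2",
    count=1,                     # initial node count
    enable_auto_scaling=True,    # let AKS add or remove VMs as pod demand changes
    min_count=1,
    max_count=5,
)

poller = client.agent_pools.begin_create_or_update("demo-rg", "demo-aks", "userpool1", pool)
print(poller.result().provisioning_state)
```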

Role-based access control

AKS integrates Kubernetes role-based access control with Azure Active Directory, which enables administrators to bind cluster roles to Azure AD groups and users. When a user initiates a session, the API server asks for access credentials, verifies them against Azure AD, and issues a token if the user is valid.
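
For illustration, the following sketch binds the built-in Kubernetes "view" ClusterRole to an Azure AD group using the Kubernetes Python client. The group object ID is a placeholder, and the binding assumes Azure AD integration is already enabled on the cluster.

```python
# Sketch: grant a hypothetical Azure AD group read-only access to the cluster by binding
# the built-in "view" ClusterRole; the group object ID below is a placeholder.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRoleBinding",
    "metadata": {"name": "aad-devs-view"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "view",
    },
    "subjects": [
        {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "Group",
            "name": "00000000-0000-0000-0000-000000000000",  # Azure AD group object ID (placeholder)
        }
    ],
}
rbac.create_cluster_role_binding(body=binding)
```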

Pod-level security

The Azure Policy add-on for AKS lets administrators enforce compliance standards on their clusters at scale. With the add-on, administrators and developers can apply individual policy definitions or policy initiatives to specific groups of pods within the cluster.

Open-source command line interface tool

aksctl is a simple CLI tool written in Go for creating and managing AKS clusters. This open-source tool is free to download on GitHub. It is purpose-built for Kubernetes and condenses operations that would otherwise require lengthy Azure CLI commands.

GitOps IaC with Azure Arc

AKS clusters can leverage Azure Arc to drive application deployment and configuration from state stored in Git. Following the GitOps model, infrastructure-as-code (IaC) templates can be used to create and manage infrastructure components such as VMs, firewalls, and networks across clusters. GitOps with Azure Arc-enabled clusters also allows administrators to apply the state pulled from Git to clusters running on any cloud, including GCP and AWS, or on premises.

Pricing

The Azure Kubernetes Service and the AKS control plane are offered as free container services, and no charges are incurred for the management of Kubernetes clusters. Organizations pay only for the Azure cloud infrastructure resources consumed by the cluster, such as VMs, network resources, and storage.

Multiple deployment options

Azure also offers multiple options to create, deploy, and access AKS clusters, depending on preference and level of expertise: the Azure CLI, the Azure Portal, Azure Resource Manager (ARM) templates, and the Azure PowerShell console. Beyond Azure VMs, administrators can connect AKS clusters to other clusters using Azure Arc for hybrid or multi-cloud orchestration.

Efficient management and monitoring

Each AKS installation includes the Kubernetes metrics server, which reports basic memory and CPU consumption for pods and nodes. Azure Monitor additionally offers various pre-configured data points to monitor the AKS cluster, including alerts, metrics, logs, container insights, workbooks, and advisor recommendations. Azure also provides the option to choose from various third-party, open-source health and resource monitoring tools, including Weave Scope, Prometheus, and Grafana.
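
As a small illustration, the sketch below reads the metrics server's per-node figures through the Kubernetes API; it assumes kubeconfig access to the cluster.

```python
# Sketch: read the metrics server's per-node CPU/memory figures via the Kubernetes API.
# Assumes kubeconfig access to the cluster.
from kubernetes import client, config

config.load_kube_config()
metrics_api = client.CustomObjectsApi()

node_metrics = metrics_api.list_cluster_custom_object(
    group="metrics.k8s.io", version="v1beta1", plural="nodes"
)
for item in node_metrics["items"]:
    usage = item["usage"]
    print(item["metadata"]["name"], "cpu:", usage["cpu"], "memory:", usage["memory"])
```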

With the Azure platform managing the provisioning of Kubernetes components and node resources, AKS offloads the administration and operational overhead to Azure cloud. For faster deployment and management of clusters, Azure also offers various developer-centric tools, including Azure DevOps, Visual Studio Code, and Azure Monitor.

Why use AKS

AKS abstracts the provisioning, security, and automation overhead of maintaining container workloads in a cloud-native ecosystem. By offering an efficient platform to deploy and manage microservice-based applications, the platform blends Kubernetes-supported container workloads and DevOps practices.

The platform also simplifies the management and health monitoring of the managed Kubernetes service. An AKS cluster can additionally leverage service components such as advanced networking, security, and Azure Active Directory integration.

Benefits of AKS

The advantages of running Kubernetes clusters on AKS include:

Strict access control

AKS allows administrators to tightly secure their workloads by integrating with Azure Active Directory to authenticate users based on their Azure AD identity and group membership. Azure AD also lets administrators restrict users' access to cluster resources and namespaces through built-in Kubernetes role-based access control.

Enables CI/CD out of the box

AKS uses Azure DevOps pipelines to automate the build, test, and deployment of cluster configurations by integrating with Git and the Azure Container Registry. With Azure Pipelines, organizations can set up rapid build and release cycles backed by agile CI/CD processes.

Improved developer productivity

AKS automatically handles infrastructure upgrades, patching, spin-up, and scaling of worker nodes, eliminating the manual effort needed to provision the required resources for containerized workloads. The managed service also removes the complexity and expertise needed to install, maintain, and secure Kubernetes on Azure cloud. AKS also integrates with developer productivity tools like VSCode and Azure Monitor to simplify development and deployment tasks and enable efficient collaboration among cross-functional teams.

High availability with Azure regions

AKS maintains high availability by deploying multiple nodes within a Virtual Machine Scale Set. While multiple nodes protect against individual node failures, they do not tolerate the failure of an entire Azure region. To improve resilience against region failures, administrators can deploy the application on multiple clusters across different paired regions designed for disaster recovery.

Using availability zones, AKS keeps applications and data available in case of data center failures. Availability zones are distinct, physically isolated locations within an Azure region, each containing one or more data centers. To guard against data center outages, applications are deployed on AKS clusters that use availability zones, spreading cluster nodes across multiple availability zones within a region.
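
As a hedged sketch, the node pool profile below (using the azure-mgmt-containerservice Python models) spreads nodes across three availability zones; zone support depends on the region and VM size, and the values shown are illustrative.

```python
# Sketch: a node pool profile spread across three availability zones.
# Zone support depends on the region and VM size; the values shown are illustrative.
from azure.mgmt.containerservice.models import ManagedClusterAgentPoolProfile

zonal_pool = ManagedClusterAgentPoolProfile(
    name="zonalpool",
    mode="System",
    count=3,                              # nodes are balanced across the listed zones
    vm_size="Standard_DS2_v2",
    availability_zones=["1", "2", "3"],
)
# Passed as ManagedCluster(agent_pool_profiles=[zonal_pool]) at cluster creation.
```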

Dynamic multi-cluster security with Azure Policy

The Azure Policy service for AKS helps implement pod-level policy to ensure clusters comply with different security frameworks. With the add-on service, administrators can apply individual policy definitions or policies to specific groups of pods within the cluster. This helps limit the pods allowed to run on nodes within specific clusters, enabling granular access control for multi-cluster applications.

Easy to deploy and use

The fully managed AKS service takes care of deploying the control plane and spinning up worker nodes for Kubernetes workloads. AKS offers automatic upgrades, patching, self-healing, scaling, and monitoring, thereby minimizing the maintenance overhead while facilitating quicker development and deployment.

AKS use cases

Common usage scenarios for AKS clusters include:

  • Deployment of trained ML models: Through AKS integration with Kubeflow, Azure supports deploying trained machine learning (ML) models onto AKS clusters backed by graphics processing unit (GPU)-enabled VMs. AKS enables seamless autoscaling and logging for low-latency ML production pipelines whose models can be trained, registered, and profiled in Azure Machine Learning (Azure ML).
  • DevSecOps integration: Apart from offering cloud workload protection through Azure Defender and Azure Policy, AKS ships with pre-configured services such as Azure's dynamic policy controls, Active Directory, and Azure Monitor Logs to achieve the right balance between security, scale, and agility for secure DevOps operations. For full-stack security and compliance, Azure integrates Visual Studio and GitHub into development workflows to automatically detect security vulnerabilities from the initial phases of the SDLC.
  • IoT app deployment and management: Engineering teams can connect AKS clusters with Azure services such as Azure Cosmos DB, Azure HDInsight, and Azure API Management for IoT data flows. The platform helps create an agile pipeline for the real-time detection, ingestion, and processing of sensor data to provide instant insights and recommendations.

Conclusion

The Azure Kubernetes Service offers an integrated CI/CD experience, a serverless Kubernetes offering, and a comprehensive security experience for DevSecOps operations. Given the extensive number of regions available through Azure cloud, administrators can leverage Azure DevOps capabilities to enforce dynamic policy management across distributed clusters. This makes the AKS service ideal for high-performance Kubernetes applications such as IoT device management and training ML models.

In this second article of the series, we delved into the features, benefits, and use cases of Azure's managed AKS service. In the next article of the series, we'll explore other managed Kubernetes services and see how they compare on common points.
