Introduction to Google Kubernetes Engine (GKE)

The Google Cloud Platform (GCP) offers managed services of computing, networking, storage, and other infrastructure resources that aid enterprises in digital and cloud migration. The platform enables the deployment of applications at scale using Google resources. The Google Kubernetes Engine is a GCP service that simplifies the orchestration and management of containerized workloads.

In this article of our series on managed Kubernetes services, we’ll discuss the various features, benefits, and possible use cases of the Google Kubernetes Engine (GKE) and learn how it compares with other managed Kubernetes services.

Deep dive into Google Kubernetes Engine

GKE is a feature-rich, managed Kubernetes platform that enables deployment, configuration, and orchestration of containers using Google Cloud infrastructure. A GKE environment is typically composed of multiple Google Compute Engine instances grouped to form a Kubernetes cluster.

How GKE Works

GKE relies on Google Compute Engine (GCE) to provide an extensible and portable platform for managing containers in Kubernetes clusters. GKE lets cluster administrators streamline operations using release channels, which allow them to choose among Kubernetes release tracks that balance new features against stability.

As Kubernetes was conceptualized and developed by Google, using GKE provides organizations with early access to guidance on security, upgrades with performance improvements, and newer integrations. Kubernetes cluster administrators also benefit from Google’s advanced container management capabilities, including GCP’s efficient load balancing features, autoscaling, auto-repair of GCE instances, and automatic upgrades.

Components of GKE clusters

The GKE environment comprises the following primary components:

Virtual private cloud

The virtual private cloud (VPC) enforces cluster isolation and allows for the configuration of routing and network policies. Each GKE cluster is created within a subnet under a GCP virtual private cloud that allocates IPs to pods based on native routing rules within the VPC network. Communication with clusters in other VPCs, on the other hand, is achieved through VPC network peering. GKE clusters can also be connected with external on-prem clusters or third-party cloud platforms using Cloud Interconnect or Cloud VPN routers.
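As a minimal sketch of the setup described above, a VPC-native cluster can be created inside an existing subnet with the gcloud CLI; the cluster, network, and subnet names below are hypothetical placeholders:

```shell
# Create a VPC-native (alias-IP) GKE cluster in an existing subnet,
# so pod IPs are allocated from the VPC's native routing rules.
# "my-vpc" and "my-subnet" are example names; substitute your own.
gcloud container clusters create my-cluster \
    --region=us-central1 \
    --network=my-vpc \
    --subnetwork=my-subnet \
    --enable-ip-alias
```

The `--enable-ip-alias` flag is what makes the cluster VPC-native, letting the VPC assign routable alias IP ranges to pods.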

Cluster master

Cluster master is the managed instance that runs GKE control plane components, including the API server, resource controllers, and the scheduler, which are collectively used to manage storage, compute, and network resources for workloads. The GKE control plane also manages the lifecycle of the containerized application, including scheduling, scaling, and upgrades.

Nodes and node pools

Nodes are the worker machines that host containerized applications. GKE creates these nodes from Compute Engine instances when the cluster is initiated. Nodes run the kubelet and kube-proxy services along with a container runtime to support containerized workloads. GKE also deploys special containers on each node to provide cluster functionality, such as networking and log collection.
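As an illustrative sketch (the cluster name, pool name, and machine type are examples only), additional worker capacity is typically added to a cluster as a node pool via the gcloud CLI:

```shell
# Add a node pool of three e2-standard-4 Compute Engine instances
# to an existing cluster; all names here are placeholders.
gcloud container node-pools create worker-pool \
    --cluster=my-cluster \
    --region=us-central1 \
    --machine-type=e2-standard-4 \
    --num-nodes=3
```

Grouping nodes into pools lets administrators mix machine types within one cluster and scale each pool independently.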

GKE features

Key features of the GKE environment include:

Dual operation modes

GKE allows for two modes of operation, depending on the level of control and flexibility required. In Autopilot mode, GKE provides a fully automated experience, managing all the cluster infrastructure and operations. Standard mode, on the other hand, offers cluster administrators complete control over clusters and nodes. While standard mode enables configuration flexibility and fine-grained cost control, it also increases the operational overhead of managing clusters.
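The two modes map to two different gcloud commands, sketched below with placeholder cluster names:

```shell
# Autopilot mode: Google manages nodes and cluster infrastructure.
gcloud container clusters create-auto my-autopilot-cluster \
    --region=us-central1

# Standard mode: the administrator sizes and manages node pools directly.
gcloud container clusters create my-standard-cluster \
    --region=us-central1 \
    --num-nodes=3
```

Note that Autopilot clusters accept no node-count flags at all; node provisioning is derived entirely from workload resource requests.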

Prebuilt Kubernetes applications

GKE provides a library of prebuilt templates and applications to support the quick provisioning and management of Kubernetes environments. These are open-source and commercial-grade applications vetted by Google to accelerate development with simplified licensing, consolidated billing, and portability.

Isolation with GKE Sandbox

GKE provides an extra layer of security with GKE Sandbox, which relies on gVisor, a user-space kernel that intercepts and services system calls on behalf of workloads. This limits direct interaction with the host kernel, confining untrusted workloads to a small blast radius.
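Sandboxing is enabled per node pool; a minimal sketch (cluster and pool names are placeholders) looks like this:

```shell
# Create a node pool whose workloads run under the gVisor sandbox.
# Pods then opt in by setting runtimeClassName: gvisor in their spec.
gcloud container node-pools create sandbox-pool \
    --cluster=my-cluster \
    --region=us-central1 \
    --sandbox=type=gvisor
```

Keeping sandboxed workloads in a dedicated pool separates untrusted containers from system workloads on the default pool.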

Secure accounts and role permissions

Along with Kubernetes role-based access control (RBAC), GKE uses Google’s Identity and Access Management (IAM) roles to administer cluster permissions. Cluster administrators can use GCP IAM service accounts to control how non-human entities, such as applications and CI pipelines, interact with clusters. Beyond authentication and authorization, cluster access can also be monitored and audited through GCP’s logging and auditing capabilities.
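As a hedged example of combining IAM with in-cluster RBAC, a hypothetical CI service account can be granted a predefined GKE role at the project level:

```shell
# Grant a (hypothetical) CI service account permission to deploy
# workloads to clusters in the project; project and account names
# are placeholders.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:ci-deployer@my-project.iam.gserviceaccount.com" \
    --role="roles/container.developer"
```

IAM gates who can reach the cluster API at the project level, while Kubernetes RBAC then scopes what that identity may do inside each namespace.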

Deep integration with the Kubernetes codebase

As both GKE and Kubernetes were developed by Google, the Google Kubernetes Engine is fundamentally backed by the same design principles and teams that built Kubernetes. As a result, GKE offers innate integrations with the Kubernetes ecosystem and enjoys instant access to the latest Kubernetes features and upgrades.

Four-way autoscaling

The GKE service is designed to manage all the components and resources of a Kubernetes cluster, scaling along four axes: horizontal and vertical pod autoscaling, plus cluster autoscaling and node auto-provisioning. To maintain services through changing demand, GKE’s horizontal pod autoscaler adds or removes pods in response to the workload’s CPU and memory consumption metrics, while the vertical pod autoscaler adjusts the CPU and memory requests of the pods themselves.

In addition to pod autoscaling, GKE’s cluster autoscaler provisions or removes nodes within a node pool as workload demand changes, while node auto-provisioning goes a step further by creating entire node pools sized to pending workloads.
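Two of these axes can be sketched with one command each; the deployment, cluster, and pool names are placeholders, and the thresholds are illustrative:

```shell
# Horizontal pod autoscaling: scale a deployment on CPU utilization,
# keeping between 2 and 10 replicas at a 70% target.
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10

# Cluster autoscaling: let GKE add or remove nodes in a pool
# between the given bounds as pods become schedulable or idle.
gcloud container clusters update my-cluster \
    --region=us-central1 \
    --node-pool=default-pool \
    --enable-autoscaling \
    --min-nodes=1 \
    --max-nodes=5
```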


GKE pricing

GKE clusters running in standard mode with fewer than three nodes are free, while larger clusters cost $0.10 an hour per cluster, in addition to the resources used by the workload. An Autopilot cluster with a single control plane accrues $0.10 an hour per cluster. Large Autopilot clusters with multiple control plane instances are billed according to the number of control planes, nodes, and cluster resources used.

Item | Regular price | Spot price* | 1-year commitment (US$) | 3-year commitment (US$)
GKE Autopilot vCPU price (per vCPU-hr) | $0.0445 | $0.0133 | $0.0356000 | $0.0244750
GKE Autopilot pod memory price (per GB-hr) | $0.0049225 | $0.0014767 | $0.0039380 | $0.0027074
GKE Autopilot ephemeral storage price (per GB-hr) | $0.0000548 | $0.0000548 | $0.0000438 | $0.0000301
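Using the regular prices above, the hourly cost of a hypothetical Autopilot pod requesting 0.5 vCPU, 2 GB of memory, and 1 GB of ephemeral storage can be estimated with a quick shell calculation:

```shell
# Hourly pod cost = vCPU + memory + ephemeral storage,
# at the regular (on-demand) Autopilot rates listed above.
awk 'BEGIN { printf "%.7f\n", 0.5*0.0445 + 2*0.0049225 + 1*0.0000548 }'
# prints 0.0321498 (about $0.032/hour, or roughly $23/month)
```

Because Autopilot bills per pod request, the $0.10/hour cluster fee plus these per-resource rates is the entire bill; there is no charge for idle node capacity.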

Deployment options

GKE clusters can be launched using the Google Cloud CLI (gcloud CLI) or the Google Cloud console’s web UI. While nodes are primarily Google Compute Engine VMs, administrators can connect nodes and other resources from third-party cloud platforms and on-prem deployments through Google Anthos. Anthos registers all clusters, including third-party Kubernetes clusters, into its fleet for unified access and management of a multi-cluster setup.

Managing and monitoring

GKE supports various cloud logging and monitoring tools. Cloud Logging and Cloud Monitoring collect system logs and metrics, while the Google Cloud Managed Service for Prometheus helps cluster administrators collect and query workload metrics using PromQL. Additionally, GKE clusters are backed by Google site reliability engineers (SREs) to help ensure application security and comprehensive regulatory compliance.

Why use GKE

Being one of the largest contributors to the Kubernetes project, Google offers an efficient platform to provision Kubernetes with a single click. The growth and development of GKE is driven by years of experience in managing containers at scale, enabling organizations to leverage the latest Kubernetes features per their use case.

As the first fully managed Kubernetes service, GKE has made rich advancements in its offerings, including four-way autoscaling, multi-cluster support, and a comprehensive Kubernetes API for the seamless management of complex Kubernetes ecosystems.

Benefits of GKE

Advantages of running Kubernetes applications on GKE include:

Early access to Kubernetes updates and releases

As the original creator and one of the leading contributors to the Kubernetes project, Google brings the latest Kubernetes updates to GKE quickly, pairing them with Google Cloud’s resilient infrastructure for fast deployment and cluster management.

Improved productivity and agility

The Google Cloud Marketplace offers enterprise-ready, open-source templates and applications for containerized setups. These production-grade applications slash development time and effort by implementing Google-backed features such as networking, access control, and licensing.

Hybrid cloud cluster management

GKE leverages Google Anthos’ automated managed-applications platform to enable the consistent management of workloads across cloud and on-prem environments. Anthos is fundamentally designed around fleets, a logical grouping and normalization of Kubernetes clusters and their underlying resources, to enable hybrid cluster orchestration.

Streamlined operations with release channels

The Kubernetes project ships frequent updates to introduce new features, deliver security patches, and fix known issues. GKE automates the uptake of these updates through scheduled release channels that manage Kubernetes versions automatically.

GKE lists three release channels:

  • Rapid: Gets the latest Kubernetes release and uses the newest GKE features as soon as they reach general availability (GA) status
  • Regular: Balances stability and availability by adopting new releases after 2–3 months of GA
  • Stable: Prioritizes stability over new functionality, adopting features 2–3 months after the regular release date, allowing more time for validation
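A cluster is enrolled in a channel at creation time (the cluster name below is a placeholder):

```shell
# Enroll a new cluster in the Regular release channel; GKE then
# applies Kubernetes version upgrades automatically on that track.
gcloud container clusters create my-cluster \
    --region=us-central1 \
    --release-channel=regular
```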

Low ownership and maintenance costs

Google charges a flat fee for GKE clusters depending on the number of nodes within the cluster. Standard GKE clusters with fewer than three nodes are free, while larger clusters start at $0.10 per hour. Additionally, Google offers an Autopilot mode that enforces per-pod billing, ensuring organizations are billed only for resource requests and eliminating costs associated with unallocated capacity, OS overhead, and system components.

Easy to deploy and use

GKE allows administrators to deploy clusters, provision resources, and enable critical cluster functionalities with a click-based workflow. The Autopilot mode further simplifies operations by managing the entire cluster’s infrastructure, eliminating manual node configuration and monitoring.

GKE use cases

Possible use cases of GKE clusters include:

  • Graphic and multimedia streaming applications: Because streaming workloads demand sustained high resource utilization, cluster administrators can create GPU-equipped node pools to provide a high-compute baseline for image processing, video transcoding, and image recognition in cloud-based multimedia processing.
  • Cloud data pipelines: The flexibility and deep integration of GKE allows for the deployment of various components in a machine learning pipeline. Administrators can leverage Kubeflow pipelines to extend TensorFlow ML models on GKE, enabling a managed cloud-based data pipeline.
  • Hybrid GCP and on-prem Kubernetes clusters: Organizations can use Google Anthos to orchestrate Kubernetes clusters in GKE and on-prem for hybrid container management.
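For the multimedia use case above, a GPU-equipped node pool can be sketched as follows; the cluster name is a placeholder, and the accelerator and machine types are examples whose availability varies by region:

```shell
# Sketch: a GPU-equipped node pool for multimedia workloads.
# GPU drivers must also be installed on the nodes (e.g., via
# Google's driver-installation DaemonSet) before pods can use them.
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster \
    --region=us-central1 \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --num-nodes=2
```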


Conclusion

The Google Kubernetes Engine provides a highly scalable, feature-packed environment for deploying dynamic applications in independent, isolated, containerized instances. GKE provides managed container orchestration, four-way autoscaling, managed cluster upgrades, and seamless hybrid cloud orchestration.

In this penultimate article of the series, we delved into the features, benefits, and use cases of GCP’s managed Kubernetes service. In the last article of the series, we’ll perform a comparative analysis of all three managed Kubernetes services (EKS, AKS, and GKE) to learn how they fare against each other on common points.
