GCP releases Spot VMs, the next generation of Preemptible VMs
Spot VMs are Compute Engine virtual machine instances that are priced lower than on-demand Compute Engine VMs. Spot VMs offer the same machine types and options as on-demand VMs, but provide no availability guarantees: they remain available only until Compute Engine requires the resources for on-demand VMs. You can use Spot VMs in your clusters and node pools to run stateless, batch, or fault-tolerant workloads that can tolerate the disruptions caused by their ephemeral nature. To learn more about Spot VMs, see Spot VMs in the Compute Engine documentation.

Benefits of Spot VMs:
- Lower pricing than on-demand Compute Engine VMs.
- Useful for stateless, fault-tolerant workloads that are resilient to the ephemeral nature of these VMs.
- Works with the cluster autoscaler and node auto-provisioning.

To maximize your cost efficiency, combine Spot VMs with the best practices for running cost-optimized Kubernetes applications on GKE.

How Spot VMs work in GKE

When you create a cluster or node pool with Spot VMs, GKE creates underlying Compute Engine Spot VMs that behave like a managed instance group. To ensure that your workloads and Jobs are processed even when no Spot VMs are available, give your clusters a mix of node pools that use Spot VMs and node pools that use on-demand Compute Engine VMs. Node pools that use on-demand VMs provide a reliable place for GKE to schedule critical system components like DNS. In particular, if you use node taints, ensure that your cluster also has at least one node pool that uses on-demand Compute Engine VMs. For information on using a node taint for Spot VMs, see Use taints and tolerations for Spot VMs. If your cluster has Pods that can't be placed on existing Spot VMs, the cluster autoscaler adds new nodes that use Spot VMs.

Using Spot VMs with GPU node pools

Ensure that your cluster has at least one non-GPU node pool that uses on-demand VMs before you add a GPU node pool that uses Spot VMs.

Modifications to Kubernetes behavior
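The node-pool setup described above can be sketched with the gcloud CLI. This is a minimal sketch, not a definitive recipe: the cluster and pool names (my-cluster, spot-pool) are hypothetical, and it assumes an authenticated gcloud CLI and an existing cluster. The --spot flag backs the pool with Spot VMs, and --node-taints adds the optional taint mentioned above so that only workloads which explicitly tolerate Spot nodes land on them.

```shell
# Sketch only; names are illustrative and the commands require an
# authenticated gcloud CLI with an existing GKE cluster.

# Create a node pool backed by Spot VMs, tainted so that only
# workloads that tolerate Spot nodes are scheduled onto it.
gcloud container node-pools create spot-pool \
    --cluster=my-cluster \
    --spot \
    --node-taints=cloud.google.com/gke-spot=true:NoSchedule

# Verify that at least one other pool uses on-demand VMs, so that
# critical system components such as DNS have a reliable home.
gcloud container node-pools list --cluster=my-cluster
```

Because the cluster autoscaler works with Spot VMs, the pool above can also be given --enable-autoscaling with --min-nodes/--max-nodes bounds.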
On clusters running GKE versions prior to 1.20, the kubelet graceful node shutdown feature is disabled; on those clusters, use the Kubernetes on GCP Node Termination Event Handler so that Pods are shut down cleanly when a Spot VM is reclaimed.
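From the workload side, a Pod that should run on the tainted Spot node pool needs a matching toleration, and it can pin itself to Spot nodes via the cloud.google.com/gke-spot node label that GKE applies to them. Below is a hedged sketch, not a canonical manifest: the Deployment and container names are illustrative, the toleration assumes the pool was created with the taint shown earlier (adjust the key/value to match your own), and the short grace period reflects the brief preemption notice Spot VMs receive.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spot-batch-worker     # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spot-batch-worker
  template:
    metadata:
      labels:
        app: spot-batch-worker
    spec:
      # Schedule only onto Spot nodes, via the label GKE applies to them.
      nodeSelector:
        cloud.google.com/gke-spot: "true"
      # Tolerate the taint applied to the Spot node pool (assumes the
      # pool was created with this taint; adjust to match your setup).
      tolerations:
      - key: cloud.google.com/gke-spot
        operator: Equal
        value: "true"
        effect: NoSchedule
      # Keep shutdown short: Spot VMs receive only a brief preemption
      # notice, so long grace periods cannot be honored.
      terminationGracePeriodSeconds: 25
      containers:
      - name: worker
        image: busybox        # placeholder image
        command: ["sh", "-c", "while true; do echo working; sleep 10; done"]
```

Workloads without this toleration fall back to the on-demand node pools, which is exactly the mixed-pool behavior recommended above.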