Well-Known Labels, Annotations and Taints
Kubernetes reserves all labels and annotations in the kubernetes.io namespace.
This document serves both as a reference to the values and as a coordination point for assigning values.
kubernetes.io/arch
Example: kubernetes.io/arch=amd64
Used on: Node
The Kubelet populates this with runtime.GOARCH as defined by Go. This can be handy if you are mixing arm and x86 nodes.
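For example, a Pod can be restricted to amd64 nodes with a nodeSelector on this label. A minimal sketch (the Pod name and container image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: amd64-only              # illustrative name
spec:
  nodeSelector:
    kubernetes.io/arch: amd64   # only schedule onto amd64 nodes
  containers:
  - name: app
    image: nginx                # illustrative image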
kubernetes.io/os
Example: kubernetes.io/os=linux
Used on: Node
The Kubelet populates this with runtime.GOOS as defined by Go. This can be handy if you are mixing operating systems in your cluster (for example: mixing Linux and Windows nodes).
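For example, to keep a workload on Linux nodes in a mixed cluster, you can select on this label. A minimal sketch (the Pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: linux-only             # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: linux    # only schedule onto Linux nodes
  containers:
  - name: app
    image: nginx               # illustrative image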
beta.kubernetes.io/arch (deprecated)
This label has been deprecated. Please use kubernetes.io/arch instead.
beta.kubernetes.io/os (deprecated)
This label has been deprecated. Please use kubernetes.io/os instead.
kubernetes.io/hostname
Example: kubernetes.io/hostname=ip-172-20-114-199.ec2.internal
Used on: Node
The Kubelet populates this label with the hostname. Note that the hostname can be changed from the "actual" hostname by passing the --hostname-override flag to the kubelet.
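Selecting on this label pins a Pod to a single node, which is usually better left to the scheduler but can be useful for debugging. A minimal sketch reusing the example hostname above (the Pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod                                          # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: ip-172-20-114-199.ec2.internal  # the example node above
  containers:
  - name: app
    image: nginx                                            # illustrative image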
beta.kubernetes.io/instance-type (deprecated)
Note: Starting in v1.17, this label is deprecated in favor of node.kubernetes.io/instance-type.
node.kubernetes.io/instance-type
Example: node.kubernetes.io/instance-type=m3.medium
Used on: Node
The Kubelet populates this with the instance type as defined by the cloudprovider. This will be set only if you are using a cloudprovider. This setting is handy if you want to target certain workloads to certain instance types, but typically you want to rely on the Kubernetes scheduler to perform resource-based scheduling. You should aim to schedule based on properties rather than on instance types (for example: require a GPU, instead of requiring a g2.2xlarge).
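If you do need to target an instance type directly, you can use node affinity on this label, as in the minimal sketch below (the Pod name and image are illustrative). For the property-based approach recommended above, you would instead request the property itself, for example an extended resource such as nvidia.com/gpu, and leave instance selection to the scheduler.

apiVersion: v1
kind: Pod
metadata:
  name: on-m3-medium                       # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node.kubernetes.io/instance-type
            operator: In
            values:
            - m3.medium                    # the example instance type above
  containers:
  - name: app
    image: nginx                           # illustrative image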
failure-domain.beta.kubernetes.io/region (deprecated)
See failure-domain.beta.kubernetes.io/zone.
Note: Starting in v1.17, this label is deprecated in favor of topology.kubernetes.io/region.
failure-domain.beta.kubernetes.io/zone (deprecated)
Example:
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1c
Used on: Node, PersistentVolume
See topology.kubernetes.io/zone, which describes how these zone and region labels are populated and used.
Note: Starting in v1.17, this label is deprecated in favor of topology.kubernetes.io/zone.
topology.kubernetes.io/region
See topology.kubernetes.io/zone.
topology.kubernetes.io/zone
Example:
topology.kubernetes.io/region=us-east-1
topology.kubernetes.io/zone=us-east-1c
Used on: Node, PersistentVolume
On the Node: The kubelet populates this with the zone information as defined by the cloudprovider. This will be set only if you are using a cloudprovider. However, you should consider setting this on the nodes if it makes sense in your topology.
On the PersistentVolume: The PersistentVolumeLabel admission controller will automatically add zone labels to PersistentVolumes, on GCE and AWS.
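Where the admission controller cannot label volumes for you, a manually labeled PersistentVolume might look like the following minimal sketch (the volume ID, name, and capacity are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-us-east-1c                       # illustrative name
  labels:
    topology.kubernetes.io/region: us-east-1
    topology.kubernetes.io/zone: us-east-1c
spec:
  capacity:
    storage: 10Gi                           # illustrative size
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0         # illustrative EBS volume ID
    fsType: ext4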
Kubernetes will automatically spread the Pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via SelectorSpreadPriority.
SelectorSpreadPriority is a best-effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogeneous zones (same number and types of nodes) to reduce the probability of unequal spreading.
The scheduler (through the VolumeZonePredicate predicate) will also ensure that Pods that claim a given volume are only placed into the same zone as that volume. Volumes cannot be attached across zones.
The actual values of zone and region don't matter. Nor is the node hierarchy rigidly defined. The expectation is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example, zones should typically avoid sharing a single network switch. The exact mapping depends on your particular infrastructure - a three-rack installation will choose a very different setup from a multi-datacenter configuration.
If PersistentVolumeLabel does not support automatic labeling of your PersistentVolumes, you should consider adding the labels manually (or adding support for PersistentVolumeLabel). With PersistentVolumeLabel, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
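These labels are also commonly used as a topologyKey. For example, the following Deployment uses pod anti-affinity so that replicas prefer to land in different zones; this is a minimal sketch, and the names and image are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-spread                # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zone-spread
  template:
    metadata:
      labels:
        app: zone-spread
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: topology.kubernetes.io/zone   # spread replicas across zones
              labelSelector:
                matchLabels:
                  app: zone-spread
      containers:
      - name: app
        image: nginx               # illustrative image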