Running Kubernetes on AWS EC2
This page describes how to install a Kubernetes cluster on AWS.
Before you begin
To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
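How you supply these credentials depends on the installer you choose; one common convention, shown here only as an illustration, is to export them as the standard AWS environment variables:
# Placeholder values; use your own credentials from the AWS IAM console
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>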
Supported Production Grade Tools
conjure-up is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.
Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS; a minimal usage sketch follows this list.
kube-aws creates and manages Kubernetes clusters with Flatcar Linux nodes, using AWS tools: EC2, CloudFormation, and Autoscaling.
KubeOne is an open source cluster lifecycle management tool that creates, upgrades, and manages highly available Kubernetes clusters.
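As a minimal sketch of the kops workflow (the state bucket, cluster name, and zone below are placeholders; consult the kops documentation for the flags current in your release):
# Placeholders: replace the state bucket and cluster name with your own
export KOPS_STATE_STORE=s3://<your-kops-state-bucket>
export NAME=<your-cluster>.example.com
# Generate the cluster configuration, then apply it to AWS
kops create cluster --zones=us-east-1a ${NAME}
kops update cluster ${NAME} --yes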
Getting started with your cluster
Command line administration tool: kubectl
The cluster startup script will leave you with a kubernetes directory on your workstation.
Alternatively, you can download the latest Kubernetes release from this page.
Next, add the appropriate binary folder to your PATH to access kubectl:
# macOS
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
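To confirm that the binary is now found on your PATH, a quick client-only check is enough:
# Prints the kubectl client version without contacting the cluster
kubectl version --client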
An up-to-date documentation page for this tool is available here: kubectl manual
By default, kubectl will use the kubeconfig file generated during the cluster startup for authenticating against the API. For more information, please read kubeconfig files.
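If the generated kubeconfig is not in the default location (~/.kube/config), you can point kubectl at it explicitly; the path below is a placeholder:
# Use an explicit kubeconfig file (placeholder path)
export KUBECONFIG=<path/to/your/kubeconfig>
# Inspect the active context and verify connectivity to the API server
kubectl config current-context
kubectl cluster-info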
Examples
See a simple nginx example to try out your new cluster.
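For instance, a minimal nginx test can be run with standard kubectl commands; the LoadBalancer Service assumes the AWS cloud provider integration is enabled so that an ELB is provisioned:
# Run nginx and expose it through an AWS load balancer
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
# Wait for the external (ELB) address to appear
kubectl get service nginx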
The "Guestbook" application is another popular example to get started with Kubernetes: guestbook example
For more complete applications, please look in the examples directory.
Scaling the cluster
Adding and removing nodes through kubectl is not supported. You can still scale the number of nodes manually by adjusting the 'Desired' and 'Max' properties of the Auto Scaling Group that was created during the installation.
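For example, the Auto Scaling Group can be adjusted with the AWS CLI; the group name below is a placeholder for the one created by your installer:
# Raise the ceiling first if needed, then set the desired node count
aws autoscaling update-auto-scaling-group --auto-scaling-group-name <your-asg-name> --max-size 5
aws autoscaling set-desired-capacity --auto-scaling-group-name <your-asg-name> --desired-capacity 5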
Tearing down the cluster
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the kubernetes directory:
cluster/kube-down.sh
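The exact variables depend on how the cluster was provisioned. As an illustration only, a cluster created with the legacy cluster/kube-up.sh workflow typically needs at least:
# Illustrative values; re-export whatever variables your installer required
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=us-west-2a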
Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
---|---|---|---|---|---|---
AWS | kops | Debian | k8s (VPC) | docs | - | Community (@justinsb)
AWS | CoreOS | CoreOS | flannel | - | - | Community
AWS | Juju | Ubuntu | flannel, calico, canal | - | 100% | Commercial, Community
AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | docs | 100% | Commercial, Community