Kubernetes-native declarative infrastructure for AWS.
What is the Cluster API Provider AWS
The Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration and management.
The API itself is shared across multiple cloud providers allowing for true AWS hybrid deployments of Kubernetes. It is built atop the lessons learned from previous cluster managers such as kops and kubicorn.
Documentation
Please see our book for in-depth documentation.
Launching a Kubernetes cluster on AWS
Check out the Cluster API Quick Start for launching a cluster on AWS.
Features
- Native Kubernetes manifests and API
- Manages the bootstrapping of VPCs, gateways, security groups and instances.
- Choice of Linux distribution using pre-baked AMIs.
- Deploys Kubernetes control planes into private subnets with a separate bastion server.
- Doesn’t use SSH for bootstrapping nodes.
- Installs only the minimal components to bootstrap a control plane and workers.
- Supports control planes on EC2 instances.
- EKS support
Compatibility with Cluster API and Kubernetes Versions
This provider’s versions are compatible with the following versions of Cluster API and support all Kubernetes versions that are supported by the corresponding Cluster API version:
| | Cluster API v1alpha4 (v0.4) | Cluster API v1beta1 (v1.x) |
|---|---|---|
| CAPA v1alpha4 (v0.7) | ✓ | ☓ |
| CAPA v1beta1 (v1.x) | ☓ | ✓ |
| CAPA v1beta2 (v2.x, main) | ☓ | ✓ |
(See Kubernetes support matrix of Cluster API versions).
Kubernetes versions with published AMIs
See amis for the list of most recently published AMIs.
clusterawsadm
The `clusterawsadm` CLI tool provides bootstrapping, AMI, EKS, and controller-related helpers. `clusterawsadm` binaries are released with each release and can be found under the assets section. `clusterawsadm` can also be installed via Homebrew on macOS and Linux.
Install the latest release using homebrew:
brew install clusterawsadm
Test to ensure the version you installed is up-to-date:
clusterawsadm version
Getting involved and contributing
Are you interested in contributing to cluster-api-provider-aws? We, the maintainers and community, would love your suggestions, contributions, and help! Also, the maintainers can be contacted at any time to learn more about how to get involved.
In the interest of getting more new people involved, we tag issues with `good first issue`. These are typically issues that have smaller scope but are good ways to start to get acquainted with the codebase.
We also encourage ALL active community participants to act as if they are maintainers, even if you don’t have “official” write permissions. This is a community effort, we are here to serve the Kubernetes community. If you have an active interest and you want to get involved, you have real power! Don’t assume that the only people who can get things done around here are the “maintainers”.
We also would love to add more “official” maintainers, so show us what you can do!
This repository uses the Kubernetes bots. See a full list of the commands here.
Build the images locally
If you want to just build the CAPA containers locally, run
REGISTRY=docker.io/my-reg make docker-build
Tilt-based development environment
See development section for details.
Implementer office hours
Maintainers hold office hours every two weeks, with sessions open to all developers working on this project.
Office hours are hosted on a zoom video chat every other Monday at 09:00 (Pacific) / 12:00 (Eastern) / 17:00 (Europe/London), and are published on the Kubernetes community meetings calendar.
Other ways to communicate with the contributors
Please check in with us in the #cluster-api-aws channel on Slack.
GitHub issues
Bugs
If you think you have found a bug please follow the instructions below.
- Please spend a small amount of time giving due diligence to the issue tracker. Your issue might be a duplicate.
- Get the logs from the cluster controllers and paste them into your issue (see the example command after this list).
- Open a new issue.
- Remember that users might be searching for your issue in the future, so please give it a meaningful title to help others.
- Feel free to reach out to the cluster-api community on the kubernetes slack.
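For example, to grab the controller logs from a default CAPA installation (a sketch: the namespace and deployment name assume the standard `capa-system` install):

kubectl logs -n capa-system deployment/capa-controller-manager > capa-controller-manager.log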
Tracking new features
We also use the issue tracker to track features. If you have an idea for a feature, or think you can help cluster-api-provider-aws become even more awesome, follow the steps below.
- Open a new issue.
- Remember that users might be searching for your issue in the future, so please give it a meaningful title to help others.
- Clearly define the use case, using concrete examples. E.g.: I type `this` and cluster-api-provider-aws does `that`.
- Some of our larger features will require some design. If you would like to include a technical design for your feature, please include it in the issue.
- After the new feature is well understood and the design agreed upon, we can start coding the feature. We would love for you to code it. So please open up a WIP (work in progress) pull request, and happy coding.
“Amazon Web Services, AWS, and the ‘Powered by AWS’ logo materials are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.”
Our Contributors
Thank you to all contributors and a special thanks to our current maintainers & reviewers:
| Maintainers | Reviewers |
|---|---|
| @richardcase (from 2020-12-04) | @cnmcavoy (from 2023-10-16) |
| @Ankitasw (from 2022-10-19) | @AverageMarcus (from 2022-10-19) |
| @dlipovetsky (from 2021-10-31) | @luthermonson (from 2023-03-08) |
| @nrb (from 2024-05-24) | @faiq (from 2023-10-16) |
| @AndiDog (from 2023-12-13) | @fiunchinho (from 2023-11-06) |
| @damdo (from 2023-03-01) | |
and the previous/emeritus maintainers & reviewers:
| Emeritus Maintainers | Emeritus Reviewers |
|---|---|
| @chuckha | @ashish-amarnath |
| @detiber | @davidewatson |
| @ncdc | @enxebre |
| @randomvariable | @ingvagabund |
| @rudoi | @michaelbeaumont |
| @sedefsavas | @sethp-nr |
| @Skarlso | @shivi28 |
| @vincepri | @dthorsen |
| | @pydctw |
All the CAPA contributors:
Getting Started
Quick Start
In this tutorial we’ll cover the basics of how to use Cluster API to create one or more Kubernetes clusters.
Installation
There are two major quickstart paths: Using clusterctl or the Cluster API Operator.
This article describes a path that uses the `clusterctl` CLI tool to handle the lifecycle of a Cluster API management cluster.
The clusterctl command line interface is specifically designed for providing a simple “day 1 experience” and a quick start with Cluster API. It automates fetching the YAML files defining provider components and installing them.
Additionally, it encodes a set of best practices for managing providers, which helps the user avoid misconfigurations and manage day 2 operations such as upgrades.
The Cluster API Operator is a Kubernetes Operator built on top of clusterctl and designed to empower cluster administrators to handle the lifecycle of Cluster API providers within a management cluster using a declarative approach. It aims to improve user experience in deploying and managing Cluster API, making it easier to handle day-to-day tasks and automate workflows with GitOps. Visit the CAPI Operator quickstart if you want to experiment with this tool.
Common Prerequisites
Install and/or configure a Kubernetes cluster
Cluster API requires an existing Kubernetes cluster accessible via kubectl. During the installation process the Kubernetes cluster will be transformed into a management cluster by installing the Cluster API provider components, so it is recommended to keep it separated from any application workload.
It is a common practice to create a temporary, local bootstrap cluster which is then used to provision a target management cluster on the selected infrastructure provider.
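A minimal sketch of that flow, assuming kind and clusterctl are installed and `<your-provider>` is replaced with a real infrastructure provider:

# Create a temporary local bootstrap cluster
kind create cluster --name bootstrap
# Turn it into a management cluster
clusterctl init --infrastructure <your-provider>
# ...use it to provision the target management cluster as a workload cluster,
# run clusterctl init against that cluster too, then pivot the Cluster API
# objects over to it:
clusterctl move --to-kubeconfig=target-management.kubeconfig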
Choose one of the options below:
- Existing Management Cluster
For production use-cases a “real” Kubernetes cluster should be used with appropriate backup and disaster recovery policies and procedures in place. The Kubernetes cluster must be at least v1.20.0.
export KUBECONFIG=<...>
OR
- Kind
kind can be used for creating a local Kubernetes cluster for development environments or for the creation of a temporary bootstrap cluster used to provision a target management cluster on the selected infrastructure provider.
The installation procedure depends on the version of kind; if you are planning to use the Docker infrastructure provider, please follow the additional instructions in the dedicated tab:
Create the kind cluster:
kind create cluster
Test to ensure the local kind cluster is ready:
kubectl cluster-info
Run the following command to create a kind config file for allowing the Docker provider to access Docker on the host:
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
Then follow the instructions for your kind version, using

kind create cluster --config kind-cluster-with-extramounts.yaml

to create the management cluster using the above file.

Create the Kind Cluster
KubeVirt is a cloud native virtualization solution. The virtual machines we’re going to create and use for the workload cluster’s nodes, are actually running within pods in the management cluster. In order to communicate with the workload cluster’s API server, we’ll need to expose it. We are using Kind which is a limited environment. The easiest way to expose the workload cluster’s API server (a pod within a node running in a VM that is itself running within a pod in the management cluster, that is running inside a Docker container), is to use a LoadBalancer service.
To allow using a LoadBalancer service, we can’t use the kind’s default CNI (kindnet), but we’ll need to install another CNI, like Calico. In order to do that, we’ll need first to initiate the kind cluster with two modifications:
- Disable the default CNI
- Add the Docker credentials to the cluster, to avoid the Docker Hub pull rate limit of the calico images; read more about it in the docker documentation, and in the kind documentation.
Create a configuration file for kind. Please notice the Docker config file path, and adjust it to your local setting:
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # the default CNI will not be installed
  disableDefaultCNI: true
nodes:
- role: control-plane
  extraMounts:
   - containerPath: /var/lib/kubelet/config.json
     hostPath: <YOUR DOCKER CONFIG FILE PATH>
EOF
Now, create the kind cluster with the configuration file:
kind create cluster --config=kind-config.yaml
Test to ensure the local kind cluster is ready:
kubectl cluster-info
Install the Calico CNI
Now we’ll need to install a CNI. In this example, we’re using Calico, but other CNIs should work as well. Please see the Calico installation guide for more details (use the “Manifest” tab). Below is an example of how to install Calico version v3.24.4.
Use the Calico manifest to create the required resources; e.g.:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml
Install clusterctl
The clusterctl CLI tool handles the lifecycle of a Cluster API management cluster.
Install clusterctl binary with curl on Linux
If you are unsure, you can determine your computer’s architecture by running `uname -a`.
Download for AMD64:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.8.5/clusterctl-linux-amd64 -o clusterctl
Download for ARM64:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.8.5/clusterctl-linux-arm64 -o clusterctl
Download for PPC64LE:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.8.5/clusterctl-linux-ppc64le -o clusterctl
Install clusterctl:
sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
Install clusterctl binary with curl on macOS
If you are unsure, you can determine your computer’s architecture by running `uname -a`.
Download for AMD64:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.8.5/clusterctl-darwin-amd64 -o clusterctl
Download for M1 CPU (“Apple Silicon”) / ARM64:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.8.5/clusterctl-darwin-arm64 -o clusterctl
Make the clusterctl binary executable.
chmod +x ./clusterctl
Move the binary into your PATH.
sudo mv ./clusterctl /usr/local/bin/clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
Install clusterctl with homebrew on macOS and Linux
Install the latest release using homebrew:
brew install clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
Install clusterctl binary with curl on Windows using PowerShell
Go to the working directory where you want clusterctl downloaded.
Download the latest release; on Windows, type:
curl.exe -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.8.5/clusterctl-windows-amd64.exe -o clusterctl.exe
Append or prepend the path of that directory to the PATH
environment variable.
Test to ensure the version you installed is up-to-date:
clusterctl.exe version
Initialize the management cluster
Now that we’ve got clusterctl installed and all the prerequisites in place, let’s transform the Kubernetes cluster into a management cluster by using `clusterctl init`.

The command accepts as input a list of providers to install; when executed for the first time, `clusterctl init` automatically adds the `cluster-api` core provider to the list, and if unspecified, it also adds the `kubeadm` bootstrap and `kubeadm` control-plane providers.
Enabling Feature Gates
Feature gates can be enabled by exporting environment variables before executing `clusterctl init`. For example, the `ClusterTopology` feature, which is required to enable support for managed topologies and ClusterClass, can be enabled via:
export CLUSTER_TOPOLOGY=true
Additional documentation about experimental features can be found in Experimental Features.
Initialization for common providers
Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied before getting started with Cluster API. See below for the expected settings for common providers.
export LINODE_TOKEN=<your-access-token>
# Initialize the management cluster
clusterctl init --infrastructure linode-linode
Download the latest binary of `clusterawsadm` from the AWS provider releases. The `clusterawsadm` command line utility assists with identity and access management (IAM) for Cluster API Provider AWS.
Download the latest release; on Linux, type:
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.0.0/clusterawsadm-linux-amd64 -o clusterawsadm
Make it executable
chmod +x clusterawsadm
Move the binary to a directory present in your PATH
sudo mv clusterawsadm /usr/local/bin
Check version to confirm installation
clusterawsadm version
Example Usage
export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Finally, initialize the management cluster
clusterctl init --infrastructure aws
Download the latest release; on macOS, type:
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.0.0/clusterawsadm-darwin-amd64 -o clusterawsadm
Or if your Mac has an M1 CPU (“Apple Silicon”):
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.0.0/clusterawsadm-darwin-arm64 -o clusterawsadm
Make it executable
chmod +x clusterawsadm
Move the binary to a directory present in your PATH
sudo mv clusterawsadm /usr/local/bin
Check version to confirm installation
clusterawsadm version
Example Usage
export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Finally, initialize the management cluster
clusterctl init --infrastructure aws
Install the latest release using homebrew:
brew install clusterawsadm
Check version to confirm installation
clusterawsadm version
Example Usage
export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Finally, initialize the management cluster
clusterctl init --infrastructure aws
Download the latest release; on Windows, type:
curl.exe -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.0.0/clusterawsadm-windows-amd64.exe -o clusterawsadm.exe
Append or prepend the path of that directory to the PATH
environment variable.
Check version to confirm installation
clusterawsadm.exe version
Example Usage in PowerShell
$Env:AWS_REGION="us-east-1" # This is used to help encode your environment variables
$Env:AWS_ACCESS_KEY_ID="<your-access-key>"
$Env:AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
$Env:AWS_SESSION_TOKEN="<session-token>" # If you are using Multi-Factor Auth.
# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
$Env:AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Finally, initialize the management cluster
clusterctl init --infrastructure aws
See the AWS provider prerequisites document for more details.
For more information about authorization, AAD, or requirements for Azure, visit the Azure provider prerequisites document.
export AZURE_SUBSCRIPTION_ID="<SubscriptionId>"
# Create an Azure Service Principal and paste the output here
export AZURE_TENANT_ID="<Tenant>"
export AZURE_CLIENT_ID="<AppId>"
export AZURE_CLIENT_ID_USER_ASSIGNED_IDENTITY=$AZURE_CLIENT_ID # for compatibility with CAPZ v1.16 templates
export AZURE_CLIENT_SECRET="<Password>"
# Settings needed for AzureClusterIdentity used by the AzureCluster
export AZURE_CLUSTER_IDENTITY_SECRET_NAME="cluster-identity-secret"
export CLUSTER_IDENTITY_NAME="cluster-identity"
export AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE="default"
# Create a secret to include the password of the Service Principal identity created in Azure
# This secret will be referenced by the AzureClusterIdentity used by the AzureCluster
kubectl create secret generic "${AZURE_CLUSTER_IDENTITY_SECRET_NAME}" --from-literal=clientSecret="${AZURE_CLIENT_SECRET}" --namespace "${AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE}"
# Finally, initialize the management cluster
clusterctl init --infrastructure azure
Create a file named cloud-config in the repo’s root directory, substituting in your own environment’s values
[Global]
api-url = <cloudstackApiUrl>
api-key = <cloudstackApiKey>
secret-key = <cloudstackSecretKey>
Create the base64 encoded credentials by catting your credentials file. This command uses your environment variables and encodes them in a value to be stored in a Kubernetes Secret.
export CLOUDSTACK_B64ENCODED_SECRET=`cat cloud-config | base64 | tr -d '\n'`
Finally, initialize the management cluster
clusterctl init --infrastructure cloudstack
export DIGITALOCEAN_ACCESS_TOKEN=<your-access-token>
export DO_B64ENCODED_CREDENTIALS="$(echo -n "${DIGITALOCEAN_ACCESS_TOKEN}" | base64 | tr -d '\n')"
# Initialize the management cluster
clusterctl init --infrastructure digitalocean
The Docker provider requires the `ClusterTopology` and `MachinePool` features to deploy ClusterClass-based clusters.

We are only supporting ClusterClass-based cluster templates in this quickstart as ClusterClass makes it possible to adapt configuration based on Kubernetes version. This is required to install Kubernetes clusters < v1.24 and for the upgrade from v1.23 to v1.24, as we have to use different cgroupDrivers depending on the Kubernetes version.
# Enable the experimental Cluster topology feature.
export CLUSTER_TOPOLOGY=true
# Initialize the management cluster
clusterctl init --infrastructure docker
In order to initialize the Equinix Metal Provider (formerly Packet) you have to expose the environment variable `PACKET_API_KEY`. This variable is used to authorize the infrastructure provider manager against the Equinix Metal API. You can retrieve your token directly from the Equinix Metal Console.
export PACKET_API_KEY="34ts3g4s5g45gd45dhdh"
clusterctl init --infrastructure packet
# Create the base64 encoded credentials by catting your credentials json.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
# Finally, initialize the management cluster
clusterctl init --infrastructure gcp
Please visit the Hetzner project.
Please visit the Hivelocity project.
In order to initialize the IBM Cloud Provider you have to expose the environment variable `IBMCLOUD_API_KEY`. This variable is used to authorize the infrastructure provider manager against the IBM Cloud API. To create one from the UI, refer here.
export IBMCLOUD_API_KEY=<your_api_key>
# Finally, initialize the management cluster
clusterctl init --infrastructure ibmcloud
The IONOS Cloud credentials are configured in the `IONOSCloudCluster`. Therefore, there is no need to specify them during the provider initialization.
clusterctl init --infrastructure ionoscloud-ionoscloud
For more information, please visit the IONOS Cloud project.
# Initialize the management cluster
clusterctl init --infrastructure k0sproject-k0smotron
# Initialize the management cluster
clusterctl init --infrastructure kubekey
Please visit the KubeVirt project for more information.
As described above, we want to use a LoadBalancer service in order to expose the workload cluster’s API server. In the example below, we will use the MetalLB solution to implement load balancing for our kind cluster. Other solutions should work as well.
Install MetalLB for load balancing
Install MetalLB, as described here; for example:
METALLB_VER=$(curl "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r ".tag_name")
kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VER}/config/manifests/metallb-native.yaml"
kubectl wait pods -n metallb-system -l app=metallb,component=controller --for=condition=Ready --timeout=10m
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for=condition=Ready --timeout=2m
Now, we’ll create the `IPAddressPool` and the `L2Advertisement` custom resources. The script below creates the CRs with the right addresses, matching the kind cluster’s addresses:
GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
cat <<EOF | sed -E "s|172.19|${NET_IP}|g" | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: capi-ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
EOF
Install KubeVirt on the kind cluster
# get KubeVirt version
KV_VER=$(curl "https://api.github.com/repos/kubevirt/kubevirt/releases/latest" | jq -r ".tag_name")
# deploy required CRDs
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-operator.yaml"
# deploy the KubeVirt custom resource
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-cr.yaml"
kubectl wait -n kubevirt kv kubevirt --for=condition=Available --timeout=10m
Initialize the management cluster with the KubeVirt Provider
clusterctl init --infrastructure kubevirt
Please visit the Metal3 project.
Please follow the Cluster API Provider for Nutanix Getting Started Guide
Please follow the Cluster API Provider for Oracle Cloud Infrastructure (OCI) Getting Started Guide
# Initialize the management cluster
clusterctl init --infrastructure openstack
export OSC_SECRET_KEY=<your-secret-key>
export OSC_ACCESS_KEY=<your-access-key>
export OSC_REGION=<your-region>
# Create namespace
kubectl create namespace cluster-api-provider-outscale-system
# Create secret
kubectl create secret generic cluster-api-provider-outscale --from-literal=access_key=${OSC_ACCESS_KEY} --from-literal=secret_key=${OSC_SECRET_KEY} --from-literal=region=${OSC_REGION} -n cluster-api-provider-outscale-system
# Initialize the management cluster
clusterctl init --infrastructure outscale
The Proxmox credentials are optional; if you do not set them here, they can be set in the `ProxmoxCluster` resource when creating a cluster.
# The host for the Proxmox cluster
export PROXMOX_URL="https://pve.example:8006"
# The Proxmox token ID to access the remote Proxmox endpoint
export PROXMOX_TOKEN='root@pam!capi'
# The secret associated with the token ID
# You may want to set this in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` so your password is not in
# bash history
export PROXMOX_SECRET="1234-1234-1234-1234"
# Finally, initialize the management cluster
clusterctl init --infrastructure proxmox --ipam in-cluster
For more information about the CAPI provider for Proxmox, see the Proxmox project.
Please follow the Cluster API Provider for Cloud Director Getting Started Guide
# Initialize the management cluster
clusterctl init --infrastructure vcd
clusterctl init --infrastructure vcluster
Please follow the Cluster API Provider for vcluster Quick Start Guide
# Initialize the management cluster
clusterctl init --infrastructure virtink
# The username used to access the remote vSphere endpoint
export VSPHERE_USERNAME="vi-admin@vsphere.local"
# The password used to access the remote vSphere endpoint
# You may want to set this in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` so your password is not in
# bash history
export VSPHERE_PASSWORD="admin!23"
# Finally, initialize the management cluster
clusterctl init --infrastructure vsphere
For more information about prerequisites, credentials management, or permissions for vSphere, see the vSphere project.
export VULTR_API_KEY=<your_api_key>
# initialize the management cluster
clusterctl init --infrastructure vultr
The output of `clusterctl init` is similar to this:
Fetching providers
Installing cert-manager Version="v1.11.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.0.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v1.0.0" TargetNamespace="capd-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
Create your first workload cluster
Once the management cluster is ready, you can create your first workload cluster.
Preparing the workload cluster configuration
The `clusterctl generate cluster` command returns a YAML template for creating a workload cluster.
Required configuration for common providers
Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied before configuring a cluster with Cluster API. Instructions are provided for common providers below.
Otherwise, you can look at the `clusterctl generate cluster` command documentation for details about how to discover the list of variables required by a cluster template.
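For example, to list the variables a template needs before generating it (here for the AWS provider; the cluster name is just a placeholder):

clusterctl generate cluster capi-quickstart --infrastructure aws --list-variables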
export LINODE_REGION=us-ord
export LINODE_TOKEN=<your linode PAT>
export LINODE_CONTROL_PLANE_MACHINE_TYPE=g6-standard-2
export LINODE_MACHINE_TYPE=g6-standard-2
See the Akamai (Linode) provider for more information.
export AWS_REGION=us-east-1
export AWS_SSH_KEY_NAME=default
# Select instance types
export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
export AWS_NODE_MACHINE_TYPE=t3.large
See the AWS provider prerequisites document for more details.
# Name of the Azure datacenter location. Change this value to your desired location.
export AZURE_LOCATION="centralus"
# Select VM types.
export AZURE_CONTROL_PLANE_MACHINE_TYPE="Standard_D2s_v3"
export AZURE_NODE_MACHINE_TYPE="Standard_D2s_v3"
# [Optional] Select resource group. The default value is ${CLUSTER_NAME}.
export AZURE_RESOURCE_GROUP="<ResourceGroupName>"
A Cluster API compatible image must be available in your CloudStack installation. For instructions on how to build a compatible image see image-builder (CloudStack). Prebuilt images can be found here.
To see all required CloudStack environment variables execute:
clusterctl generate cluster --infrastructure cloudstack --list-variables capi-quickstart
Apart from the script, the following CloudStack environment variables are required.
# Set this to the name of the zone in which to deploy the cluster
export CLOUDSTACK_ZONE_NAME=<zone name>
# The name of the network on which the VMs will reside
export CLOUDSTACK_NETWORK_NAME=<network name>
# The endpoint of the workload cluster
export CLUSTER_ENDPOINT_IP=<cluster endpoint address>
export CLUSTER_ENDPOINT_PORT=<cluster endpoint port>
# The service offering of the control plane nodes
export CLOUDSTACK_CONTROL_PLANE_MACHINE_OFFERING=<control plane service offering name>
# The service offering of the worker nodes
export CLOUDSTACK_WORKER_MACHINE_OFFERING=<worker node service offering name>
# The capi compatible template to use
export CLOUDSTACK_TEMPLATE_NAME=<template name>
# The ssh key to use to log into the nodes
export CLOUDSTACK_SSH_KEY_NAME=<ssh key name>
A full configuration reference can be found in configuration.md.
A ClusterAPI compatible image must be available in your DigitalOcean account. For instructions on how to build a compatible image see image-builder.
export DO_REGION=nyc1
export DO_SSH_KEY_FINGERPRINT=<your-ssh-key-fingerprint>
export DO_CONTROL_PLANE_MACHINE_TYPE=s-2vcpu-2gb
export DO_CONTROL_PLANE_MACHINE_IMAGE=<your-capi-image-id>
export DO_NODE_MACHINE_TYPE=s-2vcpu-2gb
export DO_NODE_MACHINE_IMAGE=<your-capi-image-id>
The Docker provider does not require additional configurations for cluster templates.
However, if you require special network settings you can set the following environment variables:
# The list of service CIDR, default ["10.128.0.0/12"]
export SERVICE_CIDR=["10.96.0.0/12"]
# The list of pod CIDR, default ["192.168.0.0/16"]
export POD_CIDR=["192.168.0.0/16"]
# The service domain, default "cluster.local"
export SERVICE_DOMAIN="k8s.test"
It is also possible, but not recommended, to disable the Pod Security Standard that is enabled by default:
export POD_SECURITY_STANDARD_ENABLED="false"
There are several required variables you need to set to create a cluster. There are also a few optional tunables if you’d like to change the OS or CIDRs used.
# Required (made up examples shown)
# The project where your cluster will be placed.
# You have to get one from the Equinix Metal Console if you don't have one already.
export PROJECT_ID="2b59569f-10d1-49a6-a000-c2fb95a959a1"
# This can help to take advantage of automated, interconnected bare metal across our global metros.
export METRO="da"
# What plan to use for your control plane nodes
export CONTROLPLANE_NODE_TYPE="m3.small.x86"
# What plan to use for your worker nodes
export WORKER_NODE_TYPE="m3.small.x86"
# The ssh key you would like to have access to the nodes
export SSH_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvMgVEubPLztrvVKgNPnRe9sZSjAqaYj9nmCkgr4PdK username@computer"
export CLUSTER_NAME="my-cluster"
# Optional (defaults shown)
export NODE_OS="ubuntu_18_04"
export POD_CIDR="192.168.0.0/16"
export SERVICE_CIDR="172.26.0.0/16"
# Only relevant if using the kube-vip flavor
export KUBE_VIP_VERSION="v0.5.0"
# Name of the GCP datacenter location. Change this value to your desired location
export GCP_REGION="<GCP_REGION>"
export GCP_PROJECT="<GCP_PROJECT>"
# Make sure to use same Kubernetes version here as building the GCE image
export KUBERNETES_VERSION=1.23.3
# This is the image you built. See https://github.com/kubernetes-sigs/image-builder
export IMAGE_ID=projects/$GCP_PROJECT/global/images/<built image>
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export GCP_NETWORK_NAME=<GCP_NETWORK_NAME or default>
export CLUSTER_NAME="<CLUSTER_NAME>"
See the GCP provider for more information.
# Required environment variables for VPC
# VPC region
export IBMVPC_REGION=us-south
# VPC zone within the region
export IBMVPC_ZONE=us-south-1
# ID of the resource group in which the VPC will be created
export IBMVPC_RESOURCEGROUP=<your-resource-group-id>
# Name of the VPC
export IBMVPC_NAME=ibm-vpc-0
export IBMVPC_IMAGE_ID=<you-image-id>
# Profile for the virtual server instances
export IBMVPC_PROFILE=bx2-4x16
export IBMVPC_SSHKEY_ID=<your-sshkey-id>
# Required environment variables for PowerVS
export IBMPOWERVS_SSHKEY_NAME=<your-ssh-key>
# Internal and external IP of the network
export IBMPOWERVS_VIP=<internal-ip>
export IBMPOWERVS_VIP_EXTERNAL=<external-ip>
export IBMPOWERVS_VIP_CIDR=29
export IBMPOWERVS_IMAGE_NAME=<your-capi-image-name>
# ID of the PowerVS service instance
export IBMPOWERVS_SERVICE_INSTANCE_ID=<service-instance-id>
export IBMPOWERVS_NETWORK_NAME=<your-capi-network-name>
Please visit the IBM Cloud provider for more information.
A ClusterAPI compatible image must be available in your IONOS Cloud contract. For instructions on how to build a compatible Image, see our docs.
# The token which is used to authenticate against the IONOS Cloud API
export IONOS_TOKEN=<your-token>
# The datacenter ID where the cluster will be deployed
export IONOSCLOUD_DATACENTER_ID="<your-datacenter-id>"
# The IP of the control plane endpoint
export CONTROL_PLANE_ENDPOINT_IP=10.10.10.4
# The location of the data center where the cluster will be deployed
export CONTROL_PLANE_ENDPOINT_LOCATION=de/txl
# The image ID of the custom image that will be used for the VMs
export IONOSCLOUD_MACHINE_IMAGE_ID="<your-image-id>"
# The SSH key that will be used to access the VMs
export IONOSCLOUD_MACHINE_SSH_KEYS="<your-ssh-key>"
For more configuration options check our list of available variables
Please visit the K0smotron provider for more information.
# Required environment variables
# The KKZONE is used to specify where to download the binaries. (e.g. "", "cn")
export KKZONE=""
# The SSH user name of the Linux user on all instances. (e.g. root, ubuntu)
export USER_NAME=<your-linux-user>
# The SSH password of the Linux user on all instances.
export PASSWORD=<your-linux-user-password>
# The SSH IP addresses of all instances. (e.g. "[{address: 192.168.100.3}, {address: 192.168.100.4}]")
export INSTANCES=<your-linux-ip-address>
# The cluster control plane VIP. (e.g. "192.168.100.100")
export CONTROL_PLANE_ENDPOINT_IP=<your-control-plane-virtual-ip>
Please visit the KubeKey provider for more information.
export CAPK_GUEST_K8S_VERSION="v1.23.10"
export CRI_PATH="/var/run/containerd/containerd.sock"
export NODE_VM_IMAGE_TEMPLATE="quay.io/capk/ubuntu-2004-container-disk:${CAPK_GUEST_K8S_VERSION}"
Please visit the KubeVirt project for more information.
Note: If you are running CAPM3 release prior to v0.5.0, make sure to export the following environment variables. However, you don’t need them to be exported if you use CAPM3 release v0.5.0 or higher.
# The URL of the kernel to deploy.
export DEPLOY_KERNEL_URL="http://172.22.0.1:6180/images/ironic-python-agent.kernel"
# The URL of the ramdisk to deploy.
export DEPLOY_RAMDISK_URL="http://172.22.0.1:6180/images/ironic-python-agent.initramfs"
# The URL of the Ironic endpoint.
export IRONIC_URL="http://172.22.0.1:6385/v1/"
# The URL of the Ironic inspector endpoint.
export IRONIC_INSPECTOR_URL="http://172.22.0.1:5050/v1/"
# Do not use a dedicated CA certificate for Ironic API. Any value provided in this variable disables additional CA certificate validation.
# To provide a CA certificate, leave this variable unset. If unset, then IRONIC_CA_CERT_B64 must be set.
export IRONIC_NO_CA_CERT=true
# Disables basic authentication for Ironic API. Any value provided in this variable disables authentication.
# To enable authentication, leave this variable unset. If unset, then IRONIC_USERNAME and IRONIC_PASSWORD must be set.
export IRONIC_NO_BASIC_AUTH=true
# Disables basic authentication for Ironic inspector API. Any value provided in this variable disables authentication.
# To enable authentication, leave this variable unset. If unset, then IRONIC_INSPECTOR_USERNAME and IRONIC_INSPECTOR_PASSWORD must be set.
export IRONIC_INSPECTOR_NO_BASIC_AUTH=true
Please visit the Metal3 getting started guide for more details.
A ClusterAPI compatible image must be available in your Nutanix image library. For instructions on how to build a compatible image see image-builder.
To see all required Nutanix environment variables execute:
clusterctl generate cluster --infrastructure nutanix --list-variables capi-quickstart
A ClusterAPI compatible image must be available in your OpenStack. For instructions on how to build a compatible image see image-builder. Depending on your OpenStack and underlying hypervisor the following options might be of interest:
To see all required OpenStack environment variables execute:
clusterctl generate cluster --infrastructure openstack --list-variables capi-quickstart
The following script can be used to export some of them:
wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
source /tmp/env.rc <path/to/clouds.yaml> <cloud>
Apart from the script, the following OpenStack environment variables are required.
# The list of nameservers for the OpenStack Subnet being created.
# Set this value when you need to create a new network/subnet and access through DNS is required.
export OPENSTACK_DNS_NAMESERVERS=<dns nameserver>
# FailureDomain is the failure domain the machine will be created in.
export OPENSTACK_FAILURE_DOMAIN=<availability zone name>
# The flavor reference for the flavor for your server instance.
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=<flavor>
# The flavor reference for the flavor for your server instance.
export OPENSTACK_NODE_MACHINE_FLAVOR=<flavor>
# The name of the image to use for your server instance. If RootVolume is specified, this is ignored and rootVolume is used directly.
export OPENSTACK_IMAGE_NAME=<image name>
# The SSH key pair name
export OPENSTACK_SSH_KEY_NAME=<ssh key pair name>
# The external network
export OPENSTACK_EXTERNAL_NETWORK_ID=<external network ID>
A full configuration reference can be found in configuration.md.
A ClusterAPI compatible image must be available in your Outscale account. For instructions on how to build a compatible image see image-builder.
# The outscale root disk iops
export OSC_IOPS="<IOPS>"
# The outscale root disk size
export OSC_VOLUME_SIZE="<VOLUME_SIZE>"
# The outscale root disk volumeType
export OSC_VOLUME_TYPE="<VOLUME_TYPE>"
# The outscale key pair
export OSC_KEYPAIR_NAME="<KEYPAIR_NAME>"
# The outscale subregion name
export OSC_SUBREGION_NAME="<SUBREGION_NAME>"
# The outscale vm type
export OSC_VM_TYPE="<VM_TYPE>"
# The outscale image name
export OSC_IMAGE_NAME="<IMAGE_NAME>"
A ClusterAPI compatible image must be available in your Proxmox cluster. For instructions on how to build a compatible VM template see image-builder.
# The node that hosts the VM template to be used to provision VMs
export PROXMOX_SOURCENODE="pve"
# The template VM ID used for cloning VMs
export TEMPLATE_VMID=100
# The ssh authorized keys used to ssh to the machines.
export VM_SSH_KEYS="ssh-ed25519 ..., ssh-ed25519 ..."
# The IP address used for the control plane endpoint
export CONTROL_PLANE_ENDPOINT_IP=10.10.10.4
# The IP ranges for Cluster nodes
export NODE_IP_RANGES="[10.10.10.5-10.10.10.50, 10.10.10.55-10.10.10.70]"
# The gateway for the machines network-config.
export GATEWAY="10.10.10.1"
# Subnet Mask in CIDR notation for your node IP ranges
export IP_PREFIX=24
# The Proxmox network device for VMs
export BRIDGE="vmbr1"
# The dns nameservers for the machines network-config.
export DNS_SERVERS="[8.8.8.8,8.8.4.4]"
# The Proxmox nodes used for VM deployments
export ALLOWED_NODES="[pve1,pve2,pve3]"
For more information about prerequisites and advanced setups for Proxmox, see the Proxmox getting started guide.
export TINKERBELL_IP=<hegel ip>
For more information please visit Tinkerbell getting started guide.
A ClusterAPI compatible image must be available in your VCD catalog. For instructions on how to build and upload a compatible image see CAPVCD
To see all required VCD environment variables execute:
clusterctl generate cluster --infrastructure vcd --list-variables capi-quickstart
export CLUSTER_NAME=kind
export CLUSTER_NAMESPACE=vcluster
export KUBERNETES_VERSION=1.23.4
export HELM_VALUES="service:\n type: NodePort"
Please see the vcluster installation instructions for more details.
To see all required Virtink environment variables execute:
clusterctl generate cluster --infrastructure virtink --list-variables capi-quickstart
See the Virtink provider document for more details.
It is required to use an official CAPV machine image for your vSphere VM templates. See uploading CAPV machine images for instructions on how to do this.
# The vCenter server IP or FQDN
export VSPHERE_SERVER="10.0.0.1"
# The vSphere datacenter to deploy the management cluster on
export VSPHERE_DATACENTER="SDDC-Datacenter"
# The vSphere datastore to deploy the management cluster on
export VSPHERE_DATASTORE="vsanDatastore"
# The VM network to deploy the management cluster on
export VSPHERE_NETWORK="VM Network"
# The vSphere resource pool for your VMs
export VSPHERE_RESOURCE_POOL="*/Resources"
# The VM folder for your VMs. Set to "" to use the root vSphere folder
export VSPHERE_FOLDER="vm"
# The VM template to use for your VMs
export VSPHERE_TEMPLATE="ubuntu-1804-kube-v1.17.3"
# The public ssh authorized key on all machines
export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
# The certificate thumbprint for the vCenter server
export VSPHERE_TLS_THUMBPRINT="97:48:03:8D:78:A9..."
# The storage policy to be used (optional). Set to "" if not required
export VSPHERE_STORAGE_POLICY="policy-one"
# The IP address used for the control plane endpoint
export CONTROL_PLANE_ENDPOINT_IP="1.2.3.4"
For more information about prerequisites, credentials management, or permissions for vSphere, see the vSphere getting started guide.
A Cluster API compatible image must be available in your Vultr account. For instructions on how to build a compatible image see image-builder for Vultr
export CLUSTER_NAME=<clustername>
export KUBERNETES_VERSION=v1.28.9
export CONTROL_PLANE_MACHINE_COUNT=1
export CONTROL_PLANE_PLANID=<plan_id>
export WORKER_MACHINE_COUNT=1
export WORKER_PLANID=<plan_id>
export MACHINE_IMAGE=<snapshot_id>
export REGION=<region>
export PLANID=<plan_id>
export VPCID=<vpc_id>
export SSHKEY_ID=<sshKey_id>
Generating the cluster configuration
For the purpose of this tutorial, we’ll name our cluster capi-quickstart.
clusterctl generate cluster capi-quickstart --flavor development \
--kubernetes-version v1.31.0 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> capi-quickstart.yaml
export CLUSTER_NAME=kind
export CLUSTER_NAMESPACE=vcluster
export KUBERNETES_VERSION=1.28.0
export HELM_VALUES="service:\n type: NodePort"
kubectl create namespace ${CLUSTER_NAMESPACE}
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
As we described above, in this tutorial we will use a LoadBalancer service in order to expose the API server of the workload cluster, so we want to use the load balancer (lb) template rather than the default one. We’ll use clusterctl’s `--flavor` flag for that:
clusterctl generate cluster capi-quickstart \
--infrastructure="kubevirt" \
--flavor lb \
--kubernetes-version ${CAPK_GUEST_K8S_VERSION} \
--control-plane-machine-count=1 \
--worker-machine-count=1 \
> capi-quickstart.yaml
clusterctl generate cluster capi-quickstart \
--infrastructure azure \
--kubernetes-version v1.31.0 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> capi-quickstart.yaml
# Cluster templates authenticate with Workload Identity by default. Modify the AzureClusterIdentity for ServicePrincipal authentication.
# See https://capz.sigs.k8s.io/topics/identities for more details.
yq -i "with(. | select(.kind == \"AzureClusterIdentity\"); .spec.type |= \"ServicePrincipal\" | .spec.clientSecret.name |= \"${AZURE_CLUSTER_IDENTITY_SECRET_NAME}\" | .spec.clientSecret.namespace |= \"${AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE}\")" capi-quickstart.yaml
clusterctl generate cluster capi-quickstart \
--kubernetes-version v1.31.0 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> capi-quickstart.yaml
This creates a YAML file named `capi-quickstart.yaml` with a predefined list of Cluster API objects: Cluster, Machines, Machine Deployments, etc. The file can be modified later using your editor of choice.
See clusterctl generate cluster for more details.
Apply the workload cluster
When ready, run the following command to apply the cluster manifest.
kubectl apply -f capi-quickstart.yaml
The output is similar to this:
cluster.cluster.x-k8s.io/capi-quickstart created
dockercluster.infrastructure.cluster.x-k8s.io/capi-quickstart created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane created
machinedeployment.cluster.x-k8s.io/capi-quickstart-md-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-quickstart-md-0 created
Accessing the workload cluster
The cluster will now start provisioning. You can check status with:
kubectl get cluster
You can also get an “at a glance” view of the cluster and its resources by running:
clusterctl describe cluster capi-quickstart
and see an output similar to this:
NAME PHASE AGE VERSION
capi-quickstart Provisioned 8s v1.31.0
To verify the first control plane is up:
kubectl get kubeadmcontrolplane
You should see output similar to this:
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
capi-quickstart-g2trk capi-quickstart true 3 3 3 4m7s v1.31.0
After the first control plane node is up and running, we can retrieve the workload cluster Kubeconfig.
clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
kind get kubeconfig --name capi-quickstart > capi-quickstart.kubeconfig
Install a Cloud Provider
The Kubernetes in-tree cloud provider implementations are being removed in favor of external cloud providers (also referred to as “out-of-tree”). This requires deploying a new component called the cloud-controller-manager which is responsible for running all the cloud specific controllers that were previously run in the kube-controller-manager. To learn more, see this blog post.
Install the official cloud-provider-azure Helm chart on the workload cluster:
helm install --kubeconfig=./capi-quickstart.kubeconfig --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=capi-quickstart --set cloudControllerManager.clusterCIDR="192.168.0.0/16"
For more information, see the CAPZ book.
Before deploying the OpenStack external cloud provider, configure the `cloud.conf` file for integration with your OpenStack environment:
cat > cloud.conf <<EOF
[Global]
auth-url=<your_auth_url>
application-credential-id=<your_credential_id>
application-credential-secret=<your_credential_secret>
region=<your_region>
domain-name=<your_domain_name>
EOF
For more detailed information on configuring the `cloud.conf` file, see the OpenStack Cloud Controller Manager documentation.
Next, create a Kubernetes secret using this configuration to securely store your cloud environment details. You can create this secret for example with:
kubectl --kubeconfig=./capi-quickstart.kubeconfig -n kube-system create secret generic cloud-config --from-file=cloud.conf
Now, you are ready to deploy the external cloud provider!
kubectl apply --kubeconfig=./capi-quickstart.kubeconfig -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
kubectl apply --kubeconfig=./capi-quickstart.kubeconfig -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
kubectl apply --kubeconfig=./capi-quickstart.kubeconfig -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
Alternatively, refer to the helm chart.
Deploy a CNI solution
Calico is used here as an example.
Install the official Calico Helm chart on the workload cluster:
helm repo add projectcalico https://docs.tigera.io/calico/charts --kubeconfig=./capi-quickstart.kubeconfig && \
helm install calico projectcalico/tigera-operator --kubeconfig=./capi-quickstart.kubeconfig -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/calico/values.yaml --namespace tigera-operator --create-namespace
After a short while, our nodes should be running and in the `Ready` state. Let’s check the status using `kubectl get nodes`:
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
Calico is not required for vcluster.
Before deploying the Calico CNI, make sure the VMs are running:
kubectl get vm
If our new VMs are running, we should see a response similar to this:
NAME AGE STATUS READY
capi-quickstart-control-plane-7s945 167m Running True
capi-quickstart-md-0-zht5j 164m Running True
We can also read the virtual machine instances:
kubectl get vmi
The output will be similar to:
NAME AGE PHASE IP NODENAME READY
capi-quickstart-control-plane-7s945 167m Running 10.244.82.16 kind-control-plane True
capi-quickstart-md-0-zht5j 164m Running 10.244.82.17 kind-control-plane True
Since our workload cluster is running within the kind cluster, we need to prevent conflicts between the kind (management) cluster’s CNI and the workload cluster’s CNI. The following modifications to the default Calico settings are enough for these two CNIs to work in (effectively) the same environment.
- Change the CIDR to a non-conflicting range
- Change the value of the `CLUSTER_TYPE` environment variable to `k8s`
- Change the value of the `CALICO_IPV4POOL_IPIP` environment variable to `Never`
- Change the value of the `CALICO_IPV4POOL_VXLAN` environment variable to `Always`
- Add the `FELIX_VXLANPORT` environment variable with the value of a non-conflicting port, e.g. `"6789"`.
The following script downloads the Calico manifest and modifies the required fields. The CIDR and the port values are examples.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml -o calico-workload.yaml
sed -i -E 's|^( +)# (- name: CALICO_IPV4POOL_CIDR)$|\1\2|g;'\
's|^( +)# ( value: )"192.168.0.0/16"|\1\2"10.243.0.0/16"|g;'\
'/- name: CLUSTER_TYPE/{ n; s/( +value: ").+/\1k8s"/g };'\
'/- name: CALICO_IPV4POOL_IPIP/{ n; s/value: "Always"/value: "Never"/ };'\
'/- name: CALICO_IPV4POOL_VXLAN/{ n; s/value: "Never"/value: "Always"/};'\
'/# Set Felix endpoint to host default action to ACCEPT./a\ - name: FELIX_VXLANPORT\n value: "6789"' \
calico-workload.yaml
Now, deploy the Calico CNI on the workload cluster:
kubectl --kubeconfig=./capi-quickstart.kubeconfig create -f calico-workload.yaml
After a short while, our nodes should be running and in the `Ready` state. Let’s check the status using `kubectl get nodes`:
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
kubectl --kubeconfig=./capi-quickstart.kubeconfig \
apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
After a short while, our nodes should be running and in the `Ready` state. Let’s check the status using `kubectl get nodes`:
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
capi-quickstart-vs89t-gmbld Ready control-plane 5m33s v1.31.0
capi-quickstart-vs89t-kf9l5 Ready control-plane 6m20s v1.31.0
capi-quickstart-vs89t-t8cfn Ready control-plane 7m10s v1.31.0
capi-quickstart-md-0-55x6t-5649968bd7-8tq9v Ready <none> 6m5s v1.31.0
capi-quickstart-md-0-55x6t-5649968bd7-glnjd Ready <none> 6m9s v1.31.0
capi-quickstart-md-0-55x6t-5649968bd7-sfzp6 Ready <none> 6m9s v1.31.0
Clean Up
Delete workload cluster.
kubectl delete cluster capi-quickstart
Delete management cluster
kind delete cluster
Next steps
- Create a second workload cluster. Simply follow the steps outlined above, but remember to provide a different name for your second workload cluster.
- Deploy applications to your workload cluster. Use the CNI deployment steps for pointers.
- See the clusterctl documentation for more detail about clusterctl supported actions.
AWS Machine Images for CAPA Clusters
CAPA requires a “machine image” containing pre-installed, matching versions of kubeadm and kubelet.
EKS Clusters
For an EKS cluster the default behaviour is to retrieve the AMI to use from SSM. This ensures the recommended Amazon Linux AMI is used (see here).
Instead of using the auto-resolved AMIs, an appropriate custom image ID for the Kubernetes version can be set in the `AWSMachineTemplate` spec.
Non-EKS Clusters
By default the machine image is auto-resolved by CAPA to a public AMI that matches the Kubernetes version in the `KubeadmControlPlane` or `MachineDeployment` spec. These AMIs are published in a community-owned AWS account. See pre-built public AMIs for details of the CAPA project published images.
IMPORTANT: The project doesn’t recommend using the public AMIs for production use. Instead, it’s recommended that you build your own AMIs for the Kubernetes versions you want to use. The AMI can then be specified in the `AWSMachineTemplate` spec. Custom images can be created using the image-builder project.
Pre-built Kubernetes AMIs
New AMIs are built on a best effort basis when a new Kubernetes version is released for each supported OS distribution and then published to supported regions.
AMI Publication Policy
- AMIs should only be used for non-production usage. For production environments we recommend that you build and maintain your own AMIs using the image-builder project.
- AMIs will only be published for the latest release series and 2 previous release series. For example, if the current release series is v1.30 then AMIs will only be published for v1.30, v1.29, v1.28.
- When there is a new k8s release series then any AMIs no longer covered by the previous point will be deleted. For example, when v1.31.0 is published then any AMIs for the v1.28 release series will be deleted.
- Existing AMIs are not updated for security fixes and it is recommended to always use the latest patch version for the Kubernetes version you want to run.
NOTE: As the old community images were located in an AWS account that the project no longer has access to and because those AMIs have been automatically deleted, we have started publishing images again starting from Kubernetes v1.29.9.
Finding AMIs
The clusterawsadm ami list command lists pre-built reference AMIs by Kubernetes version, OS, or AWS region. See clusterawsadm ami list for details.
If you are using a version of clusterawsadm prior to v2.6.2 then you will need to explicitly specify the owner-id for the community account: clusterawsadm ami list --owner-id 819546954734.
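For example, to list the reference AMIs for a particular Kubernetes version, OS, and region (the values below are illustrative):
clusterawsadm ami list --kubernetes-version v1.30.0 --os ubuntu-22.04 --region us-west-2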
Supported OS Distributions
- Ubuntu (ubuntu-22.04, ubuntu-24.04)
- Flatcar (flatcar-stable)
Note: CentOS (centos-7) and Amazon Linux 2 (amazon-2) were supported, but there are some issues with the AMI build that need fixing. See this issue for details.
Supported AWS Regions
- ap-northeast-1
- ap-northeast-2
- ap-south-1
- ap-southeast-1
- ap-southeast-2
- ca-central-1
- eu-central-1
- eu-west-1
- eu-west-2
- eu-west-3
- sa-east-1
- us-east-1
- us-east-2
- us-west-1
- us-west-2
Custom Kubernetes AMIs
Cluster API uses the Kubernetes Image Builder tools. You should use the AWS images from that project as a starting point for your custom image.
The Image Builder Book explains how to build the images defined in that repository, with instructions for AWS CAPI Images in particular.
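As a minimal sketch, building an Ubuntu-based CAPI AMI from a checkout of image-builder looks roughly like this (the exact make target name follows image-builder's conventions and may vary between releases; consult the Image Builder Book for the current list):
# from a checkout of kubernetes-sigs/image-builder
cd images/capi
# build an Ubuntu 22.04 CAPI AMI in the AWS account/region configured in your environment
make build-ami-ubuntu-2204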
Operating system requirements
For a custom image to work with Cluster API, it must meet the operating system requirements of the bootstrap provider. For example, the default kubeadm bootstrap provider has a set of preflight checks that a VM is expected to pass before it can join the cluster.
Kubernetes version requirements
The pre-built public images are each built to support a specific version of Kubernetes. When using custom images, make sure to match the image to the version: field of the KubeadmControlPlane and MachineDeployment in the YAML template for your workload cluster.
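For illustration, here is an abbreviated KubeadmControlPlane showing the version: field that must match your custom image; the names reference the AWSMachineTemplate example later on this page, and unrelated required fields are omitted:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: capa-image-id-example-control-plane
spec:
  replicas: 3
  version: v1.31.0 # must match the Kubernetes version the custom AMI was built for
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSMachineTemplate
      name: capa-image-id-example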
To upgrade to a new Kubernetes release with custom images requires this preparation:
- create a new custom image which supports the Kubernetes release version
- copy the existing AWSMachineTemplate and change its ami: section to reference the new custom image
- create the new AWSMachineTemplate on the management cluster
- modify the existing KubeadmControlPlane and MachineDeployment to reference the new AWSMachineTemplate and update the version: field to match
See Upgrading workload clusters for more details.
Creating a cluster from a custom image
To use a custom image, it needs to be referenced in an ami: section of your AWSMachineTemplate.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
name: capa-image-id-example
namespace: default
spec:
template:
spec:
ami:
id: ami-09709369c53539c11
iamInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
instanceType: m5.xlarge
sshKeyName: default
Topics
Using clusterawsadm to fulfill prerequisites
Requirements
IAM resources
With clusterawsadm
Get the latest clusterawsadm and place it in your path.
Cluster API Provider AWS ships with clusterawsadm, a utility to help you manage IAM objects for this project.
In order to use clusterawsadm you must have an administrative user in an AWS account. Once you have that administrator user you need to set your environment variables:
- AWS_REGION
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_SESSION_TOKEN (if you are using multi-factor authentication)
After these are set run this command to get you up and running:
clusterawsadm bootstrap iam create-cloudformation-stack
Additional policies can be added by creating a configuration file:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
controlPlane:
extraPolicyAttachments:
- arn:aws:iam::<AWS_ACCOUNT>:policy/my-policy
- arn:aws:iam::aws:policy/AmazonEC2FullAccess
nodes:
extraPolicyAttachments:
- arn:aws:iam::<AWS_ACCOUNT>:policy/my-other-policy
and passing it to clusterawsadm as follows:
clusterawsadm bootstrap iam create-cloudformation-stack --config bootstrap-config.yaml
These will be added to the control plane and node roles respectively when they are created.
Note: If you used the now-deprecated clusterawsadm alpha bootstrap with version 0.5.4 or earlier to create IAM objects for the Cluster API Provider for AWS, using clusterawsadm bootstrap iam with version 0.5.5 or later will, by default, remove the bootstrap user and group. Anything using those credentials to authenticate will start experiencing authentication failures. If you rely on the bootstrap user and group credentials, specify bootstrapUser.enable = true in the configuration file, like this:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
  bootstrapUser:
    enable: true
With EKS Support
The prerequisites for EKS are enabled by default. However, if you want to use some of the optional features of EKS (see here for more information on what these are) then you will need to enable them via the configuration file. For example:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
eks:
iamRoleCreation: false # Set to true if you plan to use the EKSEnableIAM feature flag to enable automatic creation of IAM roles
managedMachinePool:
disable: false # Set to false to enable creation of the default node role for managed machine pools
fargate:
disable: false # Set to false to enable creation of the default role for the fargate profiles
and then use that configuration file:
clusterawsadm bootstrap iam create-cloudformation-stack --config bootstrap-config.yaml
Enabling EventBridge Events
To enable EventBridge instance state events, additional permissions must be granted along with enabling the feature-flag. Additional permissions for events and queue management can be enabled through the configuration file as follows:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
...
eventBridge:
enable: true
...
Cross Account Role Assumption
CAPA, by default, does not provide the necessary permissions to allow cross-account role assumption, which can be used to manage clusters in other environments. This is documented here. The ‘sts:AssumeRole’ permissions can be added via the following configuration on the manager account configuration:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
...
allowAssumeRole: true
...
The above gives the controller the permissions it needs to manage clusters in other accounts using the AWSClusterRoleIdentity. Please note, the above should only be applied to the account where CAPA is running. To allow CAPA to assume the roles in the managed/target accounts, the following configuration needs to be used:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
...
clusterAPIControllers:
disabled: false
trustStatements:
- Action:
- "sts:AssumeRole"
Effect: "Allow"
Principal:
AWS:
- "arn:aws:iam::<manager account>:role/controllers.cluster-api-provider-aws.sigs.k8s.io"
...
Without clusterawsadm
This is not a recommended route as the policies are very specific and will change with new features.
If you do not wish to use the clusterawsadm tool then you will need to understand exactly which IAM policies and groups we are expecting. There are several policies, roles and users that need to be created. Please see our controller policy file to understand the permissions that are necessary.
You can use clusterawsadm to print out the needed IAM policies, e.g.
clusterawsadm bootstrap iam print-policy --document AWSIAMManagedPolicyControllers --config bootstrap-config.yaml
SSH Key pair
If you plan to use SSH to access the instances created by Cluster API Provider AWS then you will need to specify the name of an existing SSH key pair within the region you plan on using. If you don’t have one yet, a new one needs to be created.
Create a new key pair
# Save the output to a secure location
aws ec2 create-key-pair --key-name default --output json | jq .KeyMaterial -r
-----BEGIN RSA PRIVATE KEY-----
[... contents omitted ...]
-----END RSA PRIVATE KEY-----
If you want to save the private key directly into AWS Systems Manager Parameter Store with KMS encryption for security, you can use the following command:
aws ssm put-parameter --name "/sigs.k8s.io/cluster-api-provider-aws/ssh-key" \
--type SecureString \
--value "$(aws ec2 create-key-pair --key-name default --output json | jq .KeyMaterial -r)"
Adding an existing public key to AWS
# Replace with your own public key
aws ec2 import-key-pair \
--key-name default \
--public-key-material "$(cat ~/.ssh/id_rsa.pub)"
NB: Only RSA keys are supported by AWS.
Setting up the environment
The current iteration of the Cluster API Provider AWS relies on credentials being present in your environment. These then get written into the cluster manifests for use by the controllers.
E.g.
export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
Note: The credentials used must have the appropriate permissions for use by the controllers. You can get the required policy statement by using the following command:
clusterawsadm bootstrap iam print-policy --document AWSIAMManagedPolicyControllers --config bootstrap-config.yaml
To save credentials securely in your environment, aws-vault uses the OS keystore as permanent storage and offers shell features to securely expose and set up local AWS environments.
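As a brief sketch of that workflow (the profile name capa-admin is just an example):
# store long-lived credentials once in the OS keystore
aws-vault add capa-admin
# expose short-lived credentials only to the command that needs them
aws-vault exec capa-admin -- clusterctl init --infrastructure aws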
Accessing cluster instances
Overview
After running clusterctl generate cluster to generate the configuration for a new workload cluster (and then redirecting that output to a file for use with kubectl apply, or piping it directly to kubectl apply), the new workload cluster will be deployed. This document explains how to access the new workload cluster's nodes.
Prerequisites
- clusterctl generate cluster was successfully executed to generate the configuration for a new workload cluster
- The configuration for the new workload cluster was applied to the management cluster using kubectl apply and the cluster is up and running in an AWS environment.
- The SSH key referenced by clusterctl in step 1 exists in AWS and is stored in the correct location locally for use by SSH (on macOS/Linux systems, this is typically $HOME/.ssh). This document will refer to this key as cluster-api-provider-aws.sigs.k8s.io.
- (If using AWS Session Manager) The AWS CLI and the Session Manager plugin have been installed and configured.
Methods for accessing nodes
There are two ways to access cluster nodes once the workload cluster is up and running:
- via SSH
- via AWS Session Manager
Accessing nodes via SSH
By default, workload clusters created in AWS will not support access via SSH apart from AWS Session Manager (see the section titled “Accessing nodes via AWS Session Manager”). However, the manifest for a workload cluster can be modified to include an SSH bastion host, created and managed by the management cluster, to enable SSH access to cluster nodes. The bastion node is created in a public subnet and provides SSH access from the world. It runs the official Ubuntu Linux image.
Enabling the bastion host
To configure the Cluster API Provider for AWS to create an SSH bastion host, add this line to the AWSCluster spec:
spec:
bastion:
enabled: true
If this field is set and a specific AMI ID is not provided for the bastion (via spec.bastion.ami), the CAPA controller looks up the latest Ubuntu 20.04 LTS AMI from Ubuntu cloud images and uses it to create the bastion host.
Obtain public IP address of the bastion node
Once the workload cluster is up and running after being configured for an SSH bastion host, you can use the kubectl get awscluster command to look up the public IP address of the bastion host (make sure the kubectl context is set to the management cluster). The output will look something like this:
NAME CLUSTER READY VPC BASTION IP
test test true vpc-1739285ed052be7ad 1.2.3.4
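The SSH commands later in this document refer to this address as ${BASTION_HOST}; you can capture it in an environment variable (the IP below is the illustrative value from the output above):
export BASTION_HOST=1.2.3.4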
Setting up the SSH key path
Assuming that the cluster-api-provider-aws.sigs.k8s.io SSH key is stored in $HOME/.ssh/cluster-api-provider-aws, use this command to set up an environment variable for use in a later command:
export CLUSTER_SSH_KEY=$HOME/.ssh/cluster-api-provider-aws
Get private IP addresses of nodes in the cluster
To get the private IP addresses of nodes in the cluster (nodes may be control plane nodes or worker nodes), use this kubectl command with the context set to the management cluster:
kubectl get nodes -o custom-columns=NAME:.metadata.name,\
IP:"{.status.addresses[?(@.type=='InternalIP')].address}"
This will produce output that looks like this:
NAME IP
ip-10-0-0-16.us-west-2.compute.internal 10.0.0.16
ip-10-0-0-68.us-west-2.compute.internal 10.0.0.68
The above command returns the IP addresses of the nodes in the cluster. In this case, the values returned are 10.0.0.16 and 10.0.0.68.
Connecting to the nodes via SSH
To access one of the nodes (either a control plane node or a worker node) via the SSH bastion host, use this command if you are using a non-EKS cluster:
ssh -i ${CLUSTER_SSH_KEY} ubuntu@<NODE_IP> \
-o "ProxyCommand ssh -W %h:%p -i ${CLUSTER_SSH_KEY} ubuntu@${BASTION_HOST}"
And use this command if you are using an EKS-based cluster:
ssh -i ${CLUSTER_SSH_KEY} ec2-user@<NODE_IP> \
-o "ProxyCommand ssh -W %h:%p -i ${CLUSTER_SSH_KEY} ubuntu@${BASTION_HOST}"
If the whole document is followed, the value of <NODE_IP> will be either 10.0.0.16 or 10.0.0.68.
Alternately, users can add a configuration stanza to their SSH configuration file (typically found on macOS/Linux systems as $HOME/.ssh/config):
Host 10.0.*
User ubuntu
IdentityFile <CLUSTER_SSH_KEY>
ProxyCommand ssh -W %h:%p ubuntu@<BASTION_HOST>
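With such a stanza in place (and the placeholders substituted), connecting to a node is as simple as:
ssh 10.0.0.16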
Accessing nodes via AWS Session Manager
All CAPA-published AMIs based on Ubuntu have the AWS SSM Agent pre-installed (as a Snap package; this was added in June 2018 to the base Ubuntu Server image for all 16.04 and later AMIs). This allows users to access cluster nodes directly, without the need for an SSH bastion host, using the AWS CLI and the Session Manager plugin.
To access a cluster node (control plane node or worker node), you'll need the instance ID. You can retrieve the instance ID using this kubectl command with the context set to the management cluster:
kubectl get awsmachines -o custom-columns=NAME:.metadata.name,INSTANCEID:.spec.providerID
This will produce output similar to this:
NAME INSTANCEID
test-controlplane-52fhh aws:////i-112bac41a19da1819
test-controlplane-lc5xz aws:////i-99aaef2381ada9228
Users can then use the instance ID (everything after the aws://// prefix) to connect to the cluster node with this command:
aws ssm start-session --target <INSTANCE_ID>
This will log you into the cluster node as the ssm-user user ID.
Additional Notes
Using the AWS CLI instead of kubectl
It is also possible to use AWS CLI commands instead of kubectl to gather information about the cluster nodes.
For example, to use the AWS CLI to get the public IP address of the SSH bastion host, use this AWS CLI command:
export BASTION_HOST=$(aws ec2 describe-instances --filter='Name=tag:Name,Values=<CLUSTER_NAME>-bastion' \
| jq '.Reservations[].Instances[].PublicIpAddress' -r)
You should substitute the correct cluster name for <CLUSTER_NAME> in the above command. (NOTE: If make manifests was used to generate manifests, by default the <CLUSTER_NAME> is set to test1.)
Similarly, to obtain the list of private IP addresses of the cluster nodes, use this AWS CLI command:
for type in control-plane node
do
aws ec2 describe-instances \
--filter="Name=tag:sigs.k8s.io/cluster-api-provider-aws/role,\
Values=${type}" \
| jq '.Reservations[].Instances[].PrivateIpAddress' -r
done
10.0.0.16
10.0.0.68
Finally, to obtain AWS instance IDs for cluster nodes, you can use this AWS CLI command:
for type in control-plane node
do
aws ec2 describe-instances \
--filter="Name=tag:sigs.k8s.io/cluster-api-provider-aws/role,\
Values=${type}" \
| jq '.Reservations[].Instances[].InstanceId' -r
done
i-112bac41a19da1819
i-99aaef2381ada9228
Note that your AWS CLI must be configured with credentials that enable you to query the AWS EC2 API.
Spot Instances
AWS Spot Instances allow users to reduce the cost of their compute resources by utilising AWS spare capacity at a lower price.
Because Spot Instances are tightly integrated with AWS services such as Auto Scaling, ECS and CloudFormation, users can choose how to launch and maintain their applications running on Spot Instances.
However, this lower cost comes with the risk of preemption. When demand for capacity within a particular Availability Zone increases, AWS may need to reclaim Spot Instances to satisfy the demand on their data centres.
When to use spot instances?
Spot instances are ideal for workloads that can be interrupted. For example, short jobs or stateless services that can be rescheduled quickly, without data loss, and resume operation with limited degradation to a service.
Using Spot Instances with AWSMachine
To enable an AWSMachine to be backed by a Spot Instance, users need to add spotMarketOptions to the AWSMachineTemplate:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
instanceType: ${AWS_NODE_MACHINE_TYPE}
spotMarketOptions:
maxPrice: ""
sshKeyName: ${AWS_SSH_KEY_NAME}
Users may also add a maxPrice to the options to limit the maximum spend for the instance. It is, however, recommended not to set a maxPrice: if the field is left empty, AWS caps your spending at the on-demand price, and you will experience fewer interruptions.
spec:
  template:
    spec:
      spotMarketOptions:
        maxPrice: "0.02" # Price in USD per hour (up to 5 decimal places)
Using Spot Instances with AWSManagedMachinePool
To use spot instances in EKS managed node groups for an EKS cluster, set capacityType to spot in the AWSManagedMachinePool.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSManagedMachinePool
metadata:
name: ${CLUSTER_NAME}-pool-0
spec:
capacityType: spot
...
See AWS doc for more details.
Using Spot Instances with AWSMachinePool
To enable an AWSMachinePool to be backed by Spot Instances, users need to add spotMarketOptions to the AWSLaunchTemplate:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
metadata:
name: ${CLUSTER_NAME}-mp-0
spec:
minSize: 1
maxSize: 4
awsLaunchTemplate:
instanceType: "${AWS_CONTROL_PLANE_MACHINE_TYPE}"
iamInstanceProfile: "nodes.cluster-api-provider-aws.sigs.k8s.io"
sshKeyName: "${AWS_SSH_KEY_NAME}"
spotMarketOptions:
maxPrice: ""
IMPORTANT WARNING: The experimental feature AWSMachinePool supports using spot instances, but the graceful shutdown of machines in AWSMachinePool is not supported and has to be handled externally by users.
MachinePools
- Feature status: Experimental
- Feature gate: MachinePool=true
MachinePool allows users to manage many machines as a single entity. Infrastructure providers implement a separate CRD that handles the infrastructure side of the feature.
AWSMachinePool
Cluster API Provider AWS (CAPA) has experimental support for MachinePool through the infrastructure type AWSMachinePool. An AWSMachinePool corresponds to an AWS Auto Scaling Group, which provides the cloud-provider-specific resource for orchestrating a group of EC2 instances.
The AWSMachinePool controller creates and manages an AWS Auto Scaling Group using launch templates so users don't have to manage individual machines. You can use Auto Scaling health checks for replacing instances, and it will maintain the number of instances specified.
Using clusterctl to deploy
To deploy a MachinePool / AWSMachinePool via clusterctl generate, there's a flavor for that.
Make sure to set up your AWS environment as described here.
Make sure to set up your AWS environment as described here.
export EXP_MACHINE_POOL=true
clusterctl init --infrastructure aws
clusterctl generate cluster my-cluster --kubernetes-version v1.25.0 --flavor machinepool > my-cluster.yaml
The template used for this flavor is located here.
AWSManagedMachinePool
Cluster API Provider AWS (CAPA) has experimental support for EKS Managed Node Groups using MachinePool through the infrastructure type AWSManagedMachinePool. An AWSManagedMachinePool corresponds to an AWS Auto Scaling Group that is used for an EKS managed node group.
The AWSManagedMachinePool controller creates and manages an EKS managed node group which in turn manages an AWS AutoScaling Group of managed EC2 instance types.
To use managed machine pools, certain IAM permissions are needed. The easiest way to ensure the required IAM permissions are in place is to use clusterawsadm to create them. To do this, follow the EKS instructions in using clusterawsadm to fulfill prerequisites.
Using clusterctl to deploy
To deploy an EKS managed node group using AWSManagedMachinePool via clusterctl generate, you can use a flavor.
Make sure to set up your AWS environment as described here.
export EXP_MACHINE_POOL=true
clusterctl init --infrastructure aws
clusterctl generate cluster my-cluster --kubernetes-version v1.22.0 --flavor eks-managedmachinepool > my-cluster.yaml
The template used for this flavor is located here.
Examples
Example: MachinePool, AWSMachinePool and KubeadmConfig Resources
Below is an example of the resources needed to create a pool of EC2 machines orchestrated with an AWS Auto Scaling Group.
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
name: capa-mp-0
spec:
clusterName: capa
replicas: 2
template:
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfig
name: capa-mp-0
clusterName: capa
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
name: capa-mp-0
version: v1.25.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
metadata:
name: capa-mp-0
spec:
minSize: 1
maxSize: 10
availabilityZones:
- "${AWS_AVAILABILITY_ZONE}"
awsLaunchTemplate:
instanceType: "${AWS_CONTROL_PLANE_MACHINE_TYPE}"
sshKeyName: "${AWS_SSH_KEY_NAME}"
subnets:
- id: "${AWS_SUBNET_ID}"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfig
metadata:
name: capa-mp-0
namespace: default
spec:
joinConfiguration:
nodeRegistration:
name: '{{ ds.meta_data.local_hostname }}'
kubeletExtraArgs:
cloud-provider: aws
Autoscaling
cluster-autoscaler can be used to scale MachinePools up and down. Two autoscaler providers can be used with CAPA MachinePools: clusterapi or aws; a rough sketch for the clusterapi provider follows.
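For the clusterapi provider, the relevant cluster-autoscaler container arguments look roughly like this (the namespace value is an assumption, and the surrounding Deployment manifest is omitted):
# cluster-autoscaler container args (sketch)
- --cloud-provider=clusterapi
- --node-group-auto-discovery=clusterapi:namespace=default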
If the aws autoscaler provider is used, each MachinePool needs to have an annotation set to prevent scale-up/down races between cluster-autoscaler and cluster-api. Example:
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
name: capa-mp-0
annotations:
cluster.x-k8s.io/replicas-managed-by: "external-autoscaler"
spec:
clusterName: capa
replicas: 2
template:
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfig
name: capa-mp-0
clusterName: capa
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
name: capa-mp-0
version: v1.25.0
When using GitOps, make sure to ignore differences in spec.replicas on MachinePools. Example when using ArgoCD:
ignoreDifferences:
- group: cluster.x-k8s.io
kind: MachinePool
jsonPointers:
- /spec/replicas
Multi-tenancy
Starting from v0.6.5, single-controller multi-tenancy is supported, allowing a different AWS identity to be used for each workload cluster. For details, see the multi-tenancy proposal.
For multi-tenancy support, a reference field (identityRef) is added to AWSCluster, which informs the controller of the identity to be used when reconciling the cluster. If the identity provided exists in a different AWS account, this is the mechanism which informs the controller to provision a cluster in a different account. Identities should have adequate permissions for CAPA to reconcile clusters.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
name: "test"
namespace: "test"
spec:
region: "eu-west-1"
identityRef:
kind: <IdentityType>
name: <IdentityName>
Identity resources are used to describe IAM identities that will be used during reconciliation. There are three identity types: AWSClusterControllerIdentity, AWSClusterStaticIdentity, and AWSClusterRoleIdentity. Once an IAM identity is created in AWS, the corresponding values should be used to create an identity resource.
AWSClusterControllerIdentity
Before multi-tenancy support, all AWSClusters were reconciled using the credentials of the Cluster API Provider AWS controllers. AWSClusterControllerIdentity is used to restrict the usage of the controller credentials to only the AWSClusters that are in allowedNamespaces. Since CAPA controllers use a single set of credentials, AWSClusterControllerIdentity is a singleton and can only be created with name: default.
For backward compatibility, the AutoControllerIdentityCreator experimental feature is added, which is responsible for creating the AWSClusterControllerIdentity singleton if it does not exist.
- Feature status: Experimental
- Feature gate: AutoControllerIdentityCreator=true
AutoControllerIdentityCreator creates the AWSClusterControllerIdentity singleton with empty allowedNamespaces (allowedNamespaces: {}) to grant access to the AWSClusterControllerIdentity from all namespaces.
Example:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
name: "test"
namespace: "test"
spec:
region: "eu-west-1"
identityRef:
kind: AWSClusterControllerIdentity
name: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterControllerIdentity
metadata:
name: "default"
spec:
allowedNamespaces: {} # matches all namespaces
AWSClusterControllerIdentity is immutable to avoid any unwanted overrides to the allowed namespaces, especially during cluster upgrades.
AWSClusterStaticIdentity
AWSClusterStaticIdentity represents static AWS credentials, which are stored in a Secret.
Example: Below, an AWSClusterStaticIdentity is created that allows access to the AWSClusters that are in the "test" namespace. The identity credentials that will be used by the "test" AWSCluster are stored in the "test-account-creds" secret.
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
name: "test"
namespace: "test"
spec:
region: "eu-west-1"
identityRef:
kind: AWSClusterStaticIdentity
name: test-account
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterStaticIdentity
metadata:
name: "test-account"
spec:
secretRef: test-account-creds
allowedNamespaces:
selector:
matchLabels:
cluster.x-k8s.io/ns: "testlabel"
---
apiVersion: v1
kind: Namespace
metadata:
labels:
cluster.x-k8s.io/ns: "testlabel"
name: "test"
---
apiVersion: v1
kind: Secret
metadata:
name: "test-account-creds"
namespace: capa-system
stringData:
AccessKeyID: AKIAIOSFODNN7EXAMPLE
SecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWSClusterRoleIdentity
AWSClusterRoleIdentity allows CAPA to assume a role either in the same or another AWS account, using the sts:AssumeRole API. The assumed role can be used by the AWSClusters that are in the allowedNamespaces.
Example:
Below, an AWSClusterRoleIdentity instance is created, which will be used by AWSCluster "test". This role will be assumed by the source identity at runtime; the source identity can be of any identity type. The role is assumed once at the beginning and then again whenever the assumed role's credentials expire.
This snippet illustrates the connection between AWSCluster and the AWSClusterRoleIdentity; however, this is not a working example. Please view a full example below.
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
name: "test"
namespace: "test"
spec:
region: "eu-west-1"
identityRef:
kind: AWSClusterRoleIdentity
name: test-account-role
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterRoleIdentity
metadata:
name: "test-account-role"
spec:
allowedNamespaces:
    list: # allows only "test" namespace to use this identity
    - "test"
roleARN: "arn:aws:iam::123456789:role/CAPARole"
sourceIdentityRef:
kind: AWSClusterControllerIdentity # use the singleton for root auth
name: default
Nested role assumption is also supported.
Example: Below, "multi-tenancy-nested-role" will be assumed by "multi-tenancy-role", which will in turn be assumed by the "default" AWSClusterControllerIdentity:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterRoleIdentity
metadata:
name: multi-tenancy-role
spec:
allowedNamespaces:
list: []
durationSeconds: 900 # default and min value is 900 seconds
roleARN: arn:aws:iam::11122233344:role/multi-tenancy-role
sessionName: multi-tenancy-role-session
sourceIdentityRef:
kind: AWSClusterControllerIdentity
name: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterRoleIdentity
metadata:
name: multi-tenancy-nested-role
spec:
allowedNamespaces:
list: []
roleARN: arn:aws:iam::11122233355:role/multi-tenancy-nested-role
sessionName: multi-tenancy-nested-role-session
sourceIdentityRef:
kind: AWSClusterRoleIdentity
name: multi-tenancy-role
Necessary permissions for assuming a role:
There are multiple AWS assume role permissions that need to be configured in order for the assume role to work:
- The source identity (the user/role specified in the source identity field) should have IAM policy permissions that enable it to perform the sts:AssumeRole operation:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*"
    }
  ]
}
- The target role (which can be in a different AWS account) must be configured to allow the source user/role (or all users in an AWS account) to assume it by setting a trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
        // or: "AWS": "arn:aws:iam::111111111111:role/role-used-during-cluster-bootstrap"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Both of these permissions can be enabled via clusterawsadm as documented here.
Examples
This is a deployable example which uses the AWSClusterRoleIdentity "test-account-role" to assume into the arn:aws:iam::123456789:role/CAPARole role in the target account. This example assumes that the CAPARole has already been configured in the target account.
Finally, we inform the Cluster to use our AWSCluster type to provision a cluster in the target account specified by the identityRef section.
Note
By default the AutoControllerIdentityCreator feature gate is set to true here. If this is not enabled for your cluster, you will need to enable the flag, or create your own default AWSClusterControllerIdentity.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterControllerIdentity
metadata:
name: "default"
spec:
allowedNamespaces: {} # matches all namespaces
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterRoleIdentity
metadata:
name: "test-account-role"
spec:
allowedNamespaces: {} # matches all namespaces
roleARN: "arn:aws:iam::123456789:role/CAPARole"
sourceIdentityRef:
kind: AWSClusterControllerIdentity # use the singleton for root auth
name: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
name: "test-multi-tenant-workload"
spec:
region: "eu-west-1"
identityRef:
kind: AWSClusterRoleIdentity
name: test-account-role
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: "test-multi-tenant-workload"
spec:
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
name: "test-multi-tenant-workload"
More specific examples can be referenced from the existing templates directory.
In order to use the EC2 template with an identity type, you can add the identityRef section to the kind: AWSCluster spec section in the template. If you do not, CAPA will automatically add the default identity provider (which is usually your local account credentials).
Similarly, to use the EKS template with an identity type, you can add the identityRef section to the kind: AWSManagedControlPlane spec section in the template. If you do not, CAPA will automatically add the default identity provider (which is usually your local account credentials).
Secure Access to Identities
The allowedNamespaces field is used to grant namespaces access to use identities. Only AWSClusters that are created in one of the identity's allowed namespaces can use that identity. allowedNamespaces are defined by providing either a list of namespaces or a label selector to select namespaces.
Examples
An empty allowedNamespaces indicates that the identity can be used by all namespaces.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterControllerIdentity
spec:
allowedNamespaces: {} # matches all namespaces
Having a nil list and a nil selector is the same as having an empty allowedNamespaces (the identity can be used by all namespaces).
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterControllerIdentity
spec:
allowedNamespaces:
list: nil
selector: nil
A nil allowedNamespaces indicates that the identity cannot be used from any namespace.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterControllerIdentity
spec:
allowedNamespaces: # same as not providing the field at all, or allowedNamespaces: null
The union of the namespaces matched by selector and the namespaces in the list is granted access to the identity. Namespaces that are not in the list and do not match the selector will not have access.
A nil or empty list matches no namespaces, and a nil or empty selector matches no namespaces. If list is nil and selector is empty, or list is empty and selector is nil, the identity cannot be used from any namespace: in that case allowedNamespaces itself is not empty or nil, but neither list nor selector allows any namespaces, so the union is empty.
# Matches no namespaces
allowedNamespaces:
list: []
# Matches no namespaces
allowedNamespaces:
selector: {}
# Matches no namespaces
allowedNamespaces:
list: null
selector: {}
# Matches no namespaces
allowedNamespaces:
list: []
selector: {}
Important: The default behaviour of an empty label selector is to match all objects; however, here we do not follow that behaviour, to avoid unintended access to the identities. This is consistent with core Cluster API selectors, e.g., Machine and ClusterResourceSet selectors. As in Kubernetes selectors, the results of matchLabels and matchExpressions are ANDed.
In the example below, list is empty/nil, so it does not allow any namespaces, and the selector matches only the default namespace. Since the list and selector results are ORed, the default namespace can use this identity.
apiVersion: v1
kind: Namespace
metadata:
name: default
labels:
environment: dev
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterControllerIdentity
spec:
allowedNamespaces:
list: null # or []
selector:
matchLabels:
namespace: default
matchExpressions:
- {key: environment, operator: In, values: [dev]}
Multitenancy setup with EKS and Service Account
See multitenancy for more details on enabling the functionality and the various options you can use.
In this example, we are going to see how to create the following architecture with cluster API:
AWS Account 1
+--------------------+
| |
+---------------+->EKS - (Managed) |
| | |
| +--------------------+
AWS Account 0 | AWS Account 2
+----------------+---+ +--------------------+
| | | | |
| EKS - (Manager)---+-----------+->EKS - (Managed) |
| | | | |
+----------------+---+ +--------------------+
| AWS Account 3
| +--------------------+
| | |
+---------------+->EKS - (Managed) |
| |
+--------------------+
And specifically, we will only include:
- AWS Account 0 (aka Manager account used by management cluster where cluster API controllers reside)
- AWS Account 1 (aka Managed account used for EKS-managed workload clusters)
Prerequisites
- A bootstrap cluster (kind)
- AWS CLI installed
- 2 (or more) AWS accounts
- clusterawsadm
- clusterctl
Set variables
Note: the credentials below are the ones of the manager account
Export the following environment variables:
- AWS_REGION
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_SESSION_TOKEN (if you are using Multi-factor authentication)
- AWS_MANAGER_ACCOUNT_ID
- AWS_MANAGED_ACCOUNT_ID
- OIDC_PROVIDER_ID="WeWillReplaceThisLater"
Prepare the manager account
As explained in the EKS prerequisites page, we need a couple of roles in the account to build the cluster; the clusterawsadm CLI can take care of them.
We know that the CAPA provider in the Manager account should be able to assume roles in the Managed account (AWS Account 1). We can create a clusterawsadm configuration that adds an inline policy to the controllers.cluster-api-provider-aws.sigs.k8s.io role.
envsubst > bootstrap-manager-account.yaml << EOL
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
eks: # This section should be changed according to your requirements
iamRoleCreation: false
managedMachinePool:
disable: true
fargate:
disable: false
clusterAPIControllers: # This is the section that really matters
disabled: false
extraStatements:
- Action:
- "sts:AssumeRole"
Effect: "Allow"
Resource: ["arn:aws:iam::${AWS_MANAGED_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io"]
trustStatements:
- Action:
- "sts:AssumeRoleWithWebIdentity"
Effect: "Allow"
Principal:
Federated:
- "arn:aws:iam::${AWS_MANAGER_ACCOUNT_ID}:oidc-provider/oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_PROVIDER_ID}"
Condition:
"ForAnyValue:StringEquals":
"oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_PROVIDER_ID}:sub":
- system:serviceaccount:capi-providers:capa-controller-manager
- system:serviceaccount:capa-eks-control-plane-system:capa-eks-control-plane-controller-manager # Include if also using EKS
EOL
Let’s provision the Manager role with:
clusterawsadm bootstrap iam create-cloudformation-stack --config bootstrap-manager-account.yaml
Manager cluster
The following commands assume you have the AWS credentials for the Manager account exposed, and your kube context is pointing to the bootstrap cluster.
Install cluster API provider in the bootstrap cluster
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
export EKS=true
export EXP_MACHINE_POOL=true
clusterctl init --infrastructure aws --target-namespace capi-providers
Generate the cluster configuration
NOTE: You might want to update the Kubernetes and VPC addon versions to one of the available versions when running this command.
- Kubernetes versions
- VPC CNI add-on versions (don't forget to add the v prefix)
export AWS_SSH_KEY_NAME=default
export VPC_ADDON_VERSION="v1.10.2-eksbuild.1"
clusterctl generate cluster manager --flavor eks-managedmachinepool-vpccni --kubernetes-version v1.20.2 --worker-machine-count=3 > manager-cluster.yaml
Apply the cluster configuration
kubectl apply -f manager-cluster.yaml
WAIT: time to have a drink; the cluster is creating, and we will have to wait for it to be ready before continuing.
IAM OIDC Identity provider
Follow the AWS documentation to create an OIDC provider: https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
Then update the trust statement above:
export OIDC_PROVIDER_ID=<OIDC_ID_OF_THE_CLUSTER>
and run the "Prepare the manager account" step again.
Get manager cluster credentials
kubectl --namespace=default get secret manager-user-kubeconfig \
-o jsonpath={.data.value} | base64 --decode \
> manager.kubeconfig
Install the CAPA provider in the manager cluster
Here we install the Cluster API providers into the manager cluster and create a service account to use the controllers.cluster-api-provider-aws.sigs.k8s.io role for the management components.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
export EKS=true
export EXP_MACHINE_POOL=true
export AWS_CONTROLLER_IAM_ROLE=arn:aws:iam::${AWS_MANAGER_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io
clusterctl init --kubeconfig manager.kubeconfig --infrastructure aws --target-namespace capi-providers
Managed cluster
Time to build the managed cluster for pivoting the bootstrap cluster.
Generate the cluster configuration
NOTE: As for the manager cluster you might want to update the Kubernetes and VPC addon versions.
export AWS_SSH_KEY_NAME=default
export VPC_ADDON_VERSION="v1.10.2-eksbuild.1"
clusterctl generate cluster managed --flavor eks-managedmachinepool-vpccni --kubernetes-version v1.20.2 --worker-machine-count=3 > managed-cluster.yaml
Edit the file and add the following to the AWSManagedControlPlane resource spec to point the controller to the manager account when creating the cluster:
identityRef:
kind: AWSClusterRoleIdentity
name: managed-account
Create the identities
envsubst > cluster-role-identity.yaml << EOL
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterRoleIdentity
metadata:
name: managed-account
spec:
allowedNamespaces: {} # This is unsafe since every namespace is allowed to use the role identity
roleARN: arn:aws:iam::${AWS_MANAGED_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io
sourceIdentityRef:
kind: AWSClusterControllerIdentity
name: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterControllerIdentity
metadata:
name: default
spec:
allowedNamespaces: {}
EOL
Prepare the managed account
NOTE: Expose the managed account credentials before running the following commands.
This configuration adds a trustStatement to the cluster API controller role so that the controllers.cluster-api-provider-aws.sigs.k8s.io role in the manager account is allowed to assume it.
envsubst > bootstrap-managed-account.yaml << EOL
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
eks:
iamRoleCreation: false # Set to true if you plan to use the EKSEnableIAM feature flag to enable automatic creation of IAM roles
managedMachinePool:
disable: true # Set to false to enable creation of the default node role for managed machine pools
fargate:
disable: false # Set to false to enable creation of the default role for the fargate profiles
clusterAPIControllers:
disabled: false
trustStatements:
- Action:
- "sts:AssumeRole"
Effect: "Allow"
Principal:
AWS:
- "arn:aws:iam::${AWS_MANAGER_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io"
EOL
Let’s provision the Managed account with:
clusterawsadm bootstrap iam create-cloudformation-stack --config bootstrap-managed-account.yaml
Apply the cluster configuration
Note: Back to the manager account credentials
kubectl --kubeconfig manager.kubeconfig apply -f cluster-role-identity.yaml
kubectl --kubeconfig manager.kubeconfig apply -f managed-cluster.yaml
Time for another drink, enjoy your multi-tenancy setup.
EKS Support in the AWS Provider
- Feature status: Stable
- Feature gate (required): EKS=true
- Feature gate (optional): EKSEnableIAM=true,EKSAllowAddRoles=true
Overview
The AWS provider supports creating EKS-based clusters. Currently the following features are supported:
- Provisioning/managing an Amazon EKS Cluster
- Upgrading the Kubernetes version of the EKS Cluster
- Attaching self-managed machines as nodes to the EKS cluster
- Creating a machine pool and attaching it to the EKS cluster. See machine pool docs for details.
- Creating a managed machine pool and attaching it to the EKS cluster. See machine pool docs for details
- Managing “EKS Addons”. See addons for further details
- Creating an EKS fargate profile
- Managing aws-iam-authenticator configuration
Note: machine pools and fargate profiles are still classed as experimental.
The implementation introduces the following CRD kinds:
- AWSManagedControlPlane - specifies the EKS cluster in AWS and is used by the Cluster API AWS Managed Control Plane (MACP)
- AWSManagedMachinePool - defines the managed node pool for the cluster
- EKSConfig - used by Cluster API bootstrap provider EKS (CABPE)
And a number of new templates are available in the templates folder for creating a managed workload cluster.
SEE ALSO
- Prerequisites
- Enabling EKS Support
- Disabling EKS Support
- Creating a cluster
- Using EKS Console
- Using EKS Addons
- Enabling Encryption
- Cluster Upgrades
Prerequisites
To use EKS you must give the controller the required permissions. The easiest way to do this is by using clusterawsadm. For instructions on how to do this, see the prerequisites.
When using clusterawsadm and enabling EKS support, a new IAM role will be created for you called eks-controlplane.cluster-api-provider-aws.sigs.k8s.io. This role is the IAM role that will be used for the EKS control plane if you don't specify your own role and if EKSEnableIAM isn't enabled (see the enabling docs for further information).
Additionally, using clusterawsadm will add permissions to the controllers.cluster-api-provider-aws.sigs.k8s.io policy for EKS to function properly.
Enabling EKS Support
Support for EKS is enabled by default when you use the AWS infrastructure provider. For example:
clusterctl init --infrastructure aws
Enabling optional EKS features
There are additional EKS experimental features that are disabled by default. The sections below cover how to enable these features.
Machine Pools
To enable support for machine pools the MachinePool feature flag must be set to true. This can be done using the EXP_MACHINE_POOL environment variable:
export EXP_MACHINE_POOL=true
clusterctl init --infrastructure aws
See the machine pool documentation for further information.
NOTE: you will need to enable the creation of the default IAM role. The easiest way is using clusterawsadm; for instructions, see the prerequisites.
IAM Roles Per Cluster
By default EKS clusters will use the same IAM roles (i.e. control plane, node group roles). There is a feature that allows each cluster to have its own IAM roles. This is done by enabling the EKSEnableIAM feature flag. This can be done before running clusterctl init by using the CAPA_EKS_IAM environment variable:
export CAPA_EKS_IAM=true
clusterctl init --infrastructure aws
NOTE: you will need the correct prerequisites for this. The easiest way is using clusterawsadm and setting iamRoleCreation to true; for instructions, see the prerequisites.
Additional Control Plane Roles
You can add additional roles to the control plane role that is created for an EKS cluster. To use this you must enable the EKSAllowAddRoles feature flag. This can be done before running clusterctl init by using the CAPA_EKS_ADD_ROLES environment variable:
export CAPA_EKS_IAM=true
export CAPA_EKS_ADD_ROLES=true
clusterctl init --infrastructure aws
NOTE: to use this feature you must also enable the CAPA_EKS_IAM feature.
EKS Fargate Profiles
You can use Fargate Profiles with EKS. To use this you must enable the EKSFargate feature flag. This can be done before running clusterctl init by using the EXP_EKS_FARGATE environment variable:
export EXP_EKS_FARGATE=true
clusterctl init --infrastructure aws
NOTE: you will need to enable the creation of the default Fargate IAM role. The easiest way is using clusterawsadm and the fargate configuration option; for instructions, see the prerequisites.
Pod Networking
When creating an EKS cluster, the Amazon VPC CNI will be used by default for pod networking.
When using the AWS Console to create an EKS cluster with a Kubernetes version of v1.18 or greater you are required to select a specific version of the VPC CNI to use.
Using the VPC CNI Addon
You can use an explicit version of the Amazon VPC CNI by using the vpc-cni EKS addon. See the addons documentation for further details of how to use addons.
Using Custom VPC CNI Configuration
If your use case demands custom VPC CNI networking configuration, you might already be familiar with the helm chart which helps with the process. This gives you access to ENI Configs, and you can set environment variables on the aws-node DaemonSet where the VPC CNI runs. CAPA is able to tune the same DaemonSet through Kubernetes.
The following example shows how to turn on custom network config and set a label definition.
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
name: "capi-managed-test-control-plane"
spec:
vpcCni:
env:
- name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
value: "true"
- name: ENABLE_PREFIX_DELEGATION
value: "true"
Increase node pod limit
You can increase the pod limit per node as per the upstream AWS documentation. You'll need to enable the vpc-cni plugin addon on your EKS cluster as well as enable prefix assignment mode through the ENABLE_PREFIX_DELEGATION environment variable.
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
name: "capi-managed-test-control-plane"
spec:
vpcCni:
env:
- name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
value: "true"
- name: ENABLE_PREFIX_DELEGATION
value: "true"
addons:
- name: vpc-cni
version: <replace_with_version>
conflictResolution: overwrite
associateOIDCProvider: true
disableVPCCNI: false
Using Secondary CIDRs
EKS allows users to assign a secondary CIDR range from which pods are assigned addresses. Below is how to get CAPA to generate ENIConfigs in both the managed and unmanaged VPC configurations.
Secondary CIDR functionality will not work unless you enable custom network config too.
Managed (dynamic) VPC
The default configuration for CAPA is to manage the VPC and all its subnets for you dynamically; it will create and delete them along with your cluster. In this method, all you need to do is set SecondaryCidrBlock to one of the two allowed IPv4 CIDR blocks: 100.64.0.0/10 and 198.19.0.0/16. CAPA will automatically generate subnets and ENIConfigs for you, and the VPC CNI will do the rest.
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
name: "capi-managed-test-control-plane"
spec:
secondaryCidrBlock: 100.64.0.0/10
vpcCni:
env:
- name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
value: "true"
Unmanaged (static) VPC
In an unmanaged VPC configuration CAPA will create no VPC or subnets and will instead assign the cluster pieces to the IDs you pass. In order to get ENIConfigs to generate, you will need to add tags to the subnets you created and want to use as the secondary subnets for your pods. This is done by tagging the subnets with: sigs.k8s.io/cluster-api-provider-aws/association=secondary.
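For example, tagging an existing subnet with the AWS CLI (the subnet ID below is a placeholder):
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
  --tags Key=sigs.k8s.io/cluster-api-provider-aws/association,Value=secondary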
Setting SecondaryCidrBlock in this configuration will be ignored and no subnets are created.
Using an alternative CNI
There may be scenarios where you do not want to use the Amazon VPC CNI. EKS supports a number of alternative CNIs such as Calico, Cilium, and Weave Net (see docs for full list).
There are a number of ways to install an alternative CNI into the cluster. One option is to use a ClusterResourceSet to apply the required artifacts to a newly provisioned cluster.
When using an alternative CNI you will want to delete the Amazon VPC CNI, especially for a cluster using v1.17 or less. This can be done via the disableVPCCNI property of the AWSManagedControlPlane:
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
name: "capi-managed-test-control-plane"
spec:
region: "eu-west-2"
sshKeyName: "capi-management"
version: "v1.18.0"
disableVPCCNI: true
If you are replacing the Amazon VPC CNI with your own Helm-managed instance, you will need to set AWSManagedControlPlane.spec.disableVPCCNI to true and add the "aws.cluster.x-k8s.io/prevent-deletion": "true" label on the DaemonSet. This label is needed so the aws-node DaemonSet is not reaped during CNI reconciliation.
The following example shows how to label your aws-node Daemonset.
apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
...
generation: 1
labels:
app.kubernetes.io/instance: aws-vpc-cni
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: aws-node
app.kubernetes.io/version: v1.15.1
helm.sh/chart: aws-vpc-cni-1.15.1
aws.cluster.x-k8s.io/prevent-deletion: "true"
You cannot set disableVPCCNI to true if you are using the VPC CNI addon.
Some alternative CNIs provide for the replacement of kube-proxy, such as in Calico and Cilium. When enabling the kube-proxy alternative, the kube-proxy installed by EKS must be deleted. This can be done via the disable property of kubeProxy in AWSManagedControlPlane:
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
name: "capi-managed-test-control-plane"
spec:
region: "eu-west-2"
sshKeyName: "capi-management"
version: "v1.18.0"
disableVPCCNI: true
kubeProxy:
disable: true
You cannot set disable to true in kubeProxy if you are using the kube-proxy addon.
Additional Information
See the AWS documentation for further details of EKS pod networking.
Creating an EKS cluster
New "eks" cluster templates have been created that you can use with clusterctl to create an EKS cluster. To create an EKS cluster with self-managed nodes (a.k.a. machines):
clusterctl generate cluster capi-eks-quickstart --flavor eks --kubernetes-version v1.22.9 --worker-machine-count=3 > capi-eks-quickstart.yaml
To create an EKS cluster with a managed node group (a.k.a. managed machine pool):
clusterctl generate cluster capi-eks-quickstart --flavor eks-managedmachinepool --kubernetes-version v1.22.9 --worker-machine-count=3 > capi-eks-quickstart.yaml
NOTE: When creating an EKS cluster only the MAJOR.MINOR of the --kubernetes-version is taken into consideration.
Kubeconfig
When creating an EKS cluster, two kubeconfigs are generated and stored as secrets in the management cluster. This is different from when you create a non-managed cluster using the AWS provider.
User kubeconfig
This should be used by users that want to connect to the newly created EKS cluster. The name of the secret that contains the kubeconfig will be [cluster-name]-user-kubeconfig, where you need to replace [cluster-name] with the name of your cluster. The -user-kubeconfig suffix in the name indicates that the kubeconfig is for user use.
To get the user kubeconfig for a cluster named managed-test you can run a command similar to:
kubectl --namespace=default get secret managed-test-user-kubeconfig \
-o jsonpath={.data.value} | base64 --decode \
> managed-test.kubeconfig
Cluster API (CAPI) kubeconfig
This kubeconfig is used internally by CAPI and shouldn't be used outside of the management server. It is used by CAPI to perform operations, such as draining a node. The name of the secret that contains the kubeconfig will be [cluster-name]-kubeconfig, where you need to replace [cluster-name] with the name of your cluster. Note that there is NO -user in the name.
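Following the same pattern as the user kubeconfig above, retrieving the CAPI kubeconfig for a cluster named managed-test would look like this (the output file name is arbitrary):
kubectl --namespace=default get secret managed-test-kubeconfig \
  -o jsonpath={.data.value} | base64 --decode \
  > managed-test-capi.kubeconfig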
There are three keys in the CAPI kubeconfig for EKS clusters:
keys | purpose |
---|---|
value | contains a complete kubeconfig with the cluster admin user and token embedded |
relative | contains a kubeconfig with the cluster admin user, referencing the token file in a relative path - assumes you are mounting all the secret keys in the same directory |
single-file | contains the same token embedded in the complete kubeconfig, separated into a single file so that existing APIMachinery can reload the token file when the secret is updated |
The secret contents are regenerated every sync-period, as the token embedded in the kubeconfig and token file is only valid for a short period of time. When EKS support is enabled, the maximum sync period is 10 minutes. If you try to set --sync-period to greater than 10 minutes then an error will be raised.
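As a sketch of adjusting this, the --sync-period flag is set on the CAPA controller manager; the namespace and deployment name below assume a default installation:
kubectl -n capa-system edit deployment capa-controller-manager
# then ensure the manager container args include, for example:
#   --sync-period=10m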
EKS Console
To use the Amazon EKS Console to view workloads running in an EKS cluster created using the AWS provider (CAPA) you can do the following:
1. Create a new policy with the required IAM permissions for the console. This example can be used; for instance, a policy called EKSViewNodesAndWorkloads.
2. Assign the policy created in step 1 to an IAM user or role for the users of your EKS cluster.
3. Map the IAM user or role from step 2 to a Kubernetes user that has the RBAC permissions to view the Kubernetes resources. This needs to be done via the aws-auth configmap (used by aws-iam-authenticator) which is generated by the AWS provider. This mapping can be specified in the AWSManagedControlPlane, for example:
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "capi-managed-test-control-plane"
spec:
  region: "eu-west-2"
  sshKeyName: "capi-management"
  version: "v1.18.0"
  iamAuthenticatorConfig:
    mapRoles:
    - username: "kubernetes-admin"
      rolearn: "arn:aws:iam::1234567890:role/AdministratorAccess"
      groups:
      - "system:masters"
In the sample above, the arn:aws:iam::1234567890:role/AdministratorAccess IAM role has the EKSViewNodesAndWorkloads policy (created in step 1) attached.
EKS Addons
EKS Addons can be used with EKS clusters created using Cluster API Provider AWS.
Addons are supported in EKS clusters using Kubernetes v1.18 or greater.
Installing addons
To install an addon you need to declare it by specifying the name, the version and, optionally, how conflicts should be resolved in the AWSManagedControlPlane. For example:
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "capi-managed-test-control-plane"
spec:
  region: "eu-west-2"
  sshKeyName: "capi-management"
  version: "v1.18.0"
  addons:
  - name: "vpc-cni"
    version: "v1.6.3-eksbuild.1"
    conflictResolution: "overwrite"
Note: overwrite is the default behaviour for conflictResolution; if not otherwise specified, it is set to overwrite.
Additionally, there is a cluster flavor called eks-managedmachinepool-vpccni that you can use with clusterctl:
clusterctl generate cluster my-cluster --kubernetes-version v1.18.0 --flavor eks-managedmachinepool-vpccni > my-cluster.yaml
Updating Addons
To update the version of an addon you need to edit the AWSManagedControlPlane
instance and update the version of the addon you want to update. Using the example from the previous section we would do:
...
  addons:
  - name: "vpc-cni"
    version: "v1.7.5-eksbuild.1"
    conflictResolution: "overwrite"
...
Deleting Addons
To delete an addon from a cluster you need to edit the AWSManagedControlPlane
instance and remove the entry for the addon you want to delete.
Viewing installed addons
You can see what addons are installed on your EKS cluster by looking in the Status
of the AWSManagedControlPlane
instance.
Additionally you can run the following command:
clusterawsadm eks addons list-installed -n <<eksclustername>>
Viewing available addons
You can see what addons are available to your EKS cluster by running the following command:
clusterawsadm eks addons list-available -n <<eksclustername>>
Enabling Encryption
To enable encryption when creating a cluster you need to create a new KMS key that has an alias name starting with cluster-api-provider-aws-
.
For example, arn:aws:kms:eu-north-1:12345678901:alias/cluster-api-provider-aws-key1
.
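For illustration, the key and alias could be created with the AWS CLI (the description and alias name are placeholders; the alias must keep the cluster-api-provider-aws- prefix):
aws kms create-key --description "CAPA EKS encryption key"
aws kms create-alias \
  --alias-name alias/cluster-api-provider-aws-key1 \
  --target-key-id <key-id-from-create-key-output>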
You then need to specify the key ARN in the encryptionConfig
of the AWSManagedControlPlane
:
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "capi-managed-test-control-plane"
spec:
  ...
  encryptionConfig:
    provider: "arn:aws:kms:eu-north-1:12345678901:key/351f5544-6130-42e4-8786-2c85e546fc2d"
    resources:
    - "secrets"
You must use the ARN of the key and not the ARN of the alias.
Custom KMS Alias Prefix
If you would like to use a different alias prefix then you can use the kmsAliasPrefix
in the optional configuration file for clusterawsadm:
clusterawsadm bootstrap iam create-stack --config custom-prefix.yaml
And the contents of the configuration file:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
  eks:
    enable: true
    kmsAliasPrefix: "my-prefix-*"
EKS Cluster Upgrades
Control Plane Upgrade
Upgrading the Kubernetes version of the control plane is supported by the provider. To perform an upgrade you need to update the version
in the spec of the AWSManagedControlPlane. Once the version has changed, the provider will handle the upgrade for you.
You can only upgrade an EKS cluster by 1 minor version at a time. If you attempt to upgrade the version by more than 1 minor version, the provider will ensure the upgrade is done in multiple steps of 1 minor version each. For example, upgrading from v1.15 to v1.17 would result in your cluster being upgraded from v1.15 to v1.16 first, and then from v1.16 to v1.17.
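For example, to trigger an upgrade of the cluster from the earlier examples you would change only the version field (a sketch; the target version is illustrative):
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "capi-managed-test-control-plane"
spec:
  ...
  version: "v1.19.0" # bumped from v1.18.0; the provider reconciles the upgrade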
ROSA Support in the AWS Provider
- Feature status: Experimental
- Feature gate (required): ROSA=true
Overview
The AWS provider supports creating Red Hat OpenShift Service on AWS (ROSA) based clusters. Currently, the following features are supported:
- Provisioning/Deleting a ROSA cluster with hosted control planes (HCP)
The implementation introduces the following CRD kinds:
- ROSAControlPlane - specifies the ROSA cluster in AWS
- ROSACluster - needed only to satisfy the cluster-api contract
A new template is available in the templates folder for creating a managed ROSA workload cluster.
SEE ALSO
- Enabling ROSA Support
- Creating a cluster
- Creating MachinePools
- Upgrades
- External Auth Providers
- Support
Enabling ROSA Support
To enable support for ROSA clusters, the ROSA feature flag must be set to true. This can be done using the EXP_ROSA environment variable.
Make sure to set up your AWS environment first as described here.
export EXP_ROSA="true"
export EXP_MACHINE_POOL="true"
clusterctl init --infrastructure aws
Troubleshooting
To check the feature-gates for the Cluster API controller run the following command:
$ kubectl get deploy capi-controller-manager -n capi-system -o yaml
the feature gate container arg should have MachinePool=true
as shown below.
spec:
  containers:
  - args:
    - --feature-gates=MachinePool=true,ClusterTopology=true,...
To check the feature-gates for the Cluster API AWS controller run the following command:
$ kubectl get deploy capa-controller-manager -n capa-system -o yaml
the feature gate arg should have ROSA=true
as shown below.
spec:
  containers:
  - args:
    - --feature-gates=ROSA=true,...
Creating a ROSA cluster
Permissions
The CAPA controller requires an API token in order to be able to provision ROSA clusters:
1. Visit https://console.redhat.com/openshift/token to retrieve your API authentication token.
2. Create a credentials secret within the target namespace with the token, to be referenced later by the ROSAControlPlane:
kubectl create secret generic rosa-creds-secret \
  --from-literal=ocmToken='eyJhbGciOiJIUzI1NiIsI....' \
  --from-literal=ocmApiUrl='https://api.openshift.com'
Alternatively, you can edit CAPA controller deployment to provide the credentials:
kubectl edit deployment -n capa-system capa-controller-manager
and add the following environment variables to the manager container:
env:
- name: OCM_TOKEN
  value: "<token>"
- name: OCM_API_URL
  value: "https://api.openshift.com" # or https://api.stage.openshift.com
Prerequisites
Follow the guide here up until Step 3 to install the required tools and setup the prerequisite infrastructure. Once Step 3 is done, you will be ready to proceed with creating a ROSA cluster using cluster-api.
Creating the cluster
1. Prepare the environment:
export OPENSHIFT_VERSION="4.14.5"
export AWS_REGION="us-west-2"
export AWS_AVAILABILITY_ZONE="us-west-2a"
export AWS_ACCOUNT_ID="<account_id>"
export AWS_CREATOR_ARN="<user_arn>" # can be retrieved e.g. using `aws sts get-caller-identity`
export OIDC_CONFIG_ID="<oidc_id>" # OIDC config id created previously with `rosa create oidc-config`
export ACCOUNT_ROLES_PREFIX="ManagedOpenShift-HCP" # prefix used to create account IAM roles with `rosa create account-roles`
export OPERATOR_ROLES_PREFIX="capi-rosa-quickstart" # prefix used to create operator roles with `rosa create operator-roles --prefix <PREFIX_NAME>`

# subnet IDs created earlier
export PUBLIC_SUBNET_ID="subnet-0b54a1111111111111"
export PRIVATE_SUBNET_ID="subnet-05e72222222222222"
2. Render the cluster manifest using the ROSA cluster template:
clusterctl generate cluster <cluster-name> --from templates/cluster-template-rosa.yaml > rosa-capi-cluster.yaml
Note: The AWS role name must be no more than 64 characters in length; otherwise an error will be returned. Truncate values exceeding 64 characters.
3. If a credentials secret was created earlier, edit ROSAControlPlane to reference it:
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: ROSAControlPlane
metadata:
  name: "capi-rosa-quickstart-control-plane"
spec:
  credentialsSecretRef:
    name: rosa-creds-secret
  ...
4. Provide an AWS identity reference:
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: ROSAControlPlane
metadata:
  name: "capi-rosa-quickstart-control-plane"
spec:
  identityRef:
    kind: <IdentityType>
    name: <IdentityName>
  ...
Otherwise, make sure the following AWSClusterControllerIdentity singleton exists in your management cluster:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterControllerIdentity
metadata:
  name: "default"
spec:
  allowedNamespaces: {} # matches all namespaces
See Multi-tenancy for more details.
5. Finally, apply the manifest to create your ROSA cluster:
kubectl apply -f rosa-capi-cluster.yaml
see ROSAControlPlane CRD Reference for all possible configurations.
Creating MachinePools
Cluster API Provider AWS (CAPA) has experimental support for managed ROSA MachinePools through the infrastructure type ROSAMachinePool
. A ROSAMachinePool
is responsible for orchestrating and bootstraping a group of EC2 machines into kubernetes nodes.
Using clusterctl
to deploy
To deploy a MachinePool / ROSAMachinePool via clusterctl generate
use the template located here.
Make sure to set up your environment as described here.
clusterctl generate cluster my-cluster --from templates/cluster-template-rosa-machinepool > my-cluster.yaml
Example
Below is an example of the resources needed to create a ROSA MachinePool.
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: "${CLUSTER_NAME}-pool-0"
spec:
  clusterName: "${CLUSTER_NAME}"
  replicas: 1
  template:
    spec:
      clusterName: "${CLUSTER_NAME}"
      bootstrap:
        dataSecretName: ""
      infrastructureRef:
        name: "${CLUSTER_NAME}-pool-0"
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: ROSAMachinePool
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: ROSAMachinePool
metadata:
  name: "${CLUSTER_NAME}-pool-0"
spec:
  nodePoolName: "nodepool-0"
  instanceType: "m5.xlarge"
  subnet: "${PRIVATE_SUBNET_ID}"
  version: "${OPENSHIFT_VERSION}"
see ROSAMachinePool CRD Reference for all possible configurations.
Upgrades
Control Plane Upgrade
Upgrading the OpenShift version of the control plane is supported by the provider. To perform an upgrade you need to update the version
in the spec of the ROSAControlPlane
. Once the version has changed the provider will handle the upgrade for you.
The Upgrade state can be checked in the conditions under ROSAControlPlane.status
.
MachinePool Upgrade
Upgrading the OpenShift version of the MachinePools is supported by the provider and can be performed independently from control plane upgrades. To perform an upgrade you need to update the version
in the spec of the ROSAMachinePool. Once the version has changed, the provider will handle the upgrade for you.
The upgrade state can be checked in the conditions under ROSAMachinePool.status.
The version of a MachinePool can't be greater than the control plane version.
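For example, reusing the ROSAMachinePool from the example above, an upgrade is triggered by editing only the version field (a sketch; the target version is illustrative):
spec:
  nodePoolName: "nodepool-0"
  instanceType: "m5.xlarge"
  subnet: "${PRIVATE_SUBNET_ID}"
  version: "4.14.6" # bumped; must not exceed the control plane version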
External Auth Providers (BYOI)
ROSA allows you to Bring Your Own Identity (BYOI) to manage and authenticate cluster users.
Enabling
To enable this feature, the enableExternalAuthProviders field should be set to true on cluster creation. Changing this field afterwards will have no effect:
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: ROSAControlPlane
metadata:
  name: "capi-rosa-quickstart-control-plane"
spec:
  enableExternalAuthProviders: true
....
Note: This feature requires OpenShift version 4.15.5 or newer.
Usage
After creating and configuring your OIDC provider of choice, the next step is to configure ROSAControlPlane externalAuthProviders
as follows:
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: ROSAControlPlane
metadata:
  name: "capi-rosa-quickstart-control-plane"
spec:
  enableExternalAuthProviders: true
  externalAuthProviders:
  - name: my-oidc-provider
    issuer:
      issuerURL: https://login.microsoftonline.com/<tenant-id>/v2.0 # e.g. if using Microsoft Entra ID
      audiences: # audiences that will be trusted by the kube-apiserver
      - "audience1" # usually the client ID
    claimMappings:
      username:
        claim: email
        prefixPolicy: ""
      groups:
        claim: groups
....
Note: oidcProviders
only accepts one entry at the moment.
Accessing the cluster
Setting up RBAC
When enableExternalAuthProviders is set to true, the ROSA provider will generate a temporary admin kubeconfig secret named <cluster-name>-bootstrap-kubeconfig in the same namespace. This kubeconfig can be used to access the cluster to set up RBAC for OIDC users/groups.
The following example binds the cluster-admin
role to an OIDC group, giving all users in that group admin permissions.
kubectl get secret <cluster-name>-bootstrap-kubeconfig -o jsonpath='{.data.value}' | base64 -d > /tmp/capi-admin-kubeconfig
export KUBECONFIG=/tmp/capi-admin-kubeconfig
kubectl create clusterrolebinding oidc-cluster-admins --clusterrole cluster-admin --group <group-id>
Note: The generated bootstrap kubeconfig is only valid for 24h, and will not be usable afterwards. However, users can opt to manually delete the secret object to trigger the generation of a new one which will be valid for another 24h.
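For example, forcing regeneration looks like this (an illustrative command based on the note above):
kubectl delete secret <cluster-name>-bootstrap-kubeconfig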
Login using the cli
The kubelogin kubectl plugin can be used to login with OIDC credentials using the cli.
Configuring OpenShift Console
The OpenShift Console needs to be configured before it can be used to authenticate and login to the cluster.
1. Set up a new client in your OIDC provider with the following redirect URL: <console-url>/auth/callback. You can find the console URL in the status field of the ROSAControlPlane once the cluster is ready:
kubectl get rosacontrolplane <control-plane-name> -o jsonpath='{.status.consoleURL}'
2. Create a new client secret in your OIDC provider and store the value in a kubernetes secret in the same namespace as your cluster:
kubectl create secret generic console-client-secret --from-literal=clientSecret='<client-secret-value>'
3. Configure the ROSAControlPlane external auth provider with the created client:
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: ROSAControlPlane
metadata:
  name: "capi-rosa-quickstart-control-plane"
spec:
  enableExternalAuthProviders: true
  externalAuthProviders:
  - name: my-oidc-provider
    issuer:
      issuerURL: https://login.microsoftonline.com/<tenant-id>/v2.0 # e.g. if using Microsoft Entra ID
      audiences: # audiences that will be trusted by the kube-apiserver
      - "audience1"
      - <console-client-id> # <----New
    claimMappings:
      username:
        claim: email
        prefixPolicy: ""
      groups:
        claim: groups
    oidcClients: # <----New
    - componentName: console
      componentNamespace: openshift-console
      clientID: <console-client-id>
      clientSecret:
        name: console-client-secret # secret name created in step 2
....
see ROSAControlPlane CRD Reference for all possible configurations.
Creating an issue for ROSA
When creating an issue for a ROSA-HCP cluster, include the logs for the capa-controller-manager and capi-controller-manager deployment pods; the logs can be saved to text files using the commands below. Also include the YAML files for all the resources used to create the ROSA cluster:
- Cluster
- ROSAControlPlane
- MachinePool
- ROSAMachinePool
$ kubectl get pod -n capa-system
NAME READY STATUS RESTARTS AGE
capa-controller-manager-77f5b946b-sddcg 1/1 Running 1 3d3h
$ kubectl logs -n capa-system capa-controller-manager-77f5b946b-sddcg > capa-controller-manager-logs.txt
$ kubectl get pod -n capi-system
NAME READY STATUS RESTARTS AGE
capi-controller-manager-78dc897784-f8gpn 1/1 Running 18 26d
$ kubectl logs -n capi-system capi-controller-manager-78dc897784-f8gpn > capi-controller-manager-logs.txt
Bring Your Own AWS Infrastructure
Normally, Cluster API will create infrastructure on AWS when standing up a new workload cluster. However, it is possible to have Cluster API re-use external AWS infrastructure instead of creating its own infrastructure.
There are two possible ways to do this:
- By consuming existing AWS infrastructure
- By using externally managed AWS infrastructure
IMPORTANT NOTE: This externally managed AWS infrastructure should not be confused with EKS-managed clusters.
Follow the instructions below to configure Cluster API to consume existing AWS infrastructure.
Consuming Existing AWS Infrastructure
Overview
CAPA supports using existing AWS resources when creating AWS clusters, giving users the flexibility to bring their own existing resources into the cluster instead of having CAPA create new ones.
Prerequisites
In order to have Cluster API consume existing AWS infrastructure, you will need to have already created the following resources:
- A VPC
- One or more private subnets (subnets that do not have a route to an Internet gateway)
- A NAT gateway for each private subnet, along with associated Elastic IP addresses (only needed if the nodes require access to the Internet, i.e. pulling public images)
- A public subnet in the same Availability Zone (AZ) for each private subnet (this is required for NAT gateways to function properly)
- An Internet gateway for all public subnets (only required if the workload cluster is set to use an Internet facing load balancer or one or more NAT gateways exist in the VPC)
- Route table associations that provide connectivity to the Internet through a NAT gateway (for private subnets) or the Internet gateway (for public subnets)
- VPC endpoints for ec2, elasticloadbalancing, secretsmanager and autoscaling (if using MachinePools) when the private subnets do not have a NAT gateway
You will need the ID of the VPC and subnet IDs that Cluster API should use. This information is available via the AWS Management Console or the AWS CLI.
Note that there is no need to create an Elastic Load Balancer (ELB), security groups, or EC2 instances; Cluster API will take care of these items.
If you want to use existing security groups, these can be specified and new ones will not be created.
If you want to use an existing control plane load balancer, specify its name.
Tagging AWS Resources
Cluster API does tag the AWS resources it creates. The sigs.k8s.io/cluster-api-provider-aws/cluster/<cluster-name> tag (where <cluster-name> matches the metadata.name field of the Cluster object), with a value of owned, tells Cluster API that it has ownership of the resource. In this case, Cluster API will modify and manage the lifecycle of the resource.
When consuming existing AWS infrastructure, the Cluster API AWS provider does not require any tags to be present. The absence of the tags on an AWS resource indicates to Cluster API that it should not modify the resource or attempt to manage the lifecycle of the resource.
However, the built-in Kubernetes AWS cloud provider does require certain tags in order to function properly. Specifically, all subnets where Kubernetes nodes reside should have the kubernetes.io/cluster/<cluster-name>
tag present. Private subnets should also have the kubernetes.io/role/internal-elb
tag with a value of 1, and public subnets should have the kubernetes.io/role/elb
tag with a value of 1. These latter two tags help the cloud provider understand which subnets to use when creating load balancers.
Finally, if the controller manager isn’t started with the --configure-cloud-routes: "false"
parameter, the route table(s) will also need the kubernetes.io/cluster/<cluster-name>
tag. (This parameter can be added by customizing the KubeadmConfigSpec
object of the KubeadmControlPlane
object.)
Note: Tagging of resources is the responsibility of the user and is not managed by CAPA controllers.
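For illustration, the tags could be applied out-of-band with the AWS CLI (resource IDs, the cluster name, and tag values here are placeholders):
aws ec2 create-tags --resources subnet-0261219d564bb0dc5 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared Key=kubernetes.io/role/internal-elb,Value=1
aws ec2 create-tags --resources rtb-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned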
Configuring the AWSCluster Specification
Specifying existing infrastructure for Cluster API to use takes place in the specification for the AWSCluster object. Specifically, you will need to add an entry with the VPC ID and the IDs of all applicable subnets into the network
field. Here is an example:
For EC2:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
For EKS:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
In both cases, the network field is specified like this:
spec:
  network:
    vpc:
      id: vpc-0425c335226437144
    subnets:
    - id: subnet-0261219d564bb0dc5
    - id: subnet-0fdcccba78668e013
When you use kubectl apply
to apply the Cluster and AWSCluster specifications to the management cluster, Cluster API will use the specified VPC ID and subnet IDs, and will not create a new VPC, new subnets, or other associated resources. It will, however, create a new ELB and new security groups.
Placing EC2 Instances in Specific AZs
To distribute EC2 instances across multiple AZs, you can add information to the Machine specification. This is optional and only necessary if control over AZ placement is desired.
To tell Cluster API that an EC2 instance should be placed in a particular AZ but allow Cluster API to select which subnet in that AZ can be used, add this to the Machine specification:
spec:
  failureDomain: "us-west-2a"
If using a MachineDeployment, specify AZ placement like so:
spec:
  template:
    spec:
      failureDomain: "us-west-2b"
Note that all replicas within a MachineDeployment will reside in the same AZ.
Placing EC2 Instances in Specific Subnets
To specify that an EC2 instance should be placed in a specific subnet, add this to the AWSMachine specification:
spec:
  subnet:
    id: subnet-0a3507a5ad2c5c8c3
When using MachineDeployments, users can control subnet selection by adding information to the AWSMachineTemplate associated with that MachineDeployment, like this:
spec:
  template:
    spec:
      subnet:
        id: subnet-0a3507a5ad2c5c8c3
Users may either specify failureDomain on the Machine or MachineDeployment objects, or explicitly specify subnet IDs on the AWSMachine or AWSMachineTemplate objects. If both are specified, the subnet ID is used and the failureDomain is ignored.
Placing EC2 Instances in Specific External VPCs
CAPA clusters are deployed within a single VPC, but it's possible to place machines in external VPCs. For this kind of configuration, we assume that all the VPCs have the ability to communicate, either through external peering, a transit gateway, or some other mechanism already established outside of CAPA. CAPA will not create a tunnel or manage the network configuration for any secondary VPCs.
The AWSMachineTemplate subnet field allows specifying filters or specific subnet IDs for worker machine placement. If the filters or subnet ID match a subnet in a secondary VPC, CAPA will place the machine in that VPC and subnet.
spec:
  template:
    spec:
      subnet:
        filters:
        - name: "vpc-id"
          values:
          - "secondary-vpc-id"
      securityGroupOverrides:
        node: sg-04e870a3507a5ad2c5c8c2
        node-eks-additional: sg-04e870a3507a5ad2c5c8c1
Caveats/Notes
CAPA helpfully creates security groups for the various roles in the cluster and automatically attaches them to workers. However, security groups are tied to a specific VPC, so workers placed in a VPC outside the cluster's will need these security groups to be created by some external process first and set in the securityGroupOverrides field, otherwise EC2 instance creation will fail.
Security Groups
To use existing security groups for instances for a cluster, add this to the AWSCluster specification:
spec:
  network:
    securityGroupOverrides:
      bastion: sg-0350a3507a5ad2c5c8c3
      controlplane: sg-0350a3507a5ad2c5c8c3
      apiserver-lb: sg-0200a3507a5ad2c5c8c3
      node: sg-04e870a3507a5ad2c5c8c3
      lb: sg-00a3507a5ad2c5c8c3
Any additional security groups specified in an AWSMachineTemplate will be applied in addition to these overridden security groups.
To specify additional security groups for the control plane load balancer for a cluster, add this to the AWSCluster specification:
spec:
  controlPlaneLoadBalancer:
    additionalSecurityGroups:
    - sg-0200a3507a5ad2c5c8c3
    - ...
It’s also possible to override the cluster security groups for an individual AWSMachine or AWSMachineTemplate:
spec:
  securityGroupOverrides:
    node: sg-04e870a3507a5ad2c5c8c2
    node-eks-additional: sg-04e870a3507a5ad2c5c8c1
Control Plane Load Balancer
The cluster control plane is accessed through a Classic ELB. By default, Cluster API creates the Classic ELB. To use an existing Classic ELB, add its name to the AWSCluster specification:
spec:
  controlPlaneLoadBalancer:
    name: my-classic-elb-name
As control plane instances are added or removed, Cluster API will register and deregister them, respectively, with the Classic ELB.
It’s also possible to specify custom ingress rules for the control plane load balancer. To do so, add this to the AWSCluster specification:
spec:
  controlPlaneLoadBalancer:
    ingressRules:
    - description: "example ingress rule"
      protocol: "-1" # all
      fromPort: 7777
      toPort: 7777
WARNING: Using an existing Classic ELB is an advanced feature. If you use an existing Classic ELB, you must correctly configure it, and attach subnets to it.
An incorrectly configured Classic ELB can easily lead to a non-functional cluster. We strongly recommend you let Cluster API create the Classic ELB.
Control Plane ingress rules
It’s possible to specify custom ingress rules for the control plane itself. To do so, add this to the AWSCluster specification:
spec:
  network:
    additionalControlPlaneIngressRules:
    - description: "example ingress rule"
      protocol: "-1" # all
      fromPort: 7777
      toPort: 7777
Caveats/Notes
- When both public and private subnets are available in an AZ, CAPI will choose the private subnet in the AZ over the public subnet for placing EC2 instances.
- If you configure CAPI to use existing infrastructure as outlined above, CAPI will not create an SSH bastion host. Combined with the previous bullet, this means you must make sure you have established some form of connectivity to the instances that CAPI will create.
Using Externally managed AWS Clusters
Overview
Alternatively, CAPA supports externally managed cluster infrastructure, which is useful for scenarios where a different persona is managing the cluster infrastructure out-of-band (via an external system) while still wanting to use CAPI for automated machine management. Users can make use of existing AWSCluster CRDs in their externally managed clusters.
How to use externally managed clusters?
Users have to add the cluster.x-k8s.io/managed-by: "<name-of-system>" annotation to indicate that AWS resources are managed externally. If CAPA controllers come across this annotation on any of the AWS resources during reconciliation, they will ignore the resource and not perform any reconciliation (including creating/modifying any of the AWS resources, or their status).
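For example, an AWSCluster marked as externally managed might look like this (the name and annotation value are illustrative):
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: my-externally-managed-cluster
  annotations:
    cluster.x-k8s.io/managed-by: "my-infra-system" # CAPA will skip reconciling this resource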
A predicate ResourceIsNotExternallyManaged
is exposed by Cluster API which allows CAPA controllers to differentiate between externally managed vs CAPA managed resources. For example:
c, err := ctrl.NewControllerManagedBy(mgr).
For(&providerv1.InfraCluster{}).
Watches(...).
WithOptions(options).
WithEventFilter(predicates.ResourceIsNotExternallyManaged(logger.FromContext(ctx))).
Build(r)
if err != nil {
return errors.Wrap(err, "failed setting up with a controller manager")
}
The external system must provide all required fields within the spec of the AWSCluster and must adhere to the CAPI provider contract and set the AWSCluster status to be ready when it is appropriate to do so.
IMPORTANT NOTE: Users should take care to skip reconciliation in external controllers within the mapping function while enqueuing requests. For example:
err := c.Watch(
    &source.Kind{Type: &infrav1.AWSCluster{}},
    handler.EnqueueRequestsFromMapFunc(func(a client.Object) []reconcile.Request {
        if annotations.IsExternallyManaged(awsCluster) {
            log.Info("AWSCluster is externally managed, skipping mapping.")
            return nil
        }
        return []reconcile.Request{
            {
                NamespacedName: client.ObjectKey{Namespace: c.Namespace, Name: c.Spec.InfrastructureRef.Name},
            },
        }
    }))
if err != nil {
    // handle it
}
Caveats
Once a user has created an externally managed AWSCluster, it cannot be converted to a CAPA-managed cluster. However, converting from managed to externally managed is allowed.
Users should only use this feature if their cluster infrastructure lifecycle management has constraints that the reference implementation does not support. See user stories for more details.
Bring your own (BYO) Public IPv4 addresses
Cluster API also provides a mechanism to allocate Elastic IPs from an existing public IPv4 pool that you have brought to AWS[1].
Bringing your own public IPv4 pool (BYOIPv4) can be used as an alternative to buying public IPs from AWS, especially considering the changes in charging for public IPv4 addresses since February 2024[2].
Supported resources for BYO Public IPv4:
- NAT Gateways
- Network Load Balancer for the API server
- Machines
Use BYO Public IPv4 when you have brought custom IPv4 CIDR blocks to AWS and want the cluster to automatically use IPs from the custom pool instead of Amazon-provided pools.
Prerequisites and limitations for BYO Public IPv4 Pool
- BYOIPv4 is limited to selected AWS regions. See the AWS documentation on regional availability for more.
- The IPv4 CIDR block must be provisioned and advertised to the AWS account before the cluster is installed.
- Public IPv4 addresses are limited to the network border group to which the CIDR block has been advertised[3][4], and the NetworkSpec.ElasticIpPool.PublicIpv4Pool must be in the same network border group as the one the cluster will be installed in.
- Only NAT Gateways and the Network Load Balancer for the API server will consume from the IPv4 pool defined in the network scope.
- To consume public IPv4 addresses from a custom pool, the pool must be assigned to each machine.
Steps to set BYO Public IPv4 Pool to core infrastructure
Currently, CAPA supports BYO Public IPv4 to core components NAT Gateways and Network Load Balancer for the internet-facing API server.
To specify a public IPv4 pool for core components, set the elasticIpPool in the VPC configuration as follows:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: aws-cluster-localzone
spec:
  region: us-east-1
  networkSpec:
    vpc:
      elasticIpPool:
        publicIpv4Pool: ipv4pool-ec2-0123456789abcdef0
        publicIpv4PoolFallbackOrder: amazon-pool
Then all the Elastic IPs will be created by consuming from the pool ipv4pool-ec2-0123456789abcdef0
.
Steps to BYO Public IPv4 Pool to machines
To create a machine consuming from a custom public IPv4 pool, you must set the pool ID in the AWSMachine spec and set publicIP to true:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachine
metadata:
  name: byoip-s55p4-bootstrap
spec:
  # placeholder for AWSMachine spec
  elasticIpPool:
    publicIpv4Pool: ipv4pool-ec2-0123456789abcdef0
    publicIpv4PoolFallbackOrder: amazon-pool
  publicIP: true
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html
[2] https://aws.amazon.com/blogs/aws/new-aws-public-ipv4-address-charge-public-ip-insights/
[3] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html#byoip-onboard
[4] https://docs.aws.amazon.com/cli/latest/reference/ec2/advertise-byoip-cidr.html
Specifying the IAM Role to use for Management Components
Prerequisites
To be able to specify the IAM role that the management components should run as, your cluster must be set up with the ability to assume IAM roles using one of the following solutions:
Setting IAM Role
Set the AWS_CONTROLLER_IAM_ROLE
environment variable to the ARN of the IAM role to use when performing the clusterctl init
command.
For example:
export AWS_CONTROLLER_IAM_ROLE=arn:aws:iam::1234567890:role/capa-management-components
clusterctl init --infrastructure=aws
IAM Role Trust Policy
IAM Roles for Service Accounts
When creating the IAM role, the following trust policy will need to be used with the AWS_ACCOUNT_ID
, AWS_REGION
and OIDC_PROVIDER_ID
environment variables replaced.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_PROVIDER_ID}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_PROVIDER_ID}:sub": [
            "system:serviceaccount:capa-system:capa-controller-manager",
            "system:serviceaccount:capi-system:capi-controller-manager",
            "system:serviceaccount:capa-eks-control-plane-system:capa-eks-control-plane-controller-manager",
            "system:serviceaccount:capa-eks-bootstrap-system:capa-eks-bootstrap-controller-manager"
          ]
        }
      }
    }
  ]
}
If you plan to use the controllers.cluster-api-provider-aws.sigs.k8s.io
role created by clusterawsadm then you’ll need to add the following to your AWSIAMConfiguration:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
  clusterAPIControllers:
    disabled: false
    trustStatements:
    - Action:
      - "sts:AssumeRoleWithWebIdentity"
      Effect: "Allow"
      Principal:
        Federated:
        - "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_PROVIDER_ID}"
      Condition:
        "ForAnyValue:StringEquals":
          "oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_PROVIDER_ID}:sub":
          - system:serviceaccount:capa-system:capa-controller-manager
          - system:serviceaccount:capa-eks-control-plane-system:capa-eks-control-plane-controller-manager # Include if also using EKS
With this you can then set AWS_CONTROLLER_IAM_ROLE
to arn:aws:iam::${AWS_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io
Kiam / kube2iam
When creating the IAM role, you will need to apply the kubernetes.io/cluster/${CLUSTER_NAME}/role: enabled tag to the role and use the following trust policy with the AWS_ACCOUNT_ID and CLUSTER_NAME environment variables correctly replaced.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}.worker-node-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
If you plan to use the controllers.cluster-api-provider-aws.sigs.k8s.io
role created by clusterawsadm then you’ll need to add the following to your AWSIAMConfiguration:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
  clusterAPIControllers:
    disabled: false
    trustStatements:
    - Action:
      - "sts:AssumeRole"
      Effect: "Allow"
      Principal:
        Service:
        - "ec2.amazonaws.com"
    - Action:
      - "sts:AssumeRole"
      Effect: "Allow"
      Principal:
        AWS:
        - "arn:aws:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}.worker-node-role"
With this you can then set AWS_CONTROLLER_IAM_ROLE
to arn:aws:iam::${AWS_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io
External AWS Cloud Provider and AWS CSI Driver
Overview
Support for in-tree cloud providers and CSI drivers is coming to an end; CAPA supports various upgrade paths to an external cloud provider (Cloud Controller Manager, CCM) and external CSI drivers. This document explains how to create a CAPA cluster with external CSI/CCM plugins and how to upgrade existing clusters that rely on in-tree providers.
Creating clusters with external CSI/CCM and validating
For clusters that will use an external CCM, the cloud-provider: external flag needs to be set in the KubeadmConfig resources of both the KubeadmControlPlane and MachineDeployment resources.
clusterConfiguration:
  apiServer:
    extraArgs:
      cloud-provider: external
  controllerManager:
    extraArgs:
      cloud-provider: external
initConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: external
joinConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: external
An external CCM and the EBS CSI driver can be installed manually or using ClusterResourceSets (CRS) onto the CAPA workload cluster.
To install them with CRS, create a CRS resource on the management cluster with labels, for example csi: external and ccm: external.
Then, when creating Cluster objects for workload clusters that should have this CRS applied, create them with matching labels csi: external and ccm: external for CSI and CCM, respectively. A minimal CRS sketch follows below.
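As a sketch, a CRS matching the ccm: external label might look like this (resource names are illustrative, and it assumes the CCM manifests have been stored in a ConfigMap named aws-ccm-addon on the management cluster):
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: aws-ccm-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      ccm: external # applied to every Cluster carrying this label
  resources:
  - name: aws-ccm-addon # ConfigMap containing the CCM manifests
    kind: ConfigMap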
Manifests for installing the AWS CCM and the AWS EBS CSI driver are available from their respective GitHub repositories (see here for the AWS CCM and here for the AWS EBS CSI driver).
An example of a workload cluster manifest with labels assigned for matching to a CRS can be found here.
Verifying dynamically provisioned volumes with CSI driver
Once you have the cluster with the external CCM and CSI controller running successfully, you can test that the CSI driver is functioning with the following steps after switching to the workload cluster:
- Create a service (say, nginx):
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: nginx-web
    port: 80
  selector:
    app: nginx
- Create a storageclass and statefulset for the service created above with the persistent volume assigned to the storageclass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-ebs-volumes
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: io1
  iopsPerGB: "100"
allowedTopologies:
- matchLabelExpressions:
  - key: topology.ebs.csi.aws.com/zone
    values:
    - us-east-1a
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  serviceName: "nginx-svc"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - name: nginx-web
          containerPort: 80
        volumeMounts:
        - name: nginx-volumes
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-volumes
        persistentVolumeClaim:
          claimName: nginx-volumes
  volumeClaimTemplates:
  - metadata:
      name: nginx-volumes
    spec:
      storageClassName: "aws-ebs-volumes"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 4Gi
- Once you apply the above manifest, the EBS volumes will be created and attached to the worker nodes.
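To verify, you can check that the claims are bound and the volumes were attached (illustrative commands, not part of the original steps):
kubectl get pvc,pv
kubectl get volumeattachments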
IMPORTANT WARNING: The manifests for the AWS EBS CSI driver and the AWS external cloud provider can cause issues when the respective controllers are installed on the AWS cluster: statefulsets may fail to create volumes on existing EC2 instances. The CSI controller deployment and the CCM need to be pinned to the control plane, which has the right permissions to create, attach and mount volumes to EC2 instances. To achieve this, add the following node affinity rules and tolerations to the CSI driver controller deployment and the CCM DaemonSet manifests.
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
- effect: NoSchedule
  key: node-role.kubernetes.io/control-plane
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
      - matchExpressions:
        - key: node-role.kubernetes.io/master
          operator: Exists
Validated upgrade paths for existing clusters
From Kubernetes 1.23 onwards, the CSIMigrationAWS flag is enabled by default, which requires the installation of an external CSI driver unless CSIMigrationAWS is disabled by the user.
For installing external CSI/CCM in the upgraded cluster, CRS can be used; see the section above for details.
CCM and CSI do not need to be migrated to external plugins at the same time; external CSI drivers work with the in-tree CCM. (Warning: using the in-tree CSI with an external CCM does not work.)
The following 3 upgrade paths are validated:
- Scenario 1: During the upgrade to v1.23.x, disabling the CSIMigrationAWS flag and keeping the in-tree CCM and CSI.
- Scenario 2: During the upgrade to v1.23.x, enabling the CSIMigrationAWS flag and using the in-tree CCM with an external CSI.
- Scenario 3: During the upgrade to v1.23.x, enabling the CSIMigrationAWS flag and using an external CCM and CSI.
 | CSI | CCM | feature-gate CSIMigrationAWS | external-cloud-volume-plugin
---|---|---|---|---
Scenario 1 | | | |
From Kubernetes < v1.23 | in-tree | in-tree | off | NA
To Kubernetes >= v1.23 | in-tree | in-tree | off | NA
Scenario 2 | | | |
From Kubernetes < v1.23 | in-tree | in-tree | off | NA
To Kubernetes >= v1.23 | external | in-tree | on | NA
Scenario 3 | | | |
From Kubernetes < v1.23 | in-tree | in-tree | off | NA
To Kubernetes >= v1.23 | external | external | on | aws
KubeadmConfig in the upgraded cluster for scenario 1:
clusterConfiguration:
  apiServer:
    extraArgs:
      cloud-provider: aws
  controllerManager:
    extraArgs:
      cloud-provider: aws
      feature-gates: CSIMigrationAWS=false
initConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: aws
      feature-gates: CSIMigrationAWS=false
    name: '{{ ds.meta_data.local_hostname }}'
joinConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: aws
      feature-gates: CSIMigrationAWS=false
KubeadmConfig in the upgraded cluster for scenario 2:
When CSIMigrationAWS=true, the installed external CSI driver will be used while relying on the in-tree CCM.
clusterConfiguration:
  apiServer:
    extraArgs:
      cloud-provider: aws
      feature-gates: CSIMigrationAWS=true # set only if the Kubernetes version is < 1.23.x; otherwise this flag is enabled by default
  controllerManager:
    extraArgs:
      cloud-provider: aws
      feature-gates: CSIMigrationAWS=true # set only if the Kubernetes version is < 1.23.x; otherwise this flag is enabled by default
initConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: aws
      feature-gates: CSIMigrationAWS=true # set only if the Kubernetes version is < 1.23.x; otherwise this flag is enabled by default
joinConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: aws
      feature-gates: CSIMigrationAWS=true # set only if the Kubernetes version is < 1.23.x; otherwise this flag is enabled by default
KubeadmConfig in the upgraded cluster for scenario 3:
The external-cloud-volume-plugin flag needs to be set for old kubelets to keep talking to the in-tree CCM; the upgrade fails if this is not set.
clusterConfiguration:
  apiServer:
    extraArgs:
      cloud-provider: external
  controllerManager:
    extraArgs:
      cloud-provider: external
      external-cloud-volume-plugin: aws
initConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: external
joinConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: external
Restricting Cluster API to certain namespaces
By default, cluster-api-provider-aws controllers reconcile cluster-api objects across all namespaces in the cluster. However, it is possible to restrict reconciliation to a single namespace, and this document tells you how.
Contents
Use cases
- Grouping clusters into a namespace based on the AWS account will allow managing clusters across multiple AWS accounts. This will require each cluster-api-provider-aws controller to have credentials to its respective AWS account. These credentials can be created as a Kubernetes secret and mounted in the pod at /home/.aws, or passed as environment variables.
- Grouping clusters into a namespace based on their environment (test, qualification, canary, production) will allow a phased rollout of cluster-api-provider-aws releases.
- Grouping clusters into a namespace based on the infrastructure provider will allow running multiple cluster-api provider implementations side-by-side and managing clusters across infrastructure providers.
Configuring cluster-api-provider-aws
controllers
- Create the namespace that the cluster-api-provider-aws controller will watch for cluster-api objects:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: my-pet-clusters # edit if necessary
EOF
- Deploy/edit the aws-provider-controller-manager controller statefulset.
Specifically, edit the container spec for cluster-api-aws-controller in the aws-provider-controller-manager statefulset to pass a value to the namespace CLI flag:
- -namespace=my-pet-clusters # edit this if necessary
Once the aws-provider-controller-manager-0
pod restarts,
cluster-api-provider-aws
controllers will only reconcile the cluster-api
objects in the my-pet-clusters
namespace.
Using IAM roles in management cluster instead of AWS credentials
Overview
Sometimes users might want to use IAM roles to deploy management clusters. If a user already has a management cluster that was created using AWS credentials, CAPA provides a way to use IAM roles instead of these credentials.
Prerequisites
You need a bootstrap cluster created with AWS credentials. These credentials can be temporary. To create temporary credentials, please follow this doc.
We can verify whether this bootstrap cluster is using AWS credentials by checking the capa-manager-bootstrap-credentials secret created in the capa-system namespace:
kubectl get secret -n capa-system capa-manager-bootstrap-credentials -o=jsonpath='{.data.credentials}' | { base64 -d 2>/dev/null || base64 -D; }
which will give output similar to below:
[default]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret-access-key>
region = us-east-1
aws_session_token = <session-token>
Goal
Create a management cluster which uses instance profiles (IAM roles) attached to EC2 instance.
Steps for CAPA-managed clusters
1. Create a workload cluster on the existing bootstrap cluster. Refer to the quick start guide for more details. Since only control-plane nodes have the required IAM roles attached, the CAPA deployment should have the necessary tolerations for the master (control-plane) nodes and a node selector for the master.
Note: A cluster with a single control plane node won't be sufficient here due to the NoSchedule taint.
2. Get the kubeconfig for the new target management cluster (created in the previous step) once it is up and running.
3. Zero the credentials the CAPA controller started with, so that the target management cluster uses empty credentials and not the credentials previously used to create the bootstrap cluster:
clusterawsadm controller zero-credentials --namespace=capa-system
For more details, please refer to the zero-credentials doc.
4. Roll out and restart the capa-controller-manager deployment:
clusterawsadm controller rollout-controller --kubeconfig=kubeconfig --namespace=capa-system
For more details, please refer to the rollout-controller doc.
5. Use clusterctl init with the new cluster's kubeconfig to install the provider components. For more details on preparing for init, please refer to the clusterctl init doc.
6. Use clusterctl move to move the Cluster API resources from the bootstrap cluster to the target management cluster. For more details on preparing for move, please refer to the clusterctl move doc.
7. Once the resources are moved to the target management cluster successfully, capa-manager-bootstrap-credentials will be created as nil, and hence CAPA controllers will fall back to using the attached instance profiles.
8. Delete the bootstrap cluster with the AWS credentials.
Failure Domains
A failure domain in the AWS provider corresponds to an availability zone within an AWS region.
In AWS, Availability Zones are distinct locations within an AWS Region that are engineered to be isolated from failures in other Availability Zones. They provide inexpensive, low-latency network connectivity to other Availability Zones in the same AWS Region, to ensure a cluster (or any application) is resilient to failure.
If a zone goes down, your cluster can continue to run, as the other zones are physically separated and unaffected.
More details of availability zones and regions can be found in the AWS docs.
The usage of failure domains for control-plane and worker nodes can be found below in detail:
Failure domains in control-plane nodes
By default, the control plane of a workload cluster created by CAPA will span multiple availability zones (AZs) (also referred to as “failure domains”) when using multiple control plane nodes. This is because CAPA will, by default, create public and private subnets in all the AZs of a region (up to a maximum of 3 AZs by default). If a region has more than 3 AZs then CAPA will pick 3 AZs to use.
Configuring CAPA to Use Specific AZs
The Cluster API controller will look for the FailureDomain status field and will set the FailureDomain field in a Machine
if a value hasn’t already been explicitly set. It will try to ensure that the machines are spread across all the failure domains.
The AWSMachine
controller looks for a failure domain (i.e. Availability Zone) first in the Machine
before checking in the network
specification of AWSMachine
. This failure domain is then used when provisioning the AWSMachine
.
Using FailureDomain in Machine/MachineDeployment spec
To control the placement of an AWSMachine into a failure domain (i.e. an Availability Zone), we can explicitly state the failure domain in the Machine. The best way is to specify this using the FailureDomain field within the Machine (or MachineDeployment) spec.
For example:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
    cluster.x-k8s.io/control-plane: "true"
  name: controlplane-0
  namespace: default
spec:
  version: "v1.22.1"
  clusterName: my-cluster
  failureDomain: "1"
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
      kind: KubeadmConfigTemplate
      name: my-cluster-md-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSMachineTemplate
    name: my-cluster-md-0
IMPORTANT WARNING: All the replicas within a
MachineDeployment
will reside in the same Availability Zone.
Using FailureDomain in the network object of AWSCluster
Another way to explicitly instruct CAPA to create resources in specific AZs (rather than choosing at random) is to add a network object to the AWSCluster specification. Here is an example network that creates resources across three AZs in the "us-west-2" region:
spec:
  network:
    vpc:
      cidrBlock: 10.50.0.0/16
    subnets:
    - availabilityZone: us-west-2a
      cidrBlock: 10.50.0.0/20
      isPublic: true
    - availabilityZone: us-west-2a
      cidrBlock: 10.50.16.0/20
    - availabilityZone: us-west-2b
      cidrBlock: 10.50.32.0/20
      isPublic: true
    - availabilityZone: us-west-2b
      cidrBlock: 10.50.48.0/20
    - availabilityZone: us-west-2c
      cidrBlock: 10.50.64.0/20
      isPublic: true
    - availabilityZone: us-west-2c
      cidrBlock: 10.50.80.0/20
Note: This method can also be used with worker nodes.
Specifying the CIDR block alone for the VPC is not enough; users must also supply a list of subnets that provides the desired AZ, the CIDR for the subnet, and whether the subnet is public (has a route to an Internet gateway) or is private (does not have a route to an Internet gateway).
Note that CAPA insists that there must be a public subnet (and associated Internet gateway), even if no public load balancer is requested for the control plane. Therefore, for every AZ where a control plane node should be placed, the network
object must define both a public and private subnet.
Once CAPA is provided with a network
that spans multiple AZs, the KubeadmControlPlane controller will automatically distribute control plane nodes across multiple AZs. No further configuration from the user is required.
Note: This method can also be used if you do not want to split your EC2 instances across multiple AZs.
Changing AZ defaults
When creating default subnets, a maximum of 3 AZs will be used by default. If you are creating a cluster in a region that has more than 3 AZs, then 3 AZs will be picked in alphabetical order from that region.
If this default behaviour for the maximum number of AZs and the ordered selection method doesn't suit your requirements, you can use the following to change the behaviour:
- availabilityZoneUsageLimit - specifies the maximum number of availability zones (AZs) that should be used in a region when automatically creating subnets.
- availabilityZoneSelection - specifies how AZs should be selected if there are more AZs in a region than specified by availabilityZoneUsageLimit. There are 2 selection schemes:
  - Ordered - selects based on alphabetical order
  - Random - selects AZs randomly in a region
For example, if you wanted a maximum of 2 AZs using a random selection scheme:
spec:
  network:
    vpc:
      availabilityZoneUsageLimit: 2
      availabilityZoneSelection: Random
Caveats
Deploying control plane nodes across multiple AZs is not a panacea to cure all availability concerns. The sizing and overall utilization of the cluster will greatly affect the behavior of the cluster and the workloads hosted there in the event of an AZ failure. Careful planning is needed to maximize the availability of the cluster even in the face of an AZ failure. There are also other considerations, like cross-AZ traffic charges, that should be taken into account.
Failure domains in worker nodes
To ensure that worker machines are spread across failure domains, we need to create N MachineDeployments for N failure domains, scaling them independently. Resiliency to failures comes from having multiple MachineDeployments.
For example:
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: ${CLUSTER_NAME}-md-0
  namespace: default
spec:
  clusterName: ${CLUSTER_NAME}
  replicas: ${WORKER_MACHINE_COUNT}
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: ${CLUSTER_NAME}-md-0
      clusterName: ${CLUSTER_NAME}
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: ${CLUSTER_NAME}-md-0
      version: ${KUBERNETES_VERSION}
      failureDomain: "1"
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: ${CLUSTER_NAME}-md-1
  namespace: default
spec:
  clusterName: ${CLUSTER_NAME}
  replicas: ${WORKER_MACHINE_COUNT}
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: ${CLUSTER_NAME}-md-1
      clusterName: ${CLUSTER_NAME}
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: ${CLUSTER_NAME}-md-1
      version: ${KUBERNETES_VERSION}
      failureDomain: "2"
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: ${CLUSTER_NAME}-md-2
  namespace: default
spec:
  clusterName: ${CLUSTER_NAME}
  replicas: ${WORKER_MACHINE_COUNT}
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: ${CLUSTER_NAME}-md-2
      clusterName: ${CLUSTER_NAME}
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: ${CLUSTER_NAME}-md-2
      version: ${KUBERNETES_VERSION}
      failureDomain: "3"
IMPORTANT WARNING: All the replicas within a
MachineDeployment
will reside in the same Availability Zone.
Using AWSMachinePool
You can use an AWSMachinePool object, which automatically distributes worker machines across the configured availability zones.
Set the failureDomains field to the list of availability zones that you want to use. Be aware that not all regions have the same availability zones.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
  name: ${CLUSTER_NAME}-mp-0
  namespace: default
spec:
  clusterName: my-cluster
  failureDomains:
  - "1"
  - "3"
  replicas: 3
  template:
    spec:
      clusterName: my-cluster
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: ${CLUSTER_NAME}-mp-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachinePool
        name: ${CLUSTER_NAME}-mp-0
      version: ${KUBERNETES_VERSION}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
  name: ${CLUSTER_NAME}-mp-0
  namespace: default
spec:
  minSize: 1
  maxSize: 4
  awsLaunchTemplate:
    instanceType: ${AWS_NODE_MACHINE_TYPE}
    iamInstanceProfile: "nodes.cluster-api-provider-aws.sigs.k8s.io"
    sshKeyName: ${AWS_SSH_KEY_NAME}
Userdata Privacy
Cluster API Provider AWS bootstraps EC2 instances to create and join Kubernetes clusters using instance user data. Because Kubernetes clusters are secured using TLS with multiple certificate authorities, these CAs are generated by Cluster API and injected into the user data. It is important to note that, without configuring host firewalls, any process on the instance can retrieve the instance user data from the instance metadata service (http://169.254.169.254/latest/user-data).
Requirements
- An AMI that includes the AWS CLI
- AMIs using CloudInit
- A working
/bin/bash
shell - LFS directory layout (i.e.
/etc
exists and is readable by CloudInit)
Listed AMIs on 1.16 and up should include the AWS CLI.
How Cluster API secures TLS secrets
Since v0.5.x, Cluster API Provider AWS has used AWS Secrets Manager as a limited-time secret store, storing the userdata using KMS encryption at rest in AWS. The EC2 IMDS userdata will contain a boot script to download the encrypted userdata secret using instance profile permissions, then immediately delete it from AWS Secrets Manager, and then execute it.
To prevent keys in the AWS Secrets Manager key-value store from being guessed, and to avoid collisions, the key is an encoding of the Kubernetes namespace, cluster name and instance name, with a random string appended, providing ~256 bits of entropy.
Cluster API Provider AWS also stores the secret ARN in the AWSMachine spec, and will delete the secret if it isn’t already deleted and the machine has registered successfully against the workload cluster API server as a node. Cluster API Provider AWS will also attempt deletion of the secret if the AWSMachine is otherwise deleted or the EC2 instance is terminated or failed.
This method is only compatible with operating systems and distributions using cloud-init. If you are using a different bootstrap process, you will need to co-ordinate this externally and set the following in the specification of the AWSMachine types to disable the use of a cloud-init boothook:
cloudInit:
insecureSkipSecretsManager: true
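For context, a minimal sketch of where this field sits in an AWSMachineTemplate (the name and instance type here are placeholders):
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: "example"   # placeholder name
spec:
  template:
    spec:
      instanceType: t3.large   # placeholder instance type
      cloudInit:
        insecureSkipSecretsManager: true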
Troubleshooting
Script errors
cloud-init does not print boothook script errors to the systemd journal. Logs for the script, if it errored, can be found in
/var/log/cloud-init-output.log
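For example, a quick scan of that log for failures (a simple sketch):
grep -iE "error|fail" /var/log/cloud-init-output.log | tail -n 20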
Warning messages
Because cloud-init attempts to read the final file at startup, it will always print a /etc/secret-userdata.txt cannot be found message. This can be safely ignored.
Secrets Manager console
The AWS Secrets Manager console should show secrets being created and deleted, with a lifetime of around a minute. No plaintext secret data will appear in the console, as Cluster API Provider AWS stores the userdata as fragments of a gzipped data stream.
Troubleshooting
Resources aren’t being created
TODO
Target cluster’s control plane machine is up but target cluster’s apiserver not working as expected
If aws-provider-controller-manager-0
logs did not help, you might want to look into cloud-init logs, /var/log/cloud-init-output.log
, on the controller host.
Verifying kubelet status and logs may also provide hints:
journalctl -u kubelet.service
systemctl status kubelet
To reach the controller host from your local machine:
ssh -i <private-key> -o "ProxyCommand ssh -W %h:%p -i <private-key> ubuntu@<bastion-IP>" ubuntu@<controller-host-IP>
private-key is the private key from the key-pair discussed in the ssh key pair section above.
kubelet on the control plane host failing with error: NoCredentialProviders
failed to run Kubelet: could not init cloud provider "aws": error finding instance i-0c276f2a1f1c617b2: "error listing AWS instances: \"NoCredentialProviders: no valid providers in chain. Deprecated.\\n\\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\""
This error can occur if the CloudFormation stack was not created properly and the IAM instance profile is missing the appropriate roles. Run the following command to inspect the IAM instance profile:
$ aws iam get-instance-profile --instance-profile-name control-plane.cluster-api-provider-aws.sigs.k8s.io --output json
{
"InstanceProfile": {
"InstanceProfileId": "AIPAJQABLZS4A3QDU576Q",
"Roles": [
{
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
},
"RoleId": "AROAJQABLZS4A3QDU576Q",
"CreateDate": "2019-05-13T16:45:12Z",
"RoleName": "control-plane.cluster-api-provider-aws.sigs.k8s.io",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
}
],
"CreateDate": "2019-05-13T16:45:28Z",
"InstanceProfileName": "control-plane.cluster-api-provider-aws.sigs.k8s.io",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:instance-profile/control-plane.cluster-api-provider-aws.sigs.k8s.io"
}
}
If the instance profile does not look as expected, you may try recreating the CloudFormation stack using clusterawsadm as explained in the sections above.
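To dig one level deeper, you can also list the inline policies attached to the role (the role name below assumes the default clusterawsadm naming):
aws iam list-role-policies --role-name control-plane.cluster-api-provider-aws.sigs.k8s.io --output json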
Recover a management cluster after losing the api server load balancer
These steps outline the process for recovering a management cluster after losing the load balancer for the API server. They are needed because AWS load balancers have dynamically generated DNS names: when a load balancer is deleted, CAPA will recreate it, but the new load balancer will have a different DNS name that does not match the original. We therefore need to update some resources, as well as the certificates, to match the new name in order to make the cluster healthy again. There are a few different scenarios in which this could happen.
- The load balancer gets deleted by some external process or user.
- If a cluster is created with the same name as the management cluster in a different namespace and then deleted, it will delete the existing load balancer. This is due to ownership of AWS resources being managed by tags. See this issue for reference.
Access the API server locally
- ssh to a control plane node and modify /etc/kubernetes/admin.conf:
  - Replace the server value with server: https://localhost:6443
  - Add insecure-skip-tls-verify: true
  - Comment out certificate-authority-data:
- Export the kubeconfig and ensure you can connect:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
Get rid of the lingering duplicate cluster
This step is only needed in the scenario where a duplicate cluster was created and deleted, which caused the API server load balancer to be deleted.
- Since the duplicate cluster is trying to be deleted and can't (some of its resources cannot be cleaned up because they are still in use), we need to stop the conflicting reconciliation process. Edit the duplicate AWSCluster object and remove its finalizers:
kubectl edit awscluster <clustername>
- Next, run kubectl describe awscluster <clustername> to validate that the finalizers have been removed.
- Run kubectl get clusters to verify the cluster is gone.
Make at least one node Ready
- Right now all endpoints are down because the nodes are not ready. This is problematic for the coredns and CNI pods in particular. Let's get one control plane node back to healthy. On the control plane node we logged into, edit /etc/kubernetes/kubelet.conf:
  - Replace the server value with server: https://localhost:6443
  - Add insecure-skip-tls-verify: true
  - Comment out certificate-authority-data:
- Restart the kubelet:
systemctl restart kubelet
- Run kubectl get nodes and validate that the node is in a Ready state.
- After a few minutes most things should start scheduling themselves on the new node. The pods that did not restart on their own and were causing issues were core-dns, kube-proxy, and the CNI pods; those should be restarted manually.
- (optional) Tail the CAPA logs to see the load balancer start to reconcile:
kubectl logs -f -n capa-system deployments.apps/capa-controller-manager
Update the control plane nodes with new LB settings
- To be safe, we will do this on all control plane nodes rather than recreating them, to avoid potential data loss issues. Follow these steps on each control plane node.
- Regenerate the certs for the API server using the new name. Make sure to update your service CIDR and endpoint in the command below.
rm /etc/kubernetes/pki/apiserver.crt
rm /etc/kubernetes/pki/apiserver.key
kubeadm init phase certs apiserver --control-plane-endpoint="mynewendpoint.com" --service-cidr=100.64.0.0/13 -v10
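To confirm the regenerated certificate picked up the new endpoint, you can inspect its subject alternative names (a sketch using openssl, which is typically present on control plane hosts):
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"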
- Update the settings in /etc/kubernetes/admin.conf:
  - Replace the server value with server: https://<your-new-lb.com>:6443
  - Remove insecure-skip-tls-verify: true
  - Uncomment certificate-authority-data:
- Export the kubeconfig and ensure you can connect:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
- Update the settings in /etc/kubernetes/kubelet.conf:
  - Replace the server value with server: https://your-new-lb.com:6443
  - Remove insecure-skip-tls-verify: true
  - Uncomment certificate-authority-data:
- Restart the kubelet:
systemctl restart kubelet
- Just as before, new pods need to pick up the API server cache changes, so you will want to force-restart pods such as the CNI pods, kube-proxy, core-dns, etc.
Update CAPI settings for the new LB DNS name
- Update the control plane endpoint on the awscluster and cluster objects. To do this we need to disable the validating webhooks. We will back them up and then delete them so we can re-apply them later:
kubectl get validatingwebhookconfigurations capa-validating-webhook-configuration -o yaml > capa-webhook && kubectl delete validatingwebhookconfigurations capa-validating-webhook-configuration
kubectl get validatingwebhookconfigurations capi-validating-webhook-configuration -o yaml > capi-webhook && kubectl delete validatingwebhookconfigurations capi-validating-webhook-configuration
- Edit the spec.controlPlaneEndpoint.host field on both awscluster and cluster to have the new endpoint.
- Re-apply your webhooks:
kubectl apply -f capi-webhook
kubectl apply -f capa-webhook
- Update the following config maps and replace the old control plane name with the new one:
kubectl edit cm -n kube-system kubeadm-config
kubectl edit cm -n kube-system kube-proxy
kubectl edit cm -n kube-public cluster-info
- Edit the cluster kubeconfig secret that CAPI uses to talk to the management cluster. You will need to decode the secret, replace the endpoint, then re-encode and save it:
kubectl edit secret -n <namespace> <cluster-name>-kubeconfig
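To verify the edit took effect, you can decode and inspect the server line (this assumes the kubeconfig is stored under the conventional value key):
kubectl get secret -n <namespace> <cluster-name>-kubeconfig -o jsonpath='{.data.value}' | base64 -d | grep server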
- At this point things should start to reconcile on their own, but we can use the commands in the next step to force it.
Roll all of the nodes to make sure everything is fresh
- kubectl patch kcp <clusternamekcp> -n <namespace> --type merge -p "{\"spec\":{\"rolloutAfter\":\"`date +'%Y-%m-%dT%TZ'`\"}}"
- kubectl patch machinedeployment CLUSTER_NAME-md-0 -n <namespace> --type merge -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
IAM Permissions
Required to use clusterawsadm to provision IAM roles via CloudFormation
If using clusterawsadm
to automate deployment of IAM roles via CloudFormation,
you must have IAM administrative access as clusterawsadm
will provision IAM
roles and policies.
Required by Cluster API Provider AWS controllers
The Cluster API Provider AWS controller requires permissions to use EC2, ELB, Autoscaling, and optionally EKS. If provisioning IAM roles using clusterawsadm, these will be set up as the controllers.cluster-api-provider-aws.sigs.k8s.io IAM Policy, and attached to the controllers.cluster-api-provider-aws.sigs.k8s.io and control-plane.cluster-api-provider-aws.sigs.k8s.io IAM roles.
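Rather than copying the JSON documents below by hand, you can print the same policies with clusterawsadm (a sketch; the --document values assume the current CLI, see clusterawsadm bootstrap iam print-policy --help):
clusterawsadm bootstrap iam print-policy --document AWSIAMManagedPolicyControllers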
EC2 Provisioned Kubernetes Clusters
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeIpamPools",
"ec2:AllocateIpamPoolCidr",
"ec2:AttachNetworkInterface",
"ec2:DetachNetworkInterface",
"ec2:AllocateAddress",
"ec2:AssignIpv6Addresses",
"ec2:AssignPrivateIpAddresses",
"ec2:UnassignPrivateIpAddresses",
"ec2:AssociateRouteTable",
"ec2:AssociateVpcCidrBlock",
"ec2:AttachInternetGateway",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateCarrierGateway",
"ec2:CreateInternetGateway",
"ec2:CreateEgressOnlyInternetGateway",
"ec2:CreateNatGateway",
"ec2:CreateNetworkInterface",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSecurityGroup",
"ec2:CreateSubnet",
"ec2:CreateTags",
"ec2:CreateVpc",
"ec2:CreateVpcEndpoint",
"ec2:DisassociateVpcCidrBlock",
"ec2:ModifyVpcAttribute",
"ec2:ModifyVpcEndpoint",
"ec2:DeleteCarrierGateway",
"ec2:DeleteInternetGateway",
"ec2:DeleteEgressOnlyInternetGateway",
"ec2:DeleteNatGateway",
"ec2:DeleteRouteTable",
"ec2:ReplaceRoute",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSubnet",
"ec2:DeleteTags",
"ec2:DeleteVpc",
"ec2:DeleteVpcEndpoints",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeCarrierGateways",
"ec2:DescribeInstances",
"ec2:DescribeInstanceTypes",
"ec2:DescribeInternetGateways",
"ec2:DescribeEgressOnlyInternetGateways",
"ec2:DescribeInstanceTypes",
"ec2:DescribeImages",
"ec2:DescribeNatGateways",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeNetworkInterfaceAttribute",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"ec2:DescribeDhcpOptions",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcEndpoints",
"ec2:DescribeVolumes",
"ec2:DescribeTags",
"ec2:DetachInternetGateway",
"ec2:DisassociateRouteTable",
"ec2:DisassociateAddress",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ModifySubnetAttribute",
"ec2:ReleaseAddress",
"ec2:RevokeSecurityGroupIngress",
"ec2:RunInstances",
"ec2:TerminateInstances",
"tag:GetResources",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:RemoveTags",
"elasticloadbalancing:SetSubnets",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeleteListener",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeInstanceRefreshes",
"ec2:CreateLaunchTemplate",
"ec2:CreateLaunchTemplateVersion",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DeleteLaunchTemplate",
"ec2:DeleteLaunchTemplateVersions",
"ec2:DescribeKeyPairs",
"ec2:ModifyInstanceMetadataOptions"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"autoscaling:CreateAutoScalingGroup",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:CreateOrUpdateTags",
"autoscaling:StartInstanceRefresh",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:DeleteTags"
],
"Resource": [
"arn:*:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*"
]
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "autoscaling.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "spot.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": [
"arn:*:iam::*:role/*.cluster-api-provider-aws.sigs.k8s.io"
]
},
{
"Effect": "Allow",
"Action": [
"secretsmanager:CreateSecret",
"secretsmanager:DeleteSecret",
"secretsmanager:TagResource"
],
"Resource": [
"arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*"
]
}
]
}
With EKS Support
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeIpamPools",
"ec2:AllocateIpamPoolCidr",
"ec2:AttachNetworkInterface",
"ec2:DetachNetworkInterface",
"ec2:AllocateAddress",
"ec2:AssignIpv6Addresses",
"ec2:AssignPrivateIpAddresses",
"ec2:UnassignPrivateIpAddresses",
"ec2:AssociateRouteTable",
"ec2:AssociateVpcCidrBlock",
"ec2:AttachInternetGateway",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateCarrierGateway",
"ec2:CreateInternetGateway",
"ec2:CreateEgressOnlyInternetGateway",
"ec2:CreateNatGateway",
"ec2:CreateNetworkInterface",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSecurityGroup",
"ec2:CreateSubnet",
"ec2:CreateTags",
"ec2:CreateVpc",
"ec2:CreateVpcEndpoint",
"ec2:DisassociateVpcCidrBlock",
"ec2:ModifyVpcAttribute",
"ec2:ModifyVpcEndpoint",
"ec2:DeleteCarrierGateway",
"ec2:DeleteInternetGateway",
"ec2:DeleteEgressOnlyInternetGateway",
"ec2:DeleteNatGateway",
"ec2:DeleteRouteTable",
"ec2:ReplaceRoute",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSubnet",
"ec2:DeleteTags",
"ec2:DeleteVpc",
"ec2:DeleteVpcEndpoints",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeCarrierGateways",
"ec2:DescribeInstances",
"ec2:DescribeInstanceTypes",
"ec2:DescribeInternetGateways",
"ec2:DescribeEgressOnlyInternetGateways",
"ec2:DescribeInstanceTypes",
"ec2:DescribeImages",
"ec2:DescribeNatGateways",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeNetworkInterfaceAttribute",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"ec2:DescribeDhcpOptions",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcEndpoints",
"ec2:DescribeVolumes",
"ec2:DescribeTags",
"ec2:DetachInternetGateway",
"ec2:DisassociateRouteTable",
"ec2:DisassociateAddress",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ModifySubnetAttribute",
"ec2:ReleaseAddress",
"ec2:RevokeSecurityGroupIngress",
"ec2:RunInstances",
"ec2:TerminateInstances",
"tag:GetResources",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:RemoveTags",
"elasticloadbalancing:SetSubnets",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeleteListener",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeInstanceRefreshes",
"ec2:CreateLaunchTemplate",
"ec2:CreateLaunchTemplateVersion",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DeleteLaunchTemplate",
"ec2:DeleteLaunchTemplateVersions",
"ec2:DescribeKeyPairs",
"ec2:ModifyInstanceMetadataOptions"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"autoscaling:CreateAutoScalingGroup",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:CreateOrUpdateTags",
"autoscaling:StartInstanceRefresh",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:DeleteTags"
],
"Resource": [
"arn:*:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*"
]
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "autoscaling.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "spot.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": [
"arn:*:iam::*:role/*.cluster-api-provider-aws.sigs.k8s.io"
]
},
{
"Effect": "Allow",
"Action": [
"secretsmanager:CreateSecret",
"secretsmanager:DeleteSecret",
"secretsmanager:TagResource"
],
"Resource": [
"arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*"
]
}
]
}
With S3 Support
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeIpamPools",
"ec2:AllocateIpamPoolCidr",
"ec2:AttachNetworkInterface",
"ec2:DetachNetworkInterface",
"ec2:AllocateAddress",
"ec2:AssignIpv6Addresses",
"ec2:AssignPrivateIpAddresses",
"ec2:UnassignPrivateIpAddresses",
"ec2:AssociateRouteTable",
"ec2:AssociateVpcCidrBlock",
"ec2:AttachInternetGateway",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateCarrierGateway",
"ec2:CreateInternetGateway",
"ec2:CreateEgressOnlyInternetGateway",
"ec2:CreateNatGateway",
"ec2:CreateNetworkInterface",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSecurityGroup",
"ec2:CreateSubnet",
"ec2:CreateTags",
"ec2:CreateVpc",
"ec2:CreateVpcEndpoint",
"ec2:DisassociateVpcCidrBlock",
"ec2:ModifyVpcAttribute",
"ec2:ModifyVpcEndpoint",
"ec2:DeleteCarrierGateway",
"ec2:DeleteInternetGateway",
"ec2:DeleteEgressOnlyInternetGateway",
"ec2:DeleteNatGateway",
"ec2:DeleteRouteTable",
"ec2:ReplaceRoute",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSubnet",
"ec2:DeleteTags",
"ec2:DeleteVpc",
"ec2:DeleteVpcEndpoints",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeCarrierGateways",
"ec2:DescribeInstances",
"ec2:DescribeInstanceTypes",
"ec2:DescribeInternetGateways",
"ec2:DescribeEgressOnlyInternetGateways",
"ec2:DescribeInstanceTypes",
"ec2:DescribeImages",
"ec2:DescribeNatGateways",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeNetworkInterfaceAttribute",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"ec2:DescribeDhcpOptions",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcEndpoints",
"ec2:DescribeVolumes",
"ec2:DescribeTags",
"ec2:DetachInternetGateway",
"ec2:DisassociateRouteTable",
"ec2:DisassociateAddress",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ModifySubnetAttribute",
"ec2:ReleaseAddress",
"ec2:RevokeSecurityGroupIngress",
"ec2:RunInstances",
"ec2:TerminateInstances",
"tag:GetResources",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:RemoveTags",
"elasticloadbalancing:SetSubnets",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeleteListener",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeInstanceRefreshes",
"ec2:CreateLaunchTemplate",
"ec2:CreateLaunchTemplateVersion",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DeleteLaunchTemplate",
"ec2:DeleteLaunchTemplateVersions",
"ec2:DescribeKeyPairs",
"ec2:ModifyInstanceMetadataOptions"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"autoscaling:CreateAutoScalingGroup",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:CreateOrUpdateTags",
"autoscaling:StartInstanceRefresh",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:DeleteTags"
],
"Resource": [
"arn:*:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*"
]
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "autoscaling.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:*:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "spot.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": [
"arn:*:iam::*:role/*.cluster-api-provider-aws.sigs.k8s.io"
]
},
{
"Effect": "Allow",
"Action": [
"secretsmanager:CreateSecret",
"secretsmanager:DeleteSecret",
"secretsmanager:TagResource"
],
"Resource": [
"arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:PutBucketPolicy",
"s3:PutBucketTagging"
],
"Resource": [
"arn:*:s3:::cluster-api-provider-aws-*"
]
}
]
}
Required by the Kubernetes AWS Cloud Provider
These permissions are used by the Kubernetes AWS Cloud Provider. If you are
running with the in-tree cloud provider, this will typically be used by the
controller-manager
pod in the kube-system
namespace.
If provisioning IAM roles using clusterawsadm
,
these will be set up as the control-plane.cluster-api-provider-aws.sigs.k8s.io
IAM Policy, and attached to the control-plane.cluster-api-provider-aws.sigs.k8s.io
IAM role.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"ec2:AssignIpv6Addresses",
"ec2:DescribeInstances",
"ec2:DescribeImages",
"ec2:DescribeRegions",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVolumes",
"ec2:CreateSecurityGroup",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyVolume",
"ec2:AttachVolume",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateRoute",
"ec2:DeleteRoute",
"ec2:DeleteSecurityGroup",
"ec2:DeleteVolume",
"ec2:DetachVolume",
"ec2:RevokeSecurityGroupIngress",
"ec2:DescribeVpcs",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:AttachLoadBalancerToSubnets",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateLoadBalancerPolicy",
"elasticloadbalancing:CreateLoadBalancerListeners",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteLoadBalancerListeners",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DetachLoadBalancerFromSubnets",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DeregisterTargets",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeLoadBalancerPolicies",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:ModifyListener",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
"iam:CreateServiceLinkedRole",
"kms:DescribeKey"
],
"Resource": [
"*"
]
}
]
}
Required by all nodes
All nodes require these permissions in order to run; they are used by the AWS cloud provider integration run by the kubelet.
If provisioning IAM roles using clusterawsadm
,
these will be set up as the nodes.cluster-api-provider-aws.sigs.k8s.io
IAM Policy, and attached to the nodes.cluster-api-provider-aws.sigs.k8s.io
IAM role.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AssignIpv6Addresses",
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"ec2:CreateTags",
"ec2:DescribeTags",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeInstanceTypes",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage"
],
"Resource": [
"*"
]
}
]
}
When using EKS, the AmazonEKSWorkerNodePolicy and AmazonEKS_CNI_Policy AWS managed policies will also be attached to the nodes.cluster-api-provider-aws.sigs.k8s.io IAM role.
Ignition support
- Feature status: Experimental
- Feature gate: BootstrapFormatIgnition=true
The default configuration engine for bootstrapping workload cluster machines is cloud-init. Ignition is an alternative engine used by Linux distributions such as Flatcar Container Linux and Fedora CoreOS, and should therefore be used when choosing an Ignition-based distribution as the underlying OS for workload clusters.
This document explains how Ignition support works.
For more generic information, see Cluster API documentation on Ignition Bootstrap configuration.
Overview
When using CloudInit for bootstrapping, by default the awsmachine controller stores EC2 instance user data encrypted using SSM, which underneath uses multipart MIME types. Unfortunately, multipart MIME types are not supported by Ignition. Moreover, EC2 instance user data storage is also limited to 64 KB, which might not always be enough to provision a Kubernetes control plane because of the size of the required certificates and configuration files.
To address these limitations, when using Ignition for bootstrapping, by default the awsmachine controller uses a Cluster Object Store (e.g. S3 Bucket), configured in the AWSCluster, to store user data, which will be then pulled by the instances during provisioning.
When using Ignition for bootstrapping, users can optionally choose an alternative storageType for user data. For now the single available alternative is to store user data unencrypted directly in the EC2 instance user data. This storageType option is discouraged unless strictly necessary, however, as it is not considered as safe as storing it in the S3 Bucket.
Prerequisites for enabling Ignition bootstrapping
Enabling EXP_BOOTSTRAP_FORMAT_IGNITION feature gate
In order to activate Ignition bootstrap you first need to enable its feature gate.
When deploying CAPA using clusterctl, make sure you set the EXP_BOOTSTRAP_FORMAT_IGNITION=true and EXP_KUBEADM_BOOTSTRAP_FORMAT_IGNITION=true environment variables to enable experimental Ignition bootstrap support.
# Enable the feature gates controlling Ignition bootstrap.
export EXP_KUBEADM_BOOTSTRAP_FORMAT_IGNITION=true # Used by the kubeadm bootstrap provider.
export EXP_BOOTSTRAP_FORMAT_IGNITION=true # Used by the AWS provider.
# Initialize the management cluster.
clusterctl init --infrastructure aws
Choosing a storage type for Ignition user data
S3 is the default storage type when Ignition is enabled for managing machine bootstrapping, but other methods can be chosen for storing Ignition user data.
Store Ignition config in a Cluster Object Store (e.g. S3 bucket)
To explicitly set ClusterObjectStore as the storage type, provide the following config in the AWSMachineTemplate
:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
name: "test"
spec:
template:
spec:
ignition:
storageType: ClusterObjectStore
Cluster Object Store and object management
When you want to use the Ignition user data format for your machines, you need to configure your cluster to specify which Cluster Object Store to use. The controller will then check that the bucket already exists and that the required policies are in place.
See the configuration snippet below to learn how to configure AWSCluster to manage the S3 bucket.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
spec:
s3Bucket:
controlPlaneIAMInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
name: cluster-api-provider-aws-unique-suffix
nodesIAMInstanceProfiles:
- nodes.cluster-api-provider-aws.sigs.k8s.io
Buckets can safely be reused between clusters.
After successful machine provisioning, the bootstrap data is removed from the object store.
During cluster removal, if the Cluster Object Store is empty, it will be deleted as well.
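To sanity-check that the configured bucket exists and is reachable with your credentials, a quick sketch (using the bucket name from the snippet above):
# Exits non-zero if the bucket is missing or inaccessible.
aws s3api head-bucket --bucket cluster-api-provider-aws-unique-suffix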
S3 IAM Permissions
If you choose to use an S3 bucket as the Cluster Object Store, CAPA controllers require additional IAM permissions.
If you use clusterawsadm
for managing the IAM roles, you can use the configuration below to create S3-related
IAM permissions.
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
s3Buckets:
enable: true
See Using clusterawsadm to fulfill prerequisites for more details.
Cluster Object Store naming
Cluster Object Store and bucket naming must follow S3 Bucket naming rules.
In addition, by default clusterawsadm creates IAM roles that only allow interacting with buckets whose names have the cluster-api-provider-aws- prefix, to reduce the permissions of the CAPA controller, so all bucket names should use this prefix.
To change it, use the spec.s3Buckets.namePrefix field in AWSIAMConfiguration.
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
s3Buckets:
namePrefix: my-custom-secure-bucket-prefix-
Store Ignition config as UnencryptedUserData
To instruct the controllers to store the user data directly in the EC2 instance user data unencrypted,
provide the following config in the AWSMachineTemplate
:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
name: "test"
spec:
template:
spec:
ignition:
storageType: UnencryptedUserData
No further requirements are necessary.
Supported bootstrap providers
At the moment only CABPK is known to support producing bootstrap data in Ignition format.
Trying it out
If you want to test Ignition support, use flatcar
cluster flavor.
Other bootstrap providers
If you want to use Ignition support with a custom bootstrap provider which supports producing Ignition bootstrap data, ensure that the bootstrap provider sets the format field in the machine bootstrap secret to ignition. This information is used by the machine controller to determine which user data format to use for the instances.
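For illustration only, a sketch of the shape such a bootstrap secret might take (the name and the Ignition payload are placeholders, not the output of any particular provider; the cluster.x-k8s.io/secret type follows the Cluster API convention for bootstrap secrets):
apiVersion: v1
kind: Secret
metadata:
  name: my-machine-bootstrap   # placeholder name
  namespace: default
type: cluster.x-k8s.io/secret
stringData:
  format: ignition
  value: |
    {"ignition": {"version": "3.3.0"}}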
External Resource Garbage Collection
- Feature status: Experimental
- Feature gate (required): ExternalResourceGC=true
Overview
Workload clusters that CAPA has created may have additional resources in AWS that need to be deleted when the cluster is deleted.
For example, if the workload cluster has Services of type LoadBalancer, then AWS ELBs/NLBs are provisioned. If you try to delete the workload cluster in this example, it will fail, as these load balancers are still using the VPC.
This feature enables deletion of these external resources as part of cluster deletion. During the deletion of a workload cluster, the external AWS resources that were created by the Cloud Controller Manager (CCM) in the workload cluster will be identified and deleted.
NOTE: This is not related to externally managed infrastructure.
Currently, we support cleaning up the following:
- AWS ELB/NLB - by deleting Services of type LoadBalancer from the workload cluster
We may look to support deleting EBS volumes in the future.
Note: this feature will likely be superseded by an upstream CAPI feature in the future when this issue is resolved.
Enabling
To enable garbage collection, you must set the ExternalResourceGC
feature gate to true
on the controller manager. The easiest way to do this is via an environment variable:
export EXP_EXTERNAL_RESOURCE_GC=true
clusterctl init --infrastructure aws
Note: if you enable this feature, ALL clusters will be marked as requiring garbage collection.
Operations
Manually Disabling Garbage Collection for a Cluster
There are 2 ways to manually disable garbage collection for an individual cluster:
Using clusterawsadm
By running the following command:
clusterawsadm gc disable --cluster-name mycluster
See the command help for more examples.
Editing AWSCluster/AWSManagedControlPlane
Or, edit your AWSCluster or AWSManagedControlPlane so that the annotation aws.cluster.x-k8s.io/external-resource-gc is set to false:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
metadata:
annotations:
aws.cluster.x-k8s.io/external-resource-gc: "false"
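Equivalently, the annotation can be set without opening an editor (a sketch; substitute your resource kind and name):
kubectl annotate awsmanagedcontrolplane <name> aws.cluster.x-k8s.io/external-resource-gc=false --overwrite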
Manually Enabling Garbage Collection for a Cluster
There are 2 ways to manually enable garbage collection for an individual cluster:
Using clusterawsadm
By running the following command:
clusterawsadm gc enable --cluster-name mycluster
See the command help for more examples.
Editing AWSCluster/AWSManagedControlPlane
Or, edit your AWSCluster or AWSManagedControlPlane so that the annotation aws.cluster.x-k8s.io/external-resource-gc is either removed or set to true:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
metadata:
annotations:
aws.cluster.x-k8s.io/external-resource-gc: "true"
Instance Metadata Service
Instance metadata is data about your instance that you can use to configure or manage the running instance. It can be accessed from a running instance using one of the following methods:
- Instance Metadata Service Version 1 (IMDSv1) – a request/response method
- Instance Metadata Service Version 2 (IMDSv2) – a session-oriented method
CAPA defaults to IMDSv2 as optional when creating instances.
CAPA exposes options to configure IMDSv2 as required when creating instances, as it provides a better level of security.
It is possible to configure the instance metadata options using the field called instanceMetadataOptions
in the AWSMachineTemplate
.
Example:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
name: "test"
spec:
template:
spec:
instanceMetadataOptions:
httpEndpoint: enabled
httpPutResponseHopLimit: 1
httpTokens: optional
instanceMetadataTags: disabled
To use IMDSv2, simply set the httpTokens value to required (in other words, make IMDSv2 required). When requiring IMDSv2, please also set the httpPutResponseHopLimit value to 2, as recommended for container environments in the AWS documentation.
Similarly, this can be done with AWSManagedMachinePool for use with EKS Managed Node Groups. One slight difference here is that you must use launch templates to configure IMDSv2 with Auto Scaling Groups. In order to configure the launch template, you must use a custom AMI type according to the AWS API. This can be done by setting AWSManagedMachinePool.spec.amiType to CUSTOM. This change means that you must also specify a bootstrapping script for the worker node, which allows it to be joined to the EKS cluster. The default AWS Managed Node Group bootstrap script can be found here on GitHub.
The following example will use the default Amazon EKS Worker Node AMI, which includes the default EKS bootstrapping script. This must be installed on the management cluster as a Secret, under the key value. The secret's name must then be included in your MachinePool manifest at MachinePool.spec.template.spec.bootstrap.dataSecretName. Some assumptions are made for this example:
- Your cluster name is capi-imds, which CAPA renames to default_capi-imds-control-plane automatically
- Your cluster is Kubernetes version v1.25.9
- Your AWSManagedCluster is deployed in the default namespace along with the bootstrap secret eks-bootstrap
kind: Secret
apiVersion: v1
type: Opaque
data:
value: IyEvYmluL2Jhc2ggLXhlCi9ldGMvZWtzL2Jvb3RzdHJhcC5zaCBkZWZhdWx0X2NhcGktaW1kcy1jb250cm9sLXBsYW5l
metadata:
name: eks-bootstrap
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedMachinePool
metadata:
name: "capi-imds-pool-launchtemplate"
spec:
amiType: CUSTOM
awsLaunchTemplate:
name: my-aws-launch-template
instanceType: t3.nano
metadataOptions:
httpTokens: required
httpPutResponseHopLimit: 2
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
name: "capi-imds-pool-1"
spec:
clusterName: "capi-imds"
replicas: 1
template:
spec:
version: v1.25.9
clusterName: "capi-imds"
bootstrap:
dataSecretName: "eks-bootstrap"
infrastructureRef:
name: "capi-imds-pool-launchtemplate"
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedMachinePool
IyEvYmluL2Jhc2ggLXhlCi9ldGMvZWtzL2Jvb3RzdHJhcC5zaCBkZWZhdWx0X2NhcGktaW1kcy1jb250cm9sLXBsYW5l
in the above secret is a Base64 encoded version of the following script:
#!/bin/bash -xe
/etc/eks/bootstrap.sh default_capi-imds-control-plane
If your cluster is not named default_capi-imds-control-plane
in the AWS EKS console, you must update the name and store it as a Secret again.
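One way to regenerate the encoded value for a different cluster name (a sketch assuming GNU coreutils base64; replace the placeholder):
base64 -w0 <<'EOF'
#!/bin/bash -xe
/etc/eks/bootstrap.sh <your-eks-cluster-name>
EOF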
See the CLI command reference for more information.
Before you decide to use IMDSv2 for the cluster instances, please make sure all your applications are compatible with IMDSv2.
See the transition guide for more information.
Setting up a Network Load Balancer
Overview
It’s possible to set up and use a Network Load Balancer with AWSCluster
instead of the
Classic Load Balancer that is created by default.
AWSCluster
setting
To make CAPA create a Network Load Balancer, simply set the load balancer type to nlb like this:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
name: "test-aws-cluster"
spec:
region: "eu-central-1"
controlPlaneLoadBalancer:
loadBalancerType: nlb
This will create the following objects:
- A network load balancer
- Listeners
- A target group
It will also take into consideration IPv6 enabled clusters and create an IPv6 aware load balancer.
Preserve Client IPs
By default, client IP preservation is disabled. This is to avoid hairpinning issues between the kubelet and the node registration process. To enable client IP preservation, set the following flag:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
name: "test-aws-cluster"
spec:
region: "eu-central-1"
sshKeyName: "capa-key"
controlPlaneLoadBalancer:
loadBalancerType: nlb
preserveClientIP: true
Security
NLBs can use security groups, but only if one is associated at the time of creation. CAPA will associate the default control plane security groups with a new NLB by default.
For more information, see AWS’s Network Load Balancer and Security Groups documentation.
Extension of the code
Right now, only NLBs and the Classic Load Balancer are supported. However, the code has been written in a way that should make it easy to extend with an ALB or a GLB.
Enabling a Secondary Control Plane Load Balancer
Overview
It is possible to use a second control plane load balancer within a CAPA cluster. This secondary control plane load balancer is primarily meant to be used for internal cluster traffic, for use cases where traffic between nodes and pods should be kept internal to the VPC network. This adds a layer of privacy to traffic, as well as potentially saving on egress costs for traffic to the Kubernetes API server.
A dual load balancer topology is not used as a default in order to maintain backward compatibility with existing CAPA clusters.
Requirements and defaults
- A secondary control plane load balancer is not created by default.
- The secondary control plane load balancer must be a Network Load Balancer, and will default to this type.
- The secondary control plane load balancer must also be provided a name.
- The secondary control plane load balancer's Scheme defaults to internal, and must be different from the spec.controlPlaneLoadBalancer's Scheme
The secondary load balancer will use the same Security Group information as the primary control plane load balancer.
Creating a secondary load balancer
To create a secondary load balancer, add the secondaryControlPlaneLoadBalancer
stanza to your AWSCluster
.
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
name: test-aws-cluster
spec:
region: us-east-2
sshKeyName: nrb-default
secondaryControlPlaneLoadBalancer:
name: internal-apiserver
scheme: internal # optional
Manage Local Zone subnets
Overview
CAPA provides the option to manage the network resources required to provision compute nodes in Local Zone and Wavelength Zone locations.
AWS Local Zones extend the cloud infrastructure to metropolitan regions, allowing you to deliver applications closer to end users and decreasing network latency.
AWS Wavelength Zones extend AWS infrastructure to carrier infrastructure, allowing you to deploy within communications service providers' (CSP) 5G networks.
When "edge zones" is mentioned in this document, it refers to both AWS Local Zones and AWS Wavelength Zones.
Requirements and defaults
For both Local Zones and Wavelength Zones ("edge zones"):
- Subnets in edge zones are not created by default.
- When you choose to have CAPA manage the edge zones' subnets, you must also specify the regular zones (Availability Zones) in which you will create the cluster.
- IPv6 is not globally supported by AWS across Local Zones and is not supported in Wavelength Zones, so CAPA support is limited to IPv4 subnets in edge zones.
- The subnets in edge zones will not be used by CAPA to create NAT Gateways, Network Load Balancers, or to provision control plane or compute nodes by default.
- NAT Gateways are not globally available in edge zone locations, so CAPA uses the edge zone's parent zone to create the NAT Gateway that allows instances on private subnets to egress traffic to the internet.
- The CAPA subnet controller discovers the zone attributes ZoneType and ParentZoneName for each subnet on creation; those fields are used to ensure each subnet is used according to its role. For example, only subnets with a ZoneType of availability-zone can be used to create a load balancer for the API.
- You must manually opt in to the zone group of each edge zone in which you plan to create subnets.
The following steps are an example of how to describe the zones and opt in to a zone group for a Local Zone:
- To check the zone group name for a Local Zone, you can use the EC2 API DescribeAvailabilityZones. For example:
aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
--query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \
--filters Name=zone-type,Values=local-zone \
--all-availability-zones
- To opt in to the zone group, you can use the EC2 API ModifyAvailabilityZoneGroup:
aws ec2 modify-availability-zone-group \
--group-name "<value_of_GroupName>" \
--opt-in-status opted-in
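After opting in, you can confirm the change by describing the zones again, filtered by group name (a sketch; group-name is a valid filter for this API):
aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
  --filters Name=group-name,Values="<value_of_GroupName>" \
  --query 'AvailabilityZones[].{Zone: ZoneName, Status: OptInStatus}' \
  --all-availability-zones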
Installing managed clusters extending subnets to Local Zones
To create a cluster with support for subnets on AWS Local Zones, add the Subnets stanza to your AWSCluster.NetworkSpec. Example:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
name: aws-cluster-localzone
spec:
region: us-east-1
networkSpec:
vpc:
cidrBlock: "10.0.0.0/20"
subnets:
# regular zones (availability zones)
- availabilityZone: us-east-1a
cidrBlock: "10.0.0.0/24"
id: "cluster-subnet-private-us-east-1a"
isPublic: false
- availabilityZone: us-east-1a
cidrBlock: "10.0.1.0/24"
id: "cluster-subnet-public-us-east-1a"
isPublic: true
- availabilityZone: us-east-1b
cidrBlock: "10.0.3.0/24"
id: "cluster-subnet-private-us-east-1b"
isPublic: false
- availabilityZone: us-east-1b
cidrBlock: "10.0.4.0/24"
id: "cluster-subnet-public-us-east-1b"
isPublic: true
- availabilityZone: us-east-1c
cidrBlock: "10.0.5.0/24"
id: "cluster-subnet-private-us-east-1c"
isPublic: false
- availabilityZone: us-east-1c
cidrBlock: "10.0.6.0/24"
id: "cluster-subnet-public-us-east-1c"
isPublic: true
# Subnets in Local Zones of New York location (public and private)
- availabilityZone: us-east-1-nyc-1a
cidrBlock: "10.0.128.0/25"
id: "cluster-subnet-private-us-east-1-nyc-1a"
isPublic: false
- availabilityZone: us-east-1-nyc-1a
cidrBlock: "10.0.128.128/25"
id: "cluster-subnet-public-us-east-1-nyc-1a"
isPublic: true
Installing managed clusters extending subnets to Wavelength Zones
To create a cluster with support for subnets on AWS Wavelength Zones, add the Subnets stanza to your AWSCluster.NetworkSpec. Example:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
name: aws-cluster-wavelengthzone
spec:
region: us-east-1
networkSpec:
vpc:
cidrBlock: "10.0.0.0/20"
subnets:
# <placeholder for regular zones (availability zones)>
- availabilityZone: us-east-1-wl1-was-wlz-1
cidrBlock: "10.0.128.0/25"
id: "cluster-subnet-private-us-east-1-wl1-was-wlz-1"
isPublic: false
- availabilityZone: us-east-1-wl1-was-wlz-1
cidrBlock: "10.0.128.128/25"
id: "cluster-subnet-public-us-east-1-wl1-was-wlz-1"
isPublic: true
Installing managed clusters extending subnets to Local and Wavelength Zones
It is also possible to mix the creation across both Local and Wavelength zones.
To create a cluster with support for edge zones, add the Subnets stanza to your AWSCluster.NetworkSpec. Example:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
name: aws-cluster-edge
spec:
region: us-east-1
networkSpec:
vpc:
cidrBlock: "10.0.0.0/20"
subnets:
# <placeholder for regular zones (availability zones)>
- availabilityZone: us-east-1-nyc-1a
cidrBlock: "10.0.128.0/25"
id: "cluster-subnet-private-us-east-1-nyc-1a"
isPublic: false
- availabilityZone: us-east-1-nyc-1a
cidrBlock: "10.0.128.128/25"
id: "cluster-subnet-public-us-east-1-nyc-1a"
isPublic: true
- availabilityZone: us-east-1-wl1-was-wlz-1
cidrBlock: "10.0.129.0/25"
id: "cluster-subnet-private-us-east-1-wl1-was-wlz-1"
isPublic: false
- availabilityZone: us-east-1-wl1-was-wlz-1
cidrBlock: "10.0.129.128/25"
id: "cluster-subnet-public-us-east-1-wl1-was-wlz-1"
isPublic: true
clusterawsadm
Kubernetes Cluster API Provider AWS Management Utility
Synopsis
clusterawsadm provides helpers for bootstrapping Kubernetes Cluster API Provider AWS. Use clusterawsadm to view required AWS Identity and Access Management (IAM) policies as JSON docs, or create IAM roles and instance profiles automatically using AWS CloudFormation.
clusterawsadm additionally helps provide credentials for use with clusterctl.
clusterawsadm [flags]
Examples
# Create AWS Identity and Access Management (IAM) roles for use with
# Kubernetes Cluster API Provider AWS.
clusterawsadm bootstrap iam create-cloudformation-stack
# Encode credentials for use with clusterctl init
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
clusterctl init --infrastructure aws
Options
-h, --help help for clusterawsadm
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm ami - AMI commands
- clusterawsadm bootstrap - bootstrap commands
- clusterawsadm controller - controller commands
- clusterawsadm eks - Commands related to EKS
- clusterawsadm gc - Commands related to garbage collecting external resources of clusters
- clusterawsadm resource - Commands related to AWS resources
- clusterawsadm version - Print version of clusterawsadm
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap
bootstrap commands
Synopsis
In order to use Kubernetes Cluster API Provider AWS, an AWS account needs to be prepared with AWS Identity and Access Management (IAM) roles to be used by clusters, and Kubernetes Cluster API Provider AWS must be provided with credentials to provision infrastructure.
clusterawsadm bootstrap [command] [flags]
Options
-h, --help help for bootstrap
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm - Kubernetes Cluster API Provider AWS Management Utility
- clusterawsadm bootstrap credentials - Encode credentials to use with Kubernetes Cluster API Provider AWS
- clusterawsadm bootstrap iam - View required AWS IAM policies and create/update IAM roles using AWS CloudFormation
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap credentials
Encode credentials to use with Kubernetes Cluster API Provider AWS
Synopsis
Encode credentials to use with Kubernetes Cluster API Provider AWS.
The utility will attempt to find credentials in the following order:
- Check for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
- Read the default credentials from the shared configuration files ~/.aws/credentials or the default profile in ~/.aws/config.
- Check for the presence of an EC2 IAM instance profile if it’s running on AWS.
- Check for ECS credentials.
IAM role assumption can be performed by using any valid configuration for the AWS CLI at: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html. For role assumption to be used, a region is required for the utility to use the AWS Security Token Service (STS). The utility resolves the region in the following order:
- Check for the --region flag.
- Check for the AWS_REGION environment variable.
- Check for the DEFAULT_AWS_REGION environment variable.
- Check that a region is specified in the shared configuration file.
The utility will then generate an ini-file with a default profile corresponding to the resolved credentials.
If a region cannot be found, for the purposes of using AWS Security Token Service, this utility will fall back to us-east-1. This does not affect the region in which clusters will be created.
In the case of an instance profile or role assumption, note that encoded credentials are time-limited.
clusterawsadm bootstrap credentials [flags]
Examples
# Encode credentials from the environment for use with clusterctl
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
clusterctl init --infrastructure aws
Options
-h, --help help for credentials
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm bootstrap - bootstrap commands
- clusterawsadm bootstrap credentials encode-as-profile - Generate an AWS profile from the current environment
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap credentials encode-as-profile
Generate an AWS profile from the current environment
Synopsis
Generate an AWS profile from the current environment for the ephemeral bootstrap cluster.
The utility will attempt to find credentials in the following order:
- Check for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
- Read the default credentials from the shared configuration files ~/.aws/credentials or the default profile in ~/.aws/config.
- Check for the presence of an EC2 IAM instance profile if it’s running on AWS.
- Check for ECS credentials.
IAM role assumption can be performed by using any valid configuration for the AWS CLI at: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html. For role assumption to be used, a region is required for the utility to use the AWS Security Token Service (STS). The utility resolves the region in the following order:
- Check for the --region flag.
- Check for the AWS_REGION environment variable.
- Check for the DEFAULT_AWS_REGION environment variable.
- Check that a region is specified in the shared configuration file.
The utility will then generate an ini-file with a default profile corresponding to the resolved credentials.
If a region cannot be found, for the purposes of using AWS Security Token Service, this utility will fall back to us-east-1. This does not affect the region in which clusters will be created.
In the case of an instance profile or role assumption, note that encoded credentials are time-limited.
clusterawsadm bootstrap credentials encode-as-profile [flags]
Examples
# Encode credentials from the environment for use with clusterctl
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
clusterctl init --infrastructure aws
Options
-h, --help help for encode-as-profile
--output string Output for credential configuration (rawSharedConfig, base64SharedConfig) (default "base64SharedConfig")
--region string The AWS region in which to provision
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm bootstrap credentials - Encode credentials to use with Kubernetes Cluster API Provider AWS
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap iam
View required AWS IAM policies and create/update IAM roles using AWS CloudFormation
Synopsis
View/output AWS Identity and Access Management (IAM) policy documents required for configuring Kubernetes Cluster API Provider AWS as well as create/update AWS IAM resources using AWS CloudFormation.
clusterawsadm bootstrap iam [command] [flags]
Options
-h, --help help for iam
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm bootstrap - bootstrap commands
- clusterawsadm bootstrap iam create-cloudformation-stack - Create or update an AWS CloudFormation stack
- clusterawsadm bootstrap iam delete-cloudformation-stack - Delete an AWS CloudFormation stack
- clusterawsadm bootstrap iam print-cloudformation-template - Print cloudformation template
- clusterawsadm bootstrap iam print-config - Print configuration
- clusterawsadm bootstrap iam print-policy - Generate and show an IAM policy
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap iam create-cloudformation-stack
Create or update an AWS CloudFormation stack
Synopsis
Create or update an AWS CloudFormation stack for bootstrapping Kubernetes Cluster API and Kubernetes AWS Identity and Access Management (IAM) permissions. To use this command, there must be AWS credentials loaded in this environment.
The utility will attempt to find credentials in the following order:
- Check for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
- Read the default credentials from the shared configuration files ~/.aws/credentials or the default profile in ~/.aws/config.
- Check for the presence of an EC2 IAM instance profile if it’s running on AWS.
- Check for ECS credentials.
IAM role assumption can be performed by using any valid configuration for the AWS CLI at: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html. For role assumption to be used, a region is required for the utility to use the AWS Security Token Service (STS). The utility resolves the region in the following order:
- Check for the --region flag.
- Check for the AWS_REGION environment variable.
- Check for the DEFAULT_AWS_REGION environment variable.
- Check that a region is specified in the shared configuration file.
clusterawsadm bootstrap iam create-cloudformation-stack [flags]
Examples
# Create or update IAM roles and policies for Kubernetes using an AWS CloudFormation stack.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create or update IAM roles and policies for Kubernetes using an AWS CloudFormation stack with a custom configuration.
clusterawsadm bootstrap iam create-cloudformation-stack --config bootstrap_config.yaml
Options
--config string clusterawsadm will load a bootstrap configuration from this file. The path may be
absolute or relative; relative paths start at the current working directory.
The configuration file is a Kubernetes YAML using the
bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1/AWSIAMConfiguration
kind.
Documentation for this kind can be found at:
https://pkg.go.dev/sigs.k8s.io/cluster-api-provider-aws/v2/cmd/clusterawsadm/api/bootstrap/v1beta1
To see the default configuration, run 'clusterawsadm bootstrap iam print-config'.
-h, --help help for create-cloudformation-stack
--region string The AWS region in which to provision
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm bootstrap iam - View required AWS IAM policies and create/update IAM roles using AWS CloudFormation
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap iam delete-cloudformation-stack
Delete an AWS CloudFormation stack
Synopsis
Delete the AWS CloudFormation stack that created AWS Identity and Access Management (IAM) resources for use with Kubernetes Cluster API Provider AWS.
clusterawsadm bootstrap iam delete-cloudformation-stack [flags]
Options
--config string clusterawsadm will load a bootstrap configuration from this file. The path may be
absolute or relative; relative paths start at the current working directory.
The configuration file is a Kubernetes YAML using the
bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1/AWSIAMConfiguration
kind.
Documentation for this kind can be found at:
https://pkg.go.dev/sigs.k8s.io/cluster-api-provider-aws/v2/cmd/clusterawsadm/api/bootstrap/v1beta1
To see the default configuration, run 'clusterawsadm bootstrap iam print-config'.
-h, --help help for delete-cloudformation-stack
--region string The AWS region in which to provision
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm bootstrap iam - View required AWS IAM policies and create/update IAM roles using AWS CloudFormation
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap iam print-cloudformation-template
Print cloudformation template
Synopsis
Generate and print out a CloudFormation template that can be used to provision AWS Identity and Access Management (IAM) policies and roles for use with Kubernetes Cluster API Provider AWS.
clusterawsadm bootstrap iam print-cloudformation-template [flags]
Examples
# Print out the default CloudFormation template.
clusterawsadm bootstrap iam print-cloudformation-template
# Print out a CloudFormation template using a custom configuration.
clusterawsadm bootstrap iam print-cloudformation-template --config bootstrap_config.yaml
Options
--config string clusterawsadm will load a bootstrap configuration from this file. The path may be
absolute or relative; relative paths start at the current working directory.
The configuration file is a Kubernetes YAML using the
bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1/AWSIAMConfiguration
kind.
Documentation for this kind can be found at:
https://pkg.go.dev/sigs.k8s.io/cluster-api-provider-aws/v2/cmd/clusterawsadm/api/bootstrap/v1beta1
To see the default configuration, run 'clusterawsadm bootstrap iam print-config'.
-h, --help help for print-cloudformation-template
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm bootstrap iam - View required AWS IAM policies and create/update IAM roles using AWS CloudFormation
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap iam print-config
Print configuration
Synopsis
Print configuration
clusterawsadm bootstrap iam print-config [flags]
Examples
# Print the default configuration.
clusterawsadm bootstrap iam print-config
# Apply defaults to a configuration file and print the result
clusterawsadm bootstrap iam print-config --config bootstrap_config.yaml
Options
--config string clusterawsadm will load a bootstrap configuration from this file. The path may be
absolute or relative; relative paths start at the current working directory.
The configuration file is a Kubernetes YAML using the
bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1/AWSIAMConfiguration
kind.
Documentation for this kind can be found at:
https://pkg.go.dev/sigs.k8s.io/cluster-api-provider-aws/v2/cmd/clusterawsadm/api/bootstrap/v1beta1
To see the default configuration, run 'clusterawsadm bootstrap iam print-config'.
-h, --help help for print-config
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm bootstrap iam - View required AWS IAM policies and create/update IAM roles using AWS CloudFormation
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm bootstrap iam print-policy
Generate and show an IAM policy
Synopsis
Generate and show an AWS Identity and Access Management (IAM) policy for Kubernetes Cluster API Provider AWS.
clusterawsadm bootstrap iam print-policy [flags]
Examples
# Print out all the IAM policies for the Kubernetes Cluster API Provider AWS.
clusterawsadm bootstrap iam print-policy
# Print out the IAM policy for the Kubernetes Cluster API Provider AWS Controller.
clusterawsadm bootstrap iam print-policy --document AWSIAMManagedPolicyControllers
# Print out the IAM policy for the Kubernetes Cluster API Provider AWS Controller using a given configuration file.
clusterawsadm bootstrap iam print-policy --document AWSIAMManagedPolicyControllers --config bootstrap_config.yaml
# Print out the IAM policy for the Kubernetes AWS Cloud Provider for the control plane.
clusterawsadm bootstrap iam print-policy --document AWSIAMManagedPolicyCloudProviderControlPlane
# Print out the IAM policy for the Kubernetes AWS Cloud Provider for all nodes.
clusterawsadm bootstrap iam print-policy --document AWSIAMManagedPolicyCloudProviderNodes
# Print out the IAM policy for the Kubernetes AWS EBS CSI Driver Controller.
clusterawsadm bootstrap iam print-policy --document AWSEBSCSIPolicyController
Options
--config string clusterawsadm will load a bootstrap configuration from this file. The path may be
absolute or relative; relative paths start at the current working directory.
The configuration file is a Kubernetes YAML using the
bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1/AWSIAMConfiguration
kind.
Documentation for this kind can be found at:
https://pkg.go.dev/sigs.k8s.io/cluster-api-provider-aws/v2/cmd/clusterawsadm/api/bootstrap/v1beta1
To see the default configuration, run 'clusterawsadm bootstrap iam print-config'.
--document string which document to show: [AWSIAMManagedPolicyControllers AWSIAMManagedPolicyControllersEKS AWSIAMManagedPolicyCloudProviderControlPlane AWSIAMManagedPolicyCloudProviderNodes AWSEBSCSIPolicyController]
-h, --help help for print-policy
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm bootstrap iam - View required AWS IAM policies and create/update IAM roles using AWS CloudFormation
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm controller
controller commands
Synopsis
All controller related actions such as:
Zero controller credentials and rollout controllers
clusterawsadm controller [command] [flags]
Options
-h, --help help for controller
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm - Kubernetes Cluster API Provider AWS Management Utility
- clusterawsadm controller print-credentials - print credentials the controller is using
- clusterawsadm controller rollout-controller - initiates rollout and restart on capa-controller-manager deployment
- clusterawsadm controller update-credentials - update credentials the controller is using (i.e., update controller bootstrap secret)
- clusterawsadm controller zero-credentials - zero credentials the controller is started with
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm controller print-credentials
print credentials the controller is using
Synopsis
print credentials the controller is using
clusterawsadm controller print-credentials [flags]
Examples
# print credentials
clusterawsadm controller print-credentials --kubeconfig=kubeconfig --namespace=capa-system
Options
-h, --help help for print-credentials
--kubeconfig string Path to the kubeconfig file to use for the management cluster. If empty, default discovery rules apply.
--kubeconfig-context string Context to be used within the kubeconfig file. If empty, current context will be used.
--namespace string Namespace the controllers are in. If empty, default value (capa-system) is used (default "capa-system")
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm controller - controller commands
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm controller rollout-controller
initiates rollout and restart on capa-controller-manager deployment
Synopsis
initiates rollout and restart on capa-controller-manager deployment
clusterawsadm controller rollout-controller [flags]
Examples
# rollout controller deployment
clusterawsadm controller rollout-controller --kubeconfig=kubeconfig --namespace=capa-system
Options
-h, --help help for rollout-controller
--kubeconfig string Path to the kubeconfig file to use for the management cluster. If empty, default discovery rules apply.
--kubeconfig-context string Context to be used within the kubeconfig file. If empty, current context will be used.
--namespace string Namespace the controllers are in. If empty, default value (capa-system) is used (default "capa-system")
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm controller - controller commands
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm controller update-credentials
update credentials the controller is using (i.e., update controller bootstrap secret)
Synopsis
Update credentials the controller is started with
clusterawsadm controller update-credentials [flags]
Examples
# update credentials: the AWS_B64ENCODED_CREDENTIALS environment variable must be set and will be used to update the bootstrap secret
# Kubeconfig file will be searched in default locations
clusterawsadm controller update-credentials --namespace=capa-system
# Provided kubeconfig file will be used
clusterawsadm controller update-credentials --kubeconfig=kubeconfig --namespace=capa-system
# Kubeconfig in the default location will be retrieved and the provided context will be used
clusterawsadm controller update-credentials --kubeconfig-context=mgmt-cluster --namespace=capa-system
Options
-h, --help help for update-credentials
--kubeconfig string Path to the kubeconfig file to use for the management cluster. If empty, default discovery rules apply.
--kubeconfig-context string Context to be used within the kubeconfig file. If empty, current context will be used.
--namespace string Namespace the controllers are in. If empty, default value (capa-system) is used (default "capa-system")
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm controller - controller commands
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm controller zero-credentials
zero credentials the controller is started with
Synopsis
Zero credentials the controller is started with
clusterawsadm controller zero-credentials [flags]
Examples
# zero credentials
# Kubeconfig file will be searched in default locations
clusterawsadm controller zero-credentials --namespace=capa-system
# Provided kubeconfig file will be used
clusterawsadm controller zero-credentials --kubeconfig=kubeconfig --namespace=capa-system
# Kubeconfig in the default location will be retrieved and the provided context will be used
clusterawsadm controller zero-credentials --kubeconfig-context=mgmt-cluster --namespace=capa-system
Options
-h, --help help for zero-credentials
--kubeconfig string Path to the kubeconfig file to use for the management cluster. If empty, default discovery rules apply.
--kubeconfig-context string Context to be used within the kubeconfig file. If empty, current context will be used.
--namespace string Namespace the controllers are in. If empty, default value (capa-system) is used (default "capa-system")
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm controller - controller commands
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm eks
Commands related to EKS
clusterawsadm eks [flags]
Options
-h, --help help for eks
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm - Kubernetes Cluster API Provider AWS Management Utility
- clusterawsadm eks addons - Commands related to EKS addons
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm eks addons
Commands related to EKS addons
clusterawsadm eks addons [flags]
Options
-h, --help help for addons
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm eks - Commands related to EKS
- clusterawsadm eks addons list-available - List available EKS addons
- clusterawsadm eks addons list-installed - List installed EKS addons
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm eks addons list-available
List available EKS addons
Synopsis
Lists the addons that are available for use with an EKS cluster
clusterawsadm eks addons list-available [flags]
Options
-n, --cluster-name string The name of the cluster to get the list of available addons for
-h, --help help for list-available
-o, --output string The output format of the results. Possible values: table,json,yaml (default "table")
-r, --region string The AWS region containing the EKS cluster
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm eks addons - Commands related to EKS addons
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm eks addons list-installed
List installed EKS addons
Synopsis
Lists the addons that are installed for an EKS cluster
clusterawsadm eks addons list-installed [flags]
Options
-n, --cluster-name string The name of the cluster to get the list of installed addons for
-h, --help help for list-installed
-o, --output string The output format of the results. Possible values: table,json,yaml (default "table")
-r, --region string The AWS region containing the EKS cluster
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm eks addons - Commands related to EKS addons
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm gc
Commands related to garbage collecting external resources of clusters
clusterawsadm gc [command] [flags]
Options
-h, --help help for gc
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm - Kubernetes Cluster API Provider AWS Management Utility
- clusterawsadm gc configure - Specify what cleanup tasks will be executed on a given cluster
- clusterawsadm gc disable - Mark a cluster as NOT requiring external resource garbage collection
- clusterawsadm gc enable - Mark a cluster as requiring external resource garbage collection
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm gc configure
Specify what cleanup tasks will be executed on a given cluster
Synopsis
This command sets which cleanup tasks to execute on the given cluster during garbage collection (i.e., deletion) when the cluster is requested to be deleted. Supported values: load-balancer, security-group, target-group.
clusterawsadm gc configure [flags]
Examples
# Configure GC for a cluster to delete only load balancers and security groups using existing k8s context
clusterawsadm gc configure --cluster-name=test-cluster --gc-task load-balancer --gc-task security-group
# Reset GC configuration for a cluster using kubeconfig
clusterawsadm gc configure --cluster-name=test-cluster --kubeconfig=test.kubeconfig
Options
--cluster-name string The name of the CAPA cluster
--gc-task strings Garbage collection tasks to execute during cluster deletion
-h, --help help for configure
--kubeconfig string Path to the kubeconfig file to use (default "/opt/buildhome/.kube/config")
-n, --namespace string The namespace for the cluster definition (default "default")
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm gc - Commands related to garbage collecting external resources of clusters
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm gc disable
Mark a cluster as NOT requiring external resource garbage collection
Synopsis
This command will mark the given cluster as not requiring external resource garbage collection (i.e. deleting) when the cluster is requested to be deleted.
clusterawsadm gc disable [flags]
Examples
# Disable GC for a cluster using existing k8s context
clusterawsadm gc disable --cluster-name=test-cluster
# Disable GC for a cluster using kubeconfig
clusterawsadm gc disable --cluster-name=test-cluster --kubeconfig=test.kubeconfig
Options
--cluster-name string The name of the CAPA cluster
-h, --help help for disable
--kubeconfig string Path to the kubeconfig file to use (default "/opt/buildhome/.kube/config")
-n, --namespace string The namespace for the cluster definition (default "default")
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm gc - Commands related to garbage collecting external resources of clusters
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm gc enable
Mark a cluster as requiring external resource garbage collection
Synopsis
This command will mark the given cluster as requiring external resource garbage collection (i.e. deleting) when the cluster is requested to be deleted. This works by adding an annotation to the infra cluster.
clusterawsadm gc enable [flags]
Examples
# Enable GC for a cluster using existing k8s context
clusterawsadm gc enable --cluster-name=test-cluster
# Enable GC for a cluster using kubeconfig
clusterawsadm gc enable --cluster-name=test-cluster --kubeconfig=test.kubeconfig
Options
--cluster-name string The name of the CAPA cluster
-h, --help help for enable
--kubeconfig string Path to the kubeconfig file to use (default "/opt/buildhome/.kube/config")
-n, --namespace string The namespace for the cluster definition (default "default")
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm gc - Commands related to garbage collecting external resources of clusters
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm resource
Commands related to AWS resources
Synopsis
All AWS resource related actions, such as:
List AWS resources created by CAPA
clusterawsadm resource [command] [flags]
Options
-h, --help help for resource
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm - Kubernetes Cluster API Provider AWS Management Utility
- clusterawsadm resource list - List all AWS resources created by CAPA
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm resource list
List all AWS resources created by CAPA
Synopsis
List AWS resources directly created by CAPA based on region and cluster-name. Some indirect resources, such as CloudWatch alarms and rules, are not created directly by CAPA and are therefore not listed here. If region and cluster-name are not set, the command returns an error.
clusterawsadm resource list [flags]
Examples
# List AWS resources directly created by CAPA in the given region for the given cluster name
clusterawsadm resource list --region=us-east-1 --cluster-name=test-cluster
Options
-n, --cluster-name string The name of the cluster whose CAPA-created AWS resources will be listed
-h, --help help for list
-o, --output string The output format of the results. Possible values: table, json, yaml (default "table")
-r, --region string The AWS region where resources are created by CAPA
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm resource - Commands related to AWS resources
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm version
Print version of clusterawsadm
clusterawsadm version [flags]
Options
-h, --help help for version
-o, --output string Output format; available options are 'yaml', 'json' and 'short'
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm - Kubernetes Cluster API Provider AWS Management Utility
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm ami
AMI commands
Synopsis
All AMI related actions such as:
Copy AMIs based on Kubernetes version, OS, etc. from an AWS account where AMIs are stored
to the current AWS account (use case: air-gapped deployments)
(to be implemented) List available AMIs
clusterawsadm ami [command] [flags]
Options
-h, --help help for ami
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm - Kubernetes Cluster API Provider AWS Management Utility
- clusterawsadm ami copy - Copy AMIs from an AWS account to the AWS account whose credentials are provided
- clusterawsadm ami encrypted-copy - Encrypt and copy AMI snapshot, then create an AMI with that snapshot
- clusterawsadm ami list - List AMIs from the default AWS account where AMIs are stored
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm ami list
List AMIs from the default AWS account where AMIs are stored
Synopsis
List AMIs based on Kubernetes version, OS, region. If no arguments are provided, it will print all AMIs in all regions and OS types for the supported Kubernetes versions. Supported Kubernetes versions start from the latest stable version and go two releases back: if the latest stable release is v1.20.4, then v1.19.x and v1.18.x are supported. Note: the first release of each series (e.g., v1.21.0) is skipped. To list AMIs of unsupported Kubernetes versions, the --kubernetes-version flag needs to be provided.
clusterawsadm ami list [flags]
Examples
# List AMIs from the default AWS account where AMIs are stored.
# Available os options: centos-7, ubuntu-24.04, ubuntu-22.04, amazon-2, flatcar-stable
clusterawsadm ami list --kubernetes-version=v1.30.1 --os=ubuntu-22.04 --region=us-west-2
# To list all supported AMIs in all supported Kubernetes versions, regions, and linux distributions:
clusterawsadm ami list
Options
-h, --help help for list
--kubernetes-version string Kubernetes version of the AMI to be listed
--os string Operating system of the AMI to be listed
-o, --output string The output format of the results. Possible values: table,json,yaml (default "table")
--owner-id string The owner ID of the AWS account to be used for listing AMIs
--region string The AWS region in which to provision
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm ami - AMI commands
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm ami copy
Copy AMIs from an AWS account to the AWS account whose credentials are provided
Synopsis
Copy AMIs based on Kubernetes version, OS, region from an AWS account where AMIs are stored to the current AWS account (use case: air-gapped deployments)
clusterawsadm ami copy [flags]
Examples
# Copy AMI from the default AWS account where AMIs are stored.
# Available os options: centos-7, ubuntu-24.04, ubuntu-22.04, amazon-2, flatcar-stable
clusterawsadm ami copy --kubernetes-version=v1.30.1 --os=ubuntu-22.04 --region=us-west-2
# owner-id and dry-run flags are optional. region can be set via flag or env
clusterawsadm ami copy --os centos-7 --kubernetes-version=v1.19.4 --owner-id=111111111111 --dry-run
# copy from us-east-1 to us-east-2
clusterawsadm ami copy --os centos-7 --kubernetes-version=v1.19.4 --region us-east-2 --source-region us-east-1
Options
--dry-run Check if AMI exists and can be copied
-h, --help help for copy
--kubernetes-version string Kubernetes version of the AMI to be copied
--os string Operating system of the AMI to be copied
--owner-id string The source AWS owner ID, where the AMI will be copied from (default "819546954734")
--region string The AWS region in which to provision
--source-region string Set if wanting to copy an AMI from a different region
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm ami - AMI commands
Auto generated by spf13/cobra on 12-Nov-2024
clusterawsadm ami encrypted-copy
Encrypt and copy AMI snapshot, then create an AMI with that snapshot
Synopsis
Find the AMI based on Kubernetes version, OS, region in the AWS account where AMIs are stored. Encrypt and copy the snapshot of the AMI to the current AWS account. Create an AMI with that snapshot.
clusterawsadm ami encrypted-copy [flags]
Examples
# Create an encrypted AMI:
# Available os options: centos-7, ubuntu-24.04, ubuntu-22.04, amazon-2, flatcar-stable
clusterawsadm ami encrypted-copy --kubernetes-version=v1.30.1 --os=ubuntu-22.04 --region=us-west-2
# owner-id and dry-run flags are optional. region can be set via flag or env
clusterawsadm ami encrypted-copy --os centos-7 --kubernetes-version=v1.19.4 --owner-id=111111111111 --dry-run
# copy from us-east-1 to us-east-2
clusterawsadm ami encrypted-copy --os centos-7 --kubernetes-version=v1.19.4 --owner-id=111111111111 --region us-east-2 --source-region us-east-1
# Encrypt using a non-default KmsKeyId specified using Key ID:
clusterawsadm ami encrypted-copy --os centos-7 --kubernetes-version=v1.19.4 --kms-key-id=key/1234abcd-12ab-34cd-56ef-1234567890ab
# Encrypt using a non-default KmsKeyId specified using Key alias:
clusterawsadm ami encrypted-copy --os centos-7 --kubernetes-version=v1.19.4 --kms-key-id=alias/ExampleAlias
# Encrypt using a non-default KmsKeyId specified using Key ARN:
clusterawsadm ami encrypted-copy --os centos-7 --kubernetes-version=v1.19.4 --kms-key-id=arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef
# Encrypt using a non-default KmsKeyId specified using Alias ARN:
clusterawsadm ami encrypted-copy --os centos-7 --kubernetes-version=v1.19.4 --kms-key-id=arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias
Options
--dry-run Check if AMI exists and can be copied
-h, --help help for encrypted-copy
--kms-key-id string The ID of the KMS key for Amazon EBS encryption
--kubernetes-version string Kubernetes version of the AMI to be copied
--os string Operating system of the AMI to be copied
--owner-id string The source AWS owner ID, where the AMI will be copied from (default "819546954734")
--region string The AWS region in which to provision
--source-region string Set if wanting to copy an AMI from a different region
Options inherited from parent commands
-v, --v int Set the log level verbosity. (default 2)
SEE ALSO
- clusterawsadm ami - AMI commands
Auto generated by spf13/cobra on 12-Nov-2024
Developer Guide
Initial setup for development environment
Install prerequisites
- Install go
  - Get the latest patch version for go v1.22.
- Install jq
  - brew install jq on macOS, choco install jq on Windows, sudo apt install jq on Ubuntu Linux.
- Install KIND
  - go install sigs.k8s.io/kind@v0.12.0
- Install Kustomize
- Install envsubst
- Install make.
- Install direnv
  - brew install direnv on macOS.
- Set AWS environment variables for an IAM admin user:
export AWS_ACCESS_KEY_ID=ADMID
export AWS_SECRET_ACCESS_KEY=ADMKEY
export AWS_REGION=eu-west-1
Get the source
Fork the cluster-api-provider-aws repo:
cd "$(go env GOPATH)"/src
mkdir sigs.k8s.io
cd sigs.k8s.io/
git clone git@github.com:<GITHUB USERNAME>/cluster-api-provider-aws.git
cd cluster-api-provider-aws
git remote add upstream git@github.com:kubernetes-sigs/cluster-api-provider-aws.git
git fetch upstream
Build clusterawsadm
Build clusterawsadm in cluster-api-provider-aws:
cd "$(go env GOPATH)"/src/sigs.k8s.io/cluster-api-provider-aws/
make clusterawsadm
sudo mv ./bin/clusterawsadm /usr/local/bin/clusterawsadm
Setup AWS Environment
Create bootstrap file and bootstrap IAM roles and policies using clusterawsadm
$ cat config-bootstrap.yaml
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
bootstrapUser:
enable: true
$ clusterawsadm bootstrap iam create-cloudformation-stack
Attempting to create AWS CloudFormation stack cluster-api-provider-aws-sigs-k8s-io
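If you want to double-check the stack outside of clusterawsadm, a minimal sketch with the AWS CLI (assuming it is installed and the same credentials are exported) is:
# Confirm the bootstrap stack reached CREATE_COMPLETE
aws cloudformation describe-stacks --stack-name cluster-api-provider-aws-sigs-k8s-io --query 'Stacks[0].StackStatus'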
Customizing the bootstrap permissions
The IAM permissions can be customized by using a configuration file with clusterawsadm. For example, to create the default IAM role for use with managed machine pools:
$ cat config-bootstrap.yaml
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
bootstrapUser:
enable: true
eks:
iamRoleCreation: false # Set to true if you plan to use the EKSEnableIAM feature flag to enable automatic creation of IAM roles
managedMachinePool:
disable: false # Set to false to enable creation of the default node role for managed machine pools
Use the configuration file to create the additional IAM role:
$ clusterawsadm bootstrap iam create-cloudformation-stack --config=config-bootstrap.yaml
Attempting to create AWS CloudFormation stack cluster-api-provider-aws-sigs-k8s-io
If you don’t plan on using EKS then see the documentation on disabling EKS support.
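Before creating the stack, you can preview what a custom configuration will produce using the print commands from the CLI reference above:
# Show the CloudFormation template that would be applied
clusterawsadm bootstrap iam print-cloudformation-template --config config-bootstrap.yaml
# Show the controllers policy generated from the same configuration
clusterawsadm bootstrap iam print-policy --document AWSIAMManagedPolicyControllers --config config-bootstrap.yaml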
Sample Output
When creating the CloudFormation stack using clusterawsadm you will see output similar to this:
Following resources are in the stack:
Resource |Type |Status
AWS::IAM::Group |cluster-api-provider-aws-s-AWSIAMGroupBootstrapper-ME9XZVCO2491 |CREATE_COMPLETE
AWS::IAM::InstanceProfile |control-plane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::InstanceProfile |controllers.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::InstanceProfile |nodes.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::ManagedPolicy |arn:aws:iam::xxx:policy/control-plane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::ManagedPolicy |arn:aws:iam::xxx:policy/nodes.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::ManagedPolicy |arn:aws:iam::xxx:policy/controllers.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::Role |control-plane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::Role |controllers.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::Role |eks-controlplane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::Role |eks-nodegroup.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::Role |nodes.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::User |bootstrapper.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
Set Environment Variables
- Create security credentials for the bootstrapper.cluster-api-provider-aws.sigs.k8s.io IAM user that is created by the CloudFormation stack, and copy the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. (Or use admin user credentials instead.)
- Set the AWS_B64ENCODED_CREDENTIALS environment variable:
export AWS_ACCESS_KEY_ID=AKIATEST
export AWS_SECRET_ACCESS_KEY=TESTTEST
export AWS_REGION=eu-west-1
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
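As a quick sanity check, you can decode the generated value; it should contain an ini-style profile with the credentials you exported:
# Decode the base64-encoded profile (use base64 -D on older macOS)
echo "$AWS_B64ENCODED_CREDENTIALS" | base64 -d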
Running local management cluster for development
Before the next steps, make sure initial setup for development environment steps are complete.
There are two ways to build the AWS manager from local cluster-api-provider-aws source and run it in a local kind cluster:
Option 1: Setting up Development Environment with Tilt
Tilt is a tool for quickly building, pushing, and reloading Docker containers as part of a Kubernetes deployment. Many of the Cluster API engineers use it for quick iteration. Please see our Tilt instructions to get started.
Option 2: The Old-fashioned way
Running cluster-api and cluster-api-provider-aws controllers in a kind cluster:
- Create a local kind cluster
kind create cluster
- Install core cluster-api controllers (the version must match the cluster-api version in go.mod)
clusterctl init --core cluster-api:v0.3.16 --bootstrap kubeadm:v0.3.16 --control-plane kubeadm:v0.3.16
- Build cluster-api-provider-aws docker images
make e2e-image
- Release manifests under the ./out directory
RELEASE_TAG="e2e" make release-manifests
- Apply the manifests
kubectl apply -f ./out/infrastructure.yaml
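After applying the manifests, a sketch for verifying that the controllers came up (namespace names per the default manifests):
kubectl get pods -n capi-system
kubectl get pods -n capa-system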
Developing Cluster API Provider AWS with Tilt
This document describes how to use kind and Tilt for a simplified workflow that offers easy deployments and rapid iterative builds. Before the next steps, make sure initial setup for development environment steps are complete.
Also, visit the Cluster API documentation on Tilt for more information on how to set up your development environment.
Create a kind cluster
First, make sure you have a kind cluster and that your KUBECONFIG is set up correctly:
kind create cluster --name=capi-test
This local cluster will run all the Cluster API controllers and become the management cluster, which can then be used to spin up workload clusters on AWS.
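To confirm the cluster is up and your kubeconfig points at it (kind names the context kind-<cluster-name>):
kubectl cluster-info --context kind-capi-test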
Get the source
Get the source for core cluster-api for development with Tilt along with cluster-api-provider-aws.
cd "$(go env GOPATH)"/src
mkdir sigs.k8s.io
cd sigs.k8s.io/
git clone git@github.com:kubernetes-sigs/cluster-api.git
cd cluster-api
git fetch origin
Create a tilt-settings.json file
Next, create a tilt-settings.json file and place it in your local copy of cluster-api. Here is an example:
Example tilt-settings.json for CAPA clusters:
{
"enable_providers": [
"kubeadm-bootstrap",
"kubeadm-control-plane",
"aws"
],
"default_registry": "gcr.io/your-project-name-here",
"provider_repos": [
"/Users/username/go/src/sigs.k8s.io/cluster-api-provider-aws/v2"
],
"kustomize_substitutions": {
"EXP_CLUSTER_RESOURCE_SET": "true",
"EXP_MACHINE_POOL": "true",
"EVENT_BRIDGE_INSTANCE_STATE": "true",
"AWS_B64ENCODED_CREDENTIALS": "W2RlZmFZSZnRg==",
"EXP_EKS_FARGATE": "false",
"CAPA_EKS_IAM": "false",
"CAPA_EKS_ADD_ROLES": "false",
"EXP_BOOTSTRAP_FORMAT_IGNITION": "true"
},
"extra_args": {
"aws": ["--v=2"]
}
}
Example tilt-settings.json for EKS managed clusters prior to CAPA v0.7.0:
{
"default_registry": "gcr.io/your-project-name-here",
"provider_repos": ["../cluster-api-provider-aws"],
"enable_providers": ["eks-bootstrap", "eks-controlplane", "kubeadm-bootstrap", "kubeadm-control-plane", "aws"],
"kustomize_substitutions": {
"AWS_B64ENCODED_CREDENTIALS": "W2RlZmFZSZnRg==",
"EXP_EKS": "true",
"EXP_EKS_IAM": "true",
"EXP_MACHINE_POOL": "true"
},
"extra_args": {
"aws": ["--v=2"],
"eks-bootstrap": ["--v=2"],
"eks-controlplane": ["--v=2"]
}
}
Debugging
If you would like to debug CAPA (or core CAPI / another provider) you can run the provider with delve. This will then allow you to attach to delve and debug.
To do this you need to use the debug configuration in tilt-settings.json. Full details of the options can be seen here.
An example tilt-settings.json:
{
"enable_providers": [
"kubeadm-bootstrap",
"kubeadm-control-plane",
"aws"
],
"default_registry": "gcr.io/your-project-name-here",
"provider_repos": [
"/Users/username/go/src/sigs.k8s.io/cluster-api-provider-aws/v2"
],
"kustomize_substitutions": {
"EXP_CLUSTER_RESOURCE_SET": "true",
"EXP_MACHINE_POOL": "true",
"EVENT_BRIDGE_INSTANCE_STATE": "true",
"AWS_B64ENCODED_CREDENTIALS": "W2RlZmFZSZnRg==",
"EXP_EKS_FARGATE": "false",
"CAPA_EKS_IAM": "false",
"CAPA_EKS_ADD_ROLES": "false"
},
"extra_args": {
"aws": ["--v=2"]
},
"debug": {
"aws": {
"continue": true,
"port": 30000
}
}
}
Once you have run tilt (see section below) you will be able to connect to the running instance of delve.
For vscode, you can use a launch configuration like this:
{
"name": "Connect to CAPA",
"type": "go",
"request": "attach",
"mode": "remote",
"remotePath": "",
"port": 30000,
"host": "127.0.0.1",
"showLog": true,
"trace": "log",
"logOutput": "rpc"
}
For GoLand/IntelliJ add a new run configuration following these instructions.
Or you could use delve directly from the CLI using a command similar to this:
dlv-dap connect 127.0.0.1:30000
Run Tilt!
To launch your development environment, run:
tilt up
The kind cluster becomes a management cluster after this point; check the pods running on it with kubectl get pods -A.
Create workload clusters
Set the following variables for both CAPA and EKS managed clusters:
export AWS_SSH_KEY_NAME=<sshkeypair>
export KUBERNETES_VERSION=v1.20.2
export CLUSTER_NAME=capi-<test-clustename>
export CONTROL_PLANE_MACHINE_COUNT=1
export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
export WORKER_MACHINE_COUNT=1
export AWS_NODE_MACHINE_TYPE=t3.large
Set the following variables for only EKS managed clusters:
export AWS_EKS_ROLE_ARN=arn:aws:iam::<accountid>:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS
export EKS_KUBERNETES_VERSION=v1.15
Create CAPA managed workload cluster:
cat templates/cluster-template.yaml
cat templates/cluster-template.yaml | $HOME/go/bin/envsubst > test-cluster.yaml
kubectl apply -f test-cluster.yaml
Create EKS workload cluster:
cat templates/cluster-template-eks.yaml
cat templates/cluster-template-eks.yaml | $HOME/go/bin/envsubst > test-cluster.yaml
kubectl apply -f test-cluster.yaml
Check the tilt logs and wait for the clusters to be created.
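A sketch for watching progress from the management cluster, using the standard CAPI/CAPA resource kinds:
kubectl get clusters -A
kubectl get awsclusters -A
kubectl get machines -A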
Clean up
Before deleting the kind cluster, make sure you delete all the workload clusters.
kubectl delete cluster <clustername>
# stop Tilt: press ctrl-c in the terminal running tilt up
kind delete cluster
Troubleshooting
- Make sure you have capacity for at least three more Elastic IPs and NAT gateways in the region, as they will be created for the workload cluster.
- If your envsubst starts throwing this error
flag provided but not defined: -variables
Usage: envsubst [options...] <input>
you might need to reinstall the system envsubst:
brew install gettext
# or
brew reinstall gettext
Make sure you specify which envsubst you are using.
Developing E2E tests
Visit the Cluster API documentation on E2E for information on how to develop and run e2e tests.
Set up
It’s recommended to create a separate AWS account to run E2E tests. This ensures they do not conflict with your other Cluster API environments.
Running from CLI
e2e tests can be run using Makefile targets:
$ make test-e2e
$ make test-e2e-eks
The following useful env variables can help to speed up the runs (see the combined sketch after this list):
- E2E_ARGS="--skip-cloudformation-creation --skip-cloudformation-deletion" - in case the CloudFormation stack is already properly set up, this ensures a quicker start and tear down.
- GINKGO_FOCUS='\[PR-Blocking\]' - only run a subset of tests.
- USE_EXISTING_CLUSTER - use an existing management cluster (useful if you have a Tilt setup).
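For example, a combined invocation (values are illustrative) might look like:
E2E_ARGS="--skip-cloudformation-creation --skip-cloudformation-deletion" \
GINKGO_FOCUS='\[PR-Blocking\]' \
USE_EXISTING_CLUSTER=true \
make test-e2e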
Running in IDEs
The following example assumes you run a management cluster locally (e.g. using Tilt).
IntelliJ/GoLand
The following run configuration can be used:
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="capa e2e: unmanaged PR-Blocking" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="cluster-api-provider-aws" />
<working_directory value="$PROJECT_DIR$/test/e2e/suites/unmanaged" />
<parameters value="-ginkgo.focus="\[PR-Blocking\]" -ginkgo.v=true -artifacts-folder=$PROJECT_DIR$/_artifacts --data-folder=$PROJECT_DIR$/test/e2e/data -use-existing-cluster=true -config-path=$PROJECT_DIR$/test/e2e/data/e2e_conf.yaml" />
<envs>
<env name="AWS_REGION" value="SET_AWS_REGION" />
<env name="AWS_PROFILE" value="IF_YOU_HAVE_MULTIPLE_PROFILES" />
<env name="AWS_ACCESS_KEY_ID" value="REPLACE_ACCESS_KEY" />
<env name="AWS_SECRET_ACCESS_KEY" value="2W2RlZmFZSZnRg==" />
</envs>
<kind value="PACKAGE" />
<package value="sigs.k8s.io/cluster-api-provider-aws/v2/test/e2e/suites/unmanaged" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<framework value="gotest" />
<pattern value="^\QTestE2E\E$" />
<method v="2" />
</configuration>
</component>
Visual Studio Code
With the example above, you can configure a launch configuration for VSCode.
Coding Conventions
Below is a collection of conventions, guidelines and general tips for writing code for this project.
API Definitions
Don’t Expose 3rd Party Package Types
When adding new API types or modifying existing ones, don’t expose 3rd party package types/enums via the CAPA API definitions. Instead, create our own versions and, where needed, provide mapping functions.
For example:
- AWS SDK InstanceState
- CAPA InstanceState
Don’t use struct pointer slices
When adding new fields to an API type, don’t use a slice of struct pointers. This can cause issues with the code generator for the conversion functions. Instead, use struct slices.
For example:
Instead of this
// Configuration options for the non root storage volumes.
// +optional
NonRootVolumes []*Volume `json:"nonRootVolumes,omitempty"`
use
// Configuration options for the non root storage volumes.
// +optional
NonRootVolumes []Volume `json:"nonRootVolumes,omitempty"`
And then within the code you can check the length or range over the slice.
Tests
There are three types of tests written for CAPA controllers in this repo:
- Unit tests
- Integration tests
- E2E tests
In these tests, we use fakeclient, envtest and gomock libraries based on the requirements of individual test types.
If any new unit, integration, or E2E tests have to be added to this repo, we should follow the conventions below.
Unit tests
These tests are meant to verify the functions inside the same controller file where we perform sanity checks, functionality checks etc. These tests go into the file with suffix *_unit_test.go.
Integration tests
These tests are meant to verify the overall flow of the reconcile calls in the controllers to test the flows for all the services/subcomponents of controllers as a whole. These tests go into the file with suffix *_test.go.
E2E tests
These tests are meant to verify the proper functioning of a CAPA cluster in an environment that resembles a real production environment. For details, refer here.
Nightly Builds
Nightly builds are regular automated builds of the CAPA source code that occur every night.
These builds are generated directly from the latest commit of source code on the main branch.
Nightly builds serve several purposes:
- Early Testing: They provide an opportunity for developers and testers to access the most recent changes in the codebase and identify any issues or bugs that may have been introduced.
- Feedback Loop: They facilitate a rapid feedback loop, enabling developers to receive feedback on their changes quickly, allowing them to iterate and improve the code more efficiently.
- Preview of New Features: Users can get a preview of upcoming features or changes by testing nightly builds, although these builds may not always be stable enough for production use.
Overall, nightly builds play a crucial role in software development by promoting user testing, early bug detection, and rapid iteration.
CAPA Nightly build jobs run in Prow. For details on how this is configured you can check the Periodics Jobs section.
Usage
To try a nightly build, download the latest nightly CAPA manifests. You can find the available ones by executing the following command:
curl -sL -H 'Accept: application/json' "https://storage.googleapis.com/storage/v1/b/k8s-staging-cluster-api-aws/o" | jq -r '.items | map(select(.name | startswith("components/nightly_main"))) | .[] | [.timeCreated,.mediaLink] | @tsv'
The output should look something like this:
2024-05-03T08:03:09.087Z https://storage.googleapis.com/download/storage/v1/b/k8s-staging-cluster-api-aws/o/components%2Fnightly_main_2024050x?generation=1714723389033961&alt=media
2024-05-04T08:02:52.517Z https://storage.googleapis.com/download/storage/v1/b/k8s-staging-cluster-api-aws/o/components%2Fnightly_main_2024050y?generation=1714809772486582&alt=media
2024-05-05T08:02:45.840Z https://storage.googleapis.com/download/storage/v1/b/k8s-staging-cluster-api-aws/o/components%2Fnightly_main_2024050z?generation=1714896165803510&alt=media
Now visit the link for the manifest you want to download. This will automatically download the manifest for you.
Once downloaded you can apply the manifest directly to your testing CAPI management cluster/namespace (e.g. with kubectl), as the downloaded CAPA manifest will already contain the correct, corresponding CAPA nightly image reference.
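For example, a sketch that downloads one of the listed manifests and applies it (the URL placeholder must be replaced with a real mediaLink from the output above; the file name capa-nightly.yaml is arbitrary):
curl -sLo capa-nightly.yaml "<mediaLink-from-listing>"
kubectl apply -f capa-nightly.yaml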
Publish AMIs
Publishing new AMIs is done via manually invoking a GitHub Actions workflow.
NOTE: the plan is to ultimately fully automate the process in the future (see this issue for progress).
NOTE: there are some issues with the RHEL based images at present.
Get build inputs
For a new Kubernetes version that you want to build an AMI for, you will need to determine the following values:
Input | Description |
---|---|
kubernetes_semver | The semver version of k8s you want to build an AMI for. In format vMAJOR.MINOR.PATCH. |
kubernetes_series | The release series for the Kubernetes version. In format vMAJOR.MINOR. |
kubernetes_deb_version | The version of the debian package for the release. |
kubernetes_rpm_version | The version of the rpm package for the release |
kubernetes_cni_semver | The version of CNI to include. It needs to match the k8s release. |
kubernetes_cni_deb_version | The version of the debian package for the CNI release to use |
crictl_version | The version of the cri-tools package to install into the AMI. |
You can determine these values directly or by looking at the published Debian apt repositories for the k8s release.
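As a sketch, the deb package versions can often be read from the community apt repository for the series (URL follows the pkgs.k8s.io layout; adjust v1.30 to your series):
curl -sL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Packages | grep -A 2 '^Package: kubeadm'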
Build
Using GitHub Actions Workflow
To build the AMI using GitHub Actions you must have write access to the CAPA repository (i.e. be a maintainer or part of the release team).
To build the new version:
- Go to the GitHub Action
- Click the Start Workflow button
- Fill in the details of the build
- Click Run
Manually
WARNING: the manual process should only be followed in exceptional circumstances.
To build manually you must have admin access to the CNCF AWS account used for the AMIs.
The steps to build manually are:
- Clone image-builder
- Open a terminal
- Set the AWS environment variables for the CAPA AMI account
- Change directory into
images/capi
- Create a new file called
vars.json
with the following content (substituting the values with the build inputs):
{
"kubernetes_rpm_version": "<INSERT_INPUT_VALUE>",
"kubernetes_semver": "<INSERT_INPUT_VALUE>",
"kubernetes_series": "<INSERT_INPUT_VALUE>",
"kubernetes_deb_version": "<INSERT_INPUT_VALUE>",
"kubernetes_cni_semver": "<INSERT_INPUT_VALUE>",
"kubernetes_cni_deb_version": "<INSERT_INPUT_VALUE>",
"crictl_version": "<INSERT_INPUT_VALUE>"
}
- Install dependencies by running:
make deps-ami
- Build the AMIs using:
PACKER_VAR_FILES=vars.json make build-ami-ubuntu-2204
PACKER_VAR_FILES=vars.json make build-ami-ubuntu-2404
PACKER_VAR_FILES=vars.json make build-ami-flatcar
PACKER_VAR_FILES=vars.json make build-ami-rhel-8
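Once published, a sketch to confirm the new AMIs are visible, using the list command documented earlier (the owner ID defaults to the CNCF account):
clusterawsadm ami list --kubernetes-version=<kubernetes_semver> --os=ubuntu-22.04 --region=us-west-2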
Additional Information
- The AMIs are hosted in a CNCF owned AWS account (819546954734).
- The AWS resources that are needed to support the GitHub Actions workflow are created via terraform. Source is here.
- OIDC and IAM Roles are used to grant access via short lived credentials to the GitHub Action workflow instance when it runs.
Packages:
- ami.aws.infrastructure.cluster.x-k8s.io/v1beta1
- bootstrap.aws.infrastructure.cluster.x-k8s.io/v1alpha1
- bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
- bootstrap.cluster.x-k8s.io/v1beta1
- bootstrap.cluster.x-k8s.io/v1beta2
- controlplane.cluster.x-k8s.io/v1beta1
- controlplane.cluster.x-k8s.io/v1beta2
- infrastructure.cluster.x-k8s.io/v1beta1
- infrastructure.cluster.x-k8s.io/v1beta2
ami.aws.infrastructure.cluster.x-k8s.io/v1beta1
Package v1beta1 contains API Schema definitions for the AMI v1beta1 API group
Resource Types: AWSAMI
AWSAMI defines an AMI.
Field | Description
---|---
metadata Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the metadata field.
spec AWSAMISpec | See AWSAMISpec below.
AWSAMISpec
(Appears on:AWSAMI)
AWSAMISpec defines an AMI.
Field | Description
---|---
os string |
region string |
imageID string |
kubernetesVersion string |
bootstrap.aws.infrastructure.cluster.x-k8s.io/v1alpha1
Package v1alpha1 contains API Schema definitions for the bootstrap v1alpha1 API group
Resource Types: AWSIAMConfiguration
AWSIAMConfiguration controls the creation of AWS Identity and Access Management (IAM) resources for use by Kubernetes clusters and Kubernetes Cluster API Provider AWS.
Field | Description
---|---
spec AWSIAMConfigurationSpec | See AWSIAMConfigurationSpec below.
AWSIAMConfigurationSpec
(Appears on:AWSIAMConfiguration)
AWSIAMConfigurationSpec defines the specification of the AWSIAMConfiguration.
Field | Description
---|---
namePrefix string | NamePrefix will be prepended to every AWS IAM role, user and policy created by clusterawsadm. Defaults to “”.
nameSuffix string | NameSuffix will be appended to every AWS IAM role, user and policy created by clusterawsadm. Defaults to “.cluster-api-provider-aws.sigs.k8s.io”.
controlPlane ControlPlane | ControlPlane controls the configuration of the AWS IAM role for a Kubernetes cluster’s control plane nodes.
clusterAPIControllers ClusterAPIControllers | ClusterAPIControllers controls the configuration of an IAM role and policy specifically for Kubernetes Cluster API Provider AWS.
nodes Nodes | Nodes controls the configuration of the AWS IAM role for all nodes in a Kubernetes cluster.
bootstrapUser BootstrapUser | BootstrapUser contains a list of elements that is specific to the configuration and enablement of an IAM user.
stackName string | StackName defines the name of the AWS CloudFormation stack.
region string | Region controls which region the control-plane is created in if not specified on the command line or via environment variables.
eks EKSConfig | EKS controls the configuration related to EKS. Settings in here affect the control plane and nodes roles.
eventBridge EventBridgeConfig | EventBridge controls configuration for consuming EventBridge events.
partition string | Partition is the AWS security partition being used. Defaults to “aws”.
secureSecretBackends []SecretBackend | SecureSecretsBackend, when set to parameter-store, will create AWS Systems Manager Parameter Storage policies. By default, or with the value of secrets-manager, it will generate AWS Secrets Manager policies instead.
AWSIAMRoleSpec
(Appears on:ClusterAPIControllers, ControlPlane, EKSConfig, Nodes)
AWSIAMRoleSpec defines common configuration for AWS IAM roles created by Kubernetes Cluster API Provider AWS.
Field | Description
---|---
disable bool | Disable if set to true will not create the AWS IAM role. Defaults to false.
extraPolicyAttachments []string | ExtraPolicyAttachments is a list of additional policies to be attached to the IAM role.
extraStatements []Cluster API AWS iam/api/v1beta1.StatementEntry | ExtraStatements are additional IAM statements to be included inline for the role.
trustStatements []Cluster API AWS iam/api/v1beta1.StatementEntry | TrustStatements is an IAM PolicyDocument defining what identities are allowed to assume this role. See “sigs.k8s.io/cluster-api-provider-aws/v2/cmd/clusterawsadm/api/iam/v1beta1” for more documentation.
tags Tags | Tags is a map of tags to be applied to the AWS IAM role.
BootstrapUser
(Appears on:AWSIAMConfigurationSpec)
BootstrapUser contains a list of elements that is specific to the configuration and enablement of an IAM user.
Field | Description
---|---
enable bool | Enable controls whether or not a bootstrap AWS IAM user will be created. This can be used to scope down the initial credentials used to bootstrap the cluster. Defaults to false.
userName string | UserName controls the username of the bootstrap user. Defaults to “bootstrapper.cluster-api-provider-aws.sigs.k8s.io”.
groupName string | GroupName controls the group the user will belong to. Defaults to “bootstrapper.cluster-api-provider-aws.sigs.k8s.io”.
extraPolicyAttachments []string | ExtraPolicyAttachments is a list of additional policies to be attached to the IAM user.
extraGroups []string | ExtraGroups is a list of groups to add this user to.
extraStatements []Cluster API AWS iam/api/v1beta1.StatementEntry | ExtraStatements are additional AWS IAM policy document statements to be included inline for the user.
tags Tags | Tags is a map of tags to be applied to the AWS IAM user.
ClusterAPIControllers
(Appears on:AWSIAMConfigurationSpec)
ClusterAPIControllers controls the configuration of the AWS IAM role for the Kubernetes Cluster API Provider AWS controller.
Field | Description
---|---
AWSIAMRoleSpec AWSIAMRoleSpec | (Members of AWSIAMRoleSpec are embedded into this type.)
allowedEC2InstanceProfiles []string | AllowedEC2InstanceProfiles controls which EC2 roles are allowed to be consumed by Cluster API when creating an ec2 instance. Defaults to *.
ControlPlane
(Appears on:AWSIAMConfigurationSpec)
ControlPlane controls the configuration of the AWS IAM role for the control plane of provisioned Kubernetes clusters.
Field | Description
---|---
AWSIAMRoleSpec AWSIAMRoleSpec | (Members of AWSIAMRoleSpec are embedded into this type.)
disableClusterAPIControllerPolicyAttachment bool | DisableClusterAPIControllerPolicyAttachment, if set to true, will not attach the AWS IAM policy for Cluster API Provider AWS to the control plane role. Defaults to false.
disableCloudProviderPolicy bool | DisableCloudProviderPolicy, if set to true, will not generate and attach the AWS IAM policy for the AWS Cloud Provider.
enableCSIPolicy bool | EnableCSIPolicy, if set to true, will generate and attach the AWS IAM policy for the EBS CSI Driver.
EKSConfig
(Appears on:AWSIAMConfigurationSpec)
EKSConfig represents the EKS-related configuration.
Field | Description |
---|---|
disable bool |
Disable controls whether EKS-related permissions are granted |
iamRoleCreation bool |
AllowIAMRoleCreation controls whether the EKS controllers have permissions for creating IAM roles per cluster |
enableUserEKSConsolePolicy bool |
EnableUserEKSConsolePolicy controls the creation of the policy to view EKS nodes and workloads. |
defaultControlPlaneRole AWSIAMRoleSpec |
DefaultControlPlaneRole controls the configuration of the AWS IAM role for the EKS control plane. This is the default role that will be used if no role is included in the spec and automatic creation of the role isn’t enabled |
managedMachinePool AWSIAMRoleSpec |
ManagedMachinePool controls the configuration of the AWS IAM role used by EKS managed machine pools. |
fargate AWSIAMRoleSpec |
Fargate controls the configuration of the AWS IAM role used by EKS Fargate profiles. |
kmsAliasPrefix string |
KMSAliasPrefix is the prefix used to restrict permissions to only those KMS keys whose alias name starts with it. Defaults to cluster-api-provider-aws-* |
EventBridgeConfig
(Appears on:AWSIAMConfigurationSpec)
EventBridgeConfig represents configuration for enabling experimental feature to consume EventBridge EC2 events.
Field | Description |
---|---|
enable bool |
Enable controls whether permissions are granted to consume EC2 events |
Nodes
(Appears on:AWSIAMConfigurationSpec)
Nodes controls the configuration of the AWS IAM role for worker nodes in a cluster created by Kubernetes Cluster API Provider AWS.
Field | Description |
---|---|
AWSIAMRoleSpec AWSIAMRoleSpec |
(Members of AWSIAMRoleSpec are embedded into this type.) |
disableCloudProviderPolicy bool |
DisableCloudProviderPolicy if set to true, will not generate and attach the policy for the AWS Cloud Provider. Defaults to false. |
ec2ContainerRegistryReadOnly bool |
EC2ContainerRegistryReadOnly controls whether the node has read-only access to the EC2 container registry |
bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
Package v1beta1 contains API Schema definitions for the bootstrap v1beta1 API group
Resource Types:
AWSIAMConfiguration
AWSIAMConfiguration controls the creation of AWS Identity and Access Management (IAM) resources for use by Kubernetes clusters and Kubernetes Cluster API Provider AWS.
Field | Description |
---|---|
spec AWSIAMConfigurationSpec |
|
AWSIAMConfigurationSpec
(Appears on:AWSIAMConfiguration)
AWSIAMConfigurationSpec defines the specification of the AWSIAMConfiguration.
Field | Description |
---|---|
namePrefix string |
NamePrefix will be prepended to every AWS IAM role, user and policy created by clusterawsadm. Defaults to “”. |
nameSuffix string |
NameSuffix will be appended to every AWS IAM role, user and policy created by clusterawsadm. Defaults to “.cluster-api-provider-aws.sigs.k8s.io”. |
controlPlane ControlPlane |
ControlPlane controls the configuration of the AWS IAM role for a Kubernetes cluster’s control plane nodes. |
clusterAPIControllers ClusterAPIControllers |
ClusterAPIControllers controls the configuration of an IAM role and policy specifically for Kubernetes Cluster API Provider AWS. |
nodes Nodes |
Nodes controls the configuration of the AWS IAM role for all nodes in a Kubernetes cluster. |
bootstrapUser BootstrapUser |
BootstrapUser contains settings specific to the configuration and enablement of the bootstrap IAM user. |
stackName string |
StackName defines the name of the AWS CloudFormation stack. |
stackTags map[string]string |
(Optional)
StackTags defines the tags of the AWS CloudFormation stack. |
region string |
Region controls which region the control-plane is created in if not specified on the command line or via environment variables. |
eks EKSConfig |
EKS controls the configuration related to EKS. Settings here affect the control plane and node roles. |
eventBridge EventBridgeConfig |
EventBridge controls configuration for consuming EventBridge events |
partition string |
Partition is the AWS security partition being used. Defaults to “aws” |
secureSecretBackends []SecretBackend |
SecureSecretsBackends, when set to parameter-store, will create AWS Systems Manager Parameter Store policies. By default, or with the value secrets-manager, AWS Secrets Manager policies are generated instead. |
s3Buckets S3Buckets |
(Optional)
S3Buckets, when enabled, will add permissions for controller nodes to create S3 buckets for workload clusters. |
allowAssumeRole bool |
AllowAssumeRole enables the sts:AssumeRole permission within the CAPA policies |
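In practice these fields are supplied to clusterawsadm as a configuration file rather than applied to a cluster. Below is a minimal illustrative sketch; the region, flag values, and file name are assumptions, not defaults:

```yaml
# bootstrap-config.yaml: illustrative AWSIAMConfiguration consumed by clusterawsadm
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
  region: us-east-1        # assumed example region
  bootstrapUser:
    enable: true           # create the scoped-down bootstrap IAM user
  eks:
    disable: false         # keep EKS-related permissions
    iamRoleCreation: false # do not grant per-cluster IAM role creation
```

The file would then be passed when creating the CloudFormation stack, e.g. `clusterawsadm bootstrap iam create-cloudformation-stack --config bootstrap-config.yaml`.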
AWSIAMRoleSpec
(Appears on:ClusterAPIControllers, ControlPlane, EKSConfig, Nodes)
AWSIAMRoleSpec defines common configuration for AWS IAM roles created by Kubernetes Cluster API Provider AWS.
Field | Description |
---|---|
disable bool |
Disable if set to true will not create the AWS IAM role. Defaults to false. |
extraPolicyAttachments []string |
ExtraPolicyAttachments is a list of additional policies to be attached to the IAM role. |
extraStatements []Cluster API AWS iam/api/v1beta1.StatementEntry |
ExtraStatements are additional IAM statements to be included inline for the role. |
trustStatements []Cluster API AWS iam/api/v1beta1.StatementEntry |
TrustStatements is an IAM PolicyDocument defining what identities are allowed to assume this role. See “sigs.k8s.io/cluster-api-provider-aws/v2/cmd/clusterawsadm/api/iam/v1beta1” for more documentation. |
tags Tags |
Tags is a map of tags to be applied to the AWS IAM role. |
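To illustrate how these common role fields compose, the sketch below layers an extra managed policy, an inline statement, and a trust relationship onto the controllers role. The ARN, actions, and tag values are assumptions, and the capitalised statement keys are assumed to follow the IAM policy document grammar used by the iam/api/v1beta1 types:

```yaml
# Illustrative fragment of an AWSIAMConfiguration spec using AWSIAMRoleSpec fields.
clusterAPIControllers:
  disable: false
  extraPolicyAttachments:
    - arn:aws:iam::123456789012:policy/my-extra-policy  # assumed policy ARN
  extraStatements:
    - Effect: Allow
      Action: ["ec2:DescribeSpotPriceHistory"]          # assumed extra action
      Resource: ["*"]
  trustStatements:
    - Effect: Allow
      Principal:
        Service: ["ec2.amazonaws.com"]
      Action: ["sts:AssumeRole"]
  tags:
    environment: dev                                    # assumed tag
```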
BootstrapUser
(Appears on:AWSIAMConfigurationSpec)
BootstrapUser contains settings specific to the configuration and enablement of the bootstrap IAM user.
Field | Description |
---|---|
enable bool |
Enable controls whether or not a bootstrap AWS IAM user will be created. This can be used to scope down the initial credentials used to bootstrap the cluster. Defaults to false. |
userName string |
UserName controls the username of the bootstrap user. Defaults to “bootstrapper.cluster-api-provider-aws.sigs.k8s.io” |
groupName string |
GroupName controls the group the user will belong to. Defaults to “bootstrapper.cluster-api-provider-aws.sigs.k8s.io” |
extraPolicyAttachments []string |
ExtraPolicyAttachments is a list of additional policies to be attached to the IAM user. |
extraGroups []string |
ExtraGroups is a list of groups to add this user to. |
extraStatements []Cluster API AWS iam/api/v1beta1.StatementEntry |
ExtraStatements are additional AWS IAM policy document statements to be included inline for the user. |
tags Tags |
Tags is a map of tags to be applied to the AWS IAM user. |
ClusterAPIControllers
(Appears on:AWSIAMConfigurationSpec)
ClusterAPIControllers controls the configuration of the AWS IAM role for the Kubernetes Cluster API Provider AWS controller.
Field | Description |
---|---|
AWSIAMRoleSpec AWSIAMRoleSpec |
(Members of AWSIAMRoleSpec are embedded into this type.) |
allowedEC2InstanceProfiles []string |
AllowedEC2InstanceProfiles controls which EC2 roles are allowed to be consumed by Cluster API when creating an EC2 instance. Defaults to *. |
ControlPlane
(Appears on:AWSIAMConfigurationSpec)
ControlPlane controls the configuration of the AWS IAM role for the control plane of provisioned Kubernetes clusters.
Field | Description |
---|---|
AWSIAMRoleSpec AWSIAMRoleSpec |
(Members of AWSIAMRoleSpec are embedded into this type.) |
disableClusterAPIControllerPolicyAttachment bool |
DisableClusterAPIControllerPolicyAttachment, if set to true, will not attach the AWS IAM policy for Cluster API Provider AWS to the control plane role. Defaults to false. |
disableCloudProviderPolicy bool |
DisableCloudProviderPolicy if set to true, will not generate and attach the AWS IAM policy for the AWS Cloud Provider. |
enableCSIPolicy bool |
EnableCSIPolicy if set to true, will generate and attach the AWS IAM policy for the EBS CSI Driver. |
EKSConfig
(Appears on:AWSIAMConfigurationSpec)
EKSConfig represents the EKS-related configuration.
Field | Description |
---|---|
disable bool |
Disable controls whether EKS-related permissions are granted |
iamRoleCreation bool |
AllowIAMRoleCreation controls whether the EKS controllers have permissions for creating IAM roles per cluster |
enableUserEKSConsolePolicy bool |
EnableUserEKSConsolePolicy controls the creation of the policy to view EKS nodes and workloads. |
defaultControlPlaneRole AWSIAMRoleSpec |
DefaultControlPlaneRole controls the configuration of the AWS IAM role for the EKS control plane. This is the default role that will be used if no role is included in the spec and automatic creation of the role isn’t enabled |
managedMachinePool AWSIAMRoleSpec |
ManagedMachinePool controls the configuration of the AWS IAM role used by EKS managed machine pools. |
fargate AWSIAMRoleSpec |
Fargate controls the configuration of the AWS IAM role used by EKS Fargate profiles. |
kmsAliasPrefix string |
KMSAliasPrefix is the prefix used to restrict permissions to only those KMS keys whose alias name starts with it. Defaults to cluster-api-provider-aws-* |
EventBridgeConfig
(Appears on:AWSIAMConfigurationSpec)
EventBridgeConfig represents configuration for enabling experimental feature to consume EventBridge EC2 events.
Field | Description |
---|---|
enable bool |
Enable controls whether permissions are granted to consume EC2 events |
Nodes
(Appears on:AWSIAMConfigurationSpec)
Nodes controls the configuration of the AWS IAM role for worker nodes in a cluster created by Kubernetes Cluster API Provider AWS.
Field | Description |
---|---|
AWSIAMRoleSpec AWSIAMRoleSpec |
(Members of AWSIAMRoleSpec are embedded into this type.) |
disableCloudProviderPolicy bool |
DisableCloudProviderPolicy if set to true, will not generate and attach the policy for the AWS Cloud Provider. Defaults to false. |
ec2ContainerRegistryReadOnly bool |
EC2ContainerRegistryReadOnly controls whether the node has read-only access to the EC2 container registry |
S3Buckets
(Appears on:AWSIAMConfigurationSpec)
S3Buckets controls the configuration of the AWS IAM role for S3 buckets which can be created for storing bootstrap data for nodes requiring it.
Field | Description |
---|---|
enable bool |
Enable controls whether permissions are granted to manage S3 buckets. |
namePrefix string |
NamePrefix will be prepended to every AWS IAM role bucket name. Defaults to “cluster-api-provider-aws-”. AWSCluster S3 Bucket name must be prefixed with the same prefix. |
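For example, enabling bucket management in the same configuration file might look like the following sketch (the prefix shown is the documented default):

```yaml
# Illustrative fragment of an AWSIAMConfiguration spec.
s3Buckets:
  enable: true
  namePrefix: cluster-api-provider-aws-  # AWSCluster bucket names must share this prefix
```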
bootstrap.cluster.x-k8s.io/v1beta1
Resource Types:
EKSConfig
EKSConfig is the schema for the Amazon EKS Machine Bootstrap Configuration API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec EKSConfigSpec |
 |
status EKSConfigStatus |
EKSConfigSpec
(Appears on:EKSConfig, EKSConfigTemplateResource)
EKSConfigSpec defines the desired state of Amazon EKS Bootstrap Configuration.
Field | Description |
---|---|
kubeletExtraArgs map[string]string |
(Optional)
KubeletExtraArgs passes the specified kubelet args into the Amazon EKS machine bootstrap script |
containerRuntime string |
(Optional)
ContainerRuntime specifies the container runtime to use when bootstrapping EKS. |
dnsClusterIP string |
(Optional)
DNSClusterIP overrides the IP address to use for DNS queries within the cluster. |
dockerConfigJson string |
(Optional)
DockerConfigJson is used for the contents of the /etc/docker/daemon.json file. Useful if you want a custom config differing from the default one in the AMI. This is expected to be a JSON string. |
apiRetryAttempts int |
(Optional)
APIRetryAttempts is the number of retry attempts for AWS API calls. |
pauseContainer PauseContainer |
(Optional)
PauseContainer allows customization of the pause container to use. |
useMaxPods bool |
(Optional)
UseMaxPods sets --max-pods for the kubelet when true. |
serviceIPV6Cidr string |
(Optional)
ServiceIPV6Cidr is the IPv6 CIDR range of the cluster. If this is specified then the IP family will be set to IPv6. |
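A minimal illustrative EKSConfig for this API version (the name, label, and values are assumptions):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: EKSConfig
metadata:
  name: my-pool-bootstrap                      # assumed name
spec:
  kubeletExtraArgs:
    node-labels: role.example.com/worker=true  # assumed label
  apiRetryAttempts: 5
  useMaxPods: true
```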
EKSConfigStatus
(Appears on:EKSConfig)
EKSConfigStatus defines the observed state of the Amazon EKS Bootstrap Configuration.
Field | Description |
---|---|
ready bool |
Ready indicates the BootstrapData secret is ready to be consumed |
dataSecretName string |
(Optional)
DataSecretName is the name of the secret that stores the bootstrap data script. |
failureReason string |
(Optional)
FailureReason will be set on non-retryable errors |
failureMessage string |
(Optional)
FailureMessage will be set on non-retryable errors |
observedGeneration int64 |
(Optional)
ObservedGeneration is the latest generation observed by the controller. |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the EKSConfig. |
EKSConfigTemplate
EKSConfigTemplate is the Amazon EKS Bootstrap Configuration Template API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec EKSConfigTemplateSpec |
 |
EKSConfigTemplateResource
(Appears on:EKSConfigTemplateSpec)
EKSConfigTemplateResource defines the Template structure.
Field | Description |
---|---|
spec EKSConfigSpec |
|
EKSConfigTemplateSpec
(Appears on:EKSConfigTemplate)
EKSConfigTemplateSpec defines the desired state of templated EKSConfig Amazon EKS Bootstrap Configuration resources.
Field | Description |
---|---|
template EKSConfigTemplateResource |
PauseContainer
(Appears on:EKSConfigSpec)
PauseContainer contains details of pause container.
Field | Description |
---|---|
accountNumber string |
AccountNumber is the AWS account number to pull the pause container from. |
version string |
Version is the tag of the pause container to use. |
bootstrap.cluster.x-k8s.io/v1beta2
Package v1beta2 contains API Schema definitions for the Amazon EKS Bootstrap v1beta2 API group.
Resource Types:
DiskSetup
(Appears on:EKSConfigSpec)
DiskSetup defines input for generated disk_setup and fs_setup in cloud-init.
Field | Description |
---|---|
partitions []Partition |
(Optional)
Partitions specifies the list of partitions to set up. |
filesystems []Filesystem |
(Optional)
Filesystems specifies the list of file systems to set up. |
EKSConfig
EKSConfig is the schema for the Amazon EKS Machine Bootstrap Configuration API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec EKSConfigSpec |
 |
status EKSConfigStatus |
EKSConfigSpec
(Appears on:EKSConfig, EKSConfigTemplateResource)
EKSConfigSpec defines the desired state of Amazon EKS Bootstrap Configuration.
Field | Description |
---|---|
kubeletExtraArgs map[string]string |
(Optional)
KubeletExtraArgs passes the specified kubelet args into the Amazon EKS machine bootstrap script |
containerRuntime string |
(Optional)
ContainerRuntime specifies the container runtime to use when bootstrapping EKS. |
dnsClusterIP string |
(Optional)
DNSClusterIP overrides the IP address to use for DNS queries within the cluster. |
dockerConfigJson string |
(Optional)
DockerConfigJson is used for the contents of the /etc/docker/daemon.json file. Useful if you want a custom config differing from the default one in the AMI. This is expected to be a JSON string. |
apiRetryAttempts int |
(Optional)
APIRetryAttempts is the number of retry attempts for AWS API calls. |
pauseContainer PauseContainer |
(Optional)
PauseContainer allows customization of the pause container to use. |
useMaxPods bool |
(Optional)
UseMaxPods sets --max-pods for the kubelet when true. |
serviceIPV6Cidr string |
(Optional)
ServiceIPV6Cidr is the IPv6 CIDR range of the cluster. If this is specified then the IP family will be set to IPv6. |
preBootstrapCommands []string |
(Optional)
PreBootstrapCommands specifies extra commands to run before bootstrapping nodes to the cluster |
postBootstrapCommands []string |
(Optional)
PostBootstrapCommands specifies extra commands to run after bootstrapping nodes to the cluster |
boostrapCommandOverride string |
(Optional)
BootstrapCommandOverride allows you to override the bootstrap command to use for EKS nodes. |
files []File |
(Optional)
Files specifies extra files to be passed to user_data upon creation. |
diskSetup DiskSetup |
(Optional)
DiskSetup specifies options for the creation of partition tables and file systems on devices. |
mounts []MountPoints |
(Optional)
Mounts specifies a list of mount points to be set up. |
users []User |
(Optional)
Users specifies extra users to add |
ntp NTP |
(Optional)
NTP specifies NTP configuration |
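A sketch showing some of the fields added in v1beta2; the commands, server address, and names are assumptions:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: EKSConfig
metadata:
  name: workers-bootstrap           # assumed name
spec:
  preBootstrapCommands:
    - echo "before node bootstrap"  # assumed command
  postBootstrapCommands:
    - echo "after node bootstrap"   # assumed command
  ntp:
    enabled: true
    servers:
      - 169.254.169.123             # assumed server (Amazon Time Sync address)
```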
EKSConfigStatus
(Appears on:EKSConfig)
EKSConfigStatus defines the observed state of the Amazon EKS Bootstrap Configuration.
Field | Description |
---|---|
ready bool |
Ready indicates the BootstrapData secret is ready to be consumed |
dataSecretName string |
(Optional)
DataSecretName is the name of the secret that stores the bootstrap data script. |
failureReason string |
(Optional)
FailureReason will be set on non-retryable errors |
failureMessage string |
(Optional)
FailureMessage will be set on non-retryable errors |
observedGeneration int64 |
(Optional)
ObservedGeneration is the latest generation observed by the controller. |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the EKSConfig. |
EKSConfigTemplate
EKSConfigTemplate is the Amazon EKS Bootstrap Configuration Template API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec EKSConfigTemplateSpec |
 |
EKSConfigTemplateResource
(Appears on:EKSConfigTemplateSpec)
EKSConfigTemplateResource defines the Template structure.
Field | Description |
---|---|
spec EKSConfigSpec |
|
EKSConfigTemplateSpec
(Appears on:EKSConfigTemplate)
EKSConfigTemplateSpec defines the desired state of templated EKSConfig Amazon EKS Bootstrap Configuration resources.
Field | Description |
---|---|
template EKSConfigTemplateResource |
Encoding
(string
alias)
(Appears on:File)
Encoding specifies the cloud-init file encoding.
Value | Description |
---|---|
"base64" |
Base64 implies the contents of the file are encoded as base64. |
"gzip" |
Gzip implies the contents of the file are encoded with gzip. |
"gzip+base64" |
GzipBase64 implies the contents of the file are first base64 encoded and then gzip encoded. |
File
(Appears on:EKSConfigSpec)
File defines the input for generating write_files in cloud-init.
Field | Description |
---|---|
path string |
Path specifies the full path on disk where to store the file. |
owner string |
(Optional)
Owner specifies the ownership of the file, e.g. “root:root”. |
permissions string |
(Optional)
Permissions specifies the permissions to assign to the file, e.g. “0640”. |
encoding Encoding |
(Optional)
Encoding specifies the encoding of the file contents. |
append bool |
(Optional)
Append specifies whether to append Content to existing file if Path exists. |
content string |
(Optional)
Content is the actual content of the file. |
contentFrom FileSource |
(Optional)
ContentFrom is a referenced source of content to populate the file. |
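An illustrative files fragment for an EKSConfig spec, pairing an inline file with one populated from a Secret; all paths, names, and keys are assumptions:

```yaml
files:
  - path: /etc/sysctl.d/99-custom.conf        # assumed path
    owner: root:root
    permissions: "0644"
    content: |
      vm.max_map_count=262144
  - path: /etc/containerd/registry-auth.json  # assumed path
    contentFrom:
      secret:
        name: registry-auth                   # assumed Secret name
        key: auth.json                        # assumed key
```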
FileSource
(Appears on:File)
FileSource is a union of all possible external source types for file data. Only one field may be populated in any given instance. Developers adding new sources of data for target systems should add them here.
Field | Description |
---|---|
secret SecretFileSource |
Secret represents a secret that should populate this file. |
Filesystem
(Appears on:DiskSetup)
Filesystem defines the file systems to be created.
Field | Description |
---|---|
device string |
Device specifies the device name |
filesystem string |
Filesystem specifies the file system type. |
label string |
Label specifies the file system label to be used. If set to None, no label is used. |
partition string |
(Optional)
Partition specifies the partition to use. The valid options are: “auto|any”, “auto”, “any”, “none”, and “<NUM>”, where NUM is the actual partition number. |
overwrite bool |
(Optional)
Overwrite defines whether or not to overwrite any existing filesystem. If true, any pre-existing file system will be destroyed. Use with caution. |
extraOpts []string |
(Optional)
ExtraOpts defines extra options to add to the command for creating the file system. |
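A sketch combining diskSetup with a corresponding mounts entry (mounts is a list of MountPoints, documented below, each itself a list of strings); device names, labels, and mount points are assumptions:

```yaml
diskSetup:
  partitions:
    - device: /dev/xvdb     # assumed device
      layout: true          # single partition spanning the device
      overwrite: false
      tableType: gpt
  filesystems:
    - device: /dev/xvdb1    # assumed partition device
      filesystem: ext4
      label: data_disk      # assumed label
      partition: auto
mounts:
  - - LABEL=data_disk
    - /var/lib/data         # assumed mount point
```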
MountPoints
([]string
alias)
(Appears on:EKSConfigSpec)
MountPoints defines input for generated mounts in cloud-init.
NTP
(Appears on:EKSConfigSpec)
NTP defines input for generated ntp in cloud-init.
Field | Description |
---|---|
servers []string |
(Optional)
Servers specifies which NTP servers to use |
enabled bool |
(Optional)
Enabled specifies whether NTP should be enabled |
Partition
(Appears on:DiskSetup)
Partition defines how to create and layout a partition.
Field | Description |
---|---|
device string |
Device is the name of the device. |
layout bool |
Layout specifies the device layout. If true, a single partition will be created for the entire device. If false, the device is not partitioned and any existing partitioning is left untouched. |
overwrite bool |
(Optional)
Overwrite describes whether to skip checks and create the partition if a partition or filesystem is found on the device. Use with caution. Default is ‘false’. |
tableType string |
(Optional)
TableType specifies the type of partition table. The following are supported: ‘mbr’ (the default), which sets up an MS-DOS partition table, and ‘gpt’, which sets up a GPT partition table. |
PasswdSource
(Appears on:User)
PasswdSource is a union of all possible external source types for passwd data. Only one field may be populated in any given instance. Developers adding new sources of data for target systems should add them here.
Field | Description |
---|---|
secret SecretPasswdSource |
Secret represents a secret that should populate this password. |
PauseContainer
(Appears on:EKSConfigSpec)
PauseContainer contains details of pause container.
Field | Description |
---|---|
accountNumber string |
AccountNumber is the AWS account number to pull the pause container from. |
version string |
Version is the tag of the pause container to use. |
SecretFileSource
(Appears on:FileSource)
SecretFileSource adapts a Secret into a FileSource.
The contents of the target Secret’s Data field will be presented as files using the keys in the Data field as the file names.
Field | Description |
---|---|
name string |
Name of the secret in the KubeadmBootstrapConfig’s namespace to use. |
key string |
Key is the key in the secret’s data map for this value. |
SecretPasswdSource
(Appears on:PasswdSource)
SecretPasswdSource adapts a Secret into a PasswdSource.
The contents of the target Secret’s Data field will be presented as passwd using the keys in the Data field as the file names.
Field | Description |
---|---|
name string |
Name of the secret in the KubeadmBootstrapConfig’s namespace to use. |
key string |
Key is the key in the secret’s data map for this value. |
User
(Appears on:EKSConfigSpec)
User defines the input for a generated user in cloud-init.
Field | Description |
---|---|
name string |
Name specifies the username |
gecos string |
(Optional)
Gecos specifies the gecos to use for the user |
groups string |
(Optional)
Groups specifies the additional groups for the user |
homeDir string |
(Optional)
HomeDir specifies the home directory to use for the user |
inactive bool |
(Optional)
Inactive specifies whether to mark the user as inactive |
shell string |
(Optional)
Shell specifies the user’s shell |
passwd string |
(Optional)
Passwd specifies a hashed password for the user |
passwdFrom PasswdSource |
(Optional)
PasswdFrom is a referenced source of passwd to populate the passwd. |
primaryGroup string |
(Optional)
PrimaryGroup specifies the primary group for the user |
lockPassword bool |
(Optional)
LockPassword specifies if password login should be disabled |
sudo string |
(Optional)
Sudo specifies a sudo role for the user |
sshAuthorizedKeys []string |
(Optional)
SSHAuthorizedKeys specifies a list of ssh authorized keys for the user |
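An illustrative users fragment; the user name, group, sudo rule, and key are assumptions:

```yaml
users:
  - name: ops                                    # assumed user name
    groups: wheel                                # assumed group
    sudo: ALL=(ALL) NOPASSWD:ALL                 # assumed sudo rule
    lockPassword: true
    sshAuthorizedKeys:
      - ssh-ed25519 AAAAC3Nza... ops@example.com # assumed key
```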
controlplane.cluster.x-k8s.io/v1beta1
Package v1beta1 contains API Schema definitions for the controlplane v1beta1 API group
Resource Types:
AWSManagedControlPlane
AWSManagedControlPlane is the schema for the Amazon EKS Managed Control Plane API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSManagedControlPlaneSpec |
 |
status AWSManagedControlPlaneStatus |
AWSManagedControlPlaneSpec
(Appears on:AWSManagedControlPlane)
AWSManagedControlPlaneSpec defines the desired state of an Amazon EKS Cluster.
Field | Description |
---|---|
eksClusterName string |
(Optional)
EKSClusterName allows you to specify the name of the EKS cluster in AWS. If you don’t specify a name then a default name will be created based on the namespace and name of the managed control plane. |
identityRef AWSIdentityReference |
IdentityRef is a reference to an identity to be used when reconciling the managed control plane. If no identity is specified, the default identity for this controller will be used. |
network NetworkSpec |
NetworkSpec encapsulates all things related to AWS network. |
secondaryCidrBlock string |
(Optional)
SecondaryCidrBlock is the additional CIDR range to use for pod IPs. Must be within the 100.64.0.0/10 or 198.19.0.0/16 range. |
region string |
The AWS Region the cluster lives in. |
sshKeyName string |
(Optional)
SSHKeyName is the name of the ssh key to attach to the bastion host. Valid values are empty string (do not use SSH keys), a valid SSH key name, or omitted (use the default SSH key name) |
version string |
(Optional)
Version defines the desired Kubernetes version. If no version number is supplied then the latest version of Kubernetes that EKS supports will be used. |
roleName string |
(Optional)
RoleName specifies the name of the IAM role that gives EKS permission to make API calls. If the role is pre-existing we will treat it as unmanaged and not delete it on deletion. If the EKSEnableIAM feature flag is true and no name is supplied then a role is created. |
roleAdditionalPolicies []string |
(Optional)
RoleAdditionalPolicies allows you to attach additional policies to the control plane role. You must enable the EKSAllowAddRoles feature flag to incorporate these into the created role. |
logging ControlPlaneLoggingSpec |
(Optional)
Logging specifies which EKS Cluster logs should be enabled. Entries for each of the enabled logs will be sent to CloudWatch |
encryptionConfig EncryptionConfig |
(Optional)
EncryptionConfig specifies the encryption configuration for the cluster |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to AWS resources managed by the AWS provider, in addition to the ones added by default. |
iamAuthenticatorConfig IAMAuthenticatorConfig |
(Optional)
IAMAuthenticatorConfig allows the specification of any additional user or role mappings for use when generating the aws-iam-authenticator configuration. If this is nil the default configuration is still generated for the cluster. |
endpointAccess EndpointAccess |
(Optional)
Endpoints specifies access to this cluster’s control plane endpoints |
controlPlaneEndpoint Cluster API api/v1beta1.APIEndpoint |
(Optional)
ControlPlaneEndpoint represents the endpoint used to communicate with the control plane. |
imageLookupFormat string |
(Optional)
ImageLookupFormat is the AMI naming format to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupOrg. Supports substitutions for {{.BaseOS}} and {{.K8sVersion}} with the base OS and kubernetes version, respectively. The BaseOS will be the value in ImageLookupBaseOS or ubuntu (the default), and the kubernetes version as defined by the packages produced by kubernetes/release without v as a prefix: 1.13.0, 1.12.5-mybuild.1, or 1.17.3. For example, the default image format of capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-* will end up searching for AMIs that match the pattern capa-ami-ubuntu-?1.18.0-* for a Machine that is targeting kubernetes v1.18.0 and the ubuntu base OS. See also: https://golang.org/pkg/text/template/ |
imageLookupOrg string |
(Optional)
ImageLookupOrg is the AWS Organization ID to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupOrg. |
imageLookupBaseOS string |
ImageLookupBaseOS is the name of the base operating system used to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupBaseOS. |
bastion Bastion |
(Optional)
Bastion contains options to configure the bastion host. |
tokenMethod EKSTokenMethod |
TokenMethod is used to specify the method for obtaining a client token for communicating with EKS: iam-authenticator obtains a client token using iam-authenticator; aws-cli obtains a client token using the AWS CLI. Defaults to iam-authenticator |
associateOIDCProvider bool |
AssociateOIDCProvider can be enabled to automatically create an identity provider for the controller for use with IAM roles for service accounts |
addons []sigs.k8s.io/cluster-api-provider-aws/v2/controlplane/eks/api/v1beta1.Addon |
(Optional)
Addons defines the EKS addons to enable with the EKS cluster. |
oidcIdentityProviderConfig OIDCIdentityProviderConfig |
(Optional)
IdentityProviderconfig is used to specify the oidc provider config to be attached with this eks cluster |
disableVPCCNI bool |
DisableVPCCNI indicates that the Amazon VPC CNI should be disabled. With EKS clusters the Amazon VPC CNI is automatically installed into the cluster. For clusters where you want to use an alternate CNI this option provides a way to specify that the Amazon VPC CNI should be deleted. You cannot set this to true if you are using the Amazon VPC CNI addon. |
vpcCni VpcCni |
(Optional)
VpcCni is used to set configuration options for the VPC CNI plugin |
kubeProxy KubeProxy |
KubeProxy defines managed attributes of the kube-proxy daemonset |
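A minimal illustrative AWSManagedControlPlane for this API version; the name, region, and version are assumptions:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
metadata:
  name: my-cluster-control-plane  # assumed name
spec:
  region: eu-west-2               # assumed region
  version: v1.24                  # assumed Kubernetes version
  endpointAccess:
    public: true
    private: true
  logging:
    apiServer: true
    audit: true
    authenticator: false
    controllerManager: false
    scheduler: false
```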
AWSManagedControlPlaneStatus
(Appears on:AWSManagedControlPlane)
AWSManagedControlPlaneStatus defines the observed state of an Amazon EKS Cluster.
Field | Description |
---|---|
networkStatus NetworkStatus |
(Optional)
Networks holds details about the AWS networking resources used by the control plane |
failureDomains Cluster API api/v1beta1.FailureDomains |
(Optional)
FailureDomains specifies a list of available availability zones that can be used |
bastion Instance |
(Optional)
Bastion holds details of the instance that is used as a bastion jump box |
oidcProvider OIDCProviderStatus |
(Optional)
OIDCProvider holds the status of the identity provider for this cluster |
externalManagedControlPlane bool |
ExternalManagedControlPlane indicates to cluster-api that the control plane is managed by an external service such as AKS, EKS, GKE, etc. |
initialized bool |
(Optional)
Initialized denotes whether or not the control plane has the uploaded kubernetes config-map. |
ready bool |
Ready denotes that the AWSManagedControlPlane API Server is ready to receive requests and that the VPC infra is ready. |
failureMessage string |
(Optional)
ErrorMessage indicates that there is a terminal problem reconciling the state, and will be set to a descriptive error message. |
conditions Cluster API api/v1beta1.Conditions |
Conditions specifies the conditions for the managed control plane |
addons []AddonState |
(Optional)
Addons holds the current status of the EKS addons |
identityProviderStatus IdentityProviderStatus |
(Optional)
IdentityProviderStatus holds the status for associated identity provider |
Addon
Addon represents an EKS addon.
Field | Description |
---|---|
name string |
Name is the name of the addon |
version string |
Version is the version of the addon to use |
configuration string |
(Optional)
Configuration of the EKS addon |
conflictResolution AddonResolution |
ConflictResolution is used to declare what should happen if there are parameter conflicts. Defaults to none |
serviceAccountRoleARN string |
(Optional)
ServiceAccountRoleArn is the ARN of an IAM role to bind to the addons service account |
AddonIssue
(Appears on:AddonState)
AddonIssue represents an issue with an addon.
Field | Description |
---|---|
code string |
Code is the issue code |
message string |
Message is the textual description of the issue |
resourceIds []string |
ResourceIDs is a list of resource IDs for the issue |
AddonResolution
(string
alias)
(Appears on:Addon)
AddonResolution defines the method for resolving parameter conflicts.
AddonState
(Appears on:AWSManagedControlPlaneStatus)
AddonState represents the state of an addon.
Field | Description |
---|---|
name string |
Name is the name of the addon |
version string |
Version is the version of the addon to use |
arn string |
ARN is the AWS ARN of the addon |
serviceAccountRoleARN string |
ServiceAccountRoleArn is the ARN of the IAM role used for the service account |
createdAt Kubernetes meta/v1.Time |
CreatedAt is the date and time the addon was created |
modifiedAt Kubernetes meta/v1.Time |
ModifiedAt is the date and time the addon was last modified |
status string |
Status is the status of the addon |
issues []AddonIssue |
Issues is a list of issues associated with the addon |
AddonStatus
(string
alias)
AddonStatus defines the status for an addon.
ControlPlaneLoggingSpec
(Appears on:AWSManagedControlPlaneSpec)
ControlPlaneLoggingSpec defines which EKS control plane logs should be enabled.
Field | Description |
---|---|
apiServer bool |
APIServer indicates if the Kubernetes API Server log (kube-apiserver) should be enabled |
audit bool |
Audit indicates if the Kubernetes API audit log should be enabled |
authenticator bool |
Authenticator indicates if the iam authenticator log should be enabled |
controllerManager bool |
ControllerManager indicates if the controller manager (kube-controller-manager) log should be enabled |
scheduler bool |
Scheduler indicates if the Kubernetes scheduler (kube-scheduler) log should be enabled |
EKSTokenMethod
(string
alias)
(Appears on:AWSManagedControlPlaneSpec)
EKSTokenMethod defines the method for obtaining a client token to use when connecting to EKS.
EncryptionConfig
(Appears on:AWSManagedControlPlaneSpec)
EncryptionConfig specifies the encryption configuration for the EKS cluster.
Field | Description |
---|---|
provider string |
Provider specifies the ARN or alias of the CMK (in AWS KMS) |
resources []*string |
Resources specifies the resources to be encrypted |
EndpointAccess
(Appears on:AWSManagedControlPlaneSpec)
EndpointAccess specifies how control plane endpoints are accessible.
Field | Description |
---|---|
public bool |
(Optional)
Public controls whether control plane endpoints are publicly accessible |
publicCIDRs []*string |
(Optional)
PublicCIDRs specifies which blocks can access the public endpoint |
private bool |
(Optional)
Private points VPC-internal control plane access to the private endpoint |
IAMAuthenticatorConfig
(Appears on:AWSManagedControlPlaneSpec)
IAMAuthenticatorConfig represents an aws-iam-authenticator configuration.
Field | Description |
---|---|
mapRoles []RoleMapping |
(Optional)
RoleMappings is a list of role mappings |
mapUsers []UserMapping |
(Optional)
UserMappings is a list of user mappings |
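An illustrative iamAuthenticatorConfig fragment; because KubernetesMapping is inlined, username and groups sit directly on each mapping (the ARNs and subjects are assumptions):

```yaml
iamAuthenticatorConfig:
  mapRoles:
    - rolearn: arn:aws:iam::123456789012:role/KubernetesAdmin  # assumed role ARN
      username: kubernetes-admin
      groups:
        - system:masters
  mapUsers:
    - userarn: arn:aws:iam::123456789012:user/alice            # assumed user ARN
      username: alice
      groups:
        - system:masters
```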
IdentityProviderStatus
(Appears on:AWSManagedControlPlaneStatus)
IdentityProviderStatus holds the status for associated identity provider
Field | Description |
---|---|
arn string |
ARN holds the ARN of associated identity provider |
status string |
Status holds current status of associated identity provider |
KubeProxy
(Appears on:AWSManagedControlPlaneSpec)
KubeProxy specifies how the kube-proxy daemonset is managed.
Field | Description |
---|---|
disable bool |
Disable set to true indicates that kube-proxy should be disabled. With EKS clusters kube-proxy is automatically installed into the cluster. For clusters where you want to use kube-proxy functionality that is provided with an alternate CNI, this option provides a way to specify that the kube-proxy daemonset should be deleted. You cannot set this to true if you are using the Amazon kube-proxy addon. |
KubernetesMapping
(Appears on:RoleMapping, UserMapping)
KubernetesMapping represents the kubernetes RBAC mapping.
Field | Description |
---|---|
username string |
UserName is a kubernetes RBAC user subject |
groups []string |
Groups is a list of kubernetes RBAC groups |
OIDCIdentityProviderConfig
(Appears on:AWSManagedControlPlaneSpec)
OIDCIdentityProviderConfig defines the configuration for an OIDC identity provider.
Field | Description |
---|---|
clientId string |
This is also known as audience. The ID for the client application that makes authentication requests to the OpenID identity provider. |
groupsClaim string |
(Optional)
The JWT claim that the provider uses to return your groups. |
groupsPrefix string |
(Optional)
The prefix that is prepended to group claims to prevent clashes with existing names (such as system: groups). For example, the value oidc: will create group names like oidc:engineering and oidc:infra. |
identityProviderConfigName string |
The name of the OIDC provider configuration. IdentityProviderConfigName is a required field |
issuerUrl string |
The URL of the OpenID identity provider that allows the API server to discover public signing keys for verifying tokens. The URL must begin with https:// and should correspond to the iss claim in the provider’s OIDC ID tokens. Per the OIDC standard, path components are allowed but query parameters are not. Typically the URL consists of only a hostname, like https://server.example.org or https://example.com. This URL should point to the level below .well-known/openid-configuration and must be publicly accessible over the internet. |
requiredClaims map[string]string |
(Optional)
The key value pairs that describe required claims in the identity token. If set, each claim is verified to be present in the token with a matching value. For the maximum number of claims that you can require, see Amazon EKS service quotas (https://docs.aws.amazon.com/eks/latest/userguide/service-quotas.html) in the Amazon EKS User Guide. |
usernameClaim string |
(Optional)
The JSON Web Token (JWT) claim to use as the username. The default is sub, which is expected to be a unique identifier of the end user. You can choose other claims, such as email or name, depending on the OpenID identity provider. Claims other than email are prefixed with the issuer URL to prevent naming clashes with other plug-ins. |
usernamePrefix string |
(Optional)
The prefix that is prepended to username claims to prevent clashes with existing names. If you do not provide this field, and username is a value other than email, the prefix defaults to issuerurl#. You can use the value - to disable all prefixing. |
tags Tags |
(Optional)
Tags to apply to the OIDC identity provider association |
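An illustrative oidcIdentityProviderConfig fragment; the provider name and client ID are assumptions, and the issuer reuses the example hostname from the field description:

```yaml
oidcIdentityProviderConfig:
  identityProviderConfigName: my-oidc-provider  # assumed name
  issuerUrl: https://server.example.org         # example issuer from the description above
  clientId: my-client-id                        # assumed audience
  usernameClaim: email
  groupsClaim: groups
  groupsPrefix: "oidc:"
```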
OIDCProviderStatus
(Appears on:AWSManagedControlPlaneStatus)
OIDCProviderStatus holds the status of the AWS OIDC identity provider.
Field | Description |
---|---|
arn string |
ARN holds the ARN of the provider |
trustPolicy string |
TrustPolicy contains the boilerplate IAM trust policy to use for IRSA |
RoleMapping
(Appears on:IAMAuthenticatorConfig)
RoleMapping represents a mapping from an IAM role to Kubernetes users and groups.
Field | Description |
---|---|
rolearn string |
RoleARN is the AWS ARN for the role to map |
KubernetesMapping KubernetesMapping |
(Members of KubernetesMapping are embedded into this type.) KubernetesMapping holds the RBAC details for the mapping |
UserMapping
(Appears on:IAMAuthenticatorConfig)
UserMapping represents a mapping from an IAM user to Kubernetes users and groups.
Field | Description |
---|---|
userarn string |
UserARN is the AWS ARN for the user to map |
KubernetesMapping KubernetesMapping |
(Members of KubernetesMapping are embedded into this type.) KubernetesMapping holds the RBAC details for the mapping |
VpcCni
(Appears on:AWSManagedControlPlaneSpec)
VpcCni specifies configuration related to the VPC CNI.
Field | Description |
---|---|
env []Kubernetes core/v1.EnvVar |
(Optional)
Env defines a list of environment variables to apply to the aws-node DaemonSet. |
controlplane.cluster.x-k8s.io/v1beta2
Package v1beta2 contains API Schema definitions for the controlplane v1beta2 API group
Resource Types:
AWSManagedControlPlane
AWSManagedControlPlane is the schema for the Amazon EKS Managed Control Plane API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSManagedControlPlaneSpec |
 |
status AWSManagedControlPlaneStatus |
AWSManagedControlPlaneSpec
(Appears on:AWSManagedControlPlane)
AWSManagedControlPlaneSpec defines the desired state of an Amazon EKS Cluster.
Field | Description |
---|---|
eksClusterName string |
(Optional)
EKSClusterName allows you to specify the name of the EKS cluster in AWS. If you don’t specify a name then a default name will be created based on the namespace and name of the managed control plane. |
identityRef AWSIdentityReference |
IdentityRef is a reference to an identity to be used when reconciling the managed control plane. If no identity is specified, the default identity for this controller will be used. |
network NetworkSpec |
NetworkSpec encapsulates all things related to AWS network. |
secondaryCidrBlock string |
(Optional)
SecondaryCidrBlock is the additional CIDR range to use for pod IPs. Must be within the 100.64.0.0/10 or 198.19.0.0/16 range. |
region string |
The AWS Region the cluster lives in. |
partition string |
(Optional)
Partition is the AWS security partition being used. Defaults to “aws” |
sshKeyName string |
(Optional)
SSHKeyName is the name of the ssh key to attach to the bastion host. Valid values are empty string (do not use SSH keys), a valid SSH key name, or omitted (use the default SSH key name) |
version string |
(Optional)
Version defines the desired Kubernetes version. If no version number is supplied then the latest version of Kubernetes that EKS supports will be used. |
roleName string |
(Optional)
RoleName specifies the name of the IAM role that gives EKS permission to make API calls. If the role is pre-existing we will treat it as unmanaged and not delete it on deletion. If the EKSEnableIAM feature flag is true and no name is supplied then a role is created. |
roleAdditionalPolicies []string |
(Optional)
RoleAdditionalPolicies allows you to attach additional policies to the control plane role. You must enable the EKSAllowAddRoles feature flag to incorporate these into the created role. |
logging ControlPlaneLoggingSpec |
(Optional)
Logging specifies which EKS Cluster logs should be enabled. Entries for each of the enabled logs will be sent to CloudWatch |
encryptionConfig EncryptionConfig |
(Optional)
EncryptionConfig specifies the encryption configuration for the cluster |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to AWS resources managed by the AWS provider, in addition to the ones added by default. |
iamAuthenticatorConfig IAMAuthenticatorConfig |
(Optional)
IAMAuthenticatorConfig allows the specification of any additional user or role mappings for use when generating the aws-iam-authenticator configuration. If this is nil the default configuration is still generated for the cluster. |
endpointAccess EndpointAccess |
(Optional)
Endpoints specifies access to this cluster’s control plane endpoints |
controlPlaneEndpoint Cluster API api/v1beta1.APIEndpoint |
(Optional)
ControlPlaneEndpoint represents the endpoint used to communicate with the control plane. |
imageLookupFormat string |
(Optional)
ImageLookupFormat is the AMI naming format to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupOrg. Supports substitutions for {{.BaseOS}} and {{.K8sVersion}} with the base OS and kubernetes version, respectively. The BaseOS will be the value in ImageLookupBaseOS or ubuntu (the default), and the kubernetes version as defined by the packages produced by kubernetes/release without v as a prefix: 1.13.0, 1.12.5-mybuild.1, or 1.17.3. For example, the default image format of capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-* will end up searching for AMIs that match the pattern capa-ami-ubuntu-?1.18.0-* for a Machine that is targeting kubernetes v1.18.0 and the ubuntu base OS. See also: https://golang.org/pkg/text/template/ |
imageLookupOrg string |
(Optional)
ImageLookupOrg is the AWS Organization ID to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupOrg. |
imageLookupBaseOS string |
ImageLookupBaseOS is the name of the base operating system used to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupBaseOS. |
bastion Bastion |
(Optional)
Bastion contains options to configure the bastion host. |
tokenMethod EKSTokenMethod |
TokenMethod is used to specify the method for obtaining a client token for communicating with EKS: iam-authenticator obtains a client token using iam-authenticator; aws-cli obtains a client token using the AWS CLI. Defaults to iam-authenticator |
associateOIDCProvider bool |
AssociateOIDCProvider can be enabled to automatically create an identity provider for the controller for use with IAM roles for service accounts |
addons []sigs.k8s.io/cluster-api-provider-aws/v2/controlplane/eks/api/v1beta2.Addon |
(Optional)
Addons defines the EKS addons to enable with the EKS cluster. |
oidcIdentityProviderConfig OIDCIdentityProviderConfig |
(Optional)
IdentityProviderconfig is used to specify the oidc provider config to be attached with this eks cluster |
vpcCni VpcCni |
(Optional)
VpcCni is used to set configuration options for the VPC CNI plugin |
restrictPrivateSubnets bool |
RestrictPrivateSubnets indicates that the EKS control plane should only use private subnets. |
kubeProxy KubeProxy |
KubeProxy defines managed attributes of the kube-proxy daemonset |
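A sketch highlighting fields introduced or changed in v1beta2, notably partition, restrictPrivateSubnets, and the vpcCni.disable flag that replaces v1beta1’s disableVPCCNI (the name and region are assumptions):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: my-cluster-control-plane  # assumed name
spec:
  region: us-west-2               # assumed region
  partition: aws                  # documented default
  restrictPrivateSubnets: true
  vpcCni:
    disable: false
  kubeProxy:
    disable: false
```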
AWSManagedControlPlaneStatus
(Appears on:AWSManagedControlPlane)
AWSManagedControlPlaneStatus defines the observed state of an Amazon EKS Cluster.
Field | Description |
---|---|
networkStatus NetworkStatus |
(Optional)
Networks holds details about the AWS networking resources used by the control plane |
failureDomains Cluster API api/v1beta1.FailureDomains |
(Optional)
FailureDomains specifies a list of available availability zones that can be used |
bastion Instance |
(Optional)
Bastion holds details of the instance that is used as a bastion jump box |
oidcProvider OIDCProviderStatus |
(Optional)
OIDCProvider holds the status of the identity provider for this cluster |
externalManagedControlPlane bool |
ExternalManagedControlPlane indicates to cluster-api that the control plane is managed by an external service such as AKS, EKS, GKE, etc. |
initialized bool |
(Optional)
Initialized denotes whether or not the control plane has the uploaded kubernetes config-map. |
ready bool |
Ready denotes that the AWSManagedControlPlane API Server is ready to receive requests and that the VPC infra is ready. |
failureMessage string |
(Optional)
ErrorMessage indicates that there is a terminal problem reconciling the state, and will be set to a descriptive error message. |
conditions Cluster API api/v1beta1.Conditions |
Conditions specifies the conditions for the managed control plane |
addons []AddonState |
(Optional)
Addons holds the current status of the EKS addons |
identityProviderStatus IdentityProviderStatus |
(Optional)
IdentityProviderStatus holds the status for associated identity provider |
Addon
Addon represents an EKS addon.
Field | Description |
---|---|
name string |
Name is the name of the addon |
version string |
Version is the version of the addon to use |
configuration string |
(Optional)
Configuration of the EKS addon |
conflictResolution AddonResolution |
ConflictResolution is used to declare what should happen if there are parameter conflicts. Defaults to none |
serviceAccountRoleARN string |
(Optional)
ServiceAccountRoleArn is the ARN of an IAM role to bind to the addons service account |
AddonIssue
(Appears on:AddonState)
AddonIssue represents an issue with an addon.
Field | Description |
---|---|
code string |
Code is the issue code |
message string |
Message is the textual description of the issue |
resourceIds []string |
ResourceIDs is a list of resource IDs for the issue |
AddonResolution
(string
alias)
(Appears on:Addon)
AddonResolution defines the method for resolving parameter conflicts.
AddonState
(Appears on:AWSManagedControlPlaneStatus)
AddonState represents the state of an addon.
Field | Description |
---|---|
name string |
Name is the name of the addon |
version string |
Version is the version of the addon to use |
arn string |
ARN is the AWS ARN of the addon |
serviceAccountRoleARN string |
ServiceAccountRoleArn is the ARN of the IAM role used for the service account |
createdAt Kubernetes meta/v1.Time |
CreatedAt is the date and time the addon was created |
modifiedAt Kubernetes meta/v1.Time |
ModifiedAt is the date and time the addon was last modified |
status string |
Status is the status of the addon |
issues []AddonIssue |
Issues is a list of issues associated with the addon |
AddonStatus
(string
alias)
AddonStatus defines the status for an addon.
ControlPlaneLoggingSpec
(Appears on:AWSManagedControlPlaneSpec)
ControlPlaneLoggingSpec defines which EKS control plane logs should be enabled.
Field | Description |
---|---|
apiServer bool |
APIServer indicates if the Kubernetes API Server log (kube-apiserver) should be enabled |
audit bool |
Audit indicates if the Kubernetes API audit log should be enabled |
authenticator bool |
Authenticator indicates if the iam authenticator log should be enabled |
controllerManager bool |
ControllerManager indicates if the controller manager (kube-controller-manager) log should be enabled |
scheduler bool |
Scheduler indicates if the Kubernetes scheduler (kube-scheduler) log should be enabled |
EKSTokenMethod
(string
alias)
(Appears on:AWSManagedControlPlaneSpec)
EKSTokenMethod defines the method for obtaining a client token to use when connecting to EKS.
EncryptionConfig
(Appears on:AWSManagedControlPlaneSpec)
EncryptionConfig specifies the encryption configuration for the EKS cluster.
Field | Description |
---|---|
provider string |
Provider specifies the ARN or alias of the CMK (in AWS KMS) |
resources []*string |
Resources specifies the resources to be encrypted |
EndpointAccess
(Appears on:AWSManagedControlPlaneSpec)
EndpointAccess specifies how control plane endpoints are accessible.
Field | Description |
---|---|
public bool |
(Optional)
Public controls whether control plane endpoints are publicly accessible |
publicCIDRs []*string |
(Optional)
PublicCIDRs specifies which blocks can access the public endpoint |
private bool |
(Optional)
Private points VPC-internal control plane access to the private endpoint |
IAMAuthenticatorConfig
(Appears on:AWSManagedControlPlaneSpec)
IAMAuthenticatorConfig represents an aws-iam-authenticator configuration.
Field | Description |
---|---|
mapRoles []RoleMapping |
(Optional)
RoleMappings is a list of role mappings |
mapUsers []UserMapping |
(Optional)
UserMappings is a list of user mappings |
IdentityProviderStatus
(Appears on:AWSManagedControlPlaneStatus)
IdentityProviderStatus holds the status for associated identity provider.
Field | Description |
---|---|
arn string |
ARN holds the ARN of associated identity provider |
status string |
Status holds current status of associated identity provider |
KubeProxy
(Appears on:AWSManagedControlPlaneSpec)
KubeProxy specifies how the kube-proxy daemonset is managed.
Field | Description |
---|---|
disable bool |
Disable set to true indicates that kube-proxy should be disabled. With EKS clusters kube-proxy is automatically installed into the cluster. For clusters where you want to use kube-proxy functionality that is provided with an alternate CNI, this option provides a way to specify that the kube-proxy daemonset should be deleted. You cannot set this to true if you are using the Amazon kube-proxy addon. |
KubernetesMapping
(Appears on:RoleMapping, UserMapping)
KubernetesMapping represents the kubernetes RBAC mapping.
Field | Description |
---|---|
username string |
UserName is a kubernetes RBAC user subject |
groups []string |
Groups is a list of kubernetes RBAC groups |
OIDCIdentityProviderConfig
(Appears on:AWSManagedControlPlaneSpec)
OIDCIdentityProviderConfig represents the configuration for an OIDC identity provider.
Field | Description |
---|---|
clientId string |
This is also known as audience. The ID for the client application that makes authentication requests to the OpenID identity provider. |
groupsClaim string |
(Optional)
The JWT claim that the provider uses to return your groups. |
groupsPrefix string |
(Optional)
The prefix that is prepended to group claims to prevent clashes with existing names (such as system: groups). For example, the value oidc: will create group names like oidc:engineering and oidc:infra. |
identityProviderConfigName string |
The name of the OIDC provider configuration. IdentityProviderConfigName is a required field |
issuerUrl string |
The URL of the OpenID identity provider that allows the API server to discover public signing keys for verifying tokens. The URL must begin with https:// and should correspond to the iss claim in the provider’s OIDC ID tokens. Per the OIDC standard, path components are allowed but query parameters are not. Typically the URL consists of only a hostname, like https://server.example.org or https://example.com. This URL should point to the level below .well-known/openid-configuration and must be publicly accessible over the internet. |
requiredClaims map[string]string |
(Optional)
The key value pairs that describe required claims in the identity token. If set, each claim is verified to be present in the token with a matching value. For the maximum number of claims that you can require, see Amazon EKS service quotas (https://docs.aws.amazon.com/eks/latest/userguide/service-quotas.html) in the Amazon EKS User Guide. |
usernameClaim string |
(Optional)
The JSON Web Token (JWT) claim to use as the username. The default is sub, which is expected to be a unique identifier of the end user. You can choose other claims, such as email or name, depending on the OpenID identity provider. Claims other than email are prefixed with the issuer URL to prevent naming clashes with other plug-ins. |
usernamePrefix string |
(Optional)
The prefix that is prepended to username claims to prevent clashes with existing names. If you do not provide this field, and username is a value other than email, the prefix defaults to issuerurl#. You can use the value - to disable all prefixing. |
tags Tags |
(Optional)
Tags to apply to the OIDC identity provider association |
OIDCProviderStatus
(Appears on:AWSManagedControlPlaneStatus)
OIDCProviderStatus holds the status of the AWS OIDC identity provider.
Field | Description |
---|---|
arn string |
ARN holds the ARN of the provider |
trustPolicy string |
TrustPolicy contains the boilerplate IAM trust policy to use for IRSA |
RoleMapping
(Appears on:IAMAuthenticatorConfig)
RoleMapping represents a mapping from an IAM role to Kubernetes users and groups.
Field | Description |
---|---|
rolearn string |
RoleARN is the AWS ARN for the role to map |
KubernetesMapping KubernetesMapping |
(Members of KubernetesMapping are embedded into this type.) KubernetesMapping holds the RBAC details for the mapping |
UserMapping
(Appears on:IAMAuthenticatorConfig)
UserMapping represents a mapping from an IAM user to Kubernetes users and groups.
Field | Description |
---|---|
userarn string |
UserARN is the AWS ARN for the user to map |
KubernetesMapping KubernetesMapping |
(Members of KubernetesMapping are embedded into this type.) KubernetesMapping holds the RBAC details for the mapping. |
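Because KubernetesMapping is inlined, its username and groups fields sit directly beside the ARN in each mapping. A sketch of how role and user mappings might appear in an IAMAuthenticatorConfig stanza; the mapRoles/mapUsers field names and all ARNs here are assumptions for illustration:

```yaml
iamAuthenticatorConfig:
  mapRoles:                                        # []RoleMapping (field name assumed)
    - rolearn: arn:aws:iam::123456789012:role/KubernetesAdmin
      username: kubernetes-admin                   # KubernetesMapping members, inlined
      groups:
        - system:masters
  mapUsers:                                        # []UserMapping (field name assumed)
    - userarn: arn:aws:iam::123456789012:user/alice
      username: alice
      groups:
        - developers
```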
VpcCni
(Appears on:AWSManagedControlPlaneSpec)
VpcCni specifies configuration related to the VPC CNI.
Field | Description |
---|---|
disable bool |
Disable indicates that the Amazon VPC CNI should be disabled. With EKS clusters the Amazon VPC CNI is automatically installed into the cluster. For clusters where you want to use an alternate CNI this option provides a way to specify that the Amazon VPC CNI should be deleted. You cannot set this to true if you are using the Amazon VPC CNI addon. |
env []Kubernetes core/v1.EnvVar |
(Optional)
Env defines a list of environment variables to apply to the aws-node DaemonSet. |
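For example, a sketch that keeps the Amazon VPC CNI installed but tunes it via environment variables on its DaemonSet; the variable shown is a standard amazon-vpc-cni-k8s setting, and whether you need it depends on your cluster:

```yaml
vpcCni:
  env:
    - name: ENABLE_PREFIX_DELEGATION   # standard amazon-vpc-cni-k8s environment variable
      value: "true"
```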
AWSRolesRef
(Appears on:RosaControlPlaneSpec)
AWSRolesRef contains references to various AWS IAM roles required for operators to make calls against the AWS API.
Field | Description |
---|---|
ingressARN string |
IngressARN is an ARN value referencing a role appropriate for the Ingress Operator. The referenced role must have a trust relationship that allows it to be assumed via web identity. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html. Example: { “Version”: “2012-10-17”, “Statement”: [ { “Effect”: “Allow”, “Principal”: { “Federated”: “{{ .ProviderARN }}” }, “Action”: “sts:AssumeRoleWithWebIdentity”, “Condition”: { “StringEquals”: { “{{ .ProviderName }}:sub”: {{ .ServiceAccounts }} } } } ] } The following is an example of a valid policy document: { “Version”: “2012-10-17”, “Statement”: [ { “Effect”: “Allow”, “Action”: [ “elasticloadbalancing:DescribeLoadBalancers”, “tag:GetResources”, “route53:ListHostedZones” ], “Resource”: “*” }, { “Effect”: “Allow”, “Action”: [ “route53:ChangeResourceRecordSets” ], “Resource”: [ “arn:aws:route53:::PUBLIC_ZONE_ID”, “arn:aws:route53:::PRIVATE_ZONE_ID” ] } ] } |
imageRegistryARN string |
ImageRegistryARN is an ARN value referencing a role appropriate for the Image Registry Operator. The following is an example of a valid policy document: { “Version”: “2012-10-17”, “Statement”: [ { “Effect”: “Allow”, “Action”: [ “s3:CreateBucket”, “s3:DeleteBucket”, “s3:PutBucketTagging”, “s3:GetBucketTagging”, “s3:PutBucketPublicAccessBlock”, “s3:GetBucketPublicAccessBlock”, “s3:PutEncryptionConfiguration”, “s3:GetEncryptionConfiguration”, “s3:PutLifecycleConfiguration”, “s3:GetLifecycleConfiguration”, “s3:GetBucketLocation”, “s3:ListBucket”, “s3:GetObject”, “s3:PutObject”, “s3:DeleteObject”, “s3:ListBucketMultipartUploads”, “s3:AbortMultipartUpload”, “s3:ListMultipartUploadParts” ], “Resource”: “*” } ] } |
storageARN string |
StorageARN is an ARN value referencing a role appropriate for the Storage Operator. The following is an example of a valid policy document: { “Version”: “2012-10-17”, “Statement”: [ { “Effect”: “Allow”, “Action”: [ “ec2:AttachVolume”, “ec2:CreateSnapshot”, “ec2:CreateTags”, “ec2:CreateVolume”, “ec2:DeleteSnapshot”, “ec2:DeleteTags”, “ec2:DeleteVolume”, “ec2:DescribeInstances”, “ec2:DescribeSnapshots”, “ec2:DescribeTags”, “ec2:DescribeVolumes”, “ec2:DescribeVolumesModifications”, “ec2:DetachVolume”, “ec2:ModifyVolume” ], “Resource”: “*” } ] } |
networkARN string |
NetworkARN is an ARN value referencing a role appropriate for the Network Operator. The following is an example of a valid policy document: { “Version”: “2012-10-17”, “Statement”: [ { “Effect”: “Allow”, “Action”: [ “ec2:DescribeInstances”, “ec2:DescribeInstanceStatus”, “ec2:DescribeInstanceTypes”, “ec2:UnassignPrivateIpAddresses”, “ec2:AssignPrivateIpAddresses”, “ec2:UnassignIpv6Addresses”, “ec2:AssignIpv6Addresses”, “ec2:DescribeSubnets”, “ec2:DescribeNetworkInterfaces” ], “Resource”: “*” } ] } |
kubeCloudControllerARN string |
KubeCloudControllerARN is an ARN value referencing a role appropriate for the KCM/KCC. Source: https://cloud-provider-aws.sigs.k8s.io/prerequisites/#iam-policies The following is an example of a valid policy document: { “Version”: “2012-10-17”, “Statement”: [ { “Action”: [ “autoscaling:DescribeAutoScalingGroups”, “autoscaling:DescribeLaunchConfigurations”, “autoscaling:DescribeTags”, “ec2:DescribeAvailabilityZones”, “ec2:DescribeInstances”, “ec2:DescribeImages”, “ec2:DescribeRegions”, “ec2:DescribeRouteTables”, “ec2:DescribeSecurityGroups”, “ec2:DescribeSubnets”, “ec2:DescribeVolumes”, “ec2:CreateSecurityGroup”, “ec2:CreateTags”, “ec2:CreateVolume”, “ec2:ModifyInstanceAttribute”, “ec2:ModifyVolume”, “ec2:AttachVolume”, “ec2:AuthorizeSecurityGroupIngress”, “ec2:CreateRoute”, “ec2:DeleteRoute”, “ec2:DeleteSecurityGroup”, “ec2:DeleteVolume”, “ec2:DetachVolume”, “ec2:RevokeSecurityGroupIngress”, “ec2:DescribeVpcs”, “elasticloadbalancing:AddTags”, “elasticloadbalancing:AttachLoadBalancerToSubnets”, “elasticloadbalancing:ApplySecurityGroupsToLoadBalancer”, “elasticloadbalancing:CreateLoadBalancer”, “elasticloadbalancing:CreateLoadBalancerPolicy”, “elasticloadbalancing:CreateLoadBalancerListeners”, “elasticloadbalancing:ConfigureHealthCheck”, “elasticloadbalancing:DeleteLoadBalancer”, “elasticloadbalancing:DeleteLoadBalancerListeners”, “elasticloadbalancing:DescribeLoadBalancers”, “elasticloadbalancing:DescribeLoadBalancerAttributes”, “elasticloadbalancing:DetachLoadBalancerFromSubnets”, “elasticloadbalancing:DeregisterInstancesFromLoadBalancer”, “elasticloadbalancing:ModifyLoadBalancerAttributes”, “elasticloadbalancing:RegisterInstancesWithLoadBalancer”, “elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer”, “elasticloadbalancing:AddTags”, “elasticloadbalancing:CreateListener”, “elasticloadbalancing:CreateTargetGroup”, “elasticloadbalancing:DeleteListener”, “elasticloadbalancing:DeleteTargetGroup”, “elasticloadbalancing:DeregisterTargets”, “elasticloadbalancing:DescribeListeners”, “elasticloadbalancing:DescribeLoadBalancerPolicies”, “elasticloadbalancing:DescribeTargetGroups”, “elasticloadbalancing:DescribeTargetHealth”, “elasticloadbalancing:ModifyListener”, “elasticloadbalancing:ModifyTargetGroup”, “elasticloadbalancing:RegisterTargets”, “elasticloadbalancing:SetLoadBalancerPoliciesOfListener”, “iam:CreateServiceLinkedRole”, “kms:DescribeKey” ], “Resource”: [ “*” ], “Effect”: “Allow” } ] } |
nodePoolManagementARN string |
NodePoolManagementARN is an ARN value referencing a role appropriate for the CAPI Controller. The following is an example of a valid policy document: { “Version”: “2012-10-17”, “Statement”: [ { “Action”: [ “ec2:AssociateRouteTable”, “ec2:AttachInternetGateway”, “ec2:AuthorizeSecurityGroupIngress”, “ec2:CreateInternetGateway”, “ec2:CreateNatGateway”, “ec2:CreateRoute”, “ec2:CreateRouteTable”, “ec2:CreateSecurityGroup”, “ec2:CreateSubnet”, “ec2:CreateTags”, “ec2:DeleteInternetGateway”, “ec2:DeleteNatGateway”, “ec2:DeleteRouteTable”, “ec2:DeleteSecurityGroup”, “ec2:DeleteSubnet”, “ec2:DeleteTags”, “ec2:DescribeAccountAttributes”, “ec2:DescribeAddresses”, “ec2:DescribeAvailabilityZones”, “ec2:DescribeImages”, “ec2:DescribeInstances”, “ec2:DescribeInternetGateways”, “ec2:DescribeNatGateways”, “ec2:DescribeNetworkInterfaces”, “ec2:DescribeNetworkInterfaceAttribute”, “ec2:DescribeRouteTables”, “ec2:DescribeSecurityGroups”, “ec2:DescribeSubnets”, “ec2:DescribeVpcs”, “ec2:DescribeVpcAttribute”, “ec2:DescribeVolumes”, “ec2:DetachInternetGateway”, “ec2:DisassociateRouteTable”, “ec2:DisassociateAddress”, “ec2:ModifyInstanceAttribute”, “ec2:ModifyNetworkInterfaceAttribute”, “ec2:ModifySubnetAttribute”, “ec2:RevokeSecurityGroupIngress”, “ec2:RunInstances”, “ec2:TerminateInstances”, “tag:GetResources”, “ec2:CreateLaunchTemplate”, “ec2:CreateLaunchTemplateVersion”, “ec2:DescribeLaunchTemplates”, “ec2:DescribeLaunchTemplateVersions”, “ec2:DeleteLaunchTemplate”, “ec2:DeleteLaunchTemplateVersions” ], “Resource”: [ “*” ], “Effect”: “Allow” }, { “Condition”: { “StringLike”: { “iam:AWSServiceName”: “elasticloadbalancing.amazonaws.com” } }, “Action”: [ “iam:CreateServiceLinkedRole” ], “Resource”: [ “arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing” ], “Effect”: “Allow” }, { “Action”: [ “iam:PassRole” ], “Resource”: [ “arn:*:iam::*:role/*-worker-role” ], “Effect”: “Allow” }, { “Effect”: “Allow”, “Action”: [ “kms:Decrypt”, “kms:ReEncrypt”, “kms:GenerateDataKeyWithoutPlainText”, “kms:DescribeKey” ], “Resource”: “*” }, { “Effect”: “Allow”, “Action”: [ “kms:CreateGrant” ], “Resource”: “*”, “Condition”: { “Bool”: { “kms:GrantIsForAWSResource”: true } } } ] } |
controlPlaneOperatorARN string |
ControlPlaneOperatorARN is an ARN value referencing a role appropriate for the Control Plane Operator. The following is an example of a valid policy document: { “Version”: “2012-10-17”, “Statement”: [ { “Effect”: “Allow”, “Action”: [ “ec2:CreateVpcEndpoint”, “ec2:DescribeVpcEndpoints”, “ec2:ModifyVpcEndpoint”, “ec2:DeleteVpcEndpoints”, “ec2:CreateTags”, “route53:ListHostedZones”, “ec2:CreateSecurityGroup”, “ec2:AuthorizeSecurityGroupIngress”, “ec2:AuthorizeSecurityGroupEgress”, “ec2:DeleteSecurityGroup”, “ec2:RevokeSecurityGroupIngress”, “ec2:RevokeSecurityGroupEgress”, “ec2:DescribeSecurityGroups”, “ec2:DescribeVpcs” ], “Resource”: “*” }, { “Effect”: “Allow”, “Action”: [ “route53:ChangeResourceRecordSets”, “route53:ListResourceRecordSets” ], “Resource”: “arn:aws:route53:::%s” } ] } |
kmsProviderARN string |
DefaultMachinePoolSpec
(Appears on:RosaControlPlaneSpec)
DefaultMachinePoolSpec defines the configuration for the required worker nodes provisioned as part of the cluster creation.
Field | Description |
---|---|
instanceType string |
(Optional)
The instance type to use, for example m5.xlarge. |
autoscaling RosaMachinePoolAutoScaling |
(Optional)
Autoscaling specifies auto scaling behaviour for the default MachinePool. The autoscaling min/max values must be equal to, or a multiple of, the number of availability zones. |
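A sketch of the resulting stanza; with three availability zones, the autoscaling bounds must be 3, 6, 9, and so on. The minReplicas/maxReplicas field names and the instance type are assumptions for illustration:

```yaml
defaultMachinePoolSpec:
  instanceType: m5.xlarge
  autoscaling:
    minReplicas: 3    # equal to, or a multiple of, the AZ count
    maxReplicas: 6
```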
ExternalAuthProvider
(Appears on:RosaControlPlaneSpec)
ExternalAuthProvider is an external OIDC identity provider that can issue tokens for this cluster
Field | Description |
---|---|
name string |
Name of the OIDC provider |
issuer TokenIssuer |
Issuer describes attributes of the OIDC token issuer |
oidcClients []OIDCClientConfig |
(Optional)
OIDCClients contains configuration for the platform’s clients that need to request tokens from the issuer |
claimMappings TokenClaimMappings |
(Optional)
ClaimMappings describes rules on how to transform information from an ID token into a cluster identity |
claimValidationRules []TokenClaimValidationRule |
ClaimValidationRules are rules that are applied to validate token claims to authenticate users. |
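The following sketch ties the pieces of ExternalAuthProvider together; the issuer, audiences, component, and secret/config map names are placeholders:

```yaml
externalAuthProviders:                       # at most one provider may be configured
  - name: example-oidc
    issuer:
      issuerURL: https://auth.example.com    # must use the https:// scheme
      audiences:
        - openshift                          # must match the token's aud claim
      issuerCertificateAuthority:
        name: oidc-ca-bundle                 # config map whose .data holds ca-bundle.crt
    oidcClients:
      - componentName: console
        componentNamespace: openshift-console
        clientID: console-client
        clientSecret:
          name: console-client-secret        # secret in the same namespace
    claimMappings:
      username:
        claim: email
        prefixPolicy: NoPrefix
      groups:
        claim: groups
        prefix: "oidc:"
    claimValidationRules:
      - type: RequiredClaim
        requiredClaim:
          claim: email_verified
          requiredValue: "true"
```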
LocalObjectReference
(Appears on:OIDCClientConfig, TokenIssuer)
LocalObjectReference references an object in the same namespace.
Field | Description |
---|---|
name string |
Name is the metadata.name of the referenced object. |
NetworkSpec
(Appears on:RosaControlPlaneSpec)
NetworkSpec for ROSA-HCP.
Field | Description |
---|---|
machineCIDR string |
(Optional)
IP address block used by OpenShift while installing the cluster, for example “10.0.0.0/16”. |
podCIDR string |
(Optional)
IP address block from which to assign pod IP addresses, for example “10.128.0.0/14”. |
serviceCIDR string |
(Optional)
IP address block from which to assign service IP addresses, for example “172.30.0.0/16”. |
hostPrefix int |
(Optional)
Network host prefix, which defaults to 23 if not specified. |
networkType string |
(Optional)
The CNI network type. The default is OVNKubernetes. |
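For example, a network stanza using the documented defaults where noted; the CIDR values themselves are illustrative:

```yaml
network:
  machineCIDR: 10.0.0.0/16
  podCIDR: 10.128.0.0/14
  serviceCIDR: 172.30.0.0/16
  hostPrefix: 23              # the documented default
  networkType: OVNKubernetes  # the documented default
```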
OIDCClientConfig
(Appears on:ExternalAuthProvider)
OIDCClientConfig contains configuration for a platform client that needs to request tokens from the issuer.
Field | Description |
---|---|
componentName string |
ComponentName is the name of the component that is supposed to consume this client configuration |
componentNamespace string |
ComponentNamespace is the namespace of the component that is supposed to consume this client configuration |
clientID string |
ClientID is the identifier of the OIDC client from the OIDC provider |
clientSecret LocalObjectReference |
ClientSecret refers to a secret that contains the client secret in the clientSecret key of the .data field. |
extraScopes []string |
(Optional)
ExtraScopes is an optional set of scopes to request tokens with. |
PrefixedClaimMapping
(Appears on:TokenClaimMappings)
PrefixedClaimMapping defines claims with a prefix.
Field | Description |
---|---|
claim string |
Claim is a JWT token claim to be used in the mapping |
prefix string |
Prefix is a string to prefix the value from the token in the result of the claim mapping. By default, no prefixing occurs. For example, if prefix is set to “oidc:” and the claim value is “engineering”, the mapped result is “oidc:engineering”. |
ROSAControlPlane
ROSAControlPlane is the Schema for the ROSAControlPlanes API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec RosaControlPlaneSpec |
|
status RosaControlPlaneStatus |
RegistryConfig
(Appears on:RosaControlPlaneSpec)
RegistryConfig for ROSA-HCP cluster
Field | Description |
---|---|
additionalTrustedCAs map[string]string |
(Optional)
AdditionalTrustedCAs containing the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. |
allowedRegistriesForImport []RegistryLocation |
(Optional)
AllowedRegistriesForImport limits the container image registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. |
registrySources RegistrySources |
(Optional)
RegistrySources contains configuration that determines how the container runtime should treat individual registries when accessing images. It does not contain configuration for the internal cluster registry. AllowedRegistries, BlockedRegistries are mutually exclusive. |
RegistryLocation
(Appears on:RegistryConfig)
RegistryLocation contains a location of the registry specified by the registry domain name.
Field | Description |
---|---|
domainName string |
(Optional)
domainName specifies a domain name for the registry. The domain name might include wildcards, like ‘*’ or ‘??’. If the registry uses a non-standard port (i.e., not 80 or 443), the port should be included in the domain name as well. |
insecure bool |
(Optional)
insecure indicates whether the registry is secure (https) or insecure (http). The default is secure. |
RegistrySources
(Appears on:RegistryConfig)
RegistrySources contains registries configuration.
Field | Description |
---|---|
allowedRegistries []string |
(Optional)
AllowedRegistries are the registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk (*) wildcard character as a prefix to the domain name, for example *.example.com. You can specify an individual repository within a registry, for example reg1.io/myrepo/myapp:latest. All other registries are blocked. |
blockedRegistries []string |
(Optional)
BlockedRegistries are the registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk (*) wildcard character as a prefix to the domain name, for example *.example.com. You can specify an individual repository within a registry, for example reg1.io/myrepo/myapp:latest. All other registries are allowed. |
insecureRegistries []string |
(Optional)
InsecureRegistries are registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk (*) wildcard character as a prefix to the domain name, for example *.example.com. You can specify an individual repository within a registry, for example reg1.io/myrepo/myapp:latest. |
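Combining RegistryConfig, RegistryLocation, and RegistrySources, a clusterRegistryConfig stanza might look like the sketch below; the hostnames and the elided PEM body are placeholders:

```yaml
clusterRegistryConfig:
  additionalTrustedCAs:
    registry.internal.example.com: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
  allowedRegistriesForImport:
    - domainName: quay.io
      insecure: false
  registrySources:
    allowedRegistries:                 # mutually exclusive with blockedRegistries
      - quay.io
      - "*.example.com"
    insecureRegistries:
      - registry.internal.example.com:5000
```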
RosaControlPlaneSpec
(Appears on:ROSAControlPlane)
RosaControlPlaneSpec defines the desired state of ROSAControlPlane.
Field | Description |
---|---|
rosaClusterName string |
Cluster name must be a valid DNS-1035 label, so it must consist of lower case alphanumeric characters or ‘-’, start with an alphabetic character, end with an alphanumeric character, and have a maximum length of 54 characters. |
domainPrefix string |
(Optional)
DomainPrefix is an optional prefix added to the cluster’s domain name. It will be used when generating a sub-domain for the cluster on the openshiftapps domain. It must be a valid DNS-1035 label consisting of lower case alphanumeric characters or ‘-’, starting with an alphabetic character, ending with an alphanumeric character, and with a maximum length of 15 characters. |
subnets []string |
The Subnet IDs to use when installing the cluster. SubnetIDs should come in pairs; two per availability zone, one private and one public. |
availabilityZones []string |
AvailabilityZones describe the AWS availability zones of the worker nodes. They should match the availability zones of the provided subnets; a machine pool will be created for each availability zone. |
region string |
The AWS Region the cluster lives in. |
version string |
OpenShift semantic version, for example “4.14.5”. |
versionGate VersionGateAckType |
VersionGate requires acknowledgment when upgrading ROSA-HCP y-stream versions (e.g., from 4.15 to 4.16). The default is WaitForAcknowledge. WaitForAcknowledge: if acknowledgment is required, the upgrade will not proceed until VersionGate is set to Acknowledge or AlwaysAcknowledge. Acknowledge: if acknowledgment is required, apply it for the upgrade; after the upgrade is done, set the version gate back to WaitForAcknowledge. AlwaysAcknowledge: if acknowledgment is required, apply it and proceed with the upgrade. |
rolesRef AWSRolesRef |
AWS IAM roles used to perform credential requests by the openshift operators. |
oidcID string |
The ID of the internal OpenID Connect Provider. |
enableExternalAuthProviders bool |
(Optional)
EnableExternalAuthProviders enables external authentication configuration for the cluster. |
externalAuthProviders []ExternalAuthProvider |
ExternalAuthProviders are external OIDC identity providers that can issue tokens for this cluster. Can only be set if “enableExternalAuthProviders” is set to “True”. At most one provider can be configured. |
installerRoleARN string |
InstallerRoleARN is an AWS IAM role that OpenShift Cluster Manager will assume to create the cluster. |
supportRoleARN string |
SupportRoleARN is an AWS IAM role used by Red Hat SREs to enable access to the cluster account in order to provide support. |
workerRoleARN string |
WorkerRoleARN is an AWS IAM role that will be attached to worker instances. |
billingAccount string |
(Optional)
BillingAccount is an optional AWS account to use for billing the subscription fees for ROSA clusters. The cost of running each ROSA cluster will be billed to the infrastructure account in which the cluster is running. |
defaultMachinePoolSpec DefaultMachinePoolSpec |
(Optional)
DefaultMachinePoolSpec defines the configuration for the default machinepool(s) provisioned as part of the cluster creation.
One MachinePool will be created with this configuration per AvailabilityZone. Those default machinepools are required for openshift cluster operators
to work properly.
As these machine pools are not created using the ROSAMachinePool CR, they will not be visible to or managed by the ROSA CAPI provider.
This field will be removed in the future once the current limitation is resolved. |
network NetworkSpec |
(Optional)
Network config for the ROSA HCP cluster. |
endpointAccess RosaEndpointAccessType |
(Optional)
EndpointAccess specifies the publishing scope of cluster endpoints. The default is Public. |
additionalTags Tags |
(Optional)
AdditionalTags are user-defined tags to be added on the AWS resources associated with the control plane. |
etcdEncryptionKMSARN string |
(Optional)
EtcdEncryptionKMSARN is the ARN of the KMS key used to encrypt etcd. The key itself needs to be
created out-of-band by the user and tagged with red-hat:true. |
auditLogRoleARN string |
(Optional)
AuditLogRoleARN defines the role that is used to forward audit logs to AWS CloudWatch. If not set, audit log forwarding is disabled. |
provisionShardID string |
(Optional)
ProvisionShardID defines the shard where rosa control plane components will be hosted. |
credentialsSecretRef Kubernetes core/v1.LocalObjectReference |
(Optional)
CredentialsSecretRef references a secret with necessary credentials to connect to the OCM API. The secret should contain the following data keys: - ocmToken: eyJhbGciOiJIUzI1NiIsI…. - ocmApiUrl: Optional, defaults to ‘https://api.openshift.com’ |
identityRef AWSIdentityReference |
(Optional)
IdentityRef is a reference to an identity to be used when reconciling the managed control plane. If no identity is specified, the default identity for this controller will be used. |
controlPlaneEndpoint Cluster API api/v1beta1.APIEndpoint |
(Optional)
ControlPlaneEndpoint represents the endpoint used to communicate with the control plane. |
clusterRegistryConfig RegistryConfig |
(Optional)
ClusterRegistryConfig represents registry config used with the cluster. |
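Pulling the required fields together, a minimal ROSAControlPlane could look like the sketch below. The apiVersion is an assumption, every ARN, ID, and name is a placeholder, and the rolesRef roles must already exist with the trust and permission policies described above:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2   # assumed group/version
kind: ROSAControlPlane
metadata:
  name: rosa-cp
spec:
  rosaClusterName: my-rosa-cluster
  region: us-west-2
  version: "4.14.5"
  subnets:
    - subnet-0123456789abcdef0        # private
    - subnet-0fedcba9876543210        # public, same AZ
  availabilityZones:
    - us-west-2a
  oidcID: 2abcdefghijklmnopqrstuvwxyz
  installerRoleARN: arn:aws:iam::123456789012:role/example-Installer-Role
  supportRoleARN: arn:aws:iam::123456789012:role/example-Support-Role
  workerRoleARN: arn:aws:iam::123456789012:role/example-Worker-Role
  rolesRef:
    ingressARN: arn:aws:iam::123456789012:role/example-openshift-ingress
    imageRegistryARN: arn:aws:iam::123456789012:role/example-image-registry
    storageARN: arn:aws:iam::123456789012:role/example-ebs-csi-driver
    networkARN: arn:aws:iam::123456789012:role/example-cloud-network-config
    kubeCloudControllerARN: arn:aws:iam::123456789012:role/example-cloud-controller
    nodePoolManagementARN: arn:aws:iam::123456789012:role/example-capa-controller
    controlPlaneOperatorARN: arn:aws:iam::123456789012:role/example-cpo
    kmsProviderARN: arn:aws:iam::123456789012:role/example-kms-provider
  credentialsSecretRef:
    name: rosa-creds-secret           # holds ocmToken (and optionally ocmApiUrl)
```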
RosaControlPlaneStatus
(Appears on:ROSAControlPlane)
RosaControlPlaneStatus defines the observed state of ROSAControlPlane.
Field | Description |
---|---|
externalManagedControlPlane bool |
ExternalManagedControlPlane indicates to cluster-api that the control plane is managed by an external service such as AKS, EKS, GKE, etc. |
initialized bool |
(Optional)
Initialized denotes whether or not the control plane has the uploaded kubernetes config-map. |
ready bool |
Ready denotes that the ROSAControlPlane API Server is ready to receive requests. |
failureMessage string |
(Optional)
FailureMessage will be set in the event that there is a terminal problem reconciling the state and will be set to a descriptive error message. This field should not be set for transient errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the spec or the configuration of the controller, and that manual intervention is required. |
conditions Cluster API api/v1beta1.Conditions |
Conditions specifies the conditions for the managed control plane |
id string |
ID is the cluster ID given by ROSA. |
consoleURL string |
ConsoleURL is the url for the openshift console. |
oidcEndpointURL string |
OIDCEndpointURL is the endpoint url for the managed OIDC provider. |
availableUpgrades []string |
Available upgrades for the ROSA hosted control plane. |
RosaEndpointAccessType
(string
alias)
(Appears on:RosaControlPlaneSpec)
RosaEndpointAccessType specifies the publishing scope of cluster endpoints.
Value | Description |
---|---|
"Private" |
Private endpoint access allows only private API server access and private node communication with the control plane. |
"Public" |
Public endpoint access allows public API server access and private node communication with the control plane. |
TokenAudience
(string
alias)
(Appears on:TokenIssuer)
TokenAudience is the audience that the token was issued for.
TokenClaimMappings
(Appears on:ExternalAuthProvider)
TokenClaimMappings describes rules on how to transform information from an ID token into a cluster identity.
Field | Description |
---|---|
username UsernameClaimMapping |
(Optional)
Username is a name of the claim that should be used to construct usernames for the cluster identity. Default value: “sub” |
groups PrefixedClaimMapping |
(Optional)
Groups is the name of the claim that should be used to construct groups for the cluster identity. The referenced claim must contain an array of string values. |
TokenClaimValidationRule
(Appears on:ExternalAuthProvider)
TokenClaimValidationRule validates token claims to authenticate users.
Field | Description |
---|---|
type TokenValidationRuleType |
Type sets the type of the validation rule |
requiredClaim TokenRequiredClaim |
RequiredClaim allows configuring a required claim name and its expected value |
TokenIssuer
(Appears on:ExternalAuthProvider)
TokenIssuer describes attributes of the OIDC token issuer
Field | Description |
---|---|
issuerURL string |
URL is the serving URL of the token issuer. Must use the https:// scheme. |
audiences []TokenAudience |
Audiences is an array of audiences that the token was issued for. Valid tokens must include at least one of these values in their “aud” claim. Must be set to exactly one value. |
issuerCertificateAuthority LocalObjectReference |
CertificateAuthority is a reference to a config map in the configuration namespace. The .data of the configMap must contain the “ca-bundle.crt” key. If unset, system trust is used instead. |
TokenRequiredClaim
(Appears on:TokenClaimValidationRule)
TokenRequiredClaim allows configuring a required claim name and its expected value.
Field | Description |
---|---|
claim string |
Claim is a name of a required claim. Only claims with string values are supported. |
requiredValue string |
RequiredValue is the required value for the claim. |
TokenValidationRuleType
(string
alias)
(Appears on:TokenClaimValidationRule)
TokenValidationRuleType defines the type of the validation rule.
Value | Description |
---|---|
"RequiredClaim" |
TokenValidationRuleTypeRequiredClaim defines the type for RequiredClaim. |
UsernameClaimMapping
(Appears on:TokenClaimMappings)
UsernameClaimMapping defines the claim that should be used to construct usernames for the cluster identity.
Field | Description |
---|---|
claim string |
Claim is a JWT token claim to be used in the mapping |
prefixPolicy UsernamePrefixPolicy |
(Optional)
PrefixPolicy specifies how a prefix should apply. By default, claims other than email are prefixed with the issuer URL to prevent naming clashes with other plug-ins. Set to “NoPrefix” to disable prefixing. |
prefix string |
(Optional)
Prefix is prepended to claim to prevent clashes with existing names. |
UsernamePrefixPolicy
(string
alias)
(Appears on:UsernameClaimMapping)
UsernamePrefixPolicy specifies how a prefix should apply.
Value | Description |
---|---|
"" |
NoOpinion lets the cluster assign prefixes. If the username claim is email, there is no prefix. If the username claim is anything else, it is prefixed by the issuerURL. |
"NoPrefix" |
NoPrefix means the username claim value will not have any prefix |
"Prefix" |
Prefix means the prefix value must be specified. It cannot be empty |
VersionGateAckType
(string
alias)
(Appears on:RosaControlPlaneSpec)
VersionGateAckType specifies the version gate acknowledgement.
Value | Description |
---|---|
"Acknowledge" |
Acknowledge if acknowledgment is required and proceed with the upgrade. |
"AlwaysAcknowledge" |
AlwaysAcknowledge always acknowledg if required and proceed with the upgrade. |
"WaitForAcknowledge" |
WaitForAcknowledge if acknowledgment is required, wait not to proceed with the upgrade. |
infrastructure.cluster.x-k8s.io/v1beta1
Package v1beta1 contains the v1beta1 API implementation.
Resource Types:
AMIReference
(Appears on:AWSMachineSpec)
AMIReference is a reference to a specific AWS resource by ID, ARN, or filters. Only one of ID, ARN or Filters may be specified. Specifying more than one will result in a validation error.
Field | Description |
---|---|
id string |
(Optional)
ID of resource |
eksLookupType EKSAMILookupType |
(Optional)
EKSOptimizedLookupType, if specified, will look up an EKS-optimized image in the SSM Parameter Store. |
AWSCluster
AWSCluster is the schema for Amazon EC2 based Kubernetes Cluster API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSClusterSpec |
|
status AWSClusterStatus |
AWSClusterControllerIdentity
AWSClusterControllerIdentity is the Schema for the awsclustercontrolleridentities API. It is used to grant access to use Cluster API Provider AWS Controller credentials.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSClusterControllerIdentitySpec |
Spec for this AWSClusterControllerIdentity. |
AWSClusterControllerIdentitySpec
(Appears on:AWSClusterControllerIdentity)
AWSClusterControllerIdentitySpec defines the specifications for AWSClusterControllerIdentity.
Field | Description |
---|---|
AWSClusterIdentitySpec AWSClusterIdentitySpec |
(Members of AWSClusterIdentitySpec are embedded into this type.) |
AWSClusterIdentitySpec
(Appears on:AWSClusterControllerIdentitySpec, AWSClusterRoleIdentitySpec, AWSClusterStaticIdentitySpec)
AWSClusterIdentitySpec defines the Spec struct for AWSClusterIdentity types.
Field | Description |
---|---|
allowedNamespaces AllowedNamespaces |
(Optional)
AllowedNamespaces is used to identify which namespaces are allowed to use the identity from. Namespaces can be selected either using an array of namespaces or with a label selector. An empty allowedNamespaces object indicates that AWSClusters can use this identity from any namespace. If this object is nil, no namespaces will be allowed (the default behaviour if this field is not provided). A namespace should be either in the NamespaceList or match the Selector to use the identity. |
AWSClusterRoleIdentity
AWSClusterRoleIdentity is the Schema for the awsclusterroleidentities API. It is used to assume a role using the provided sourceRef.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSClusterRoleIdentitySpec |
Spec for this AWSClusterRoleIdentity. |
AWSClusterRoleIdentitySpec
(Appears on:AWSClusterRoleIdentity)
AWSClusterRoleIdentitySpec defines the specifications for AWSClusterRoleIdentity.
Field | Description |
---|---|
AWSClusterIdentitySpec AWSClusterIdentitySpec |
(Members of AWSClusterIdentitySpec are embedded into this type.) |
AWSRoleSpec AWSRoleSpec |
(Members of AWSRoleSpec are embedded into this type.) |
externalID string |
(Optional)
A unique identifier that might be required when you assume a role in another account. If the administrator of the account to which the role belongs provided you with an external ID, then provide that value in the ExternalId parameter. This value can be any string, such as a passphrase or account number. A cross-account role is usually set up to trust everyone in an account. Therefore, the administrator of the trusting account might send an external ID to the administrator of the trusted account. That way, only someone with the ID can assume the role, rather than everyone in the account. For more information about the external ID, see How to Use an External ID When Granting Access to Your AWS Resources to a Third Party in the IAM User Guide. |
sourceIdentityRef AWSIdentityReference |
SourceIdentityRef is a reference to another identity which will be chained to do role assumption. All identity types are accepted. |
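A sketch of a role identity that chains from the controller identity; the role ARN and external ID are placeholders:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterRoleIdentity
metadata:
  name: cross-account-role
spec:
  roleARN: arn:aws:iam::123456789012:role/capa-manager
  sessionName: capa-session
  durationSeconds: 900
  externalID: "example-external-id"
  sourceIdentityRef:                 # the identity used to assume roleARN
    kind: AWSClusterControllerIdentity
    name: default
  allowedNamespaces: {}              # empty object: usable from any namespace
```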
AWSClusterSpec
(Appears on:AWSCluster, AWSClusterTemplateResource)
AWSClusterSpec defines the desired state of an EC2-based Kubernetes cluster.
Field | Description |
---|---|
network NetworkSpec |
NetworkSpec encapsulates all things related to AWS network. |
region string |
The AWS Region the cluster lives in. |
sshKeyName string |
(Optional)
SSHKeyName is the name of the ssh key to attach to the bastion host. Valid values are empty string (do not use SSH keys), a valid SSH key name, or omitted (use the default SSH key name) |
controlPlaneEndpoint Cluster API api/v1beta1.APIEndpoint |
(Optional)
ControlPlaneEndpoint represents the endpoint used to communicate with the control plane. |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to AWS resources managed by the AWS provider, in addition to the ones added by default. |
controlPlaneLoadBalancer AWSLoadBalancerSpec |
(Optional)
ControlPlaneLoadBalancer is optional configuration for customizing control plane behavior. |
imageLookupFormat string |
(Optional)
ImageLookupFormat is the AMI naming format to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupOrg. Supports substitutions for {{.BaseOS}} and {{.K8sVersion}} with the base OS and kubernetes version, respectively. The BaseOS will be the value in ImageLookupBaseOS or ubuntu (the default), and the kubernetes version as defined by the packages produced by kubernetes/release without v as a prefix: 1.13.0, 1.12.5-mybuild.1, or 1.17.3. For example, the default image format of capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-* will end up searching for AMIs that match the pattern capa-ami-ubuntu-?1.18.0-* for a Machine that is targeting kubernetes v1.18.0 and the ubuntu base OS. See also: https://golang.org/pkg/text/template/ |
imageLookupOrg string |
(Optional)
ImageLookupOrg is the AWS Organization ID to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupOrg. |
imageLookupBaseOS string |
ImageLookupBaseOS is the name of the base operating system used to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupBaseOS. |
bastion Bastion |
(Optional)
Bastion contains options to configure the bastion host. |
identityRef AWSIdentityReference |
IdentityRef is a reference to an identity to be used when reconciling the managed control plane. If no identity is specified, the default identity for this controller will be used. |
s3Bucket S3Bucket |
(Optional)
S3Bucket contains options to configure a supporting S3 bucket for this cluster - currently used for nodes requiring Ignition (https://coreos.github.io/ignition/) for bootstrapping (requires BootstrapFormatIgnition feature flag to be enabled). |
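A minimal AWSCluster touching several of these fields; the region, CIDR, and tags are illustrative, and the vpc.cidrBlock field is assumed from the referenced VPCSpec type:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: example-cluster
spec:
  region: eu-west-1
  sshKeyName: default
  network:
    vpc:
      cidrBlock: 10.0.0.0/16    # assumed VPCSpec field
  bastion:
    enabled: true
  additionalTags:
    environment: dev
```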
AWSClusterStaticIdentity
AWSClusterStaticIdentity is the Schema for the awsclusterstaticidentities API. It represents a reference to an AWS access key ID and secret access key, stored in a secret.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSClusterStaticIdentitySpec |
Spec for this AWSClusterStaticIdentity. |
AWSClusterStaticIdentitySpec
(Appears on:AWSClusterStaticIdentity)
AWSClusterStaticIdentitySpec defines the specifications for AWSClusterStaticIdentity.
Field | Description |
---|---|
AWSClusterIdentitySpec AWSClusterIdentitySpec |
(Members of AWSClusterIdentitySpec are embedded into this type.) |
secretRef string |
Reference to a secret containing the credentials. The secret should contain the following data keys: AccessKeyID: AKIAIOSFODNN7EXAMPLE SecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY SessionToken: Optional |
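A sketch pairing the identity with its secret. The assumption here is that the secret lives in the namespace where the CAPA controller runs (shown as capa-system); the credential values are the AWS documentation examples:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-aws-creds
  namespace: capa-system          # assumed controller namespace
stringData:
  AccessKeyID: AKIAIOSFODNN7EXAMPLE
  SecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterStaticIdentity
metadata:
  name: static-identity
spec:
  secretRef: example-aws-creds
  allowedNamespaces:
    list:
      - team-a                    # only this namespace may use the identity
```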
AWSClusterStatus
(Appears on:AWSCluster)
AWSClusterStatus defines the observed state of AWSCluster.
Field | Description |
---|---|
ready bool |
|
networkStatus NetworkStatus |
|
failureDomains Cluster API api/v1beta1.FailureDomains |
|
bastion Instance |
|
conditions Cluster API api/v1beta1.Conditions |
AWSClusterTemplate
AWSClusterTemplate is the schema for Amazon EC2 based Kubernetes Cluster Templates.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSClusterTemplateSpec |
AWSClusterTemplateResource
(Appears on:AWSClusterTemplateSpec)
AWSClusterTemplateResource defines the desired state of AWSClusterTemplate.
Field | Description |
---|---|
metadata Cluster API api/v1beta1.ObjectMeta |
(Optional)
Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSClusterSpec |
AWSClusterTemplateSpec
(Appears on:AWSClusterTemplate)
AWSClusterTemplateSpec defines the desired state of AWSClusterTemplate.
Field | Description |
---|---|
template AWSClusterTemplateResource |
AWSIdentityKind
(string
alias)
(Appears on:AWSIdentityReference)
AWSIdentityKind defines allowed AWS identity types.
AWSIdentityReference
(Appears on:AWSClusterRoleIdentitySpec, AWSClusterSpec)
AWSIdentityReference specifies an identity.
Field | Description |
---|---|
name string |
Name of the identity. |
kind AWSIdentityKind |
Kind of the identity. |
AWSLoadBalancerSpec
(Appears on:AWSClusterSpec)
AWSLoadBalancerSpec defines the desired state of an AWS load balancer.
Field | Description |
---|---|
name string |
(Optional)
Name sets the name of the classic ELB load balancer. As per AWS, the name must be unique within your set of load balancers for the region, must have a maximum of 32 characters, must contain only alphanumeric characters or hyphens, and cannot begin or end with a hyphen. Once set, the value cannot be changed. |
scheme ClassicELBScheme |
(Optional)
Scheme sets the scheme of the load balancer (defaults to internet-facing) |
crossZoneLoadBalancing bool |
(Optional)
CrossZoneLoadBalancing enables the classic ELB cross availability zone balancing. With cross-zone load balancing, each load balancer node for your Classic Load Balancer distributes requests evenly across the registered instances in all enabled Availability Zones. If cross-zone load balancing is disabled, each load balancer node distributes requests evenly across the registered instances in its Availability Zone only. Defaults to false. |
subnets []string |
(Optional)
Subnets sets the subnets that should be applied to the control plane load balancer (defaults to discovered subnets for managed VPCs or an empty set for unmanaged VPCs) |
healthCheckProtocol ClassicELBProtocol |
(Optional)
HealthCheckProtocol sets the protocol type for the classic ELB health check target. The default value is ClassicELBProtocolSSL. |
additionalSecurityGroups []string |
(Optional)
AdditionalSecurityGroups sets the security groups used by the load balancer. These are expected to be security group IDs. This is optional - if not provided, new security groups will be created for the load balancer. |
AWSMachine
AWSMachine is the schema for Amazon EC2 machines.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSMachineSpec |
|
status AWSMachineStatus |
AWSMachineProviderConditionType
(string
alias)
AWSMachineProviderConditionType is a valid value for AWSMachineProviderCondition.Type.
AWSMachineSpec
(Appears on:AWSMachine, AWSMachineTemplateResource)
AWSMachineSpec defines the desired state of an Amazon EC2 instance.
Field | Description |
---|---|
providerID string |
ProviderID is the unique identifier as specified by the cloud provider. |
instanceID string |
InstanceID is the EC2 instance ID for this machine. |
ami AMIReference |
AMI is the reference to the AMI from which to create the machine instance. |
imageLookupFormat string |
(Optional)
ImageLookupFormat is the AMI naming format to look up the image for this machine It will be ignored if an explicit AMI is set. Supports substitutions for {{.BaseOS}} and {{.K8sVersion}} with the base OS and kubernetes version, respectively. The BaseOS will be the value in ImageLookupBaseOS or ubuntu (the default), and the kubernetes version as defined by the packages produced by kubernetes/release without v as a prefix: 1.13.0, 1.12.5-mybuild.1, or 1.17.3. For example, the default image format of capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-* will end up searching for AMIs that match the pattern capa-ami-ubuntu-?1.18.0-* for a Machine that is targeting kubernetes v1.18.0 and the ubuntu base OS. See also: https://golang.org/pkg/text/template/ |
imageLookupOrg string |
ImageLookupOrg is the AWS Organization ID to use for image lookup if AMI is not set. |
imageLookupBaseOS string |
ImageLookupBaseOS is the name of the base operating system to use for image lookup when the AMI is not set. |
instanceType string |
InstanceType is the type of instance to create. Example: m4.xlarge |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to an instance, in addition to the ones added by default by the AWS provider. If both the AWSCluster and the AWSMachine specify the same tag name with different values, the AWSMachine’s value takes precedence. |
iamInstanceProfile string |
(Optional)
IAMInstanceProfile is a name of an IAM instance profile to assign to the instance |
publicIP bool |
(Optional)
PublicIP specifies whether the instance should get a public IP. Precedence for this setting is as follows: 1. This field if set 2. Cluster/flavor setting 3. Subnet default |
additionalSecurityGroups []AWSResourceReference |
(Optional)
AdditionalSecurityGroups is an array of references to security groups that should be applied to the instance. These security groups would be set in addition to any security groups defined at the cluster level or in the actuator. It is possible to specify either IDs or Filters. Using Filters will cause additional requests to the AWS API, and if tags change, the attached security groups might change too. |
failureDomain string |
FailureDomain is the failure domain unique identifier this Machine should be attached to, as defined in Cluster API. For this infrastructure provider, the ID is equivalent to an AWS Availability Zone. If multiple subnets are matched for the availability zone, the first one returned is picked. |
subnet AWSResourceReference |
(Optional)
Subnet is a reference to the subnet to use for this instance. If not specified, the cluster subnet will be used. |
sshKeyName string |
(Optional)
SSHKeyName is the name of the ssh key to attach to the instance. Valid values are empty string (do not use SSH keys), a valid SSH key name, or omitted (use the default SSH key name) |
rootVolume Volume |
(Optional)
RootVolume encapsulates the configuration options for the root volume |
nonRootVolumes []Volume |
(Optional)
Configuration options for the non root storage volumes. |
networkInterfaces []string |
(Optional)
NetworkInterfaces is a list of ENIs to associate with the instance. A maximum of 2 may be specified. |
uncompressedUserData bool |
(Optional)
UncompressedUserData specifies whether the user data is gzip-compressed before it is sent to the EC2 instance. cloud-init has built-in support for gzip-compressed user data; user data stored in AWS Secrets Manager is always gzip-compressed. |
cloudInit CloudInit |
(Optional)
CloudInit defines options related to the bootstrapping systems where CloudInit is used. |
ignition Ignition |
(Optional)
Ignition defines options related to the bootstrapping systems where Ignition is used. |
spotMarketOptions SpotMarketOptions |
(Optional)
SpotMarketOptions allows users to configure instances to be run using AWS Spot instances. |
tenancy string |
(Optional)
Tenancy indicates if instance should run on shared or single-tenant hardware. |
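A sketch of an AWSMachine using a pinned AMI; the IDs, size, and instance profile are illustrative, and rootVolume's size field is assumed from the referenced Volume type:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachine
metadata:
  name: example-machine
spec:
  instanceType: m5.large
  iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
  sshKeyName: default
  ami:
    id: ami-0123456789abcdef0     # pinning an AMI skips image lookup
  publicIP: false
  rootVolume:
    size: 50                      # assumed Volume field, size in GiB
  additionalTags:
    role: worker
```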
AWSMachineStatus
(Appears on:AWSMachine)
AWSMachineStatus defines the observed state of AWSMachine.
Field | Description |
---|---|
ready bool |
(Optional)
Ready is true when the provider resource is ready. |
interruptible bool |
(Optional)
Interruptible reports that this machine is using spot instances and can therefore be interrupted by CAPI when it receives a notice that the spot instance is to be terminated by AWS. This will be set to true when SpotMarketOptions is not nil (i.e. this machine is using a spot instance). |
addresses []Cluster API api/v1beta1.MachineAddress |
Addresses contains the AWS instance associated addresses. |
instanceState InstanceState |
(Optional)
InstanceState is the state of the AWS instance for this machine. |
failureReason Cluster API errors.MachineStatusError |
(Optional)
FailureReason will be set in the event that there is a terminal problem reconciling the Machine and will contain a succinct value suitable for machine interpretation. This field should not be set for transient errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller’s output. |
failureMessage string |
(Optional)
FailureMessage will be set in the event that there is a terminal problem reconciling the Machine and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transient errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller’s output. |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the AWSMachine. |
AWSMachineTemplate
AWSMachineTemplate is the schema for the Amazon EC2 Machine Templates API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSMachineTemplateSpec |
|
status AWSMachineTemplateStatus |
AWSMachineTemplateResource
(Appears on:AWSMachineTemplateSpec)
AWSMachineTemplateResource describes the data needed to create an AWSMachine from a template.
Field | Description |
---|---|
metadata Cluster API api/v1beta1.ObjectMeta |
(Optional)
Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec AWSMachineSpec |
Spec is the specification of the desired behavior of the machine. |
AWSMachineTemplateSpec
(Appears on:AWSMachineTemplate)
AWSMachineTemplateSpec defines the desired state of AWSMachineTemplate.
Field | Description |
---|---|
template AWSMachineTemplateResource |
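The template wraps an AWSMachineSpec one level down, under spec.template.spec. A sketch, with illustrative values:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: example-machine-template
spec:
  template:
    spec:                          # an AWSMachineSpec, as documented above
      instanceType: m5.large
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      sshKeyName: default
```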
AWSMachineTemplateStatus
(Appears on:AWSMachineTemplate)
AWSMachineTemplateStatus defines a status for an AWSMachineTemplate.
Field | Description |
---|---|
capacity Kubernetes core/v1.ResourceList |
(Optional)
Capacity defines the resource capacity for this machine. This value is used for autoscaling from zero operations as defined in: https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210310-opt-in-autoscaling-from-zero.md |
AWSResourceReference
(Appears on:AWSMachineSpec)
AWSResourceReference is a reference to a specific AWS resource by ID or filters. Only one of ID or Filters may be specified. Specifying more than one will result in a validation error.
Field | Description |
---|---|
id string |
(Optional)
ID of resource |
arn string |
(Optional)
ARN of resource. Deprecated: This field has no function and is going to be removed in the next release. |
filters []Filter |
(Optional)
Filters is a set of key/value pairs used to identify a resource. They are applied according to the rules defined by the AWS API: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Filtering.html |
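For instance, in AWSMachineSpec's additionalSecurityGroups each entry may use either an ID or filters, but not both; a sketch with an illustrative tag key and value:

```yaml
additionalSecurityGroups:
  - id: sg-0123456789abcdef0       # reference by ID
  - filters:                       # or resolve via EC2 API filters
      - name: tag:Environment      # filter names are case-sensitive
        values:
          - dev
```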
AWSRoleSpec
(Appears on:AWSClusterRoleIdentitySpec)
AWSRoleSpec defines the specifications for all identities based around AWS roles.
Field | Description |
---|---|
roleARN string |
The Amazon Resource Name (ARN) of the role to assume. |
sessionName string |
An identifier for the assumed role session |
durationSeconds int32 |
The duration, in seconds, of the role session before it is renewed. |
inlinePolicy string |
An IAM policy as a JSON-encoded string that you want to use as an inline session policy. |
policyARNs []string |
The Amazon Resource Names (ARNs) of the IAM managed policies that you want to use as managed session policies. The policies must exist in the same account as the role. |
AZSelectionScheme
(string
alias)
(Appears on:VPCSpec)
AZSelectionScheme defines the scheme of selecting AZs.
AllowedNamespaces
(Appears on:AWSClusterIdentitySpec)
AllowedNamespaces is a selector of the namespaces from which AWSClusters can use this ClusterPrincipal. This is a standard Kubernetes LabelSelector, a label query over a set of resources. The results of matchLabels and matchExpressions are ANDed.
Field | Description |
---|---|
list []string |
(Optional)
A nil or empty list indicates that AWSClusters cannot use the identity from any namespace. |
selector Kubernetes meta/v1.LabelSelector |
(Optional)
An empty selector indicates that AWSClusters cannot use this AWSClusterIdentity from any namespace. |
Bastion
(Appears on:AWSClusterSpec)
Bastion defines a bastion host.
Field | Description |
---|---|
enabled bool |
(Optional)
Enabled allows this provider to create a bastion host instance with a public ip to access the VPC private network. |
disableIngressRules bool |
(Optional)
DisableIngressRules will ensure there are no Ingress rules in the bastion host’s security group. Requires AllowedCIDRBlocks to be empty. |
allowedCIDRBlocks []string |
(Optional)
AllowedCIDRBlocks is a list of CIDR blocks allowed to access the bastion host. They are set as ingress rules for the Bastion host’s Security Group (defaults to 0.0.0.0/0). |
instanceType string |
InstanceType will use the specified instance type for the bastion. If not specified, Cluster API Provider AWS will use t3.micro for all regions except us-east-1, where t2.micro will be the default. |
ami string |
(Optional)
AMI will use the specified AMI to boot the bastion. If not specified, the AMI will default to one picked out in public space. |
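A bastion stanza restricting ingress to a known range; the CIDR and instance type are illustrative:

```yaml
bastion:
  enabled: true
  instanceType: t3.small
  allowedCIDRBlocks:
    - 203.0.113.0/24    # ingress rule on the bastion's security group
```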
BuildParams
BuildParams is used to build tags around an aws resource.
Field | Description |
---|---|
Lifecycle ResourceLifecycle |
Lifecycle determines the resource lifecycle. |
ClusterName string |
ClusterName is the cluster associated with the resource. |
ResourceID string |
ResourceID is the unique identifier of the resource to be tagged. |
Name string |
(Optional)
Name is the name of the resource, it’s applied as the tag “Name” on AWS. |
Role string |
(Optional)
Role is the role associated with the resource. |
Additional Tags |
(Optional)
Any additional tags to be added to the resource. |
CNIIngressRule
CNIIngressRule defines an AWS ingress rule for CNI requirements.
Field | Description |
---|---|
description string |
|
protocol SecurityGroupProtocol |
|
fromPort int64 |
|
toPort int64 |
CNIIngressRules
([]sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta1.CNIIngressRule
alias)
(Appears on:CNISpec)
CNIIngressRules is a slice of CNIIngressRule.
CNISpec
(Appears on:NetworkSpec)
CNISpec defines configuration for CNI.
Field | Description |
---|---|
cniIngressRules CNIIngressRules |
CNIIngressRules specify rules to apply to control plane and worker node security groups. The source for the rule will be set to control plane and worker security group IDs. |
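For example, a CNI such as Calico needs BGP between nodes; a sketch opening that port on the control plane and worker security groups:

```yaml
network:
  cni:
    cniIngressRules:
      - description: bgp (calico)
        protocol: tcp
        fromPort: 179
        toPort: 179
```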
ClassicELB
(Appears on:NetworkStatus)
ClassicELB defines an AWS classic load balancer.
Field | Description |
---|---|
name string |
(Optional)
The name of the load balancer. It must be unique within the set of load balancers defined in the region. It also serves as the identifier. |
dnsName string |
DNSName is the dns name of the load balancer. |
scheme ClassicELBScheme |
Scheme is the load balancer scheme, either internet-facing or private. |
availabilityZones []string |
AvailabilityZones is an array of availability zones in the VPC attached to the load balancer. |
subnetIds []string |
SubnetIDs is an array of subnets in the VPC attached to the load balancer. |
securityGroupIds []string |
SecurityGroupIDs is an array of security groups assigned to the load balancer. |
listeners []ClassicELBListener |
Listeners is an array of classic elb listeners associated with the load balancer. There must be at least one. |
healthChecks ClassicELBHealthCheck |
HealthCheck is the classic elb health check associated with the load balancer. |
attributes ClassicELBAttributes |
Attributes defines extra attributes associated with the load balancer. |
tags map[string]string |
Tags is a map of tags associated with the load balancer. |
ClassicELBAttributes
(Appears on:ClassicELB)
ClassicELBAttributes defines extra attributes associated with a classic load balancer.
Field | Description |
---|---|
idleTimeout time.Duration |
IdleTimeout is the time that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer. |
crossZoneLoadBalancing bool |
(Optional)
CrossZoneLoadBalancing enables cross-zone load balancing for the classic load balancer. |
ClassicELBHealthCheck
(Appears on:ClassicELB)
ClassicELBHealthCheck defines an AWS classic load balancer health check.
Field | Description |
---|---|
target string |
|
interval time.Duration |
|
timeout time.Duration |
|
healthyThreshold int64 |
|
unhealthyThreshold int64 |
ClassicELBListener
(Appears on:ClassicELB)
ClassicELBListener defines an AWS classic load balancer listener.
Field | Description |
---|---|
protocol ClassicELBProtocol |
|
port int64 |
|
instanceProtocol ClassicELBProtocol |
|
instancePort int64 |
ClassicELBProtocol
(string
alias)
(Appears on:AWSLoadBalancerSpec, ClassicELBListener)
ClassicELBProtocol defines listener protocols for a classic load balancer.
ClassicELBScheme
(string
alias)
(Appears on:AWSLoadBalancerSpec, ClassicELB)
ClassicELBScheme defines the scheme of a classic load balancer.
CloudInit
(Appears on:AWSMachineSpec)
CloudInit defines options related to the bootstrapping systems where CloudInit is used.
Field | Description |
---|---|
insecureSkipSecretsManager bool |
InsecureSkipSecretsManager, when set to true, will not use AWS Secrets Manager or AWS Systems Manager Parameter Store to ensure privacy of userdata. By default, a cloud-init boothook shell script is prepended to download the userdata from Secrets Manager and additionally delete the secret. |
secretCount int32 |
(Optional)
SecretCount is the number of secrets used to form the complete secret |
secretPrefix string |
(Optional)
SecretPrefix is the prefix for the secret name. This is stored temporarily, and deleted when the machine registers as a node against the workload cluster. |
secureSecretsBackend SecretBackend |
(Optional)
SecureSecretsBackend, when set to parameter-store, will utilize AWS Systems Manager Parameter Store to distribute secrets. By default, or with the value secrets-manager, AWS Secrets Manager will be used instead. |
EKSAMILookupType
(string
alias)
(Appears on:AMIReference)
EKSAMILookupType specifies which AWS AMI to use for an AWSMachine or AWSMachinePool.
Filter
(Appears on:AWSResourceReference)
Filter is a filter used to identify an AWS resource.
Field | Description |
---|---|
name string |
Name of the filter. Filter names are case-sensitive. |
values []string |
Values includes one or more filter values. Filter values are case-sensitive. |
IPv6
(Appears on:VPCSpec)
IPv6 contains ipv6 specific settings for the network.
Field | Description |
---|---|
cidrBlock string |
(Optional)
CidrBlock is the CIDR block provided by Amazon when VPC has enabled IPv6. |
poolId string |
(Optional)
PoolID is the IP pool that must be defined when a BYO IP is used. |
egressOnlyInternetGatewayId string |
(Optional)
EgressOnlyInternetGatewayID is the id of the egress only internet gateway associated with an IPv6 enabled VPC. |
Ignition
(Appears on:AWSMachineSpec)
Ignition defines options related to the bootstrapping systems where Ignition is used.
Field | Description |
---|---|
version string |
(Optional)
Version defines which version of Ignition will be used to generate bootstrap data. |
IngressRule
IngressRule defines an AWS ingress rule for security groups.
Field | Description |
---|---|
description string |
|
protocol SecurityGroupProtocol |
|
fromPort int64 |
|
toPort int64 |
|
cidrBlocks []string |
(Optional)
List of CIDR blocks to allow access from. Cannot be specified with SourceSecurityGroupID. |
ipv6CidrBlocks []string |
(Optional)
List of IPv6 CIDR blocks to allow access from. Cannot be specified with SourceSecurityGroupID. |
sourceSecurityGroupIds []string |
(Optional)
The security group IDs to allow access from. Cannot be specified with CidrBlocks. |
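A minimal sketch of an ingress rule allowing HTTPS from a CIDR block; the description and CIDR are hypothetical, and note that cidrBlocks and sourceSecurityGroupIds are mutually exclusive:

```yaml
ingressRules:
  - description: allow HTTPS from the VPC   # hypothetical
    protocol: tcp
    fromPort: 443
    toPort: 443
    cidrBlocks:
      - 10.0.0.0/16
```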
IngressRules
([]sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta1.IngressRule
alias)
(Appears on:SecurityGroup)
IngressRules is a slice of AWS ingress rules for security groups.
Instance
(Appears on:AWSClusterStatus)
Instance describes an AWS instance.
Field | Description |
---|---|
id string |
|
instanceState InstanceState |
The current state of the instance. |
type string |
The instance type. |
subnetId string |
The ID of the subnet of the instance. |
imageId string |
The ID of the AMI used to launch the instance. |
sshKeyName string |
The name of the SSH key pair. |
securityGroupIds []string |
SecurityGroupIDs are one or more security group IDs this instance belongs to. |
userData string |
UserData is the raw data script passed to the instance which is run upon bootstrap. This field must not be base64 encoded and should only be used when running a new instance. |
iamProfile string |
The name of the IAM instance profile associated with the instance, if applicable. |
addresses []Cluster API api/v1beta1.MachineAddress |
Addresses contains the AWS instance associated addresses. |
privateIp string |
The private IPv4 address assigned to the instance. |
publicIp string |
The public IPv4 address assigned to the instance, if applicable. |
enaSupport bool |
Specifies whether enhanced networking with ENA is enabled. |
ebsOptimized bool |
Indicates whether the instance is optimized for Amazon EBS I/O. |
rootVolume Volume |
(Optional)
Configuration options for the root storage volume. |
nonRootVolumes []Volume |
(Optional)
Configuration options for the non root storage volumes. |
networkInterfaces []string |
Specifies the ENIs attached to the instance. |
tags map[string]string |
The tags associated with the instance. |
availabilityZone string |
Availability zone of the instance. |
spotMarketOptions SpotMarketOptions |
SpotMarketOptions are options for configuring instances to run on AWS Spot Instances. |
tenancy string |
(Optional)
Tenancy indicates if instance should run on shared or single-tenant hardware. |
volumeIDs []string |
(Optional)
IDs of the instance’s volumes |
InstanceState
(string
alias)
(Appears on:AWSMachineStatus, Instance)
InstanceState describes the state of an AWS instance.
NetworkSpec
(Appears on:AWSClusterSpec)
NetworkSpec encapsulates all things related to AWS network.
Field | Description |
---|---|
vpc VPCSpec |
(Optional)
VPC configuration. |
subnets Subnets |
(Optional)
Subnets configuration. |
cni CNISpec |
(Optional)
CNI configuration |
securityGroupOverrides map[sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta1.SecurityGroupRole]string |
(Optional)
SecurityGroupOverrides is an optional set of security groups to use for cluster instances. This is optional; if not provided, new security groups will be created for the cluster. |
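For illustration, a sketch of a NetworkSpec with a managed VPC and one public and one private subnet; the CIDRs and availability zone are hypothetical:

```yaml
network:
  vpc:
    cidrBlock: 10.0.0.0/16
  subnets:
    - availabilityZone: us-east-1a   # hypothetical AZ
      cidrBlock: 10.0.0.0/24
      isPublic: true
    - availabilityZone: us-east-1a
      cidrBlock: 10.0.1.0/24
```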
NetworkStatus
(Appears on:AWSClusterStatus)
NetworkStatus encapsulates AWS networking resources.
Field | Description |
---|---|
securityGroups map[sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta1.SecurityGroupRole]sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta1.SecurityGroup |
SecurityGroups is a map from the role/kind of the security group to its unique name, if any. |
apiServerElb ClassicELB |
APIServerELB is the Kubernetes api server classic load balancer. |
ResourceLifecycle
(string
alias)
(Appears on:BuildParams)
ResourceLifecycle configures the lifecycle of a resource.
RouteTable
RouteTable defines an AWS routing table.
Field | Description |
---|---|
id string |
S3Bucket
(Appears on:AWSClusterSpec)
S3Bucket defines a supporting S3 bucket for the cluster, currently can be optionally used for Ignition.
Field | Description |
---|---|
controlPlaneIAMInstanceProfile string |
ControlPlaneIAMInstanceProfile is the name of the IAM instance profile that will be allowed to read control-plane node bootstrap data from the S3 bucket. |
nodesIAMInstanceProfiles []string |
NodesIAMInstanceProfiles is a list of IAM instance profiles that will be allowed to read worker node bootstrap data from the S3 bucket. |
name string |
Name defines the name of the S3 bucket to be created. |
SecretBackend
(string
alias)
(Appears on:CloudInit)
SecretBackend defines variants for backend secret storage.
SecurityGroup
(Appears on:NetworkStatus)
SecurityGroup defines an AWS security group.
Field | Description |
---|---|
id string |
ID is a unique identifier. |
name string |
Name is the security group name. |
ingressRule IngressRules |
(Optional)
IngressRules is the inbound rules associated with the security group. |
tags Tags |
Tags is a map of tags associated with the security group. |
SecurityGroupProtocol
(string
alias)
(Appears on:CNIIngressRule, IngressRule)
SecurityGroupProtocol defines the protocol type for a security group rule.
SecurityGroupRole
(string
alias)
SecurityGroupRole defines the unique role of a security group.
SpotMarketOptions
(Appears on:AWSMachineSpec, Instance)
SpotMarketOptions defines the options available to a user when configuring Machines to run on Spot instances. Most users should provide an empty struct.
Field | Description |
---|---|
maxPrice string |
(Optional)
MaxPrice defines the maximum price the user is willing to pay for Spot VM instances |
SubnetSpec
SubnetSpec configures an AWS Subnet.
Field | Description |
---|---|
id string |
ID defines a unique identifier to reference this resource. |
cidrBlock string |
CidrBlock is the CIDR block to be used when the provider creates a managed VPC. |
ipv6CidrBlock string |
(Optional)
IPv6CidrBlock is the IPv6 CIDR block to be used when the provider creates a managed VPC. A subnet can have an IPv4 and an IPv6 address. IPv6 is only supported in managed clusters, this field cannot be set on AWSCluster object. |
availabilityZone string |
AvailabilityZone defines the availability zone to use for this subnet in the cluster’s region. |
isPublic bool |
(Optional)
IsPublic defines the subnet as a public subnet. A subnet is public when it is associated with a route table that has a route to an internet gateway. |
isIpv6 bool |
(Optional)
IsIPv6 defines the subnet as an IPv6 subnet. A subnet is IPv6 when it is associated with a VPC that has IPv6 enabled. IPv6 is only supported in managed clusters, this field cannot be set on AWSCluster object. |
routeTableId string |
(Optional)
RouteTableID is the routing table id associated with the subnet. |
natGatewayId string |
(Optional)
NatGatewayID is the NAT gateway id associated with the subnet. Ignored unless the subnet is managed by the provider, in which case this is set on the public subnet where the NAT gateway resides. It is then used to determine routes for private subnets in the same AZ as the public subnet. |
tags Tags |
Tags is a collection of tags describing the resource. |
Subnets
([]sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta1.SubnetSpec
alias)
(Appears on:NetworkSpec)
Subnets is a slice of Subnet.
Tags
(map[string]string
alias)
(Appears on:AWSClusterSpec, AWSMachineSpec, BuildParams, SecurityGroup, SubnetSpec, VPCSpec)
Tags defines a map of tags.
VPCSpec
(Appears on:NetworkSpec)
VPCSpec configures an AWS VPC.
Field | Description |
---|---|
id string |
ID is the vpc-id of the VPC this provider should use to create resources. |
cidrBlock string |
CidrBlock is the CIDR block to be used when the provider creates a managed VPC. Defaults to 10.0.0.0/16. |
ipv6 IPv6 |
(Optional)
IPv6 contains ipv6 specific settings for the network. Supported only in managed clusters. This field cannot be set on AWSCluster object. |
internetGatewayId string |
(Optional)
InternetGatewayID is the id of the internet gateway associated with the VPC. |
tags Tags |
Tags is a collection of tags describing the resource. |
availabilityZoneUsageLimit int |
AvailabilityZoneUsageLimit specifies the maximum number of availability zones (AZ) that should be used in a region when automatically creating subnets. If a region has more than this number of AZs then this number of AZs will be picked randomly when creating default subnets. Defaults to 3. |
availabilityZoneSelection AZSelectionScheme |
AvailabilityZoneSelection specifies how AZs should be selected if there are more AZs in a region than specified by AvailabilityZoneUsageLimit. There are 2 selection schemes: Ordered (selects based on alphabetical order) and Random (selects AZs randomly in a region). Defaults to Ordered. |
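A sketch of a VPCSpec limiting subnet creation to the two alphabetically-first AZs; the CIDR and limit are hypothetical:

```yaml
vpc:
  cidrBlock: 10.0.0.0/16
  availabilityZoneUsageLimit: 2
  availabilityZoneSelection: Ordered
```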
Volume
(Appears on:AWSMachineSpec, Instance)
Volume encapsulates the configuration options for the storage device.
Field | Description |
---|---|
deviceName string |
(Optional)
Device name |
size int64 |
Size specifies size (in Gi) of the storage device. Must be greater than the image snapshot size or 8 (whichever is greater). |
type VolumeType |
(Optional)
Type is the type of the volume (e.g. gp2, io1, etc…). |
iops int64 |
(Optional)
IOPS is the number of IOPS requested for the disk. Not applicable to all types. |
throughput int64 |
(Optional)
Throughput to provision in MiB/s supported for the volume type. Not applicable to all types. |
encrypted bool |
(Optional)
Encrypted is whether the volume should be encrypted or not. |
encryptionKey string |
(Optional)
EncryptionKey is the KMS key to use to encrypt the volume. Can be either a KMS key ID or ARN. If Encrypted is set and this is omitted, the default AWS key will be used. The key must already exist and be accessible by the controller. |
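A sketch of a root volume configuration using an encrypted gp3 volume; the sizes are hypothetical, and iops and throughput only apply to volume types that support them:

```yaml
rootVolume:
  size: 100          # GiB
  type: gp3
  iops: 3000
  throughput: 125    # MiB/s
  encrypted: true    # uses the default AWS key since encryptionKey is omitted
```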
VolumeType
(string
alias)
(Appears on:Volume)
VolumeType describes the EBS volume type. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
ASGStatus
(string
alias)
(Appears on:AWSMachinePoolStatus, AutoScalingGroup)
ASGStatus is a status string returned by the autoscaling API.
AWSFargateProfile
AWSFargateProfile is the Schema for the awsfargateprofiles API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec FargateProfileSpec |
|
status FargateProfileStatus |
AWSLaunchTemplate
(Appears on:AWSMachinePoolSpec, AWSManagedMachinePoolSpec)
AWSLaunchTemplate defines the desired state of AWSLaunchTemplate.
Field | Description |
---|---|
name string |
The name of the launch template. |
iamInstanceProfile string |
The name or the Amazon Resource Name (ARN) of the instance profile associated with the IAM role for the instance. The instance profile contains the IAM role. |
ami AMIReference |
(Optional)
AMI is the reference to the AMI from which to create the machine instance. |
imageLookupFormat string |
(Optional)
ImageLookupFormat is the AMI naming format to look up the image for this machine. It will be ignored if an explicit AMI is set. Supports substitutions for {{.BaseOS}} and {{.K8sVersion}} with the base OS and kubernetes version, respectively. The BaseOS will be the value in ImageLookupBaseOS or ubuntu (the default), and the kubernetes version as defined by the packages produced by kubernetes/release without v as a prefix: 1.13.0, 1.12.5-mybuild.1, or 1.17.3. For example, the default image format of capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-* will end up searching for AMIs that match the pattern capa-ami-ubuntu-?1.18.0-* for a Machine that is targeting kubernetes v1.18.0 and the ubuntu base OS. See also: https://golang.org/pkg/text/template/ |
imageLookupOrg string |
ImageLookupOrg is the AWS Organization ID to use for image lookup if AMI is not set. |
imageLookupBaseOS string |
ImageLookupBaseOS is the name of the base operating system to use for image lookup if AMI is not set. |
instanceType string |
InstanceType is the type of instance to create. Example: m4.xlarge |
rootVolume Volume |
(Optional)
RootVolume encapsulates the configuration options for the root volume |
sshKeyName string |
(Optional)
SSHKeyName is the name of the ssh key to attach to the instance. Valid values are empty string (do not use SSH keys), a valid SSH key name, or omitted (use the default SSH key name) |
versionNumber int64 |
VersionNumber is the version of the launch template that is applied. Typically a new version is created when at least one of the following happens: 1) A new launch template spec is applied. 2) One or more parameters in an existing template is changed. 3) A new AMI is discovered. |
additionalSecurityGroups []AWSResourceReference |
(Optional)
AdditionalSecurityGroups is an array of references to security groups that should be applied to the instances. These security groups would be set in addition to any security groups defined at the cluster level or in the actuator. |
spotMarketOptions SpotMarketOptions |
SpotMarketOptions are options for configuring AWSMachinePool instances to be run using AWS Spot instances. |
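For illustration, a minimal AWSLaunchTemplate sketch as it might appear inside an AWSMachinePool spec; the instance type, profile, and key name are hypothetical:

```yaml
awsLaunchTemplate:
  instanceType: t3.large
  iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io   # hypothetical profile name
  sshKeyName: default                                              # hypothetical key name
  rootVolume:
    size: 50
    type: gp3
```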
AWSMachinePool
AWSMachinePool is the Schema for the awsmachinepools API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSMachinePoolSpec |
|
status AWSMachinePoolStatus |
AWSMachinePoolInstanceStatus
(Appears on:AWSMachinePoolStatus)
AWSMachinePoolInstanceStatus defines the status of the AWSMachinePoolInstance.
Field | Description |
---|---|
instanceID string |
(Optional)
InstanceID is the identification of the Machine Instance within ASG |
version string |
(Optional)
Version defines the Kubernetes version for the Machine Instance |
AWSMachinePoolSpec
(Appears on:AWSMachinePool)
AWSMachinePoolSpec defines the desired state of AWSMachinePool.
Field | Description |
---|---|
providerID string |
(Optional)
ProviderID is the ARN of the associated ASG |
minSize int32 |
MinSize defines the minimum size of the group. |
maxSize int32 |
MaxSize defines the maximum size of the group. |
availabilityZones []string |
AvailabilityZones is an array of availability zones instances can run in |
subnets []AWSResourceReference |
(Optional)
Subnets is an array of subnet configurations |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to an instance, in addition to the ones added by default by the AWS provider. |
awsLaunchTemplate AWSLaunchTemplate |
AWSLaunchTemplate specifies the launch template and version to use when an instance is launched. |
mixedInstancesPolicy MixedInstancesPolicy |
MixedInstancesPolicy describes how multiple instance types will be used by the ASG. |
providerIDList []string |
(Optional)
ProviderIDList are the identification IDs of machine instances provided by the provider. This field must match the provider IDs as seen on the node objects corresponding to a machine pool’s machine instances. |
defaultCoolDown Kubernetes meta/v1.Duration |
(Optional)
The amount of time, in seconds, after a scaling activity completes before another scaling activity can start. If no value is supplied by the user, a default value of 300 seconds is set. |
refreshPreferences RefreshPreferences |
(Optional)
RefreshPreferences describes set of preferences associated with the instance refresh request. |
capacityRebalance bool |
(Optional)
Enables or disables the capacity rebalance Auto Scaling group feature. |
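Putting the fields above together, a minimal AWSMachinePool sketch; the name, sizes, and launch template values are hypothetical:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
metadata:
  name: my-machine-pool            # hypothetical
spec:
  minSize: 1
  maxSize: 5
  awsLaunchTemplate:
    instanceType: t3.large
    iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
```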
AWSMachinePoolStatus
(Appears on:AWSMachinePool)
AWSMachinePoolStatus defines the observed state of AWSMachinePool.
Field | Description |
---|---|
ready bool |
(Optional)
Ready is true when the provider resource is ready. |
replicas int32 |
(Optional)
Replicas is the most recently observed number of replicas |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the AWSMachinePool. |
instances []AWSMachinePoolInstanceStatus |
(Optional)
Instances contains the status for each instance in the pool |
launchTemplateID string |
The ID of the launch template |
launchTemplateVersion string |
(Optional)
The version of the launch template |
failureReason Cluster API errors.MachineStatusError |
(Optional)
FailureReason will be set in the event that there is a terminal problem reconciling the Machine and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller’s output. |
failureMessage string |
(Optional)
FailureMessage will be set in the event that there is a terminal problem reconciling the Machine and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller’s output. |
asgStatus ASGStatus |
AWSManagedMachinePool
AWSManagedMachinePool is the Schema for the awsmanagedmachinepools API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSManagedMachinePoolSpec |
|
status AWSManagedMachinePoolStatus |
AWSManagedMachinePoolSpec
(Appears on:AWSManagedMachinePool)
AWSManagedMachinePoolSpec defines the desired state of AWSManagedMachinePool.
Field | Description |
---|---|
eksNodegroupName string |
(Optional)
EKSNodegroupName specifies the name of the nodegroup in AWS corresponding to this MachinePool. If you don’t specify a name then a default name will be created based on the namespace and name of the managed machine pool. |
availabilityZones []string |
AvailabilityZones is an array of availability zones instances can run in |
subnetIDs []string |
(Optional)
SubnetIDs specifies which subnets are used for the auto scaling group of this nodegroup |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to AWS resources managed by the AWS provider, in addition to the ones added by default. |
roleAdditionalPolicies []string |
(Optional)
RoleAdditionalPolicies allows you to attach additional policies to the node group role. You must enable the EKSAllowAddRoles feature flag to incorporate these into the created role. |
roleName string |
(Optional)
RoleName specifies the name of IAM role for the node group. If the role is pre-existing we will treat it as unmanaged and not delete it on deletion. If the EKSEnableIAM feature flag is true and no name is supplied then a role is created. |
amiVersion string |
(Optional)
AMIVersion defines the desired AMI release version. If no version number is supplied then the latest version for the Kubernetes version will be used |
amiType ManagedMachineAMIType |
(Optional)
AMIType defines the AMI type |
labels map[string]string |
(Optional)
Labels specifies labels for the Kubernetes node objects |
taints Taints |
(Optional)
Taints specifies the taints to apply to the nodes of the machine pool |
diskSize int32 |
(Optional)
DiskSize specifies the root disk size |
instanceType string |
(Optional)
InstanceType specifies the AWS instance type |
scaling ManagedMachinePoolScaling |
(Optional)
Scaling specifies scaling for the ASG behind this pool |
remoteAccess ManagedRemoteAccess |
(Optional)
RemoteAccess specifies how machines can be accessed remotely |
providerIDList []string |
(Optional)
ProviderIDList are the provider IDs of instances in the autoscaling group corresponding to the nodegroup represented by this machine pool |
capacityType ManagedMachinePoolCapacityType |
(Optional)
CapacityType specifies the capacity type for the ASG behind this pool |
updateConfig UpdateConfig |
(Optional)
UpdateConfig holds the optional config to control the behaviour of the update to the nodegroup. |
awsLaunchTemplate AWSLaunchTemplate |
(Optional)
AWSLaunchTemplate specifies the launch template to use to create the managed node group. If AWSLaunchTemplate is specified, certain node group configurations outside of the launch template are prohibited (https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html). |
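A minimal AWSManagedMachinePool sketch; the name, instance type, and sizing are hypothetical, while the amiType and capacityType values come from the enums documented below:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSManagedMachinePool
metadata:
  name: my-managed-pool        # hypothetical
spec:
  amiType: AL2_x86_64          # the default AMI type
  capacityType: onDemand
  instanceType: t3.large       # hypothetical
  scaling:
    minSize: 1
    maxSize: 4
```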
AWSManagedMachinePoolStatus
(Appears on:AWSManagedMachinePool)
AWSManagedMachinePoolStatus defines the observed state of AWSManagedMachinePool.
Field | Description |
---|---|
ready bool |
Ready denotes that the AWSManagedMachinePool nodegroup has joined the cluster |
replicas int32 |
(Optional)
Replicas is the most recently observed number of replicas. |
launchTemplateID string |
(Optional)
The ID of the launch template |
launchTemplateVersion string |
(Optional)
The version of the launch template |
failureReason Cluster API errors.MachineStatusError |
(Optional)
FailureReason will be set in the event that there is a terminal problem reconciling the MachinePool and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of MachinePools can be added as events to the MachinePool object and/or logged in the controller’s output. |
failureMessage string |
(Optional)
FailureMessage will be set in the event that there is a terminal problem reconciling the MachinePool and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the MachinePool’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of MachinePools can be added as events to the MachinePool object and/or logged in the controller’s output. |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the managed machine pool |
AutoScalingGroup
AutoScalingGroup describes an AWS autoscaling group.
Field | Description |
---|---|
id string |
The ID of the Auto Scaling group. |
tags Tags |
|
name string |
|
desiredCapacity int32 |
|
maxSize int32 |
|
minSize int32 |
|
placementGroup string |
|
subnets []string |
|
defaultCoolDown Kubernetes meta/v1.Duration |
|
capacityRebalance bool |
|
mixedInstancesPolicy MixedInstancesPolicy |
|
Status ASGStatus |
|
instances []Instance |
BlockDeviceMapping
BlockDeviceMapping specifies the block devices for the instance. You can specify virtual devices and EBS volumes.
Field | Description |
---|---|
deviceName string |
The device name exposed to the EC2 instance (for example, /dev/sdh or xvdh). |
ebs EBS |
(Optional)
You can specify either VirtualName or Ebs, but not both. |
EBS
(Appears on:BlockDeviceMapping)
EBS can be used to automatically set up EBS volumes when an instance is launched.
Field | Description |
---|---|
encrypted bool |
(Optional)
Encrypted is whether the volume should be encrypted or not. |
volumeSize int64 |
(Optional)
The size of the volume, in GiB. This can be a number from 1-1,024 for standard, 4-16,384 for io1, 1-16,384 for gp2, and 500-16,384 for st1 and sc1. If you specify a snapshot, the volume size must be equal to or larger than the snapshot size. |
volumeType string |
(Optional)
The volume type. For more information, see Amazon EBS Volume Types (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). |
FargateProfileSpec
(Appears on:AWSFargateProfile)
FargateProfileSpec defines the desired state of FargateProfile.
Field | Description |
---|---|
clusterName string |
ClusterName is the name of the Cluster this object belongs to. |
profileName string |
ProfileName specifies the profile name. |
subnetIDs []string |
(Optional)
SubnetIDs specifies which subnets are used for the auto scaling group of this nodegroup. |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to AWS resources managed by the AWS provider, in addition to the ones added by default. |
roleName string |
(Optional)
RoleName specifies the name of the IAM role for this Fargate pool. If the role is pre-existing, we will treat it as unmanaged and not delete it on deletion. If the EKSEnableIAM feature flag is true and no name is supplied then a role is created. |
selectors []FargateSelector |
Selectors specify Fargate pod selectors. |
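A sketch of an AWSFargateProfile selecting pods in a namespace; the names and the pod label are hypothetical:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSFargateProfile
metadata:
  name: my-fargate-profile     # hypothetical
spec:
  clusterName: my-cluster      # hypothetical
  selectors:
    - namespace: default
      labels:
        run-on: fargate        # hypothetical pod label
```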
FargateProfileStatus
(Appears on:AWSFargateProfile)
FargateProfileStatus defines the observed state of FargateProfile.
Field | Description |
---|---|
ready bool |
Ready denotes that the FargateProfile is available. |
failureReason Cluster API errors.MachineStatusError |
(Optional)
FailureReason will be set in the event that there is a terminal problem reconciling the FargateProfile and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the FargateProfile’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of FargateProfiles can be added as events to the FargateProfile object and/or logged in the controller’s output. |
failureMessage string |
(Optional)
FailureMessage will be set in the event that there is a terminal problem reconciling the FargateProfile and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the FargateProfile’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of FargateProfiles can be added as events to the FargateProfile object and/or logged in the controller’s output. |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current state of the Fargate profile. |
FargateSelector
(Appears on:FargateProfileSpec)
FargateSelector specifies a selector for pods that should run on this fargate pool.
Field | Description |
---|---|
labels map[string]string |
Labels specifies which pod labels this selector should match. |
namespace string |
Namespace specifies which namespace this selector should match. |
InstancesDistribution
(Appears on:MixedInstancesPolicy)
InstancesDistribution to configure distribution of On-Demand Instances and Spot Instances.
Field | Description |
---|---|
onDemandAllocationStrategy OnDemandAllocationStrategy |
|
spotAllocationStrategy SpotAllocationStrategy |
|
onDemandBaseCapacity int64 |
|
onDemandPercentageAboveBaseCapacity int64 |
ManagedMachineAMIType
(string
alias)
(Appears on:AWSManagedMachinePoolSpec)
ManagedMachineAMIType specifies which AWS AMI to use for a managed MachinePool.
Value | Description |
---|---|
"AL2023_ARM_64_STANDARD" |
Al2023Arm64 is the AL2023 Arm AMI type. |
"AL2023_x86_64_STANDARD" |
Al2023x86_64 is the AL2023 x86-64 AMI type. |
"AL2_ARM_64" |
Al2Arm64 is the Arm AMI type. |
"AL2_x86_64" |
Al2x86_64 is the default AMI type. |
"AL2_x86_64_GPU" |
Al2x86_64GPU is the x86-64 GPU AMI type. |
ManagedMachinePoolCapacityType
(string
alias)
(Appears on:AWSManagedMachinePoolSpec)
ManagedMachinePoolCapacityType specifies the capacity type to be used for the managed MachinePool.
Value | Description |
---|---|
"onDemand" |
ManagedMachinePoolCapacityTypeOnDemand is the default capacity type, to launch on-demand instances. |
"spot" |
ManagedMachinePoolCapacityTypeSpot is the spot instance capacity type to launch spot instances. |
ManagedMachinePoolScaling
(Appears on:AWSManagedMachinePoolSpec)
ManagedMachinePoolScaling specifies scaling options.
Field | Description |
---|---|
minSize int32 |
|
maxSize int32 |
ManagedRemoteAccess
(Appears on:AWSManagedMachinePoolSpec)
ManagedRemoteAccess specifies remote access settings for EC2 instances.
Field | Description |
---|---|
sshKeyName string |
SSHKeyName specifies which EC2 SSH key can be used to access machines. If left empty, the key from the control plane is used. |
sourceSecurityGroups []string |
SourceSecurityGroups specifies which security groups are allowed access |
public bool |
Public specifies whether to open port 22 to the public internet |
MixedInstancesPolicy
(Appears on:AWSMachinePoolSpec, AutoScalingGroup)
MixedInstancesPolicy for an Auto Scaling group.
Field | Description |
---|---|
instancesDistribution InstancesDistribution |
|
overrides []Overrides |
OnDemandAllocationStrategy
(string
alias)
(Appears on:InstancesDistribution)
OnDemandAllocationStrategy indicates how to allocate instance types to fulfill On-Demand capacity.
Overrides
(Appears on:MixedInstancesPolicy)
Overrides are used to override the instance type specified by the launch template with multiple instance types that can be used to launch On-Demand Instances and Spot Instances.
Field | Description |
---|---|
instanceType string |
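For illustration, a MixedInstancesPolicy sketch combining an on-demand base with Spot capacity spread over two instance types; the strategy strings mirror the Auto Scaling API's allocation strategies, and all numbers and types are hypothetical:

```yaml
mixedInstancesPolicy:
  instancesDistribution:
    onDemandAllocationStrategy: prioritized
    spotAllocationStrategy: capacity-optimized
    onDemandBaseCapacity: 1                   # always keep one on-demand instance
    onDemandPercentageAboveBaseCapacity: 50   # split remaining capacity 50/50
  overrides:
    - instanceType: m5.large
    - instanceType: m5a.large
```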
RefreshPreferences
(Appears on:AWSMachinePoolSpec)
RefreshPreferences defines the specs for instance refreshing.
Field | Description |
---|---|
strategy string |
(Optional)
The strategy to use for the instance refresh. The only valid value is Rolling. A rolling update is an update that is applied to all instances in an Auto Scaling group until all instances have been updated. |
instanceWarmup int64 |
(Optional)
The number of seconds until a newly launched instance is configured and ready to use. During this time, the next replacement will not be initiated. The default is to use the value for the health check grace period defined for the group. |
minHealthyPercentage int64 |
(Optional)
The amount of capacity as a percentage in ASG that must remain healthy during an instance refresh. The default is 90. |
SpotAllocationStrategy
(string
alias)
(Appears on:InstancesDistribution)
SpotAllocationStrategy indicates how to allocate instances across Spot Instance pools.
Tags
(map[string]string
alias)
Tags is a mapping for tags.
Taint
Taint defines the specs for a Kubernetes taint.
Field | Description |
---|---|
effect TaintEffect |
Effect specifies the effect for the taint |
key string |
Key is the key of the taint |
value string |
Value is the value of the taint |
TaintEffect
(string
alias)
(Appears on:Taint)
TaintEffect is the effect for a Kubernetes taint.
Taints
([]sigs.k8s.io/cluster-api-provider-aws/v2/exp/api/v1beta1.Taint
alias)
(Appears on:AWSManagedMachinePoolSpec)
Taints is an array of Taints.
UpdateConfig
(Appears on:AWSManagedMachinePoolSpec)
UpdateConfig is the configuration options for updating a nodegroup. Only one of MaxUnavailable and MaxUnavailablePercentage should be specified.
Field | Description |
---|---|
maxUnavailable int |
(Optional)
MaxUnavailable is the maximum number of nodes unavailable at once during a version update. Nodes will be updated in parallel. The maximum number is 100. |
maxUnavailablePrecentage int |
(Optional)
MaxUnavailablePercentage is the maximum percentage of nodes unavailable during a version update. This percentage of nodes will be updated in parallel, up to 100 nodes at once. |
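As a sketch, an UpdateConfig that rolls one node at a time; the value is hypothetical, and only one of the two fields should be set:

```yaml
updateConfig:
  maxUnavailable: 1   # update one node at a time during a version update
```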
infrastructure.cluster.x-k8s.io/v1beta2
Package v1beta2 contains the v1beta2 API implementation.
Resource Types:
AMIReference
(Appears on:AWSMachineSpec, AWSLaunchTemplate, AWSLaunchTemplate)
AMIReference is a reference to a specific AWS resource by ID, ARN, or filters. Only one of ID, ARN or Filters may be specified. Specifying more than one will result in a validation error.
Field | Description |
---|---|
id string |
(Optional)
ID of resource |
eksLookupType EKSAMILookupType |
(Optional)
EKSOptimizedLookupType, if specified, will look up an EKS Optimized image in the SSM Parameter Store. |
AWSCluster
AWSCluster is the schema for Amazon EC2 based Kubernetes Cluster API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSClusterSpec |
|
status AWSClusterStatus |
AWSClusterControllerIdentity
AWSClusterControllerIdentity is the Schema for the awsclustercontrolleridentities API. It is used to grant access to use Cluster API Provider AWS Controller credentials.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSClusterControllerIdentitySpec |
Spec for this AWSClusterControllerIdentity.
|
AWSClusterControllerIdentitySpec
(Appears on:AWSClusterControllerIdentity)
AWSClusterControllerIdentitySpec defines the specifications for AWSClusterControllerIdentity.
Field | Description |
---|---|
AWSClusterIdentitySpec AWSClusterIdentitySpec |
(Members of AWSClusterIdentitySpec are embedded into this type.) |
AWSClusterIdentitySpec
(Appears on:AWSClusterControllerIdentitySpec, AWSClusterRoleIdentitySpec, AWSClusterStaticIdentitySpec)
AWSClusterIdentitySpec defines the Spec struct for AWSClusterIdentity types.
Field | Description |
---|---|
allowedNamespaces AllowedNamespaces |
(Optional)
AllowedNamespaces is used to identify which namespaces are allowed to use the identity from. Namespaces can be selected either using an array of namespaces or with a label selector. An empty allowedNamespaces object indicates that AWSClusters can use this identity from any namespace. If this object is nil, no namespaces will be allowed (default behaviour, if this field is not provided). A namespace should be either in the NamespaceList or match the Selector to use the identity. |
AWSClusterRoleIdentity
AWSClusterRoleIdentity is the Schema for the awsclusterroleidentities API. It is used to assume a role using the provided sourceRef.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSClusterRoleIdentitySpec |
Spec for this AWSClusterRoleIdentity.
|
AWSClusterRoleIdentitySpec
(Appears on:AWSClusterRoleIdentity)
AWSClusterRoleIdentitySpec defines the specifications for AWSClusterRoleIdentity.
Field | Description |
---|---|
AWSClusterIdentitySpec AWSClusterIdentitySpec |
(Members of AWSClusterIdentitySpec are embedded into this type.) |
AWSRoleSpec AWSRoleSpec |
(Members of AWSRoleSpec are embedded into this type.) |
externalID string |
(Optional)
A unique identifier that might be required when you assume a role in another account. If the administrator of the account to which the role belongs provided you with an external ID, then provide that value in the ExternalId parameter. This value can be any string, such as a passphrase or account number. A cross-account role is usually set up to trust everyone in an account. Therefore, the administrator of the trusting account might send an external ID to the administrator of the trusted account. That way, only someone with the ID can assume the role, rather than everyone in the account. For more information about the external ID, see How to Use an External ID When Granting Access to Your AWS Resources to a Third Party in the IAM User Guide. |
sourceIdentityRef AWSIdentityReference |
SourceIdentityRef is a reference to another identity which will be chained to do role assumption. All identity types are accepted. |
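A sketch of an AWSClusterRoleIdentity chaining off the controller identity; the role ARN and resource name are hypothetical:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterRoleIdentity
metadata:
  name: my-role-identity                        # hypothetical
spec:
  allowedNamespaces: {}                         # any namespace may use this identity
  roleARN: arn:aws:iam::123456789012:role/capa  # hypothetical role ARN
  sourceIdentityRef:
    kind: AWSClusterControllerIdentity
    name: default
```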
AWSClusterSpec
(Appears on:AWSCluster, AWSClusterTemplateResource)
AWSClusterSpec defines the desired state of an EC2-based Kubernetes cluster.
Field | Description |
---|---|
network NetworkSpec |
NetworkSpec encapsulates all things related to AWS network. |
region string |
The AWS Region the cluster lives in. |
partition string |
(Optional)
Partition is the AWS security partition being used. Defaults to “aws” |
sshKeyName string |
(Optional)
SSHKeyName is the name of the ssh key to attach to the bastion host. Valid values are empty string (do not use SSH keys), a valid SSH key name, or omitted (use the default SSH key name) |
controlPlaneEndpoint Cluster API api/v1beta1.APIEndpoint |
(Optional)
ControlPlaneEndpoint represents the endpoint used to communicate with the control plane. |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to AWS resources managed by the AWS provider, in addition to the ones added by default. |
controlPlaneLoadBalancer AWSLoadBalancerSpec |
(Optional)
ControlPlaneLoadBalancer is optional configuration for customizing control plane behavior. |
secondaryControlPlaneLoadBalancer AWSLoadBalancerSpec |
(Optional)
SecondaryControlPlaneLoadBalancer is an additional load balancer that can be used for the control plane. An example use case is to have a separate internal load balancer for internal traffic, and a separate external load balancer for external traffic. |
imageLookupFormat string |
(Optional)
ImageLookupFormat is the AMI naming format to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupFormat. Supports substitutions for {{.BaseOS}} and {{.K8sVersion}} with the base OS and kubernetes version, respectively. The BaseOS will be the value in ImageLookupBaseOS or ubuntu (the default), and the kubernetes version as defined by the packages produced by kubernetes/release without v as a prefix: 1.13.0, 1.12.5-mybuild.1, or 1.17.3. For example, the default image format of capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-* will end up searching for AMIs that match the pattern capa-ami-ubuntu-?1.18.0-* for a Machine that is targeting kubernetes v1.18.0 and the ubuntu base OS. See also: https://golang.org/pkg/text/template/ |
imageLookupOrg string |
(Optional)
ImageLookupOrg is the AWS Organization ID to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupOrg. |
imageLookupBaseOS string |
ImageLookupBaseOS is the name of the base operating system used to look up machine images when a machine does not specify an AMI. When set, this will be used for all cluster machines unless a machine specifies a different ImageLookupBaseOS. |
bastion Bastion |
(Optional)
Bastion contains options to configure the bastion host. |
identityRef AWSIdentityReference |
IdentityRef is a reference to an identity to be used when reconciling the managed control plane. If no identity is specified, the default identity for this controller will be used. |
s3Bucket S3Bucket |
(Optional)
S3Bucket contains options to configure a supporting S3 bucket for this cluster - currently used for nodes requiring Ignition (https://coreos.github.io/ignition/) for bootstrapping (requires BootstrapFormatIgnition feature flag to be enabled). |
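For illustration, a minimal AWSCluster sketch; the name, region, and key name are hypothetical:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: my-cluster       # hypothetical
spec:
  region: us-east-1      # hypothetical region
  sshKeyName: default    # hypothetical SSH key name
```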
AWSClusterStaticIdentity
AWSClusterStaticIdentity is the Schema for the awsclusterstaticidentities API. It represents a reference to an AWS access key ID and secret access key, stored in a secret.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSClusterStaticIdentitySpec |
Spec for this AWSClusterStaticIdentity.
|
AWSClusterStaticIdentitySpec
(Appears on:AWSClusterStaticIdentity)
AWSClusterStaticIdentitySpec defines the specifications for AWSClusterStaticIdentity.
Field | Description |
---|---|
AWSClusterIdentitySpec AWSClusterIdentitySpec |
(Members of AWSClusterIdentitySpec are embedded into this type.) |
secretRef string |
Reference to a secret containing the credentials. The secret should contain the following data keys: AccessKeyID (e.g. AKIAIOSFODNN7EXAMPLE), SecretAccessKey (e.g. wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY), and SessionToken (optional). |
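A sketch pairing an AWSClusterStaticIdentity with the Secret it references; all names are hypothetical:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterStaticIdentity
metadata:
  name: my-static-identity        # hypothetical
spec:
  secretRef: my-aws-credentials   # name of the Secret holding the data keys above
  allowedNamespaces:
    list:
      - default                   # only this namespace may use the identity
```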
AWSClusterStatus
(Appears on:AWSCluster)
AWSClusterStatus defines the observed state of AWSCluster.
Field | Description |
---|---|
ready bool |
|
networkStatus NetworkStatus |
|
failureDomains Cluster API api/v1beta1.FailureDomains |
|
bastion Instance |
|
conditions Cluster API api/v1beta1.Conditions |
AWSClusterTemplate
AWSClusterTemplate is the schema for Amazon EC2 based Kubernetes Cluster Templates.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSClusterTemplateSpec |
|
AWSClusterTemplateResource
(Appears on:AWSClusterTemplateSpec)
AWSClusterTemplateResource defines the desired state of AWSClusterTemplateResource.
Field | Description |
---|---|
metadata Cluster API api/v1beta1.ObjectMeta |
(Optional)
Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of themetadata field.
|
spec AWSClusterSpec |
|
AWSClusterTemplateSpec
(Appears on:AWSClusterTemplate)
AWSClusterTemplateSpec defines the desired state of AWSClusterTemplate.
Field | Description |
---|---|
template AWSClusterTemplateResource |
AWSIdentityKind
(string
alias)
(Appears on:AWSIdentityReference)
AWSIdentityKind defines allowed AWS identity types.
AWSIdentityReference
(Appears on:AWSClusterRoleIdentitySpec, AWSClusterSpec, AWSManagedControlPlaneSpec, AWSManagedControlPlaneSpec, RosaControlPlaneSpec)
AWSIdentityReference specifies an identity.
Field | Description |
---|---|
name string |
Name of the identity. |
kind AWSIdentityKind |
Kind of the identity. |
AWSLoadBalancerSpec
(Appears on:AWSClusterSpec)
AWSLoadBalancerSpec defines the desired state of an AWS load balancer.
Field | Description |
---|---|
name string |
(Optional)
Name sets the name of the classic ELB load balancer. As per AWS, the name must be unique within your set of load balancers for the region, must have a maximum of 32 characters, must contain only alphanumeric characters or hyphens, and cannot begin or end with a hyphen. Once set, the value cannot be changed. |
scheme ELBScheme |
(Optional)
Scheme sets the scheme of the load balancer (defaults to internet-facing) |
crossZoneLoadBalancing bool |
(Optional)
CrossZoneLoadBalancing enables the classic ELB cross availability zone balancing. With cross-zone load balancing, each load balancer node for your Classic Load Balancer distributes requests evenly across the registered instances in all enabled Availability Zones. If cross-zone load balancing is disabled, each load balancer node distributes requests evenly across the registered instances in its Availability Zone only. Defaults to false. |
subnets []string |
(Optional)
Subnets sets the subnets that should be applied to the control plane load balancer (defaults to discovered subnets for managed VPCs or an empty set for unmanaged VPCs) |
healthCheckProtocol ELBProtocol |
(Optional)
HealthCheckProtocol sets the protocol type for the ELB health check target. The default value is ELBProtocolSSL. |
healthCheck TargetGroupHealthCheckAPISpec |
(Optional)
HealthCheck sets custom health check configuration to the API target group. |
additionalSecurityGroups []string |
(Optional)
AdditionalSecurityGroups sets the security groups used by the load balancer. Expected to be security group IDs. This is optional; if not provided, new security groups will be created for the load balancer. |
additionalListeners []AdditionalListenerSpec |
(Optional)
AdditionalListeners sets the additional listeners for the control plane load balancer. This is only applicable to Network Load Balancer (NLB) types for the time being. |
ingressRules []IngressRule |
(Optional)
IngressRules sets the ingress rules for the control plane load balancer. |
loadBalancerType LoadBalancerType |
LoadBalancerType sets the type for a load balancer. The default type is classic. |
disableHostsRewrite bool |
DisableHostsRewrite disables the hairpinning workaround that adds the NLB’s address as 127.0.0.1 to the hosts file of each instance. This defaults to false. |
preserveClientIP bool |
PreserveClientIP lets the user control whether client IPs are preserved. If this is enabled, port 6443 will be opened to 0.0.0.0/0. |
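A sketch of a control plane load balancer configuration using an internal NLB; the scheme, type, and protocol values are drawn from the enums above, and the protocol choice is hypothetical:

```yaml
controlPlaneLoadBalancer:
  scheme: internal
  loadBalancerType: nlb
  healthCheckProtocol: TCP   # hypothetical choice; defaults to ELBProtocolSSL
```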
AWSMachine
AWSMachine is the schema for Amazon EC2 machines.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSMachineSpec |
|
status AWSMachineStatus |
AWSMachineProviderConditionType
(string
alias)
AWSMachineProviderConditionType is a valid value for AWSMachineProviderCondition.Type.
AWSMachineSpec
(Appears on:AWSMachine, AWSMachineTemplateResource)
AWSMachineSpec defines the desired state of an Amazon EC2 instance.
Field | Description |
---|---|
providerID string |
ProviderID is the unique identifier as specified by the cloud provider. |
instanceID string |
InstanceID is the EC2 instance ID for this machine. |
instanceMetadataOptions InstanceMetadataOptions |
(Optional)
InstanceMetadataOptions is the metadata options for the EC2 instance. |
ami AMIReference |
AMI is the reference to the AMI from which to create the machine instance. |
imageLookupFormat string |
(Optional)
ImageLookupFormat is the AMI naming format to look up the image for this machine. It will be ignored if an explicit AMI is set. Supports substitutions for {{.BaseOS}} and {{.K8sVersion}} with the base OS and kubernetes version, respectively. The BaseOS will be the value in ImageLookupBaseOS or ubuntu (the default), and the kubernetes version as defined by the packages produced by kubernetes/release without v as a prefix: 1.13.0, 1.12.5-mybuild.1, or 1.17.3. For example, the default image format of capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-* will end up searching for AMIs that match the pattern capa-ami-ubuntu-?1.18.0-* for a Machine that is targeting kubernetes v1.18.0 and the ubuntu base OS. See also: https://golang.org/pkg/text/template/ |
imageLookupOrg string |
ImageLookupOrg is the AWS Organization ID to use for image lookup if AMI is not set. |
imageLookupBaseOS string |
ImageLookupBaseOS is the name of the base operating system to use for image lookup if AMI is not set. |
instanceType string |
InstanceType is the type of instance to create. Example: m4.xlarge |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to an instance, in addition to the ones added by default by the AWS provider. If both the AWSCluster and the AWSMachine specify the same tag name with different values, the AWSMachine’s value takes precedence. |
iamInstanceProfile string |
(Optional)
IAMInstanceProfile is a name of an IAM instance profile to assign to the instance |
publicIP bool |
(Optional)
PublicIP specifies whether the instance should get a public IP. Precedence for this setting is as follows: 1. this field, if set; 2. cluster/flavor setting; 3. subnet default. |
elasticIpPool ElasticIPPool |
(Optional)
ElasticIPPool is the configuration to allocate Public IPv4 address (Elastic IP/EIP) from user-defined pool. |
additionalSecurityGroups []AWSResourceReference |
(Optional)
AdditionalSecurityGroups is an array of references to security groups that should be applied to the instance. These security groups would be set in addition to any security groups defined at the cluster level or in the actuator. It is possible to specify either IDs or Filters. Using Filters will cause additional requests to the AWS API, and if tags change, the attached security groups might change too. |
subnet AWSResourceReference |
(Optional)
Subnet is a reference to the subnet to use for this instance. If not specified, the cluster subnet will be used. |
securityGroupOverrides map[sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta2.SecurityGroupRole]string |
(Optional)
SecurityGroupOverrides is an optional set of security groups to use for the node. This is optional - if not provided security groups from the cluster will be used. |
sshKeyName string |
(Optional)
SSHKeyName is the name of the ssh key to attach to the instance. Valid values are empty string (do not use SSH keys), a valid SSH key name, or omitted (use the default SSH key name) |
rootVolume Volume |
(Optional)
RootVolume encapsulates the configuration options for the root volume |
nonRootVolumes []Volume |
(Optional)
Configuration options for the non root storage volumes. |
networkInterfaces []string |
(Optional)
NetworkInterfaces is a list of ENIs to associate with the instance. A maximum of 2 may be specified. |
uncompressedUserData bool |
(Optional)
UncompressedUserData specifies whether the user data is gzip-compressed before it is sent to the EC2 instance. cloud-init has built-in support for gzip-compressed user data. User data stored in AWS Secrets Manager is always gzip-compressed. |
cloudInit CloudInit |
(Optional)
CloudInit defines options related to the bootstrapping systems where CloudInit is used. |
ignition Ignition |
(Optional)
Ignition defines options related to the bootstrapping systems where Ignition is used. |
spotMarketOptions SpotMarketOptions |
(Optional)
SpotMarketOptions allows users to configure instances to be run using AWS Spot instances. |
placementGroupName string |
(Optional)
PlacementGroupName specifies the name of the placement group in which to launch the instance. |
placementGroupPartition int64 |
(Optional)
PlacementGroupPartition is the partition number within the placement group in which to launch the instance.
This value is only valid if the placement group, referred to in PlacementGroupName, was created with strategy set to partition. |
tenancy string |
(Optional)
Tenancy indicates if instance should run on shared or single-tenant hardware. |
privateDnsName PrivateDNSName |
(Optional)
PrivateDNSName is the options for the instance hostname. |
capacityReservationId string |
(Optional)
CapacityReservationID specifies the target Capacity Reservation into which the instance should be launched. |
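For illustration, a minimal AWSMachine sketch; the name, instance type, profile, and key name are hypothetical:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachine
metadata:
  name: my-machine        # hypothetical
spec:
  instanceType: m5.large  # hypothetical
  iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
  sshKeyName: default     # hypothetical
```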
AWSMachineStatus
(Appears on:AWSMachine)
AWSMachineStatus defines the observed state of AWSMachine.
Field | Description |
---|---|
ready bool |
(Optional)
Ready is true when the provider resource is ready. |
interruptible bool |
(Optional)
Interruptible reports that this machine is using spot instances and can therefore be interrupted by CAPI when it receives a notice that the spot instance is to be terminated by AWS. This will be set to true when SpotMarketOptions is not nil (i.e. this machine is using a spot instance). |
addresses []Cluster API api/v1beta1.MachineAddress |
Addresses contains the AWS instance associated addresses. |
instanceState InstanceState |
(Optional)
InstanceState is the state of the AWS instance for this machine. |
failureReason Cluster API errors.MachineStatusError |
(Optional)
FailureReason will be set in the event that there is a terminal problem reconciling the Machine and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller’s output. |
failureMessage string |
(Optional)
FailureMessage will be set in the event that there is a terminal problem reconciling the Machine and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller’s output. |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the AWSMachine. |
AWSMachineTemplate
AWSMachineTemplate is the schema for the Amazon EC2 Machine Templates API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSMachineTemplateSpec |
|
status AWSMachineTemplateStatus |
AWSMachineTemplateResource
(Appears on:AWSMachineTemplateSpec)
AWSMachineTemplateResource describes the data needed to create an AWSMachine from a template.
Field | Description |
---|---|
metadata Cluster API api/v1beta1.ObjectMeta |
(Optional)
Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of themetadata field.
|
spec AWSMachineSpec |
Spec is the specification of the desired behavior of the machine.
|
AWSMachineTemplateSpec
(Appears on:AWSMachineTemplate)
AWSMachineTemplateSpec defines the desired state of AWSMachineTemplate.
Field | Description |
---|---|
template AWSMachineTemplateResource |
AWSMachineTemplateStatus
(Appears on:AWSMachineTemplate)
AWSMachineTemplateStatus defines a status for an AWSMachineTemplate.
Field | Description |
---|---|
capacity Kubernetes core/v1.ResourceList |
(Optional)
Capacity defines the resource capacity for this machine. This value is used for autoscaling from zero operations as defined in: https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210310-opt-in-autoscaling-from-zero.md |
AWSMachineTemplateWebhook
AWSMachineTemplateWebhook implements a custom validation webhook for AWSMachineTemplate. Note: we use a custom validator to access the request context for SSA of AWSMachineTemplate.
AWSManagedCluster
AWSManagedCluster is the Schema for the awsmanagedclusters API
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSManagedClusterSpec |
|
status AWSManagedClusterStatus |
AWSManagedClusterSpec
(Appears on:AWSManagedCluster)
AWSManagedClusterSpec defines the desired state of AWSManagedCluster
Field | Description |
---|---|
controlPlaneEndpoint Cluster API api/v1beta1.APIEndpoint |
(Optional)
ControlPlaneEndpoint represents the endpoint used to communicate with the control plane. |
AWSManagedClusterStatus
(Appears on:AWSManagedCluster)
AWSManagedClusterStatus defines the observed state of AWSManagedCluster
Field | Description |
---|---|
ready bool |
(Optional)
Ready is true when the AWSManagedControlPlane has an API server URL. |
failureDomains Cluster API api/v1beta1.FailureDomains |
(Optional)
FailureDomains specifies a list of available availability zones that can be used. |
AWSResourceReference
(Appears on:AWSMachineSpec, AWSLaunchTemplate, AWSMachinePoolSpec)
AWSResourceReference is a reference to a specific AWS resource by ID or filters. Only one of ID or Filters may be specified. Specifying more than one will result in a validation error.
Field | Description |
---|---|
id string |
(Optional)
ID of resource |
filters []Filter |
(Optional)
Filters is a set of key/value pairs used to identify a resource. They are applied according to the rules defined by the AWS API: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Filtering.html |
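To illustrate the mutual exclusivity, here is a sketch of the two ways to reference a resource, using the subnet field of an AWSMachineSpec as the surrounding context; the IDs and tag values are placeholders:

```yaml
# Option 1: reference the subnet by ID
subnet:
  id: subnet-0123456789abcdef0
---
# Option 2: reference the subnet by filters (cannot be combined with id)
subnet:
  filters:
    - name: tag:Name
      values:
        - my-cluster-subnet-private-us-east-1a
```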
AWSRoleSpec
(Appears on:AWSClusterRoleIdentitySpec)
AWSRoleSpec defines the specifications for all identities based around AWS roles.
Field | Description |
---|---|
roleARN string |
The Amazon Resource Name (ARN) of the role to assume. |
sessionName string |
An identifier for the assumed role session |
durationSeconds int32 |
The duration, in seconds, of the role session before it is renewed. |
inlinePolicy string |
An IAM policy as a JSON-encoded string that you want to use as an inline session policy. |
policyARNs []string |
The Amazon Resource Names (ARNs) of the IAM managed policies that you want to use as managed session policies. The policies must exist in the same account as the role. |
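As a sketch, an AWSClusterRoleIdentity (which embeds these fields) might be declared as follows; the ARN and names are placeholders, and sourceIdentityRef belongs to the surrounding AWSClusterRoleIdentitySpec rather than to AWSRoleSpec itself:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterRoleIdentity
metadata:
  name: cross-account-role
spec:
  roleARN: arn:aws:iam::123456789012:role/capa-manager  # role to assume
  sessionName: capa-session
  durationSeconds: 900        # the minimum session duration accepted by STS
  sourceIdentityRef:
    kind: AWSClusterControllerIdentity
    name: default
```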
AZSelectionScheme
(string
alias)
(Appears on:VPCSpec)
AZSelectionScheme defines the scheme of selecting AZs.
AdditionalListenerSpec
(Appears on:AWSLoadBalancerSpec)
AdditionalListenerSpec defines the desired state of an additional listener on an AWS load balancer.
Field | Description |
---|---|
port int64 |
Port sets the port for the additional listener. |
protocol ELBProtocol |
Protocol sets the protocol for the additional listener. Currently only TCP is supported. |
healthCheck TargetGroupHealthCheckAdditionalSpec |
(Optional)
HealthCheck sets the optional custom health check configuration to the API target group. |
AllowedNamespaces
(Appears on:AWSClusterIdentitySpec)
AllowedNamespaces is a selector of namespaces that AWSClusters can use this ClusterPrincipal from. This is a standard Kubernetes LabelSelector, a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed.
Field | Description |
---|---|
list []string |
(Optional)
A nil or empty list indicates that AWSClusters cannot use the identity from any namespace. |
selector Kubernetes meta/v1.LabelSelector |
(Optional)
An empty selector indicates that AWSClusters cannot use this AWSClusterIdentity from any namespace. |
Bastion
(Appears on:AWSClusterSpec, AWSManagedControlPlaneSpec)
Bastion defines a bastion host.
Field | Description |
---|---|
enabled bool |
(Optional)
Enabled allows this provider to create a bastion host instance with a public ip to access the VPC private network. |
disableIngressRules bool |
(Optional)
DisableIngressRules will ensure there are no Ingress rules in the bastion host’s security group. Requires AllowedCIDRBlocks to be empty. |
allowedCIDRBlocks []string |
(Optional)
AllowedCIDRBlocks is a list of CIDR blocks allowed to access the bastion host. They are set as ingress rules for the Bastion host’s Security Group (defaults to 0.0.0.0/0). |
instanceType string |
InstanceType will use the specified instance type for the bastion. If not specified, Cluster API Provider AWS will use t3.micro for all regions except us-east-1, where t2.micro will be the default. |
ami string |
(Optional)
AMI will use the specified AMI to boot the bastion. If not specified, the AMI will default to one picked out in public space. |
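For example, a bastion restricted to a single CIDR block could be configured as in this sketch; the CIDR block and instance type are placeholders:

```yaml
spec:
  bastion:
    enabled: true
    instanceType: t3.micro
    allowedCIDRBlocks:
      - 203.0.113.0/24   # only this range may reach the bastion
```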
BuildParams
BuildParams is used to build tags around an AWS resource.
Field | Description |
---|---|
Lifecycle ResourceLifecycle |
Lifecycle determines the resource lifecycle. |
ClusterName string |
ClusterName is the cluster associated with the resource. |
ResourceID string |
ResourceID is the unique identifier of the resource to be tagged. |
Name string |
(Optional)
Name is the name of the resource, it’s applied as the tag “Name” on AWS. |
Role string |
(Optional)
Role is the role associated to the resource. |
Additional Tags |
(Optional)
Any additional tags to be added to the resource. |
CNIIngressRule
CNIIngressRule defines an AWS ingress rule for CNI requirements.
Field | Description |
---|---|
description string |
|
protocol SecurityGroupProtocol |
|
fromPort int64 |
|
toPort int64 |
CNIIngressRules
([]sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta2.CNIIngressRule
alias)
(Appears on:CNISpec)
CNIIngressRules is a slice of CNIIngressRule.
CNISpec
(Appears on:NetworkSpec)
CNISpec defines configuration for CNI.
Field | Description |
---|---|
cniIngressRules CNIIngressRules |
CNIIngressRules specify rules to apply to control plane and worker node security groups. The source for the rule will be set to control plane and worker security group IDs. |
ClassicELBAttributes
(Appears on:LoadBalancer)
ClassicELBAttributes defines extra attributes associated with a classic load balancer.
Field | Description |
---|---|
idleTimeout time.Duration |
IdleTimeout is the time that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer. |
crossZoneLoadBalancing bool |
(Optional)
CrossZoneLoadBalancing enables cross-zone load balancing for the classic load balancer. |
ClassicELBHealthCheck
(Appears on:LoadBalancer)
ClassicELBHealthCheck defines an AWS classic load balancer health check.
Field | Description |
---|---|
target string |
|
interval time.Duration |
|
timeout time.Duration |
|
healthyThreshold int64 |
|
unhealthyThreshold int64 |
ClassicELBListener
(Appears on:LoadBalancer)
ClassicELBListener defines an AWS classic load balancer listener.
Field | Description |
---|---|
protocol ELBProtocol |
|
port int64 |
|
instanceProtocol ELBProtocol |
|
instancePort int64 |
CloudInit
(Appears on:AWSMachineSpec)
CloudInit defines options related to the bootstrapping systems where CloudInit is used.
Field | Description |
---|---|
insecureSkipSecretsManager bool |
InsecureSkipSecretsManager, when set to true, will not use AWS Secrets Manager or AWS Systems Manager Parameter Store to ensure privacy of userdata. By default, a cloud-init boothook shell script is prepended to download the userdata from Secrets Manager and additionally delete the secret. |
secretCount int32 |
(Optional)
SecretCount is the number of secrets used to form the complete secret |
secretPrefix string |
(Optional)
SecretPrefix is the prefix for the secret name. This is stored temporarily, and deleted when the machine registers as a node against the workload cluster. |
secureSecretsBackend SecretBackend |
(Optional)
SecureSecretsBackend, when set to parameter-store, will utilize AWS Systems Manager Parameter Store to distribute secrets. By default, or with the value of secrets-manager, AWS Secrets Manager will be used instead. |
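As a sketch, an AWSMachineSpec cloudInit block that keeps the default Secrets Manager behavior explicit (the values shown mirror the documented defaults):

```yaml
spec:
  cloudInit:
    insecureSkipSecretsManager: false   # keep userdata in a secrets backend
    secureSecretsBackend: secrets-manager
```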
EKSAMILookupType
(string
alias)
(Appears on:AMIReference)
EKSAMILookupType specifies which AWS AMI to use for an AWSMachine and AWSMachinePool.
ELBProtocol
(string
alias)
(Appears on:AWSLoadBalancerSpec, AdditionalListenerSpec, ClassicELBListener, Listener, TargetGroupSpec)
ELBProtocol defines listener protocols for a load balancer.
ELBScheme
(string
alias)
(Appears on:AWSLoadBalancerSpec, LoadBalancer)
ELBScheme defines the scheme of a load balancer.
ElasticIPPool
(Appears on:AWSMachineSpec, VPCSpec)
ElasticIPPool allows configuring an Elastic IP pool for resources allocating public IPv4 addresses on public subnets.
Field | Description |
---|---|
publicIpv4Pool string |
(Optional)
PublicIpv4Pool sets a custom Public IPv4 Pool used to create Elastic IP addresses for resources created in public IPv4 subnets. Every IPv4 address (Elastic IP) will be allocated from the custom Public IPv4 pool that you brought to AWS, instead of the Amazon-provided pool. The public IPv4 pool resource ID starts with ‘ipv4pool-ec2’. |
publicIpv4PoolFallbackOrder PublicIpv4PoolFallbackOrder |
(Optional)
PublicIpv4PoolFallbackOrder defines the fallback action when the Public IPv4 Pool has been exhausted (no more IPv4 addresses available in the pool). When set to ‘amazon-pool’ (the default), the controller checks whether the pool has available IPv4 addresses and, once the pool has reached its IPv4 limit, claims addresses from the Amazon-provided pool. When set to ‘none’, the controller will fail the Elastic IP allocation when the publicIpv4Pool is exhausted. |
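A sketch of a BYOIP configuration on the VPC; the pool ID is a placeholder (real public IPv4 pool resource IDs start with ipv4pool-ec2):

```yaml
vpc:
  elasticIpPool:
    publicIpv4Pool: ipv4pool-ec2-0123456789abcdef0
    publicIpv4PoolFallbackOrder: amazon-pool   # fall back to the Amazon pool when exhausted
```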
Filter
(Appears on:AWSResourceReference)
Filter is a filter used to identify an AWS resource.
Field | Description |
---|---|
name string |
Name of the filter. Filter names are case-sensitive. |
values []string |
Values includes one or more filter values. Filter values are case-sensitive. |
GCTask
(string
alias)
GCTask defines a task to be executed by the garbage collector.
HTTPTokensState
(string
alias)
(Appears on:InstanceMetadataOptions)
HTTPTokensState describes the state of InstanceMetadataOptions.HTTPTokensState
IPAMPool
IPAMPool defines the IPAM pool to be used for VPC.
Field | Description |
---|---|
id string |
ID is the ID of the IPAM pool this provider should use to create VPC. |
name string |
Name is the name of the IPAM pool this provider should use to create VPC. |
netmaskLength int64 |
The netmask length of the IPv4 CIDR you want to allocate to VPC from an Amazon VPC IP Address Manager (IPAM) pool. Defaults to /16 for IPv4 if not specified. |
IPv6
(Appears on:VPCSpec)
IPv6 contains ipv6 specific settings for the network.
Field | Description |
---|---|
cidrBlock string |
(Optional)
CidrBlock is the CIDR block provided by Amazon when VPC has enabled IPv6. Mutually exclusive with IPAMPool. |
poolId string |
(Optional)
PoolID is the IP pool which must be defined when BYO IP is used. Must be specified if CidrBlock is set. Mutually exclusive with IPAMPool. |
egressOnlyInternetGatewayId string |
(Optional)
EgressOnlyInternetGatewayID is the id of the egress only internet gateway associated with an IPv6 enabled VPC. |
ipamPool IPAMPool |
(Optional)
IPAMPool defines the IPAMv6 pool to be used for VPC. Mutually exclusive with CidrBlock. |
Ignition
(Appears on:AWSMachineSpec)
Ignition defines options related to the bootstrapping systems where Ignition is used. For more information on Ignition configuration, see https://coreos.github.io/butane/specs/
Field | Description |
---|---|
version string |
(Optional)
Version defines which version of Ignition will be used to generate bootstrap data. |
storageType IgnitionStorageTypeOption |
(Optional)
StorageType defines how to store the bootstrap user data for Ignition. This can be used to instruct Ignition from where to fetch the user data to bootstrap an instance. When omitted, the storage option will default to ClusterObjectStore. When set to “ClusterObjectStore”, if the capability is available and a Cluster ObjectStore configuration is correctly provided in the Cluster object (under .spec.s3Bucket), an object store will be used to store bootstrap user data. When set to “UnencryptedUserData”, EC2 Instance User Data will be used to store the machine bootstrap user data, unencrypted. This option is considered less secure than others as user data may contain sensitive information (keys, certificates, etc.) and users with ec2:DescribeInstances permission or users running pods that can access the ec2 metadata service have access to this sensitive information. So this is only to be used at one’s own risk, and only when other more secure options are not viable. |
proxy IgnitionProxy |
(Optional)
Proxy defines proxy settings for Ignition. Only valid for Ignition versions 3.1 and above. |
tls IgnitionTLS |
(Optional)
TLS defines TLS settings for Ignition. Only valid for Ignition versions 3.1 and above. |
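A sketch of an AWSMachineSpec ignition block; the version value and proxy endpoint are placeholders:

```yaml
spec:
  ignition:
    version: "3.4"                  # placeholder Ignition version
    storageType: ClusterObjectStore # fetch userdata from the cluster object store
    proxy:
      httpsProxy: https://proxy.example.com:3128
      noProxy:
        - .internal.example.com
```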
IgnitionCASource
(string
alias)
(Appears on:IgnitionTLS)
IgnitionCASource defines the source of the certificate authority to use for Ignition.
IgnitionNoProxy
(string
alias)
(Appears on:IgnitionProxy)
IgnitionNoProxy defines the list of domains to not proxy for Ignition.
IgnitionProxy
(Appears on:Ignition)
IgnitionProxy defines proxy settings for Ignition.
Field | Description |
---|---|
httpProxy string |
(Optional)
HTTPProxy is the HTTP proxy to use for Ignition. A single URL that specifies the proxy server to use for HTTP and HTTPS requests, unless overridden by the HTTPSProxy or NoProxy options. |
httpsProxy string |
(Optional)
HTTPSProxy is the HTTPS proxy to use for Ignition. A single URL that specifies the proxy server to use for HTTPS requests, unless overridden by the NoProxy option. |
noProxy []IgnitionNoProxy |
(Optional)
NoProxy is the list of domains to not proxy for Ignition. Specifies a list of hosts that should be excluded from proxying. Each value is represented by: - An IP address prefix (1.2.3.4) - An IP address prefix in CIDR notation (1.2.3.4/8) - A domain name (matches that name and all subdomains) - A domain name with a leading . (matches subdomains only) - A special DNS label (*), which indicates that no proxying should be done An IP address prefix and domain name can also include a literal port number (1.2.3.4:80). |
IgnitionStorageTypeOption
(string
alias)
(Appears on:Ignition)
IgnitionStorageTypeOption defines the different storage types for Ignition.
IgnitionTLS
(Appears on:Ignition)
IgnitionTLS defines TLS settings for Ignition.
Field | Description |
---|---|
certificateAuthorities []IgnitionCASource |
(Optional)
CASources defines the list of certificate authorities to use for Ignition.
The value is the certificate bundle (in PEM format). The bundle can contain multiple concatenated certificates.
Supported schemes are http, https, tftp, s3, arn, gs, and data (RFC 2397) URL scheme. |
IngressRule
(Appears on:AWSLoadBalancerSpec, NetworkSpec)
IngressRule defines an AWS ingress rule for security groups.
Field | Description |
---|---|
description string |
Description provides extended information about the ingress rule. |
protocol SecurityGroupProtocol |
Protocol is the protocol for the ingress rule. Accepted values are “-1” (all), “4” (IP in IP), “tcp”, “udp”, “icmp”, “58” (ICMPv6), and “50” (ESP). |
fromPort int64 |
FromPort is the start of port range. |
toPort int64 |
ToPort is the end of port range. |
cidrBlocks []string |
(Optional)
List of CIDR blocks to allow access from. Cannot be specified with SourceSecurityGroupID. |
ipv6CidrBlocks []string |
(Optional)
List of IPv6 CIDR blocks to allow access from. Cannot be specified with SourceSecurityGroupID. |
sourceSecurityGroupIds []string |
(Optional)
The security group IDs to allow access from. Cannot be specified with CidrBlocks. |
sourceSecurityGroupRoles []SecurityGroupRole |
(Optional)
The security group role to allow access from. Cannot be specified with CidrBlocks. The field will be combined with source security group IDs if specified. |
natGatewaysIPsSource bool |
(Optional)
NatGatewaysIPsSource uses the NAT gateway IPs as the source for the ingress rule. |
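For example, an extra control plane ingress rule (see AdditionalControlPlaneIngressRules under NetworkSpec below) might be sketched as follows; the port and CIDR block are placeholders:

```yaml
additionalControlPlaneIngressRules:
  - description: node-exporter scraping
    protocol: tcp
    fromPort: 9100
    toPort: 9100
    cidrBlocks:
      - 10.0.0.0/16
```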
IngressRules
([]sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta2.IngressRule
alias)
(Appears on:SecurityGroup)
IngressRules is a slice of AWS ingress rules for security groups.
Instance
(Appears on:AWSClusterStatus, AWSManagedControlPlaneStatus, AutoScalingGroup)
Instance describes an AWS instance.
Field | Description |
---|---|
id string |
|
instanceState InstanceState |
The current state of the instance. |
type string |
The instance type. |
subnetId string |
The ID of the subnet of the instance. |
imageId string |
The ID of the AMI used to launch the instance. |
sshKeyName string |
The name of the SSH key pair. |
securityGroupIds []string |
SecurityGroupIDs are one or more security group IDs this instance belongs to. |
userData string |
UserData is the raw data script passed to the instance which is run upon bootstrap. This field must not be base64 encoded and should only be used when running a new instance. |
iamProfile string |
The name of the IAM instance profile associated with the instance, if applicable. |
addresses []Cluster API api/v1beta1.MachineAddress |
Addresses contains the AWS instance associated addresses. |
privateIp string |
The private IPv4 address assigned to the instance. |
publicIp string |
The public IPv4 address assigned to the instance, if applicable. |
enaSupport bool |
Specifies whether enhanced networking with ENA is enabled. |
ebsOptimized bool |
Indicates whether the instance is optimized for Amazon EBS I/O. |
rootVolume Volume |
(Optional)
Configuration options for the root storage volume. |
nonRootVolumes []Volume |
(Optional)
Configuration options for the non root storage volumes. |
networkInterfaces []string |
Specifies the ENIs attached to the instance. |
tags map[string]string |
The tags associated with the instance. |
availabilityZone string |
Availability zone of the instance. |
spotMarketOptions SpotMarketOptions |
SpotMarketOptions are options for configuring instances to be run using AWS Spot instances. |
placementGroupName string |
(Optional)
PlacementGroupName specifies the name of the placement group in which to launch the instance. |
placementGroupPartition int64 |
(Optional)
PlacementGroupPartition is the partition number within the placement group in which to launch the instance.
This value is only valid if the placement group referred to in PlacementGroupName was created with a partition strategy. |
tenancy string |
(Optional)
Tenancy indicates if instance should run on shared or single-tenant hardware. |
volumeIDs []string |
(Optional)
IDs of the instance’s volumes |
instanceMetadataOptions InstanceMetadataOptions |
(Optional)
InstanceMetadataOptions is the metadata options for the EC2 instance. |
privateDnsName PrivateDNSName |
(Optional)
PrivateDNSName is the options for the instance hostname. |
publicIPOnLaunch bool |
(Optional)
PublicIPOnLaunch is the option to associate a public IP on instance launch |
capacityReservationId string |
(Optional)
CapacityReservationID specifies the target Capacity Reservation into which the instance should be launched. |
InstanceMetadataOptions
(Appears on:AWSMachineSpec, Instance, AWSLaunchTemplate)
InstanceMetadataOptions describes metadata options for the EC2 instance.
Field | Description |
---|---|
httpEndpoint InstanceMetadataState |
Enables or disables the HTTP metadata endpoint on your instances. If you specify a value of disabled, you cannot access your instance metadata. Default: enabled |
httpPutResponseHopLimit int64 |
The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel. Default: 1 |
httpTokens HTTPTokensState |
The state of token usage for your instance metadata requests. If the state is optional, you can choose to retrieve instance metadata with or without a session token on your request. If you retrieve the IAM role credentials without a token, the version 1.0 role credentials are returned. If you retrieve the IAM role credentials using a valid session token, the version 2.0 role credentials are returned. If the state is required, you must send a session token with any instance metadata retrieval requests. In this state, retrieving the IAM role credentials always returns the version 2.0 credentials; the version 1.0 credentials are not available. Default: optional |
instanceMetadataTags InstanceMetadataState |
Set to enabled to allow access to instance tags from the instance metadata. Set to disabled to turn off access to instance tags from the instance metadata. For more information, see Work with instance tags using the instance metadata (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#work-with-tags-in-IMDS). Default: disabled |
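For example, to require IMDSv2 on instances, the options might be sketched as:

```yaml
instanceMetadataOptions:
  httpEndpoint: enabled
  httpPutResponseHopLimit: 2   # allow containerized workloads one extra hop
  httpTokens: required         # enforce IMDSv2 session tokens
  instanceMetadataTags: disabled
```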
InstanceMetadataState
(string
alias)
(Appears on:InstanceMetadataOptions)
InstanceMetadataState describes the state of InstanceMetadataOptions.HttpEndpoint and InstanceMetadataOptions.InstanceMetadataTags
InstanceState
(string
alias)
(Appears on:AWSMachineStatus, Instance)
InstanceState describes the state of an AWS instance.
Listener
(Appears on:LoadBalancer)
Listener defines an AWS network load balancer listener.
Field | Description |
---|---|
protocol ELBProtocol |
|
port int64 |
|
targetGroup TargetGroupSpec |
LoadBalancer
(Appears on:NetworkStatus)
LoadBalancer defines an AWS load balancer.
Field | Description |
---|---|
arn string |
ARN of the load balancer. Unlike the classic ELB, the ARN is mostly used to define and look up the load balancer. |
name string |
(Optional)
The name of the load balancer. It must be unique within the set of load balancers defined in the region. It also serves as an identifier. |
dnsName string |
DNSName is the dns name of the load balancer. |
scheme ELBScheme |
Scheme is the load balancer scheme, either internet-facing or private. |
availabilityZones []string |
AvailabilityZones is an array of availability zones in the VPC attached to the load balancer. |
subnetIds []string |
SubnetIDs is an array of subnets in the VPC attached to the load balancer. |
securityGroupIds []string |
SecurityGroupIDs is an array of security groups assigned to the load balancer. |
listeners []ClassicELBListener |
ClassicELBListeners is an array of classic elb listeners associated with the load balancer. There must be at least one. |
healthChecks ClassicELBHealthCheck |
HealthCheck is the classic elb health check associated with the load balancer. |
attributes ClassicELBAttributes |
ClassicElbAttributes defines extra attributes associated with the load balancer. |
tags map[string]string |
Tags is a map of tags associated with the load balancer. |
elbListeners []Listener |
ELBListeners is an array of listeners associated with the load balancer. There must be at least one. |
elbAttributes map[string]*string |
ELBAttributes defines extra attributes associated with v2 load balancers. |
loadBalancerType LoadBalancerType |
LoadBalancerType sets the type for a load balancer. The default type is classic. |
LoadBalancerAttribute
(string
alias)
LoadBalancerAttribute defines a set of attributes for a V2 load balancer.
LoadBalancerType
(string
alias)
(Appears on:AWSLoadBalancerSpec, LoadBalancer)
LoadBalancerType defines the type of load balancer to use.
NetworkSpec
(Appears on:AWSClusterSpec, AWSManagedControlPlaneSpec)
NetworkSpec encapsulates all things related to AWS network.
Field | Description |
---|---|
vpc VPCSpec |
(Optional)
VPC configuration. |
subnets Subnets |
(Optional)
Subnets configuration. |
cni CNISpec |
(Optional)
CNI configuration |
securityGroupOverrides map[sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta2.SecurityGroupRole]string |
(Optional)
SecurityGroupOverrides is an optional set of security groups to use for cluster instances. This is optional; if not provided, new security groups will be created for the cluster. |
additionalControlPlaneIngressRules []IngressRule |
(Optional)
AdditionalControlPlaneIngressRules is an optional set of ingress rules to add to the control plane |
nodePortIngressRuleCidrBlocks []string |
(Optional)
NodePortIngressRuleCidrBlocks is an optional set of CIDR blocks to allow traffic to nodes’ NodePort services. If none are specified here, all IPs are allowed to connect. |
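A sketch of a NetworkSpec on an AWSCluster, combining a managed VPC with a custom CNI ingress rule; the CIDR block and ports are placeholders:

```yaml
spec:
  network:
    vpc:
      cidrBlock: 10.0.0.0/16
    cni:
      cniIngressRules:
        - description: CNI health checks
          protocol: tcp
          fromPort: 4240
          toPort: 4240
```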
NetworkStatus
(Appears on:AWSClusterStatus, AWSManagedControlPlaneStatus)
NetworkStatus encapsulates AWS networking resources.
Field | Description |
---|---|
securityGroups map[sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta2.SecurityGroupRole]sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta2.SecurityGroup |
SecurityGroups is a map from the role/kind of the security group to its unique name, if any. |
apiServerElb LoadBalancer |
APIServerELB is the Kubernetes api server load balancer. |
secondaryAPIServerELB LoadBalancer |
SecondaryAPIServerELB is the secondary Kubernetes api server load balancer. |
natGatewaysIPs []string |
NatGatewaysIPs contains the public IPs of the NAT Gateways |
PrivateDNSName
(Appears on:AWSMachineSpec, Instance, AWSLaunchTemplate)
PrivateDNSName is the options for the instance hostname.
Field | Description |
---|---|
enableResourceNameDnsAAAARecord bool |
(Optional)
EnableResourceNameDNSAAAARecord indicates whether to respond to DNS queries for instance hostnames with DNS AAAA records. |
enableResourceNameDnsARecord bool |
(Optional)
EnableResourceNameDNSARecord indicates whether to respond to DNS queries for instance hostnames with DNS A records. |
hostnameType string |
(Optional)
The type of hostname to assign to an instance. |
PublicIpv4PoolFallbackOrder
(string
alias)
(Appears on:ElasticIPPool)
PublicIpv4PoolFallbackOrder defines the fallback action when the PublicIpv4Pool is exhausted. ‘none’ lets the controllers return failures when the PublicIpv4Pool is exhausted (no more IPv4 addresses available). ‘amazon-pool’ (the default) lets the controllers skip the PublicIpv4Pool and use the Amazon-provided pool.
ResourceLifecycle
(string
alias)
(Appears on:BuildParams)
ResourceLifecycle configures the lifecycle of a resource.
RouteTable
RouteTable defines an AWS routing table.
Field | Description |
---|---|
id string |
S3Bucket
(Appears on:AWSClusterSpec)
S3Bucket defines a supporting S3 bucket for the cluster, currently can be optionally used for Ignition.
Field | Description |
---|---|
controlPlaneIAMInstanceProfile string |
(Optional)
ControlPlaneIAMInstanceProfile is the name of the IAM instance profile that will be allowed to read control-plane node bootstrap data from the S3 bucket. |
nodesIAMInstanceProfiles []string |
(Optional)
NodesIAMInstanceProfiles is a list of IAM instance profiles that will be allowed to read worker node bootstrap data from the S3 bucket. |
presignedURLDuration Kubernetes meta/v1.Duration |
(Optional)
PresignedURLDuration defines the duration for which presigned URLs are valid. This is used to generate presigned URLs for S3 Bucket objects, which are used by control-plane and worker nodes to fetch bootstrap data. When enabled, the IAM instance profiles specified are not used. |
name string |
Name defines the name of the S3 bucket to be created. |
bestEffortDeleteObjects bool |
(Optional)
BestEffortDeleteObjects defines whether access/permission errors during object deletion should be ignored. |
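A sketch of an AWSClusterSpec s3Bucket block; the bucket and instance profile names are placeholders:

```yaml
spec:
  s3Bucket:
    name: my-cluster-bootstrap-data
    controlPlaneIAMInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
    nodesIAMInstanceProfiles:
      - nodes.cluster-api-provider-aws.sigs.k8s.io
```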
SecretBackend
(string
alias)
(Appears on:CloudInit, AWSIAMConfigurationSpec)
SecretBackend defines variants for backend secret storage.
SecurityGroup
(Appears on:NetworkStatus)
SecurityGroup defines an AWS security group.
Field | Description |
---|---|
id string |
ID is a unique identifier. |
name string |
Name is the security group name. |
ingressRule IngressRules |
(Optional)
IngressRules is the inbound rules associated with the security group. |
tags Tags |
Tags is a map of tags associated with the security group. |
SecurityGroupProtocol
(string
alias)
(Appears on:CNIIngressRule, IngressRule)
SecurityGroupProtocol defines the protocol type for a security group rule.
SecurityGroupRole
(string
alias)
(Appears on:IngressRule)
SecurityGroupRole defines the unique role of a security group.
SpotMarketOptions
(Appears on:AWSMachineSpec, Instance, AWSLaunchTemplate)
SpotMarketOptions defines the options available to a user when configuring Machines to run on Spot instances. Most users should provide an empty struct.
Field | Description |
---|---|
maxPrice string |
(Optional)
MaxPrice defines the maximum price the user is willing to pay for Spot VM instances |
SubnetSchemaType
(string
alias)
(Appears on:VPCSpec)
SubnetSchemaType specifies how the given network should be divided into subnets in the VPC, depending on the number of AZs.
SubnetSpec
SubnetSpec configures an AWS Subnet.
Field | Description |
---|---|
id string |
ID defines a unique identifier to reference this resource.
If you’re bringing your own subnet, set the AWS subnet-id here; it must start with subnet-. When the VPC is managed by CAPA, and you’d like the provider to create a subnet for you,
the id can be set to any placeholder value that does not start with subnet-. |
resourceID string |
(Optional)
ResourceID is the subnet identifier from AWS, READ ONLY. This field is populated when the provider manages the subnet. |
cidrBlock string |
CidrBlock is the CIDR block to be used when the provider creates a managed VPC. |
ipv6CidrBlock string |
(Optional)
IPv6CidrBlock is the IPv6 CIDR block to be used when the provider creates a managed VPC. A subnet can have an IPv4 and an IPv6 address. IPv6 is only supported in managed clusters; this field cannot be set on the AWSCluster object. |
availabilityZone string |
AvailabilityZone defines the availability zone to use for this subnet in the cluster’s region. |
isPublic bool |
(Optional)
IsPublic defines the subnet as a public subnet. A subnet is public when it is associated with a route table that has a route to an internet gateway. |
isIpv6 bool |
(Optional)
IsIPv6 defines the subnet as an IPv6 subnet. A subnet is IPv6 when it is associated with a VPC that has IPv6 enabled. IPv6 is only supported in managed clusters; this field cannot be set on the AWSCluster object. |
routeTableId string |
(Optional)
RouteTableID is the routing table id associated with the subnet. |
natGatewayId string |
(Optional)
NatGatewayID is the NAT gateway id associated with the subnet. Ignored unless the subnet is managed by the provider, in which case this is set on the public subnet where the NAT gateway resides. It is then used to determine routes for private subnets in the same AZ as the public subnet. |
tags Tags |
Tags is a collection of tags describing the resource. |
zoneType ZoneType |
(Optional)
ZoneType defines the type of the zone where the subnet is created. The valid values are availability-zone, local-zone, and wavelength-zone. A subnet with zone type availability-zone (regular) is always selected to create cluster resources, like Load Balancers, NAT Gateways, Control Plane nodes, etc. A subnet with zone type local-zone or wavelength-zone is not eligible to automatically create regular cluster resources. The public subnet in availability-zone or local-zone is associated with a regular public route table with a default route entry to an Internet Gateway. The public subnet in wavelength-zone is associated with a carrier public route table with a default route entry to a Carrier Gateway. The private subnet in the availability-zone is associated with a private route table with a default route entry to a NAT Gateway created in that zone. The private subnet in the local-zone or wavelength-zone is associated with a private route table with a default route entry re-using the NAT Gateway in the Region (preferred from the parent zone, the zone type availability-zone in the region, or the first table available). |
parentZoneName string |
(Optional)
ParentZoneName is the name of the parent zone that the current subnet’s zone is tied to, when the zone is a Local Zone. The subnets in Local Zone or Wavelength Zone locations consume the ParentZoneName to select the correct private route table to egress traffic to the internet. |
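To illustrate the two id conventions described above, a sketch of a subnets list; the IDs, zone, and CIDR block are placeholders:

```yaml
subnets:
  - id: subnet-0123456789abcdef0    # bring-your-own: a real AWS subnet ID
  - id: my-public-subnet-1a         # provider-managed: any placeholder not starting with subnet-
    availabilityZone: us-east-1a
    cidrBlock: 10.0.1.0/24
    isPublic: true
```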
Subnets
([]sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta2.SubnetSpec
alias)
(Appears on:NetworkSpec)
Subnets is a slice of Subnet.
Tags
(map[string]string
alias)
(Appears on:AWSClusterSpec, AWSMachineSpec, BuildParams, SecurityGroup, SubnetSpec, VPCSpec, AWSIAMRoleSpec, BootstrapUser, AWSManagedControlPlaneSpec, OIDCIdentityProviderConfig, RosaControlPlaneSpec, AWSMachinePoolSpec, AWSManagedMachinePoolSpec, AutoScalingGroup, FargateProfileSpec, RosaMachinePoolSpec)
Tags defines a map of tags.
TargetGroupAttribute
(string
alias)
TargetGroupAttribute defines attribute key values for V2 Load Balancer Attributes.
TargetGroupHealthCheck
(Appears on:TargetGroupSpec)
TargetGroupHealthCheck defines health check settings for the target group.
Field | Description |
---|---|
protocol string |
|
path string |
|
port string |
|
intervalSeconds int64 |
|
timeoutSeconds int64 |
|
thresholdCount int64 |
|
unhealthyThresholdCount int64 |
TargetGroupHealthCheckAPISpec
(Appears on:AWSLoadBalancerSpec)
TargetGroupHealthCheckAPISpec defines the optional health check settings for the API target group.
Field | Description |
---|---|
intervalSeconds int64 |
(Optional)
The approximate amount of time, in seconds, between health checks of an individual target. |
timeoutSeconds int64 |
(Optional)
The amount of time, in seconds, during which no response from a target means a failed health check. |
thresholdCount int64 |
(Optional)
The number of consecutive health check successes required before considering a target healthy. |
unhealthyThresholdCount int64 |
(Optional)
The number of consecutive health check failures required before considering a target unhealthy. |
TargetGroupHealthCheckAdditionalSpec
(Appears on:AdditionalListenerSpec)
TargetGroupHealthCheckAdditionalSpec defines the optional health check settings for the additional target groups.
Field | Description |
---|---|
protocol string |
(Optional)
The protocol to use when performing the health check against the target. When not specified, the protocol will be the same as the listener’s. |
port string |
(Optional)
The port the load balancer uses when performing health checks for additional target groups. When not specified, this value defaults to the listener port. |
path string |
(Optional)
The destination for health checks on the targets when using the protocol HTTP or HTTPS; otherwise the path is ignored. |
intervalSeconds int64 |
(Optional)
The approximate amount of time, in seconds, between health checks of an individual target. |
timeoutSeconds int64 |
(Optional)
The amount of time, in seconds, during which no response from a target means a failed health check. |
thresholdCount int64 |
(Optional)
The number of consecutive health check successes required before considering a target healthy. |
unhealthyThresholdCount int64 |
(Optional)
The number of consecutive health check failures required before considering a target unhealthy. |
TargetGroupSpec
(Appears on:Listener)
TargetGroupSpec specifies target group settings for a given listener. This is created first, and the ARN is then passed to the listener.
Field | Description |
---|---|
name string |
Name of the TargetGroup. Must be unique over the same group of listeners. |
port int64 |
Port is the exposed port |
protocol ELBProtocol |
|
vpcId string |
|
targetGroupHealthCheck TargetGroupHealthCheck |
HealthCheck is the elb health check associated with the load balancer. |
VPCSpec
(Appears on:NetworkSpec)
VPCSpec configures an AWS VPC.
Field | Description |
---|---|
id string |
ID is the vpc-id of the VPC this provider should use to create resources. |
cidrBlock string |
CidrBlock is the CIDR block to be used when the provider creates a managed VPC. Defaults to 10.0.0.0/16. Mutually exclusive with IPAMPool. |
secondaryCidrBlocks []VpcCidrBlock |
(Optional)
SecondaryCidrBlocks are additional CIDR blocks to be associated when the provider creates a managed VPC. Defaults to none. Mutually exclusive with IPAMPool. This makes sense to use if, for example, you want to use a separate IP range for pods (e.g. Cilium ENI mode). |
ipamPool IPAMPool |
IPAMPool defines the IPAMv4 pool to be used for VPC. Mutually exclusive with CidrBlock. |
ipv6 IPv6 |
(Optional)
IPv6 contains ipv6 specific settings for the network. Supported only in managed clusters; this field cannot be set on the AWSCluster object. |
internetGatewayId string |
(Optional)
InternetGatewayID is the id of the internet gateway associated with the VPC. |
carrierGatewayId string |
(Optional)
CarrierGatewayID is the id of the carrier gateway associated with the VPC, for carrier networks (Wavelength Zones). |
tags Tags |
Tags is a collection of tags describing the resource. |
availabilityZoneUsageLimit int |
AvailabilityZoneUsageLimit specifies the maximum number of availability zones (AZ) that should be used in a region when automatically creating subnets. If a region has more than this number of AZs then this number of AZs will be picked randomly when creating default subnets. Defaults to 3. |
availabilityZoneSelection AZSelectionScheme |
AvailabilityZoneSelection specifies how AZs should be selected if there are more AZs in a region than specified by AvailabilityZoneUsageLimit. There are 2 selection schemes: Ordered (selects based on alphabetical order) and Random (selects AZs randomly in a region). Defaults to Ordered. |
emptyRoutesDefaultVPCSecurityGroup bool |
(Optional)
EmptyRoutesDefaultVPCSecurityGroup specifies whether the default VPC security group ingress and egress rules should be removed. By default, when creating a VPC, AWS creates a security group called default with ingress and egress rules that allow traffic from anywhere. The group could be used as a potential attack surface, and it is generally suggested that the group rules are removed or modified appropriately. NOTE: This only applies when the VPC is managed by the Cluster API AWS controller. |
privateDnsHostnameTypeOnLaunch string |
(Optional)
PrivateDNSHostnameTypeOnLaunch is the type of hostname to assign to instances in the subnet at launch. For IPv4-only and dual-stack (IPv4 and IPv6) subnets, an instance DNS name can be based on the instance IPv4 address (ip-name) or the instance ID (resource-name). For IPv6 only subnets, an instance DNS name must be based on the instance ID (resource-name). |
elasticIpPool ElasticIPPool |
(Optional)
ElasticIPPool contains specific configuration to allocate Public IPv4 address (Elastic IP) from user-defined pool brought to AWS for core infrastructure resources, like NAT Gateways and Public Network Load Balancers for the API Server. |
subnetSchema SubnetSchemaType |
(Optional)
SubnetSchema specifies how the CidrBlock should be divided into subnets in the VPC depending on the number of AZs. PreferPrivate - one private subnet for each AZ plus one other subnet that will be further sub-divided for the public subnets. PreferPublic - the reverse logic of PreferPrivate: one public subnet for each AZ plus one other subnet that will be further sub-divided for the private subnets. Defaults to PreferPrivate. |
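Pulling a few of these fields together, a sketch of a managed VPC configuration; the values shown are placeholders, not recommendations:

```yaml
vpc:
  cidrBlock: 10.0.0.0/16
  availabilityZoneUsageLimit: 3
  availabilityZoneSelection: Ordered
  subnetSchema: PreferPrivate
```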
Volume
(Appears on:AWSMachineSpec, Instance, AWSLaunchTemplate)
Volume encapsulates the configuration options for the storage device.
Field | Description |
---|---|
deviceName string |
(Optional)
Device name |
size int64 |
Size specifies size (in Gi) of the storage device. Must be greater than the image snapshot size or 8 (whichever is greater). |
type VolumeType |
(Optional)
Type is the type of the volume (e.g. gp2, io1, etc…). |
iops int64 |
(Optional)
IOPS is the number of IOPS requested for the disk. Not applicable to all types. |
throughput int64 |
(Optional)
Throughput to provision in MiB/s supported for the volume type. Not applicable to all types. |
encrypted bool |
(Optional)
Encrypted is whether the volume should be encrypted or not. |
encryptionKey string |
(Optional)
EncryptionKey is the KMS key to use to encrypt the volume. Can be either a KMS key ID or ARN. If Encrypted is set and this is omitted, the default AWS key will be used. The key must already exist and be accessible by the controller. |
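For example, an encrypted gp3 root volume on an AWSMachineSpec could be sketched as follows; the size and performance values are placeholders:

```yaml
rootVolume:
  size: 100          # GiB
  type: gp3
  iops: 3000
  throughput: 125    # MiB/s
  encrypted: true
```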
VolumeType
(string
alias)
(Appears on:Volume)
VolumeType describes the EBS volume type. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
VpcCidrBlock
(Appears on:VPCSpec)
VpcCidrBlock defines the CIDR block and settings to associate with the managed VPC. Currently, only IPv4 is supported.
Field | Description |
---|---|
ipv4CidrBlock string |
IPv4CidrBlock is the IPv4 CIDR block to associate with the managed VPC. |
ZoneType
(string
alias)
(Appears on:SubnetSpec)
ZoneType defines the AWS Availability Zone type of the subnet.
ASGStatus
(string
alias)
(Appears on:AWSMachinePoolStatus, AutoScalingGroup)
ASGStatus is a status string returned by the autoscaling API.
AWSFargateProfile
AWSFargateProfile is the Schema for the awsfargateprofiles API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec FargateProfileSpec |
|
status FargateProfileStatus |
AWSLaunchTemplate
(Appears on:AWSMachinePoolSpec, AWSManagedMachinePoolSpec)
AWSLaunchTemplate defines the desired state of AWSLaunchTemplate.
Field | Description |
---|---|
name string |
The name of the launch template. |
iamInstanceProfile string |
The name or the Amazon Resource Name (ARN) of the instance profile associated with the IAM role for the instance. The instance profile contains the IAM role. |
ami AMIReference |
(Optional)
AMI is the reference to the AMI from which to create the machine instance. |
imageLookupFormat string |
(Optional)
ImageLookupFormat is the AMI naming format to look up the image for this machine. It will be ignored if an explicit AMI is set. Supports substitutions for {{.BaseOS}} and {{.K8sVersion}} with the base OS and kubernetes version, respectively. The BaseOS will be the value in ImageLookupBaseOS or ubuntu (the default), and the kubernetes version as defined by the packages produced by kubernetes/release without v as a prefix: 1.13.0, 1.12.5-mybuild.1, or 1.17.3. For example, the default image format of capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-* will end up searching for AMIs that match the pattern capa-ami-ubuntu-?1.18.0-* for a Machine that is targeting kubernetes v1.18.0 and the ubuntu base OS. See also: https://golang.org/pkg/text/template/ |
imageLookupOrg string |
ImageLookupOrg is the AWS Organization ID to use for image lookup if AMI is not set. |
imageLookupBaseOS string |
ImageLookupBaseOS is the name of the base operating system to use for image lookup when AMI is not set. |
instanceType string |
InstanceType is the type of instance to create. Example: m4.xlarge |
rootVolume Volume |
(Optional)
RootVolume encapsulates the configuration options for the root volume |
nonRootVolumes []Volume |
(Optional)
Configuration options for the non root storage volumes. |
sshKeyName string |
(Optional)
SSHKeyName is the name of the ssh key to attach to the instance. Valid values are empty string (do not use SSH keys), a valid SSH key name, or omitted (use the default SSH key name) |
versionNumber int64 |
VersionNumber is the version of the launch template that is applied. Typically a new version is created when at least one of the following happens: 1) A new launch template spec is applied. 2) One or more parameters in an existing template are changed. 3) A new AMI is discovered. |
additionalSecurityGroups []AWSResourceReference |
(Optional)
AdditionalSecurityGroups is an array of references to security groups that should be applied to the instances. These security groups would be set in addition to any security groups defined at the cluster level or in the actuator. |
spotMarketOptions SpotMarketOptions |
SpotMarketOptions are options for configuring AWSMachinePool instances to be run using AWS Spot instances. |
instanceMetadataOptions InstanceMetadataOptions |
(Optional)
InstanceMetadataOptions defines the behavior for applying metadata to instances. |
privateDnsName PrivateDNSName |
(Optional)
PrivateDNSName is the options for the instance hostname. |
AWSMachinePool
AWSMachinePool is the Schema for the awsmachinepools API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSMachinePoolSpec |
|
status AWSMachinePoolStatus |
AWSMachinePoolInstanceStatus
(Appears on:AWSMachinePoolStatus)
AWSMachinePoolInstanceStatus defines the status of the AWSMachinePoolInstance.
Field | Description |
---|---|
instanceID string |
(Optional)
InstanceID is the identification of the Machine Instance within ASG |
version string |
(Optional)
Version defines the Kubernetes version for the Machine Instance |
AWSMachinePoolSpec
(Appears on:AWSMachinePool)
AWSMachinePoolSpec defines the desired state of AWSMachinePool.
Field | Description |
---|---|
providerID string |
(Optional)
ProviderID is the ARN of the associated ASG |
minSize int32 |
MinSize defines the minimum size of the group. |
maxSize int32 |
MaxSize defines the maximum size of the group. |
availabilityZones []string |
AvailabilityZones is an array of availability zones instances can run in |
availabilityZoneSubnetType AZSubnetType |
(Optional)
AvailabilityZoneSubnetType specifies which type of subnets to use when an availability zone is specified. |
subnets []AWSResourceReference |
(Optional)
Subnets is an array of subnet configurations |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to an instance, in addition to the ones added by default by the AWS provider. |
awsLaunchTemplate AWSLaunchTemplate |
AWSLaunchTemplate specifies the launch template and version to use when an instance is launched. |
mixedInstancesPolicy MixedInstancesPolicy |
MixedInstancesPolicy describes how multiple instance types will be used by the ASG. |
providerIDList []string |
(Optional)
ProviderIDList are the identification IDs of machine instances provided by the provider. This field must match the provider IDs as seen on the node objects corresponding to a machine pool’s machine instances. |
defaultCoolDown Kubernetes meta/v1.Duration |
(Optional)
The amount of time, in seconds, after a scaling activity completes before another scaling activity can start. If no value is supplied by the user, a default value of 300 seconds is set. |
defaultInstanceWarmup Kubernetes meta/v1.Duration |
(Optional)
The amount of time, in seconds, until a new instance is considered to have finished initializing and resource consumption to have become stable after it enters the InService state. If no value is supplied by the user, a default value of 300 seconds is set. |
refreshPreferences RefreshPreferences |
(Optional)
RefreshPreferences describes set of preferences associated with the instance refresh request. |
capacityRebalance bool |
(Optional)
Enable or disable the capacity rebalance autoscaling group feature |
suspendProcesses SuspendProcessesTypes |
SuspendProcesses defines a list of processes to suspend for the given ASG. This is constantly reconciled. If a process is removed from this list it will automatically be resumed. |
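A sketch of a minimal AWSMachinePool manifest; the name, sizes, and launch template values are placeholders:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachinePool
metadata:
  name: example-pool
spec:
  minSize: 1
  maxSize: 5
  awsLaunchTemplate:
    instanceType: t3.large
    iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
```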
AWSMachinePoolStatus
(Appears on:AWSMachinePool)
AWSMachinePoolStatus defines the observed state of AWSMachinePool.
Field | Description |
---|---|
ready bool |
(Optional)
Ready is true when the provider resource is ready. |
replicas int32 |
(Optional)
Replicas is the most recently observed number of replicas |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the AWSMachinePool. |
instances []AWSMachinePoolInstanceStatus |
(Optional)
Instances contains the status for each instance in the pool |
launchTemplateID string |
The ID of the launch template |
launchTemplateVersion string |
(Optional)
The version of the launch template |
failureReason Cluster API errors.MachineStatusError |
(Optional)
FailureReason will be set in the event that there is a terminal problem reconciling the Machine and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller’s output. |
failureMessage string |
(Optional)
FailureMessage will be set in the event that there is a terminal problem reconciling the Machine and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller’s output. |
asgStatus ASGStatus |
AWSManagedMachinePool
AWSManagedMachinePool is the Schema for the awsmanagedmachinepools API.
Field | Description |
---|---|
metadata Kubernetes meta/v1.ObjectMeta |
Refer to the Kubernetes API documentation for the fields of the
metadata field.
|
spec AWSManagedMachinePoolSpec |
|
status AWSManagedMachinePoolStatus |
AWSManagedMachinePoolSpec
(Appears on:AWSManagedMachinePool)
AWSManagedMachinePoolSpec defines the desired state of AWSManagedMachinePool.
Field | Description |
---|---|
eksNodegroupName string |
(Optional)
EKSNodegroupName specifies the name of the nodegroup in AWS corresponding to this MachinePool. If you don’t specify a name then a default name will be created based on the namespace and name of the managed machine pool. |
availabilityZones []string |
AvailabilityZones is an array of availability zones instances can run in |
availabilityZoneSubnetType AZSubnetType |
(Optional)
AvailabilityZoneSubnetType specifies which type of subnets to use when an availability zone is specified. |
subnetIDs []string |
(Optional)
SubnetIDs specifies which subnets are used for the auto scaling group of this nodegroup |
additionalTags Tags |
(Optional)
AdditionalTags is an optional set of tags to add to AWS resources managed by the AWS provider, in addition to the ones added by default. |
roleAdditionalPolicies []string |
(Optional)
RoleAdditionalPolicies allows you to attach additional policies to the node group role. You must enable the EKSAllowAddRoles feature flag to incorporate these into the created role. |
roleName string |
(Optional)
RoleName specifies the name of IAM role for the node group. If the role is pre-existing we will treat it as unmanaged and not delete it on deletion. If the EKSEnableIAM feature flag is true and no name is supplied then a role is created. |
amiVersion string |
(Optional)
AMIVersion defines the desired AMI release version. If no version number is supplied then the latest version for the Kubernetes version will be used |
amiType ManagedMachineAMIType |
(Optional)
AMIType defines the AMI type |
labels map[string]string |
(Optional)
Labels specifies labels for the Kubernetes node objects |
taints Taints |
(Optional)
Taints specifies the taints to apply to the nodes of the machine pool |
diskSize int32 |
(Optional)
DiskSize specifies the root disk size |
instanceType string |
(Optional)
InstanceType specifies the AWS instance type |
scaling ManagedMachinePoolScaling |
(Optional)
Scaling specifies scaling for the ASG behind this pool |
remoteAccess ManagedRemoteAccess |
(Optional)
RemoteAccess specifies how machines can be accessed remotely |
providerIDList []string |
(Optional)
ProviderIDList are the provider IDs of instances in the autoscaling group corresponding to the nodegroup represented by this machine pool |
capacityType ManagedMachinePoolCapacityType |
(Optional)
CapacityType specifies the capacity type for the ASG behind this pool |
updateConfig UpdateConfig |
(Optional)
UpdateConfig holds the optional config to control the behaviour of the update to the nodegroup. |
awsLaunchTemplate AWSLaunchTemplate |
(Optional)
AWSLaunchTemplate specifies the launch template to use to create the managed node group. If AWSLaunchTemplate is specified, certain node group configurations outside of the launch template are prohibited (https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html). |
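A sketch of a minimal AWSManagedMachinePool manifest; the scaling sub-fields (minSize/maxSize) are assumed from ManagedMachinePoolScaling, and all values are placeholders:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedMachinePool
metadata:
  name: example-managed-pool
spec:
  amiType: AL2_x86_64
  instanceType: m5.large
  scaling:
    minSize: 1
    maxSize: 4
```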
AWSManagedMachinePoolStatus
(Appears on:AWSManagedMachinePool)
AWSManagedMachinePoolStatus defines the observed state of AWSManagedMachinePool.
Field | Description |
---|---|
ready bool |
Ready denotes that the AWSManagedMachinePool nodegroup has joined the cluster |
replicas int32 |
(Optional)
Replicas is the most recently observed number of replicas. |
launchTemplateID string |
(Optional)
The ID of the launch template |
launchTemplateVersion string |
(Optional)
The version of the launch template |
failureReason Cluster API errors.MachineStatusError |
(Optional)
FailureReason will be set in the event that there is a terminal problem reconciling the MachinePool and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the MachinePool’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of MachinePools can be added as events to the MachinePool object and/or logged in the controller’s output. |
failureMessage string |
(Optional)
FailureMessage will be set in the event that there is a terminal problem reconciling the MachinePool and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the MachinePool’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of MachinePools can be added as events to the MachinePool object and/or logged in the controller’s output. |
conditions Cluster API api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the managed machine pool |
AZSubnetType
(string alias)
(Appears on:AWSMachinePoolSpec, AWSManagedMachinePoolSpec)
AZSubnetType is the type of subnet to use when an availability zone is specified.
Value | Description
---|---
"all" | AZSubnetTypeAll is all subnets in an availability zone. |
"private" | AZSubnetTypePrivate is a private subnet. |
"public" | AZSubnetTypePublic is a public subnet. |
AutoScalingGroup
AutoScalingGroup describes an AWS autoscaling group.
Field | Description
---|---
id string | The ID of the autoscaling group. |
tags Tags | |
name string | |
desiredCapacity int32 | |
maxSize int32 | |
minSize int32 | |
placementGroup string | |
subnets []string | |
defaultCoolDown Kubernetes meta/v1.Duration | |
defaultInstanceWarmup Kubernetes meta/v1.Duration | |
capacityRebalance bool | |
mixedInstancesPolicy MixedInstancesPolicy | |
Status ASGStatus | |
instances []Instance | |
currentlySuspendProcesses []string | |
BlockDeviceMapping
BlockDeviceMapping specifies the block devices for the instance. You can specify virtual devices and EBS volumes.
Field | Description
---|---
deviceName string | The device name exposed to the EC2 instance (for example, /dev/sdh or xvdh). |
ebs EBS | (Optional) You can specify either VirtualName or Ebs, but not both. |
EBS
(Appears on:BlockDeviceMapping)
EBS can be used to automatically set up EBS volumes when an instance is launched.
Field | Description
---|---
encrypted bool | (Optional) Encrypted is whether the volume should be encrypted or not. |
volumeSize int64 | (Optional) The size of the volume, in GiB. This can be a number from 1-1,024 for standard, 4-16,384 for io1, 1-16,384 for gp2, and 500-16,384 for st1 and sc1. If you specify a snapshot, the volume size must be equal to or larger than the snapshot size. |
volumeType string | (Optional) The volume type. For more information, see Amazon EBS Volume Types (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). |
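A sketch of how a single block device mapping with an EBS volume could look in YAML (all values illustrative):

```yaml
# One BlockDeviceMapping entry: an encrypted 100 GiB gp2 volume.
- deviceName: /dev/sdh
  ebs:
    encrypted: true
    volumeSize: 100     # GiB; must fall in the range for the volume type
    volumeType: gp2
```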
FargateProfileSpec
(Appears on:AWSFargateProfile)
FargateProfileSpec defines the desired state of FargateProfile.
Field | Description
---|---
clusterName string | ClusterName is the name of the Cluster this object belongs to. |
profileName string | ProfileName specifies the profile name. |
subnetIDs []string | (Optional) SubnetIDs specifies which subnets are used for the auto scaling group of this nodegroup. |
additionalTags Tags | (Optional) AdditionalTags is an optional set of tags to add to AWS resources managed by the AWS provider, in addition to the ones added by default. |
roleName string | (Optional) RoleName specifies the name of the IAM role for this Fargate pool. If the role is pre-existing we will treat it as unmanaged and not delete it on deletion. If the EKSEnableIAM feature flag is true and no name is supplied, then a role is created. |
selectors []FargateSelector | Selectors specify Fargate pod selectors. |
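For illustration, a minimal AWSFargateProfile sketch that schedules pods with a given label in a given namespace onto Fargate; the names and label are placeholders:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSFargateProfile
metadata:
  name: fargate-0             # placeholder name
spec:
  clusterName: my-cluster     # the owning Cluster (placeholder)
  profileName: default-profile
  selectors:
    - namespace: default      # pods in this namespace...
      labels:
        run-on: fargate       # ...carrying this label run on Fargate
```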
FargateProfileStatus
(Appears on:AWSFargateProfile)
FargateProfileStatus defines the observed state of FargateProfile.
Field | Description
---|---
ready bool | Ready denotes that the FargateProfile is available. |
failureReason Cluster API errors.MachineStatusError | (Optional) FailureReason will be set in the event that there is a terminal problem reconciling the FargateProfile and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the FargateProfile’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of FargateProfiles can be added as events to the FargateProfile object and/or logged in the controller’s output. |
failureMessage string | (Optional) FailureMessage will be set in the event that there is a terminal problem reconciling the FargateProfile and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the FargateProfile’s spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of FargateProfiles can be added as events to the FargateProfile object and/or logged in the controller’s output. |
conditions Cluster API api/v1beta1.Conditions | (Optional) Conditions defines the current state of the Fargate profile. |
FargateSelector
(Appears on:FargateProfileSpec)
FargateSelector specifies a selector for pods that should run on this Fargate pool.
Field | Description
---|---
labels map[string]string | Labels specifies which pod labels this selector should match. |
namespace string | Namespace specifies which namespace this selector should match. |
InstancesDistribution
(Appears on:MixedInstancesPolicy)
InstancesDistribution configures the distribution of On-Demand Instances and Spot Instances.
Field | Description
---|---
onDemandAllocationStrategy OnDemandAllocationStrategy | |
spotAllocationStrategy SpotAllocationStrategy | |
onDemandBaseCapacity int64 | |
onDemandPercentageAboveBaseCapacity int64 | |
ManagedMachineAMIType
(string alias)
(Appears on:AWSManagedMachinePoolSpec)
ManagedMachineAMIType specifies which AWS AMI to use for a managed MachinePool.
Value | Description
---|---
"AL2023_ARM_64_STANDARD" | Al2023Arm64 is the AL2023 Arm AMI type. |
"AL2023_x86_64_STANDARD" | Al2023x86_64 is the AL2023 x86-64 AMI type. |
"AL2_ARM_64" | Al2Arm64 is the Arm AMI type. |
"AL2_x86_64" | Al2x86_64 is the default AMI type. |
"AL2_x86_64_GPU" | Al2x86_64GPU is the x86-64 GPU AMI type. |
ManagedMachinePoolCapacityType
(string alias)
(Appears on:AWSManagedMachinePoolSpec)
ManagedMachinePoolCapacityType specifies the capacity type to be used for the managed MachinePool.
Value | Description
---|---
"onDemand" | ManagedMachinePoolCapacityTypeOnDemand is the default capacity type, to launch on-demand instances. |
"spot" | ManagedMachinePoolCapacityTypeSpot is the spot instance capacity type to launch spot instances. |
ManagedMachinePoolScaling
(Appears on:AWSManagedMachinePoolSpec)
ManagedMachinePoolScaling specifies scaling options.
Field | Description
---|---
minSize int32 | |
maxSize int32 | |
ManagedRemoteAccess
(Appears on:AWSManagedMachinePoolSpec)
ManagedRemoteAccess specifies remote access settings for EC2 instances.
Field | Description
---|---
sshKeyName string | SSHKeyName specifies which EC2 SSH key can be used to access machines. If left empty, the key from the control plane is used. |
sourceSecurityGroups []string | SourceSecurityGroups specifies which security groups are allowed access. |
public bool | Public specifies whether to open port 22 to the public internet. |
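As a sketch, a remoteAccess fragment for an AWSManagedMachinePool spec might look like the following (key name and security group ID are placeholders):

```yaml
remoteAccess:
  sshKeyName: my-ssh-key            # placeholder EC2 key pair name
  sourceSecurityGroups:
    - sg-0123456789abcdef0          # placeholder security group ID
  public: false                     # keep port 22 closed to the internet
```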
MixedInstancesPolicy
(Appears on:AWSMachinePoolSpec, AutoScalingGroup)
MixedInstancesPolicy for an Auto Scaling group.
Field | Description
---|---
instancesDistribution InstancesDistribution | |
overrides []Overrides | |
OnDemandAllocationStrategy
(string alias)
(Appears on:InstancesDistribution)
OnDemandAllocationStrategy indicates how to allocate instance types to fulfill On-Demand capacity.
Overrides
(Appears on:MixedInstancesPolicy)
Overrides are used to override the instance type specified by the launch template with multiple instance types that can be used to launch On-Demand Instances and Spot Instances.
Field | Description
---|---
instanceType string | |
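Putting the three types above together, a hypothetical mixedInstancesPolicy fragment for an AWSMachinePool spec could look like this (instance types and numbers are illustrative):

```yaml
mixedInstancesPolicy:
  instancesDistribution:
    onDemandBaseCapacity: 1                  # always keep one On-Demand instance
    onDemandPercentageAboveBaseCapacity: 50  # split the remainder 50/50 with Spot
  overrides:
    - instanceType: m5.large                 # alternates the ASG may launch
    - instanceType: m5a.large
```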
Processes
(Appears on:SuspendProcessesTypes)
Processes defines the processes which can be enabled or disabled individually.
Field | Description
---|---
launch bool | |
terminate bool | |
addToLoadBalancer bool | |
alarmNotification bool | |
azRebalance bool | |
healthCheck bool | |
instanceRefresh bool | |
replaceUnhealthy bool | |
scheduledActions bool | |
ROSACluster
ROSACluster is the Schema for the ROSAClusters API.
Field | Description
---|---
metadata Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec ROSAClusterSpec | |
status ROSAClusterStatus | |
ROSAClusterSpec
(Appears on:ROSACluster)
ROSAClusterSpec defines the desired state of ROSACluster.
Field | Description
---|---
controlPlaneEndpoint Cluster API api/v1beta1.APIEndpoint | (Optional) ControlPlaneEndpoint represents the endpoint used to communicate with the control plane. |
ROSAClusterStatus
(Appears on:ROSACluster)
ROSAClusterStatus defines the observed state of ROSACluster.
Field | Description
---|---
ready bool | (Optional) Ready is true when the ROSAControlPlane has an API server URL. |
failureDomains Cluster API api/v1beta1.FailureDomains | (Optional) FailureDomains specifies a list of available availability zones that can be used. |
ROSAMachinePool
ROSAMachinePool is the Schema for the rosamachinepools API.
Field | Description
---|---
metadata Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec RosaMachinePoolSpec | |
status RosaMachinePoolStatus | |
RefreshPreferences
(Appears on:AWSMachinePoolSpec)
RefreshPreferences defines the specs for instance refreshing.
Field | Description
---|---
disable bool | (Optional) Disable, if true, disables instance refresh from triggering when new launch templates are detected. This is useful in scenarios where ASG nodes are externally managed. |
strategy string | (Optional) The strategy to use for the instance refresh. The only valid value is Rolling. A rolling update is an update that is applied to all instances in an Auto Scaling group until all instances have been updated. |
instanceWarmup int64 | (Optional) The number of seconds until a newly launched instance is configured and ready to use. During this time, the next replacement will not be initiated. The default is to use the value for the health check grace period defined for the group. |
minHealthyPercentage int64 | (Optional) The amount of capacity in the ASG, as a percentage, that must remain healthy during an instance refresh. The default is 90. |
maxHealthyPercentage int64 | (Optional) The amount of capacity in the ASG, as a percentage, that can be in service and healthy, or pending, to support your workload when replacing instances. The value is expressed as a percentage of the desired capacity of the ASG. The valid range is 100 to 200. If you specify MaxHealthyPercentage, you must also specify MinHealthyPercentage, and the difference between them cannot be greater than 100. A larger range increases the number of instances that can be replaced at the same time. |
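A sketch of a refreshPreferences fragment for an AWSMachinePool spec, using the defaults described above (values are illustrative):

```yaml
refreshPreferences:
  strategy: Rolling          # the only valid value
  instanceWarmup: 300        # seconds before a new instance counts as in service
  minHealthyPercentage: 90   # the documented default
```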
RollingUpdate
(Appears on:RosaUpdateConfig)
RollingUpdate specifies the MaxUnavailable and MaxSurge number of nodes during an update.
Field | Description
---|---
maxUnavailable k8s.io/apimachinery/pkg/util/intstr.IntOrString | (Optional) MaxUnavailable is the maximum number of nodes that can be unavailable during the update. The value can be an absolute number (e.g. 5) or a percentage of desired nodes (e.g. 10%); an absolute number is calculated from a percentage by rounding down. MaxUnavailable cannot be 0 if MaxSurge is 0; the default is 0. Both MaxUnavailable and MaxSurge must use the same units (absolute value or percentage). Example: when MaxUnavailable is set to 30%, old nodes can be deleted down to 70% of desired nodes immediately when the rolling update starts. Once new nodes are ready, more old nodes can be deleted, followed by provisioning new nodes, ensuring that the total number of nodes available at all times during the update is at least 70% of desired nodes. |
maxSurge k8s.io/apimachinery/pkg/util/intstr.IntOrString | (Optional) MaxSurge is the maximum number of nodes that can be provisioned above the desired number of nodes. The value can be an absolute number (e.g. 5) or a percentage of desired nodes (e.g. 10%); an absolute number is calculated from a percentage by rounding up. MaxSurge cannot be 0 if MaxUnavailable is 0; the default is 1. Both MaxSurge and MaxUnavailable must use the same units (absolute value or percentage). Example: when MaxSurge is set to 30%, new nodes can be provisioned immediately when the rolling update starts, such that the total number of old and new nodes does not exceed 130% of desired nodes. Once old nodes have been deleted, more new nodes can be provisioned, ensuring that the total number of nodes running at any time during the update is at most 130% of desired nodes. |
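Since both fields must use the same units, a percentage-based sketch of the rollingUpdate fragment (inside RosaUpdateConfig) might be:

```yaml
updateConfig:
  rollingUpdate:
    maxUnavailable: "10%"    # same units as maxSurge (percentage here)
    maxSurge: "30%"          # up to 130% of desired nodes during the update
```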
RosaMachinePoolAutoScaling
(Appears on:DefaultMachinePoolSpec, RosaMachinePoolSpec)
RosaMachinePoolAutoScaling specifies scaling options.
Field | Description
---|---
minReplicas int | |
maxReplicas int | |
RosaMachinePoolSpec
(Appears on:ROSAMachinePool)
RosaMachinePoolSpec defines the desired state of RosaMachinePool.
Field | Description
---|---
nodePoolName string | NodePoolName specifies the name of the nodepool in ROSA. It must be a valid DNS-1035 label, so it must consist of lower-case alphanumeric characters and have a maximum length of 15 characters. |
version string | (Optional) Version specifies the OpenShift version of the nodes associated with this machinepool. The ROSAControlPlane version is used if not set. |
availabilityZone string | (Optional) AvailabilityZone is an optional field specifying the availability zone where instances of this machine pool should run. For Multi-AZ clusters, you can create a machine pool in a single AZ of your choice. |
subnet string | (Optional) |
labels map[string]string | (Optional) Labels specifies labels for the Kubernetes node objects. |
taints []RosaTaint | (Optional) Taints specifies the taints to apply to the nodes of the machine pool. |
additionalTags Tags | (Optional) AdditionalTags are user-defined tags to be added on the underlying EC2 instances associated with this machine pool. |
autoRepair bool | (Optional) AutoRepair specifies whether health checks should be enabled for machines in the NodePool. The default is true. |
instanceType string | InstanceType specifies the AWS instance type. |
autoscaling RosaMachinePoolAutoScaling | (Optional) Autoscaling specifies auto scaling behaviour for this MachinePool. Required if Replicas is not configured. |
tuningConfigs []string | (Optional) TuningConfigs specifies the names of the tuning configs to be applied to this MachinePool. Tuning configs must already exist. |
additionalSecurityGroups []string | (Optional) AdditionalSecurityGroups is an optional set of security groups to associate with all node instances of the machine pool. |
providerIDList []string | (Optional) ProviderIDList contains a ProviderID for each machine instance that’s currently managed by this machine pool. |
nodeDrainGracePeriod Kubernetes meta/v1.Duration | (Optional) NodeDrainGracePeriod is the grace period for how long Pod Disruption Budget-protected workloads will be respected during upgrades. After this grace period, any workloads protected by Pod Disruption Budgets that have not been successfully drained from a node will be forcibly evicted. Valid values are from 0 to 1 week (10080m or 168h). A value of 0 or an empty value means that the MachinePool can be drained without any time limitation. |
updateConfig RosaUpdateConfig | (Optional) UpdateConfig specifies update configurations. |
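To tie these fields together, here is a minimal, hypothetical ROSAMachinePool manifest; names and sizes are placeholders:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: ROSAMachinePool
metadata:
  name: rosa-pool-0          # placeholder name
spec:
  nodePoolName: pool-0       # DNS-1035 label, max 15 characters
  instanceType: m5.xlarge
  autoRepair: true
  autoscaling:               # required here because replicas is not configured
    minReplicas: 1
    maxReplicas: 3
  taints:
    - key: dedicated         # see RosaTaint below
      value: team-a
      effect: NoSchedule
```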
RosaMachinePoolStatus
(Appears on:ROSAMachinePool)
RosaMachinePoolStatus defines the observed state of RosaMachinePool.
Field | Description
---|---
ready bool | Ready denotes that the RosaMachinePool nodepool has joined the cluster. |
replicas int32 | (Optional) Replicas is the most recently observed number of replicas. |
conditions Cluster API api/v1beta1.Conditions | (Optional) Conditions defines the current service state of the managed machine pool. |
failureMessage string | (Optional) FailureMessage will be set in the event that there is a terminal problem reconciling the state, and will be set to a descriptive error message. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the spec or the configuration of the controller, and that manual intervention is required. |
id string | ID is the ID given by ROSA. |
RosaTaint
(Appears on:RosaMachinePoolSpec)
RosaTaint represents a taint to be applied to a node.
Field | Description
---|---
key string | The taint key to be applied to a node. |
value string | (Optional) The taint value corresponding to the taint key. |
effect Kubernetes core/v1.TaintEffect | The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. |
RosaUpdateConfig
(Appears on:RosaMachinePoolSpec)
RosaUpdateConfig specifies the update configuration.
Field | Description
---|---
rollingUpdate RollingUpdate | (Optional) RollingUpdate specifies the MaxUnavailable and MaxSurge number of nodes during an update. |
SpotAllocationStrategy
(string alias)
(Appears on:InstancesDistribution)
SpotAllocationStrategy indicates how to allocate instances across Spot Instance pools.
SuspendProcessesTypes
(Appears on:AWSMachinePoolSpec)
SuspendProcessesTypes contains user-friendly, auto-completable values for suspended process names.
Field | Description
---|---
all bool | |
processes Processes | |
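As a sketch, and assuming the AWSMachinePool spec exposes this type under a suspendProcesses field, suspending individual ASG processes could look like:

```yaml
suspendProcesses:            # field name assumed from AWSMachinePoolSpec
  processes:
    azRebalance: true        # suspend AZ rebalancing
    scheduledActions: true   # suspend scheduled scaling actions
```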
Tags
(map[string]string alias)
Tags is a mapping for tags.
Taint
Taint defines the specs for a Kubernetes taint.
Field | Description
---|---
effect TaintEffect | Effect specifies the effect for the taint. |
key string | Key is the key of the taint. |
value string | Value is the value of the taint. |
TaintEffect
(string alias)
(Appears on:Taint)
TaintEffect is the effect for a Kubernetes taint.
Taints
([]sigs.k8s.io/cluster-api-provider-aws/v2/exp/api/v1beta2.Taint alias)
(Appears on:AWSManagedMachinePoolSpec)
Taints is an array of Taints.
UpdateConfig
(Appears on:AWSManagedMachinePoolSpec)
UpdateConfig is the configuration options for updating a nodegroup. Only one of MaxUnavailable and MaxUnavailablePercentage should be specified.
Field | Description
---|---
maxUnavailable int | (Optional) MaxUnavailable is the maximum number of nodes unavailable at once during a version update. Nodes will be updated in parallel. The maximum number is 100. |
maxUnavailablePercentage int | (Optional) MaxUnavailablePercentage is the maximum percentage of nodes unavailable during a version update. This percentage of nodes will be updated in parallel, up to 100 nodes at once. |
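For example, the percentage variant of the nodegroup update configuration (illustrative value; remember that only one of the two fields may be set):

```yaml
updateConfig:
  maxUnavailablePercentage: 25   # mutually exclusive with maxUnavailable
```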
Reference
Table of feature gates and their corresponding environment variables
Feature Gate | Environment Variable | Default
---|---|---
EKS | CAPA_EKS | true
EKSEnableIAM | CAPA_EKS_IAM | false
EKSAllowAddRoles | CAPA_EKS_ADD_ROLES | false
EKSFargate | EXP_EKS_FARGATE | false
MachinePool | EXP_MACHINE_POOL | false
EventBridgeInstanceState | EVENT_BRIDGE_INSTANCE_STATE | false
AutoControllerIdentityCreator | AUTO_CONTROLLER_IDENTITY_CREATOR | true
BootstrapFormatIgnition | EXP_BOOTSTRAP_FORMAT_IGNITION | false
ExternalResourceGC | EXP_EXTERNAL_RESOURCE_GC | false
AlternativeGCStrategy | EXP_ALTERNATIVE_GC_STRATEGY | false
TagUnmanagedNetworkResources | TAG_UNMANAGED_NETWORK_RESOURCES | true
ROSA | EXP_ROSA | false
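These environment variables are evaluated when the provider is initialized; one way to set them persistently is through the clusterctl configuration file. A sketch, assuming the default config location:

```yaml
# ~/.cluster-api/clusterctl.yaml (assumed default location)
CAPA_EKS: "true"           # EKS is on by default; shown for completeness
CAPA_EKS_IAM: "true"       # enable EKSEnableIAM
EXP_MACHINE_POOL: "true"   # enable MachinePool support
```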
Glossary
Table of Contents
A | B | C | D | E | H | I | K | L | M | N | O | P | R | S | T | W
A
Add-ons
Services beyond the fundamental components of Kubernetes.
- Core Add-ons: Addons that are required to deploy a Kubernetes-conformant cluster: DNS, kube-proxy, CNI.
- Additional Add-ons: Addons that are not required for a Kubernetes-conformant cluster (e.g. metrics/Heapster, Dashboard).
B
Bootstrap
The process of turning a server into a Kubernetes node. This may involve assembling data to provide when creating the server that backs the Machine, as well as runtime configuration of the software running on that server.
Bootstrap cluster
A temporary cluster that is used to provision a Target Management cluster.
Bootstrap provider
Refers to a provider that implements a solution for the bootstrap process. Bootstrap provider’s interaction with Cluster API is based on what is defined in the Cluster API contract.
See CABPK.
C
CAEP
Cluster API Enhancement Proposal - patterned after KEP. See template
CAPI
Cluster API
CAPA
Cluster API Provider AWS
CABPK
Cluster API Bootstrap Provider Kubeadm
CABPOCNE
Cluster API Bootstrap Provider Oracle Cloud Native Environment (OCNE)
CACPOCNE
Cluster API Control Plane Provider Oracle Cloud Native Environment (OCNE)
CAPC
Cluster API Provider CloudStack
CAPD
Cluster API Provider Docker
CAPDO
Cluster API Provider DigitalOcean
CAPG
Cluster API Google Cloud Provider
CAPH
Cluster API Provider Hetzner
CAPHV
Cluster API Provider Hivelocity
CAPIBM
Cluster API Provider IBM Cloud
CAPIM
Cluster API Provider In Memory
CAPIO
Cluster API Operator
CAPL
Cluster API Provider Akamai (Linode)
CAPM3
Cluster API Provider Metal3
CAPN
Cluster API Provider Nested
CAPX
Cluster API Provider Nutanix
CAPKK
Cluster API Provider KubeKey
CAPK
Cluster API Provider Kubevirt
CAPO
Cluster API Provider OpenStack
CAPOSC
Cluster API Provider Outscale
CAPOCI
Cluster API Provider Oracle Cloud Infrastructure (OCI)
CAPT
Cluster API Provider Tinkerbell
CAPV
Cluster API Provider vSphere
CAPVC
Cluster API Provider vcluster
CAPVCD
Cluster API Provider VMware Cloud Director
CAPZ
Cluster API Provider Azure
CAIPAMIC
Cluster API IPAM Provider In Cluster
Cloud provider
Or Cloud service provider
Refers to an information technology (IT) company that provides computing resources (e.g. AWS, Azure, Google, etc.).
Cluster
A full Kubernetes deployment. See Management Cluster and Workload Cluster.
ClusterClass
A collection of templates that define a topology (control plane and workers) to be used to continuously reconcile one or more Clusters. See ClusterClass
Cluster API
Or Cluster API project
The Cluster API sub-project of the SIG-cluster-lifecycle. It is also used to refer to the software components, APIs, and community that produce them.
See Core Cluster API, CAPI
Cluster API Runtime
The Cluster API execution model, a set of controllers cooperating in managing the Kubernetes cluster lifecycle.
Cluster Infrastructure
or Kubernetes Cluster Infrastructure
Defines the infrastructure that supports a Kubernetes cluster, e.g. VPC, security groups, load balancers, etc. Please note that in the context of managed Kubernetes some of those components are going to be provided by the corresponding abstraction for a specific Cloud provider (EKS, OKE, AKS, etc.), and thus Cluster API should not take care of managing a subset or all of those components.
Contract
Or Cluster API contract
Defines a set of rules a provider is expected to comply with in order to interact with Cluster API. Those rules can be in the form of CustomResourceDefinition (CRD) fields and/or expected behaviors to be implemented.
Control plane
The set of Kubernetes services that form the basis of a cluster. See also https://kubernetes.io/docs/concepts/#kubernetes-control-plane There are two variants:
- Self-provisioned: A Kubernetes control plane consisting of pods or machines wholly managed by a single Cluster API deployment.
- External or Managed: A control plane offered and controlled by some system other than Cluster API (e.g., GKE, AKS, EKS, IKS).
Control plane provider
Refers to a provider that implements a solution for the management of a Kubernetes control plane. Control plane provider’s interaction with Cluster API is based on what is defined in the Cluster API contract.
See KCP.
Core Cluster API
With “core” Cluster API we refer to the common set of API and controllers that are required to run any Cluster API provider.
Please note that in the Cluster API code base, side by side with the “core” Cluster API components, there is also a limited number of in-tree providers: CABPK, KCP, CAPD, CAPIM.
See Cluster API, CAPI.
Core provider
Refers to a provider that implements Cluster API core controllers
See Cluster API, CAPI.
Core controllers
The set of controllers in Core Cluster API.
See Cluster API, CAPI.
D
Default implementation
A feature implementation offered as part of the Cluster API project and maintained by the CAPI core team; For example KCP is a default implementation for a control plane provider.
E
External patch
Patch generated by an external component using Runtime SDK. Alternative to inline patch.
External patch extension
A runtime extension that implements a topology mutation hook.
H
Horizontal Scaling
The ability to add more machines based on policy and well-defined metrics. For example, add a machine to a cluster when CPU load average > (X) for a period of time (Y).
Host
see Server
I
Infrastructure provider
Refers to a provider that implements provisioning of infrastructure/computational resources required by the Cluster or by Machines (e.g. VMs, networking, etc.). Infrastructure provider’s interaction with Cluster API is based on what is defined in the Cluster API contract.
Clouds infrastructure providers include AWS, Azure, or Google; while VMware, MAAS, or metal3.io can be defined as bare metal providers. When there is more than one way to obtain resources from the same infrastructure provider (e.g. EC2 vs. EKS in AWS) each way is referred to as a variant.
For a complete list of providers see Provider Implementations.
Inline patch
A patch defined inline in a ClusterClass. An alternative to an external patch.
In-place mutable fields
Fields whose changes would only impact Kubernetes objects and/or controller behaviour, but would not in any way mutate the provider infrastructure or the software running on it. In-place mutable fields are propagated in place by CAPI controllers to avoid the more elaborate mechanics of a replace rollout. They include metadata, MinReadySeconds, NodeDrainTimeout, NodeVolumeDetachTimeout and NodeDeletionTimeout, but the list may be expanded in the future.
Instance
see Server
Immutability
A resource that does not mutate. In Kubernetes we often state the instance of a running pod is immutable or does not change once it is run. In order to make a change, a new pod is run. In the context of Cluster API we often refer to a running instance of a Machine as being immutable, from a Cluster API perspective.
IPAM provider
Refers to a provider that allows Cluster API to interact with IPAM solutions. IPAM provider’s interaction with Cluster API is based on the IPAddressClaim and IPAddress API types.
K
Kubernetes-conformant
Or Kubernetes-compliant
A cluster that passes the Kubernetes conformance tests.
k/k
Refers to the main Kubernetes git repository or the main Kubernetes project.
KCP
Kubeadm Control Plane Provider
L
Lifecycle hook
A Runtime Hook that allows external components to interact with the lifecycle of a Cluster.
See Implementing Lifecycle Hooks
M
Machine
Or Machine Resource
The Custom Resource for Kubernetes that represents a request to have a place to run kubelet.
See also: Server
Manage a cluster
Perform create, scale, upgrade, or destroy operations on the cluster.
Managed Kubernetes
Managed Kubernetes refers to any Kubernetes cluster provisioning and maintenance abstraction, usually exposed as an API, that is natively available in a Cloud provider. For example: EKS, OKE, AKS, GKE, IBM Cloud Kubernetes Service, DOKS, and many more throughout the Kubernetes Cloud Native ecosystem.
Managed Topology
See Topology
Management cluster
The cluster where one or more Infrastructure Providers run, and where resources (e.g. Machines) are stored. Typically referred to when you are provisioning multiple workload clusters.
Multi-tenancy
Multi tenancy in Cluster API defines the capability of an infrastructure provider to manage different credentials, each one of them corresponding to an infrastructure tenant.
Please note that up until v1alpha3 this concept had a different meaning, referring to the capability to run multiple instances of the same provider, each one with its own credentials; starting from v1alpha4 we are disambiguating the two concepts.
See also Support multiple instances.
N
Node pools
A node pool is a group of nodes within a cluster that all have the same configuration.
O
Operating system
Or OS
A generically understood combination of a kernel and system-level userspace interface, such as Linux or Windows, as opposed to a particular distribution.
P
Patch
A set of instructions describing modifications to a Kubernetes object. Examples include JSON Patch and JSON Merge Patch.
Pivot
Pivot is a process for moving the provider components and declared cluster-api resources from a Source Management cluster to a Target Management cluster.
The pivot process is also used for deleting a management cluster and could also be used during an upgrade of the management cluster.
Provider
Or Cluster API provider
This term was originally used as abbreviation for Infrastructure provider, but currently it is used to refer to any project that can be deployed and provides functionality to the Cluster API management Cluster.
See Bootstrap provider, Control plane provider, Core provider, Infrastructure provider, IPAM provider, Runtime extension provider.
Provider components
Refers to the YAML artifact published as part of the release process for providers; it usually includes Custom Resource Definitions (CRDs), Deployments (to run the controller manager), RBAC, etc.
In some cases, the same expression is used to refer to the instances of above components deployed in a management cluster.
Provider repository
Refers to the location where the YAML for provider components is hosted; usually a provider repository hosts many versions of provider components, one for each released version.
R
Runtime Extension
An external component which is part of a system built on top of Cluster API that can handle requests for a specific Runtime Hook.
See Runtime SDK
Runtime Extension provider
Refers to a provider that implements one or more runtime extensions. Runtime Extension provider’s interaction with Cluster API are based on the Open API spec for runtime hooks.
Runtime Hook
A single, well identified, extension point allowing applications built on top of Cluster API to hook into specific moments of the Cluster API Runtime, e.g. BeforeClusterUpgrade, TopologyMutationHook.
See Runtime SDK
Runtime SDK
A developer toolkit required to build Runtime Hooks and Runtime Extensions.
See Runtime SDK
S
Scaling
Unless otherwise specified, this refers to horizontal scaling.
Stacked control plane
A control plane node where etcd is colocated with the Kubernetes API server, and is running as a static pod.
Server
The infrastructure that backs a Machine Resource, typically either a cloud instance, virtual machine, or physical host.
T
Topology
A field in the Cluster object spec that allows defining and managing the shape of the Cluster’s control plane and worker machines from a single point of control. The Cluster’s topology is based on a ClusterClass. Sometimes it is also referred as a managed topology.
See ClusterClass
Topology Mutation Hook
A Runtime Hook that allows external components to generate patches for customizing Kubernetes objects that are part of a Cluster topology.
W
Workload Cluster
A cluster created by a Cluster API controller, which is not a bootstrap cluster, and is meant to be used by end-users, as opposed to by CAPI tooling.
WorkerClass
A collection of templates that define a set of worker nodes in the cluster. A ClusterClass contains zero or more WorkerClass definitions.
See ClusterClass
Ports used by CAPA
Name | Port Number | Description
---|---|---
metrics | 8080 | Port that exposes the metrics. This can be customized by setting the --metrics-bind-addr flag when starting the manager. The default is to listen only on localhost:8080.
webhook | 9443 | Webhook server port. To disable this, set the --webhook-port flag to 0.
health | 9440 | Port that exposes the health endpoint. This can be customized by setting the --health-addr flag when starting the manager.
profiler | | Exposes the pprof profiler. Not configured by default; it can be enabled by setting the --profiler-address flag, e.g. --profiler-address 6060.
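The flags above are passed to the controller manager binary; a hypothetical fragment of the capa-controller-manager container spec overriding the defaults might look like:

```yaml
# Illustrative container args; only the flag names from the table are real.
args:
  - --metrics-bind-addr=localhost:8080   # default: localhost only
  - --webhook-port=9443                  # set to 0 to disable the webhook server
  - --health-addr=:9440
  - --profiler-address=localhost:6060    # pprof; disabled unless set
```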
Jobs
This document provides an overview of the jobs running via Prow, GitHub Actions and Google Cloud Build.
Builds and Tests running on the main branch
NOTE: To see which test jobs execute which tests or e2e tests, you can click on the links, which lead to the respective test overviews in TestGrid.
Presubmits
Prow Presubmits:
- pull-cluster-api-provider-aws-test
./scripts/ci-test.sh
- pull-cluster-api-provider-aws-build
./scripts/ci-build.sh
- pull-cluster-api-provider-aws-verify
make verify
- pull-cluster-api-provider-aws-e2e-conformance
./scripts/ci-conformance.sh
- pull-cluster-api-provider-aws-e2e-conformance-with-ci-artifacts
./scripts/ci-conformance.sh
- E2E_ARGS: -kubetest.use-ci-artifacts
- pull-cluster-api-provider-aws-e2e-blocking
./scripts/ci-e2e.sh
- GINKGO_FOCUS: [PR-Blocking]
- pull-cluster-api-provider-aws-e2e
./scripts/ci-e2e.sh
- pull-cluster-api-provider-aws-e2e-eks
./scripts/ci-e2e-eks.sh
Postsubmits
Prow Postsubmits:
- ci-cluster-api-provider-aws-e2e
./scripts/ci-e2e.sh
- ci-cluster-api-provider-aws-eks-e2e
./scripts/ci-e2e-eks.sh
- ci-cluster-api-provider-aws-e2e-conformance
./scripts/ci-conformance.sh
- post-cluster-api-provider-aws-push-images Google Cloud Build:
make release-staging
Periodics
Prow Periodics:
- periodic-cluster-api-provider-aws-e2e
./scripts/ci-e2e.sh
- periodic-cluster-api-provider-aws-eks-e2e
./scripts/ci-e2e-eks.sh
- periodic-cluster-api-provider-aws-e2e-conformance
./scripts/ci-conformance.sh
- periodic-cluster-api-provider-aws-e2e-conformance-with-k8s-ci-artifacts
./scripts/ci-conformance.sh
- E2E_ARGS: -kubetest.use-ci-artifacts
- periodic-cluster-api-provider-aws-coverage
./scripts/ci-test-coverage.sh
- cluster-api-provider-aws-push-images-nightly Google Cloud Build:
make release-staging-nightly
CAPA Version Support
Release Versioning
CAPA follows the semantic versioning specification:
MAJOR version releases are for incompatible API changes, MINOR version releases are for backwards-compatible feature additions, and PATCH version releases are for bug fixes only.
Example versions:
- Minor release:
v0.1.0
- Patch release:
v0.1.1
- Major release:
v1.0.0
Compatibility with Cluster API Versions
CAPA’s versions are compatible with the following versions of Cluster API:
API Version | Cluster API v1alpha3 (v0.3) | Cluster API v1alpha4 (v0.4)
---|---|---
AWS Provider v1alpha3 (v0.6) | ✓ | ☓
AWS Provider v1alpha4 (v0.7) | ☓ | ✓
CAPA v1beta1 versions are not released in lock-step with Cluster API releases. Multiple CAPA minor releases can use the same Cluster API minor release.
For compatibility, check the release notes here to see which v1beta1 Cluster API version each CAPA version is compatible with.
For example:
- CAPA v1.0.x, v1.1.x, v1.2.x is compatible with Cluster API v1.0.x
- CAPA v1.3.x is compatible with Cluster API v1.1.x
End-of-Life Timeline
The CAPA team maintains branches for v1.x (v1beta1), v0.7 (v1alpha4), and v0.6 (v1alpha3).
CAPA branches follow their compatible Cluster API branch EOL date.
API Version | Branch | Supported Until
---|---|---
v1alpha4 | release-0.7 | 2022-04-06
v1alpha3 | release-0.6 | 2022-02-23
Compatibility with Kubernetes Versions
CAPA API versions support all Kubernetes versions that are supported by their compatible Cluster API version:
API Versions | CAPI v1alpha3 (v0.3) | CAPI v1alpha4 (v0.4) | CAPI v1beta1 (v1.x)
---|---|---|---
CAPA v1alpha3 (v0.6) | ✓ | ☓ | ☓
CAPA v1alpha4 (v0.7) | ☓ | ✓ | ☓
CAPA v1beta1 (v1.x) | ☓ | ☓ | ✓
(See Kubernetes support matrix of Cluster API versions).
Contributing guidelines
Sign the CLA
Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests. Please see https://git.k8s.io/community/CLA.md for more info.
Contributing A Patch
1. Submit an issue describing your proposed change to the repo in question.
2. The repo owners will respond to your issue promptly.
3. If your proposed change is accepted, and you haven’t already done so, sign a Contributor License Agreement (see details above).
4. Fork the desired repo, then develop and test your code changes. See the developer guide on how to set up your development environment.
5. Submit a pull request.
Contributor Ladder
We broadly follow the requirements from the Kubernetes Community Membership.
When making changes to OWNER_ALIASES please check that the sig-cluster-lifecycle-leads, cluster-api-admins and cluster-api-maintainers are correct.
Becoming a reviewer
If you would like to become a reviewer, then please ask one of the current maintainers.
We generally try to follow the requirements for a reviewer from upstream Kubernetes. But if you feel that you don’t fully meet the requirements, then reach out to us; they are not set in stone.
A reviewer can get PRs automatically assigned for review, and can /lgtm PRs.
To become a reviewer, ensure you are a member of the kubernetes-sigs GitHub organisation by following https://github.com/kubernetes/org/issues/new/choose.
The steps to add someone as a reviewer are:
- Add the GitHub alias to the cluster-api-aws-reviewers section of OWNERS_ALIASES
- Create a PR with the change that is held (i.e. by using /hold)
- Announce the change within the CAPA Slack channel and as a PSA in the next CAPA office hours
- After 7 days of lazy consensus or after the next CAPA office hours (whichever is longer) the PR can be merged
Becoming a maintainer
If you have made significant contributions to Cluster API Provider AWS, a maintainer may nominate you to become a maintainer for the project.
We generally follow the requirements for an approver from upstream Kubernetes. However, if you don’t fully meet the requirements, a quorum of maintainers may still propose you if they feel you will make significant contributions.
Maintainers are able to approve PRs, participate in release processes, and have write access to the repo. As a maintainer you will be expected to run the office hours, especially if no one else wants to.
Maintainers require membership of the Kubernetes GitHub organisation via https://github.com/kubernetes/org/issues/new/choose.
The steps to add someone as a maintainer are:
- Add the GitHub alias to the cluster-api-aws-maintainers section of OWNERS_ALIASES and remove it from the cluster-api-aws-reviewers section
- Create a PR with the change that is held (i.e. by using /hold)
- Announce the change within the CAPA Slack channel and as a PSA in the next CAPA office hours
- After 7 days of lazy consensus or after the next CAPA office hours (whichever is longer) the PR can be merged
- Open a PR to add the GitHub username to cluster-api-provider-aws-maintainers in https://github.com/kubernetes/org/blob/main/config/kubernetes-sigs/sig-cluster-lifecycle/teams.yaml
- Open a PR to add the GitHub username to https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-sigs/cluster-api-provider-aws/OWNERS
- Open a PR to add the GitHub username to https://github.com/kubernetes/k8s.io/blob/main/k8s.gcr.io/images/k8s-staging-cluster-api-aws/OWNERS
- Open a PR to add the Google ID to the k8s-infra-staging-cluster-api-aws@kubernetes.io Google group in https://github.com/kubernetes/k8s.io/blob/main/groups/groups.yaml
Becoming an admin
After a period of time one of the existing CAPA or CAPI admins may propose you to become an admin of the CAPA project.
Admins have GitHub admin access to perform tasks on the repo.
The steps to add someone as an admin are:
- Add the GitHub alias to the cluster-api-aws-admins section of OWNERS_ALIASES
- Create a PR with the change that is held (i.e. by using /hold)
- Announce the change within the CAPA Slack channel and as a PSA in the next CAPA office hours
- After 7 days of lazy consensus or after the next CAPA office hours (whichever is longer) the PR can be merged
- Open a PR to add the GitHub username to cluster-api-provider-aws-admins in https://github.com/kubernetes/org/blob/main/config/kubernetes-sigs/sig-cluster-lifecycle/teams.yaml
Cluster API Provider AWS Roadmap
This roadmap is a constant work in progress, subject to frequent revision. Dates are approximations.
v1.5.x (v1beta1) - April/May 2022
- Network load balancer support
- Graduating EventBridge experimental feature
- EFS CSI driver support
- AWSManagedMachinePool - Launch Template support
v1.6.x (v1beta1) - June/July 2022
- Spot instance support for AWSMachinePools
- Node draining support for AWSMachinePools
- IPv6 Support
- Security group customization support
v2.0.x (v1beta2) - End of 2022
TBD
- AWS Fault injector integration to improve resiliency
- AWSMachinePool implementation backed by Spot Fleet and EC2 Fleet
- Dual stack IPv4/IPv6 support
- Windows Worker Node Support for Windows Server 2019/2022 for both CAPA-managed and EKS-managed Clusters
- FIPS/NIST/STIG compliance
- Workload identity support to CAPA-managed clusters
- Use ACK/CrossPlane as backend for AWS SDK calls
- Karpenter support
- Draining resources created by CCM/CSI like LBs, SGs
- OpenTelemetry integration