# Creating a ROSA cluster

## Permissions

### Authentication using service account credentials
The CAPA controller requires service account credentials to be able to provision ROSA clusters:
- Visit https://console.redhat.com/iam/service-accounts and create a new service account.

- Create a new kubernetes secret with the service account credentials to be referenced later by `ROSAControlPlane`:

  ```shell
  kubectl create secret generic rosa-creds-secret \
    --from-literal=ocmClientID='....' \
    --from-literal=ocmClientSecret='eyJhbGciOiJIUzI1NiIsI....' \
    --from-literal=ocmApiUrl='https://api.openshift.com'
  ```

  Note: to consume the secret without the need to reference it from your `ROSAControlPlane`, name your secret `rosa-creds-secret` and create it in the CAPA manager namespace (usually `capa-system`):

  ```shell
  kubectl -n capa-system create secret generic rosa-creds-secret \
    --from-literal=ocmClientID='....' \
    --from-literal=ocmClientSecret='eyJhbGciOiJIUzI1NiIsI....' \
    --from-literal=ocmApiUrl='https://api.openshift.com'
  ```
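  To confirm the secret is in place before the controller reconciles a `ROSAControlPlane`, you can list its keys; a quick sanity check, assuming the default `rosa-creds-secret` name in `capa-system` as above:

  ```shell
  # Print the data keys of the credentials secret; expect ocmClientID,
  # ocmClientSecret and ocmApiUrl to be present
  kubectl -n capa-system get secret rosa-creds-secret -o jsonpath='{.data}'
  ```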
### Authentication using SSO offline token (DEPRECATED)
The SSO offline token is deprecated; it is recommended to use service account credentials instead, as described above.
- Visit https://console.redhat.com/openshift/token to retrieve your SSO offline authentication token.

- Create a credentials secret within the target namespace with the token to be referenced later by `ROSAControlPlane`:

  ```shell
  kubectl create secret generic rosa-creds-secret \
    --from-literal=ocmToken='eyJhbGciOiJIUzI1NiIsI....' \
    --from-literal=ocmApiUrl='https://api.openshift.com'
  ```
Alternatively, you can edit the CAPA controller deployment to provide the credentials:

```shell
kubectl edit deployment -n capa-system capa-controller-manager
```

and add the following environment variables to the manager container:

```yaml
env:
- name: OCM_TOKEN
  value: "<token>"
- name: OCM_API_URL
  value: "https://api.openshift.com" # or https://api.stage.openshift.com
```
### Migration from offline token to service account authentication
- Visit https://console.redhat.com/iam/service-accounts and create a new service account.

- If you previously used a kubernetes secret to specify the OCM credentials, edit the secret:

  ```shell
  kubectl edit secret rosa-creds-secret
  ```

  Remove the `ocmToken` credentials and add base64-encoded `ocmClientID` and `ocmClientSecret` credentials (see the encoding sketch after this list) like so:

  ```yaml
  apiVersion: v1
  data:
    ocmApiUrl: aHR0cHM6Ly9hcGkub3BlbnNoaWZ0LmNvbQ==
    ocmClientID: Y2xpZW50X2lk...
    ocmClientSecret: Y2xpZW50X3NlY3JldA==...
  kind: Secret
  type: Opaque
  ```
- If you previously used the CAPA manager deployment to specify the OCM offline token as an environment variable, edit the manager deployment:

  ```shell
  kubectl -n capa-system edit deployment capa-controller-manager
  ```

  Remove the `OCM_TOKEN` and `OCM_API_URL` variables, then restart the deployment:

  ```shell
  kubectl -n capa-system rollout restart deploy capa-controller-manager
  ```

  Then create the new default secret in the `capa-system` namespace with:

  ```shell
  kubectl -n capa-system create secret generic rosa-creds-secret \
    --from-literal=ocmClientID='....' \
    --from-literal=ocmClientSecret='eyJhbGciOiJIUzI1NiIsI....' \
    --from-literal=ocmApiUrl='https://api.openshift.com'
  ```
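When editing the secret by hand, the values under `data` must be base64 encoded first. A minimal sketch of producing them (the placeholder strings stand in for your actual service account credentials):

```shell
# Base64-encode the service account credentials for pasting into `kubectl edit secret`
echo -n '<client_id>' | base64
echo -n '<client_secret>' | base64
echo -n 'https://api.openshift.com' | base64
```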
## Prerequisites
Follow the guide here up until Step 3 to install the required tools and set up the prerequisite infrastructure. Once Step 3 is done, you will be ready to proceed with creating a ROSA cluster using cluster-api.
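The environment variables in the next section assume the `rosa` CLI steps from that guide have already been run. A rough recap of the commands involved, as a sketch only (exact flags depend on your `rosa` CLI version and whether you run them interactively; the linked guide is authoritative):

```shell
# Log in to OCM, then create the OIDC config, account roles and operator roles
# that the environment variables below refer to
rosa login
rosa create oidc-config
rosa create account-roles --prefix ManagedOpenShift-HCP
rosa create operator-roles --prefix capi-rosa-quickstart
```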
## Creating the cluster
- Prepare the environment:

  ```shell
  export OPENSHIFT_VERSION="4.14.5"
  export AWS_REGION="us-west-2"
  export AWS_AVAILABILITY_ZONE="us-west-2a"
  export AWS_ACCOUNT_ID="<account_id>"
  export AWS_CREATOR_ARN="<user_arn>" # can be retrieved e.g. using `aws sts get-caller-identity`
  export OIDC_CONFIG_ID="<oidc_id>" # OIDC config id created previously with `rosa create oidc-config`
  export ACCOUNT_ROLES_PREFIX="ManagedOpenShift-HCP" # prefix used to create account IAM roles with `rosa create account-roles`
  export OPERATOR_ROLES_PREFIX="capi-rosa-quickstart" # prefix used to create operator roles with `rosa create operator-roles --prefix <PREFIX_NAME>`

  # subnet IDs created earlier
  export PUBLIC_SUBNET_ID="subnet-0b54a1111111111111"
  export PRIVATE_SUBNET_ID="subnet-05e72222222222222"
  ```
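  The account ID and creator ARN can be filled in directly from `aws sts get-caller-identity`; a convenience sketch, assuming your current AWS CLI identity is the one that will create the cluster:

  ```shell
  # Derive the account ID and caller ARN from the active AWS credentials
  export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
  export AWS_CREATOR_ARN="$(aws sts get-caller-identity --query Arn --output text)"
  ```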
- Render the cluster manifest using the ROSA cluster template:

  ```shell
  clusterctl generate cluster <cluster-name> --from templates/cluster-template-rosa.yaml > rosa-capi-cluster.yaml
  ```

  Note: The AWS role name must be no more than 64 characters in length, otherwise an error will be returned. Truncate values exceeding 64 characters.
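  If you want to check which variables the template consumes before rendering it (useful for spotting a missed export above), `clusterctl` can list them:

  ```shell
  # List the variables required by the ROSA cluster template
  clusterctl generate cluster <cluster-name> \
    --from templates/cluster-template-rosa.yaml --list-variables
  ```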
- If a credentials secret was created earlier, edit `ROSAControlPlane` to reference it:

  ```yaml
  apiVersion: controlplane.cluster.x-k8s.io/v1beta2
  kind: ROSAControlPlane
  metadata:
    name: "capi-rosa-quickstart-control-plane"
  spec:
    credentialsSecretRef:
      name: rosa-creds-secret
    ...
  ```
- Provide an AWS identity reference:

  ```yaml
  apiVersion: controlplane.cluster.x-k8s.io/v1beta2
  kind: ROSAControlPlane
  metadata:
    name: "capi-rosa-quickstart-control-plane"
  spec:
    identityRef:
      kind: <IdentityType>
      name: <IdentityName>
    ...
  ```

  Otherwise, make sure the following `AWSClusterControllerIdentity` singleton exists in your management cluster:

  ```yaml
  apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
  kind: AWSClusterControllerIdentity
  metadata:
    name: "default"
  spec:
    allowedNamespaces: {} # matches all namespaces
  ```

  See Multi-tenancy for more details.
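  Whether the singleton already exists can be checked from the management cluster; a quick look, assuming the CAPA CRDs are installed:

  ```shell
  # Show the default AWSClusterControllerIdentity if it exists
  kubectl get awsclustercontrolleridentity default -o yaml
  ```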
- Finally, apply the manifest to create your ROSA cluster:

  ```shell
  kubectl apply -f rosa-capi-cluster.yaml
  ```

See the ROSAControlPlane CRD Reference for all possible configurations.
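Once applied, progress can be followed from the management cluster. A few commands that are handy here, assuming the `<cluster-name>` and control plane name used in the examples above:

```shell
# Watch the Cluster object until it reports Ready
kubectl get cluster <cluster-name> -w

# Inspect the ROSA control plane status and conditions
kubectl get rosacontrolplane capi-rosa-quickstart-control-plane -o yaml

# Show the full object tree for the cluster
clusterctl describe cluster <cluster-name>
```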