# Cluster API v1.1 compared to v1.2
This document provides an overview of relevant changes between Cluster API v1.1 and v1.2 for maintainers of providers and consumers of our Go API.
## Minimum Kubernetes version for the management cluster
- The minimum Kubernetes version that can be used for a management cluster is now 1.20.0
- The minimum Kubernetes version that can be used for a management cluster with ClusterClass is now 1.22.0
NOTE: Compliance with the minimum Kubernetes version is enforced both by clusterctl and when the CAPI controllers start.
## Minimum Go version
- The Go version used by Cluster API is now Go 1.18.x
- If you are using the gcb-docker-gcloud image in cloudbuild, bump to an image which is using Go 1.18, e.g. `gcr.io/k8s-staging-test-infra/gcb-docker-gcloud:v20220609-2e4c91eb7e`.
## Dependencies
Note: Only the most relevant dependencies are listed; `k8s.io/*` and ginkgo/gomega dependencies in Cluster API are kept in sync with the versions used by `sigs.k8s.io/controller-runtime`.
- sigs.k8s.io/controller-runtime: v0.11.x => v0.12.3
- sigs.k8s.io/controller-tools: v0.8.x => v0.9.x
- sigs.k8s.io/kind: v0.11.x => v0.14.x
- k8s.io/*: v0.23.x => v0.24.x (derived from controller-runtime)
- github.com/onsi/gomega: v1.17.0 => v1.18.1 (derived from controller-runtime)
- k8s.io/kubectl: v0.23.5 => v0.24.0
## Changes by Kind
### Deprecation
- `util.MachinesByCreationTimestamp` has been deprecated and will be removed in a future release.
- The `topology.cluster.x-k8s.io/managed-field-paths` annotation has been deprecated and will be removed in a future release.
- The `experimentalRetryJoin` field in the KubeadmConfig and, as they compose the same types, KubeadmConfigTemplate, KubeadmControlPlane and KubeadmControlPlaneTemplate, has been deprecated and will be removed in a future release.
### Removals
- The `third_party/kubernetes-drain` package has been removed, as we're now using `k8s.io/kubectl/pkg/drain` instead (PR).
- `util/version.CompareWithBuildIdentifiers` has been removed, please use `util/version.Compare(a, b, WithBuildTags())` instead.
- The functions `annotations.HasPausedAnnotation` and `annotations.HasSkipRemediationAnnotation` have been removed, please use `annotations.HasPaused` and `annotations.HasSkipRemediation` respectively instead.
- `ObjectMeta.ClusterName` has been removed from `k8s.io/apimachinery/pkg/apis/meta/v1`.
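As a rough illustration, migrating call sites of the removed helpers could look like the following sketch; the surrounding call sites (`currentVersion`, `desiredVersion`, the paused check) are hypothetical and only show the before/after shape:

```diff
- if version.Compare(currentVersion, desiredVersion) != 0 {
+ if version.Compare(currentVersion, desiredVersion, version.WithBuildTags()) != 0 {
  	// Versions differ (illustrative call site).
  }

- if annotations.HasPausedAnnotation(cluster) {
+ if annotations.HasPaused(cluster) {
  	return ctrl.Result{}, nil
  }
```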
## golang API Changes
- `util.ClusterToInfrastructureMapFuncWithExternallyManagedCheck` was removed and the externally managed check was added to `util.ClusterToInfrastructureMapFunc`, which required changing its signature. Users of the former simply need to start using the latter, and users of the latter need to add the new arguments to their call.
- `conditions.NewPatch` from the `sigs.k8s.io/cluster-api/util/conditions` package has had its return type modified. Previously the function returned `Patch`; it now returns `(Patch, error)`. Users of `NewPatch` need to update usages to handle the error.
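For the `conditions.NewPatch` change, updating a caller might look like this sketch (variable names and the error-wrapping style are illustrative, not prescribed):

```diff
- patch := conditions.NewPatch(before, after)
+ patch, err := conditions.NewPatch(before, after)
+ if err != nil {
+ 	return errors.Wrap(err, "failed to calculate the patch for conditions")
+ }
```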
## Required API Changes for providers
- ClusterClass and managed topologies are now using Server Side Apply to properly manage other controllers like CAPA/CAPZ co-authoring slices, see #6320. In order to take advantage of this feature providers are required to add markers to their API types as described in merge-strategy. NOTE: the change will cause a rollout on existing clusters created with ClusterClass.

E.g. in CAPA:

```go
// +optional
Subnets Subnets `json:"subnets,omitempty"`
```

must be modified into:

```go
// +optional
// +listType=map
// +listMapKey=id
Subnets Subnets `json:"subnets,omitempty"`
```
- The Server Side Apply implementation in ClusterClass and managed topologies requires dry-running changes on templates. If infrastructure or bootstrap providers have implemented immutability checks in their InfrastructureMachineTemplate or BootstrapConfigTemplate webhooks, it is required to implement the following changes in order to prevent dry-run from returning errors. The implementation requires `sigs.k8s.io/controller-runtime` in version `>= v0.12.3`.

E.g. in CAPD the following changes should be applied to the DockerMachineTemplate webhook:

```diff
+ type DockerMachineTemplateWebhook struct{}
+
+ func (m *DockerMachineTemplateWebhook) SetupWebhookWithManager(mgr ctrl.Manager) error {
- func (m *DockerMachineTemplate) SetupWebhookWithManager(mgr ctrl.Manager) error {
  	return ctrl.NewWebhookManagedBy(mgr).
-		For(m).
+		For(&DockerMachineTemplate{}).
+		WithValidator(m).
  		Complete()
  }

  // +kubebuilder:webhook:verbs=create;update,path=/validate-infrastructure-cluster-x-k8s-io-v1beta1-dockermachinetemplate,mutating=false,failurePolicy=fail,matchPolicy=Equivalent,groups=infrastructure.cluster.x-k8s.io,resources=dockermachinetemplates,versions=v1beta1,name=validation.dockermachinetemplate.infrastructure.cluster.x-k8s.io,sideEffects=None,admissionReviewVersions=v1;v1beta1

+ var _ webhook.CustomValidator = &DockerMachineTemplateWebhook{}
- var _ webhook.Validator = &DockerMachineTemplate{}

+ func (*DockerMachineTemplateWebhook) ValidateCreate(ctx context.Context, _ runtime.Object) error {
- func (m *DockerMachineTemplate) ValidateCreate() error {
  	...
  }

+ func (*DockerMachineTemplateWebhook) ValidateUpdate(ctx context.Context, oldRaw runtime.Object, newRaw runtime.Object) error {
+ 	newObj, ok := newRaw.(*DockerMachineTemplate)
+ 	if !ok {
+ 		return apierrors.NewBadRequest(fmt.Sprintf("expected a DockerMachineTemplate but got a %T", newRaw))
+ 	}
- func (m *DockerMachineTemplate) ValidateUpdate(oldRaw runtime.Object) error {
  	oldObj, ok := oldRaw.(*DockerMachineTemplate)
  	if !ok {
  		return apierrors.NewBadRequest(fmt.Sprintf("expected a DockerMachineTemplate but got a %T", oldRaw))
  	}
+ 	req, err := admission.RequestFromContext(ctx)
+ 	if err != nil {
+ 		return apierrors.NewBadRequest(fmt.Sprintf("expected a admission.Request inside context: %v", err))
+ 	}
  	...
  	// Immutability check
+ 	if !topology.ShouldSkipImmutabilityChecks(req, newObj) &&
+ 		!reflect.DeepEqual(newObj.Spec.Template.Spec, oldObj.Spec.Template.Spec) {
- 	if !reflect.DeepEqual(m.Spec.Template.Spec, old.Spec.Template.Spec) {
  		allErrs = append(allErrs, field.Invalid(field.NewPath("spec", "template", "spec"), m, dockerMachineTemplateImmutableMsg))
  	}
  	...
  }

+ func (*DockerMachineTemplateWebhook) ValidateDelete(ctx context.Context, _ runtime.Object) error {
- func (m *DockerMachineTemplate) ValidateDelete() error {
  	...
  }
```
NOTES:
- We are introducing a `DockerMachineTemplateWebhook` struct because we are going to use a controller runtime `CustomValidator`. This allows skipping the immutability check only when the topology controller is dry running, while preserving the validation behaviour for all other cases.
- By using `CustomValidators` it is possible to move webhooks to other packages, thus removing some controller runtime dependency from the API types. However, choosing to do so or not is up to the provider implementers and independent of this change.
## Other
- Logging: to align with the upstream Kubernetes community, CAPI now configures logging via `component-base/logs`. This provides advantages like support for the JSON logging format (via `--logging-format=json`) and automatic deprecation of klog flags aligned to the upstream Kubernetes deprecation period. `main.go` diff:

```diff
  import (
  	...
+ 	"k8s.io/component-base/logs"
+ 	_ "k8s.io/component-base/logs/json/register"
  )

  var (
  	...
+ 	logOptions = logs.NewOptions()
  )

  func init() {
- 	klog.InitFlags(nil)

  func InitFlags(fs *pflag.FlagSet) {
+ 	logs.AddFlags(fs, logs.SkipLoggingConfigurationFlags())
+ 	logOptions.AddFlags(fs)

  func main() {
  	...
  	pflag.Parse()

+ 	if err := logOptions.ValidateAndApply(); err != nil {
+ 		setupLog.Error(err, "unable to start manager")
+ 		os.Exit(1)
+ 	}
+
+ 	// klog.Background will automatically use the right logger.
+ 	ctrl.SetLogger(klog.Background())
- 	ctrl.SetLogger(klogr.New())
```

This change has been introduced in CAPI in the following PRs: #6072, #6190, #6602. Note: this change is not mandatory for providers, but highly recommended.
- The following E2E framework functions now check that machines are created in the expected failure domain (if defined); all E2E tests can now verify failure domains too:
  - `ApplyClusterTemplateAndWait`
  - `WaitForControlPlaneAndMachinesReady`
  - `DiscoveryAndWaitForMachineDeployments`
- The `AssertControlPlaneFailureDomains` function in the E2E test framework has been modified to allow proper failure domain testing.
- After investigating an issue we discovered that improper implementation of a check on `cluster.status.infrastructureReady` can lead to problems during cluster deletion. As a consequence, we recommend that all providers ensure:
  - The check for `cluster.status.infrastructureReady=true` usually existing at the beginning of the reconcile loop for control-plane providers is implemented after setting external objects refs;
  - The check for `cluster.status.infrastructureReady=true` usually existing at the beginning of the reconcile loop for infrastructure providers does not prevent the object from being deleted.

  See PR #6183 for an example.
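A minimal sketch of the second recommendation, assuming a hypothetical `FooMachine` infrastructure provider with a `reconcileDelete` helper: the deletion branch is moved before the `infrastructureReady` guard, so a Cluster that never became ready can no longer block deletion of the infrastructure object.

```diff
  func (r *FooMachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
  	...
+ 	// Handle deleted machines first, so that a Cluster whose infrastructure
+ 	// never became ready does not block deletion of the object.
+ 	if !fooMachine.ObjectMeta.DeletionTimestamp.IsZero() {
+ 		return r.reconcileDelete(ctx, fooMachine)
+ 	}
+
  	if !cluster.Status.InfrastructureReady {
  		return ctrl.Result{}, nil
  	}
- 	if !fooMachine.ObjectMeta.DeletionTimestamp.IsZero() {
- 		return r.reconcileDelete(ctx, fooMachine)
- 	}
  	...
  }
```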
- CAPI added support for the new control plane label and taint introduced by Kubernetes v1.24 with PR #5919. Providers should tolerate both the `control-plane` and `master` taints for compatibility with v1.24 control planes. Further, if they use the label in their `manager.yaml`, it should be adjusted, since v1.24 only adds the `node-role.kubernetes.io/control-plane` label. An example of such an accommodation can be seen in the capi-provider-aws `manager.yaml`.
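As a sketch of such a `manager.yaml` fragment (the exact pod spec layout depends on the provider), tolerating both taints could look like:

```yaml
# Tolerate both the legacy and the new control plane taint, so the manager
# can be scheduled on both pre-v1.24 and v1.24 control plane nodes.
tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```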