Release notes

1.13 Bugfix Release

February 21, 2019 - canonical-kubernetes-435

Fixes

  • Fixed Docker failing to start when docker_runtime is set to nvidia (Issue)
  • Fixed snapd_refresh charm option conflict (Issue)

CVE-2018-18264

January 10, 2019

What happened

  • A security vulnerability was found in the Kubernetes dashboard that affected all versions of the dashboard.

A new dashboard version, v1.10.1, was released to address this vulnerability. This includes an important change to logging in to the dashboard: the Skip button has been removed from the login page, and a username and password are now required. The easiest way to log in to the dashboard is to select your ~/.kube/config file and use the credentials from there.
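
For example, on a typical CDK deployment the config file containing credentials can be copied down from a master unit (a sketch; the remote path and unit name assume a standard deployment):

juju scp kubernetes-master/0:config ~/.kube/config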

1.13 Release Notes

December 10, 2018

What’s new

  • LDAP and Keystone support

Added support for LDAP-based authentication and authorisation via Keystone. Please read the documentation for details on how to enable this.
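
In rough outline, enabling this involves deploying Keystone (with its database) into the model and relating it to kubernetes-master. This is only a sketch; the authoritative charm, option, and relation names are in the documentation:

juju deploy keystone
juju deploy percona-cluster keystone-db
juju add-relation keystone:shared-db keystone-db:shared-db
juju add-relation keystone kubernetes-master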

  • Vault PKI support

Added support for using Vault for PKI in place of EasyRSA. Vault is more secure and robust than EasyRSA and supports more advanced features for certificate management. See the documentation for details of how to add Vault to CDK and configure it as a root or intermediate CA.
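
In outline, Vault takes EasyRSA's place on the certificates relations. A sketch, assuming Vault is backed by a suitable database and has been initialised and unsealed as described in the documentation:

juju deploy vault
juju add-relation vault:certificates kubernetes-master
juju add-relation vault:certificates kubernetes-worker
juju add-relation vault:certificates etcd
juju add-relation vault:certificates kubeapi-load-balancer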

  • Encryption-at-rest support using Vault

Added support for encryption-at-rest for cluster secrets, leveraging Vault for data protection. This ensures that even the keys used to encrypt the data are protected at rest, unlike many configurations of encryption-at-rest for Kubernetes. Please see the documentation for further details.
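
A sketch of the additional relation this uses, assuming Vault is deployed as above (the endpoint names here are an assumption based on the charms' published interfaces; verify against the documentation):

juju add-relation kubernetes-master:vault-kv vault:secrets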

  • Private Docker registry support

Added support for the Docker Registry charm to provide Docker images to cluster components without requiring access to public registries. Full instructions on using this feature are in the documentation.
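
In rough outline (a sketch only; TLS setup, registry configuration, and exact relation endpoints are covered in the documentation):

juju deploy cs:~containers/docker-registry
juju add-relation docker-registry kubernetes-worker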

  • Keepalived support

The keepalived charm can be used to run multiple kubeapi-load-balancer units behind a virtual IP. For more details, please see the documentation.
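
A sketch of the general shape, with a placeholder virtual IP (the option and endpoint names follow the keepalived subordinate charm; check the documentation for specifics):

juju deploy keepalived
juju config keepalived virtual_ip=10.0.0.100 port=443
juju add-relation keepalived:juju-info kubeapi-load-balancer:juju-info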

  • Nginx update

Nginx was updated to v0.21.0, which brings a few changes to be aware of. First, nginx now runs in its own namespace, derived from the application name; by default this is ingress-nginx-kubernetes-worker. Second, custom configmaps must now be named nginx-configuration and must reside in the same namespace as the nginx deployment.
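
For example, to confirm where the controller now runs and that a custom configmap is in place (the namespace shown assumes the default kubernetes-worker application name):

kubectl get pods -n ingress-nginx-kubernetes-worker
kubectl get configmap nginx-configuration -n ingress-nginx-kubernetes-worker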

Fixes

  • Added post-deployment script for jaas/jujushell (Issue)
  • Added support for load-balancer failover (Issue)
  • Added always restart for etcd (Issue)
  • Added Xenial support to Azure integrator (Issue)
  • Added Bionic support to Openstack integrator (Issue)
  • Added support for ELB service-linked role (Issue)
  • Added ability to configure Docker install source (Issue)
  • Fixed EasyRSA failing to run as an LXD container on 18.04 (Issue)
  • Fixed Ceph volumes failing to attach to pods after 1.12 (Issue)
  • Fixed Ceph volumes failing to attach with “node has no NodeID annotation” (Issue)
  • Fixed ceph-xfs volumes failing to format due to “executable file not found in $PATH” (Issue)
  • Fixed Ceph volumes not detaching properly (Issue)
  • Fixed ceph-csi addons not getting cleaned up properly (Issue)
  • Fixed Calico/Canal not working with kube-proxy on master (Issue)
  • Fixed issue with Canal charm not populating the kubeconfig option in 10-canal.conflist (Issue)
  • Fixed logs being inaccessible after enabling RBAC (Issue)
  • Fixed RBAC breaking prometheus/grafana metric collection (Issue)
  • Fixed upstream Docker charm config option using wrong package source (Issue)
  • Fixed a timing issue where Ceph could appear broken when it was not (Issue)
  • Fixed reported status when CNI is not ready (Issue)
  • Fixed an issue with calico-node service failures not surfacing (Issue)
  • Fixed empty configuration due to a timing issue with CNI (Issue)
  • Fixed an issue where the calico-node service failed to start (Issue)
  • Fixed updating policy definitions during upgrade-charm on AWS integrator (Issue)
  • Fixed parsing credentials config value (Issue)
  • Fixed PVCs stuck in pending (azure-integrator)
  • Fixed updated properties of the openstack-integrator charm not propagating automatically (openstack-integrator)
  • Fixed flannel error during install hook due to incorrect resource (flannel)
  • Updated master and worker to handle upstream changes from OpenStack Integrator (Issue)
  • Updated to CNI 0.7.4 (Issue)
  • Updated to Flannel v0.10.0 (Issue)
  • Updated Calico and Canal charms to Calico v2.6.12 (Issue, Issue)
  • Updated to latest CUDA and removed version pins of nvidia-docker stack (Issue)
  • Updated to nginx-ingress-controller v0.21.0 (Issue)
  • Removed portmap from Calico resource (Issue)
  • Removed CNI bins from flannel resource (Issue)

Known issues

  • A current bug in Kubernetes can prevent the upgrade from properly deleting old pods. Use kubectl delete pod <pod_name> --force --grace-period=0 to clean them up.

1.12 Release Notes

  • Added support for Ubuntu 18.04 (Bionic)

New deployments will get Ubuntu 18.04 machines by default. We will also continue to support CDK on Ubuntu 16.04 (Xenial) machines for existing deployments.

  • Added kube-proxy to kubernetes-master

The kubernetes-master charm now installs and runs kube-proxy along with the other master services. This allows the master services to reach Service IPs within the cluster, which makes it easier to enable certain integrations that depend on this functionality (e.g. Keystone).

For operators of offline deployments, please note that this change may require you to attach a kube-proxy resource to kubernetes-master.
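
For example, the resource can be attached manually; a sketch, with a placeholder filename for a locally downloaded kube-proxy snap:

juju attach-resource kubernetes-master kube-proxy=./kube-proxy.snap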

  • New kubernetes-worker charm config: kubelet-extra-config

In Kubernetes 1.10, a new KubeletConfiguration file was introduced, and many of Kubelet’s command-line options were moved there and marked as deprecated. To accommodate this change, we’ve introduced a new charm config to kubernetes-worker: kubelet-extra-config.

This config can be used to override KubeletConfiguration values provided by the charm, and is usable on any CDK cluster running Kubernetes 1.10+.

The value for this config must be a YAML mapping that can be safely merged with a KubeletConfiguration file. For example:

juju config kubernetes-worker kubelet-extra-config="{evictionHard: {memory.available: 200Mi}}"

For more information about KubeletConfiguration, see upstream docs: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/

  • Added support for Dynamic Kubelet Configuration

While we recommend kubelet-extra-config as a more robust and approachable way to configure the kubelet, we’ve also made it possible to configure the kubelet using the Dynamic Kubelet Configuration feature that comes with Kubernetes 1.11+. You can read about that here: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
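
In upstream terms, this works by pointing a node’s spec.configSource at a ConfigMap holding a KubeletConfiguration; a minimal sketch with placeholder names:

kubectl -n kube-system create configmap my-kubelet-config --from-file=kubelet=kubelet-config.yaml
kubectl patch node <node_name> -p '{"spec":{"configSource":{"configMap":{"name":"my-kubelet-config","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'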

  • New etcd charm config: bind_to_all_interfaces (PR)

Defaults to true, which retains the old behavior of binding to 0.0.0.0. Setting this to false makes etcd bind only to the addresses it expects traffic on, as determined by the configuration of Juju endpoint bindings.
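
For example:

juju config etcd bind_to_all_interfaces=false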

Special thanks to @rmescandon for this contribution!

  • Updated proxy configuration

For operators who currently use the http-proxy, https-proxy and no-proxy Juju model configs, we recommend using the newer juju-http-proxy, juju-https-proxy and juju-no-proxy model configs instead. See the Proxy configuration page for details.
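
For example, with placeholder proxy addresses:

juju model-config juju-http-proxy=http://proxy.example.com:3128 juju-https-proxy=http://proxy.example.com:3128 juju-no-proxy=localhost,127.0.0.1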

Fixes

  • Fixed kube-dns constantly restarting on 18.04 (Issue)
  • Fixed LXD machines not working on 18.04 (Issue)
  • Fixed kubernetes-worker unable to restart services after kubernetes-master leader is removed (Issue)
  • Fixed kubeapi-load-balancer default timeout being too low (Issue)
  • Fixed inability to deploy on NVIDIA hardware (Issue)