This page describes how to upgrade an existing Istio deployment (including both control plane and sidecar proxy) to a new release of Istio. The upgrade process may install new binaries and may change configuration and API schemas. The upgrade process may result in service downtime. To minimize downtime, please ensure your Istio control plane components and your applications are highly available with multiple replicas.
In the following steps, we assume that the Istio components are installed and upgraded in the istio-system namespace.
Download the new Istio release and change directory to the new release directory.
Upgrade Istio’s Custom Resource Definitions via
kubectl apply, and wait a few seconds for the CRDs to be committed to the kube-apiserver:
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system
Control plane upgrade
The Istio control plane components include: Citadel, Ingress gateway, Egress gateway, Pilot, Policy, Telemetry and Sidecar injector.
If you installed Istio with Helm, the preferred upgrade option is to let Helm take care of the upgrade:
$ helm upgrade istio install/kubernetes/helm/istio --namespace istio-system
Kubernetes rolling update
You can also use Kubernetes’ rolling update mechanism to upgrade the control plane components. This is suitable for cases when Istio hasn’t been installed using Helm.
First, generate the desired Istio control plane yaml file, e.g.:
$ helm template install/kubernetes/helm/istio --name istio \
    --namespace istio-system > install/kubernetes/istio.yaml
$ helm template install/kubernetes/helm/istio --name istio \
    --namespace istio-system --set global.mtls.enabled=true > install/kubernetes/istio-auth.yaml
If using Kubernetes versions prior to 1.9, you should add
Second, apply the new version of the desired Istio control plane yaml file directly, e.g.:
$ kubectl apply -f install/kubernetes/istio.yaml
$ kubectl apply -f install/kubernetes/istio-auth.yaml
The rolling update process will upgrade all deployments and configmaps to the new version. After this process finishes, your Istio control plane should be updated to the new version. Your existing applications should continue to work without any change, using the Envoy v1 proxy and the v1alpha1 route rules. If there is any critical issue with the new control plane, you can roll back the changes by applying the yaml files from the old version.
After the control plane upgrade, the applications already running Istio will still be using an older sidecar. To upgrade the sidecar, you will need to re-inject it.
If you’re using automatic sidecar injection, you can upgrade the sidecar by doing a rolling update of all the pods, so that the new version of the sidecar is automatically re-injected. There are several ways to trigger a rolling update of all pods; for example, a bash script can trigger it by patching the termination grace period of each deployment.
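As a sketch of that approach (the function name, namespace, and grace-period value are illustrative, not taken from any official script), the following bash function patches terminationGracePeriodSeconds on every deployment in a namespace, which changes the pod template and forces Kubernetes to recreate the pods so the webhook re-injects the new sidecar:

```shell
# Illustrative rolling-restart helper (assumed names; adapt to your cluster).
# Patching terminationGracePeriodSeconds modifies the pod template, which
# makes Kubernetes roll every pod, so the injector re-injects the sidecar.
refresh_pods() {
  local ns="${1:-default}" grace="${2:-31}"
  local dep
  for dep in $(kubectl get deployments -n "$ns" -o jsonpath='{.items[*].metadata.name}'); do
    kubectl patch deployment "$dep" -n "$ns" --type merge -p \
      "{\"spec\":{\"template\":{\"spec\":{\"terminationGracePeriodSeconds\":${grace}}}}}"
  done
}

# Usage: refresh_pods default 31
```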
If you’re using manual injection, you can upgrade the sidecar by executing:
$ kubectl apply -f <(istioctl kube-inject -f $ORIGINAL_DEPLOYMENT_YAML)
If the sidecar was previously injected with some customized inject configuration files, you will need to change the version tag in the configuration files to the new version and re-inject the sidecar as follows:
$ kubectl apply -f <(istioctl kube-inject \
    --injectConfigFile inject-config.yaml \
    --filename $ORIGINAL_DEPLOYMENT_YAML)
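Whichever injection method you use, you can verify which proxy image each pod is actually running. The helper below is an illustrative sketch (the function name is not part of the Istio tooling; only standard kubectl flags are used):

```shell
# Print "<pod-name><tab><images...>" for every pod in a namespace, so you
# can confirm which sidecar proxy image each workload runs after upgrading.
list_pod_images() {
  local ns="${1:-default}"
  kubectl get pods -n "$ns" \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
}

# Usage: list_pod_images default
```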
Migrating to the new networking APIs
Once you’ve upgraded the control plane and sidecar, you can gradually update your deployment to use the new Envoy sidecar. You can do this by using one of the options below:
Add the following annotation to the pod template of your deployment:
kind: Deployment
...
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/proxyImage: docker.io/istio/proxyv2:0.8.0
Then replace your deployment with your updated application yaml file:
$ kubectl replace -f $UPDATED_DEPLOYMENT_YAML
Alternatively, you can use an injectConfigFile that specifies
docker.io/istio/proxyv2:0.8.0 as the proxy image. If you don’t have an
injectConfigFile, you can generate one. Using an
injectConfigFile is recommended if you need to add the
sidecar.istio.io/proxyImage annotation in multiple deployment definitions.
$ kubectl replace -f <(istioctl kube-inject --injectConfigFile inject-config.yaml -f $ORIGINAL_DEPLOYMENT_YAML)
Use istioctl experimental convert-networking-config to convert your existing ingress or route rules:
If your yaml file contains more than the ingress definition, such as a deployment or service definition, move the ingress definition out to a separate yaml file for the
istioctl experimental convert-networking-config tool to process.
Execute the following to generate the new network configuration file, replacing FILE*.yaml with your ingress file or deprecated route rule files. Tip: make sure to feed all the files for one or more deployments using
-f.
$ istioctl experimental convert-networking-config -f FILE1.yaml -f FILE2.yaml -f FILE3.yaml > UPDATED_NETWORK_CONFIG.yaml
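To illustrate the kind of translation the tool performs (this example is hand-written for illustration, not actual tool output; the names are hypothetical), a deprecated v1alpha1-style route rule and its approximate v1alpha3 equivalent look like:

```yaml
# Deprecated route rule (input to the tool):
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v1
    weight: 100
---
# Approximate v1alpha3 equivalent (the subset also requires a
# corresponding DestinationRule that defines the "v1" subset):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
```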
Review UPDATED_NETWORK_CONFIG.yaml and update all namespace references to your desired namespace. There is a known issue with the
convert-networking-config tool where the
istio-system namespace is used incorrectly. Further, ensure the
hosts value is correct.
Deploy the updated network configuration file:
$ kubectl replace -f UPDATED_NETWORK_CONFIG.yaml
When all your applications have been migrated and tested, you can repeat the Istio upgrade process, removing the
--set global.proxy.image=proxy option. This will set the default proxy to
docker.io/istio/proxyv2 for all sidecars injected in the future.
Migrating per-service mutual TLS enablement via annotations to authentication policy
For example, if you installed Istio with mutual TLS enabled, and disabled it for service
foo using a service annotation like below:
kind: Service
metadata:
  name: foo
  namespace: bar
  annotations:
    auth.istio.io/8000: NONE
You need to replace it with the following authentication policy and destination rule (deleting the old annotation is optional):
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "disable-mTLS-foo"
  namespace: bar
spec:
  targets:
  - name: foo
    ports:
    - number: 8000
  peers:
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "disable-mTLS-foo"
  namespace: "bar"
spec:
  host: "foo"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 8000
      tls:
        mode: DISABLE
If you already have a destination rule for
foo, you must edit that rule instead of creating a new one.
When creating a new destination rule, make sure to include any other settings, i.e.,
connection pool and
outlier detection, if necessary.
If service foo doesn’t have a sidecar, you can skip the authentication policy, but you still need to add the destination rule.
If 8000 is the only port that service
foo provides (or you want to disable mutual TLS for all ports), the policies can be simplified as:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "disable-mTLS-foo"
  namespace: bar
spec:
  targets:
  - name: foo
  peers:
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "disable-mTLS-foo"
  namespace: "bar"
spec:
  host: "foo"
  trafficPolicy:
    tls:
      mode: DISABLE
Migrating the mtls_excluded_services configuration to destination rules
If you installed Istio with mutual TLS enabled, and used the mesh configuration option
mtls_excluded_services to disable mutual TLS when connecting to certain services (e.g., the Kubernetes API server), you need to replace it by adding a destination rule. For example:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "kubernetes-master"
  namespace: "default"
spec:
  host: "kubernetes.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE