Authentication Policy

This task covers the primary activities you might need to perform when enabling, configuring, and using Istio authentication policies. Find out more about the underlying concepts in the authentication overview.

Before you begin

Setup

Our examples use two namespaces, foo and bar, each with two services, httpbin and sleep, both running with an Envoy sidecar proxy. We also run a second instance of httpbin and sleep without a sidecar in the legacy namespace. If you’d like to use the same examples when trying the tasks, run the following:

$ kubectl create ns foo
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
$ kubectl create ns bar
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n bar
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n bar
$ kubectl create ns legacy
$ kubectl apply -f samples/httpbin/httpbin.yaml -n legacy
$ kubectl apply -f samples/sleep/sleep.yaml -n legacy

You can verify the setup by sending an HTTP request with curl from any sleep pod in namespace foo, bar, or legacy to httpbin.foo, httpbin.bar, or httpbin.legacy. All requests should succeed with HTTP code 200.

For example, here is a command to check sleep.bar to httpbin.foo reachability:

$ kubectl exec $(kubectl get pod -l app=sleep -n bar -o jsonpath={.items..metadata.name}) -c sleep -n bar -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"
200

This one-liner command conveniently iterates through all reachability combinations:

$ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 200
sleep.legacy to httpbin.bar: 200
sleep.legacy to httpbin.legacy: 200

You should also verify that there is a default mesh authentication policy in the system, which you can do as follows:

$ kubectl get policies.authentication.istio.io --all-namespaces
No resources found.
$ kubectl get meshpolicies.authentication.istio.io
NAME      AGE
default   3m
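
To see what this default policy actually specifies, print it as YAML. In a typical installation that does not enable global mutual TLS it uses permissive mode, which lets workloads accept both plain-text and mutual TLS traffic; that is why all of the requests above succeed. (This is an assumption about your installation; the exact content depends on the options Istio was installed with.)

$ kubectl get meshpolicy default -o yaml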

Last but not least, verify that there are no destination rules that apply to the example services. You can do this by checking the host: values of existing destination rules and making sure they do not match. For example:

$ kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"
    host: istio-policy.istio-system.svc.cluster.local
    host: istio-telemetry.istio-system.svc.cluster.local
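
If you have istioctl configured for the cluster, its authn tls-check command summarizes, per service, which authentication policy and destination rule are in effect and whether the client and server TLS settings are compatible. The exact arguments and output format depend on your Istio release; in 1.1 the command takes a client pod and an optional target service, for example:

$ istioctl authn tls-check $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}).foo httpbin.foo.svc.cluster.local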

Globally enabling Istio mutual TLS

To set a mesh-wide authentication policy that enables mutual TLS, submit a mesh authentication policy like the following:

$ kubectl apply -f - <<EOF
apiVersion: "authentication.istio.io/v1alpha1"
kind: "MeshPolicy"
metadata:
  name: "default"
spec:
  peers:
  - mtls: {}
EOF

This policy specifies that all workloads in the mesh will only accept encrypted requests using TLS. As you can see, this authentication policy has the kind: MeshPolicy. The name of the policy must be default, and it contains no targets specification (as it is intended to apply to all services in the mesh).

At this point, only the receiving side is configured to use mutual TLS. If you run the curl command between Istio services (i.e., those with sidecars), all requests fail with a 503 error code because the client side is still using plain text:

$ for from in "foo" "bar"; do for to in "foo" "bar"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 503
sleep.foo to httpbin.bar: 503
sleep.bar to httpbin.foo: 503
sleep.bar to httpbin.bar: 503
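
To see why these requests fail, drop the -o /dev/null so the response body is printed. With this configuration, the client-side Envoy typically returns an upstream connection error, because it forwarded plain text to a server that now only accepts mutual TLS (the exact message depends on the proxy version):

$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl http://httpbin.bar:8000/ip -s -w "\n%{http_code}\n"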

To configure the client side, you need to set destination rules to use mutual TLS. It’s possible to use multiple destination rules, one for each applicable service (or namespace). However, it’s more convenient to use a rule with the * wildcard to match all services so that it is on par with the mesh-wide authentication policy.

$ kubectl apply -f - <<EOF
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

Don’t forget that destination rules are also used for non-authentication reasons, such as setting up canary deployments, and the same order of precedence applies. So if a service requires a specific destination rule for any reason (for example, to configure a load balancer) the rule must contain a similar TLS block with ISTIO_MUTUAL mode; otherwise it will override the mesh- or namespace-wide TLS settings and disable mutual TLS.
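
For illustration only (you do not need to apply this for the task), a destination rule that configures a load balancer for httpbin.bar while preserving mutual TLS might look like the following sketch; the rule name is hypothetical:

apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin-bar-lb"  # hypothetical name, for illustration only
  namespace: "bar"
spec:
  host: "httpbin.bar.svc.cluster.local"
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    tls:
      mode: ISTIO_MUTUAL  # keep this block, or the rule will turn off mutual TLS for this host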

Re-running the testing command as above, you will see that all requests between Istio services now complete successfully:

$ for from in "foo" "bar"; do for to in "foo" "bar"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200

Request from non-Istio services to Istio services

A non-Istio service such as sleep.legacy has no sidecar, so it cannot initiate the mutual TLS connection that Istio services now require. As a result, requests from sleep.legacy to httpbin.foo or httpbin.bar will fail:

$ for from in "legacy"; do for to in "foo" "bar"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.legacy to httpbin.foo: 000
command terminated with exit code 56
sleep.legacy to httpbin.bar: 000
command terminated with exit code 56

This works as intended; unfortunately, there is no way to allow these requests without reducing the authentication requirements for those services.
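
For reference, the usual way to relax the requirement is permissive mode, which lets a workload accept both mutual TLS and plain-text traffic. A minimal sketch of a namespace-level policy that would let workloads in foo accept plain text again is shown below; do not apply it if you want the rest of this task to behave as described:

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: "foo"
spec:
  peers:
  - mtls:
      mode: PERMISSIVE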

Request from Istio services to non-Istio services

Try to send requests to httpbin.legacy from sleep.foo (or sleep.bar). These requests fail because the destination rule configures clients to use mutual TLS, but httpbin.legacy has no sidecar and therefore cannot handle it.

$ for from in "foo" "bar"; do for to in "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.legacy: 503
sleep.bar to httpbin.legacy: 503

To fix this issue, we can add a destination rule to override the TLS setting for httpbin.legacy. For example:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
 name: "httpbin-legacy"
 namespace: "legacy"
spec:
 host: "httpbin.legacy.svc.cluster.local"
 trafficPolicy:
   tls:
     mode: DISABLE
EOF
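
Re-running the check from an Istio service should now return 200, because the client sidecar sends plain text to httpbin.legacy:

$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl http://httpbin.legacy:8000/ip -s -o /dev/null -w "%{http_code}\n"
200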

Request from Istio services to Kubernetes API server

The Kubernetes API server doesn’t have a sidecar, so requests from Istio services such as sleep.foo will fail for the same reason as requests to any other non-Istio service:

$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default-token | cut -f1 -d ' ' | head -1) | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl https://kubernetes.default/api --header "Authorization: Bearer $TOKEN" --insecure -s -o /dev/null -w "%{http_code}\n"
000
command terminated with exit code 35

Again, we can correct this by adding a destination rule for the API server (kubernetes.default) that disables mutual TLS:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
 name: "api-server"
 namespace: istio-system
spec:
 host: "kubernetes.default.svc.cluster.local"
 trafficPolicy:
   tls:
     mode: DISABLE
EOF

Re-run the testing command above to confirm that it returns 200 after the rule is added:

$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default-token | cut -f1 -d ' ' | head -1) | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl https://kubernetes.default/api --header "Authorization: Bearer $TOKEN" --insecure -s -o /dev/null -w "%{http_code}\n"
200

Cleanup part 1

Remove the global authentication policy and the destination rules added in this section:

$ kubectl delete meshpolicy default
$ kubectl delete destinationrules default -n istio-system
$ kubectl delete destinationrules httpbin-legacy -n legacy
$ kubectl delete destinationrules api-server -n istio-system

Enable mutual TLS per namespace or service

In addition to specifying an authentication policy for your entire mesh, Istio also lets you specify policies for particular namespaces or services. A namespace-wide policy takes precedence over the mesh-wide policy, while a service-specific policy has higher precedence still.

Namespace-wide policy

The example below shows the policy to enable mutual TLS for all services in namespace foo. As you can see, it uses kind: “Policy” rather than “MeshPolicy”, and specifies a namespace, in this case foo. If you don’t specify a namespace, the policy applies to the default namespace.

$ kubectl apply -f - <<EOF
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: "foo"
spec:
  peers:
  - mtls: {}
EOF

Add the corresponding destination rule:

$ kubectl apply -f - <<EOF
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "default"
  namespace: "foo"
spec:
  host: "*.foo.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

As this policy and destination rule apply only to services in namespace foo, you should see that only requests from the client without a sidecar (sleep.legacy) to httpbin.foo start to fail:

$ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 000
command terminated with exit code 56
sleep.legacy to httpbin.bar: 200
sleep.legacy to httpbin.legacy: 200

Service-specific policy

You can also set an authentication policy and destination rule for a specific service. Run this command to set another policy for the httpbin.bar service only:

$ cat <<EOF | kubectl apply -n bar -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "httpbin"
spec:
  targets:
  - name: httpbin
  peers:
  - mtls: {}
EOF

And a destination rule:

$ cat <<EOF | kubectl apply -n bar -f -
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin"
spec:
  host: "httpbin.bar.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

Again, run the probing command. As expected, requests from sleep.legacy to httpbin.bar start failing for the same reason:

...
sleep.legacy to httpbin.bar: 000
command terminated with exit code 56

If there were more services in namespace bar, traffic to them would not be affected. Instead of adding more services to demonstrate this behavior, we edit the policy slightly so that it applies only to a specific port:

$ cat <<EOF | kubectl apply -n bar -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "httpbin"
spec:
  targets:
  - name: httpbin
    ports:
    - number: 1234
  peers:
  - mtls: {}
EOF

And a corresponding change to the destination rule:

$ cat <<EOF | kubectl apply -n bar -f -
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin"
spec:
  host: httpbin.bar.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
    portLevelSettings:
    - port:
        number: 1234
      tls:
        mode: ISTIO_MUTUAL
EOF

This new policy applies only to the httpbin service on port 1234. As a result, mutual TLS is (again) disabled on port 8000 and requests from sleep.legacy resume working:

$ kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.bar:8000/ip -s -o /dev/null -w "%{http_code}\n"
200
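
Requests from Istio clients to port 8000 should also be unaffected, since both the policy and the port-level TLS setting only cover port 1234:

$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl http://httpbin.bar:8000/ip -s -o /dev/null -w "%{http_code}\n"
200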

Policy precedence

To illustrate how a service-specific policy takes precedence over a namespace-wide policy, you can add a policy to disable mutual TLS for httpbin.foo, as below. Note that you’ve already created a namespace-wide policy that enables mutual TLS for all services in namespace foo, and have observed that requests from sleep.legacy to httpbin.foo are failing (see above).

$ cat <<EOF | kubectl apply -n foo -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "overwrite-example"
spec:
  targets:
  - name: httpbin
EOF

And a destination rule:

$ cat <<EOF | kubectl apply -n foo -f -
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "overwrite-example"
spec:
  host: httpbin.foo.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
EOF

Re-running the request from sleep.legacy, you should see a success return code (200) again, confirming that the service-specific policy overrides the namespace-wide policy.

$ kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"
200

Cleanup part 2

Remove policies and destination rules created in the above steps:

$ kubectl delete policy default overwrite-example -n foo
$ kubectl delete policy httpbin -n bar
$ kubectl delete destinationrules default overwrite-example -n foo
$ kubectl delete destinationrules httpbin -n bar

End-user authentication

To experiment with this feature, you need a valid JWT. The JWT must correspond to the JWKS endpoint you want to use for the demo. In this tutorial, we use a test JWT and the matching JWKS endpoint from the Istio code base (the demo.jwt and jwks.json sample files referenced in the commands below).
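
If you want to see what the test token contains before using it, you can decode its payload locally. This is a minimal sketch that assumes python is available on your machine; it only inspects the downloaded token and does not talk to the cluster. The iss claim it prints should match the issuer configured in the policy below (testing@secure.istio.io):

$ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/demo.jwt -s)
$ echo "$TOKEN" | cut -d '.' -f2 | python -c 'import sys, base64, json; s = sys.stdin.read().strip(); s += "=" * (-len(s) % 4); print(json.dumps(json.loads(base64.urlsafe_b64decode(s).decode()), indent=2))'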

Also, for convenience, expose httpbin.foo via the ingress gateway (for more details, see the ingress task):

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: foo
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
EOF
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        port:
          number: 8000
        host: httpbin.foo.svc.cluster.local
EOF

Get the ingress IP:

$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
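
The command above assumes the ingress gateway is exposed through an external load balancer. If your environment does not provide one (for example, minikube), you can point INGRESS_HOST at a node address and node port instead, so the remaining commands work unchanged. This sketch assumes the gateway's HTTP port is named http2, as in the default installation:

$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export INGRESS_HOST=$(kubectl get pod -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}'):$INGRESS_PORT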

And run a test query:

$ curl $INGRESS_HOST/headers -s -o /dev/null -w "%{http_code}\n"
200

Now, add a policy that requires an end-user JWT for httpbin.foo. The next command assumes there is no service-specific policy for httpbin.foo (which should be the case if you ran the cleanup steps as described). You can run kubectl get policies.authentication.istio.io -n foo to confirm.

$ cat <<EOF | kubectl apply -n foo -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "jwt-example"
spec:
  targets:
  - name: httpbin
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN
EOF

The same curl command as before now returns a 401 error code, because the server expects a JWT but none was provided:

$ curl $INGRESS_HOST/headers -s -o /dev/null -w "%{http_code}\n"
401

Attaching the valid test token returns success:

$ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/demo.jwt -s)
$ curl --header "Authorization: Bearer $TOKEN" $INGRESS_HOST/headers -s -o /dev/null -w "%{http_code}\n"
200

To observe other aspects of JWT validation, use the script gen-jwt.py to generate new tokens with a different issuer, audience, expiry date, and so on. The script can be downloaded from the Istio repository:

$ wget https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/gen-jwt.py
$ chmod +x gen-jwt.py

You also need the key.pem file:

$ wget https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/key.pem

For example, the command below creates a token that expires in 5 seconds. As you can see, Istio initially authenticates requests using that token successfully, but rejects them after 5 seconds:

$ TOKEN=$(./gen-jwt.py ./key.pem --expire 5)
$ for i in `seq 1 10`; do curl --header "Authorization: Bearer $TOKEN" $INGRESS_HOST/headers -s -o /dev/null -w "%{http_code}\n"; sleep 1; done
200
200
200
200
200
401
401
401
401
401
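
Similarly, a token signed with the right key but carrying a different issuer should be rejected with 401, since the policy pins the issuer to testing@secure.istio.io. A sketch, assuming the script exposes an --iss option (run ./gen-jwt.py --help to check the exact flags of your copy):

$ TOKEN=$(./gen-jwt.py ./key.pem --iss unknown-issuer@example.com)
$ curl --header "Authorization: Bearer $TOKEN" $INGRESS_HOST/headers -s -o /dev/null -w "%{http_code}\n"
401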

You can also add a JWT policy to an ingress gateway (e.g., service istio-ingressgateway.istio-system.svc.cluster.local). This is often used to define a JWT policy for all services bound to the gateway, instead of for individual services.
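
A sketch of such a gateway-level policy is shown below; the name is illustrative, and you should not apply it alongside the httpbin policy in this task unless you want all traffic through the gateway to require a JWT:

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "ingress-jwt-example"  # illustrative name
  namespace: "istio-system"
spec:
  targets:
  - name: istio-ingressgateway
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN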

End-user authentication with per-path requirements

End-user authentication can be enabled or disabled based on the request path. This is useful if you want to disable authentication for some paths, for example, a path used for health checks or status reports. You can also specify different JWT requirements for different paths.

Disable End-user authentication for specific paths

Modify the jwt-example policy to disable end-user authentication for the /user-agent path:

$ cat <<EOF | kubectl apply -n foo -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "jwt-example"
spec:
  targets:
  - name: httpbin
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/jwks.json"
      trigger_rules:
      - excluded_paths:
        - exact: /user-agent
  principalBinding: USE_ORIGIN
EOF

Confirm that the /user-agent path can be accessed without a JWT token:

$ curl $INGRESS_HOST/user-agent -s -o /dev/null -w "%{http_code}\n"
200

Confirm that paths other than /user-agent cannot be accessed without a JWT token:

$ curl $INGRESS_HOST/headers -s -o /dev/null -w "%{http_code}\n"
401

Enable End-user authentication for specific paths

Modify the jwt-example policy to enable end-user authentication only for the /ip path:

$ cat <<EOF | kubectl apply -n foo -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "jwt-example"
spec:
  targets:
  - name: httpbin
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/jwks.json"
      trigger_rules:
      - included_paths:
        - exact: /ip
  principalBinding: USE_ORIGIN
EOF

Confirm that paths other than /ip can be accessed without a JWT token:

$ curl $INGRESS_HOST/user-agent -s -o /dev/null -w "%{http_code}\n"
200

Confirm that the /ip path cannot be accessed without a JWT token:

$ curl $INGRESS_HOST/ip -s -o /dev/null -w "%{http_code}\n"
401

Confirm that the /ip path can be accessed with a valid JWT token:

$ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/demo.jwt -s)
$ curl --header "Authorization: Bearer $TOKEN" $INGRESS_HOST/ip -s -o /dev/null -w "%{http_code}\n"
200

End-user authentication with mutual TLS

End-user authentication and mutual TLS can be used together. Modify the policy above to define both mutual TLS and end-user JWT authentication:

$ cat <<EOF | kubectl apply -n foo -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "jwt-example"
spec:
  targets:
  - name: httpbin
  peers:
  - mtls: {}
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN
EOF

And add a destination rule:

$ kubectl apply -f - <<EOF
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin"
  namespace: "foo"
spec:
  host: "httpbin.foo.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

After these changes, traffic from Istio services, including the ingress gateway, to httpbin.foo uses mutual TLS. The test command above still works. Requests sent directly from Istio services to httpbin.foo also work, given a valid token:

$ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/demo.jwt -s)
$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n" --header "Authorization: Bearer $TOKEN"
200

However, requests from non-Istio services, which use plain text, will fail:

$ kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n" --header "Authorization: Bearer $TOKEN"
000
command terminated with exit code 56

Cleanup part 3

  1. Remove authentication policy:

    $ kubectl -n foo delete policy jwt-example
    
  2. Remove destination rule:

    $ kubectl -n foo delete destinationrule httpbin
    
  3. If you are not planning to explore any follow-on tasks, you can remove all resources simply by deleting the test namespaces:

    $ kubectl delete ns foo bar legacy