Mixer

Q: Why does Istio need Mixer?

Mixer provides a rich intermediation layer between the Istio components and Istio-based services on one side, and the infrastructure backends used to perform access control checks and telemetry capture on the other. This layer gives operators rich insight into, and control over, service behavior without requiring changes to service binaries.

Mixer is designed as a stand-alone component, distinct from Envoy. This has numerous benefits:

  • Scalability. The work that Mixer and Envoy do is very different in nature, leading to different scalability requirements. Keeping the components separate enables independent component-appropriate scaling.

  • Resource Usage. Istio depends on being able to deploy many instances of its proxy, making it important to minimize the cost of each individual instance. Moving Mixer’s complex logic into a distinct component makes it possible for Envoy to remain svelte and agile.

  • Reliability. Mixer and its open-ended extensibility model represent the most complex parts of the data path processing pipeline. Hosting this functionality in Mixer rather than Envoy creates a distinct failure domain, which lets Envoy continue operating even if Mixer fails, preventing outages.

  • Isolation. Mixer provides a level of insulation between Istio and the infrastructure backends. Each Envoy instance can be configured to have a very narrow scope of interaction, limiting the impact of potential attacks.

  • Extensibility. It was imperative to design a simple extensibility model that allows Istio to interoperate with the widest possible breadth of backends. Due to its design and language choice, Mixer is inherently easier to extend than Envoy. The separation of concerns also makes it possible to use Istio policy and telemetry processing with different proxies, such as a mix of Envoy and NGINX.

Envoy implements sophisticated caching, batching, and prefetching to largely mitigate the latency impact of interacting with Mixer on the request path.

Q: How do I see all of the configuration for Mixer?

Configuration for instances, handlers, and rules is stored as Kubernetes Custom Resources. Configuration may be accessed by using kubectl to query the Kubernetes API server for the resources.

Rules

To see the list of all rules, execute the following:

kubectl get rules --all-namespaces

Output will be similar to:

NAMESPACE      NAME        KIND
default        mongoprom   rule.v1alpha2.config.istio.io
istio-system   promhttp    rule.v1alpha2.config.istio.io
istio-system   promtcp     rule.v1alpha2.config.istio.io
istio-system   stdio       rule.v1alpha2.config.istio.io

To see an individual rule configuration, execute the following:

kubectl -n <namespace> get rules <name> -o yaml
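
The command prints the complete rule resource as YAML. As a rough illustration (the specific handler and instance names here are hypothetical, not taken from any particular deployment), a rule that dispatches a metric instance to a Prometheus handler looks broadly like this:

apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  # Attribute expression controlling when the rule applies
  match: destination.service != "unknown"
  actions:
  - handler: handler.prometheus     # handler that receives the dispatched instances
    instances:
    - requestcount.metric           # instance(s) generated and handed to the handler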

Handlers

Handlers are defined based on Kubernetes Custom Resource Definitions for adapters.

First, identify the list of adapter kinds:

kubectl get crd -l istio=mixer-adapter

The output will be similar to:

NAME                           KIND
deniers.config.istio.io        CustomResourceDefinition.v1beta1.apiextensions.k8s.io
listcheckers.config.istio.io   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
memquotas.config.istio.io      CustomResourceDefinition.v1beta1.apiextensions.k8s.io
noops.config.istio.io          CustomResourceDefinition.v1beta1.apiextensions.k8s.io
prometheuses.config.istio.io   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
stackdrivers.config.istio.io   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
statsds.config.istio.io        CustomResourceDefinition.v1beta1.apiextensions.k8s.io
stdios.config.istio.io         CustomResourceDefinition.v1beta1.apiextensions.k8s.io
svcctrls.config.istio.io       CustomResourceDefinition.v1beta1.apiextensions.k8s.io

Then, for each adapter kind in that list, issue the following command:

kubectl get <adapter kind name> --all-namespaces

Output for stdios will be similar to:

NAMESPACE      NAME      KIND
istio-system   handler   stdio.v1alpha2.config.istio.io

To see an individual handler configuration, execute the following:

kubectl -n <namespace> get <adapter kind name> <name> -o yaml
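
For illustration, a prometheus handler (one of the adapter kinds listed above) might look roughly like the following; the metric names, labels, and values shown are hypothetical:

apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
  name: handler
  namespace: istio-system
spec:
  metrics:
  - name: request_count                                # metric name exposed by the backend
    instance_name: requestcount.metric.istio-system    # fully qualified Mixer instance feeding it
    kind: COUNTER
    label_names:
    - source_service
    - destination_service
    - response_code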

Instances

Instances are defined according to Kubernetes Custom Resource Definitions for instances.

First, identify the list of instance kinds:

kubectl get crd -l istio=mixer-instance

The output will be similar to:

NAME                             KIND
checknothings.config.istio.io    CustomResourceDefinition.v1beta1.apiextensions.k8s.io
listentries.config.istio.io      CustomResourceDefinition.v1beta1.apiextensions.k8s.io
logentries.config.istio.io       CustomResourceDefinition.v1beta1.apiextensions.k8s.io
metrics.config.istio.io          CustomResourceDefinition.v1beta1.apiextensions.k8s.io
quotas.config.istio.io           CustomResourceDefinition.v1beta1.apiextensions.k8s.io
reportnothings.config.istio.io   CustomResourceDefinition.v1beta1.apiextensions.k8s.io

Then, for each instance kind in that list, issue the following command:

kubectl get <instance kind name> --all-namespaces

Output for metrics will be similar to:

NAMESPACE      NAME                 KIND
default        mongoreceivedbytes   metric.v1alpha2.config.istio.io
default        mongosentbytes       metric.v1alpha2.config.istio.io
istio-system   requestcount         metric.v1alpha2.config.istio.io
istio-system   requestduration      metric.v1alpha2.config.istio.io
istio-system   requestsize          metric.v1alpha2.config.istio.io
istio-system   responsesize         metric.v1alpha2.config.istio.io
istio-system   tcpbytereceived      metric.v1alpha2.config.istio.io
istio-system   tcpbytesent          metric.v1alpha2.config.istio.io

To see an individual instance configuration, execute the following:

kubectl -n <namespace> get <instance kind name> <name> -o yaml
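
For illustration, the requestcount metric instance might look roughly like this; the exact dimensions and default values shown are hypothetical:

apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: requestcount
  namespace: istio-system
spec:
  value: "1"                                              # each request contributes 1 to the count
  dimensions:
    source_service: source.service | "unknown"            # attribute expression with a default
    destination_service: destination.service | "unknown"
    response_code: response.code | 200
  monitored_resource_type: '"UNSPECIFIED"'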

Q: What is the full set of attribute expressions Mixer supports?

Please see the Expression Language Reference for the full set of supported attribute expressions.
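
To give a feel for the syntax, here are a few representative expressions as they might appear in rule and instance fields (these are illustrative fragments, not a complete resource):

# Boolean expression, e.g. in a rule's match field
match: source.labels["app"] == "reviews" && request.size > 100
# Numeric attribute with a default fallback, e.g. in an instance field
value: response.code | 200
# String attribute with a default, e.g. in a metric dimension
destination_service: destination.service | "unknown"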

Q: Does Mixer provide any self-monitoring?

Mixer exposes a monitoring endpoint (default port: 9093). A few paths are useful for investigating Mixer performance and audit function:

  • /metrics provides Prometheus metrics on the Mixer process as well as gRPC metrics related to API calls and metrics on adapter dispatch.
  • /debug/pprof provides an endpoint for profiling data in pprof format.
  • /debug/vars provides an endpoint exposing server metrics in JSON format.

Mixer logs can be accessed via a kubectl logs command, as follows:

kubectl -n istio-system logs $(kubectl -n istio-system get pods -l istio=mixer -o jsonpath='{.items[0].metadata.name}') mixer

Mixer trace generation is controlled by the command-line flag traceOutput. If the flag value is set to STDOUT or STDERR, trace data will be written directly to those locations. If a URL is provided, Mixer will post Zipkin-formatted data to that endpoint (example: http://zipkin:9411/api/v1/spans).

In the 0.2 release, Mixer only supports Zipkin tracing.
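
As a minimal sketch, assuming the flag is passed as a normal command-line argument to the Mixer container (the container name and surrounding Deployment fields are assumptions, not taken from a specific release), enabling Zipkin export might look like:

spec:
  containers:
  - name: mixer
    args:
    - --traceOutput=http://zipkin:9411/api/v1/spans   # post Zipkin-formatted trace data to this collector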

Q: How can I write a custom adapter for Mixer?

Learn how to implement a new adapter for Mixer by consulting the Adapter Developer’s Guide.