AKS: Kubenet vs Azure CNI

Mykhailo Zahlada
5 min read · Dec 4, 2020


A recent call with one of our customers inspired me to write this article.

Special thanks to my friend, former colleague, and excellent engineer Rostyslav Fridman.

AKS (Kubernetes) is a complex container-orchestration system that consists of many parts. Among them, networking is a fundamental part of the cluster. Kubernetes has adopted the Container Network Interface (CNI) specification for managing network resources on a cluster.

So, which networking option (CNI) should you choose for your AKS (Azure Kubernetes Service) deployment in production? Kubenet or Azure CNI? Let's find out.

TL;DR

For production deployments, both Kubenet and Azure CNI are valid options.

But let's dive into the details and figure out which one brings more benefits or simply fits your applications better.

General concepts

Running AKS (Kubernetes) means you need connectivity to your nodes, pods, and services.

Nodes (node pools) CIDR

The network where your nodes reside. With Azure CNI, nodes get IP addresses from a subnet of the pre-defined VNET. With Kubenet, nodes get IP addresses from the subnet of a VNET that resides in the system resource group (the 'MC_' group). Azure creates this VNET and subnet automatically.

Pods CIDR

The network where your pods reside. With Azure CNI, pods get IP addresses from the same subnet of the pre-defined VNET where the nodes reside. With Kubenet, pods get IP addresses from an internal Kubernetes subnet. It is possible to reuse the same pod CIDR for multiple AKS clusters.

Services CIDR

The Services CIDR is used to assign IP addresses to internal Services in the AKS cluster. This IP address range should be an address space that isn't used outside of the AKS clusters. However, you can reuse the same Service CIDR for multiple AKS clusters. A Service is an abstraction. When using Services, traffic is always routed (L3, internal AKS network).
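
To make these ranges concrete, here is a minimal Azure CLI sketch of a Kubenet-backed cluster with explicit pod and Service CIDRs. The resource group, cluster name, and CIDR values are illustrative assumptions, not recommendations:

    # Hypothetical names and CIDR values; choose ranges that don't clash with your network.
    # --pod-cidr: internal range pods draw IPs from
    # --service-cidr: internal Service range
    # --dns-service-ip: must be an address inside the Service CIDR
    az aks create \
      --resource-group my-rg \
      --name my-aks-kubenet \
      --node-count 3 \
      --network-plugin kubenet \
      --pod-cidr 10.244.0.0/16 \
      --service-cidr 10.2.0.0/16 \
      --dns-service-ip 10.2.0.10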

Kubenet

https://docs.microsoft.com/en-us/azure/aks/configure-kubenet

AKS backed by Kubenet

Kubenet is a very basic, simple network plugin, available on Linux only. It does not, by itself, implement advanced features such as cross-node networking or network policy. It is typically used together with a cloud provider that sets up routing rules for communication between nodes, or in single-node environments.

When AKS is backed by Kubenet, each Linux node creates a bridge (cbr0) and a veth pair (e.g. veth8870d4c1) for each pod, with the host end of each pair connected to the bridge (cbr0). All pods are assigned IP addresses from the internal pod CIDR.

Traffic leaving the node is NATed: the source IP address (the pod's IP) is translated to the node's primary IP address.

When an AKS cluster is created, Azure creates a system resource group whose name, by default, starts with 'MC_'. Since AKS is backed by such primitives as VMs (VMSS), load balancers, route tables, etc., Azure needs to place them somewhere, and the system 'MC_' group is designed to host them. When you select Kubenet, a VNET is created automatically and placed into this system group, and nodes get IP addresses from its subnet.

The advantage of this approach is that it reduces the number of IP addresses that you need to reserve in your network space.
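
If you want to see these primitives for yourself, here is a rough sketch, assuming you have a shell on a Kubenet-backed node (e.g. via kubectl debug); the exact interface names on your nodes will differ:

    # Show the cbr0 bridge that Kubenet creates and the pod range it serves
    ip addr show cbr0

    # List the host ends of the per-pod veth pairs attached to the bridge
    ip -o link show type veth

    # Outbound pod traffic is masqueraded to the node's primary IP
    sudo iptables -t nat -S POSTROUTING | grep -i masq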

Azure CNI

https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni

AKS backed by Azure CNI

Azure CNI requires a dedicated, pre-defined VNET and a subnet in which to place nodes and pods. Every pod gets an IP address from the subnet and can be accessed directly (but, as pods are mortal, I'd rather you didn't build any solution that relies on direct pod access). If there is a need to establish peerings between VNETs, these IP addresses must be unique across your network.
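
A minimal Azure CLI sketch of that setup, with hypothetical resource names and address ranges; size the subnet so it can hold the nodes plus every pod they may run:

    # Hypothetical names and address ranges
    az network vnet create \
      --resource-group my-rg \
      --name my-aks-vnet \
      --address-prefixes 10.10.0.0/16 \
      --subnet-name aks-subnet \
      --subnet-prefixes 10.10.0.0/20

    SUBNET_ID=$(az network vnet subnet show \
      --resource-group my-rg \
      --vnet-name my-aks-vnet \
      --name aks-subnet \
      --query id -o tsv)

    # Nodes AND pods draw their IPs from this subnet
    az aks create \
      --resource-group my-rg \
      --name my-aks-cni \
      --node-count 3 \
      --network-plugin azure \
      --vnet-subnet-id "$SUBNET_ID" \
      --service-cidr 10.2.0.0/16 \
      --dns-service-ip 10.2.0.10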

AKS networking: Kubenet vs Azure CNI

The benefits of Azure CNI networking for production are:

  • It allows for the separation of control and management of resources
  • Different teams can manage and secure their own resources, which is good for security reasons. Traffic coming from a pod doesn't go through NAT, so the pod that establishes communication with other resources can be identified; with Kubenet, the source of the packet is always the node IP address (see the sketch after this list)
  • It lets you connect to existing Azure resources, on-premises resources, or other services directly via the IP addresses assigned to each pod
  • Virtual Nodes (vKubelet) are available only with Azure CNI
  • Windows nodes are available only with Azure CNI
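
A rough way to observe the source-address difference from the second bullet above; the VM IP (10.20.0.4) and port here are illustrative assumptions:

    # From inside the cluster, call a VM that is reachable from the pod network
    kubectl run srctest --rm -it --restart=Never \
      --image=curlimages/curl -- curl -s http://10.20.0.4/

    # Meanwhile, on the VM: with Azure CNI the captured source is the pod IP,
    # with Kubenet it is the node's IP
    sudo tcpdump -n -i eth0 'tcp port 80'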

The benefits of Kubenet are:

  • A reduced number of IP addresses reserved in your network space for pods
  • Simpler design and management than Azure CNI

CNI Comparison Table

I ran some synthetic tests with PerfKitBenchmarker: an iperf test, 7 rounds, with pod anti-affinity (i.e. the two test pods each ran on a separate node; a manifest sketch follows the list below).

Test environments:

  • An AKS cluster of 3 nodes (Standard_DS2_v2) backed by Kubenet, running nothing but system workloads
  • An AKS cluster of 3 nodes (Standard_DS2_v2) backed by Azure CNI, running nothing but system workloads
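
For reproducibility, here is a minimal sketch of that anti-affinity setup; the pod names and the public networkstatic/iperf3 image are assumptions, not the exact harness PerfKitBenchmarker generates. The required pod anti-affinity on the shared label forces the two pods onto different nodes:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: iperf-server
      labels:
        app: iperf-test
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: iperf-test
            topologyKey: kubernetes.io/hostname
      containers:
      - name: iperf
        image: networkstatic/iperf3
        args: ["-s"]
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: iperf-client
      labels:
        app: iperf-test
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: iperf-test
            topologyKey: kubernetes.io/hostname
      containers:
      - name: iperf
        image: networkstatic/iperf3
        command: ["sleep", "infinity"]
    EOF

    # Run the client against the server pod's IP
    SERVER_IP=$(kubectl get pod iperf-server -o jsonpath='{.status.podIP}')
    kubectl exec iperf-client -- iperf3 -c "$SERVER_IP"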

As shown, Azure CNI has slightly better results on an empty cluster. In a production environment, the difference (lower performance for Kubenet) can be larger because of NAT.

Conclusions:

No matter which CNI is picked, a VNET will be created anyway.

For Kubenet, a default VNET with a 10.0.0.0/8 CIDR will be created in the 'MC_' resource group. While it is still possible to modify resources (create peerings, etc.) in the 'MC_' group, it is strongly discouraged. NAT is applied, which adds some overhead. There is no option to add Windows nodes or to extend the cluster with Virtual Nodes. For communication with other services, e.g. a VM, the source address of packets arriving from an AKS cluster backed by Kubenet is the node IP address. But this approach saves IP address space.

Azure CNI requires you to design and create a VNET in advance. But it later gives you all the benefits of VNET networking (peering, User Defined Routes, etc.). It also enables you to add Windows nodes and Virtual Nodes. For communication with other Azure services, e.g. a VM, the source address of packets arriving from an AKS cluster backed by Azure CNI is the pod IP address. So it adds transparency to the network.

I advise using Azure CNI unless you are confident with Kubenet. In other words, Azure CNI forces you to design your networking in advance; with Kubenet, problems of this kind are simply postponed.

Authors:

Mykhailo Zahlada

Rostyslav Fridman
