
How to Understand and Set Up Kubernetes Networking



Kubernetes networking can be a pretty complex topic, especially if you are setting up and managing a Kubernetes cluster yourself.

This article does not cover setting up the cluster itself. The examples were tested on an RKE-based cluster (but apply everywhere else as well). If you are planning to use managed Kubernetes services such as EKS, AKS, GKE or IBM Cloud, most of the networking setup described here is already handled for you.

How to Utilize Kubernetes Networking

Many Kubernetes (K8s) deployment guides include instructions for deploying a Kubernetes CNI network as part of the K8s deployment. But if your K8s cluster is already running and no network is deployed yet, deploying the network is as simple as applying its manifest on the cluster. For example, to deploy Flannel:

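A typical way to do this is to apply the upstream example manifest (the URL points to the Flannel repository's sample; adjust it to the version you actually want to run):

    # Deploy the Flannel CNI as a DaemonSet using the upstream example manifest
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml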

With this, K8s is ready to go from a network perspective. To test that everything is working, we create two pods.

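For example, two throwaway busybox pods are enough (the pod names and image are just examples):

    # Spawn two simple pods to test pod-to-pod connectivity
    kubectl run pod1 --image=busybox --restart=Never -- sleep 3600
    kubectl run pod2 --image=busybox --restart=Never -- sleep 3600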

This will create two pods, which are already using our network. Looking at one of the containers, we find a network interface with an IP address from the 10.42.0.0/24 range attached.

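One way to check this, assuming the test pods from above:

    # Show the pod IP as seen by Kubernetes ...
    kubectl get pod pod1 -o wide
    # ... and the interfaces inside the container; eth0 should carry an address from 10.42.0.0/24
    kubectl exec pod1 -- ip addr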

A quick test from the other pod shows that the network is working properly.

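For example, replacing the address below with the IP that pod1 actually received:

    # Ping pod1 from pod2 (10.42.0.2 is just a placeholder address)
    kubectl exec pod2 -- ping -c 3 10.42.0.2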

How Does Kubernetes Networking Work Compared to Docker Networking?

Kubernetes manages networking through CNIs on top of Docker and simply attaches devices to the containers Docker creates. While Docker with Swarm also has its own networking capabilities (such as overlay, macvlan, bridging, etc.), the CNIs provide similar types of functions.

K8s does not use docker0, which is Docker's default bridge, but rather creates its own bridge named cbr0, a name chosen to differentiate it from the docker0 bridge.

Why Do We Need Overlay Networks?

Overlay networks such as vxlan or ipsec encapsulate the packet into another packet. This makes entities addressable that are outside the scope of the local machine. Alternatives to overlay networks include L3 solutions such as macvtap(lan) or even L2 solutions such as ipvtap(lan), but these come with limitations or even unwanted side effects.

Any solution on L2 or L3 makes a pod addressable directly on the network. This means the pod is reachable not just within the Docker network, but also from outside it. These could be public or private IP addresses.

However, communication on L2 is cumbersome and your experience will vary depending on your network equipment. Some switches need some time to register your MAC address before it actually becomes reachable. You can also run into trouble because the ARP neighbor tables of the other hosts in the system fill up. Those MAC address and neighbor table problems are the reasons solutions such as ipvlan exist; these do not register new MAC addresses but instead route traffic over the existing one.

The conclusion, and my recommendation, is that for most users BGP and direct routing are preferable to overlay networks.

How Does Kubernetes Networking Work Under the Hood?

The first thing to understand in Kubernetes is that a pod is not actually the equivalent of a container, but is a collection of containers. All containers of the same pod share a network stack. Kubernetes manages that by setting up the network itself on the pause container, which you will find for every pod you create. All other containers attach to the network of the pause container, which itself does nothing but provide the network. Therefore, it is also possible for one container to talk to a service in a different container, which is part of the same definition of the same pod, via localhost.
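
A minimal illustration of this behavior (the pod, container, and image names are just examples): both containers of the pod below share one network namespace, so the sidecar can reach the web server on localhost.

    apiVersion: v1
    kind: Pod
    metadata:
      name: localhost-demo
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
      - name: sidecar
        image: busybox
        # Reaches the other container of the same pod via localhost
        command: ["sh", "-c", "sleep 5 && wget -qO- http://localhost:80 && sleep 3600"]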

This is a notable difference from communication in plain Docker networks.

Kubernetes Traffic Routing

There are two scenarios that I will go into in more detail to explain how traffic gets routed between pods.

1. Routing Traffic on the Same Host:

There are two situations where the traffic does not leave the host: either the called service is running on the same node, or it is running in another container within the same pod.
In the case of calling localhost:80 from container 1 of a pod while the service is running in container 2 of the same pod, the traffic simply passes the loopback device and reaches its destination. In this case, the route is very short.

It gets a bit longer when we communicate with a different pod on the same host. The traffic will be passed on to cbr0, which notices that the destination is on the same subnet and therefore forwards the traffic directly to the destination pod, as shown below.

(Diagram: traffic between two pods on the same host, passing through the cbr0 bridge)

2. Routing Traffic Across Hosts:

This gets a bit more complicated when we leave the node. cbr0 will now pass the traffic to the next hop, whose configuration is managed by the CNI. These are basically just routes for the pod subnets with the destination host as the gateway. The destination host then continues on its own cbr0 and forwards the traffic to the destination pod, as shown below.

(Diagram: traffic between pods on different hosts, routed from cbr0 over the node network to the destination node's cbr0)
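
You can inspect these routes on a node itself; the per-node pod subnets appear as kernel routes with the other node (or its tunnel interface) as the next hop. The subnet below is only a placeholder:

    # List the routes the CNI installed for the pod subnets of other nodes
    ip route | grep 10.42
    # Expect one entry per remote node, e.g. a 10.42.X.0/24 route via that node or its tunnel interface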

What Exactly is a CNI?

A CNI, which is short for Container Network Interface, is basically an external module with a well-defined interface that can be called by Kubernetes.

You can find the maintained reference plugins, which include most of the important ones, in the official containernetworking/plugins repository on GitHub.

CNI version 0.3.1 is not very complicated. It consists of three required operations, ADD, DEL and VERSION, which is about as much as is needed to manage a network. You can read the spec in the CNI repository for the details.
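
On each node, a CNI plugin is configured through a small JSON file in /etc/cni/net.d/, and the plugin binary is invoked with ADD or DEL for every pod. A minimal example config for the reference bridge plugin (all values are illustrative):

    # Minimal CNI config dropped into /etc/cni/net.d/ (values are illustrative)
    cat <<'EOF' > /etc/cni/net.d/10-mynet.conf
    {
      "cniVersion": "0.3.1",
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.42.0.0/24"
      }
    }
    EOF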

The Different CNIs

To give you a bit of orientation, we will look at some of the most popular CNIs.

Flannel

Flannel is a simple network and is the easiest setup option for an overlay network. For most users it is, alongside Canal, the default network to choose. It also offers some native networking capabilities such as host gateways through its host-gw backend. Flannel does have limitations, though, including its lack of support for network security policies and the absence of multi-network capabilities.
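
Flannel's behavior is driven by a small net-conf.json, normally shipped inside the kube-flannel ConfigMap; switching the backend type from vxlan to host-gw is how you enable the host gateway mode mentioned above (the values are illustrative):

    # Excerpt of Flannel's net-conf.json from the kube-flannel ConfigMap (illustrative values)
    {
      "Network": "10.42.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }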

Calico

Calico takes a different approach than Flannel. It is technically not an overlay network, but rather a system to configure routing between all the systems involved. To accomplish this, Calico leverages the Border Gateway Protocol (BGP), the same protocol used for the Internet, in a process named peering, where every peering party exchanges traffic and participates in the BGP network. The BGP protocol itself propagates routes under its ASN, with the difference that these are private and there is no need to register them with RIPE.

However, for some scenarios Calico does use an overlay, in this case IP-in-IP (IPIP), which is used whenever a node is hosted on a different network, in order to enable the exchange of traffic between those two hosts.
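
In Calico, this behavior is controlled per IP pool; setting ipipMode to CrossSubnet enables the IP-in-IP encapsulation only between nodes that sit on different subnets. A sketch with an illustrative CIDR, applied with calicoctl:

    # Calico IPPool with IP-in-IP used only across subnets (CIDR is illustrative)
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: default-ipv4-ippool
    spec:
      cidr: 10.42.0.0/16
      ipipMode: CrossSubnet
      natOutgoing: true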

Canal

Canal is based on Flannel, but adds some Calico components, such as the host agent, which allows you to use network security policies. These are normally missing in Flannel. So it basically extends Flannel with the addition of security policies.

Multus

Multus is a CNI that is actually not a network interface itself. It orchestrates multiple interfaces without providing an actual network of its own, which makes Multus an enabler for multi-device and multi-subnet setups. Multus basically calls the real CNIs in place of the kubelet and communicates the results back to the kubelet.

(Diagram: the kubelet calls Multus, which delegates to the actual CNI plugins)

Kube-Router

Also worth mentioning is kube-router, which – like Calico – works with BGP and routing instead of an overlay network. Also like Calico, it utilizes IPINIP where needed. It also leverages ipvs for load-balancing.

Setting up a Multi-Network K8s Cluster

In cases where you need to use multiple networks, you'll be required to use Multus. While Multus is quite mature, you should know that there are currently some limitations.

One of those limitations concerns port mappings and is tracked in an open issue on GitHub. This is going to be fixed in the future, but if you currently need to map ports (either nodePort or hostPort configs), you will not be able to do so because of the referenced bug.

Setting Up Multus

The first step is to set up Multus itself. This is pretty much the config from the examples in the Multus repository, but with some important adjustments. See the sample below.

The first thing is to adjust the config map. Because we plan to have a default network with Flannel, we define the configuration in the delegates array of the Multus config. The important settings here are "masterplugin": true and the definition of the bridge for the Flannel network itself. You'll see why we need this in the next steps. Other than that, there is not much else to adjust except the mount definition of the config map.

Another important thing about this config map is that the network defined in it is the default that is automatically attached to containers without further specification. Please note that you need to kill and recreate the containers of the daemonset, or reboot your node, for changes to the config map to take effect.

The sample yaml file:

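A minimal sketch of the Multus CNI configuration carried by that config map, using the legacy delegates format; the bridge name kbr0 and the kubeconfig path are placeholders, not values from the original sample:

    # Sketch of a Multus CNI config using the legacy "delegates" format (bridge name and kubeconfig path are placeholders)
    {
      "name": "multus-cni-network",
      "type": "multus",
      "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
      "delegates": [
        {
          "type": "flannel",
          "masterplugin": true,
          "delegate": {
            "isDefaultGateway": true,
            "bridge": "kbr0"
          }
        }
      ]
    }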

Setting Up the Primary Flannel Overlay Network

For the primary Flannel network, things are pretty easy. We can take the example from the Multus repository for this and just deploy it. The adjustments made here are the CNI mount, the tolerations, and some changes to the CNI settings of Flannel, for example adding "forceAddress": true and removing "hairpinMode": true.
The CNI binaries are mounted from your host, in our case /opt/cni/bin. This was tested on a cluster that was set up with RKE, but it should work on other clusters as well.

Beyond that, not much was changed compared to the upstream example; only the initcontainer config was commented out, which you could also just safely delete, since Multus now acts as the primary CNI.

Here's the modified Flannel daemonset:

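Only the relevant excerpt of its CNI configuration is sketched here (values are illustrative, not taken from the original sample):

    # Sketch of the adjusted cni-conf.json in the Flannel ConfigMap
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true,
        "forceAddress": true
      }
    }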

With these samples deployed, we are pretty much done and our pods should now be assigned an IP address. Let's test it:

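For example, spin up a test pod and look at its interfaces (the pod name and image are just examples):

    # Spawn a test pod and list its interfaces
    kubectl run multus-test --image=busybox --restart=Never -- sleep 3600
    kubectl exec multus-test -- ip addr
    # eth0 should carry an address from the primary Flannel range (10.42.0.0/16 in this setup)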

In this example, the pod received 10.42.2.43 on the eth0 interface, which is the default interface. All extra interfaces will appear as netX, i.e. net1.

Setting Up the Secondary Network

The secondary network needs a few more adjustments, and these are all made on the assumption that you want to deploy vxlan. To actually serve a secondary overlay, we need to change the VXLAN identifier (VNI), which by default is set to 1 and is already taken by our first overlay network. We can change this by configuring the network on our etcd server. We use the cluster's own etcd (and assume that the job runs on a host running the etcd client) and mount in our keys from the local host, which in our case are stored in the /etc/kubernetes/ssl folder.

The entire sample YAML file:

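At its core, the job boils down to a single etcdctl call that writes a second Flannel network definition under its own prefix, with a different subnet and VNI. A sketch using the etcd v2 API; the endpoints, certificate file names and the prefix are assumptions you will need to adapt to your cluster:

    # Write the secondary Flannel network config into etcd under its own prefix (sketch; adapt endpoints, certs and prefix)
    etcdctl --endpoints https://127.0.0.1:2379 \
      --ca-file   /etc/kubernetes/ssl/kube-ca.pem \
      --cert-file /etc/kubernetes/ssl/kube-node.pem \
      --key-file  /etc/kubernetes/ssl/kube-node-key.pem \
      set /coreos.com/network2/config \
      '{ "Network": "10.5.0.0/16", "Backend": { "Type": "vxlan", "VNI": 2 } }'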

Next, we can effectively deploy the secondary network. The setup of this is very similar to the primary one, but with some key differences. The most obvious is that we changed the subnet, but we also need to change a few other things.

First of all, we need to set a different dataDir, i.e. /var/lib/cni/flannel2, and a different subnetFile, i.e. /run/flannel/flannel2.env. This is needed because the defaults are already occupied and used by our primary network. Next, we need to adjust the bridge name, because the default one is used by the primary Flannel overlay network.

We also need to point flanneld to the etcd server that we configured before. In the primary network, this was done by connecting to the K8s API directly via the --kube-subnet-mgr flag. But we cannot do that here, because we also need to modify the prefix from which flanneld reads its configuration. You can see this below, together with the settings for our cluster's etcd connection and the mounted certificate files again. Last but not least, we add the network definition. The rest of the sample is identical to our main network's config.

See the sample config file for the above steps:

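The decisive part is the flanneld argument list; a sketch, where the endpoints, certificate paths and prefix are assumptions matching the job above:

    # flanneld args for the secondary network (sketch; adapt endpoints, certs and prefix)
    args:
    - --ip-masq
    - --subnet-file=/run/flannel/flannel2.env
    - --etcd-endpoints=https://127.0.0.1:2379
    - --etcd-prefix=/coreos.com/network2
    - --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem
    - --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem
    - --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem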

Once this is done we have our secondary network ready.

Assigning Extra Networks

Now that we have a secondary network ready, we also need to assign it. To do this, we first define a NetworkAttachmentDefinition, which we can afterward use to assign this network to a container. This is the dynamic alternative to the delegates array we set up before when initializing Multus, and this way we can attach the networks we need on demand. In this definition, we need to specify the network type, in our case flannel, as well as the necessary configuration. This includes the previously mentioned subnetFile, dataDir and bridge name.

The last thing we need to decide is the name for the network, so we name ours flannel.2.

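A sketch of such a definition; the bridge name kbr1 mirrors the placeholder choices made above and, like the paths, is an assumption:

    # NetworkAttachmentDefinition for the secondary Flannel network (bridge name and paths are assumptions)
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: flannel.2
    spec:
      config: '{
        "cniVersion": "0.3.0",
        "type": "flannel",
        "name": "flannel.2",
        "subnetFile": "/run/flannel/flannel2.env",
        "dataDir": "/var/lib/cni/flannel2",
        "delegate": {
          "bridge": "kbr1",
          "isDefaultGateway": false
        }
      }'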

Now we're finally ready to spawn our first pod with our secondary network.

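A sketch of a pod requesting the extra network via the Multus annotation (the pod name and image are just examples):

    # Pod attached to the secondary network via the Multus networks annotation
    apiVersion: v1
    kind: Pod
    metadata:
      name: multinet-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: flannel.2
    spec:
      containers:
      - name: shell
        image: busybox
        command: ["sleep", "3600"]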

This spawns your new pod with your secondary network attached, and checking its interfaces should now show an additional net1 interface with an address from the secondary range, in this example 10.5.22.4.

Troubleshooting

Should this not work for you, you will need to look at the logs of your kubelet.
One common issue is missing CNI binaries. In my first tests, I was missing the bridge CNI plugin, since this was not deployed by RKE. It can be installed from the containernetworking plugins repo.
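
If that is your problem as well, the reference plugins just need to be dropped into the CNI binary directory on every node; for example (the release version and paths are illustrative):

    # Install the reference CNI plugins (bridge, host-local, ...) into the CNI bin dir
    # Version and paths are illustrative; pick the current release of containernetworking/plugins
    curl -L -o /tmp/cni-plugins.tgz \
      https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz
    tar -xzf /tmp/cni-plugins.tgz -C /opt/cni/bin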

External Connectivity and Load Balancing

Now that we have a network up and running, the next thing we want is to make our apps externally reachable and to configure them to be highly available and scalable. Load balancing is the key component we need to have in place for this.

Kubernetes basically has four concepts to make an app externally available.

Using Load Balancers

Ingress

An Ingress is basically a load balancer with Layer 7 capabilities, specifically HTTP(S). The most commonly used implementation of an ingress controller is the NGINX ingress controller. But this depends on your use case, and you may want to use a different one; for example, ingress controllers for traefik or HAProxy already exist. See the respective guides for how to set up a different ingress controller.

Configuring an ingress is quite easy. In the following example, you see an ingress linked to a service. The rules section holds the basic configuration, which in this example points to a service; the tls section links your SSL certificate, which you need unless you do not employ SSL; and the annotations adjust some of the detailed settings of the NGINX ingress, which you can look up in its documentation.

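A sketch of such an ingress; the host, service and secret names are placeholders, and the annotation is just one example of an NGINX-specific setting:

    # Example Ingress for the NGINX ingress controller (names and host are placeholders)
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        # NGINX-specific tuning, e.g. allow larger request bodies
        nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    spec:
      tls:
      - hosts:
        - app.example.com
        secretName: my-app-tls          # secret holding the SSL certificate
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: my-app       # the linked service
              servicePort: 80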

Layer 4 Load Balancer

The Layer 4 load balancer, which is defined in Kubernetes with type: LoadBalancer, is a service-provider-dependent load balancing solution. On bare metal this will probably be HAProxy or a routing solution; cloud providers may use their own solution, have special hardware in place, or resort to an HAProxy or routing solution as well. Should you manage a bare-metal installation of a K8s cluster, you might want to look at a dedicated bare-metal load balancer implementation.
A load balancer on this level does not understand high-level application-layer protocols (Layer 7) and is only capable of forwarding traffic, so functions such as SSL termination have to happen elsewhere, for example in an ingress or in the application itself. Configuration of most of the load balancers on this level is done through annotations and is not standardized, so look this up in the docs of your cloud provider accordingly.
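
A sketch of such a service (names and ports are placeholders); the provider then provisions the actual load balancer and points it at the cluster:

    # Layer 4 load balancer service; the provider provisions the actual load balancer
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-lb
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
      - port: 443
        targetPort: 8443
        protocol: TCP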

Using {host, node} Ports

A {host,node}Port is basically the equivalent of docker -p port:port, especially the hostPort. The nodePort, unlike the hostPort, is available on all nodes instead of only on the nodes running the pod. For a nodePort, Kubernetes creates a clusterIP first and then load balances traffic over this port. The nodePort itself is just an iptables rule to forward traffic on the port to the clusterIP.
A nodePort is rarely used except for quick testing, and is only really needed in production for monitoring; most of the time you will want to use a Layer 4 load balancer instead. A hostPort is only really used for testing purposes, or very rarely to pin a pod to a specific node and publish it under the specific IP address pointing to this node.

To give you an example, a hostPort is defined in the container spec, like the following:

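A sketch, with placeholder names; a nodePort, by contrast, would be set on a Service of type NodePort:

    # hostPort publishes container port 80 on port 8080 of the node running this pod
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostport-demo
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 8080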

What Is a ClusterIP?

A clusterIP is an internally reachable IP for the Kubernetes cluster and all services within it. This IP itself load balances traffic to all pods that match its selector rules. A clusterIP is also automatically generated in a lot of cases, for example when specifying a type: LoadBalancer service or setting up a nodePort, because the load balancing happens through the clusterIP.
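
A sketch of a plain clusterIP service (ClusterIP is the default type, so the type line could even be omitted; names and ports are placeholders):

    # Plain ClusterIP service; traffic to the cluster IP is balanced over the pods matching the selector
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-internal
    spec:
      type: ClusterIP
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080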

The clusterIP was created to solve the problem of having multiple addressable pods that are constantly replaced. It is much easier to have a single IP that does not change than to rely on service discovery all the time, although there are times when it is appropriate to use service discovery instead, if you want explicit control, for example in some microservice environments.

Common Troubleshooting

If you run your own cluster, you may be managing your firewall rules manually. For example, in AWS you will want to adjust your security groups to allow inter-cluster communication as well as ingress and egress. If you do not, this will lead to an inoperable cluster. Make sure you always open the required ports between master and worker nodes, and the same goes for any ports you open on nodes directly, i.e. hostPort or nodePort.

Network Security

Now that we have set up all of our Kubernetes networking, we also need to make sure that we have some security in place. A simple rule in security is to give applications the least amount of access they need. That way, attackers will have a hard time digging deeper into your network. While it does not prevent a breach entirely, it certainly makes it a heck of a lot harder and more time-consuming. This is important because it gives you more time to react and prevent further damage. A prominent example is the combination of different exploits/vulnerabilities of different applications, which can be attacked through multiple vectors (e.g. network, container, host).

The built-in Kubernetes option here is network policies. Network policies only work with certain CNIs; they work, for example, with Calico and kube-router. Flannel does not support them, but you can move to Canal, which makes the network policy feature from Calico usable by Flannel. For most other CNIs there is no support, and also no support planned.
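
A minimal example of such a policy (the labels and port are placeholders): it allows ingress to the backend pods only from the frontend pods and only on one port.

    # Minimal NetworkPolicy: only frontend pods may reach backend pods, and only on TCP/8080
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: frontend
        ports:
        - protocol: TCP
          port: 8080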

But this is not the only issue. Each network policy rule is a very simple firewall rule targeting a certain port. This means you cannot apply any advanced settings; for example, you cannot block a single container on demand. Furthermore, the rules do not understand the traffic itself, so you are purely limited to rules on the Layer 3 and 4 levels. And, lastly, there is no detection of network-based threats or attacks such as DDoS, DNS, SQL injection and other damaging network attacks.

This is where specialized container network security solutions provide the security needed for critical applications, such as financial or compliance-driven ones. I personally like NeuVector for this; it has a container firewall solution that I had experience deploying at Arvato/Bertelsmann, and it provided the Layer 7 inspection that plain network policies lack.

It should be noted that any network security solution must be cloud-native, self-scaling and adapting. You cannot be checking iptables rules or updating anything manually whenever you deploy new applications or scale your pods. You might manage all this by hand for a small setup, but for any enterprise, it must not slow down the CI/CD pipeline.

In addition to security, the visibility into connections and packet-level container network data helped us debug applications during testing and staging. With a Kubernetes network, you're never really sure where the packets are going and which pods they are being routed to unless you can see the traffic.

Choosing a CNI

Now that Kubernetes networking and CNIs have been covered, one big question always comes up: which CNI solution should you choose? I will try to provide some advice on how to go about making this decision.

First, Define the Problem

The first thing for every project is to define the problem you want to solve in as much detail as possible. You will want to know what kind of applications you want to deploy and what kind of load they generate. Some of the questions you could ask yourself are the following:

Can I Withstand Major Downtime? Or Even Minor?

This is an important question because you should decide up front: if you choose one solution now and later want to switch, you will need to re-set up the network and redeploy all your containers. This will mean downtime for your service. It will be fine if you have a planned maintenance window, but as you grow, zero downtime becomes more important!

My Application Is on Multiple Networks

This scenario is quite common in on-premise installations. In fact, even if you only want to separate your internal network from the public network, you will need to either set up multiple networks or have clever routing.

Is There a Certain Feature I Need from The CNIs?

Certain features are only provided by certain CNIs, and this is another thing that can influence your decision. For example, you may want to use Weave, or you may want more mature load balancing through ipvs.

What Network Performance Is Required?

If you know that your application is sensitive to latency or heavy on the network, you may want to avoid any overlay network. Overlays can be heavy on performance, and encryption even more so. Often the only way to improve network performance is to avoid overlays and use networking techniques such as routing instead. When you look for network performance, you have a few choices, for example:

  • Ipvlan: It has good performance, but it also has caveats, i.e. you cannot use macv{tap,lan} at the same time on the same host.

  • Calico: Calico is not the most user-friendly CNI, but it provides better performance compared to vxlan and can be scaled without issues.

  • Kube-Router: Kube-router will give you better performance (like Calico), since they both use BGP and routing, plus support for LVS/IPVS. But it might not be as battle-tested as Calico.

  • Cloud Provider Solutions: Last but not least, some of the cloud providers provide their own Kubernetes networking solutions that may or may not perform better. Amazon, for example, has its aws-vpc backend, which is supported by Flannel. Aws-vpc performs in most scenarios about as well as ipvlan.

But I Just Want Something That Works!

Yes, that is understandable, and this is the case for most users. In this case, Flannel with vxlan will probably be the way to go, because it is a no-brainer and just works. However, as I said before, you may have to switch to something else as you grow, which will cost extra effort. But it is definitely the easiest way to start.

Just Make a Decision

It is really a matter of making a decision rather than making none at all. If you do not need any specific features, it is fine to start with Flannel and vxlan. Migrating later will be some work, but it is better than making no decision at all.

With all this information, I hope that you will have some relevant background and a better understanding of how Kubernetes networking works.


