By default, Kubernetes is not secure. Even though it is open source, Kubernetes is a product, and it should be easy to get started, play around and experiment with it. Imagine if you weren’t cluster-admin when you got started, or if the pods in the cluster couldn’t send HTTP requests to one another. The threshold for getting started would be much higher. On the other hand, you would have a much more secure cluster. It is always a tradeoff. In this article I will write about network policies and how to harden your insecure Kubernetes cluster.

As always you can follow along and check out the source code or clone it here.

Network Policies

First, let’s understand what network policies are and why you need them. As we concluded above, by default all pods in the cluster can send HTTP requests to each other. It doesn’t matter if they run in different namespaces; they can still reach one another. This is not good, and you probably know why, but I will give an example anyway. Do you remember the Log4j vulnerability? A really bad vulnerability, probably as bad as it gets, where an attacker could remotely execute malicious code. The malicious code usually tried to access sensitive data and send it to remote servers controlled by the attackers. This could have been prevented with a properly configured firewall, or network policy as it is called in Kubernetes. Most of your applications do not need to be able to send HTTP requests to arbitrary resources out on the Internet. Don’t allow them to! This makes it much harder for an attacker to, for example, extract sensitive data from your system. It also slows attackers down when they don’t have access to other pods in the cluster, or only to certain ports: fewer vulnerabilities exposed, harder lateral movement, and more time for you to detect, respond and prevent.

Getting started

What is interesting with network policies is that as soon as you apply a policy to one of your pods without specifying any rules, it denies by default. The pod will neither accept incoming traffic nor be able to send outgoing traffic. I think this is good: it means you need to be specific and explicit about what you allow the pod to do. Let’s show this with an example.

First, let’s get the cluster up and running. Here I use Kind.

kind create cluster --config=config.yaml
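One caveat worth knowing: Kind’s default CNI (kindnet) does not enforce network policies, so the config.yaml in the repo most likely disables it so that a policy-enforcing CNI such as Calico can be installed instead. The file itself is not shown in this article; a minimal sketch, assuming that is its purpose, could look like this:

# config.yaml - minimal sketch; the repo’s actual file may differ
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # disable kindnet so a NetworkPolicy-enforcing CNI (e.g. Calico) can be installed
  disableDefaultCNI: true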

…and create a namespace called test.

kubectl apply -f namespace.yml
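The namespace manifest is as simple as they come; namespace.yml presumably contains something like:

apiVersion: v1
kind: Namespace
metadata:
  name: test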

OK, we are up and running. I will show a very simple example of network policies, but you will be able to apply it to any workload in your cluster. We will deploy two pods running nginx, and we will pretend one pod is a database called db and the other pod is an application called app. The app needs to be able to communicate with the db, but the db doesn’t need to be able to communicate with the app. Let’s see what that would look like.

Create the app pod:

kubectl apply -f app.yml
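app.yml is not reproduced in full here. Judging by the label the network policy selects on later (name=app), the pod part of the file should look roughly like this (the file also contains the NetworkPolicy we will look at in the next section):

apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: test
  labels:
    name: app   # the label the network policy will select on
spec:
  containers:
    - name: app
      image: nginx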

Create the db pod:

kubectl apply -f db.yml
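db.yml should have the same shape, with the name and label swapped:

apiVersion: v1
kind: Pod
metadata:
  name: db
  namespace: test
  labels:
    name: db
spec:
  containers:
    - name: db
      image: nginx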

Perfect, we now have two pods running in the test namespace. To quickly verify that our app can talk to the db, we can run this curl command:

kubectl -n test exec -it app -- curl -I http://db
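Since both pods run stock nginx, the response headers should look roughly like this (exact values will differ):

HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html
...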

As you can see, a 200 response code. Great, let’s try the opposite direction:

kubectl -n test exec -it db -- curl -I http://app

…and that gives you an error. Great, your cluster is already more secure!
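The exact failure mode depends on your CNI plugin. With Calico, the packets are simply dropped, so curl hangs and eventually times out with something like:

curl: (28) Failed to connect to app port 80: Connection timed out
command terminated with exit code 28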

Deny by default

Let’s take a closer look at how you define a network policy. Open the app.yml file and you will see this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app
  namespace: test
spec:
  podSelector:
    matchLabels:
      name: app
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
      - podSelector:
          matchLabels:
            name: db
      ports:
        - protocol: TCP
          port: 80
    # allow DNS lookups (UDP 53) to kube-system
    - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
      ports:
      - protocol: UDP
        port: 53

Let’s go through it step by step. To specify which pod(s) the network policy applies to, you match labels at spec.podSelector.matchLabels. Here the policy is only applied to pods labeled name=app, i.e. our app pod. The next sections to look at are spec.ingress and spec.egress, and if you pay attention you will notice that we have not specified any ingress rule. Remember, network policies are deny by default, so when nothing is specified, everything is blocked. At spec.egress.to we have specified a couple of rules. The first entry allows outgoing traffic to pods whose labels match name=db, in our case the db pod. Quite simple. The second entry is a bit more interesting, and it assumes you are running CoreDNS inside your cluster. Because of the deny-by-default nature of network policies, we need to allow DNS queries to CoreDNS as well. In this example we allow all UDP traffic to the kube-system namespace on port 53. Of course, you can be more specific and target just the DNS pods instead of the whole namespace, as shown below.
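For instance, on most distributions the CoreDNS pods carry the label k8s-app=kube-dns, so a tighter version of the DNS rule can combine the namespace selector with a pod selector in the same to entry. A sketch, assuming that label (verify it in your own cluster):

    # dns udp, restricted to the CoreDNS pods only
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53

Note that namespaceSelector and podSelector must sit in the same element of the to list to be combined; as two separate list elements they would be treated as two independent peers (an OR), not as one combined condition (an AND).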

Also, we only allow outgoing traffic to the db pod and to the kube-system namespace. To prove that network policies are deny by default, let’s try to send a request to Google:

kubectl -n test exec -it app -- curl -I http://www.google.com

Boom! No Log4j-style exploit will be able to exfiltrate data from this cluster.

Limitations

Of course, there are some limitations to network policies as well. First of all, no events or logs are emitted when a network policy blocks traffic. Policies fail silently, so for both debugging and monitoring you need to rely on the pods’ own logs.

Another limitation is that you cannot use an FQDN in your policy. As an example, if you for some reason would like the app to send outgoing requests to www.google.com, you need to allow the specific IPs behind www.google.com, not just the domain. This is troublesome because www.google.com resolves to dynamic IPs.
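What the API does offer is allowing traffic to IP ranges via ipBlock. The closest workaround is therefore something like the snippet below, where the CIDR is an illustrative placeholder; you would have to look up and maintain the real ranges yourself:

  egress:
    - to:
        - ipBlock:
            cidr: 142.250.0.0/15   # illustrative placeholder, not an authoritative list of Google’s IPs
      ports:
        - protocol: TCP
          port: 443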

A third limitation is the lack of global rules. Even though it is possible to set “global” rules, it is a bit hacky. As an example, every pod in the cluster should be able to talk to the internal DNS. As a Kubernetes admin you want to set a single rule that allows all pods to reach the internal DNS. It is technically possible with network policies, but not as smooth as one could wish. Network policies were designed with the application in mind, so the responsibility shifts to the application engineers more than the cluster admins. I would even argue for having exactly one network policy per application instead of trying to be clever and reuse network policies between pods. It is possible to have one network policy cover all pods in a namespace (see the sketch below), but due to the first limitation, no logs or events, it can be troublesome to debug. Keep it simple and have a one-to-one relation between pod and network policy.
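For reference, the namespace-wide variant mentioned above is usually written as a default-deny policy with an empty pod selector, which matches every pod in the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: test
spec:
  podSelector: {}   # an empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress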

Some work is ongoing in the Kubernetes network SIG (SIG Network) to solve the limitations above. It introduces two new resources, AdminNetworkPolicy and BaselineAdminNetworkPolicy. The baseline network policy is intended for an admin to set base rules; here you can e.g. say that no pods should be able to send outgoing requests to the Internet, making the cluster deny by default. These rules can be overridden by each app’s own network policy. The admin network policy, however, has the highest priority, and there you can define explicit rules that you want applied to the whole cluster. A good example is the internal DNS: all apps should be able to communicate with the DNS, and instead of defining that in every network policy it can be defined at a global level.

I’m excited to see these features land in a future release. Until then, make sure you start using network policies if you are not already. They are an easy thing to implement, with a high impact on your security.

That’s it! Thank you for reading!