Network Policy in Kubernetes

Sabir Piludiya
6 min read · Jun 18, 2022

In this story, we will go through the concepts of Network Policy in Kubernetes, its use cases, and the situations where we can leverage its benefits. We will also set it up in a local k8s cluster and play with different scenarios.


Prerequisites:

  • Minikube setup
  • A basic idea of deployments and pods


By the end of this tutorial, you will be able to:

  • Understand the basics and use cases of Network Policy
  • Control the traffic between services running on Kubernetes using network policies, with hands-on practice

What is Network Policy in Kubernetes?

A Kubernetes Network Policy lets users define rules to control network traffic. By applying it, we can allow or restrict the traffic to/from any pod. Network policies can be thought of as a firewall for pods. However, Kubernetes itself has no built-in ability to enforce a network policy; to enforce one, we need a network plugin.

The network plugin must support Network Policy in order to enforce a Network Policy definition in a Kubernetes cluster. Otherwise, any rules that we apply have no effect.

Network plugins that support Network Policy include Calico, Cilium, Antrea, Weave Net, and kube-router.

All k8s pods are non-isolated by default and can communicate with each other; however, pods can be isolated by applying a Network Policy. A Network Policy can apply to all the pods within a namespace, or we can use selectors to apply the rules only to pods with a specific label. Network policies can be applied to ingress as well as egress traffic, and they take effect without restarting the running pods.

By means of ingress and egress rules, we can define the incoming or outgoing traffic rules from/to:

  • Pods with a specific label
  • Pods belonging to a namespace with a particular label
  • A combination of both, restricting the selection to labelled pods in labelled namespaces
  • Specific IP ranges
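
For instance, a single hypothetical ingress rule can combine all of these selector types. Every name, label, and CIDR below is illustrative, not from the article:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: selector-examples
spec:
  podSelector:
    matchLabels:
      app: api                # the policy applies to pods labelled app=api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:        # pods with a specific label
            matchLabels:
              app: web
        - namespaceSelector:  # pods in namespaces with a particular label
            matchLabels:
              team: platform
        - namespaceSelector:  # combination: labelled pods in labelled namespaces
            matchLabels:
              team: platform
          podSelector:
            matchLabels:
              app: web
        - ipBlock:            # a specific IP range
            cidr: 10.0.0.0/24
```

Note that the entries in the `from` list are OR'ed together, while putting a `namespaceSelector` and a `podSelector` inside the same entry AND's them.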

Ingress and Egress

We can secure the networking traffic by applying the ingress and egress rules to the Network Policy in Kubernetes.

The following diagrams depict the flow of ingress and egress traffic.

Ingress is incoming traffic to the pod.


Egress is outgoing traffic from the pod.


Network Policy in Action

Let’s create a Network Policy in a locally running Minikube cluster.

Running Minikube with the Calico network plugin:

In order to enable the Calico network plugin, run Minikube with the following command -

minikube start --network-plugin=cni --cni=calico
Kubernetes running with Calico CNI in Minikube

Run the following command to verify that Calico is enabled, by looking for calico pods in the kube-system namespace.

kubectl get pods -l k8s-app=calico-node -n kube-system
Listing the calico pods

So far so good! Now we have a Minikube cluster running with Calico.

Checking default allow-all rule

As we discussed previously, any pod can talk to any other pod by default. Let’s verify this behaviour.

We will create pods for a front-end, back-end, and database application, which should initially be able to communicate with each other as shown in the diagram below.

Default behaviour of pod communication

To test this out, deploy the pods by applying the following YAML file.

The YAML file contains 3 services, each pointing to one of 3 pods, as below -

  • frontend service => frontend pod
  • backend service => backend pod
  • db service => mysql pod

Note that all of these pods have unique labels assigned.
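
A minimal sketch of such a `network-policy-demo.yaml` is shown below. The pod/service names, labels, and ports follow the article; the container images and the MySQL password are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend          # unique label per pod
spec:
  containers:
    - name: frontend
      image: nginx         # assumed image
---
apiVersion: v1
kind: Service
metadata:
  name: frontend           # frontend service => frontend pod
spec:
  selector:
    app: frontend
  ports:
    - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: nginx         # assumed image
---
apiVersion: v1
kind: Service
metadata:
  name: backend            # backend service => backend pod
spec:
  selector:
    app: backend
  ports:
    - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
    - name: mysql
      image: mysql:8       # assumed image
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: password  # demo only, never do this in production
---
apiVersion: v1
kind: Service
metadata:
  name: db                 # db service => mysql pod
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
```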

Deploy the YAML using the following command -

kubectl apply -f network-policy-demo.yaml

3 pods and 3 services will be created as shown below -

Running 3 pods and 3 services

Let’s check the communication from the frontend pod to the backend pod using curl, calling the backend service on port 80.

Checking communication to backend service from frontend pod before applying Policy

So, it’s accessible. Now let’s try to reach the database pod from the frontend pod. We will verify that the database pod is reachable on port 3306 from the frontend pod using the telnet utility.

Install telnet in the frontend pod using the following command -

apt update && apt install telnet -y

Call the db service, which points to the database pod on port 3306, from the frontend pod using the following command -

telnet db 3306
Database pod is accessible from frontend pod before applying Policy

It’s connected: the frontend pod is able to reach the database. That’s the default communication behaviour in Kubernetes (everyone can talk to everyone). That’s not acceptable, right?

Not to mention that the backend pod is also able to connect to the database, as shown in the screenshot below.

Database pod is accessible from backend pod before applying Policy

For security purposes, the frontend pod should not communicate with the database directly. A Network Policy can be used to prevent that communication. Let’s allow communication to the database only from the backend pod using a Network Policy.

Network policy for the Ingress traffic to the Database pod

As shown in the above diagram, we want to create a policy for the ingress traffic to the database pod. All incoming traffic to the database pod should be blocked except traffic from the backend pod, so eventually only the backend pod will be able to communicate with the database.

Create the Network policy using the following YAML file.
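
A sketch of such a `db-netpol.yaml`, assuming the pods carry `app: mysql` and `app: backend` labels, could be:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-netpol
spec:
  podSelector:
    matchLabels:
      app: mysql           # apply the policy to the MySQL database pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend # accept traffic only from the backend pod
      ports:
        - protocol: TCP
          port: 3306       # and only on the MySQL port
```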

podSelector: under spec, it selects the pods in the same namespace as the NetworkPolicy to which the policy applies; a podSelector inside an ingress from (or egress to) section instead selects the pods allowed as ingress sources (or egress destinations).

policyTypes: may include Ingress, Egress, or both. It indicates whether the policy applies to ingress traffic to the selected pods, egress traffic from the selected pods, or both.

In the YAML,

  • spec.podSelector defines the pods to which we apply the policy. In this case, it’s the MySQL database pod.
  • spec.ingress[].from[].podSelector defines the only pod from which ingress traffic should be accepted.
  • spec.ingress[].ports[] defines the only port number on which ingress traffic should be accepted.

So incoming traffic to the database pod is accepted only if it arrives on port 3306 and comes from the backend pod.

Let’s apply the network policy and verify.

Deploy the YAML using the following command -

kubectl apply -f db-netpol.yaml
Apply the Network policy and verify

As shown in the screenshot, the policy is applied to the MySQL pod.

The backend pod is still able to connect to the database pod using the telnet utility, as shown below -

Database pod is accessible from backend pod after applying Policy

However, the frontend pod can no longer connect to the database pod; the access is restricted.

Database pod is not accessible from frontend pod after applying Policy

Voila! We have restricted the traffic and improved security to some extent. However, all other traffic is still allowed. We can create network policies for the rest of the traffic based on the security requirements.

To make sure that all pods in the namespace are secure, a best practice is to establish a default network policy. A default deny-all ingress and egress policy blocks all traffic to/from pods. So create a deny-all policy to block all the traffic, establish other policies based on the security requirements, and then end-to-end test the communication between pods and tune the policies accordingly.
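
Such a default deny-all policy is a standard pattern; an empty podSelector selects every pod in the namespace, and listing both policy types with no rules denies all traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}      # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules defined, so all incoming traffic is denied
    - Egress           # no egress rules defined, so all outgoing traffic is denied
```

Any traffic you want to keep must then be explicitly allowed by additional policies, since policies are additive.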


Above blog is submitted as part of ‘Devtron Blogathon 2022’ —

Check out Devtron’s GitHub repo and give it a star to show your love & support.

Follow Devtron on LinkedIn and Twitter to keep yourself updated on this Open Source project.

Questions? Comments? Feel free to leave your feedback in the comments section or contact me directly.

Thanks! Hope you liked it.

See you in another story 👋!



Sabir Piludiya

DevOps Engineer. I am fond of learning new technologies, focusing on K8s, Automation, and Data Analytics.