Life of a Packet in Kubernetes — Part 3

Part 1 — Basic container networking

Part 2 — Calico CNI

Part 3:

  1. Pod-to-Pod
  2. Pod-to-External
  3. Pod-to-Service
  4. External-to-Pod
  5. External Traffic Policy
  6. Kube-Proxy
  7. iptables rules processing flow
  8. Network Policy basics

Pod-to-Pod

kube-proxy is not involved in Pod-to-Pod communication, as the CNI configures the node and the Pods with the required routes. All containers can communicate with all other containers without NAT, and all nodes can communicate with all containers (and vice versa) without NAT.
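You can see these routes on any node. For example, with Calico (from Part 2), each local pod gets a /32 route pointing at its veth; the interface names and pod IPs below are illustrative:

node-1$ ip route | grep cali
10.244.120.102 dev cali3f6e2a81b90 scope link
10.244.120.103 dev cali7c49d10ee2f scope link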

Pod-to-External

For traffic that goes from a Pod to an external address, Kubernetes uses SNAT: it replaces the Pod’s internal source IP:port with the host’s IP:port. When the return packet comes back to the host, it rewrites the destination back to the Pod’s IP:port and forwards it to the original Pod. The whole process is transparent to the Pod, which is unaware of the address translation.
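Conceptually, the SNAT is a masquerade rule in the nat table’s POSTROUTING chain. A simplified sketch (the real kube-proxy rules are mark-based, as shown later in this post; 10.244.0.0/16 is the pod CIDR used in the examples below):

$ sudo iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE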

Pod-to-Service

ClusterIP

Kubernetes has a concept called “service,” which is simply an L4 load balancer in front of pods. There are several different types of services. The most basic type is called ClusterIP. This type of service has a unique VIP address that is only routable inside the cluster.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
  labels:
    app: auth
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: webapp
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    app: backend
spec:
  ports:
  - port: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: auth
...
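Once these objects are created, every pod can reach the services by name via the cluster DNS. A quick check, using the same dnsutils pod that appears later in this post (the resolved ClusterIP here is illustrative):

master # kubectl exec -i -t dnsutils -- nslookup frontend
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   frontend.default.svc.cluster.local
Address: 10.98.120.226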

NodePort (External-to-Pod)

Now we have DNS names that the services can use to communicate with each other inside the cluster. However, external requests can’t reach a service that lives inside the cluster, as the service IP addresses are virtual and private.

NodePort 1.1
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
  - port: 80
    targetPort: 80
    # Optional field
    # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
    nodePort: 31380
...
NodePort 1.2

External Traffic Policy

externalTrafficPolicy denotes whether this Service routes external traffic to node-local or cluster-wide endpoints. “Local” preserves the client source IP and avoids a second hop for NodePort-type services, but risks potentially imbalanced traffic spreading. “Cluster” obscures the client source IP and may cause a second hop to another node, but should have good overall load-balancing.
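The policy is a plain field on the Service spec, so you can inspect or change it on a live Service; a sketch using the frontend Service from the examples above:

master # kubectl get svc frontend -o jsonpath='{.spec.externalTrafficPolicy}'
Cluster
master # kubectl patch svc frontend -p '{"spec":{"externalTrafficPolicy":"Local"}}'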

Cluster Traffic Policy

This is the default external traffic policy for Kubernetes Services. The assumption here is that you always want to route traffic to all pods (across all the nodes) running a service with equal distribution. The steps below walk through a request to node2’s NodePort; a curl sketch after the figure shows the effect on the client source IP.

  • client sends the packet to node2:31380
  • node2 replaces the source IP address (SNAT) in the packet with its own IP address
  • node2 replaces the destination IP on the packet with the pod IP
  • packet is routed to node 1 or 3, and then to the endpoint
  • the pod’s reply is routed back to node2
  • the pod’s reply is sent back to the client
NodePort 1.3
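The SNAT in step two is easy to observe from the pod’s side: the client address in the application log is a node (or tunnel) IP instead of the real client. A sketch, with an illustrative nginx access-log line:

$ curl http://node2:31380
master # kubectl logs deploy/webapp | tail -1
10.244.120.1 - - [05/Nov/2020:05:26:27 +0000] "GET / HTTP/1.1" 200 ...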

Local Traffic Policy

With this external traffic policy, kube-proxy will add proxy rules on a specific NodePort (30000–32767) only for pods that exist on the same node (local) instead of every pod for a service regardless of where it was placed.

---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: webapp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
  - port: 80
    targetPort: 80
    # Optional field
    # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
    nodePort: 31380
...
  • client sends the packet to node1:31380, which does have endpoints
  • node1 routes packet to the endpoint with the correct source IP
  • node1 won’t route the packet to node3 as the policy is Local
  • the client sends a packet to node2:31380, which doesn't have any endpoints
  • packet is dropped
NodePort 1.4
NodePort 1.5

Local traffic policy in LoadBalancer Service type

If you’re running on Google Kubernetes Engine/GCE, setting service.spec.externalTrafficPolicy to Local forces nodes without Service endpoints to remove themselves from the list of nodes eligible for load-balanced traffic by deliberately failing health checks, so there won’t be any traffic drops. This model is great for applications that ingress a lot of external traffic and want to avoid unnecessary network hops to reduce latency. We can also preserve true client IPs, since we no longer need to SNAT traffic from a proxying node. However, the biggest downside to using the “Local” external traffic policy, as mentioned in the Kubernetes docs, is that traffic to your application may be imbalanced.
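For such LoadBalancer Services, Kubernetes allocates a dedicated health-check NodePort that the cloud load balancer probes, and kube-proxy answers it with the local endpoint count. A sketch (the port number and output are illustrative):

master # kubectl get svc frontend -o jsonpath='{.spec.healthCheckNodePort}'
32122
$ curl http://node2:32122/healthz    # a node without local endpoints fails the probe
{"service": {"namespace": "default", "name": "frontend"}, "localEndpoints": 0}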

Kube-Proxy (iptable mode)

The component in Kubernetes that implements ‘Service’ is called kube-proxy. It sits on every node and programs complicated iptables rules to do all kinds of filtering and NAT between pods and services. If you go to a Kubernetes node and type iptables-save, you’ll see the rules inserted by Kubernetes or other programs. The most important chains are KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-*.

  • KUBE-SERVICES is the entry point for service packets. It matches the destination IP:port and dispatches the packet to the corresponding KUBE-SVC-* chain.
  • A KUBE-SVC-* chain acts as a load balancer and distributes packets equally across its KUBE-SEP-* chains. Every KUBE-SVC-* chain has as many KUBE-SEP-* chains as there are endpoints behind it.
  • A KUBE-SEP-* chain represents a Service EndPoint. It simply does DNAT, replacing the service IP:port with the pod endpoint’s IP:port.

For the NAT to be undone on the return path, kube-proxy relies on conntrack, the netfilter connection-tracking subsystem. Each tracked connection is in one of these states:

  • NEW: conntrack knows nothing about this packet yet, which is the case when the SYN packet is received.
  • ESTABLISHED: conntrack knows the packet belongs to an established connection, which is the case once the handshake is complete.
  • RELATED: the packet doesn’t belong to any tracked connection, but it is affiliated with another connection, which is especially useful for protocols like FTP.
  • INVALID: something is wrong with the packet, and conntrack doesn’t know how to deal with it. This state plays a central role in a well-known Kubernetes issue.

As an example, here is what happens when a client pod (1.1.1.10) talks to a service VIP (2.2.2.10) backed by a server pod (1.1.1.20); the addresses are illustrative:

  • the client pod from the left-hand side sends a packet to the service, 2.2.2.10:80
  • the packet goes through the iptables rules on the client node, and the destination is DNATed to the pod IP, 1.1.1.20:80
  • the server pod handles the packet and sends back a reply with destination 1.1.1.10
  • the reply passes back through the client node; conntrack recognizes it and rewrites the source address back to 2.2.2.10:80
  • the client pod receives the response packet
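You can watch conntrack holding this translation on the client node (this needs the conntrack CLI installed; the entry below is illustrative and uses the addresses from the steps above):

node-1$ sudo conntrack -L -d 2.2.2.10
tcp 6 86390 ESTABLISHED src=1.1.1.10 dst=2.2.2.10 sport=41464 dport=80 src=1.1.1.20 dst=1.1.1.10 sport=80 dport=41464 [ASSURED] mark=0 use=1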

iptables

In the Linux operating system, firewalling is taken care of by netfilter, a kernel module that decides which packets are allowed in or out. iptables is just the interface to netfilter. The two might often be thought of as the same thing; a better perspective is to think of netfilter as the backend and iptables as the frontend.

chains

Each chain is responsible for a specific task:

  • PREROUTING: This chain decides what happens to a packet as soon as it arrives at the network interface. We have different options, such as altering the packet (for NAT probably), dropping a packet, or doing nothing at all and letting it slip and be handled elsewhere along the way.
  • INPUT: This is one of the popular chains as it almost always contains strict rules to avoid some evildoers on the internet harming our computer. If you want to open/block a port, this is where you’d do it.
  • FORWARD: This chain is responsible for packet forwarding. Which is what the name suggests. We may want to treat a computer as a router, and this is where some rules might apply to do the job.
  • OUTPUT: This chain is the one responsible for all your web browsing among many others. You can’t send a single packet without this chain allowing it. You have a lot of options, whether you want to allow a port to communicate or not. It’s the best place to limit your outbound traffic if you’re not sure what port each application is communicating through.
  • POSTROUTING: This chain is where packets leave their trace last, before leaving our computer. This is used for routing among many other tasks just to make sure the packets are treated the way we want them to.
For the FORWARD chain to do anything, IP forwarding has to be enabled on the node (Kubernetes requires this anyway):

node-1# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
node-1# cat /proc/sys/net/ipv4/ip_forward
1

tables

We are going to focus on the NAT table, but the following are the available tables.

  • Filter: This is the default table. Here you decide whether a packet is allowed in or out of your computer. If you want to block a port to stop receiving anything, this is your stop.
  • Nat: This is the second most popular table. It is consulted whenever a packet creates a new connection and is responsible for Network Address Translation, rewriting source or destination addresses. The one-liners after this list give an example.
  • Mangle: For specialized packets only. This table is for changing something inside the packet, either before it comes in or before it leaves.
  • Raw: This table deals with packets before the kernel starts tracking their connection state, as the name suggests. It is mainly used to exempt certain packets from connection tracking.
  • Security: It is responsible for securing your computer after the filter table and is used by SELinux. If you’re not familiar with the term, SELinux is a powerful security layer on modern Linux distributions.
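A few illustrative one-liners to make the tables concrete (generic iptables examples, not rules Kubernetes installs):

# filter: block inbound traffic to port 8080
$ sudo iptables -t filter -A INPUT -p tcp --dport 8080 -j DROP
# nat: masquerade pod traffic leaving the node (same idea as the SNAT section above)
$ sudo iptables -t nat -A POSTROUTING -s 10.244.0.0/16 -j MASQUERADE
# mangle: mark packets destined for port 80
$ sudo iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 0x1
# raw: skip connection tracking for VXLAN traffic
$ sudo iptables -t raw -A PREROUTING -p udp --dport 4789 -j NOTRACK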

Please read a dedicated article for more detailed info on iptables.

iptable configuration in Kubernetes

Let’s deploy an nginx application with a replica count of two in minikube and dump the iptables rules.

master # kubectl get svc webapp
NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
webapp   NodePort   10.103.46.104   <none>        80:31380/TCP   3d13h
master # kubectl get ep webapp
NAME     ENDPOINTS                             AGE
webapp   10.244.120.102:80,10.244.120.103:80   3d13h
master #
master # kubectl exec -i -t dnsutils -- nslookup webapp.default
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   webapp.default.svc.cluster.local
Address: 10.103.46.104
$ sudo iptables -t nat -L PREROUTING | column -t
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
cali-PREROUTING all -- anywhere anywhere /* cali:6gwbT8clXdHdC1b1 */
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
$ sudo iptables -t nat -L KUBE-SERVICES | column -t
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.103.46.104 /* default/webapp cluster IP */ tcp dpt:www
KUBE-SVC-2IRACUALRELARSND tcp -- anywhere 10.103.46.104 /* default/webapp cluster IP */ tcp dpt:www
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
$ sudo iptables -t nat -L KUBE-NODEPORTS | column -t
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- anywhere anywhere /* default/webapp */ tcp dpt:31380
KUBE-SVC-2IRACUALRELARSND tcp -- anywhere anywhere /* default/webapp */ tcp dpt:31380
# statistic  mode  random -> Random load-balancing between endpoints.
$ sudo iptables -t nat -L KUBE-SVC-2IRACUALRELARSND | column -t
Chain KUBE-SVC-2IRACUALRELARSND (2 references)
target prot opt source destination
KUBE-SEP-AO6KYGU752IZFEZ4 all -- anywhere anywhere /* default/webapp */ statistic mode random probability 0.50000000000
KUBE-SEP-PJFBSHHDX4VZAOXM all -- anywhere anywhere /* default/webapp */

$ sudo iptables -t nat -L KUBE-SEP-AO6KYGU752IZFEZ4 | column -t
Chain KUBE-SEP-AO6KYGU752IZFEZ4 (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.120.102 anywhere /* default/webapp */
DNAT tcp -- anywhere anywhere /* default/webapp */ tcp to:10.244.120.102:80

$ sudo iptables -t nat -L KUBE-SEP-PJFBSHHDX4VZAOXM | column -t
Chain KUBE-SEP-PJFBSHHDX4VZAOXM (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.120.103 anywhere /* default/webapp */
DNAT tcp -- anywhere anywhere /* default/webapp */ tcp to:10.244.120.103:80

$ sudo iptables -t nat -L KUBE-MARK-MASQ | column -t
Chain KUBE-MARK-MASQ (24 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x4000
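The 0x4000 mark set by KUBE-MARK-MASQ is consumed later in the nat POSTROUTING chain, where marked packets get masqueraded. The exact rules vary with the kube-proxy version; on this cluster it looks roughly like this:

$ sudo iptables -t nat -L KUBE-POSTROUTING | column -t
Chain KUBE-POSTROUTING (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere mark match ! 0x4000/0x4000
MARK all -- anywhere anywhere MARK xor 0x4000
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */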

ClusterIP:

KUBE-SERVICES → KUBE-SVC-XXX → KUBE-SEP-XXX

NodePort:

KUBE-SERVICES → KUBE-NODEPORTS → KUBE-SVC-XXX → KUBE-SEP-XXX

Note: The NodePort service will have a ClusterIP assigned to handle internal and external traffic.

ExternalTrafficPolicy: Local

As discussed before, using “externalTrafficPolicy: Local” preserves the client source IP and drops packets arriving at a node that has no local endpoint. Let’s take a look at the iptables rules on the node with no local endpoint.

master # kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
minikube       Ready    master   6d1h   v1.19.2
minikube-m02   Ready    <none>   85m    v1.19.2
master # kubectl get pods nginx-deployment-7759cc5c66-p45tz -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-7759cc5c66-p45tz   1/1     Running   0          29m   10.244.120.111   minikube   <none>           <none>
master # kubectl get svc webapp -o wide -o jsonpath={.spec.externalTrafficPolicy}
Local
master # kubectl get svc webapp -o wide
NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
webapp   NodePort   10.111.243.62   <none>        80:30080/TCP   29m   app=webserver
$ sudo iptables -t nat -L KUBE-NODEPORTS
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- 127.0.0.0/8 anywhere /* default/webapp */ tcp dpt:30080
KUBE-XLB-2IRACUALRELARSND tcp -- anywhere anywhere /* default/webapp */ tcp dpt:30080

$ sudo iptables -t nat -L KUBE-XLB-2IRACUALRELARSND
Chain KUBE-XLB-2IRACUALRELARSND (1 references)
target prot opt source destination
KUBE-SVC-2IRACUALRELARSND all -- 10.244.0.0/16 anywhere /* Redirect pods trying to reach external loadbalancer VIP to clusterIP */
KUBE-MARK-MASQ all -- anywhere anywhere /* masquerade LOCAL traffic for default/webapp LB IP */ ADDRTYPE match src-type LOCAL
KUBE-SVC-2IRACUALRELARSND all -- anywhere anywhere /* route LOCAL traffic for default/webapp LB IP to service chain */ ADDRTYPE match src-type LOCAL
KUBE-MARK-DROP all -- anywhere anywhere /* default/webapp has no local endpoints */

On the node that does have a local endpoint, the KUBE-XLB chain ends with a balancing rule to the local endpoint instead of a drop:

$ sudo iptables -t nat -L KUBE-NODEPORTS
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- 127.0.0.0/8 anywhere /* default/webapp */ tcp dpt:30080
KUBE-XLB-2IRACUALRELARSND tcp -- anywhere anywhere /* default/webapp */ tcp dpt:30080

$ sudo iptables -t nat -L KUBE-XLB-2IRACUALRELARSND
Chain KUBE-XLB-2IRACUALRELARSND (1 references)
target prot opt source destination
KUBE-SVC-2IRACUALRELARSND all -- 10.244.0.0/16 anywhere /* Redirect pods trying to reach external loadbalancer VIP to clusterIP */
KUBE-MARK-MASQ all -- anywhere anywhere /* masquerade LOCAL traffic for default/webapp LB IP */ ADDRTYPE match src-type LOCAL
KUBE-SVC-2IRACUALRELARSND all -- anywhere anywhere /* route LOCAL traffic for default/webapp LB IP to service chain */ ADDRTYPE match src-type LOCAL
KUBE-SEP-5T4S2ILYSXWY3R2J all -- anywhere anywhere /* Balancing rule 0 for default/webapp */

$ sudo iptables -t nat -L KUBE-SVC-2IRACUALRELARSND
Chain KUBE-SVC-2IRACUALRELARSND (3 references)
target prot opt source destination
KUBE-SEP-5T4S2ILYSXWY3R2J all -- anywhere anywhere /* default/webapp */

Headless Services

(Copied from the Kubernetes documentation.)

With selectors

For headless services that define selectors, the endpoints controller creates Endpoints records in the API, and modifies the DNS configuration to return records (addresses) that point directly to the Pods backing the Service.
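A minimal sketch of such a headless Service for the webapp pods from earlier; the only difference from a regular ClusterIP Service is clusterIP: None:

---
apiVersion: v1
kind: Service
metadata:
  name: webapp-hs
spec:
  clusterIP: None   # makes the Service headless
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80
...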

master # kubectl get svc webapp-hs
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
webapp-hs   ClusterIP   None         <none>        80/TCP    24s
master # kubectl get ep webapp-hs
NAME        ENDPOINTS                             AGE
webapp-hs   10.244.120.109:80,10.244.120.110:80   31s

Without selectors

For headless services that do not define selectors, the endpoints controller does not create Endpoints records. However, the DNS system looks for and configures either:

  • CNAME records for ExternalName-type Services.
  • A records for any Endpoints that share a name with the Service for all other types.
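For the CNAME case, an ExternalName Service looks like the sketch below (the external hostname is illustrative); the cluster DNS answers queries for external-db with a CNAME instead of pod A records:

---
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # illustrative external hostname
...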

Network Policy

By now, you might have an idea of how network policy is implemented in Kubernetes. Yes, it’s iptables again, but this time the CNI takes care of implementing the network policy, not kube-proxy. This section should arguably have been part of Calico (Part 2); however, I feel this is the right place for the network policy details.

With the policy below in place, the frontend pod can reach the backend, but only the backend (which carries the networking/allow-db-access: "true" label) can reach the db:

master # kubectl exec -it frontend-8b474f47-zdqdv -- /bin/sh
# curl backend
backend-867fd6dff-mjf92
# curl db
curl: (7) Failed to connect to db port 80: Connection timed out
master # kubectl exec -it backend-867fd6dff-mjf92 -- /bin/sh
# curl db
db-8d66ff5f7-bp6kf
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-access
spec:
  podSelector:
    matchLabels:
      app: "db"
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          networking/allow-db-access: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
        networking/allow-db-access: "true"
    spec:
      volumes:
      - name: workdir
        emptyDir: {}
      containers:
      - name: nginx
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
      initContainers:
      - name: install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', "echo $HOSTNAME > /work-dir/index.html"]
        volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
...
master # calicoctl get networkPolicy --output yaml
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: NetworkPolicy
  metadata:
    creationTimestamp: "2020-11-05T05:26:27Z"
    name: knp.default.allow-db-access
    namespace: default
    resourceVersion: /53872
    uid: 1b3eb093-b1a8-4429-a77d-a9a054a6ae90
  spec:
    ingress:
    - action: Allow
      destination: {}
      source:
        selector: projectcalico.org/orchestrator == 'k8s' && networking/allow-db-access
          == 'true'
    order: 1000
    selector: projectcalico.org/orchestrator == 'k8s' && app == 'db'
    types:
    - Ingress
kind: NetworkPolicyList
metadata:
  resourceVersion: 56821/56821
master # calicoctl get workloadEndpoint
WORKLOAD                  NODE       NETWORKS        INTERFACE
backend-867fd6dff-mjf92   minikube   10.88.0.27/32   cali2b1490aa46a
db-8d66ff5f7-bp6kf        minikube   10.88.0.26/32   cali95aa86cbb2a
frontend-8b474f47-zdqdv   minikube   10.88.0.24/32   cali505cfbeac50
$ sudo iptables-save | grep cali95aa86cbb2a
:cali-fw-cali95aa86cbb2a - [0:0]
:cali-tw-cali95aa86cbb2a - [0:0]
-A cali-from-wl-dispatch -i cali95aa86cbb2a -m comment --comment "cali:R489GtivXlno-SCP" -g cali-fw-cali95aa86cbb2a
-A cali-fw-cali95aa86cbb2a -m comment --comment "cali:3XN24uu3MS3PMvfM" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali95aa86cbb2a -m comment --comment "cali:xyfc0rlfldUi6JAS" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali95aa86cbb2a -m comment --comment "cali:wG4_76ot8e_QgXek" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-cali95aa86cbb2a -p udp -m comment --comment "cali:Ze6pH1ZM5N1pe76G" -m comment --comment "Drop VXLAN encapped packets originating in pods" -m multiport --dports 4789 -j DROP
-A cali-fw-cali95aa86cbb2a -p ipencap -m comment --comment "cali:3bjax7tRUEJ2Uzew" -m comment --comment "Drop IPinIP encapped packets originating in pods" -j DROP
-A cali-fw-cali95aa86cbb2a -m comment --comment "cali:0pCFB_VsKq1qUOGl" -j cali-pro-kns.default
-A cali-fw-cali95aa86cbb2a -m comment --comment "cali:mbgUOxlInVlwb2Ie" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali95aa86cbb2a -m comment --comment "cali:I7GVOQegh6Wd9EMv" -j cali-pro-ksa.default.default
-A cali-fw-cali95aa86cbb2a -m comment --comment "cali:g5ViWVLiyVrKX91C" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali95aa86cbb2a -m comment --comment "cali:RBmQDo38EoPmxJ0I" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-to-wl-dispatch -o cali95aa86cbb2a -m comment --comment "cali:v3sEoNToLYUOg7M6" -g cali-tw-cali95aa86cbb2a
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:eCrqwxNk3cKw9Eq6" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:_krp5nzavhAu5avJ" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:Cu-tVtfKKu413YTT" -j MARK --set-xmark 0x0/0x10000
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:leBL64hpAXM9y4nk" -m comment --comment "Start of policies" -j MARK --set-xmark 0x0/0x20000
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:pm-LK-c1ra31tRwz" -m mark --mark 0x0/0x20000 -j cali-pi-_tTE-E7yY40ogArNVgKt
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:q_zG8dAujKUIBe0Q" -m comment --comment "Return if policy accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:FUDVBYh1Yr6tVRgq" -m comment --comment "Drop if no policies passed packet" -m mark --mark 0x0/0x20000 -j DROP
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:X19Z-Pa0qidaNsMH" -j cali-pri-kns.default
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:Ljj0xNidsduxDGUb" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:0z9RRvvZI9Gud0Wv" -j cali-pri-ksa.default.default
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:pNCpK-SOYelSULC1" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-cali95aa86cbb2a -m comment --comment "cali:sMkvrxvxj13WlTMK" -m comment --comment "Drop if no profiles matched" -j DROP
$ sudo iptables-save -t filter | grep cali-pi-_tTE-E7yY40ogArNVgKt
:cali-pi-_tTE-E7yY40ogArNVgKt - [0:0]
-A cali-pi-_tTE-E7yY40ogArNVgKt -m comment --comment "cali:M4Und37HGrw6jUk8" -m set --match-set cali40s:LrVD8vMIGQDyv8Y7sPFB1Ge src -j MARK --set-xmark 0x10000/0x10000
-A cali-pi-_tTE-E7yY40ogArNVgKt -m comment --comment "cali:sEnlfZagUFRSPRoe" -m mark --mark 0x10000/0x10000 -j RETURN
The match-set in the policy chain is an ipset whose only member is the backend pod’s IP (10.88.0.27, matching the workloadEndpoint listing above); this is how the allow rule is scoped to pods labeled networking/allow-db-access: "true":

[root@minikube /]# ipset list
Name: cali40s:LrVD8vMIGQDyv8Y7sPFB1Ge
Type: hash:net
Revision: 6
Header: family inet hashsize 1024 maxelem 1048576
Size in memory: 408
References: 3
Number of entries: 1
Members:
10.88.0.27

References:

https://kubernetes.io
https://www.projectcalico.org/
https://rancher.com/
http://www.netfilter.org/
