Chapter 1. The Lack of Visibility
Kubernetes has become the de facto cloud operating system, and every day more and more critical applications are containerized and shifted to a cloud native landscape. This means Kubernetes is quickly becoming a rich target for both passive and targeted attackers. Kubernetes does not provide a default security configuration and provides no observability to discern if your pods or cluster has been attacked or compromised.
Understanding your security posture isn't just about applying security configuration and hoping for the best. Hope isn't a strategy. Just like the site reliability engineering (SRE) principle of service level objectives (SLOs) that identify an objective metric to represent a property of a system, security observability gives you objective measurements of the security of your running workloads, rather than hope.
Achieving observability in a cloud native environment can be complicated. It often requires changes to applications or the management of yet another complex distributed system. However, eBPF provides a lightweight methodology to collect security observability natively in the kernel, without any changes to applications.
What Should We Monitor?
Kubernetes is constructed of several independent microservices that run the control plane (API server, controller manager, scheduler) and worker node components (kubelet, kube-proxy, container runtime). A cloud native deployment adds a slew of further components, including continuous integration/continuous delivery (CI/CD), storage subsystems, container registries, observability tooling (including eBPF), and many more.
Most of the systems that make up a cloud native deployment are not equally likely to be attacked, so we should focus our observability where an attack is most probable. For example, an internet-exposed pod that handles untrusted input is a much more likely attack vector than a control plane component on a private network with a hardened RBAC (role-based access control) configuration.
While container images are immutable, containers and pods are standard Linux processes that can have access to a set of binaries, package managers, interpreters, runtimes, etc. Pods can install packages, download tools, make internet connections, and cause all sorts of havoc in a Kubernetes environment, all without logging any of that behavior by default. There's also the challenge of applying a least-privilege configuration to our workloads by granting only the capabilities a container requires. Security observability monitors containers and can quickly identify and record all the capabilities a container requires, and nothing more. This means we should start by applying our security observability to pods.
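Once observability has told us which capabilities a workload actually uses, that finding translates directly into a least-privilege pod spec. The following is a minimal sketch of what that might look like; the pod name, image, and the specific capability granted back are illustrative assumptions, not taken from the text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                        # hypothetical workload name
spec:
  containers:
  - name: web
    image: nginx:1.25              # illustrative image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]              # start from zero privileges
        add: ["NET_BIND_SERVICE"]  # grant back only what observation showed the container needs
```

The point of the pattern is the order of operations: drop everything, then let recorded behavior, not guesswork, decide what goes in the `add` list.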
Most organizations that predate the cloud native era have existing security/detection tooling for their environments. So, why not just rely on those tools for cloud native security observability? Most legacy security tools don't understand kernel namespaces, so they can't identify containerized processes. Existing network logs and firewalls are suboptimal for observability because pod IP addresses are ephemeral: as pods come and go, an IP address can be reused by an entirely different app by the time you investigate. eBPF security observability natively understands container attributes and provides process and network visibility that's closer to the pods we're monitoring, so we can detect events pre-NAT (network address translation), retain the IP of the pod, and understand the container or pod that initiated an action.
High-Fidelity Observability
When investigating a threat, the closer to the event the data is, the higher fidelity the data provides. A compromised pod that escalates its privileges and laterally moves through the network won't show up in our Kubernetes audit logs. If the pods are on the same host, the lateral movement won't even show up in our network logs. If our greatest attack surface is pods, we'll want our security observability as close to pods as possible. The further out we place our observability, the less critical security context we're afforded. For example, firewall or network intrusion detection logs generally map events to the source IP address of the node that the offending pod resides on, because packet encapsulation renders the identity of the source pod meaningless.
The same lateral movement event can be measured at the virtual ethernet (veth) interface of the pod or the physical network interface of the node. Measuring at the pod's veth interface includes the pre-NAT pod IP address and, with the help of eBPF, we can retrieve Kubernetes labels, namespaces, pod names, etc. We are improving our event fidelity.
But if we wanted to get even closer to pods, eBPF operates in-kernel, where process requests are captured. There we can assert a more meaningful identity of lateral movement than a network packet provides: the process that invoked the connection, any arguments, and the capabilities it's running with. Or we can collect process events that never create a packet at all.
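As a taste of what in-kernel visibility looks like, the bpftrace one-liner below attaches to the kernel's tcp_connect function and prints the process identity behind every outbound TCP connection, context a packet capture alone cannot give you. This is an illustrative sketch, not the book's tooling; it requires root and a kernel with bpftrace support, so it is shown here uninstrumented:

```shell
# Print the process (comm), PID, and UID behind every TCP connection attempt.
# comm, pid, and uid are bpftrace builtins; tcp_connect is a kernel symbol.
bpftrace -e 'kprobe:tcp_connect { printf("connect by %s (pid %d, uid %d)\n", comm, pid, uid); }'
```

Note that nothing in the traced application had to change: the kernel is the single vantage point every process passes through, which is exactly the property that makes eBPF-based security observability lightweight.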