References
- VPC CNI network policy best practices: https://aws.github.io/aws-eks-best-practices/security/docs/network/#additional-resources
- VPC CNI network policy FAQ: https://github.com/aws/amazon-vpc-cni-k8s/blob/0703d03dec8afb8f83a7ff0c9d5eb5cc3363026e/docs/network-policy-faq.md
The containers in the aws-cni plugin pod are as follows.
aws-vpc-cni-init
Initialization is handled first by the aws-vpc-cni-init container, a one-shot task that does the following:
- Queries IMDS for the MAC address, then uses the MAC to find the primary network interface (see the IMDS sketch below)
- Configures the primary interface's parameters
- Configures IPv6-related settings
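The MAC-to-primary-interface lookup can be reproduced by hand from a node via IMDS (IMDSv2 token flow shown; the init container does the equivalent internally):
# Fetch an IMDSv2 token, then the instance's primary MAC address
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
MAC=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/mac)
# Find the host interface with that MAC (the primary ENI)
ip -o link | grep -i "$MAC"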
aws-node
The aws-node container's entrypoint is the aws-vpc-cni script. kubelet considers a CNI plugin ready once the CNI binary is installed to /opt/cni/bin and its config file is in place; for vpc-cni, readiness therefore covers two parts:
- the CNI binary and config file
- the aws-k8s-agent daemon (the IPAM controller, ipamd)
vpc-cni makes sure the aws-k8s-agent daemon (ipamd) is running before it copies the CNI binary: the aws-vpc-cni entrypoint starts ipamd and then waits on it, and that blocking keeps the aws-node container running, while ipamd itself runs on every node as part of the DaemonSet.
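A minimal sketch of that ordering, assuming illustrative file names rather than the actual entrypoint script:
# Sketch of the aws-vpc-cni entrypoint ordering (illustrative, not the real script)
/app/aws-k8s-agent &                          # start ipamd in the background
until /app/grpc-health-probe -addr=:50051; do # wait until ipamd answers health checks
  sleep 1
done
cp /app/aws-cni /opt/cni/bin/                 # only then copy the CNI binary...
cp /app/10-aws.conflist /etc/cni/net.d/       # ...and its config, so kubelet sees CNI as ready
wait                                          # block so the aws-node container keeps running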
The aws-node container's health and readiness are continuously checked with grpc-health-probe:
exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s]
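The same probe can be run by hand against a running aws-node pod (the pod name is a placeholder):
# Run the readiness probe manually inside an aws-node pod (pod name is a placeholder)
$ kubectl -n kube-system exec aws-node-xxxxx -c aws-node -- \
    /app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s
status: SERVING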
In addition, ipamd exposes metrics under the /metrics path on port 61678; installing cni-metrics-helper publishes these metrics to CloudWatch.
# 192.168.2.119 is the aws-node pod address
$ curl 192.168.2.119:61678/metrics
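For example, the IP pool gauges are exported with the awscni_ prefix (exact metric names vary across CNI versions, so treat the pattern below as illustrative):
# Filter the IP allocation gauges (metric names are version-dependent)
$ curl -s 192.168.2.119:61678/metrics | grep -E 'awscni_(assigned_ip|total_ip|eni)'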
The introspection endpoints ipamd exposes also report the current IP assignment state:
// get enis info
# curl http://localhost:61679/v1/enis | python -m json.tool
// get IP assignment info
# curl http://localhost:61679/v1/pods | python -m json.tool
Other introspection paths include:
func (c *IPAMContext) setupIntrospectionServer() *http.Server {
	serverFunctions := map[string]func(w http.ResponseWriter, r *http.Request){
		"/v1/enis":                      eniV1RequestHandler(c),
		"/v1/eni-configs":               eniConfigRequestHandler(c),
		"/v1/networkutils-env-settings": networkEnvV1RequestHandler(),
		"/v1/ipamd-env-settings":        ipamdEnvV1RequestHandler(),
	}
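The remaining paths in that handler map can be queried the same way:
# Dump ENIConfig and environment settings from the introspection server
# (paths taken from the handler map above)
$ curl -s http://localhost:61679/v1/eni-configs | python -m json.tool
$ curl -s http://localhost:61679/v1/networkutils-env-settings | python -m json.tool
$ curl -s http://localhost:61679/v1/ipamd-env-settings | python -m json.tool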
routed-eni-cni-plugin
When kubelet actually invokes the CNI to set up a pod's network, the binary executed is routed-eni-cni-plugin. It talks to both ipamd and the network policy agent over gRPC; its log is /var/log/aws-routed-eni/plugin.log.
ipamdAddress = "127.0.0.1:50051"
npAgentAddress = "127.0.0.1:50052"
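On a node, both listeners can be confirmed with ss (the addresses come from the constants above):
# Confirm ipamd (127.0.0.1:50051) and the network policy agent (127.0.0.1:50052) are listening
$ sudo ss -lntp | grep -E ':5005[12]'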
The CNI binary interacts with ipamd mainly through two methods (see https://github.com/aws/amazon-vpc-cni-k8s/blob/0703d03dec8afb8f83a7ff0c9d5eb5cc3363026e/docs/cni-proposal.md):
// amazon-vpc-cni-k8s/cmd/routed-eni-cni-plugin/cni.go
type CNIBackendClient interface {
	AddNetwork(ctx context.Context, in *AddNetworkRequest, opts ...grpc.CallOption) (*AddNetworkReply, error)
	DelNetwork(ctx context.Context, in *DelNetworkRequest, opts ...grpc.CallOption) (*DelNetworkReply, error)
}

// For example, the main logic for allocating an IP address:
conn, err := grpcClient.Dial(ipamdAddress, grpc.WithTransportCredentials(insecure.NewCredentials()))
c := rpcClient.NewCNIBackendClient(conn)
r, err := c.AddNetwork(context.Background(),...)
In strict network policy mode, the CNI binary also talks to np-node-agent to enforce network policy for the new pod:
if utils.IsStrictMode(r.NetworkPolicyMode) {
	// Set up a connection to the network policy agent
	npConn, err := grpcClient.Dial(npAgentAddress, grpc.WithTransportCredentials(insecure.NewCredentials()))
	npc := rpcClient.NewNPBackendClient(npConn)
	npr, err := npc.EnforceNpToPod(context.Background(),
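Whether strict mode is in effect is configured on the aws-node DaemonSet; per the EKS documentation the NETWORK_POLICY_ENFORCING_MODE environment variable selects standard or strict mode (treat the variable name as an assumption to verify against your CNI version):
# Check the enforcing mode on the aws-node DaemonSet
# (NETWORK_POLICY_ENFORCING_MODE per EKS docs; defaults to standard when unset)
$ kubectl -n kube-system get ds aws-node -o yaml | grep -A1 NETWORK_POLICY_ENFORCING_MODE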
np-node-agent
Network policy involves the following components:
- Network Policy Controller: automatically installed on the EKS control plane when you create a new Amazon EKS cluster. It actively monitors the creation of network policies within the cluster and reconciles policy endpoints; it then instructs the node agent to create or update eBPF programs on the node by publishing pod information through the policy endpoints.
- Network Policy Node Agent: implements network policies on nodes by creating eBPF programs.
- AWS eBPF SDK for Go: provides an interface to interact with the eBPF programs on a node.
- VPC Resource Controller: manages branch and trunk network interfaces for Kubernetes pods.
When a network-policy operation occurs (for example, creating a NetworkPolicy resource), np-controller parses the configured policy and publishes the resolved endpoints through a custom CRD, PolicyEndpoints (installed automatically along with np-controller). np-node-agent picks up the PolicyEndpoint resources and enforces the policy via eBPF probes attached to the pod's primary veth interface. The corresponding log is /var/log/aws-routed-eni/network-policy-agent.log.
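The published endpoints can be inspected directly; assuming the CRD's API group is networking.k8s.aws (as in current releases):
# List PolicyEndpoint resources published by the network policy controller
$ kubectl get policyendpoints.networking.k8s.aws -A
# Inspect the resolved pod/rule info for one of them (name is a placeholder)
$ kubectl -n default describe policyendpoints my-policy-abcde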
np-node-agent mounts the host's /sys/fs/bpf filesystem at the bpf-pin-path volume:
/sys/fs/bpf from bpf-pin-path (rw)
np-node-agent's default startup arguments are:
Args:
  --enable-ipv6=false
  --enable-network-policy=false
  --enable-cloudwatch-logs=false
  --enable-policy-event-logs=false
  --log-file=/var/log/aws-routed-eni/network-policy-agent.log
  --metrics-bind-addr=:8162
  --health-probe-bind-addr=:8163
  --conntrack-cache-cleanup-period=300
The main steps in np-node-agent's startup log are as follows (they can be cross-checked on the node; see the check after this list):
- wait for the controller to become ready
- set the conntrack cleanup interval (ConntrackTTL cleanupPeriod) to 300
- verify and install the BPF programs into /opt/cni/bin/:
$ ls -al /opt/cni/bin | grep bpf
-rw-r--r-- 1 root root 27136 Oct 23 10:51 tc.v4egress.bpf.o
-rw-r--r-- 1 root root 27360 Oct 23 10:51 tc.v4ingress.bpf.o
-rw-r--r-- 1 root root  3384 Oct 23 10:51 v4events.bpf.o
- install the aws-eks-na-cli command into /opt/cni/bin/
- set up the global default maps and load the probes
- initialize the conntrack client
- start the RPC server listening on 127.0.0.1:50052
- start the metrics server
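These steps can be verified on the node against the ports from the startup arguments above:
# RPC server (50052), metrics server (:8162), and health probe (:8163), per the args above
$ sudo ss -lntp | grep -E ':(50052|8162|8163)'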
When it receives an instruction from np-controller, np-node-agent starts setting up network policy for the pod:
- if network policy is not enabled, it returns success immediately
// EnforceNpToPod processes CNI Enforce NP network request
func (s *server) EnforceNpToPod(ctx context.Context, in *rpc.EnforceNpRequest) (*rpc.EnforceNpReply, error) {
	if s.policyReconciler.GeteBPFClient() == nil {
		s.log.Info("Network policy is disabled, returning success")
- pod replicas on the same node share the same eBPF firewall maps
if s.policyReconciler.ArePoliciesAvailableInLocalCache(podIdentifier) && isMapUpdateRequired {
	// Derive Ingress and Egress Firewall Rules and Update the relevant eBPF maps
	ingressRules, egressRules, _ := s.policyReconciler.DeriveFireWallRulesPerPodIdentifier(podIdentifier, in.K8S_POD_NAMESPACE)
	err = s.policyReconciler.GeteBPFClient().UpdateEbpfMaps(podIdentifier, ingressRules, egressRules)
Because the vpc-cni plugin installs the eBPF SDK collection on each node, the eBPF SDK tool (aws-eks-na-cli) can be used to troubleshoot network policy issues. The aws-eks-na-cli command is installed on the node only when --enable-network-policy=true is set:
$ sudo /opt/cni/bin/aws-eks-na-cli ebpf -h
Dump all ebpf related data

Usage:
  aws-eks-na-cli ebpf [flags]
  aws-eks-na-cli ebpf [command]

Aliases:
  ebpf, ebpf

Available Commands:
  dump-maps        Dump all ebpf maps related data
  loaded-ebpfdata  Dump all ebpf related data
  maps             Dump all ebpf maps related data
  progs            Dump all ebpf program related data
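For example, the subcommands listed in that help output can dump the programs and maps currently loaded (output format depends on the agent/SDK version):
# List loaded eBPF programs and maps (subcommands taken from the help output above)
$ sudo /opt/cni/bin/aws-eks-na-cli ebpf progs
$ sudo /opt/cni/bin/aws-eks-na-cli ebpf maps
# Per-pod view of attached programs and their maps
$ sudo /opt/cni/bin/aws-eks-na-cli ebpf loaded-ebpfdata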
Specific usage involves the eBPF SDK's components and API calls, which warrant further study.