Introduction
The kubelet is easy to spot in the official architecture diagram. Running kubelet -h prints a description of what it does:
- The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of the following: the host's hostname; a flag that overrides the hostname; or logic specific to a cloud provider.
- The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a pod. The kubelet takes a set of PodSpecs provided through various mechanisms (primarily the apiserver) and ensures that the containers described in them are running and healthy.
Besides PodSpecs from the apiserver, they can also be provided via:
- a file
- an HTTP endpoint
- an HTTP server
In short, the kubelet's job boils down to reporting Node information and managing (creating and destroying) Pods. That sounds simple but is anything but: each part deserves lengthy treatment of its own. For example, a Node's compute resources cover not only the traditional CPU, memory, and disk, but can also be extended to resources such as GPUs; a Pod involves not just containers but also networking, security policies, and more.
Architecture
The kubelet is built from a great many components. Here is a brief introduction to the more important ones:
PLEG
Short for Pod Lifecycle Event Generator: it generates pod lifecycle events (ContainerStarted, ContainerDied, ContainerRemoved, ContainerChanged).
It maintains a pod cache: it periodically queries the ContainerRuntime for pod information, compares it against the cached state to generate the events above, and writes the events into a channel that it owns.
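The event types and the struct that carries them, abridged from pkg/kubelet/pleg/pleg.go:

// PodLifeCycleEventType defines the event type of pod life cycle events.
type PodLifeCycleEventType string

const (
    // ContainerStarted - the new state of the container is running.
    ContainerStarted PodLifeCycleEventType = "ContainerStarted"
    // ContainerDied - the new state of the container is exited.
    ContainerDied PodLifeCycleEventType = "ContainerDied"
    // ContainerRemoved - the old state of the container is exited.
    ContainerRemoved PodLifeCycleEventType = "ContainerRemoved"
    // ContainerChanged - the new state of the container is unknown.
    ContainerChanged PodLifeCycleEventType = "ContainerChanged"
)

// PodLifecycleEvent is an event that reflects the change of the pod state.
type PodLifecycleEvent struct {
    ID   types.UID             // the pod ID
    Type PodLifeCycleEventType // the type of the event
    Data interface{}           // accompanying data, varies by event type
}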
PodWorkers
Handles synchronization for the pods referenced by events. Its core loop, podWorkerLoop() (named managePodLoop() in earlier releases), indirectly calls Kubelet.SyncPod() to complete the sync:
- If the pod is being created, record its startup latency.
- Generate the pod's API status, i.e. v1.PodStatus, by converting the runtime status into an API status.
- Record how long the pod takes to go from pending to running.
- Update the pod's status in the StatusManager.
- Kill pods that should not be running.
- If the network plugin is not ready, only start pods that use the host network.
- If a static pod has no mirror pod yet, create its mirror pod.
- Create the pod's filesystem directories: the pod directory, the volume directory, and the plugin directory.
- Mount the pod's volumes via the VolumeManager.
- Fetch the image pull secrets.
- Call the container runtime's SyncPod() method.
PodManager
Stores the desired state of pods: the pods that the kubelet serves from its different sources (apiserver, file, HTTP).
StatsProvider
Provides node and container statistics; it has two implementations, cAdvisor and CRI.
ContainerRuntime
As the name suggests, the container runtime. It interacts with a high-level container runtime that implements the CRI spec.
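That interaction goes through the CRI gRPC API. Below is a heavily abridged sketch of the RuntimeService interface from k8s.io/cri-api that the kubelet programs against — the method names are real, the signatures are simplified here, and the full interface also covers exec/attach, images, stats, and more:

type RuntimeService interface {
    // sandbox lifecycle
    RunPodSandbox(ctx context.Context, config *PodSandboxConfig, runtimeHandler string) (string, error)
    StopPodSandbox(ctx context.Context, podSandboxID string) error
    RemovePodSandbox(ctx context.Context, podSandboxID string) error
    // container lifecycle inside a sandbox
    CreateContainer(ctx context.Context, podSandboxID string, config *ContainerConfig, sandboxConfig *PodSandboxConfig) (string, error)
    StartContainer(ctx context.Context, containerID string) error
    StopContainer(ctx context.Context, containerID string, timeout int64) error
    RemoveContainer(ctx context.Context, containerID string) error
}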
Pod Lifecycle Event Generator (PLEG)
For pods, the kubelet watches for changes to the pod spec from multiple sources (api, file, and http). For containers, the kubelet periodically polls the container runtime for the latest state of all containers. As the number of pods and containers grows, polling becomes expensive: the periodic bursts of concurrent requests cause high CPU usage spikes, degrade node performance, and thereby reduce node reliability. To lower the cost of pod management and improve the kubelet's performance and scalability, the PLEG (Pod Lifecycle Event Generator) was introduced. It improved on the previous approach by:
- Reducing unnecessary work during idle periods (e.g. when pod definitions and container states have not changed).
- Reducing the number of concurrent requests made to fetch container state.
To cut the overhead further, the community later introduced an event-based (evented) PLEG, which requires support from the CRI runtime.
The PLEG periodically checks the state of the pods running on the node; when it sees a change it cares about, it wraps that change into an Event and sends it to syncLoop, the kubelet's main synchronization loop, for processing.
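A minimal, self-contained sketch of the polling ("relist") idea — the types here are simplified stand-ins for the kubelet's internal ones, not the real implementation:

package main

import "fmt"

type containerState string

type event struct {
    podID string
    typ   string // "ContainerStarted", "ContainerDied", ...
}

// relist diffs the previous snapshot of container states against the
// current one and emits an event for every observed transition.
func relist(old, current map[string]containerState, podID string, ch chan<- event) {
    for id, state := range current {
        if old[id] == state {
            continue // no change, no event: idle pods cost nothing
        }
        switch state {
        case "running":
            ch <- event{podID, "ContainerStarted"}
        case "exited":
            ch <- event{podID, "ContainerDied"}
        }
    }
}

func main() {
    ch := make(chan event, 8)
    relist(
        map[string]containerState{"c1": "created"},
        map[string]containerState{"c1": "running"},
        "pod-1", ch,
    )
    close(ch)
    for e := range ch {
        fmt.Printf("%s: %s\n", e.podID, e.typ) // pod-1: ContainerStarted
    }
}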
syncLoop
After startup, the kubelet enters its main loop, syncLoop, to process pod-change events on the node. Events from the three sources — file, apiserver, and http — are merged into the kubetypes.PodUpdate channel (the config channel), from which syncLoopIteration continuously consumes.
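The unit flowing through that channel is kubetypes.PodUpdate, abridged from pkg/kubelet/types/pod_update.go:

// PodUpdate defines an operation sent on the channel.
type PodUpdate struct {
    Pods   []*v1.Pod
    Op     PodOperation // ADD, UPDATE, REMOVE, RECONCILE, DELETE, SET
    Source string       // the source of the update, e.g. "api", "file" or "http"
}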
Source Code Analysis
Source version
origin/release-1.30
Startup flow
Entry point
The entry point lives in cmd/kubelet/app/server.go.
Inside RunKubelet(), startKubelet is the function that actually starts the kubelet.
startKubelet
func startKubelet(k kubelet.Bootstrap, podCfg *config.PodConfig, kubeCfg *kubeletconfiginternal.KubeletConfiguration, kubeDeps *kubelet.Dependencies, enableServer bool) {
    // start the kubelet
    go k.Run(podCfg.Updates())

    // start the kubelet server
    if enableServer {
        go k.ListenAndServe(kubeCfg, kubeDeps.TLSOptions, kubeDeps.Auth, kubeDeps.TracerProvider)
    }
    if kubeCfg.ReadOnlyPort > 0 {
        go k.ListenAndServeReadOnly(netutils.ParseIPSloppy(kubeCfg.Address), uint(kubeCfg.ReadOnlyPort))
    }
    go k.ListenAndServePodResources()
}
Jumping to pkg/kubelet/kubelet.go — this is where the kubelet's startup code lives.
kubelet.Run
// Run starts the kubelet reacting to config updates
func (kl *Kubelet) Run(updates <-chan kubetypes.PodUpdate) {
    // ...
    // Start the cloud provider sync manager
    if kl.cloudResourceSyncManager != nil {
        go kl.cloudResourceSyncManager.Run(wait.NeverStop)
    }

    // initialize internal modules, e.g. image management and OOM watching
    if err := kl.initializeModules(); err != nil {
        kl.recorder.Eventf(kl.nodeRef, v1.EventTypeWarning, events.KubeletSetupFailed, err.Error())
        klog.ErrorS(err, "Failed to initialize internal modules")
        os.Exit(1)
    }

    // Start volume manager
    go kl.volumeManager.Run(kl.sourcesReady, wait.NeverStop)

    // ...

    // Start the pod lifecycle event generator to obtain runtime events.
    kl.pleg.Start()

    // Start eventedPLEG only if EventedPLEG feature gate is enabled.
    if utilfeature.DefaultFeatureGate.Enabled(features.EventedPLEG) {
        kl.eventedPleg.Start()
    }

    // Start the main loop (kl.syncLoopIteration() underneath): listen for
    // events from the file/apiserver/http sources and sync accordingly.
    kl.syncLoop(ctx, updates, kl)
}
Event watching
syncLoop
// syncLoop is the main loop for processing changes. It watches for changes from
// three channels (file, apiserver, and http) and creates a union of them. For
// any new change seen, will run a sync against desired state and running state. If
// no changes are seen to the configuration, will synchronize the last known desired
// state every sync-frequency seconds. Never returns.
func (kl *Kubelet) syncLoop(ctx context.Context, updates <-chan kubetypes.PodUpdate, handler SyncHandler) {
    // ...
    for {
        if err := kl.runtimeState.runtimeErrors(); err != nil {
            klog.ErrorS(err, "Skipping pod synchronization")
            // exponential backoff
            time.Sleep(duration)
            duration = time.Duration(math.Min(float64(max), factor*float64(duration)))
            continue
        }
        // reset backoff if we have a success
        duration = base

        kl.syncLoopMonitor.Store(kl.clock.Now())
        // updates is the aggregated config channel; handler is the kubelet
        // object, which implements the SyncHandler interface.
        // plegCh carries change events from the container runtime.
        if !kl.syncLoopIteration(ctx, updates, handler, syncTicker.C, housekeepingTicker.C, plegCh) {
            break
        }
        kl.syncLoopMonitor.Store(kl.clock.Now())
    }
}
The updates parameter of syncLoop is the aggregated configuration channel defined in the following type:
// PodConfig is a configuration mux that merges many sources of pod configuration into a single
// consistent structure, and then delivers incremental change notifications to listeners
// in order.
type PodConfig struct {
    pods *podStorage
    mux  *mux

    // the channel of denormalized changes passed to listeners
    updates chan kubetypes.PodUpdate

    // contains the list of all configured sources
    sourcesLock sync.Mutex
    sources     sets.String
}
The handler argument is the kubelet object itself, which implements the SyncHandler interface.
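Abridged from pkg/kubelet/kubelet.go — the method set matches the handlers dispatched in syncLoopIteration below:

// SyncHandler is an interface implemented by Kubelet.
type SyncHandler interface {
    HandlePodAdditions(pods []*v1.Pod)
    HandlePodUpdates(pods []*v1.Pod)
    HandlePodRemoves(pods []*v1.Pod)
    HandlePodReconcile(pods []*v1.Pod)
    HandlePodSyncs(pods []*v1.Pod)
    HandlePodCleanups(ctx context.Context) error
}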
syncLoopIteration
func (kl *Kubelet) syncLoopIteration(ctx context.Context, configCh <-chan kubetypes.PodUpdate, handler SyncHandler,
    syncCh <-chan time.Time, housekeepingCh <-chan time.Time, plegCh <-chan *pleg.PodLifecycleEvent) bool {
    select {
    // events from the config channel
    case u, open := <-configCh:
        // Update from a config source; dispatch it to the right handler
        // callback.
        if !open {
            klog.ErrorS(nil, "Update channel is closed, exiting the sync loop")
            return false
        }

        // dispatch on the operation carried by the pod config
        switch u.Op {
        // the ADD operation is the one we focus on below
        case kubetypes.ADD:
            klog.V(2).InfoS("SyncLoop ADD", "source", u.Source, "pods", klog.KObjSlice(u.Pods))
            // After restarting, kubelet will get all existing pods through
            // ADD as if they are new pods. These pods will then go through the
            // admission process and *may* be rejected. This can be resolved
            // once we have checkpointing.
            handler.HandlePodAdditions(u.Pods)
        case kubetypes.UPDATE:
            klog.V(2).InfoS("SyncLoop UPDATE", "source", u.Source, "pods", klog.KObjSlice(u.Pods))
            handler.HandlePodUpdates(u.Pods)
        case kubetypes.REMOVE:
            klog.V(2).InfoS("SyncLoop REMOVE", "source", u.Source, "pods", klog.KObjSlice(u.Pods))
            handler.HandlePodRemoves(u.Pods)
        case kubetypes.RECONCILE:
            klog.V(4).InfoS("SyncLoop RECONCILE", "source", u.Source, "pods", klog.KObjSlice(u.Pods))
            handler.HandlePodReconcile(u.Pods)
        case kubetypes.DELETE:
            klog.V(2).InfoS("SyncLoop DELETE", "source", u.Source, "pods", klog.KObjSlice(u.Pods))
            // DELETE is treated as a UPDATE because of graceful deletion.
            handler.HandlePodUpdates(u.Pods)
        case kubetypes.SET:
            // TODO: Do we want to support this?
            klog.ErrorS(nil, "Kubelet does not support snapshot update")
        default:
            klog.ErrorS(nil, "Invalid operation type received", "operation", u.Op)
        }

        kl.sourcesReady.AddSource(u.Source)

    // events from the PLEG channel
    case e := <-plegCh:
        if isSyncPodWorthy(e) {
            // PLEG event for a pod; sync it.
            if pod, ok := kl.podManager.GetPodByUID(e.ID); ok {
                klog.V(2).InfoS("SyncLoop (PLEG): event for pod", "pod", klog.KObj(pod), "event", e)
                // sync the latest container state reported by the runtime
                handler.HandlePodSyncs([]*v1.Pod{pod})
            } else {
                // If the pod no longer exists, ignore the event.
                klog.V(4).InfoS("SyncLoop (PLEG): pod does not exist, ignore irrelevant event", "event", e)
            }
        }

        if e.Type == pleg.ContainerDied {
            if containerID, ok := e.Data.(string); ok {
                kl.cleanUpContainersInPod(e.ID, containerID)
            }
        }
        // ...
    }
    return true
}
Pod creation
Container creation
HandlePodAdditions
We focus on the ADD operation.
kubetypes.ADD
kubetypes.ADD corresponds to pod creation: it invokes the handler.HandlePodAdditions(u.Pods) callback to respond to the creation. Let's look at how a pod is actually created.
- HandlePodAdditions() receives a slice of pods, meaning several pods may be created at once.
- The pods are sorted by creation time, and the sorted slice is iterated over.
- All pods currently on the kubelet's node are fetched.
- kl.podManager.AddPod(pod) caches the current pod into the podManager's podByUID map.
- The pod is checked for being a MirrorPod; if it is, it is handled via the mirror-pod branch.
- Whether it is a MirrorPod or a normal pod, the work ultimately flows into kl.podWorkers.UpdatePod(); the difference is only in the options passed (UpdateType kubetypes.SyncPodUpdate for a mirror pod, kubetypes.SyncPodCreate for a new pod).
- A MirrorPod is the apiserver-side mirror of a static pod. The kubelet does not create real containers for it on the node; its ADD/UPDATE/DELETE operations are all treated as UPDATE, refreshing its state in the apiserver.
// HandlePodAdditions is the callback in SyncHandler for pods being added from
// a config source.
func (kl *Kubelet) HandlePodAdditions(pods []*v1.Pod) {
    start := kl.clock.Now()
    // sort pods by creation time
    sort.Sort(sliceutils.PodsByCreationTime(pods))
    if utilfeature.DefaultFeatureGate.Enabled(features.InPlacePodVerticalScaling) {
        kl.podResizeMutex.Lock()
        defer kl.podResizeMutex.Unlock()
    }
    // the notification carries a slice: several pods may arrive at once
    for _, pod := range pods {
        existingPods := kl.podManager.GetPods()
        // Always add the pod to the pod manager. Kubelet relies on the pod
        // manager as the source of truth for the desired state. If a pod does
        // not exist in the pod manager, it means that it has been deleted in
        // the apiserver and no action (other than cleanup) is required.
        // cache the current pod into the podManager's podByUID map
        kl.podManager.AddPod(pod)

        pod, mirrorPod, wasMirror := kl.podManager.GetPodAndMirrorPod(pod)
        if wasMirror {
            if pod == nil {
                klog.V(2).InfoS("Unable to find pod for mirror pod, skipping", "mirrorPod", klog.KObj(mirrorPod), "mirrorPodUID", mirrorPod.UID)
                continue
            }
            kl.podWorkers.UpdatePod(UpdatePodOptions{
                Pod:        pod,
                MirrorPod:  mirrorPod,
                UpdateType: kubetypes.SyncPodUpdate,
                StartTime:  start,
            })
            continue
        }

        // ...

        // hand the pod off to the pod workers, which continue the creation
        kl.podWorkers.UpdatePod(UpdatePodOptions{
            Pod:        pod,
            MirrorPod:  mirrorPod,
            UpdateType: kubetypes.SyncPodCreate,
            StartTime:  start,
        })
    }
}
podWorkers.UpdatePod
// UpdatePod carries a configuration change or termination state to a pod. A pod is either runnable,
// terminating, or terminated, and will transition to terminating if: deleted on the apiserver,
// discovered to have a terminal phase (Succeeded or Failed), or evicted by the kubelet.
func (p *podWorkers) UpdatePod(options UpdatePodOptions) {
    // ...
    // start the pod worker goroutine if it doesn't exist
    podUpdates, exists := p.podUpdates[uid]
    // if there is no worker for this pod yet, spawn one as a goroutine
    if !exists {
        // buffer the channel to avoid blocking this method
        podUpdates = make(chan struct{}, 1)
        p.podUpdates[uid] = podUpdates

        // ensure that static pods start in the order they are received by UpdatePod
        if kubetypes.IsStaticPod(pod) {
            p.waitingToStartStaticPodsByFullname[status.fullname] =
                append(p.waitingToStartStaticPodsByFullname[status.fullname], uid)
        }

        // allow testing of delays in the pod update channel
        var outCh <-chan struct{}
        if p.workerChannelFn != nil {
            outCh = p.workerChannelFn(uid, podUpdates)
        } else {
            outCh = podUpdates
        }

        // spawn a pod worker
        go func() {
            // TODO: this should be a wait.Until with backoff to handle panics, and
            // accept a context for shutdown
            defer runtime.HandleCrash()
            defer klog.V(3).InfoS("Pod worker has stopped", "podUID", uid)
            // run the per-pod worker loop
            p.podWorkerLoop(uid, outCh)
        }()
    }
    // ...
}
podWorkerLoop
func (p *podWorkers) podWorkerLoop(podUID types.UID, podUpdates <-chan struct{}) {
    var lastSyncTime time.Time
    for range podUpdates {
        ctx, update, canStart, canEverStart, ok := p.startPodSync(podUID)
        // ...
        var isTerminal bool
        err := func() error {
            var err error
            // ...
            // Take the appropriate action (illegal phases are prevented by UpdatePod)
            switch {
            case update.WorkType == TerminatedPod:
                err = p.podSyncer.SyncTerminatedPod(ctx, update.Options.Pod, status)

            case update.WorkType == TerminatingPod:
                var gracePeriod *int64
                if opt := update.Options.KillPodOptions; opt != nil {
                    gracePeriod = opt.PodTerminationGracePeriodSecondsOverride
                }
                podStatusFn := p.acknowledgeTerminating(podUID)
                // if we only have a running pod, terminate it directly
                if update.Options.RunningPod != nil {
                    err = p.podSyncer.SyncTerminatingRuntimePod(ctx, update.Options.RunningPod)
                } else {
                    err = p.podSyncer.SyncTerminatingPod(ctx, update.Options.Pod, status, gracePeriod, podStatusFn)
                }

            default:
                // the creation path continues here
                isTerminal, err = p.podSyncer.SyncPod(ctx, update.Options.UpdateType, update.Options.Pod, update.Options.MirrorPod, status)
            }

            lastSyncTime = p.clock.Now()
            return err
        }()
        // ...
    }
}
Kubelet.SyncPod
// This operation writes all events that are dispatched in order to provide
// the most accurate information possible about an error situation to aid debugging.
// Callers should not write an event if this operation returns an error.
func (kl *Kubelet) SyncPod(ctx context.Context, updateType kubetypes.SyncPodType, pod, mirrorPod *v1.Pod, podStatus *kubecontainer.PodStatus) (isTerminal bool, err error) {
    // ...
    sctx := context.WithoutCancel(ctx)
    // delegate the actual pod (re)creation to the container runtime
    result := kl.containerRuntime.SyncPod(sctx, pod, podStatus, pullSecrets, kl.backOff)
    kl.reasonCache.Update(pod.UID, result)
    // ...
    return false, nil
}
kubeGenericRuntimeManager.SyncPod
This is the source of the container runtime's SyncPod() that the call above lands in. It is long, so let's first summarize its steps:
- Compute whether the sandbox container and the normal containers have changed.
- Kill the sandbox container and recreate it if necessary.
- Kill all of the pod's containers if the pod should not be running.
- Create the sandbox container if needed.
- Create the ephemeral containers.
- Create the init containers (initContainers).
- Resize running containers, if InPlacePodVerticalScaling is enabled.
- Create the normal containers.
Next we walk through each of the steps above in detail.
Note
What exactly is the sandbox container? When the kubelet creates a pod, run docker ps and you will find a companion container, created from the pause image, alongside it.
57f7fc7cf97b   605c77e624dd                                        "/docker-entrypoint.…"   14 hours ago   Up 14 hours   k8s_nginx_octopus-deployment-549545bf46-x9cqm_default_c799bb7f-d5d9-41bd-ba60-9a968f0fac54_0
4e0c7fb21e49   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 hours ago   Up 14 hours   k8s_POD_octopus-deployment-549545bf46-x9cqm_default_c799bb7f-d5d9-41bd-ba60-9a968f0fac54_0
It exists to set up the namespaces and resource isolation shared by the pod's group of containers; every container in the pod can then share resources inside this isolated environment. Containers therefore need no KVM-style operating-system instance for isolation and can make full use of the host's resources — which is very much the essence of Kubernetes.
// SyncPod syncs the running pod into the desired pod by executing following steps:
//
//  1. Compute sandbox and container changes.
//  2. Kill pod sandbox if necessary.
//  3. Kill any containers that should not be running.
//  4. Create sandbox if necessary.
//  5. Create ephemeral containers.
//  6. Create init containers.
//  7. Resize running containers (if InPlacePodVerticalScaling==true)
//  8. Create normal containers.
func (m *kubeGenericRuntimeManager) SyncPod(ctx context.Context, pod *v1.Pod, podStatus *kubecontainer.PodStatus, pullSecrets []v1.Secret, backOff *flowcontrol.Backoff) (result kubecontainer.PodSyncResult) {
    // Step 1: Compute sandbox and container changes.
    // Step 2: Kill the pod if the sandbox has changed.
    // Step 3: kill any running containers in this pod which are not to keep.

    // Step 4: Create a sandbox for the pod if necessary.
    podSandboxID := podContainerChanges.SandboxID
    if podContainerChanges.CreateSandbox {
        // ...
        podSandboxID, msg, err = m.createPodSandbox(ctx, pod, podContainerChanges.Attempt)
    }
    // ...

    // Helper containing boilerplate common to starting all types of containers.
    // typeName is a description used to describe this type of container in log messages,
    // currently: "container", "init container" or "ephemeral container"
    // metricLabel is the label used to describe this type of container in monitoring metrics.
    // currently: "container", "init_container" or "ephemeral_container"
    start := func(ctx context.Context, typeName, metricLabel string, spec *startSpec) error {
        startContainerResult := kubecontainer.NewSyncResult(kubecontainer.StartContainer, spec.container.Name)
        result.AddSyncResult(startContainerResult)

        isInBackOff, msg, err := m.doBackOff(pod, spec.container, podStatus, backOff)
        if isInBackOff {
            startContainerResult.Fail(err, msg)
            klog.V(4).InfoS("Backing Off restarting container in pod", "containerType", typeName, "container", spec.container, "pod", klog.KObj(pod))
            return err
        }

        metrics.StartedContainersTotal.WithLabelValues(metricLabel).Inc()
        if sc.HasWindowsHostProcessRequest(pod, spec.container) {
            metrics.StartedHostProcessContainersTotal.WithLabelValues(metricLabel).Inc()
        }
        klog.V(4).InfoS("Creating container in pod", "containerType", typeName, "container", spec.container, "pod", klog.KObj(pod))
        // NOTE (aramase) podIPs are populated for single stack and dual stack clusters. Send only podIPs.
        if msg, err := m.startContainer(ctx, podSandboxID, podSandboxConfig, spec, pod, podStatus, pullSecrets, podIP, podIPs); err != nil {
            // startContainer() returns well-defined error codes that have reasonable cardinality for metrics and are
            // useful to cluster administrators to distinguish "server errors" from "user errors".
            metrics.StartedContainersErrorsTotal.WithLabelValues(metricLabel, err.Error()).Inc()
            if sc.HasWindowsHostProcessRequest(pod, spec.container) {
                metrics.StartedHostProcessContainersErrorsTotal.WithLabelValues(metricLabel, err.Error()).Inc()
            }
            startContainerResult.Fail(err, msg)
            // known errors that are logged in other places are logged at higher levels here to avoid
            // repetitive log spam
            switch {
            case err == images.ErrImagePullBackOff:
                klog.V(3).InfoS("Container start failed in pod", "containerType", typeName, "container", spec.container, "pod", klog.KObj(pod), "containerMessage", msg, "err", err)
            default:
                utilruntime.HandleError(fmt.Errorf("%v %+v start failed in pod %v: %v: %s", typeName, spec.container, format.Pod(pod), err, msg))
            }
            return err
        }

        return nil
    }

    // Step 5: start ephemeral containers
    // These are started "prior" to init containers to allow running ephemeral containers even when there
    // are errors starting an init container. In practice init containers will start first since ephemeral
    // containers cannot be specified on pod creation.
    // Ephemeral containers exist for ad-hoc tasks inside a running pod,
    // e.g. debugging, log collection, or browsing the filesystem.
    for _, idx := range podContainerChanges.EphemeralContainersToStart {
        start(ctx, "ephemeral container", metrics.EphemeralContainer, ephemeralContainerStartSpec(&pod.Spec.EphemeralContainers[idx]))
    }

    if !utilfeature.DefaultFeatureGate.Enabled(features.SidecarContainers) {
        // Step 6: start the init container.
        if container := podContainerChanges.NextInitContainerToStart; container != nil {
            // Start the next init container.
            if err := start(ctx, "init container", metrics.InitContainer, containerStartSpec(container)); err != nil {
                return
            }

            // Successfully started the container; clear the entry in the failure
            klog.V(4).InfoS("Completed init container for pod", "containerName", container.Name, "pod", klog.KObj(pod))
        }
    } else {
        // Init containers ensure required setup completes before the main
        // application containers start.
        // Step 6: start init containers.
        for _, idx := range podContainerChanges.InitContainersToStart {
            container := &pod.Spec.InitContainers[idx]
            // Start the next init container.
            if err := start(ctx, "init container", metrics.InitContainer, containerStartSpec(container)); err != nil {
                if types.IsRestartableInitContainer(container) {
                    klog.V(4).InfoS("Failed to start the restartable init container for the pod, skipping", "initContainerName", container.Name, "pod", klog.KObj(pod))
                    continue
                }
                klog.V(4).InfoS("Failed to initialize the pod, as the init container failed to start, aborting", "initContainerName", container.Name, "pod", klog.KObj(pod))
                return
            }

            // Successfully started the container; clear the entry in the failure
            klog.V(4).InfoS("Completed init container for pod", "containerName", container.Name, "pod", klog.KObj(pod))
        }
    }

    // Step 7: For containers in podContainerChanges.ContainersToUpdate[CPU,Memory] list, invoke UpdateContainerResources
    if isInPlacePodVerticalScalingAllowed(pod) {
        if len(podContainerChanges.ContainersToUpdate) > 0 || podContainerChanges.UpdatePodResources {
            m.doPodResizeAction(pod, podStatus, podContainerChanges, result)
        }
    }

    // Step 8: start the normal (application) containers in podContainerChanges.ContainersToStart.
    for _, idx := range podContainerChanges.ContainersToStart {
        start(ctx, "container", metrics.Container, containerStartSpec(&pod.Spec.Containers[idx]))
    }

    return
}
Init containers are a Kubernetes concept: a special type of container that runs initialization tasks in a pod before the main application containers start.
When you create a pod, you can define one or more init containers in addition to the main application containers. They run one after another in the order they are defined, and each must complete successfully (exit with status code 0) before the next init container runs and before the main application containers start.
Init containers can perform all sorts of initialization before the application starts, for example:
- Database initialization: create a database and run the required initialization scripts.
- Dependency checks: verify that the services or resources the application depends on are available.
- File downloads: fetch configuration files or other required resources.
- Security setup: apply security-related configuration.
Each init container can have its own image and configuration; they all share the pod's network namespace and volumes. Once every init container has completed successfully, the main application containers start.
Note that init containers run sequentially: if one fails, the pod's startup blocks until that init container eventually succeeds or the pod is restarted.
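As an illustration of this ordering contract, here is a hypothetical pod with one init container built with the client-go API types (the image names and the nc-based wait are made up for the example):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "app-with-init"},
        Spec: corev1.PodSpec{
            // Runs to completion (exit code 0) before "app" is started.
            InitContainers: []corev1.Container{{
                Name:    "wait-for-db",
                Image:   "busybox:1.36",
                Command: []string{"sh", "-c", "until nc -z db 5432; do sleep 2; done"},
            }},
            Containers: []corev1.Container{{
                Name:  "app",
                Image: "nginx:1.25",
            }},
        },
    }
    out, err := yaml.Marshal(pod)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out)) // the manifest you would otherwise write by hand
}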
Injecting the pause container
containerd.RunPodSandbox
The pause image comes into play when containerd creates the sandbox; which image to use is specified in containerd's configuration.
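For example, with containerd's CRI plugin the sandbox image is set via sandbox_image in /etc/containerd/config.toml (the pause tag below is just an example):

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"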
// RunPodSandbox creates and starts a pod-level sandbox. Runtimes should ensure
// the sandbox is in ready state.
func (c *criService) RunPodSandbox(ctx context.Context, r *runtime.RunPodSandboxRequest) (_ *runtime.RunPodSandboxResponse, retErr error) {
    // ...
    // Ensure sandbox container image snapshot.
    // the sandbox image, typically pause
    image, err := c.ensureImageExists(ctx, c.config.SandboxImage, config)
    if err != nil {
        return nil, fmt.Errorf("failed to get sandbox image %q: %w", c.config.SandboxImage, err)
    }
    containerdImage, err := c.toContainerdImage(ctx, *image)
    if err != nil {
        return nil, fmt.Errorf("failed to get image from containerd %q: %w", image.ID, err)
    }

    ociRuntime, err := c.getSandboxRuntime(config, r.GetRuntimeHandler())
    if err != nil {
        return nil, fmt.Errorf("failed to get sandbox runtime: %w", err)
    }
    log.G(ctx).WithField("podsandboxid", id).Debugf("use OCI runtime %+v", ociRuntime)

    runtimeStart := time.Now()

    // Create sandbox container.
    // NOTE: sandboxContainerSpec SHOULD NOT have side
    // effect, e.g. accessing/creating files, so that we can test
    // it safely.
    // NOTE: the network namespace path will be created later and update through updateNetNamespacePath function
    spec, err := c.sandboxContainerSpec(id, config, &image.ImageSpec.Config, "", ociRuntime.PodAnnotations)
    if err != nil {
        return nil, fmt.Errorf("failed to generate sandbox container spec: %w", err)
    }
    log.G(ctx).WithField("podsandboxid", id).Debugf("sandbox container spec: %#+v", spew.NewFormatter(spec))
    sandbox.ProcessLabel = spec.Process.SelinuxLabel
    defer func() {
        if retErr != nil {
            selinux.ReleaseLabel(sandbox.ProcessLabel)
        }
    }()

    // ...

    container, err := c.client.NewContainer(ctx, id, opts...)
    if err != nil {
        return nil, fmt.Errorf("failed to create containerd container: %w", err)
    }

    // Add container into sandbox store in INIT state.
    sandbox.Container = container
    defer func() {
        // Put the sandbox into sandbox store when some resource fails to be cleaned.
        if retErr != nil && cleanupErr != nil {
            log.G(ctx).WithError(cleanupErr).Errorf("encountered an error cleaning up failed sandbox %q, marking sandbox state as SANDBOX_UNKNOWN", id)
            if err := c.sandboxStore.Add(sandbox); err != nil {
                log.G(ctx).WithError(err).Errorf("failed to add sandbox %+v into store", sandbox)
            }
        }
    }()
    // ...
}
The pause container
What the pause container does can be read directly from the pause.c source file.
All pause does is trap the SIGINT, SIGTERM, and SIGCHLD signals; the SIGCHLD handler simply reaps exited child processes so they don't linger as zombies.
The pause container exists mainly to:
- Keep one container running in the sandbox at all times, holding the sandbox environment alive.
- Provide a main process in the shared namespace that reaps the processes of other containers when they stop, so they don't become zombie (defunct) processes.
/*
Copyright 2016 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define STRINGIFY(x) #x
#define VERSION_STRING(x) STRINGIFY(x)

#ifndef VERSION
#define VERSION HEAD
#endif

static void sigdown(int signo) {
  psignal(signo, "Shutting down, got signal");
  exit(0);
}

/* reap exited child processes so they don't become zombies */
static void sigreap(int signo) {
  while (waitpid(-1, NULL, WNOHANG) > 0)
    ;
}

int main(int argc, char **argv) {
  int i;
  for (i = 1; i < argc; ++i) {
    if (!strcasecmp(argv[i], "-v")) {
      printf("pause.c %s\n", VERSION_STRING(VERSION));
      return 0;
    }
  }

  if (getpid() != 1)
    /* Not an error because pause sees use outside of infra containers. */
    fprintf(stderr, "Warning: pause should be the first process\n");

  if (sigaction(SIGINT, &(struct sigaction){.sa_handler = sigdown}, NULL) < 0)
    return 1;
  if (sigaction(SIGTERM, &(struct sigaction){.sa_handler = sigdown}, NULL) < 0)
    return 2;
  if (sigaction(SIGCHLD,
                &(struct sigaction){.sa_handler = sigreap,
                                    .sa_flags = SA_NOCLDSTOP},
                NULL) < 0)
    return 3;

  for (;;)
    pause();

  fprintf(stderr, "Error: infinite loop terminated\n");
  return 42;
}