kube-scheduler: How Scheduling Tasks Are Executed, with Source Code Walkthrough (Part 2)

Overview

Abstract: the previous article analyzed the startup flow of kube-scheduler; this article continues by exploring how kube-scheduler executes the scheduling task for a pod.

Main Text

Note: this analysis is based on the Kubernetes v1.12.0 source code.

The previous article showed that kube-scheduler starts the scheduler instance via sched.Run(). Inside sched.Run(), sched.scheduleOne is executed in a loop: each iteration takes one pod and runs the scheduling task for it.

Source location: k8s.io/kubernetes/pkg/scheduler/scheduler.go

// Run begins watching and scheduling. It waits for cache to be synced, then starts a goroutine and returns immediately.
func (sched *Scheduler) Run() {
    if !sched.config.WaitForCacheSync() {
        return
    }
    // Start a goroutine that runs the scheduling task in a loop.
    go wait.Until(sched.scheduleOne, 0, sched.config.StopEverything)
}

(sched *Scheduler) scheduleOne

Let's dig into scheduleOne, which is the key function. Its first step is sched.config.NextPod() (backed by getNextPod()), which takes a pending pod out of the scheduling queue podQueue.

// scheduleOne does the entire scheduling workflow for a single pod.  It is serialized on the scheduling algorithm's host fitting.
func (sched *Scheduler) scheduleOne() {
    // Take a pending pod out of the scheduling queue podQueue.
    pod := sched.config.NextPod()
    if pod.DeletionTimestamp != nil {
        sched.config.Recorder.Eventf(pod, v1.EventTypeWarning, "FailedScheduling", "skip schedule deleting pod: %v/%v", pod.Namespace, pod.Name)
        glog.V(3).Infof("Skip schedule deleting pod: %v/%v", pod.Namespace, pod.Name)
        return
    }

    glog.V(3).Infof("Attempting to schedule pod: %v/%v", pod.Namespace, pod.Name)

    // Synchronously attempt to find a fit for the pod.
    start := time.Now()
    suggestedHost, err := sched.schedule(pod)
    if err != nil {
        // schedule() may have failed because the pod would not fit on any host, so we try to
        // preempt, with the expectation that the next time the pod is tried for scheduling it
        // will fit due to the preemption. It is also possible that a different pod will schedule
        // into the resources that were preempted, but this is harmless.
        if fitError, ok := err.(*core.FitError); ok {
            preemptionStartTime := time.Now()
            sched.preempt(pod, fitError)
            metrics.PreemptionAttempts.Inc()
            metrics.SchedulingAlgorithmPremptionEvaluationDuration.Observe(metrics.SinceInMicroseconds(preemptionStartTime))
            metrics.SchedulingLatency.WithLabelValues(metrics.PreemptionEvaluation).Observe(metrics.SinceInSeconds(preemptionStartTime))
        }
        return
    }
    metrics.SchedulingAlgorithmLatency.Observe(metrics.SinceInMicroseconds(start))
    // Tell the cache to assume that a pod now is running on a given node, even though it hasn't been bound yet.
    // This allows us to keep scheduling without waiting on binding to occur.
    assumedPod := pod.DeepCopy()

    // Assume volumes first before assuming the pod.
    //
    // If all volumes are completely bound, then allBound is true and binding will be skipped.
    //
    // Otherwise, binding of volumes is started after the pod is assumed, but before pod binding.
    //
    // This function modifies 'assumedPod' if volume binding is required.
    allBound, err := sched.assumeVolumes(assumedPod, suggestedHost)
    if err != nil {
        return
    }

    // assume modifies `assumedPod` by setting NodeName=suggestedHost
    err = sched.assume(assumedPod, suggestedHost)
    if err != nil {
        return
    }
    // bind the pod to its host asynchronously (we can do this b/c of the assumption step above).
    go func() {
        // Bind volumes first before Pod
        if !allBound {
            err = sched.bindVolumes(assumedPod)
            if err != nil {
                return
            }
        }

        err := sched.bind(assumedPod, &v1.Binding{
            ObjectMeta: metav1.ObjectMeta{Namespace: assumedPod.Namespace, Name: assumedPod.Name, UID: assumedPod.UID},
            Target: v1.ObjectReference{
                Kind: "Node",
                Name: suggestedHost,
            },
        })
        metrics.E2eSchedulingLatency.Observe(metrics.SinceInMicroseconds(start))
        if err != nil {
            glog.Errorf("Internal error binding pod: (%v)", err)
        }
    }()
}

scheduleOne is fairly long, so let's distill the key steps:

  • sched.config.NextPod() takes a pending pod out of the scheduling queue podQueue.
  • sched.schedule(pod) runs the scheduling algorithm for the pod and returns a suitable host (suggestedHost).
  • sched.bind performs the bind, i.e. binds the pod to the node by sending a POST request to the apiserver.
// scheduleOne does the entire scheduling workflow for a single pod.  It is serialized on the scheduling algorithm's host fitting.
func (sched *Scheduler) scheduleOne() {
    // Take a pending pod out of the scheduling queue podQueue.
    pod := sched.config.NextPod()

    // Run the scheduling algorithm for the pod; it returns a suitable host.
    suggestedHost, err := sched.schedule(pod)

    // bind the pod to its host asynchronously (we can do this b/c of the assumption step above).
    go func() {
        // Perform the bind, i.e. bind the pod to the node.
        err := sched.bind(assumedPod, &v1.Binding{ /* ... */ })
    }()
}

Next, we dig into each of the three parts in detail: getting the pending pod, running the scheduling task for the pod, and binding the pod.

How the source implements the scheduling task

(sched *Scheduler) schedule

(sched *Scheduler) schedule takes a pending pod, runs the scheduling algorithm, and returns a suitable host. Inside the function, the actual scheduling work is done by sched.config.Algorithm.Schedule.

// schedule implements the scheduling algorithm and returns the suggested host.
func (sched *Scheduler) schedule(pod *v1.Pod) (string, error) {
    // Given a pod and a NodeLister, return a suitable host.
    host, err := sched.config.Algorithm.Schedule(pod, sched.config.NodeLister)
    if err != nil {
        pod = pod.DeepCopy()
        sched.config.Error(pod, err)
        sched.config.Recorder.Eventf(pod, v1.EventTypeWarning, "FailedScheduling", "%v", err)
        sched.config.PodConditionUpdater.Update(pod, &v1.PodCondition{
            Type:          v1.PodScheduled,
            Status:        v1.ConditionFalse,
            LastProbeTime: metav1.Now(),
            Reason:        v1.PodReasonUnschedulable,
            Message:       err.Error(),
        })
        return "", err
    }
    return host, err
}

(g *genericScheduler) Schedule

sched.config.Algorithm.Schedule is declared on an interface; the concrete implementation is the Schedule() method of the genericScheduler struct. The interface is sketched below.

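For reference, the relevant interface in v1.12 looks roughly like the following. This is a paraphrased sketch rather than a verbatim copy of pkg/scheduler/algorithm/scheduler_interface.go, so treat the exact method set as an assumption.

// A sketch of the ScheduleAlgorithm interface that sched.config.Algorithm implements
// (paraphrased from k8s v1.12; the exact method set is an approximation).
type ScheduleAlgorithm interface {
    // Schedule takes a pod and a NodeLister and returns the name of the chosen node.
    Schedule(*v1.Pod, NodeLister) (selectedMachine string, err error)
    // Preempt is called when Schedule fails with a FitError, to evict lower-priority pods.
    Preempt(*v1.Pod, NodeLister, error) (selectedNode *v1.Node, preemptedPods []*v1.Pod, cleanupNominatedPods []*v1.Pod, err error)
    // Predicates and Prioritizers expose the configured filtering and scoring functions.
    Predicates() map[string]FitPredicate
    Prioritizers() []PriorityConfig
}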

Here is the source of the Schedule() method on the genericScheduler struct.

// Schedule tries to schedule the given pod to one of the nodes in the node list.
// If it succeeds, it will return the name of the node.
// If it fails, it will return a FitError error with reasons.
func (g *genericScheduler) Schedule(pod *v1.Pod, nodeLister algorithm.NodeLister) (string, error) {
    trace := utiltrace.New(fmt.Sprintf("Scheduling %s/%s", pod.Namespace, pod.Name))
    defer trace.LogIfLong(100 * time.Millisecond)

    // Run some basic sanity checks.
    if err := podPassesBasicChecks(pod, g.pvcLister); err != nil {
        return "", err
    }

    // List all nodes.
    nodes, err := nodeLister.List()
    if err != nil {
        return "", err
    }
    if len(nodes) == 0 {
        return "", ErrNoNodesAvailable
    }

    // Used for all fit and priority funcs.
    // Refresh the node info in the scheduler cache.
    err = g.cache.UpdateNodeNameToInfoMap(g.cachedNodeInfoMap)
    if err != nil {
        return "", err
    }

    trace.Step("Computing predicates")
    // findNodesThatFit runs the predicate (filtering) algorithms and returns the nodes that fit.
    startPredicateEvalTime := time.Now()
    filteredNodes, failedPredicateMap, err := g.findNodesThatFit(pod, nodes)
    if err != nil {
        return "", err
    }

    if len(filteredNodes) == 0 {
        return "", &FitError{
            Pod:              pod,
            NumAllNodes:      len(nodes),
            FailedPredicates: failedPredicateMap,
        }
    }
    // Record data for Prometheus metrics collection.
    metrics.SchedulingAlgorithmPredicateEvaluationDuration.Observe(metrics.SinceInMicroseconds(startPredicateEvalTime))
    metrics.SchedulingLatency.WithLabelValues(metrics.PredicateEvaluation).Observe(metrics.SinceInSeconds(startPredicateEvalTime))

    trace.Step("Prioritizing")
    // Next, run the priority (scoring) algorithms to pick the best node.
    startPriorityEvalTime := time.Now()
    // When only one node after predicate, just use it.
    // If only one node passed the predicates, return it directly.
    if len(filteredNodes) == 1 {
        metrics.SchedulingAlgorithmPriorityEvaluationDuration.Observe(metrics.SinceInMicroseconds(startPriorityEvalTime))
        return filteredNodes[0].Name, nil
    }

    metaPrioritiesInterface := g.priorityMetaProducer(pod, g.cachedNodeInfoMap)
    // PrioritizeNodes runs the priority (scoring) algorithms and returns the scored nodes.
    priorityList, err := PrioritizeNodes(pod, g.cachedNodeInfoMap, metaPrioritiesInterface, g.prioritizers, filteredNodes, g.extenders)
    if err != nil {
        return "", err
    }
    metrics.SchedulingAlgorithmPriorityEvaluationDuration.Observe(metrics.SinceInMicroseconds(startPriorityEvalTime))
    metrics.SchedulingLatency.WithLabelValues(metrics.PriorityEvaluation).Observe(metrics.SinceInSeconds(startPriorityEvalTime))

    trace.Step("Selecting host")
    // selectHost picks one node from the best-scored candidates.
    return g.selectHost(priorityList)
}

The code is long, so as usual let's distill the essence. (g *genericScheduler) Schedule mainly does three things:

  • g.findNodesThatFit(pod, nodes) filters the node list down to the nodes the pod can be scheduled onto — the predicate (filtering) phase.
  • PrioritizeNodes scores the filtered nodes and returns the highest-scoring nodes (several nodes may share the top score, so a list is returned) — the priority (scoring) phase.
  • g.selectHost(priorityList) picks a single node from that best list.
// Schedule tries to schedule the given pod to one of the nodes in the node list.
// If it succeeds, it will return the name of the node.
// If it fails, it will return a FitError error with reasons.
func (g *genericScheduler) Schedule(pod *v1.Pod, nodeLister algorithm.NodeLister) (string, error) {
    // 1. List all nodes.
    nodes, err := nodeLister.List()

    // 2. findNodesThatFit runs the predicate (filtering) algorithms and returns the nodes that fit.
    filteredNodes, failedPredicateMap, err := g.findNodesThatFit(pod, nodes)

    // When only one node after predicate, just use it.
    // 2.1 If only one node passed the predicates, return it directly.
    if len(filteredNodes) == 1 {
        return filteredNodes[0].Name, nil
    }

    // 3. PrioritizeNodes runs the priority (scoring) algorithms and scores the nodes.
    priorityList, err := PrioritizeNodes(pod, g.cachedNodeInfoMap, metaPrioritiesInterface, g.prioritizers, filteredNodes, g.extenders)

    // 4. selectHost picks one node from the best-scored candidates.
    return g.selectHost(priorityList)
}

g.findNodesThatFit and PrioritizeNodes contain a lot of material and will be analyzed in a separate article. To keep the overall shape in mind, a rough sketch of the filter → score → select pattern they implement follows below.
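This is only a mental model, not the actual kube-scheduler code: the types, predicate, and priority functions below are made up for illustration, but the shape (filter all nodes, score the survivors, pick the best) matches what genericScheduler does.

package main

import "fmt"

// Hypothetical, simplified types for illustration only; they are not the
// kube-scheduler's real Node/Pod/predicate/priority types.
type node struct {
    name    string
    freeCPU int
}

type pod struct {
    name       string
    cpuRequest int
}

type predicate func(pod, node) bool // filtering phase
type priority func(pod, node) int   // scoring phase

func schedule(p pod, nodes []node, preds []predicate, prios []priority) (string, error) {
    // 1. Predicates: keep only the nodes every predicate accepts.
    var filtered []node
    for _, n := range nodes {
        ok := true
        for _, pr := range preds {
            if !pr(p, n) {
                ok = false
                break
            }
        }
        if ok {
            filtered = append(filtered, n)
        }
    }
    if len(filtered) == 0 {
        return "", fmt.Errorf("no node fits pod %s", p.name)
    }

    // 2. Priorities: sum the scores from every priority function.
    best, bestScore := filtered[0].name, -1
    for _, n := range filtered {
        score := 0
        for _, pr := range prios {
            score += pr(p, n)
        }
        // 3. Select: remember the highest-scoring node.
        if score > bestScore {
            best, bestScore = n.name, score
        }
    }
    return best, nil
}

func main() {
    nodes := []node{{"node-1", 2}, {"node-2", 8}}
    p := pod{"web-0", 4}
    enoughCPU := predicate(func(p pod, n node) bool { return n.freeCPU >= p.cpuRequest })
    mostFree := priority(func(p pod, n node) int { return n.freeCPU })
    host, _ := schedule(p, nodes, []predicate{enoughCPU}, []priority{mostFree})
    fmt.Println("suggested host:", host) // node-2
}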

The execution flow of (g *genericScheduler) Schedule is shown in the figure below.

[Figure: execution flow of (g *genericScheduler) Schedule — image from the internet; contact the author for removal in case of infringement]

(g *genericScheduler) selectHost

Let's pick the low-hanging fruit first: selectHost has the simplest logic, so we analyze it before the others. selectHost's job is to pick one best node from the prioritized list. Note that when several nodes share the same top score, it uses an incrementing index (round-robin) so that it does not keep scheduling onto the same host.


// selectHost takes a prioritized list of nodes and then picks one
// in a round-robin manner from the nodes that had the highest score.
func (g *genericScheduler) selectHost(priorityList schedulerapi.HostPriorityList) (string, error) {
    if len(priorityList) == 0 {
        return "", fmt.Errorf("empty priorityList")
    }

    // findMaxScores returns the indexes of the node(s) with the highest score in priorityList.
    maxScores := findMaxScores(priorityList)
    // Important: the incrementing index spreads ties across hosts instead of always picking the same one.
    ix := int(g.lastNodeIndex % uint64(len(maxScores)))
    g.lastNodeIndex++

    return priorityList[maxScores[ix]].Host, nil
}

findMaxScores

findMaxScores iterates over priorityList and collects into a slice the indexes of the nodes with the highest score; if several nodes share the highest score, all of their indexes are included. (A small demo of the resulting round-robin selection follows the snippet below.)


// findMaxScores returns the indexes of nodes in the "priorityList" that has the highest "Score".
func findMaxScores(priorityList schedulerapi.HostPriorityList) []int {
    maxScoreIndexes := make([]int, 0, len(priorityList)/2)
    maxScore := priorityList[0].Score
    for i, hp := range priorityList {
        if hp.Score > maxScore {
            maxScore = hp.Score
            // A new highest score was found: reset the index list and start over with this node.
            maxScoreIndexes = maxScoreIndexes[:0]
            maxScoreIndexes = append(maxScoreIndexes, i)
        } else if hp.Score == maxScore {
            maxScoreIndexes = append(maxScoreIndexes, i)
        }
    }
    return maxScoreIndexes
}

How the source takes a pending pod out of the scheduling queue

The first step of scheduleOne is sched.config.NextPod(), which takes a pending pod out of the scheduling queue podQueue. That raises two questions: how is podQueue initialized, and how does data get into the queue?

Let's go back to the run() function in server.go, which calls NewSchedulerConfig(c) to obtain the scheduler configuration.

Run

// Run runs the Scheduler.
func Run(c schedulerserverconfig.CompletedConfig, stopCh <-chan struct{}) error {
    // ... (code omitted)

    // Build a scheduler config from the provided algorithm source.
    schedulerConfig, err := NewSchedulerConfig(c)
    if err != nil {
        return err
    }

    // ... (code omitted)
}

NewSchedulerConfig

NewSchedulerConfig calls the CreateFromProvider function. Let's follow CreateFromProvider next.

// NewSchedulerConfig creates the scheduler configuration. This is exposed for use by tests.
func NewSchedulerConfig(s schedulerserverconfig.CompletedConfig) (*scheduler.Config, error) {
    // ... (code omitted)

    source := s.ComponentConfig.AlgorithmSource
    var config *scheduler.Config
    switch {
    case source.Provider != nil:
        // Create the config from a named algorithm provider.
        sc, err := configurator.CreateFromProvider(*source.Provider)
        if err != nil {
            return nil, fmt.Errorf("couldn't create scheduler using provider %q: %v", *source.Provider, err)
        }
        config = sc

    // ... (code omitted)
}

(c *configFactory) CreateFromProvider

CreateFromProvider in turn calls the CreateFromKeys function.

// Creates a scheduler from the name of a registered algorithm provider.
func (c *configFactory) CreateFromProvider(providerName string) (*scheduler.Config, error) {
    glog.V(2).Infof("Creating scheduler from algorithm provider '%v'", providerName)
    provider, err := GetAlgorithmProvider(providerName)
    if err != nil {
        return nil, err
    }

    // Note: the real work happens in CreateFromKeys.
    return c.CreateFromKeys(provider.FitPredicateKeys, provider.PriorityFunctionKeys, []algorithm.SchedulerExtender{})
}

(c *configFactory) CreateFromKeys

Source location: kubernetes/pkg/scheduler/factory/factory.go

In CreateFromKeys we finally find the definition of NextPod; from there we continue to c.getNextPod().

// Creates a scheduler from a set of registered fit predicate keys and priority keys.
func (c *configFactory) CreateFromKeys(predicateKeys, priorityKeys sets.String, extenders []algorithm.SchedulerExtender) (*scheduler.Config, error) {
    glog.V(2).Infof("Creating scheduler with fit predicates '%v' and priority functions '%v'", predicateKeys, priorityKeys)

    if c.GetHardPodAffinitySymmetricWeight() < 1 || c.GetHardPodAffinitySymmetricWeight() > 100 {
        return nil, fmt.Errorf("invalid hardPodAffinitySymmetricWeight: %d, must be in the range 1-100", c.GetHardPodAffinitySymmetricWeight())
    }

    predicateFuncs, err := c.GetPredicates(predicateKeys)
    if err != nil {
        return nil, err
    }

    priorityConfigs, err := c.GetPriorityFunctionConfigs(priorityKeys)
    if err != nil {
        return nil, err
    }

    priorityMetaProducer, err := c.GetPriorityMetadataProducer()
    if err != nil {
        return nil, err
    }

    predicateMetaProducer, err := c.GetPredicateMetadataProducer()
    if err != nil {
        return nil, err
    }

    // Init equivalence class cache
    if c.enableEquivalenceClassCache {
        c.equivalencePodCache = equivalence.NewCache()
        glog.Info("Created equivalence class cache")
    }

    // Important: create the generic scheduler.
    algo := core.NewGenericScheduler(
        c.schedulerCache,
        c.equivalencePodCache,
        c.podQueue,
        predicateFuncs,
        predicateMetaProducer,
        priorityConfigs,
        priorityMetaProducer,
        extenders,
        c.volumeBinder,
        c.pVCLister,
        c.alwaysCheckAllPredicates,
        c.disablePreemption,
        c.percentageOfNodesToScore,
    )

    podBackoff := util.CreateDefaultPodBackoff()
    return &scheduler.Config{
        SchedulerCache: c.schedulerCache,
        Ecache:         c.equivalencePodCache,
        // The scheduler only needs to consider schedulable nodes.
        NodeLister:          &nodeLister{c.nodeLister},
        Algorithm:           algo,
        GetBinder:           c.getBinderFunc(extenders),
        PodConditionUpdater: &podConditionUpdater{c.client},
        PodPreemptor:        &podPreemptor{c.client},
        WaitForCacheSync: func() bool {
            return cache.WaitForCacheSync(c.StopEverything, c.scheduledPodsHasSynced)
        },
        // Important: NextPod takes a pending pod out of the scheduling queue.
        NextPod: func() *v1.Pod {
            return c.getNextPod()
        },
        Error:          c.MakeDefaultErrorFunc(podBackoff, c.podQueue),
        StopEverything: c.StopEverything,
        VolumeBinder:   c.volumeBinder,
        // The scheduling queue itself.
        SchedulingQueue: c.podQueue,
    }, nil
}

getNextPod() is straightforward: it Pop()s a pending pod from the scheduling queue podQueue. podQueue is a priority queue: if pending pods declare a priority in their spec, Pop() returns higher-priority pods first. (A toy priority-queue sketch follows the snippet below.)

func (c *configFactory) getNextPod() *v1.Pod {
    // Pop a pending pod from the scheduling queue.
    pod, err := c.podQueue.Pop()
    if err == nil {
        glog.V(4).Infof("About to try and schedule pod %v/%v", pod.Namespace, pod.Name)
        return pod
    }
    glog.Errorf("Error while retrieving next pod from scheduling queue: %v", err)
    return nil
}

To recap: when the scheduler starts, the pod informer is started as well; the informer's event handlers put pending (not-yet-scheduled) pods into the scheduling queue podQueue. The consumer side is the scheduleOne method, which takes one pending pod at a time via c.podQueue.Pop(). A hedged sketch of the producer side follows.
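The sketch below uses client-go directly and is a simplified illustration only; the real handler wiring in factory.go also filters on the scheduler name, pod phase, and other conditions, and the channel here stands in for the real priority queue.

package main

import (
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: a kubeconfig at the default path; in-cluster config would also work.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    // podQueue is a placeholder for the scheduler's real priority queue.
    podQueue := make(chan *v1.Pod, 100)

    factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
    podInformer := factory.Core().V1().Pods().Informer()
    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            pod := obj.(*v1.Pod)
            // Only unscheduled pods (empty NodeName) are interesting to the scheduler.
            if pod.Spec.NodeName == "" {
                podQueue <- pod
            }
        },
    })

    stopCh := make(chan struct{})
    factory.Start(stopCh)
    factory.WaitForCacheSync(stopCh)

    // Consumer side: pop pods and "schedule" them (here we just print).
    for pod := range podQueue {
        fmt.Printf("would schedule pod %s/%s\n", pod.Namespace, pod.Name)
    }
}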

How the source implements the bind operation

After scheduleOne has picked a node, the scheduler builds a v1.Binding object whose Target is an ObjectReference of Kind "Node" with Name set to the chosen suggestedHost.

The snippet below is the part of scheduleOne that performs the bind.

// Build a v1.Binding object.
err := sched.bind(assumedPod, &v1.Binding{
    ObjectMeta: metav1.ObjectMeta{Namespace: assumedPod.Namespace, Name: assumedPod.Name, UID: assumedPod.UID},
    Target: v1.ObjectReference{
        Kind: "Node",
        Name: suggestedHost,
    },
})

About the v1.Binding resource type

v1.Binding is a Kubernetes API object type used to bind a Pod to a specific node. The object carries the Pod's name and namespace in its ObjectMeta and the name of the target node in its Target ObjectReference. A minimal client-go example follows.
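For example, an out-of-tree component could bind a pod the same way. This is only a minimal sketch: the pod and node names are made up, and the Bind call uses the v1.12-era client-go signature (newer client-go releases take a context and metav1.CreateOptions as well).

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: kubeconfig at the default location.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    // Bind pod default/demo-pod to node demo-node (hypothetical names).
    binding := &v1.Binding{
        ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "demo-pod"},
        Target: v1.ObjectReference{
            Kind: "Node",
            Name: "demo-node",
        },
    }

    // v1.12-era signature; newer client-go versions use Bind(ctx, binding, metav1.CreateOptions{}).
    if err := clientset.CoreV1().Pods("default").Bind(binding); err != nil {
        panic(err)
    }
}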

There is quite a lot of code in (sched *Scheduler) bind, but the core is a single line:

err := sched.config.GetBinder(assumed).Bind(b)

This calls the Bind method of the Binder interface. Going one level deeper, the Bind method uses the clientset to issue a bind POST request to the apiserver.

// bind binds a pod to a given node defined in a binding object.  We expect this to run asynchronously, so we
// handle binding metrics internally.
func (sched *Scheduler) bind(assumed *v1.Pod, b *v1.Binding) error {
    bindingStart := time.Now()
    // If binding succeeded then PodScheduled condition will be updated in apiserver so that
    // it's atomic with setting host.
    // Call the Bind method of the Binder interface.
    err := sched.config.GetBinder(assumed).Bind(b)
    // Tell the scheduler cache that binding for this assumed pod has finished.
    if err := sched.config.SchedulerCache.FinishBinding(assumed); err != nil {
        glog.Errorf("scheduler cache FinishBinding failed: %v", err)
    }
    // If the bind did not succeed, record the reason and roll back the assumed pod.
    if err != nil {
        glog.V(1).Infof("Failed to bind pod: %v/%v", assumed.Namespace, assumed.Name)
        if err := sched.config.SchedulerCache.ForgetPod(assumed); err != nil {
            glog.Errorf("scheduler cache ForgetPod failed: %v", err)
        }
        sched.config.Error(assumed, err)
        sched.config.Recorder.Eventf(assumed, v1.EventTypeWarning, "FailedScheduling", "Binding rejected: %v", err)
        sched.config.PodConditionUpdater.Update(assumed, &v1.PodCondition{
            Type:          v1.PodScheduled,
            Status:        v1.ConditionFalse,
            LastProbeTime: metav1.Now(),
            Reason:        "BindingRejected",
        })
        return err
    }

    // Record metrics monitoring data.
    metrics.BindingLatency.Observe(metrics.SinceInMicroseconds(bindingStart))
    metrics.SchedulingLatency.WithLabelValues(metrics.Binding).Observe(metrics.SinceInSeconds(bindingStart))
    sched.config.Recorder.Eventf(assumed, v1.EventTypeNormal, "Scheduled", "Successfully assigned %v/%v to %v", assumed.Namespace, assumed.Name, b.Target.Name)
    return nil
}

(b *binder) Bind uses the clientset to send the bind POST request to the apiserver (a POST to the pod's binding subresource, /api/v1/namespaces/{namespace}/pods/{name}/binding).

// Bind just does a POST binding RPC.
func (b *binder) Bind(binding *v1.Binding) error {
    glog.V(3).Infof("Attempting to bind %v to %v", binding.Name, binding.Target.Name)
    // Issue a bind POST request to the apiserver through the clientset.
    return b.Client.CoreV1().Pods(binding.Namespace).Bind(binding)
}

Supplementary: what the apiserver does after receiving the bind request

When the apiserver receives this Binding object, it updates the following fields of the Pod object:

  • sets pod.Spec.NodeName
  • adds the annotations carried by the Binding
  • sets the PodScheduled condition to True

// k8s.io/kubernetes/pkg/registry/core/pod/storage/storage.go

// assignPod assigns the given pod to the given machine.
// This handles the binding RESTful request received by the apiserver.
func (r *BindingREST) assignPod(ctx context.Context, podID string, machine string, annotations map[string]string, dryRun bool) (err error) {
    // Set the pod's host and annotations.
    if _, err = r.setPodHostAndAnnotations(ctx, podID, "", machine, annotations, dryRun); err != nil {
        err = storeerr.InterpretGetError(err, api.Resource("pods"), podID)
        err = storeerr.InterpretUpdateError(err, api.Resource("pods"), podID)
        if _, ok := err.(*errors.StatusError); !ok {
            err = errors.NewConflict(api.Resource("pods/binding"), podID, err)
        }
    }
    return
}

// k8s.io/kubernetes/pkg/registry/core/pod/storage/storage.go
func (r *BindingREST) setPodHostAndAnnotations(ctx context.Context, podID, oldMachine, machine string, annotations map[string]string, dryRun bool) (finalPod *api.Pod, err error) {
    podKey := r.store.KeyFunc(ctx, podID)
    r.store.Storage.GuaranteedUpdate(ctx, podKey, &api.Pod{}, false, nil,
        storage.SimpleUpdate(func(obj runtime.Object) (runtime.Object, error) {
            pod, ok := obj.(*api.Pod)
            // Set pod.Spec.NodeName.
            pod.Spec.NodeName = machine
            if pod.Annotations == nil {
                pod.Annotations = make(map[string]string)
            }
            // Update the pod's annotations.
            for k, v := range annotations {
                pod.Annotations[k] = v
            }
            // Set the PodScheduled condition in pod.Status.Conditions to True.
            podutil.UpdatePodCondition(&pod.Status, &api.PodCondition{
                Type:   api.PodScheduled,
                Status: api.ConditionTrue,
            })
            return pod, nil
        }), dryRun, nil)
}

Once pod.Spec.NodeName has been updated and persisted to etcd, the kubelet on the chosen node observes the change through its watch on pods assigned to it and goes on to create the pod's containers on that machine. A rough sketch of that kind of node-filtered watch is shown below.
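The following is only a simplified illustration of watching the pods bound to one node (the real kubelet's pod config sources are more involved), using the v1.12-era client-go Watch signature (newer client-go adds a context argument); the node name is made up.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: kubeconfig at the default location.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    // Watch only the pods whose spec.nodeName is "demo-node", i.e. the field
    // the bind request just filled in.
    watcher, err := clientset.CoreV1().Pods("").Watch(metav1.ListOptions{
        FieldSelector: "spec.nodeName=demo-node",
    })
    if err != nil {
        panic(err)
    }
    for event := range watcher.ResultChan() {
        pod, ok := event.Object.(*v1.Pod)
        if !ok {
            continue
        }
        fmt.Printf("%s: pod %s/%s bound to this node\n", event.Type, pod.Namespace, pod.Name)
    }
}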

Summary

To summarize: scheduleOne runs in a loop, Pop()ing a pending pod from the scheduling queue (SchedulingQueue / podQueue), then uses sched.schedule() to run the scheduling algorithm and pick the best suggestedHost, and finally sched.bind() sends a bind POST request to the apiserver through the clientset. That is one complete pod scheduling cycle, illustrated in the figure below.

[Figure: overall pod scheduling flow — NextPod → schedule (predicates, priorities, selectHost) → bind; original image omitted]

开学在即&#xff0c;友望数据发现&#xff0c;不少学习机、学练机、智能机器人、词典笔等学习相关的电子教育产品开始畅销 ▲ 图片来源&#xff1a;友望数据-商品排行榜 新学年开始&#xff0c;家长们又要为孩子新的学业操碎心&#xff0c;而教育培训商家也在开学季迎来了他们…