Interview Questions 12 -- Common Pod Creation Errors in K8s

  • 1. Image Pull Failures
    • 1.1 ErrImagePull (Image Pull Error)
    • 1.2 ImagePullBackOff (Image Pull Back-Off)
    • 1.3 Reproducing the Failure
    • 1.4 Fix
    • 1.5 Verifying Recovery
  • 2. Pending
    • 2.1 Image Pull Failure
    • 2.2 Insufficient Resources (CPU, Memory)
      • 2.2.1 Reproducing the Failure
      • 2.2.2 Fixing the Failure
    • 2.3 Insufficient Resources (Storage)
      • 2.3.1 Reproducing the Failure
      • 2.3.2 Fixing the Failure
    • 2.4 Label Selectors or Affinity
      • 2.4.1 Reproducing the Failure
      • 2.4.2 Fixing the Failure
  • 3. Supplement: Common Pod States and Causes
    • 3.1 ContainerCreating
    • 3.2 ErrImagePull (Image Pull Error)
    • 3.3 ImagePullBackOff (Image Pull Back-Off)
    • 3.4 CrashLoopBackOff (Crash Loop Back-Off)
    • 3.5 Running - Ready
    • 3.6 Terminating
    • 3.7 Pending - ImagePullBackOff

In Kubernetes, the Pod is the core resource object, and its stable operation is critical. However, Pods can run into a variety of error states that prevent them from running normally. The following are some common errors and how to resolve them.

1. Image Pull Failures

This error usually surfaces as ErrImagePull or ImagePullBackOff.

1.1 ErrImagePull (Image Pull Error)

Kubernetes cannot pull the container image from the registry.
Possible causes include a wrong image name, a non-existent image, authentication failure, network problems, and so on.

1.2 ImagePullBackOff (Image Pull Back-Off)

Similar to ErrImagePull, but after several failed attempts Kubernetes enters a back-off state and waits for a while before retrying.
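Before changing anything, it is worth confirming whether the node can reach the registry at all and whether credentials are required. A minimal sketch, assuming nerdctl is the container CLI on the node (as in this cluster); the registry address, secret name, and credentials are placeholders:

# On an affected node, try pulling the image manually to separate
# registry/network problems from Kubernetes configuration problems
nerdctl pull nginx:1.14.2
# If the registry requires authentication, create a pull secret and
# reference it from the Pod spec via imagePullSecrets
kubectl create secret docker-registry registry-cred \
  --docker-server=harbor.example.com \
  --docker-username=<user> \
  --docker-password=<password>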

1.3 Reproducing the Failure

root@k8s-master01:~# kubectl get pods
NAME                               READY   STATUS             RESTARTS   AGE
nginx-deployment-d556bf558-9swpd   0/1     ImagePullBackOff   0          46m
nginx-deployment-d556bf558-d2482   0/1     ErrImagePull       0          46m
nginx-deployment-d556bf558-r4v4z   0/1     ErrImagePull       0          46m
root@k8s-master01:~# kubectl describe pods nginx-deployment-d556bf558-r4v4z |tail -10
  Normal   Scheduled  47m                   default-scheduler  Successfully assigned default/nginx-deployment-d556bf558-r4v4z to k8s-node03
  Warning  Failed     46m                   kubelet            Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 162.125.32.13:443: connect: connection refused
  Warning  Failed     46m                   kubelet            Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 69.171.229.11:443: connect: connection refused
  Warning  Failed     45m                   kubelet            Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 157.240.11.40:443: connect: connection refused
  Normal   Pulling    44m (x4 over 47m)     kubelet            Pulling image "nginx:1.14.2"
  Warning  Failed     44m (x4 over 46m)     kubelet            Error: ErrImagePull
  Warning  Failed     44m                   kubelet            Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 108.160.165.48:443: connect: connection refused
  Warning  Failed     44m (x6 over 46m)     kubelet            Error: ImagePullBackOff
  Warning  Failed     12m (x4 over 28m)     kubelet            (combined from similar events): Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 108.160.172.1:443: connect: connection refused
  Normal   BackOff    2m8s (x178 over 46m)  kubelet            Back-off pulling image "nginx:1.14.2"

1.4 Fix

  1. Pull the image on a machine that can reach the public registry
  2. Tag it and push it to the internal Harbor registry (or load it onto every node)
  3. Change the Deployment's image field to the internal Harbor image
# Pull the nginx image
root@k8s-master01:~/yaml# nerdctl pull nginx:1.14.2
docker.io/library/nginx:1.14.2:                                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:8ca774778e858d3f97d9ec1bec1de879ac5e10096856dc22ed325a3ad944f78a:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:27833a3ba0a545deda33bb01eaf95a14d05d43bf30bce9267d92d17f069fe897:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:0f23e58bd0b7c74311703e20c21c690a6847e62240ed456f8821f4c067d3659b:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 826.6s                                                                   total:  42.6 M (52.8 KiB/s)                                      
root@k8s-master01:~/yaml# nerdctl tag nginx:1.14.2 harbor.panasonic.cn/nginx/nginx:1.14.2
# Push the image to the Harbor registry
root@k8s-master01:~/yaml# nerdctl push harbor.panasonic.cn/nginx/nginx:1.14.2
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.list.v2+json, sha256:3d206f335adbabfc33b20c0190ef88cb47d627d21546d48e72e051e5fc27451a) 
index-sha256:3d206f335adbabfc33b20c0190ef88cb47d627d21546d48e72e051e5fc27451a:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369:   done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 0.6 s                                                                    total:  7.1 Ki (11.8 KiB/s)                                      
root@k8s-master01:~/yaml# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        #image: nginx:1.14.2  # original image, commented out
        # use the internal Harbor image instead
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
deployment.apps "nginx-deployment" deleted
root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml
deployment.apps/nginx-deployment created

1.5 Verifying Recovery

root@k8s-master01:~/yaml# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-8677887b4f-2h2rd   1/1     Running   0          36s
nginx-deployment-8677887b4f-j7kwj   1/1     Running   0          36s
nginx-deployment-8677887b4f-vfmfq   1/1     Running   0          36s
root@k8s-master01:~/yaml# kubectl describe pods nginx-deployment-8677887b4f-vfmfq |tail
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  49s   default-scheduler  Successfully assigned default/nginx-deployment-8677887b4f-vfmfq to k8s-node01
  Normal  Pulling    49s   kubelet            Pulling image "harbor.intra.com/nginx/nginx:1.14.2"
  Normal  Pulled     46s   kubelet            Successfully pulled image "harbor.intra.com/nginx/nginx:1.14.2" in 3.069s (3.069s including waiting). Image size: 44708492 bytes.
  Normal  Created    46s   kubelet            Created container nginx
  Normal  Started    46s   kubelet            Started container nginx
root@k8s-master01:~/yaml# 

2. Pending

Pending is one of the most common problem states in K8s. Its main causes are (a quick triage sketch follows the list):

  1. Image pull failure
  2. Insufficient resources
  3. Scheduling constraints
  4. Missing dependencies
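Whatever the underlying cause, the first diagnostic step is the same: read the Pod's events. A minimal triage sketch (the pod name is a placeholder):

# Where is the Pod and what state is it in?
kubectl get pods -o wide
# The events at the end of describe usually name the blocking dependency
kubectl describe pod <POD_NAME> | tail -20
# Recent cluster events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp | tail -20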

2.1 Image Pull Failure

This was covered in detail in section 1 and is usually accompanied by ErrImagePull or ImagePullBackOff errors, so it is not repeated here.

2.2 Insufficient Resources (CPU, Memory)

The cause here is that the Pod declares resource requests (possibly combined with affinity rules or a pinned node), and no node has enough free CPU or memory to satisfy them. The Pod is created but cannot be scheduled, so it stays in Pending.

2.2.1 Reproducing the Failure

As kubectl top shows below, each worker node has well under 6 GiB of allocatable memory, so when we set a 6Gi memory request, the scheduler can never find a node that fits the Pod after it is submitted, and the Pod stays in Pending.

root@k8s-master01:~/yaml# kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   78m          0%     1203Mi          15%       
k8s-node01     26m          0%     1091Mi          28%       
k8s-node02     25m          0%     739Mi           19%       
k8s-node03     24m          0%     701Mi           18%  
root@k8s-master01:~/yaml# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        #image: nginx:1.14.2
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "6Gi"
            cpu: "1"
          limits:
            memory: "6Gi"
            cpu: "1"
root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml 
deployment.apps/nginx-deployment created
root@k8s-master01:~/yaml# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-554d6d7fd9-62dn7   0/1     Pending   0          6s
nginx-deployment-554d6d7fd9-bcwvt   0/1     Pending   0          6s
nginx-deployment-554d6d7fd9-n9dnp   0/1     Pending   0          6s
root@k8s-master01:~/yaml# kubectl describe pod nginx-deployment-554d6d7fd9-n9dnp | tail -4
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  5m1s  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 Insufficient memory. preemption: 0/4 nodes are available: 1 Preemption is not helpful for scheduling, 3 No preemption victims found for incoming pod.

The Insufficient memory warning in the events confirms that no node has enough memory for the request.

2.2.2 Fixing the Failure

Based on testing of the application, adjust requests.memory to a value the nodes can actually provide, then re-apply the Deployment so the new settings take effect.

root@k8s-master01:~/yaml# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        #image: nginx:1.14.2
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "200Mi"
            cpu: "1"
          limits:
            memory: "400Mi"
            cpu: "1"
root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml
deployment.apps/nginx-deployment configured
root@k8s-master01:~/yaml# kubectl get po
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5b696b7fc8-2gkgc   1/1     Running   0          66s
nginx-deployment-5b696b7fc8-8kt6p   1/1     Running   0          64s
nginx-deployment-5b696b7fc8-dm8jt   1/1     Running   0          67s

All Pods are now in the Running state.
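To pick sensible request values it helps to see what each node can actually offer. A minimal sketch (the node name is taken from this cluster; the column expressions are standard kubectl field paths):

# Allocatable capacity of every node at a glance
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
# How much of a node's capacity is already requested by scheduled Pods
kubectl describe node k8s-node01 | grep -A 10 "Allocated resources"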

2.3 Insufficient Resources (Storage)

This situation is also very common: a ConfigMap, Secret, or PVC is declared in the Pod spec but was not created before the Pod started. The Pod cannot resolve the reference, so it stays in Pending.
The events typically contain an error like persistentvolumeclaim "xxxx--xxx" not found.

2.3.1 Reproducing the Failure

root@k8s-master01:~/yaml# cat nginx-nfs.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-example
  namespace: default
spec:
  containers:
  - image: harbor.panasonic.cn/nginx/nginx:1.14.2
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
    - mountPath: /var/www
      name: pvc-nginx
      readOnly: false
  volumes:
  - name: pvc-nginx
    persistentVolumeClaim:
      claimName: nfs-pvc-default
root@k8s-master01:~/yaml# kubectl apply -f  nginx-nfs.yaml 
pod/nginx-nfs-example created
root@k8s-master01:~/yaml# kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
nginx-nfs-example   0/1     Pending   0          5s
root@k8s-master01:~/yaml# kubectl describe pod nginx-nfs-example |tail -5
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16s   default-scheduler  0/4 nodes are available: persistentvolumeclaim "nfs-pvc-default" not found. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.

2.3.2 Fixing the Failure

Create the PV and PVC resources so the Pod has something to mount.

root@k8s-master01:~/yaml# cat nginx-nfs.yaml 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /nfs
    server: 192.168.31.104
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs
  namespace: default
spec:
  containers:
  - image: harbor.panasonic.cn/nginx/nginx:1.14.2
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
    - mountPath: /var/www
      name: nfs-pvc
      readOnly: false
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: nfs-pvc

After applying the configuration, the problem is resolved.

root@k8s-master01:~/yaml# kubectl apply -f nginx-nfs.yaml
persistentvolume/nfs-pv created
persistentvolumeclaim/nfs-pvc created
pod/nginx-nfs created
root@k8s-master01:~/yaml# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                     STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
nfs-pv                                     200Mi      RWX            Retain           Bound         default/nfs-pvc                          <unset>                          3s
pvc-0748bb20-1e4a-4741-845c-0bae59160ef6   10Gi       RWX            Delete           Bound         default/pvc-nfs-dynamic   nfs-csi        <unset>                          32d
pvc-7a0bba72-8d63-4393-861d-c4a409d48933   2Gi        RWO            Delete           Terminating   test/nfs-pvc              nfs-storage    <unset>                          32d
root@k8s-master01:~/yaml# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
nfs-pvc           Bound    nfs-pv                                     200Mi      RWX                           <unset>                 6s
pvc-nfs-dynamic   Bound    pvc-0748bb20-1e4a-4741-845c-0bae59160ef6   10Gi       RWX            nfs-csi        <unset>                 32d
root@k8s-master01:~/yaml# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
nginx-nfs   1/1     Running   0          9s

ConfigMaps, Secrets, and similar resources behave the same way, as illustrated below.
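As an illustration, a Pod that mounts a ConfigMap which does not exist yet runs into a similar missing-dependency problem; the Pod and ConfigMap names below are hypothetical. In this case the Pod does get scheduled, but it sits in ContainerCreating with a "configmap not found" mount event until the ConfigMap is created:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-cm-example
spec:
  containers:
  - name: nginx
    image: harbor.panasonic.cn/nginx/nginx:1.14.2
    volumeMounts:
    - mountPath: /etc/app
      name: app-config        # volume defined below
  volumes:
  - name: app-config
    configMap:
      name: app-config        # hypothetical ConfigMap that must exist first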

2.4 Label Selectors or Affinity

This kind of failure is usually caused by a label selector or a required (hard) affinity rule that either matches no node or matches only nodes without enough resources.

2.4.1 Reproducing the Failure

The worker nodes are meant to carry the label worker=true, but in the Deployment we mistakenly set nodeSelector to a value that matches no node, so the Pods end up in Pending.

root@k8s-master01:~/yaml# kubectl get nodes --label-columns worker=true
NAME           STATUS   ROLES           AGE   VERSION   WORKER=TRUE
k8s-master01   Ready    control-plane   94d   v1.31.0   
k8s-node01     Ready    <none>          94d   v1.31.0   
k8s-node02     Ready    <none>          94d   v1.31.0   
k8s-node03     Ready    <none>          94d   v1.31.0   
root@k8s-master01:~/yaml# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        worker: node8
      containers:
      - name: nginx
        #image: nginx:1.14.2
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "6Gi"
            cpu: "1"
          limits:
            memory: "6Gi"
            cpu: "1"
root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml
deployment.apps/nginx-deployment created
root@k8s-master01:~/yaml# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-86895b4d79-dm6z4   0/1     Pending   0          84s
nginx-deployment-86895b4d79-tptlw   0/1     Pending   0          84s
nginx-deployment-86895b4d79-v6bfh   0/1     Pending   0          84s
root@k8s-master01:~/yaml# kubectl describe pods nginx-deployment-86895b4d79-v6bfh | tail -5
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  104s  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.

2.4.2 Fixing the Failure

There are generally two ways to fix this:

  1. Correct the nodeSelector in the Deployment to the right value.
  2. If you would rather not disturb a running application in production, label the appropriate nodes with the expected label instead (see the sketch after this list).
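For option 2, a minimal sketch of labeling a node in place (the node name is from this cluster; the key/value must match what the Deployment's nodeSelector expects):

# Add the label the nodeSelector expects
kubectl label node k8s-node01 worker=true
# Verify the label shows up
kubectl get nodes -L worker

In this example, though, we take option 1: fix the nodeSelector in the YAML and redeploy.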
root@k8s-master01:~/yaml# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        worker: 'true'
      containers:
      - name: nginx
        #image: nginx:1.14.2
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "200Mi"
            cpu: "0.1"
          limits:
            memory: "500Mi"
            cpu: "1"
root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml 
deployment.apps/nginx-deployment configured
root@k8s-master01:~/yaml# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-55cdb49d65-2jkxl   1/1     Running   0          2s
nginx-deployment-55cdb49d65-bdltk   1/1     Running   0          3s
nginx-deployment-55cdb49d65-cb44w   1/1     Running   0          5s

These are the most common cases, and they almost always come down to an unmet dependency. Running kubectl describe pods <POD_NAME> will usually reveal the problem, after which you can troubleshoot based on the reported error.

3. Supplement: Common Pod States and Causes

Common specific states and events:

3.1 ContainerCreating

  • Kubernetes is creating the Pod's containers but has not finished yet.
  • Possible causes include waiting for volumes to be mounted, network setup, and so on.

3.2 ErrImagePull (Image Pull Error)

  • Kubernetes cannot pull the container image from the registry.
  • Possible causes include a wrong image name, a non-existent image, authentication failure, network problems, and so on.

3.3 ImagePullBackOff (Image Pull Back-Off)

Similar to ErrImagePull, but after several failed attempts Kubernetes enters a back-off state and waits for a while before retrying.

3.4 CrashLoopBackOff (Crash Loop Back-Off)

The container crashes shortly after starting; Kubernetes keeps restarting it and, after repeated failures, enters a back-off state.
Possible causes include application errors, misconfiguration, insufficient resources, and so on.
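For CrashLoopBackOff, the most useful first step is usually reading the logs of the previous (crashed) container instance. A minimal sketch (the pod name is a placeholder):

# Logs of the last crashed container instance
kubectl logs <POD_NAME> --previous
# Exit code and reason of the last termination
kubectl describe pod <POD_NAME> | grep -A 5 "Last State"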

3.5 Running - Ready

All containers in the Pod are running and have passed their health checks, so the Pod is ready to receive traffic.
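Readiness is decided by the probes defined on each container. A minimal readinessProbe sketch (the Pod name, path, port, and timings are assumptions; the image is reused from the earlier examples):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ready-example      # hypothetical name
spec:
  containers:
  - name: nginx
    image: harbor.intra.com/nginx/nginx:1.14.2
    ports:
    - containerPort: 80
    readinessProbe:              # the Pod only becomes Ready once this probe succeeds
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10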

3.6 Terminating

Kubernetes is terminating the Pod, for example because it was deleted or the node is being drained for maintenance.

3.7 Pending - ImagePullBackOff

The Pod is still in the Pending phase and has entered a back-off state because image pulls keep failing.
