Kubernetes (k8s) init error: "Error getting node" err="node \"k8s-master\" not found"

  • 1. Error details
  • 2. Troubleshooting
  • 3. Attempting a fix



1. Error details

"Error getting node" err="node \"k8s-master\" not found"

The kubelet log shows the error repeating:

[root@k8s-master ~]# journalctl -u kubelet

Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.267736    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.368592    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.469741    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.571557    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.671768    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.772126    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.872910    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.973850    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"

2. Troubleshooting

1. Operating system: CentOS 7.9

[root@k8s-master ~]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
[root@k8s-master ~]# 

2. Docker version check

[root@k8s-master ~]# docker -v
Docker version 24.0.5, build ced0996
[root@k8s-master ~]# 

3. kubelet version check

[root@k8s-master ~]# kubelet --version
Kubernetes v1.24.1
[root@k8s-master ~]# 

Research findings:

Starting with v1.24, Kubernetes officially dropped support for Docker as a container runtime: the dockershim was removed from the kubelet, so the kubelet can no longer talk to Docker directly. The official recommendation is now to use containerd or CRI-O instead.
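A quick way to confirm the runtime situation on the affected host is to query the CRI endpoint directly. This is only a sketch, assuming crictl is installed; the socket paths depend on which runtime, if any, is actually set up:

# Ask the CRI endpoint for runtime info. On a Docker-only host neither
# socket below exists, which is why the v1.24 kubelet never registers the node.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info

# To keep using Docker on v1.24+, cri-dockerd must be installed to provide a CRI shim:
ls -l /var/run/cri-dockerd.sock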

3. Attempting a fix

1. Downgrade to Kubernetes v1.23.6. All of the currently installed v1.24 packages (kubelet, kubectl, and kubeadm) need to be removed first.

sudo yum remove kubelet kubectl kubeadm
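Stopping the running kubelet first and confirming nothing is left behind afterwards is a small precaution (an assumption on my part, not shown in the original steps):

sudo systemctl stop kubelet               # stop the service before removing the packages
rpm -qa | grep -E 'kube(let|adm|ctl)'     # after the removal this should print nothing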


2. Download and install the v1.23.6 versions of kubelet, kubeadm, and kubectl.

sudo yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6 --disableexcludes=kubernetes
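The --disableexcludes=kubernetes flag suggests the usual kubernetes.repo layout, where these packages sit behind an exclude= line; leaving that exclude in place afterwards keeps a later yum update from silently upgrading past v1.23.6 again (a precaution, assuming the common repo file):

grep exclude /etc/yum.repos.d/kubernetes.repo   # expect something like: exclude=kubelet kubeadm kubectl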

3. To verify the installation succeeded, run the following commands:

kubelet --version
kubectl version --short
kubeadm version
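All three commands should now report v1.23.6. Before re-initializing, it is also worth making sure the kubelet service is enabled (a routine step, assuming systemd manages kubelet as usual):

sudo systemctl enable --now kubelet   # kubelet will crash-loop until kubeadm init supplies a config; that is expected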


4. Reset the Kubernetes cluster state on the local machine

Running kubeadm reset returns the machine's Kubernetes configuration, data, and state to a clean slate. Specifically, it:

deletes all data in the local etcd data directory (/var/lib/etcd);
removes the configuration files and static Pod manifests that kubeadm created under /etc/kubernetes.

Note that reset does not clean up iptables/IPVS rules or CNI plugin configuration by itself; it only prints a reminder to remove them manually (a cleanup sketch follows below).

kubeadm reset


This command is typically used when you need to clean up a Kubernetes environment, re-initialize a cluster, or uninstall Kubernetes completely. Once kubeadm reset has run, a brand-new cluster can be initialized.
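Since reset leaves network state behind, a manual cleanup is needed. The following sketch is based on the reminder kubeadm reset itself prints; adjust it to your setup:

sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X   # flush leftover rules
sudo ipvsadm --clear          # only needed if kube-proxy ran in IPVS mode
sudo rm -rf /etc/cni/net.d    # remove leftover CNI plugin configuration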

Otherwise, kubeadm init fails its preflight checks:

[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

5. Initialize again, pinning the Kubernetes version to v1.23.6, to bring up a brand-new cluster.

kubeadm init \
  --apiserver-advertise-address=192.168.234.20 \
  --control-plane-endpoint=k8s-master \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.6 \
  --service-cidr=10.11.0.0/16 \
  --pod-network-cidr=172.30.0.0/16 \
  --cri-socket unix:///var/run/cri-dockerd.sock


———————— Problem solved, initialization succeeded ————————

[root@k8s-master ~]# kubeadm init \
>   --apiserver-advertise-address=192.168.234.20 \
>   --control-plane-endpoint=k8s-master \
>   --image-repository registry.aliyuncs.com/google_containers \
>   --kubernetes-version v1.23.6 \
>   --service-cidr=10.11.0.0/16 \
>   --pod-network-cidr=172.30.0.0/16 \
>   --cri-socket unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.11.0.1 192.168.234.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.234.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.234.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.003345 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lad5yi.ib6gterchvmkw2xd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token lad5yi.ib6gterchvmkw2xd \
        --discovery-token-ca-cert-hash sha256:eb567a446cd6a0d79da694f4ab23b5c7bf2be4df86f4aecfadef07716fbabd2b \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token lad5yi.ib6gterchvmkw2xd \
        --discovery-token-ca-cert-hash sha256:eb567a446cd6a0d79da694f4ab23b5c7bf2be4df86f4aecfadef07716fbabd2b
[root@k8s-master ~]# 
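The output above still asks for a Pod network. As one option (an example, not part of the original post), flannel can be deployed; its default Pod CIDR is 10.244.0.0/16, so the manifest must be edited to match the --pod-network-cidr=172.30.0.0/16 used during init:

curl -LO https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
sed -i 's|10.244.0.0/16|172.30.0.0/16|' kube-flannel.yml   # match --pod-network-cidr
kubectl apply -f kube-flannel.yml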




