kubernetes | cloud native | A complete record of deploying a single-master Kubernetes 1.25.5 cluster (with the containerd runtime)

I. Deployment Goals

The gap between Kubernetes 1.19 and 1.23 is already considerable, and by 1.25 containerd is the natural runtime choice for performance. This article records, in as much detail as possible, deploying a Kubernetes 1.25.5 cluster on CentOS 7 virtual machines, using the containerd runtime and with IPv6 support.

Environment:

A CentOS 7 virtual machine is installed under VMware 17 and cloned twice, giving three VMs in total, with these IPs:

192.168.123.15

192.168.123.16

192.168.123.17

192.168.123.15 is planned as the cluster master; 192.168.123.16 and 192.168.123.17 are the worker nodes.

All three VMs share the same specification: 4 CPU cores, 4 GB of memory, and 100 GB of disk.

All installation materials used in this article are in the net-disk share below; grab them if you want to reproduce the setup:

Shared file: kubernetes-1.25.5 binary deployment
Link: https://pan.baidu.com/s/10FpHD4hhOltlAXXO9O-rug?pwd=g44r Extraction code: g44r

II. Environment Preparation Before Deployment

1. Time server

The time server is Alibaba Cloud's ntp.aliyun.com, and time synchronization uses ntpd. Assuming the hosts always have network access, it is enough to put "server ntp.aliyun.com" in the ntpd configuration and restart the ntpd service; no need to belabor it here.

If the output looks like the following, time synchronization is working:

[root@centos18 ~]# ntpstat
synchronised to NTP server (203.107.6.88) at stratum 3
   time correct to within 61 ms
   polling server every 64 s
[root@centos18 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*203.107.6.88    100.107.25.114   2 u   40   64  377   72.882   -7.423   2.373
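A minimal sketch of the ntpd setup described above (assuming a stock CentOS 7 /etc/ntp.conf; the sed line simply comments out the default pool servers):

sed -i 's/^server /#server /' /etc/ntp.conf      # comment out the default pool servers
echo 'server ntp.aliyun.com iburst' >> /etc/ntp.conf
systemctl restart ntpd && systemctl enable ntpd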

2. Disabling swap

The main reason swap has to go is that a Kubernetes cluster needs to run stably, and swapping to virtual memory undermines that stability.

In this example the VMs were manually partitioned at install time with no swap, so the problem does not arise; a production environment will usually have swap, though, and it does need to be disabled. The details are easy to look up.

It basically comes down to the swapoff command, which is very simple; the one thing not to forget is making the change persistent so swap stays off after reboot.

sed -ri.bak 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

# Parameter notes:
#
# -ri: edit the file in place; -r enables extended regular expressions, -i modifies the file directly.
# .bak: back the file up before modifying it.
# 's/.*swap.*/#&/': a sed expression that finds the lines in /etc/fstab containing "swap" and comments them out by prefixing them with #.
# /etc/fstab: the file system table.
# swapoff -a: turns off all active swap devices.
# sysctl -w vm.swappiness=0: sets vm.swappiness to 0 so the system strongly prefers physical memory over swap.
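A quick way to confirm swap is really off (a sketch; the Swap line should read all zeros):

free -h | grep -i swap     # Swap:  0B  0B  0B
swapon --show              # prints nothing when no swap device is active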

3. The ansible ops tool

ansible is used mainly to deploy the etcd cluster quickly. In this example a local yum repository is already configured, so a plain yum install is enough; the exact ansible version hardly matters, any install will do.

The version installed here is 2.9.27.

[root@centos18 ~]# ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug  7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
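A minimal smoke test before running any playbooks (a sketch; the group name k8s and the use of the default inventory /etc/ansible/hosts are illustrative assumptions):

cat >> /etc/ansible/hosts <<EOF
[k8s]
192.168.123.15
192.168.123.16
192.168.123.17
EOF
ansible k8s -m ping    # every host should answer "pong" once passwordless SSH (section 7) is in place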

4. Disabling the firewall and SELinux

This barely needs explaining; it is trivial, just know it has to be done.
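For completeness, a minimal sketch of what that means on CentOS 7:

systemctl disable --now firewalld
setenforce 0                                                          # immediate, lasts until reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persistent across reboots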

5. Deploying the etcd cluster

The etcd deployment procedure is covered in a separate post: centos7操作系统 ---ansible剧本离线快速部署etcd集群 (CSDN blog).

The version used here is 3.4, installed via ansible. After installation it looks like this:

[root@centos18 ansible-deployment-etcd]# export ETCDCTL_API=3
[root@centos18 ansible-deployment-etcd]# /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.123.16:2379,https://192.168.123.15:2379,https://192.168.123.17:2379" endpoint health --write-out=table
+-----------------------------+--------+------------+-------+
|          ENDPOINT           | HEALTH |    TOOK    | ERROR |
+-----------------------------+--------+------------+-------+
| https://192.168.123.18:2379 |   true | 9.243662ms |       |
| https://192.168.123.19:2379 |   true | 9.177001ms |       |
| https://192.168.123.20:2379 |   true | 9.611179ms |       |
+-----------------------------+--------+------------+-------+
[root@centos18 ansible-deployment-etcd]# /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.123.18:2379,https://192.168.123.19:2379,https://192.168.123.20:2379" endpoint status --write-out=table
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.123.18:2379 | 11421e4257ee06cc |   3.4.9 |   20 kB |     false |      false |         3 |          9 |                  9 |        |
| https://192.168.123.19:2379 | bc92071008542226 |   3.4.9 |   25 kB |     false |      false |         3 |          9 |                  9 |        |
| https://192.168.123.20:2379 | f6a940ea58745f7a |   3.4.9 |   20 kB |      true |      false |         3 |          9 |                  9 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

6. Host name mapping

cat >/etc/hosts<<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.123.18 k8s-master
192.168.123.19 k8s-node1
192.168.123.20 k8s-node2
fc00::18 k8s-master
fc00::19 k8s-node1
fc00::20 k8s-node2
EOF

7. Passwordless SSH

Do this on all three servers, each trusting the others. The steps are standard and not repeated here; the whole point is convenience when pushing configuration and certificates around, nothing more.
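A minimal sketch of full-mesh passwordless SSH (run on each node; the host names assume the /etc/hosts entries above):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in k8s-master k8s-node1 k8s-node2; do
    ssh-copy-id root@$h
done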

8. Upgrading the kernel

Here the kernel is upgraded straight to 5.4.266. The upgrade is mainly for advanced networking features, and secondarily for overall system stability.

yum install kernel-lt-5.4.266-1.el7.elrepo.x86_64.rpm
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
# The output of this command looks like:
[root@centos2 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.16.9-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.16.9-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1062.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1062.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-8526a2b32ead4d979c011ddc805a5d14
Found initrd image: /boot/initramfs-0-rescue-8526a2b32ead4d979c011ddc805a5d14.img
done
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
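After a reboot, confirm the new kernel is the one actually running (a sketch; the exact version string depends on which kernel-lt rpm you installed):

reboot
# once the machine is back:
uname -r     # e.g. 5.4.266-1.el7.elrepo.x86_64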

9. Network-related system configuration

# Ubuntu: skip this. CentOS: run it. (CentOS 9 does not support method 1.)
# Method 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Method 2
cat > /etc/NetworkManager/conf.d/calico.conf << EOF 
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

# Parameter notes
#
# unmanaged-devices names devices that NetworkManager must not manage. It has two parts:
# 
# interface-name:cali*
# Interfaces whose names start with "cali" (e.g. cali0, cali1) are excluded from NetworkManager's management.
# 
# interface-name:tunl*
# Interfaces whose names start with "tunl" (e.g. tunl0, tunl1) are excluded from NetworkManager's management.
# 
# Excluding these interfaces lets other tools or processes (here, the CNI plugin) manage and configure them independently.

This is mainly a stability measure: the legacy network service and NetworkManager reportedly conflict slightly, and NetworkManager should keep its hands off the interfaces the CNI creates.

10. Installing ipvsadm

Run this on all three servers.


# For Ubuntu
# apt install ipvsadm ipset sysstat conntrack -y
# For CentOS
yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 237568  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          217088  3 nf_nat,nft_ct,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs

# Module notes
#
# ip_vs
# IPVS is a Linux kernel module for load balancing and high availability: it distributes incoming requests from a front-end proxy to back-end real servers, providing high-performance, scalable network services.
#
# ip_vs_rr
# An IPVS scheduling algorithm: round-robin, handing requests to back-end servers one after another in order.
#
# ip_vs_wrr
# An IPVS scheduling algorithm: weighted round-robin, distributing requests in proportion to configured weights.
#
# ip_vs_sh
# An IPVS scheduling algorithm: source hashing, distributing requests based on a hash of source and destination IP addresses.
#
# nf_conntrack
# A kernel module that tracks and manages network connections (TCP, UDP, ICMP, etc.); the basis of stateful firewalling.
#
# ip_tables
# A kernel module providing IP packet filtering and network address translation (NAT).
#
# ip_set
# A kernel module extending iptables with efficient IP address set operations.
#
# xt_set
# A kernel module extending iptables with efficient packet matching against IP sets.
#
# ipt_set
# A userspace tool for configuring and managing the xt_set kernel module.
#
# ipt_rpfilter
# A kernel module implementing reverse-path filtering, used against IP spoofing and DDoS attacks.
#
# ipt_REJECT
# An iptables target that rejects IP packets and sends the sender a response indicating the rejection.
#
# ipip
# A kernel module implementing IP-over-IP tunnels, which can carry IP packets between different networks.
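With the modules loaded, a quick sanity check that IPVS is usable (a sketch; the table stays empty until kube-proxy starts creating virtual servers):

ipvsadm -Ln    # lists the IPVS virtual server table; it should run without error once ip_vs is loaded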

11. Kernel runtime parameters

Run this on all three servers.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system

# These are Linux parameters for configuring and tuning networking, the file system, and virtual memory. Each one in detail:
# 
# 1. net.ipv4.ip_forward = 1
#    - Enables IPv4 forwarding, letting the host route packets between interfaces.
# 
# 2. net.bridge.bridge-nf-call-iptables = 1
#    - Passes bridged traffic through iptables for processing.
#   
# 3. fs.may_detach_mounts = 1
#    - Allows mounts to be detached even while still in use by other processes.
#   
# 4. vm.overcommit_memory=1
#    - Always allow memory overcommit: allocations succeed even when memory is already fully committed.
# 
# 5. vm.panic_on_oom=0
#    - Do not panic (crash/reboot) on out-of-memory; let the OOM killer handle it.
# 
# 6. fs.inotify.max_user_watches=89100
#    - Upper limit on the number of files a user's inotify instances may watch.
# 
# 7. fs.file-max=52706963
#    - System-wide limit on simultaneously open files.
# 
# 8. fs.nr_open=52706963
#    - Per-process limit on simultaneously open file descriptors.
# 
# 9. net.netfilter.nf_conntrack_max=2310720
#    - Maximum number of connection-tracking table entries.
# 
# 10. net.ipv4.tcp_keepalive_time = 600
#     - Idle time (seconds) on a TCP socket before keepalive probes are sent.
# 
# 11. net.ipv4.tcp_keepalive_probes = 3
#     - Number of unanswered keepalive probes before the connection is dropped.
# 
# 12. net.ipv4.tcp_keepalive_intvl = 15
#     - Interval (seconds) between keepalive probes.
# 
# 13. net.ipv4.tcp_max_tw_buckets = 36000
#     - Maximum number of sockets held in TIME_WAIT.
# 
# 14. net.ipv4.tcp_tw_reuse = 1
#     - Allows TIME_WAIT sockets to be reused for new connections.
# 
# 15. net.ipv4.tcp_max_orphans = 327680
#     - Maximum number of orphaned TCP sockets the system will keep around.
# 
# 16. net.ipv4.tcp_orphan_retries = 3
#     - Retry count for orphaned TCP sockets.
# 
# 17. net.ipv4.tcp_syncookies = 1
#     - Enables TCP SYN cookies, protecting against SYN flood attacks.
# 
# 18. net.ipv4.tcp_max_syn_backlog = 16384
#     - Maximum length of the half-open (SYN) connection queue.
# 
# 19. net.ipv4.ip_conntrack_max = 65536
#     - Legacy spelling of the connection-tracking table size limit.
# 
# 20. net.ipv4.tcp_timestamps = 0
#     - Disables TCP timestamps, for slightly better security.
# 
# 21. net.core.somaxconn = 16384
#     - Maximum length of the accept queue for listening sockets.
# 
# 22-24. net.ipv6.conf.{all,default,lo}.disable_ipv6 = 0
#     - Keeps IPv6 enabled on all interfaces, on new interfaces by default, and on loopback.
# 
# 25. net.ipv6.conf.all.forwarding = 1
#     - Enables IPv6 packet forwarding.

12. Open-file limits

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

# Parameter notes
#
# soft nofile 655360
# soft is the soft limit; nofile is the maximum number of files a process may open (default 1024). The soft limit here is 655360.
#
# hard nofile 131072
# hard is the hard limit, the ceiling the system enforces. nofile defaults to 4096; the hard limit here is 131072.
#
# soft nproc 655350
# nproc is the maximum number of processes a user may create (default 30720). The soft limit here is 655350.
#
# hard nproc 655350
# The hard limit on processes per user (default 4096); set here to 655350.
#
# soft memlock unlimited
# memlock is the maximum memory a process may lock into RAM (default 64 KB). The soft limit here is unlimited.
#
# hard memlock unlimited
# The hard limit on locked memory (default 64 KB); set here to unlimited.

13. Installing containerd as the runtime, plus CNI

Install the CNI plugins first.

# Create the directories the CNI plugins need
mkdir -p /etc/cni/net.d /opt/cni/bin 
# Unpack the CNI binary package
tar xvf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/
# Distribute the CNI binary package to the other nodes
[root@centos18 ~]# scp cni-plugins-linux-amd64-v1.6.2.tgz k8s-node1:~/
cni-plugins-linux-amd64-v1.6.2.tgz                                       100%   50MB 154.2MB/s   00:00    
[root@centos18 ~]# scp cni-plugins-linux-amd64-v1.6.2.tgz k8s-node2:~/
cni-plugins-linux-amd64-v1.6.2.tgz                                       100%   50MB 167.7MB/s   00:00
# Check that the CNI binaries are in place
[root@centos20 ~]# ls -alh /opt/cni/bin/
bandwidth  dhcp   firewall     host-local  LICENSE   macvlan  ptp        sbr     tap     vlan
bridge     dummy  host-device  ipvlan      loopback  portmap  README.md  static  tuning  vrf

Installing containerd itself

One note: the file used here is containerd-2.0.2-linux-amd64.tar.gz, which contains both containerd and runc, so unpacking it and dropping the binaries into /usr/bin is all that is needed. A sketch of that step follows.
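A minimal sketch of the unpacking step (assuming the archive extracts to a bin/ directory, as upstream containerd release tarballs do; the net-disk bundle reportedly also carries runc, which can be copied into /usr/bin the same way):

tar xvf containerd-2.0.2-linux-amd64.tar.gz
cp bin/* /usr/bin/         # containerd, ctr, containerd-shim-runc-v2, ...
chmod a+x /usr/bin/containerd /usr/bin/ctr
containerd -v              # verify, as shown below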

[root@centos20 ~]# whereis containerd
containerd: /usr/bin/containerd
[root@k8s-node1 ~]# containerd -v
containerd github.com/containerd/containerd/v2 v2.0.2 c507a0257ea6462fbd6f5ba4f5c74facb04021f4

# Create the containerd service unit file

cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
# Notes:
#
# This is a systemd unit file for starting the containerd container runtime. Section by section:
# 
# [Unit]
# Description=containerd container runtime
# Describes the unit as the containerd container runtime.
# 
# Documentation=https://containerd.io
# URL of the runtime's documentation.
# 
# After=network.target local-fs.target
# Dependencies that must be reached before this unit starts: networking and local file systems, so the runtime only starts once they are available.
# 
# [Service]
# ExecStartPre=-/sbin/modprobe overlay
# A command run before starting containerd: try to load the kernel overlay module; the leading "-" means ignore failure and continue.
# 
# ExecStart=/usr/bin/containerd
# The actual command that starts the containerd runtime.
# 
# Type=notify
# The service tells systemd when it is ready via a readiness notification.
# 
# Delegate=yes
# Lets the service manage its own cgroup subtree.
# 
# KillMode=process
# When stopping, kill only the main process rather than every process in the cgroup.
# 
# Restart=always
# Restart policy: whenever containerd exits, restart it automatically.
# 
# RestartSec=5
# Seconds to wait before restarting after an exit.
# 
# LimitNPROC=infinity
# Maximum number of processes the service may create: unlimited.
# 
# LimitCORE=infinity
# Maximum core dump size: unlimited.
# 
# LimitNOFILE=infinity
# Maximum number of open files: unlimited.
# 
# TasksMax=infinity
# Maximum number of tasks the service may create: unlimited.
# 
# OOMScoreAdjust=-999
# OOM (Out-Of-Memory) score adjustment; a strongly negative value makes the OOM killer very unlikely to pick containerd.
# 
# [Install]
# WantedBy=multi-user.target
# Installs the service under the multi-user target so it starts at boot.

Configure the kernel modules containerd needs

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# Notes:
#
# containerd is a container runtime that manages and runs containers; these two kernel modules support it.
# 
# 1. overlay: overlayfs is the storage driver containerd uses by default. It is a lightweight, stackable, layered file system: a container's file system view is built by overlaying layers on an existing file system, so containers share base-image files and keep only their own writable layer. This uses disk space efficiently and keeps containers light.
# 
# 2. br_netfilter: a kernel module that lets bridged container traffic be filtered and NATed. When traffic between containers and the host needs DNAT or SNAT, br_netfilter performs the address translation, and it allows iptables rules to filter traffic between containers or between containers and external networks.
# 
# The snapshotter containerd actually uses is selected in its config.toml (snapshotter = "overlayfs" in this deployment); see the containerd documentation for details.

Load the kernel modules containerd needs

systemctl restart systemd-modules-load.service

# Notes:
# - systemctl: the command-line tool for managing systemd services.
# - restart: restarts a service.
# - systemd-modules-load.service: the system service responsible for loading kernel modules.
# 
# Putting it together: this restarts systemd-modules-load.service, which re-reads the files under /etc/modules-load.d/ and loads the listed kernel modules.

Configure the kernel parameters containerd needs

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply the settings
sysctl --system

# Notes:
# 
# These are Linux parameters for networking and bridging.
# 
# - net.bridge.bridge-nf-call-iptables: controls whether bridged traffic is passed to iptables. Set to 1, bridged packets go through iptables; set to 0, they bypass it. It is 1 here so iptables rules apply to bridged container traffic.
# 
# - net.ipv4.ip_forward: controls IP forwarding, which lets the system pass packets received on one interface out another. 1 enables it, 0 disables it; Kubernetes networking needs it enabled.
# 
# - net.bridge.bridge-nf-call-ip6tables: the IPv6 counterpart of bridge-nf-call-iptables; set to 1, bridged IPv6 packets go through ip6tables.
# 
# These values live in files under /etc/sysctl.d/ (or in /etc/sysctl.conf); run sysctl --system to reload them and make them take effect.

Create the containerd configuration file

1. Generate the default configuration file

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
# mkdir -p /etc/containerd: create /etc/containerd (no error if it already exists).
# containerd config default: print containerd's default configuration.
# tee /etc/containerd/config.toml: write that default configuration to /etc/containerd/config.toml.

2. Set the SystemdCgroup parameter

sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
# sed -i: edit the file in place.
# Changing SystemdCgroup = false to SystemdCgroup = true makes containerd use the systemd cgroup driver.
# grep SystemdCgroup: verify the change took effect.

3. Change the sandbox (pause) image address

sed -i 's@registry.k8s.io/pause:3.8@registry.aliyuncs.com/google_containers/pause:3.8@' /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox
# Replacing registry.k8s.io with registry.aliyuncs.com/google_containers changes the sandbox_image address, usually to speed up image pulls or to use a custom image.
# grep sandbox: verify the change took effect.

4. Configure a registry mirror

mkdir /etc/containerd/certs.d/docker.io -pv
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://f1361db2.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
EOF
# mkdir -pv: create /etc/containerd/certs.d/docker.io (no error if it already exists).
# cat >: write /etc/containerd/certs.d/docker.io/hosts.toml, the mirror configuration.
# server: the default upstream registry address.
# host: the mirror address.
# capabilities: what the mirror may be used for (pull and resolve here).

Notes and analysis

1. What SystemdCgroup does
SystemdCgroup controls whether containerd uses the systemd cgroup driver.
Set to true, containerd uses the systemd cgroup driver, appropriate on distributions that use systemd as the init system (CentOS, Ubuntu, etc.).
Set to false, containerd uses its default cgroup driver.

2. What the sandbox image does
sandbox_image is the image Kubernetes uses to run each Pod's sandbox (normally the pause image).
Changing its address speeds up pulls or substitutes a custom pause image.

3. What the registry mirror does
A mirror accelerates pulls from public registries such as Docker Hub.
The hosts.toml file redirects pull requests to the mirror; make sure the mirror address is valid and supports both pull and resolve. To use a different mirror (Alibaba Cloud, NetEase, etc.), change the host address.

After changing the configuration, reload and restart containerd for it to take effect:

systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd
systemctl status containerd

If you would rather not edit the defaults, the following configuration file can be used directly:

cat >/etc/containerd/config.toml<<EOF
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    drain_exec_sync_io_timeout = "0s"
    enable_cdi = false
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_deprecation_warnings = []
    ignore_image_defined_volumes = false
    image_pull_progress_timeout = "5m0s"
    image_pull_with_sync_fs = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.aliyuncs.com/chenby/pause:3.8"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1
      setup_serially = false

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_blockio_not_enabled_errors = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        privileged_without_host_devices_all_devices_allowed = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
        sandbox_mode = ""
        snapshotter = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          privileged_without_host_devices_all_devices_allowed = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"
          sandbox_mode = "podsandbox"
          snapshotter = ""

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        privileged_without_host_devices_all_devices_allowed = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
        sandbox_mode = ""
        snapshotter = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.nri.v1.nri"]
    disable = true
    disable_connections = false
    plugin_config_path = "/etc/nri/conf.d"
    plugin_path = "/opt/nri/plugins"
    plugin_registration_timeout = "5s"
    plugin_request_timeout = "2s"
    socket_path = "/var/run/nri/nri.sock"

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    blockio_config_file = ""
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.blockfile"]
    fs_type = ""
    mount_options = []
    root_path = ""
    scratch_file = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    mount_options = []
    root_path = ""
    sync_remove = false
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]

  [plugins."io.containerd.transfer.v1.local"]
    config_path = ""
    max_concurrent_downloads = 3
    max_concurrent_uploaded_layers = 3

    [[plugins."io.containerd.transfer.v1.local".unpack_config]]
      differ = ""
      platform = "linux/amd64"
      snapshotter = "overlayfs"

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.metrics.shimstats" = "2s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
EOF

Installing crictl

[root@k8s-master ~]# scp crictl-v1.32.0-linux-amd64.tar.gz k8s-node1:~/
crictl-v1.32.0-linux-amd64.tar.gz                                        100%   18MB 149.0MB/s   00:00    
[root@k8s-master ~]# scp crictl-v1.32.0-linux-amd64.tar.gz k8s-node2:~/
crictl-v1.32.0-linux-amd64.tar.gz   

tar xvf crictl-v*-linux-amd64.tar.gz -C /usr/bin/
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Restart containerd once more
systemctl restart containerd

Test that the registry mirror works:

crictl usage is very close to the docker command line.

[root@centos18 ~]# crictl pull registry.aliyuncs.com/google_containers/pause:3.8
Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
[root@centos18 ~]# crictl pull registry.aliyuncs.com/google_containers/pause:3.10
Image is up to date for sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
[root@centos18 ~]# crictl images
IMAGE                                           TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/pause   3.10                873ed75102791       320kB
registry.aliyuncs.com/google_containers/pause   3.8                 4873874c08efc       311kB

The ctr command, on the other hand, differs from docker quite a bit.

[root@centos18 ~]# ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/pause:3.8
# Output:
registry.aliyuncs.com/google_containers/pause:3.8:                                resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d:    exists         |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:f5944f2d1daf66463768a1503d0c8c5e8dde7c1674d3f85abc70cef9c7e32e95: exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:9457426d68990df190301d2e20b8450c4f67d7559bdb7ded6c40d41ced6731f7:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517:   done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 1.0 s                                                                    total:   0.0 B (0.0 B/s)                                         
unpacking linux/amd64 sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d...
done: 9.211539ms

# List the downloaded images
[root@centos18 ~]# ctr -n k8s.io images list
REF                                                                                                                   TYPE                                                      DIGEST                                                                  SIZE      PLATFORMS                                                                    LABELS                                                          
registry.aliyuncs.com/google_containers/pause:3.10                                                                    application/vnd.docker.distribution.manifest.list.v2+json sha256:0ca1162b75bf9fc55c4cac99a1ff06f7095c881d5c07acfa07c853e72225c36f 312.1 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed                                 
registry.aliyuncs.com/google_containers/pause:3.8                                                                     application/vnd.docker.distribution.manifest.list.v2+json sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed,io.cri-containerd.pinned=pinned 
registry.aliyuncs.com/google_containers/pause@sha256:0ca1162b75bf9fc55c4cac99a1ff06f7095c881d5c07acfa07c853e72225c36f application/vnd.docker.distribution.manifest.list.v2+json sha256:0ca1162b75bf9fc55c4cac99a1ff06f7095c881d5c07acfa07c853e72225c36f 312.1 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed                                 
registry.aliyuncs.com/google_containers/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d application/vnd.docker.distribution.manifest.list.v2+json sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed,io.cri-containerd.pinned=pinned 
sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517                                               application/vnd.docker.distribution.manifest.list.v2+json sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed,io.cri-containerd.pinned=pinned 
sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136                                               application/vnd.docker.distribution.manifest.list.v2+json sha256:0ca1162b75bf9fc55c4cac99a1ff06f7095c881d5c07acfa07c853e72225c36f 312.1 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed


III. Generating the Kubernetes Cluster Certificates

1. Directory layout

Create the working directories. The plan: all configuration files live in the cfg directory, all certificates in the ssl directory, and the binaries in /opt/kubernetes/bin; most of the work below happens in cfg. Run the mkdir command below on all three nodes. Once the directories exist, put the binaries in place: on the master, kubelet, kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy go into /opt/kubernetes/bin, and kubectl goes into /usr/bin; on the worker nodes, /opt/kubernetes/bin only needs kubelet and kube-proxy, nothing else (see the distribution sketch after the commands).

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cp kube-proxy  kube-apiserver kubelet  kube-controller-manager kube-scheduler /opt/kubernetes/bin/
chmod  a+x /opt/kubernetes/bin/*
cp kubectl  /usr/bin/
chmod  a+x /usr/bin/kubectl
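A minimal sketch of pushing the node binaries out (assuming the binaries sit in the current directory and passwordless SSH from section 7 is in place):

for h in k8s-node1 k8s-node2; do
    ssh root@$h "mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}"
    scp kubelet kube-proxy root@$h:/opt/kubernetes/bin/
    ssh root@$h "chmod a+x /opt/kubernetes/bin/*"
done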
The CA configuration file

The certificate lifetime here is 100 years; this file defines the certificates' validity period, allowed usages, and so on.

cat > /opt/kubernetes/cfg/ca-config.json << EOF 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
# This configuration file sets the signing and authentication parameters.
# 
# It has two parts: `signing` and `profiles`.
# 
# `signing` holds the default signing configuration.
# The default `expiry` of `876000h` means certificates are valid for 100 years.
# 
# The `profiles` section defines named certificate profiles.
# Here there is a single profile, `kubernetes`, with the following `usages` and `expiry`:
# 
# 1. `signing`: may sign other certificates
# 2. `key encipherment`: may encrypt and decrypt transported data
# 3. `server auth`: server authentication
# 4. `client auth`: client authentication
# 
# For the `kubernetes` profile the expiry is likewise `876000h`, i.e. 100 years.

2. Creating the cluster root CA certificate

cat > /opt/kubernetes/cfg/ca-csr.json << EOF 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

Generate the Kubernetes root certificate:

This produces two pem files prefixed with ca.

[root@k8s-master cfg]# pwd
/opt/kubernetes/cfg
# Generate the certificates; the file name prefix is ca
cfssl gencert -initca /opt/kubernetes/cfg/ca-csr.json | cfssljson -bare /opt/kubernetes/ssl/ca -

Two certificate files prefixed with ca and suffixed .pem are produced, written directly into the ssl directory.

[root@k8s-master cfg]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Parameter notes:

# This is the CSR configuration used to generate the Kubernetes root certificate. It contains:
# 
# - CN: CommonName, the common name identifying the certificate. Here CN is "kubernetes", marking the certificate as belonging to Kubernetes.
# - key: the algorithm and key size. Here: RSA, 2048 bits.
# - names: details for the certificate's name fields:
#   - C: Country; set to "CN".
#   - ST: State/province; set to "Beijing".
#   - L: Locality/city; set to "Beijing".
#   - O: Organization; set to "Kubernetes".
#   - OU: Organizational Unit; set to "Kubernetes-manual".
# - ca: configuration of the certificate authority itself; the validity is set to 876000 hours.
# 
# This file produces the root CA that secures communication within the cluster.

3. Generating the certificate used by the apiserver:

cat > /opt/kubernetes/cfg/apiserver-csr.json << EOF 
{
  "CN": "kube-apiserver",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.123.15",
    "192.168.123.16",
    "192.168.123.17",
    "::1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cfssl gencert \
  -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/cfg/ca-config.json \
  -profile=kubernetes \
  /opt/kubernetes/cfg/apiserver-csr.json | cfssljson -bare /opt/kubernetes/ssl/apiserver -

Two apiserver-prefixed pem files are produced:

[root@k8s-master cfg]# ll
total 36
-rw-r--r-- 1 root root 1314 Mar  9 18:50 apiserver.csr
-rw-r--r-- 1 root root  464 Mar  9 18:50 apiserver-csr.json
-rw------- 1 root root 1679 Mar  9 18:50 apiserver-key.pem
-rw-r--r-- 1 root root 1708 Mar  9 18:50 apiserver.pem
-rw-r--r-- 1 root root  294 Mar  9 17:12 ca-config.json
-rw-r--r-- 1 root root 1025 Mar  9 17:21 ca.csr
-rw-r--r-- 1 root root  265 Mar  9 17:20 ca-csr.json
-rw------- 1 root root 1675 Mar  9 17:21 ca-key.pem
-rw-r--r-- 1 root root 1411 Mar  9 17:21 ca.pem

4. Generating the apiserver aggregation CA certificate

cat > /opt/kubernetes/cfg/front-proxy-ca-csr.json  << EOF 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "ca": {
    "expiry": "876000h"
  }
}
EOF

cfssl gencert   -initca /opt/kubernetes/cfg/front-proxy-ca-csr.json | cfssljson -bare /opt/kubernetes/ssl/front-proxy-ca 

5. Generating the aggregation-layer client certificate:

cat > /opt/kubernetes/cfg/front-proxy-client-csr.json  << EOF 
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF

cfssl gencert  \
-ca=/opt/kubernetes/ssl/front-proxy-ca.pem   \
-ca-key=/opt/kubernetes/ssl/front-proxy-ca-key.pem   \
-config=/opt/kubernetes/cfg/ca-config.json   \
-profile=kubernetes   /opt/kubernetes/cfg/front-proxy-client-csr.json | cfssljson -bare /opt/kubernetes/ssl/front-proxy-client

6. Creating the ServiceAccount key pair (secret)

This produces two sa-prefixed files in the ssl directory.

openssl genrsa -out /opt/kubernetes/ssl/sa.key 2048
openssl rsa -in /opt/kubernetes/ssl/sa.key -pubout -out /opt/kubernetes/ssl/sa.pub

7. Generating the certificate for the kube-controller-manager service:

cat > /opt/kubernetes/cfg/manager-csr.json << EOF 
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cfssl gencert \
  -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/cfg/ca-config.json \
  -profile=kubernetes \
  /opt/kubernetes/cfg/manager-csr.json | cfssljson -bare /opt/kubernetes/ssl/controller-manager

8. Generating the kube-scheduler certificate

cat > /opt/kubernetes/cfg/scheduler-csr.json << EOF 
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cfssl gencert \
  -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/cfg/ca-config.json \
  -profile=kubernetes \
  /opt/kubernetes/cfg/scheduler-csr.json | cfssljson -bare /opt/kubernetes/ssl/scheduler

9. Generating the admin certificate

cat > /opt/kubernetes/cfg/admin-csr.json << EOF 
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert \
  -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/cfg/ca-config.json \
  -profile=kubernetes \
  /opt/kubernetes/cfg/admin-csr.json | cfssljson -bare /opt/kubernetes/ssl/admin

10. Creating the kube-proxy certificate

cat > /opt/kubernetes/cfg/kube-proxy-csr.json  << EOF 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert \
  -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/cfg/ca-config.json \
  -profile=kubernetes \
  /opt/kubernetes/cfg/kube-proxy-csr.json | cfssljson -bare /opt/kubernetes/ssl/kube-proxy

A total of 26 certificate files are generated:

[root@k8s-master cfg]# ll /opt/kubernetes/ssl/
total 104
-rw-r--r-- 1 root root 1025 Mar  9 19:09 admin.csr
-rw------- 1 root root 1675 Mar  9 19:09 admin-key.pem
-rw-r--r-- 1 root root 1444 Mar  9 19:09 admin.pem
-rw-r--r-- 1 root root 1314 Mar  9 18:56 apiserver.csr
-rw------- 1 root root 1679 Mar  9 18:56 apiserver-key.pem
-rw-r--r-- 1 root root 1708 Mar  9 18:56 apiserver.pem
-rw-r--r-- 1 root root 1025 Mar  9 18:54 ca.csr
-rw------- 1 root root 1679 Mar  9 18:54 ca-key.pem
-rw-r--r-- 1 root root 1411 Mar  9 18:54 ca.pem
-rw-r--r-- 1 root root 1082 Mar  9 19:02 controller-manager.csr
-rw------- 1 root root 1675 Mar  9 19:02 controller-manager-key.pem
-rw-r--r-- 1 root root 1501 Mar  9 19:02 controller-manager.pem
-rw-r--r-- 1 root root  891 Mar  9 18:57 front-proxy-ca.csr
-rw------- 1 root root 1675 Mar  9 18:57 front-proxy-ca-key.pem
-rw-r--r-- 1 root root 1143 Mar  9 18:57 front-proxy-ca.pem
-rw-r--r-- 1 root root  903 Mar  9 19:00 front-proxy-client.csr
-rw------- 1 root root 1679 Mar  9 19:00 front-proxy-client-key.pem
-rw-r--r-- 1 root root 1188 Mar  9 19:00 front-proxy-client.pem
-rw-r--r-- 1 root root 1045 Mar  9 19:07 kube-proxy.csr
-rw------- 1 root root 1675 Mar  9 19:07 kube-proxy-key.pem
-rw-r--r-- 1 root root 1464 Mar  9 19:07 kube-proxy.pem
-rw-r--r-- 1 root root 1675 Mar  9 19:00 sa.key
-rw-r--r-- 1 root root  451 Mar  9 19:00 sa.pub
-rw-r--r-- 1 root root 1058 Mar  9 19:37 scheduler.csr
-rw------- 1 root root 1679 Mar  9 19:37 scheduler-key.pem
-rw-r--r-- 1 root root 1476 Mar  9 19:37 scheduler.pem
[root@k8s-master cfg]#  ll /opt/kubernetes/ssl/  |wc  -l
27
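As a quick sanity check on the generated certificates, the validity window of each one can be printed (a sketch; every certificate should show the 100-year expiry configured above):

for c in /opt/kubernetes/ssl/*.pem; do
    case $c in *-key.pem) continue ;; esac     # skip private keys
    echo "== $c"; openssl x509 -in $c -noout -dates
done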




IV. Configuring the Kubernetes Core Components and Starting Their Services

1. Configuring the kube-apiserver service

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.123.15 \\
      --service-cluster-ip-range=10.0.0.0/12,fd00:1111::/112  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.123.15:2379,https://192.168.123.16:2379,https://192.168.123.17:2379 \\
      --etcd-cafile=/opt/etcd/ssl/ca.pem  \\
      --etcd-certfile=/opt/etcd/ssl/server.pem  \\
      --etcd-keyfile=/opt/etcd/ssl/server-key.pem  \\
      --client-ca-file=/opt/kubernetes/ssl/ca.pem  \\
      --tls-cert-file=/opt/kubernetes/ssl/apiserver.pem  \\
      --tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem  \\
      --kubelet-client-certificate=/opt/kubernetes/ssl/apiserver.pem  \\
      --kubelet-client-key=/opt/kubernetes/ssl/apiserver-key.pem  \\
      --service-account-key-file=/opt/kubernetes/ssl/sa.pub  \\
      --service-account-signing-key-file=/opt/kubernetes/ssl/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/opt/kubernetes/ssl/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/opt/kubernetes/ssl/front-proxy-client.pem  \\
      --proxy-client-key-file=/opt/kubernetes/ssl/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start the service and check its status:

[root@k8s-master cfg]# systemctl enable --now kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

If the status output keeps showing many "failed" lines, it usually means something is wrong with the service certificates; the safest fix is to re-check and regenerate them. (The "not finished" messages in the first seconds after startup, as below, are just the post-start hooks completing.)

[root@k8s-master cfg]# systemctl status  kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2025-03-09 19:45:16 CST; 29s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3244 (kube-apiserver)
    Tasks: 15
   Memory: 241.0M
   CGroup: /system.slice/kube-apiserver.service
           └─3244 /opt/kubernetes/bin/kube-apiserver --v=2 --allow-privileged=true --bind-address=0.0.0.0 --secure-port=6443 --advertise-address=192.168.123.15 --service-cluster-ip-range=10.0.0.0/12,fd00:1111::/112 --service-node-port-range=30000-32767 --etcd-servers=https://192.168.123.15:2379,https://192.168.123.16:2379,http...

Mar 09 19:45:18 k8s-master kube-apiserver[3244]: I0309 19:45:18.871933    3244 healthz.go:257] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Mar 09 19:45:18 k8s-master kube-apiserver[3244]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Mar 09 19:45:18 k8s-master kube-apiserver[3244]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Mar 09 19:45:18 k8s-master kube-apiserver[3244]: I0309 19:45:18.942861    3244 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
Mar 09 19:45:18 k8s-master kube-apiserver[3244]: I0309 19:45:18.973704    3244 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
Mar 09 19:45:18 k8s-master kube-apiserver[3244]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Mar 09 19:45:19 k8s-master kube-apiserver[3244]: I0309 19:45:19.069760    3244 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
Mar 09 19:45:19 k8s-master kube-apiserver[3244]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Mar 09 19:45:19 k8s-master kube-apiserver[3244]: I0309 19:45:19.171202    3244 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
Mar 09 19:45:19 k8s-master kube-apiserver[3244]: [-]poststarthook/rbac/bootstrap-roles failed: not finished

When the service is healthy, the status looks roughly like this:

[root@k8s-master cfg]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2025-03-09 19:45:16 CST; 3h 10min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3244 (kube-apiserver)
    Tasks: 15
   Memory: 314.3M
   CGroup: /system.slice/kube-apiserver.service
           └─3244 /opt/kubernetes/bin/kube-apiserver --v=2 --allow-privileged=true --bind-address=0.0.0.0 --secure-port=6443 --advertise-address=192.168.123.15 --service-cluster-ip-range=10.0.0.0/12,fd00:1111::/112 --service-node-port-range=30000-32767 --etcd-servers=https://192.168.123.15:2379,https://192.168.123.16:2379,http...

Mar 09 22:27:59 k8s-master kube-apiserver[3244]: I0309 22:27:59.594396    3244 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/opt/kubernetes/ssl/front-proxy-ca.pem"
Mar 09 22:27:59 k8s-master kube-apiserver[3244]: I0309 22:27:59.594613    3244 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/opt/kubernetes/ssl/front-proxy-ca.pem"
Mar 09 22:27:59 k8s-master kube-apiserver[3244]: I0309 22:27:59.594680    3244 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/opt/kubernetes/ssl/ca.pem,request-header::/opt/kubernetes/ssl/front-proxy-ca.pem" certDetail="\"kubernetes\" [] groups=[Kubernetes] issuer=\"<self>\" (2025-03-09 14:20:00 +000...
Mar 09 22:27:59 k8s-master kube-apiserver[3244]: I0309 22:27:59.594697    3244 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/opt/kubernetes/ssl/ca.pem,request-header::/opt/kubernetes/ssl/front-proxy-ca.pem" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2025-03-09 14:23:00 +0000 UTC to 2125-02-13 ...
Mar 09 22:27:59 k8s-master kube-apiserver[3244]: I0309 22:27:59.594767    3244 cluster_authentication_trust_controller.go:165] writing updated authentication info to  kube-system configmaps/extension-apiserver-authentication
Mar 09 22:27:59 k8s-master kube-apiserver[3244]: I0309 22:27:59.594795    3244 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/opt/kubernetes/ssl/apiserver.pem::/opt/kubernetes/ssl/apiserver-key.pem" certDetail="\"kube-apiserver\" [serving,client] groups=[Kubernetes] validServingFor=[....0.1,192.168.123.15,192.16
Mar 09 22:27:59 k8s-master kube-apiserver[3244]: I0309 22:27:59.594862    3244 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1741520716\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1741520716\" (2025-0...
Mar 09 22:28:31 k8s-master kube-apiserver[3244]: I0309 22:28:31.667891    3244 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="aggregator-proxy-cert::/opt/kubernetes/ssl/front-proxy-client.pem::/opt/kubernetes/ssl/front-proxy-client-key.pem"
Mar 09 22:32:54 k8s-master kube-apiserver[3244]: I0309 22:32:54.354362    3244 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
Mar 09 22:32:54 k8s-master kube-apiserver[3244]: I0309 22:32:54.374813    3244 controller.go:616] quota admission added evaluator for: serviceaccounts
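Beyond systemctl status, the apiserver can also be probed directly (a sketch; it authenticates with the admin client certificate generated earlier):

curl --cacert /opt/kubernetes/ssl/ca.pem \
     --cert /opt/kubernetes/ssl/admin.pem \
     --key /opt/kubernetes/ssl/admin-key.pem \
     https://192.168.123.15:6443/healthz
# expected response: ok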

2. Configuring the kube-controller-manager service

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \\
      --v=2 \\
      --bind-address=0.0.0.0 \\
      --root-ca-file=/opt/kubernetes/ssl/ca.pem \\
      --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
      --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
      --service-account-private-key-file=/opt/kubernetes/ssl/sa.key \\
      --kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig \\
      --leader-elect=true \\
      --use-service-account-credentials=true \\
      --node-monitor-grace-period=40s \\
      --node-monitor-period=5s \\
      --controllers=*,bootstrapsigner,tokencleaner \\
      --allocate-node-cidrs=true \\
      --service-cluster-ip-range=10.0.0.0/12,fd00:1111::/112 \\
      --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\
      --node-cidr-mask-size-ipv4=24 \\
      --node-cidr-mask-size-ipv6=120 \\
      --requestheader-client-ca-file=/opt/kubernetes/ssl/front-proxy-ca.pem

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

The next four steps are a fixed routine that differs from older Kubernetes versions; the main purpose, apparently, is to improve cluster security.


设置集群角色:

 

​
kubectl config set-cluster kubernetes \--certificate-authority=/opt/kubernetes/ssl/ca.pem \--embed-certs=true \--server=https://192.168.123.15:6443 \--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

Once this command completes, a new file is created containing the information the service uses to connect to the apiserver:

[root@k8s-master ~]# cat /opt/kubernetes/cfg/controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVZDNGU09ZOEZsbWdoaUcwRGtwZk8wdE52RFhRd0RR...(base64 content abridged)
    server: https://192.168.123.15:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVKakNDQXc2Z0F3SUJBZ0lVQ29zcklXZjAxQUZ1T0VQQytVWmpoVC9HYTBRd0RR...(base64 content abridged)
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBcVlYWW9GYWJXNzJzR0twd2RES1NXM20rSFMxSWI1NGF1NzdT...(base64 content abridged)

Set a context entry:

kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig


Set a user entry (the matching in-cluster roles will be created):

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/opt/kubernetes/ssl/controller-manager.pem \
  --client-key=/opt/kubernetes/ssl/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig


Switch to the context:

kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig
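Before starting the service, the finished kubeconfig can be double-checked (a sketch; the asterisk marks the active context):

kubectl config get-contexts --kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig
# CURRENT   NAME                                        CLUSTER      AUTHINFO                         NAMESPACE
# *         system:kube-controller-manager@kubernetes   kubernetes   system:kube-controller-manager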

Start the service:

systemctl enable --now kube-controller-manager


Check the service status:

systemctl status kube-controller-manager

Output:

Seeing "Garbage collector: all resource monitors have synced. Proceeding to collect garbage" means the service is fine; otherwise it needs investigating.

[root@centos18 cfg]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2025-03-01 18:05:45 CST; 45s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4329 (kube-controller)
    Tasks: 12
   Memory: 145.1M
   CGroup: /system.slice/kube-controller-manager.service
           └─4329 /opt/kubernetes/bin/kube-controller-manager --v=2 --bind-address=0.0.0.0 --root-ca-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/opt/kubernetes/ssl/sa.key --kubeconfig=/opt/kub...

Mar 01 18:06:00 centos18 kube-controller-manager[4329]: I0301 18:06:00.796243    4329 resource_quota_controller.go:462] synced quota controller
Mar 01 18:06:00 centos18 kube-controller-manager[4329]: I0301 18:06:00.878920    4329 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
Mar 01 18:06:00 centos18 kube-controller-manager[4329]: I0301 18:06:00.883068    4329 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
Mar 01 18:06:00 centos18 kube-controller-manager[4329]: I0301 18:06:00.883111    4329 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
Mar 01 18:06:00 centos18 kube-controller-manager[4329]: I0301 18:06:00.883120    4329 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
Mar 01 18:06:00 centos18 kube-controller-manager[4329]: I0301 18:06:00.883144    4329 shared_informer.go:262] Caches are synced for certificate-csrapproving
Mar 01 18:06:01 centos18 kube-controller-manager[4329]: I0301 18:06:01.264393    4329 shared_informer.go:262] Caches are synced for garbage collector
Mar 01 18:06:01 centos18 kube-controller-manager[4329]: I0301 18:06:01.264418    4329 garbagecollector.go:263] synced garbage collector
Mar 01 18:06:01 centos18 kube-controller-manager[4329]: I0301 18:06:01.274314    4329 shared_informer.go:262] Caches are synced for garbage collector
Mar 01 18:06:01 centos18 kube-controller-manager[4329]: I0301 18:06:01.274333    4329 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage

2、Configure the kube-scheduler service

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \\
--v=2 \\
--bind-address=0.0.0.0 \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Set the cluster entry:

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.123.15:6443 \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig

Create the user entry:

kubectl config set-credentials system:kube-scheduler \
--client-certificate=/opt/kubernetes/ssl/scheduler.pem \
--client-key=/opt/kubernetes/ssl/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig

Set the context:

kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig

Switch to the context:

kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig

Start the service:

systemctl enable --now kube-scheduler

Check the service status:

systemctl status kube-scheduler

The output looks like this:

Seeing successfully acquired lease kube-system/kube-scheduler means the service is fine.

[root@k8s-master cfg]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2025-03-09 22:36:15 CST; 8min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 6427 (kube-scheduler)
    Tasks: 9
   Memory: 22.7M
   CGroup: /system.slice/kube-scheduler.service
           └─6427 /opt/kubernetes/bin/kube-scheduler --v=2 --bind-address=0.0.0.0 --leader-elect=true --kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig

Mar 09 22:36:15 k8s-master kube-scheduler[6427]: schedulerName: default-scheduler
Mar 09 22:36:15 k8s-master kube-scheduler[6427]: >
Mar 09 22:36:15 k8s-master kube-scheduler[6427]: I0309 22:36:15.908672    6427 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.5"
Mar 09 22:36:15 k8s-master kube-scheduler[6427]: I0309 22:36:15.908681    6427 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 09 22:36:15 k8s-master kube-scheduler[6427]: I0309 22:36:15.909351    6427 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1741530975\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1741530975\" (2025-03-09 13:36:15 +0000 UTC to 2026-03...
Mar 09 22:36:15 k8s-master kube-scheduler[6427]: I0309 22:36:15.909418    6427 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1741530975\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1741530975\" (2025-0...
Mar 09 22:36:15 k8s-master kube-scheduler[6427]: I0309 22:36:15.909428    6427 secure_serving.go:210] Serving securely on [::]:10259
Mar 09 22:36:15 k8s-master kube-scheduler[6427]: I0309 22:36:15.909459    6427 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Mar 09 22:36:16 k8s-master kube-scheduler[6427]: I0309 22:36:16.009788    6427 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
Mar 09 22:36:16 k8s-master kube-scheduler[6427]: I0309 22:36:16.015898    6427 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler
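
Beyond that log line, the scheduler also serves a health endpoint on the secure port shown above (10259); a minimal probe, assuming it is run on the master:

curl -k https://127.0.0.1:10259/healthz
# expected output: ok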


Configure kubectl for cluster administration

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.123.15:6443 \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig

kubectl command completion:

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
echo "source <(kubectl completion bash)" >>/etc/profile
echo "source /usr/share/bash-completion/bash_completion" >>/etc/profile
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
echo "alias k=kubectl">>/etc/profile
echo "complete -F __start_kubectl k">>/etc/profile
source /etc/profile
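
Once /etc/profile has been sourced, the alias and completion take effect in the current shell, for example:

k get ns     # same as kubectl get ns
k get no     # resource and subcommand names tab-complete too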

Create the cluster admin user:

kubectl config set-credentials kubernetes-admin \
--client-certificate=/opt/kubernetes/ssl/admin.pem \
--client-key=/opt/kubernetes/ssl/admin-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig

Set the context:

kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig

Switch to the context:

kubectl config use-context kubernetes-admin@kubernetes  \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig

mkdir -p /root/.kube ; cp /opt/kubernetes/cfg/admin.kubeconfig /root/.kube/config
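
A quick way to confirm the admin kubeconfig works (optional, not in the original steps):

kubectl config current-context    # should print kubernetes-admin@kubernetes
kubectl get --raw='/healthz'      # should print ok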

At this point kubectl works: you can check the cluster's health and see resources such as cluster roles and endpoints:

[root@centos18 cfg]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
[root@centos18 cfg]# kubectl get csr
No resources found
[root@k8s-master ~]# kubectl get endpoints
NAME         ENDPOINTS             AGE
kubernetes   192.168.123.15:6443   10m
[root@k8s-master ~]# kubectl get clusterroles
NAME                                                                   CREATED AT
admin                                                                  2025-03-15T15:25:22Z
cluster-admin                                                          2025-03-15T15:25:22Z
edit                                                                   2025-03-15T15:25:22Z
system:aggregate-to-admin                                              2025-03-15T15:25:22Z
system:aggregate-to-edit                                               2025-03-15T15:25:22Z
system:aggregate-to-view                                               2025-03-15T15:25:22Z
system:auth-delegator                                                  2025-03-15T15:25:22Z
system:basic-user                                                      2025-03-15T15:25:22Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient       2025-03-15T15:25:22Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   2025-03-15T15:25:22Z
system:certificates.k8s.io:kube-apiserver-client-approver              2025-03-15T15:25:22Z
system:certificates.k8s.io:kube-apiserver-client-kubelet-approver      2025-03-15T15:25:22Z
system:certificates.k8s.io:kubelet-serving-approver                    2025-03-15T15:25:22Z
system:certificates.k8s.io:legacy-unknown-approver                     2025-03-15T15:25:22Z
system:controller:attachdetach-controller                              2025-03-15T15:25:22Z
system:controller:certificate-controller                               2025-03-15T15:25:22Z
system:controller:clusterrole-aggregation-controller                   2025-03-15T15:25:22Z
system:controller:cronjob-controller                                   2025-03-15T15:25:22Z
system:controller:daemon-set-controller                                2025-03-15T15:25:22Z
system:controller:deployment-controller                                2025-03-15T15:25:22Z
system:controller:disruption-controller                                2025-03-15T15:25:22Z
system:controller:endpoint-controller                                  2025-03-15T15:25:22Z
system:controller:endpointslice-controller                             2025-03-15T15:25:22Z
system:controller:endpointslicemirroring-controller                    2025-03-15T15:25:22Z
system:controller:ephemeral-volume-controller                          2025-03-15T15:25:22Z
system:controller:expand-controller                                    2025-03-15T15:25:22Z
system:controller:generic-garbage-collector                            2025-03-15T15:25:22Z
system:controller:horizontal-pod-autoscaler                            2025-03-15T15:25:22Z
system:controller:job-controller                                       2025-03-15T15:25:22Z
system:controller:namespace-controller                                 2025-03-15T15:25:22Z
system:controller:node-controller                                      2025-03-15T15:25:22Z
system:controller:persistent-volume-binder                             2025-03-15T15:25:22Z
system:controller:pod-garbage-collector                                2025-03-15T15:25:22Z
system:controller:pv-protection-controller                             2025-03-15T15:25:22Z
system:controller:pvc-protection-controller                            2025-03-15T15:25:22Z
system:controller:replicaset-controller                                2025-03-15T15:25:22Z
system:controller:replication-controller                               2025-03-15T15:25:22Z
system:controller:resourcequota-controller                             2025-03-15T15:25:22Z
system:controller:root-ca-cert-publisher                               2025-03-15T15:25:22Z
system:controller:route-controller                                     2025-03-15T15:25:22Z
system:controller:service-account-controller                           2025-03-15T15:25:22Z
system:controller:service-controller                                   2025-03-15T15:25:22Z
system:controller:statefulset-controller                               2025-03-15T15:25:22Z
system:controller:ttl-after-finished-controller                        2025-03-15T15:25:22Z
system:controller:ttl-controller                                       2025-03-15T15:25:22Z
system:discovery                                                       2025-03-15T15:25:22Z
system:heapster                                                        2025-03-15T15:25:22Z
system:kube-aggregator                                                 2025-03-15T15:25:22Z
system:kube-controller-manager                                         2025-03-15T15:25:22Z
system:kube-dns                                                        2025-03-15T15:25:22Z
system:kube-scheduler                                                  2025-03-15T15:25:22Z
system:kubelet-api-admin                                               2025-03-15T15:25:22Z
system:monitoring                                                      2025-03-15T15:25:22Z
system:node                                                            2025-03-15T15:25:22Z
system:node-bootstrapper                                               2025-03-15T15:25:22Z
system:node-problem-detector                                           2025-03-15T15:25:22Z
system:node-proxier                                                    2025-03-15T15:25:22Z
system:persistent-volume-provisioner                                   2025-03-15T15:25:22Z
system:public-info-viewer                                              2025-03-15T15:25:22Z
system:service-account-issuer-discovery                                2025-03-15T15:25:22Z
system:volume-scheduler                                                2025-03-15T15:25:22Z
view                                                                   2025-03-15T15:25:22Z

五、TLS Bootstrapping configuration

The main job of TLS Bootstrapping is registration bootstrapping, i.e., letting worker nodes join the cluster automatically. Only once this step is configured will kubectl get no show any nodes.

kubectl config set-cluster kubernetes     \
--certificate-authority=/opt/kubernetes/ssl/ca.pem     \
--embed-certs=true     --server=https://192.168.123.15:6443     \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

The output looks like this:

[root@centos18 cfg]# kubectl config set-cluster kubernetes     \
> --certificate-authority=/opt/kubernetes/ssl/ca.pem     \
> --embed-certs=true     --server=https://192.168.123.15:6443     \
> --kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig
Cluster "kubernetes" set.

Generate a token:

echo "$(head -c 6 /dev/urandom | md5sum | head -c 6)"."$(head -c 16 /dev/urandom | md5sum | head -c 16)"

The output looks like this:

[root@centos18 cfg]# echo "$(head -c 6 /dev/urandom | md5sum | head -c 6)"."$(head -c 16 /dev/urandom | md5sum | head -c 16)"
4ea6ae.df8a8a04e34bff81
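
The token format is <token-id>.<token-secret>: 6 hex characters, a dot, then 16 hex characters. If you want to reuse the two halves in the later steps without retyping them, a small sketch (the variable names are my own):

TOKEN="4ea6ae.df8a8a04e34bff81"
TOKEN_ID=${TOKEN%%.*}       # 4ea6ae -> token-id and the Secret name suffix
TOKEN_SECRET=${TOKEN#*.}    # df8a8a04e34bff81 -> token-secret
echo "$TOKEN_ID / $TOKEN_SECRET"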

Create the bootstrap token user:

kubectl config set-credentials tls-bootstrap-token-user     \
--token=4ea6ae.df8a8a04e34bff81 \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

Create the context:

kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

Switch to the context:

kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig




Write the bootstrap token:

The token-id and token-secret here must correspond to the token generated earlier by the echo "$(head -c 6 /dev/urandom | ... command chain.

cat > /opt/kubernetes/cfg/bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-4ea6ae
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: 4ea6ae
  token-secret: df8a8a04e34bff81
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
EOF

Apply it:

kubectl create -f /opt/kubernetes/cfg/bootstrap.secret.yaml

The output looks like this:

[root@k8s-master cfg]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-4ea6ae created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
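
To double-check that everything from the manifest landed, a couple of optional queries:

kubectl -n kube-system get secret bootstrap-token-4ea6ae
kubectl get clusterrolebinding kubelet-bootstrap node-autoapprove-bootstrap node-autoapprove-certificate-rotation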

By this point the cluster deployment is roughly half done. What remains is the kubelet service configuration, the network plugin, and later on the service-exposure components such as ingress or traefik.




六、kubelet and kube-proxy service configuration

Copy the binaries and certificates to the k8s-node1 and k8s-node2 nodes, including the bootstrap-kubelet.kubeconfig file:

[root@k8s-master ~]# scp kubelet  192.168.123.16:/opt/kubernetes/bin/
kubelet                                                                                                                                                                                                                                                                                                   100%  109MB  86.5MB/s   00:01    
[root@k8s-master ~]# scp kubelet  192.168.123.17:/opt/kubernetes/bin/
kubelet                                                                                                                                                                                                                                                                                                   100%  109MB 188.7MB/s   00:00    
[root@k8s-master ~]# scp /opt/kubernetes/ssl/* k8s-node1:/opt/kubernetes/ssl/
admin.csr                                                                                                                                                                                                                                                                                                 100% 1025   730.2KB/s   00:00    
admin-key.pem                                                                                                                                                                                                                                                                                             100% 1679   585.3KB/s   00:00    
admin.pem                                                                                                                                                                                                                                                                                                 100% 1444     1.5MB/s   00:00    
apiserver.csr                                                                                                                                                                                                                                                                                             100% 1314     1.4MB/s   00:00    
apiserver-key.pem                                                                                                                                                                                                                                                                                         100% 1679     1.1MB/s   00:00    
apiserver.pem                                                                                                                                                                                                                                                                                             100% 1708     3.4MB/s   00:00    
ca.csr                                                                                                                                                                                                                                                                                                    100% 1025     1.0MB/s   00:00    
ca-key.pem                                                                                                                                                                                                                                                                                                100% 1679     3.4MB/s   00:00    
ca.pem                                                                                                                                                                                                                                                                                                    100% 1411   881.7KB/s   00:00    
controller-manager.csr                                                                                                                                                                                                                                                                                    100% 1082     1.1MB/s   00:00    
controller-manager-key.pem                                                                                                                                                                                                                                                                                100% 1679     1.9MB/s   00:00    
controller-manager.pem                                                                                                                                                                                                                                                                                    100% 1501     1.7MB/s   00:00    
front-proxy-ca.csr                                                                                                                                                                                                                                                                                        100%  891     1.1MB/s   00:00    
front-proxy-ca-key.pem                                                                                                                                                                                                                                                                                    100% 1679     1.9MB/s   00:00    
front-proxy-ca.pem                                                                                                                                                                                                                                                                                        100% 1143     2.4MB/s   00:00    
front-proxy-client.csr                                                                                                                                                                                                                                                                                    100%  903     1.0MB/s   00:00    
front-proxy-client-key.pem                                                                                                                                                                                                                                                                                100% 1675     3.3MB/s   00:00    
front-proxy-client.pem                                                                                                                                                                                                                                                                                    100% 1188     2.5MB/s   00:00    
kube-proxy.csr                                                                                                                                                                                                                                                                                            100% 1045     1.4MB/s   00:00    
kube-proxy-key.pem                                                                                                                                                                                                                                                                                        100% 1679     2.4MB/s   00:00    
kube-proxy.pem                                                                                                                                                                                                                                                                                            100% 1464     1.9MB/s   00:00    
sa.key                                                                                                                                                                                                                                                                                                    100% 1679     3.4MB/s   00:00    
sa.pub                                                                                                                                                                                                                                                                                                    100%  451     1.1MB/s   00:00    
scheduler.csr                                                                                                                                                                                                                                                                                             100% 1058     1.4MB/s   00:00    
scheduler-key.pem                                                                                                                                                                                                                                                                                         100% 1679     2.4MB/s   00:00    
scheduler.pem                                                                                                                                                                                                                                                                                             100% 1476     1.7MB/s   00:00    
[root@k8s-master ~]# scp /opt/kubernetes/ssl/* k8s-node2:/opt/kubernetes/ssl/
admin.csr                                                                                                                                                                                                                                                                                                 100% 1025     1.6MB/s   00:00    
admin-key.pem                                                                                                                                                                                                                                                                                             100% 1679     2.8MB/s   00:00    
admin.pem                                                                                                                                                                                                                                                                                                 100% 1444     2.5MB/s   00:00    
apiserver.csr                                                                                                                                                                                                                                                                                             100% 1314   890.4KB/s   00:00    
apiserver-key.pem                                                                                                                                                                                                                                                                                         100% 1679     3.2MB/s   00:00    
apiserver.pem                                                                                                                                                                                                                                                                                             100% 1708     3.2MB/s   00:00    
ca.csr                                                                                                                                                                                                                                                                                                    100% 1025     1.4MB/s   00:00    
ca-key.pem                                                                                                                                                                                                                                                                                                100% 1679   565.6KB/s   00:00    
ca.pem                                                                                                                                                                                                                                                                                                    100% 1411     2.4MB/s   00:00    
controller-manager.csr                                                                                                                                                                                                                                                                                    100% 1082     1.9MB/s   00:00    
controller-manager-key.pem                                                                                                                                                                                                                                                                                100% 1679     2.8MB/s   00:00    
controller-manager.pem                                                                                                                                                                                                                                                                                    100% 1501     2.3MB/s   00:00    
front-proxy-ca.csr                                                                                                                                                                                                                                                                                        100%  891     1.5MB/s   00:00    
front-proxy-ca-key.pem                                                                                                                                                                                                                                                                                    100% 1679     3.0MB/s   00:00    
front-proxy-ca.pem                                                                                                                                                                                                                                                                                        100% 1143     2.3MB/s   00:00    
front-proxy-client.csr                                                                                                                                                                                                                                                                                    100%  903     1.7MB/s   00:00    
front-proxy-client-key.pem                                                                                                                                                                                                                                                                                100% 1675     3.2MB/s   00:00    
front-proxy-client.pem                                                                                                                                                                                                                                                                                    100% 1188     2.3MB/s   00:00    
kube-proxy.csr                                                                                                                                                                                                                                                                                            100% 1045     1.9MB/s   00:00    
kube-proxy-key.pem                                                                                                                                                                                                                                                                                        100% 1679     3.1MB/s   00:00    
kube-proxy.pem                                                                                                                                                                                                                                                                                            100% 1464     3.2MB/s   00:00    
sa.key                                                                                                                                                                                                                                                                                                    100% 1679     3.1MB/s   00:00    
sa.pub                                                                                                                                                                                                                                                                                                    100%  451   755.4KB/s   00:00    
scheduler.csr                                                                                                                                                                                                                                                                                             100% 1058     2.1MB/s   00:00    
scheduler-key.pem                                                                                                                                                                                                                                                                                         100% 1679     2.5MB/s   00:00    
scheduler.pem        
[root@k8s-master ~]# scp /opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig k8s-node1:/opt/kubernetes/cfg/
bootstrap-kubelet.kubeconfig                                                                                                                                                                                                                                                                              100% 2301     1.7MB/s   00:00    
[root@k8s-master ~]# scp /opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig k8s-node2:/opt/kubernetes/cfg/
bootstrap-kubelet.kubeconfig

Create the required directories. Run this on every node; don't forget it:

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

On the worker nodes, make the kubelet binary executable:

chmod a+x /opt/kubernetes/bin/kubelet

Create the kubelet configuration file on all k8s nodes:

cat > /opt/kubernetes/cfg/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  level: debug
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
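
Note that cgroupDriver: systemd above has to match the cgroup driver configured for containerd, otherwise pods will fail to start. A quick check, assuming containerd uses its default config path:

grep SystemdCgroup /etc/containerd/config.toml
# expected: SystemdCgroup = true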

The kubelet systemd unit. Note --node-ip=192.168.123.15: on the master this is the master's IP; on each worker node use that node's own IP. Don't forget to change it:

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=containerd.service

[Service]
ExecStart=/opt/kubernetes/bin/kubelet \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-conf.yml \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--node-labels=node.kubernetes.io/node= \\
--node-ip=192.168.123.15

[Install]
WantedBy=multi-user.target
EOF

Start kubelet:

systemctl enable --now kubelet
systemctl restart kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
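
If kubelet fails to come up, the systemd journal is the first place to look:

journalctl -u kubelet -f                   # follow the log live
journalctl -u kubelet -n 50 --no-pager     # or just the last 50 lines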

Note: if you suspect the certificates are wrong, run the following command:

openssl s_client -connect 192.168.123.15:6443 -CAfile  /opt/kubernetes/ssl/ca.pem

The output is roughly as follows; seeing Verify return code: 0 (ok) means the certificates are fine, otherwise they may need to be regenerated:

CONNECTED(00000003)
depth=1 C = CN, ST = Beijing, L = Beijing, O = Kubernetes, OU = Kubernetes-manual, CN = kubernetes
verify return:1
depth=0 C = CN, ST = Beijing, L = Beijing, O = Kubernetes, OU = Kubernetes-manual, CN = kube-apiserver
verify return:1
---
Certificate chain
 0 s:/C=CN/ST=Beijing/L=Beijing/O=Kubernetes/OU=Kubernetes-manual/CN=kube-apiserver
   i:/C=CN/ST=Beijing/L=Beijing/O=Kubernetes/OU=Kubernetes-manual/CN=kubernetes
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIEwTCCA6mgAwIBAgIUNWuJRgjLNZowR4UI0AOtL/8DD9IwDQYJKoZIhvcNAQEL
BQAwdzELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaWppbmcxEDAOBgNVBAcTB0Jl
aWppbmcxEzARBgNVBAoTCkt1YmVybmV0ZXMxGjAYBgNVBAsTEUt1YmVybmV0ZXMt
bWFudWFsMRMwEQYDVQQDEwprdWJlcm5ldGVzMCAXDTI1MDMxNTE1MDgwMFoYDzIx
MjUwMjE5MTUwODAwWjB7MQswCQYDVQQGEwJDTjEQMA4GA1UECBMHQmVpamluZzEQ
MA4GA1UEBxMHQmVpamluZzETMBEGA1UEChMKS3ViZXJuZXRlczEaMBgGA1UECxMR
S3ViZXJuZXRlcy1tYW51YWwxFzAVBgNVBAMTDmt1YmUtYXBpc2VydmVyMIIBIjAN
BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqKq66wWb1bXZrDqKceR6wh8Zqn47
rhVxYGdDcdJ7pSRLZLZI/SgAMr2rWC+ZntgYsGN2tNjkVovaAMKA1MigetXQKnQr
iW3UGXj5+IE0e89eylx00PbIQKLIsexa+juEE8K3YEUCrGkh3GpnLoh0cjhJKZ4J
+Jh1v15XDYnOlUAUHStY7hr7M48rdf91E33JaCs5VdH8z21jABMt463+9r5/KkoD
kw76wkUl1LVI7YfNbHk+ztLx4EIG+tiSt9ycCnZPISK8FVkZn2nZaDm5VgfE+dnV
xkjMqIwC6S+5OqSRkrT4+bmJocSaL+jQOSsI8VPw9HKrPDaQVQfsFfJn9QIDAQAB
o4IBPTCCATkwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggr
BgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBQMs1pbv2E2g/ozdbegXnbY
PTPEmTAfBgNVHSMEGDAWgBSCoYt2VnODIvE0YnZwapCK7JGGIjCBuQYDVR0RBIGx
MIGuggprdWJlcm5ldGVzghJrdWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMu
ZGVmYXVsdC5zdmOCHmt1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3RlcoIka3Vi
ZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FshwQKAAABhwR/AAABhwTA
qHsPhwTAqHsQhwTAqHsRhxAAAAAAAAAAAAAAAAAAAAABMA0GCSqGSIb3DQEBCwUA
A4IBAQBYuaXIJxqeGuIz4YYFyJ8vnpiv1OQMZ/WuUCff3hV84uv7ERU/hdvVUiY2
dunN0qY5Q69xP2hOkhYLtPzdAHaR2qiTslggmXcaUzmY/cAkz8vPyLABo00zQhW9
s6bukrnSw4DhO4uI4M6kOCg4+KHb0aGqKPfXYefKaEJrkrJoGZlb2eu6dyv7/cZ/
hwwz1ePFMq0igye+s4cYJ5CT4SCO61BKHo2UANl4q4lvlpnlhC/lsO6NpSY25JpP
MPiqYqJH2c3oX+nnAWRd2fOI+XyBoV4WeCOKOiJK8RUR57o+WKxa/gd3QTty4N31
1ZioNjK/KbEqclMhzghYATnnrOJW
-----END CERTIFICATE-----
subject=/C=CN/ST=Beijing/L=Beijing/O=Kubernetes/OU=Kubernetes-manual/CN=kube-apiserver
issuer=/C=CN/ST=Beijing/L=Beijing/O=Kubernetes/OU=Kubernetes-manual/CN=kubernetes
---
Acceptable client certificate CA names
/C=CN/ST=Beijing/L=Beijing/O=Kubernetes/OU=Kubernetes-manual/CN=kubernetes
/CN=kubernetes
Client Certificate Types: RSA sign, ECDSA sign
Requested Signature Algorithms: 0x04+0x08:ECDSA+SHA256:0x07+0x08:0x05+0x08:0x06+0x08:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: ECDSA+SHA256:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2030 bytes and written 427 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: 77A21B12A0B13D5F811F700D21C60ECEA6D0A3CBB7240FBD03A9C5CE7566431A
    Session-ID-ctx:
    Master-Key: 60CA9924BE1D4A12338AC0580AF7A3200FCD07AAC10DA8A8A70CC518DB4DEAE96FBF15E14263174C9CF6FC374C176307
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    TLS session ticket:
    0000 - 1e a1 38 c1 c6 a3 ae 3f-29 8e 09 58 64 fb b1 44   ..8....?)..Xd..D
    0010 - f2 28 fa 57 91 53 ef 4a-76 2a c1 fd db bb 7f 49   .(.W.S.Jv*.....I
    0020 - 95 ce f4 b7 4b 3d 61 d0-9a 1c f3 c7 1a bb fe 49   ....K=a........I
    0030 - ea 57 c8 b3 84 6e a7 6e-eb 87 be e7 d3 e2 5e 35   .W...n.n......^5
    0040 - dd bf ff 50 3a ef 1c 74-5e 75 92 64 6f a2 7e ff   ...P:..t^u.do.~.
    0050 - 94 66 a2 2f 5e 77 b6 d2-4f 6b b2 33 04 c6 0d 5a   .f./^w..Ok.3...Z
    0060 - fa 44 77 1d 47 48 af 24-4a 29 75 b5 b4 ad ab 97   .Dw.GH.$J)u.....
    0070 - 2b 44 82 54 4b b6 42 a1-b4 3d d1 48 18 90 f9 37   +D.TK.B..=.H...7
    0080 - 15                                                .
    Start Time: 1742118135
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)

In the /var/log/messages system log you can see the TLS bootstrap approving the node join:

Mar  2 13:33:46 k8s-master kube-scheduler: >
Mar  2 13:33:46 k8s-master kube-scheduler: I0302 13:33:46.133091    5259 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.5"
Mar  2 13:33:46 k8s-master kube-scheduler: I0302 13:33:46.133100    5259 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar  2 13:33:46 k8s-master kube-scheduler: I0302 13:33:46.133794    5259 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1740893625\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1740893625\" (2025-03-02 04:33:45 +0000 UTC to 2026-03-02 04:33:45 +0000 UTC (now=2025-03-02 05:33:46.133776044 +0000 UTC))"
Mar  2 13:33:46 k8s-master kube-scheduler: I0302 13:33:46.133873    5259 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1740893626\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1740893625\" (2025-03-02 04:33:45 +0000 UTC to 2026-03-02 04:33:45 +0000 UTC (now=2025-03-02 05:33:46.13386154 +0000 UTC))"
Mar  2 13:33:46 k8s-master kube-scheduler: I0302 13:33:46.133905    5259 secure_serving.go:210] Serving securely on [::]:10259
Mar  2 13:33:46 k8s-master kube-scheduler: I0302 13:33:46.133941    5259 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Mar  2 13:33:46 k8s-master kube-scheduler: I0302 13:33:46.141969    5259 node_tree.go:65] "Added node in listed group to NodeTree" node="centos18" zone=""
Mar  2 13:33:46 k8s-master kubelet: E0302 13:33:46.206912    5084 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar  2 13:33:46 k8s-master kube-scheduler: I0302 13:33:46.235180    5259 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...

You can see the CSR being approved: the state goes from Pending to Approved,Issued once the node-autoapprove-bootstrap ClusterRoleBinding created earlier lets the controller approve it automatically:

[root@centos18 cfg]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
node-csr-jn7m9UjPNvMLbEzSGRQLGb7kXywspZyNGqNdR9isEGo   30m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:4ea6ae   <none>              Pending
[root@centos18 cfg]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
node-csr-jn7m9UjPNvMLbEzSGRQLGb7kXywspZyNGqNdR9isEGo   30m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:4ea6ae   <none>              Approved,Issued
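
If a CSR were to stay stuck in Pending (for example, because the auto-approve ClusterRoleBindings above were not applied), it can be approved by hand, using the CSR name from the output:

kubectl certificate approve node-csr-jn7m9UjPNvMLbEzSGRQLGb7kXywspZyNGqNdR9isEGo
# or approve everything that is still pending:
kubectl get csr -o name | xargs kubectl certificate approve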

On a worker node, the kubelet service status should look like this:

[root@k8s-node2 ~]# systemctl status  kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2025-03-16 00:07:48 CST; 1s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 2395 (kubelet)
    Tasks: 15
   Memory: 39.4M
   CGroup: /system.slice/kubelet.service
           └─2395 /opt/kubernetes/bin/kubelet --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --config=/opt/kubernetes/cfg/kubelet-conf.yml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-labels=node.kubernetes.io/node= --node...

Mar 16 00:07:48 k8s-node2 kubelet[2395]: I0316 00:07:48.957938    2395 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 16 00:07:48 k8s-node2 kubelet[2395]: I0316 00:07:48.958046    2395 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 16 00:07:49 k8s-node2 kubelet[2395]: I0316 00:07:49.024192    2395 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="172.16.2.0/24,fc00:2222::200/120"
Mar 16 00:07:49 k8s-node2 kubelet[2395]: I0316 00:07:49.024670    2395 kubelet_node_status.go:70] "Attempting to register node" node="k8s-node2"
Mar 16 00:07:49 k8s-node2 kubelet[2395]: I0316 00:07:49.024861    2395 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="172.16.2.0/24,fc00:2222::200/120"
Mar 16 00:07:49 k8s-node2 kubelet[2395]: E0316 00:07:49.025245    2395 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 16 00:07:49 k8s-node2 kubelet[2395]: I0316 00:07:49.030804    2395 kubelet_node_status.go:108] "Node was previously registered" node="k8s-node2"
Mar 16 00:07:49 k8s-node2 kubelet[2395]: I0316 00:07:49.030855    2395 kubelet_node_status.go:73] "Successfully registered node" node="k8s-node2"
Mar 16 00:07:49 k8s-node2 kubelet[2395]: I0316 00:07:49.917613    2395 apiserver.go:52] "Watching apiserver"
Mar 16 00:07:49 k8s-node2 kubelet[2395]: I0316 00:07:49.930444    2395 reconciler.go:169] "Reconciler: start to sync state"

But now the node is visible:

[root@centos18 cfg]# kubectl  get no
NAME       STATUS     ROLES    AGE   VERSION
centos18   NotReady   <none>   25m   v1.25.5

At this stage you can also see resources such as ns, svc, secret, and clusterroles; other resources aren't visible yet, mainly because the network plugin hasn't been installed:

[root@centos18 cfg]# kubectl get ns
NAME              STATUS   AGE
default           Active   26h
kube-node-lease   Active   26h
kube-public       Active   26h
kube-system       Active   26h
[root@centos18 cfg]# kubectl get svc -A
NAMESPACE   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   144m
[root@centos18 cfg]# kubectl get secret -A
NAMESPACE     NAME                     TYPE                            DATA   AGE
kube-system   bootstrap-token-4ea6ae   bootstrap.kubernetes.io/token   6      40m
[root@centos18 cfg]# kubectl get clusterroles
NAME                                                                   CREATED AT
admin                                                                  2025-03-01T03:02:39Z
cluster-admin                                                          2025-03-01T03:02:39Z
edit                                                                   2025-03-01T03:02:39Z
system:aggregate-to-admin                                              2025-03-01T03:02:39Z
system:aggregate-to-edit                                               2025-03-01T03:02:39Z
system:aggregate-to-view                                               2025-03-01T03:02:39Z
system:auth-delegator                                                  2025-03-01T03:02:39Z
system:basic-user                                                      2025-03-01T03:02:39Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient       2025-03-01T03:02:39Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   2025-03-01T03:02:39Z
system:certificates.k8s.io:kube-apiserver-client-approver              2025-03-01T03:02:39Z
system:certificates.k8s.io:kube-apiserver-client-kubelet-approver      2025-03-01T03:02:39Z
system:certificates.k8s.io:kubelet-serving-approver                    2025-03-01T03:02:39Z
system:certificates.k8s.io:legacy-unknown-approver                     2025-03-01T03:02:39Z
system:controller:attachdetach-controller                              2025-03-01T03:02:39Z
system:controller:certificate-controller                               2025-03-01T03:02:39Z
system:controller:clusterrole-aggregation-controller                   2025-03-01T03:02:39Z
system:controller:cronjob-controller                                   2025-03-01T03:02:39Z
system:controller:daemon-set-controller                                2025-03-01T03:02:39Z
system:controller:deployment-controller                                2025-03-01T03:02:39Z
system:controller:disruption-controller                                2025-03-01T03:02:39Z
system:controller:endpoint-controller                                  2025-03-01T03:02:39Z
system:controller:endpointslice-controller                             2025-03-01T03:02:39Z
system:controller:endpointslicemirroring-controller                    2025-03-01T03:02:39Z
system:controller:ephemeral-volume-controller                          2025-03-01T03:02:39Z
system:controller:expand-controller                                    2025-03-01T03:02:39Z
system:controller:generic-garbage-collector                            2025-03-01T03:02:39Z
system:controller:horizontal-pod-autoscaler                            2025-03-01T03:02:39Z
system:controller:job-controller                                       2025-03-01T03:02:39Z
system:controller:namespace-controller                                 2025-03-01T03:02:39Z
system:controller:node-controller                                      2025-03-01T03:02:39Z
system:controller:persistent-volume-binder                             2025-03-01T03:02:39Z
system:controller:pod-garbage-collector                                2025-03-01T03:02:39Z
system:controller:pv-protection-controller                             2025-03-01T03:02:39Z
system:controller:pvc-protection-controller                            2025-03-01T03:02:39Z
system:controller:replicaset-controller                                2025-03-01T03:02:39Z
system:controller:replication-controller                               2025-03-01T03:02:39Z
system:controller:resourcequota-controller                             2025-03-01T03:02:39Z
system:controller:root-ca-cert-publisher                               2025-03-01T03:02:39Z
system:controller:route-controller                                     2025-03-01T03:02:39Z
system:controller:service-account-controller                           2025-03-01T03:02:39Z
system:controller:service-controller                                   2025-03-01T03:02:39Z
system:controller:statefulset-controller                               2025-03-01T03:02:39Z
system:controller:ttl-after-finished-controller                        2025-03-01T03:02:39Z
system:controller:ttl-controller                                       2025-03-01T03:02:39Z
system:discovery                                                       2025-03-01T03:02:39Z
system:heapster                                                        2025-03-01T03:02:39Z
system:kube-aggregator                                                 2025-03-01T03:02:39Z
system:kube-apiserver-to-kubelet                                       2025-03-02T05:17:11Z
system:kube-controller-manager                                         2025-03-01T03:02:39Z
system:kube-dns                                                        2025-03-01T03:02:39Z
system:kube-scheduler                                                  2025-03-01T03:02:39Z
system:kubelet-api-admin                                               2025-03-01T03:02:39Z
system:monitoring                                                      2025-03-01T03:02:39Z
system:node                                                            2025-03-01T03:02:39Z
system:node-bootstrapper                                               2025-03-01T03:02:39Z
system:node-problem-detector                                           2025-03-01T03:02:39Z
system:node-proxier                                                    2025-03-01T03:02:39Z
system:persistent-volume-provisioner                                   2025-03-01T03:02:39Z
system:public-info-viewer                                              2025-03-01T03:02:39Z
system:service-account-issuer-discovery                                2025-03-01T03:02:39Z
system:volume-scheduler                                                2025-03-01T03:02:39Z
view                                                                   2025-03-01T03:02:39Z

Start the kube-proxy service:

Generate the kube-proxy.kubeconfig file:

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.123.15:6443 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

kubectl config use-context kube-proxy@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

Create the kube-proxy systemd unit:

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-proxy \\
--config=/opt/kubernetes/cfg/kube-proxy.yaml \\
--cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\
--v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

cat > /opt/kubernetes/cfg/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00:2222::/112
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

systemctl enable --now kube-proxy

The service's final status should look like this:

[root@k8s-master ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2025-03-15 23:53:38 CST; 16min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4753 (kube-proxy)
    Tasks: 9
   Memory: 22.7M
   CGroup: /system.slice/kube-proxy.service
           └─4753 /opt/kubernetes/bin/kube-proxy --config=/opt/kubernetes/cfg/kube-proxy.yaml --cluster-cidr=172.16.0.0/12,fc00:2222::/112 --v=2

Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.664756    4753 shared_informer.go:255] Waiting for caches to sync for node config
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.666727    4753 service.go:324] "Service updated ports" service="default/kubernetes" portCount=1
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.666743    4753 proxier.go:1009] "Not syncing ipvs rules until Services and Endpoints have been received from master"
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.666763    4753 proxier.go:1009] "Not syncing ipvs rules until Services and Endpoints have been received from master"
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.764797    4753 shared_informer.go:262] Caches are synced for service config
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.764819    4753 shared_informer.go:262] Caches are synced for node config
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.764837    4753 shared_informer.go:262] Caches are synced for endpoint slice config
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.764836    4753 proxier.go:1009] "Not syncing ipvs rules until Services and Endpoints have been received from master"
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.764849    4753 proxier.go:1009] "Not syncing ipvs rules until Services and Endpoints have been received from master"
Mar 15 23:53:38 k8s-master kube-proxy[4753]: I0315 23:53:38.764874    4753 service.go:440] "Adding new service port" portName="default/kubernetes:https" servicePort="10.0.0.1:443/TCP"
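To confirm kube-proxy really came up in IPVS mode, query the metrics endpoint it binds to 127.0.0.1:10249 per the config above (the /proxyMode endpoint is served by stock kube-proxy):

curl 127.0.0.1:10249/proxyMode   # should print: ipvs
ipvsadm -Ln                      # inspect the IPVS rule table directly (requires the ipvsadm package)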

Network configuration: installing the network plugin (flannel):

cat >kube-flannel.yml <<EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.16.0.0/12",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: ghcr.io/flannel-io/flannel:v0.26.4
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: ghcr.io/flannel-io/flannel:v0.26.4
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        # - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
EOF
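Apply the manifest and watch the DaemonSet roll out; once flannel is running on every node, the nodes should flip to Ready (standard kubectl steps, assuming admin credentials are configured):

kubectl apply -f kube-flannel.yml
kubectl -n kube-system get pods -l app=flannel -o wide   # one pod per node, all Running
kubectl get nodes -o wide                                # nodes should report Ready once the CNI is up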

A few notes on the networking side: 1.25 no longer supports PSP (PodSecurityPolicy), so the manifest above carries no PSP objects; the `- --kube-subnet-mgr` line is left commented out; and the net-conf.json in the manifest uses the CIDR 172.16.0.0/12, which you can cross-check as shown below.
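The IPv4 pod network in flannel's net-conf.json has to match the IPv4 half of the clusterCIDR handed to kube-proxy, or masquerading and service routing will misbehave. A quick way to eyeball both values (paths as used in this walkthrough):

grep clusterCIDR /opt/kubernetes/cfg/kube-proxy.yaml
grep -A 4 '"Network"' kube-flannel.yml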

Do we need CoreDNS? Not necessarily just to get the cluster up, and I have not yet decided how to install ingress either, so the cluster installation ends here — it has been quite a slog!

kind: ClusterRole"Network": "172.16.0.0/12","EnableNFTables": false,"Backend": {"Type": "vxlan"}}

The deployment of CoreDNS, ingress, and kubeboard will be covered in a follow-up.

