Setting up a Kubernetes cluster on Ubuntu with kubeadm

I. Uninstalling k8s (cleanup of a previous install)

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/    # optional — deleting your kubeconfig is a common pitfall, think first
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
apt clean
apt remove kube*
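A quick sanity check after the cleanup — both commands should come back empty (a sketch; adjust the paths if you installed binaries elsewhere):

which kubeadm kubelet kubectl
ls /etc/kubernetes /etc/cni 2>/dev/null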

Downloading Ubuntu

Reference: oldubuntu-releases mirror — download address and install guide (Alibaba open source mirror site)

It is recommended not to use the desktop image; use the server image instead. Version chosen: 22.04 (ubuntu-22.04.4-live-server-amd64.iso).

After installation, log in to the server and replace the apt sources.

Reference: ubuntu | Mirror usage help | Tsinghua Open Source Mirror

Replace the sources following the instructions for your release.

II. Installing a k8s cluster (one master, two workers)

k8s-master01   192.168.124.132   OS: Ubuntu 22.04

k8s-node01     192.168.124.133   OS: Ubuntu 22.04

k8s-node02     192.168.124.134   OS: Ubuntu 22.04

Minimum spec: 2 CPU cores, 2 GB RAM, 20 GB disk

1. Environment preparation (run on every server)

1) Time synchronization

timedatectl set-timezone Asia/Shanghai

sudo apt install ntpdate

sudo ntpdate ntp.ubuntu.com
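ntpdate is a one-shot sync. For continuously synchronized clocks, the systemd-timesyncd service that ships with Ubuntu server can be enabled instead — a minimal sketch:

sudo timedatectl set-ntp true
timedatectl status    # should report "System clock synchronized: yes" once synced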

2) Assign a static IP address

Reference: Ubuntu — pinning a VM's IP address (CSDN blog)
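On Ubuntu server this is done with netplan. A minimal sketch for the master node — the interface name (ens33), gateway, and DNS server are placeholders for your own environment:

cat > /etc/netplan/01-static.yaml << EOF
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: false
      addresses: [192.168.124.132/24]
      routes:
        - to: default
          via: 192.168.124.2
      nameservers:
        addresses: [223.5.5.5]
EOF
netplan apply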

3) Set the hostnames

[root@k8s-master1 ~]# hostnamectl set-hostname k8s-master
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
[root@k8s-node2 ~]# hostnamectl set-hostname k8s-node2

4) Disable swap

# temporarily disable all swap
swapoff -a
# permanently disable all swap
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
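Verify swap is really off — the Swap line should be all zeros and swapon should print nothing:

free -h
swapon --show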

5) On every node, add the cluster IPs and hostnames to /etc/hosts (substitute your own addresses):

cat >> /etc/hosts << EOF
192.168.124.132 k8s-master
192.168.124.133 k8s-node1
192.168.124.134 k8s-node2
EOF
Configure kernel forwarding and bridge filtering:

cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF

Load the modules:

modprobe overlay
modprobe br_netfilter

Check that they are loaded:

lsmod | grep overlay
lsmod | grep br_netfilter

Enable IP forwarding and bridge filtering via sysctl:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the configuration:

sysctl -p /etc/sysctl.d/k8s.conf

Verify it took effect:

lsmod |grep br_netfilter
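The individual values can also be queried directly to confirm each is set to 1:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables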

6) Disable the firewall

sudo systemctl disable --now ufw

7) Replace Ubuntu's sources.list

Reference: ubuntu | Mirror usage help | Tsinghua Open Source Mirror

Note: the release codename must match your install — 22.04 is jammy, so the codename in the Tsinghua template must be adjusted accordingly:

# Source mirrors are commented out by default to speed up apt update; uncomment if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-backports main restricted universe multiverse

# The security-update sources below use the official server; switch the comments to use the mirror instead
deb http://security.ubuntu.com/ubuntu/ jammy-security main restricted universe multiverse
# deb-src http://security.ubuntu.com/ubuntu/ jammy-security main restricted universe multiverse

# Pre-release (proposed) sources — not recommended
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-proposed main restricted universe multiverse

8) Install ipset and ipvsadm

apt install ipset ipvsadm

Configure module loading for ipvs. Files under /etc/modules-load.d/ are read by systemd-modules-load at boot and must contain bare module names, one per line:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

To load the modules immediately, write the modprobe calls into a small script:

cat << EOF | tee ipvs.sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Run it and check that the modules are loaded:

bash ipvs.sh && lsmod | grep -e ip_vs -e nf_conntrack

9) Container runtime: containerd

wget https://github.com/containerd/containerd/releases/download/v1.7.15/cri-containerd-1.7.15-linux-amd64.tar.gz

If the GitHub download stalls due to network issues, a copy is available at: https://pan.baidu.com/s/1VoYMDB6ikOTSn4W-tziY9g (extraction code: 3cvd)

Extract and verify:

tar xf cri-containerd-1.7.15-linux-amd64.tar.gz -C /

which containerd
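The cri-containerd bundle also ships runc (under /usr/local/sbin) and a containerd systemd unit, so both can be checked right away:

runc --version
ls /etc/systemd/system/containerd.service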

10) Generate and modify the containerd config file

Create the directory:

mkdir /etc/containerd

Generate the default config:

containerd config default > /etc/containerd/config.toml

In the config, change the pause image tag from 3.8 to 3.9 — or better, point it at the Aliyun mirror: registry.aliyuncs.com/google_containers/pause:3.9

vim /etc/containerd/config.toml

Using the Aliyun mirror is recommended. Only a few settings differ from what containerd config default generates; everything else keeps the defaults:

version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # pull the sandbox (pause) image from the Aliyun mirror instead of registry.k8s.io
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # use the systemd cgroup driver, matching the kubelet
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".registry]
  # per-registry mirror configuration is read from hosts.toml files under this directory
  config_path = "/etc/containerd/certs.d"
Create the matching directory and configure a pull mirror for docker.io:

mkdir -p /etc/containerd/certs.d/docker.io

cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://x46sxvnb.mirror.aliyuncs.com"]
capabilities = ["pull", "resolve"]
EOF

Start containerd and enable it at boot:

systemctl enable --now containerd

Check the version:

containerd --version

Check the status:

systemctl status containerd.service
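Once containerd is running, a test pull through the CRI confirms the mirror wiring (the image is just an example; if crictl complains about its endpoint, point it at containerd's socket as described in section 2.4 further below):

crictl pull docker.io/library/busybox:latest
crictl images | grep busybox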

2. Cluster deployment (run on every server)

1) Download the public signing key for the Kubernetes package repository


Kubernetes community repo (choose either this or the Aliyun repo below; with network issues, use Aliyun).

Create the keyring directory:

sudo mkdir -p /etc/apt/keyrings/

Download the key — community repo:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

or Aliyun repo:

curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the Kubernetes apt repository — community:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

or Aliyun:

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the package index:

apt-get update

Check the available versions:

apt-cache policy kubeadm

Install a pinned version:

sudo apt-get install -y kubelet=1.30.0-1.1 kubeadm=1.30.0-1.1 kubectl=1.30.0-1.1

Configure the kubelet cgroup driver (on Ubuntu the drop-in file is /etc/default/kubelet, not /etc/sysconfig/kubelet as on RHEL-family systems):

vim /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

Enable kubelet at boot:

systemctl enable kubelet

Hold the packages so they are not upgraded accidentally:

sudo apt-mark hold kubelet kubeadm kubectl

To allow upgrades again later:

sudo apt-mark unhold kubelet kubeadm kubectl
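Confirm the pinned versions and the hold:

kubeadm version -o short
kubelet --version
apt-mark showhold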

3. Cluster initialization (on the k8s-master01 node only)

Check the version:

kubeadm version

Generate a config file:

kubeadm config print init-defaults > kubeadm-config.yaml

The complete kubeadm-config.yaml — adjust the advertiseAddress and node name for your environment:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: 192.168.124.150
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master
  taints: null
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.30.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

List the required images:

kubeadm config images list --config kubeadm-config.yaml

Pull the images:

kubeadm config images pull --config kubeadm-config.yaml

List the pulled images:

crictl images


If the image pull fails with errors, pull directly from the Aliyun repository instead:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers

Initialize the cluster:

kubeadm init --config kubeadm-config.yaml

If this is not a fresh install, init may fail here; run kubeadm reset -f and try again.

If it still fails, inspect the kubelet logs with journalctl -xeu kubelet. These two posts cover the errors hit here:

"validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/containerd/..." (CSDN blog)

K8s init error: rpc error: code = Unknown desc = failed to get sandbox image "registry.aliyuncs.com/goog..." (CSDN blog)

With those fixed, the install succeeds:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.124.150:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:aeceb14ab601148161cd69b9e041f7ac09e79f5e45135fdac9c4ee3e42e1acb8 

Run the kubeadm join command above on each worker node (as root).
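The bootstrap token in the join command expires after 24 hours (the ttl in kubeadm-config.yaml). If it has expired, print a fresh join command on the master:

kubeadm token create --print-join-command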


After the workers have joined, go back to the master and check the nodes:

kubectl get node

All nodes have registered successfully (they stay NotReady until a network plugin is installed).

4. Network plugin installation (on the k8s-master01 node)

Check the Calico docs:

Quickstart for Calico on Kubernetes | Calico Documentation (tigera.io)

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml

Check that the operator is running:

kubectl get pods -n tigera-operator

kubectl get ns

Download the Calico custom resources with wget (copy the link from the docs) rather than piping the URL straight into kubectl, because the file needs editing before it is applied:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml

If the download is blocked, fetch it through a proxy. The file contents:

# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Edit the file — the pod subnet set in kubeadm-config.yaml is 10.244.0.0/16, so the cidr here must match:

vim custom-resources.yaml

Apply it:

kubectl create -f custom-resources.yaml

List the namespaces:

kubectl get ns

Check the pods in the calico-system namespace; if they are not all up yet, give it time — they are still being created, and on a slow network this can take half an hour:

kubectl get pods -n calico-system

Check the pods in kube-system:

kubectl get pods -n kube-system

Then check the cluster state again:

kubectl get nodes

 

Because the calico-system pods were stuck pulling images, delete the deployment first:

kubectl delete -f custom-resources.yaml

Choosing a network plugin: https://kubernetes.io/docs/concepts/cluster-administration/addons/
calicoctl quick start: https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/clis/calicoctl/install
Calico network plugin: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/

kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml 

https://docs.tigera.io/archive/v3.9/getting-started/kubernetes/

Download the calico.yaml file locally (note: this v3.9 manifest still uses apiextensions.k8s.io/v1beta1 CRDs, which were removed in Kubernetes 1.22, so it will not apply cleanly on a 1.30 cluster):

---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"
  # Configure the MTU to use
  veth_mtu: "1440"

The rest of the manifest — the cni_network_config template, the crd.projectcalico.org CustomResourceDefinitions (all apiextensions.k8s.io/v1beta1), the RBAC for calico-kube-controllers and calico-node, the calico-node DaemonSet (images calico/cni:v3.9.6, calico/node:v3.9.6, calico/pod2daemon-flexvol:v3.9.6), and the calico-kube-controllers Deployment (calico/kube-controllers:v3.9.6) — matches the upstream file at https://docs.projectcalico.org/v3.9/manifests/calico.yaml, so fetch it from there rather than copying it from this page. If you do use it, change CALICO_IPV4POOL_CIDR in the calico-node DaemonSet from the default 192.168.0.0/16 to 10.244.0.0/16 to match the cluster's pod subnet.

Watch all pods in the cluster:

kubectl get pods --all-namespaces -w

5. Due to network restrictions in China, the Calico images cannot be pulled

Delete the Calico resources above and install the flannel network plugin instead.

Method one from this post solved it:

kubernetes (1.28) flannel: kubelet cannot pull the image (NotReady, ImagePullBackOff), plus configuring a private Harbor registry (CSDN blog)

Offline copies of the required files: https://pan.baidu.com/s/1bkDHddFxhRfFS3iiJv8ScA (extraction code: d8bg)

kube-flannel.yml source: GitHub — flannel-io/flannel: flannel is a network fabric for containers, designed for Kubernetes

1) Install Docker

Reference: installing a specific Docker version on Ubuntu (CSDN blog)

2) Use docker to load the downloaded archives as local images.

However, since 1.24 kubelet has dropped dockershim entirely and defaults to containerd, so a local docker image cannot be used by the cluster even if the yml points at it. More steps are needed.

1. Save the docker images as tars (ctr cannot import the downloaded archives directly, so docker acts as a go-between):

# docker save -o <output tar> <local image>
docker save -o flannel-flannel-cni-plugin-v1.4.1-flannel1-amd64.tar flannel/flannel-cni-plugin:v1.4.1-flannel1
docker save -o flannel-flannel-v0.25.1-amd64.tar flannel/flannel:v0.25.1

Both tar files now sit in the current directory.

2. Import the tar archives into containerd's k8s.io namespace:

# -n selects the containerd namespace
ctr -n k8s.io images import flannel-flannel-v0.25.1-amd64.tar
ctr -n k8s.io images import flannel-flannel-cni-plugin-v1.4.1-flannel1-amd64.tar


On success, verify that the flannel images are now in the k8s.io namespace:

ctr -n k8s.io i check | grep flannel

Note: the containerd namespace Kubernetes uses is k8s.io — only images imported into it can be pulled locally by the kubelet. This cluster runs with containerd (Kubernetes ≥ 1.24), so images come from containerd's local store rather than docker's. On pre-1.24 clusters that still default to docker, the save/import steps above are unnecessary, because the kubelet can use local docker images directly.
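The namespaces containerd knows about can be listed to confirm k8s.io exists (docker itself uses the moby namespace, and plain ctr defaults to default):

ctr namespaces ls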

3. Modify kube-flannel.yml

Download the kube-flannel.yml mentioned at the start of this section, change every image: reference, and add an image pull policy of Never so the kubelet uses the imported local images.

The complete, modified kube-flannel.yml:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.25.1
        imagePullPolicy: Never
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.4.1-flannel1
        imagePullPolicy: Never
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.25.1
        imagePullPolicy: Never
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
# delete the resources
kubectl delete -f kube-flannel.yml

# re-apply
kubectl apply -f kube-flannel.yml
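Then watch the flannel pods come up and the nodes flip to Ready:

kubectl -n kube-flannel get pods -o wide
kubectl get nodes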

With that, the cluster finally came up — all nodes Ready!

Method one solved the network plugin problem here, but using a private registry is the more elegant fix!

Option 2: on k8s 1.28, configure a Harbor registry for the cluster, push the images to it, and pull them from there.
For versions before 1.24 there are plenty of guides for pointing docker at a Harbor registry (the key is /etc/docker/daemon.json), so that is not repeated here.

After 1.24, with dockershim removed and containerd the default, containerd cannot simply "docker login <registry>" into a private registry; even if docker on the host is logged in to Harbor, the cluster still cannot pull from it. The containerd configuration has to be changed.

containerd here is 1.6; the old approach of hard-coding registry mirrors in /etc/containerd/config.toml is no longer recommended.

See the official docs: containerd/docs/cri/config.md at main · containerd/containerd · GitHub

Configure a hosts.toml to point the cluster at the Harbor registry.
Steps (2.1–2.4 must be done on every machine!):

2.1 Edit /etc/containerd/config.toml — ensure the registry section sets config_path = "/etc/containerd/certs.d", as in the config shown earlier.

2.2 Self-signed certificates

Bypassing TLS verification (see containerd/docs/hosts.md at main · containerd/containerd · GitHub):

# create the config directory — replace the IP and port with your registry's externally reachable address
mkdir -p /etc/containerd/certs.d/192.168.12.34:5000
# create and open hosts.toml
vi /etc/containerd/certs.d/192.168.12.34:5000/hosts.toml

# write the following into hosts.toml, substituting the same IP and port
server = "http://192.168.12.34:5000"

[host."192.168.12.34:5000"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
2.3 Restart containerd:

systemctl daemon-reload
systemctl restart containerd
2.4 Even after the steps above, crictl and cluster manifests may still fail to pull from the private registry.

Fix:

vim /etc/crictl.yaml

Set both the runtime and image endpoints to "unix:///run/containerd/containerd.sock":

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock

Save and quit with :wq.

2.5 Test pushing the images to Harbor

Upload with docker (the traditional way):

docker tag flannel/flannel-cni-plugin:v1.4.1-flannel1 IP:PORT/flannel/flannel-cni-plugin:v1.4.1-flannel1
docker push IP:PORT/flannel/flannel-cni-plugin:v1.4.1-flannel1
Do the same for the other image (if the push errors out, you may first need to create a flannel project in Harbor).
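Before touching the yml, it is worth confirming the cluster side can pull from Harbor through the hosts.toml configured above (IP:PORT substituted as before):

crictl pull IP:PORT/flannel/flannel-cni-plugin:v1.4.1-flannel1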

2.6 Modify kube-flannel.yml

Point every image: reference at the Harbor registry (IP:PORT/flannel/...), save with :wq, then delete and re-apply kube-flannel.yml — the images now pull successfully (verify with kubectl describe on the pods).

Summary
Option two is more production-like than option one: a cluster should have a private registry anyway, and it saves importing images on every single node. If no private registry is set up yet, option one works fine.

One more detail: /etc/containerd/config.toml has no effect on ctr; that file is consumed by the CRI plugin, i.e. it applies to crictl (and the kubelet).

If ctr images pull keeps failing with "http: server gave HTTP response to HTTPS client", the configuration may be fine — ctr just needs --plain-http appended to the pull to talk to the registry over HTTP, or --hosts-dir to point it at the hosts.toml directory:

ctr images pull IP:PORT/library/service:v1.0.0 --plain-http
ctr images pull IP:PORT/library/service:v1.0.0 --hosts-dir "/etc/containerd/certs.d"

Thanks to: ubuntu系统 kubeadm方式搭建k8s集群 (CSDN blog)
