Xinchuang IT Localization: Offline Deployment of Kubernetes and KubeSphere on Kunpeng (arm64) + Kylin V10 (Including a New Offline Deployment Approach)

This article walks through building an offline installation package for KubeSphere and Kubernetes with KubeKey on Kunpeng CPUs (arm64) running Kylin V10 SP2/SP3, and then deploying a KubeSphere 3.3.1 and Kubernetes 1.22.12 cluster from that package.

Server Configuration

Hostname   IP              CPU           OS              Purpose
master-1   192.168.10.2    Kunpeng-920   Kylin V10 SP2   Offline environment, KubeSphere/k8s master
master-2   192.168.10.3    Kunpeng-920   Kylin V10 SP2   Offline environment, KubeSphere/k8s master
master-3   192.168.10.4    Kunpeng-920   Kylin V10 SP2   Offline environment, KubeSphere/k8s master
deploy     192.168.200.7   Kunpeng-920   Kylin V10 SP3   Internet-connected host for building the offline package

Software versions used in this walkthrough

  • Server CPU: Kunpeng-920

  • OS: Kylin V10 SP2/SP3 aarch64

  • Docker: 24.0.7

  • Harbor: v2.7.1

  • KubeSphere: v3.3.1

  • Kubernetes: v1.22.12

  • KubeKey: v2.3.1

Overview

This article describes how to build deployment artifacts for, and perform an offline deployment of, KubeSphere and Kubernetes on Kylin V10 aarch64 servers. We use KubeKey, the deployment tool developed by the KubeSphere team, to automate a minimal, highly available installation of Kubernetes and KubeSphere across three servers.

The biggest difference between deploying KubeSphere and Kubernetes on ARM servers versus x86 servers is the architecture of the container images every service runs on. The open-source edition of KubeSphere supports ARM out of the box only for KubeSphere-Core, i.e. a minimal KubeSphere on top of a complete Kubernetes cluster. Once you enable KubeSphere's pluggable components, a few of them will fail to deploy, and you have to manually substitute ARM images provided officially or by third parties, or build ARM images yourself from the official sources. For an out-of-the-box experience and fuller support you would need the enterprise edition of KubeSphere.
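
Before pulling or building anything, it is worth checking which architectures a given image actually ships for. A quick check with standard Docker CLI commands (the image name here is only an example):

# List the platforms published in an image's registry manifest
docker manifest inspect kubesphere/ks-console:v3.3.1 | grep -A 2 '"platform"'
# Show the architecture of an image that is already pulled locally
docker inspect --format '{{.Os}}/{{.Architecture}}' kubesphere/ks-console:v3.3.1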

1.1 Verify the Operating System Configuration

Before running the tasks below, verify the relevant operating system settings.

  • OS type

    [root@localhost ~]# cat /etc/os-release
    NAME="Kylin Linux Advanced Server"
    VERSION="V10 (Halberd)"
    ID="kylin"
    VERSION_ID="V10"
    PRETTY_NAME="Kylin Linux Advanced Server V10 (Halberd)"
    ANSI_COLOR="0;31"
  • OS kernel

    [root@node1 ~]# uname -r
    4.19.90-52.22.v2207.ky10.aarch64
  • Server CPU information

    [root@node1 ~]# lscpu
    Architecture:                    aarch64
    CPU op-mode(s):                  64-bit
    Byte Order:                      Little Endian
    CPU(s):                          32
    On-line CPU(s) list:             0-31
    Thread(s) per core:              1
    Core(s) per socket:              1
    Socket(s):                       32
    NUMA node(s):                    2
    Vendor ID:                       HiSilicon
    Model:                           0
    Model name:                      Kunpeng-920
    Stepping:                        0x1
    BogoMIPS:                        200.00
    NUMA node0 CPU(s):               0-15
    NUMA node1 CPU(s):               16-31
    Vulnerability Itlb multihit:     Not affected
    Vulnerability L1tf:              Not affected
    Vulnerability Mds:               Not affected
    Vulnerability Meltdown:          Not affected
    Vulnerability Spec store bypass: Not affected
    Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
    Vulnerability Spectre v2:        Not affected
    Vulnerability Srbds:             Not affected
    Vulnerability Tsx async abort:   Not affected
    Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm

2. Install the Kubernetes Dependency Services

We use the internet-connected deploy node to build the offline deployment resources. Because Harbor does not officially support ARM, we first install KubeSphere online and later treat the files KubeKey generates as a pseudo-artifact. KubeSphere is therefore deployed in single-node form on 192.168.200.7.

The deployment below is split into stages to make assembling the offline installation package easier.

2.1 Deploy Docker and docker-compose

For details, see part one of the following article:

天行1st, via the WeChat official account 编码如写诗 (Coding Like Writing Poetry): "Deploying KubeSphere 3.4 on Kunpeng + openEuler"

Here we use a pre-built installation package: upload it, extract it, and run the install.sh inside.

Link: https://pan.baidu.com/s/1j9EUonkZPfwQlDC9U1hRMQ?pwd=7y53
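
For reference, the install.sh inside an offline Docker package of this kind usually does little more than unpack the static binaries and register a systemd unit. A minimal sketch, assuming the package bundles the official static binary tarball and a prepared docker.service file (not the exact contents of the package above):

#!/bin/bash
# Sketch: install Docker offline from bundled static binaries (assumed package layout)
tar zxf docker-24.0.7.tgz
cp docker/* /usr/bin/
mkdir -p /etc/docker
cp daemon.json /etc/docker/                # pre-written daemon config shipped in the package
cp docker.service /etc/systemd/system/     # prepared systemd unit shipped in the package
systemctl daemon-reload
systemctl enable --now docker
docker version                             # sanity check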

2.2 Deploy Harbor

Upload the installation package, extract it, and run the install.sh inside.

Link: https://pan.baidu.com/s/1diy4-vxmWXDMp5MWYpaqMw?pwd=nr9y
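
The scripts that follow push to dockerhub.kubekey.local, so the deploy host (and later every cluster node) must be able to resolve that name and trust the registry. If you have not distributed Harbor's CA certificate, a workable fallback looks like this (an assumption about your Harbor setup; adjust as needed):

# Resolve the registry name locally (Harbor runs on the deploy node in this walkthrough)
echo "192.168.200.7 dockerhub.kubekey.local" >> /etc/hosts
# Either install Harbor's CA so Docker trusts it ...
mkdir -p /etc/docker/certs.d/dockerhub.kubekey.local
cp ca.crt /etc/docker/certs.d/dockerhub.kubekey.local/
# ... or declare the registry insecure in /etc/docker/daemon.json:
#     { "insecure-registries": ["dockerhub.kubekey.local"] }
# and then restart Docker: systemctl restart docker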

2.3 Download the Kylin Kubernetes Dependency Packages

mkdir -p /root/kubesphere/k8s-init
# This command downloads the dependencies into /root/kubesphere/k8s-init
yum -y install openssl socat conntrack ipset ebtables chrony ipvsadm --downloadonly --downloaddir /root/kubesphere/k8s-init
# Write the install script
cat install.sh
#!/bin/bash
rpm -ivh *.rpm --force --nodeps

# Package everything into a tarball for offline deployment
tar -czvf k8s-init-Kylin_V10-arm.tar.gz ./k8s-init/*

2.4 Download the KubeSphere Images

Download the ARM images required by KubeSphere 3.3.1:


#!/bin/bash
# Images required by KubeSphere 3.3.1 / Kubernetes 1.22.12 (ARM)
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:latest
docker pull kubesphere/fluent-bit:v2.0.6

These pulls use KubeSphere's Aliyun mirror; a few images will fail to download (see the 运维有术 article for details). Images that fail can be pulled on a local machine directly from hub.docker.com. For example:

docker pull kubesphere/fluent-bit:v2.0.6 --platform arm64
# The official ks-console:v3.3.1 ARM image does not run on Kylin; according to 运维有术 it must be rebuilt on a node14 base image.
# Building it ourselves on the Kunpeng server failed with an expired-HTTPS error from the Taobao npm registry, and
# https://registry.npmmirror.com failed as well, so we reuse this 3.3.0 image instead and retag it as 3.3.1.
docker pull zl862520682/ks-console:v3.3.0
docker tag zl862520682/ks-console:v3.3.0 dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
## mc and minio also need to be re-pulled and retagged
docker pull minio/minio:RELEASE.2020-11-25T22-36-25Z-arm64
docker tag minio/minio:RELEASE.2020-11-25T22-36-25Z-arm64 dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
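
When many pulls fail intermittently, a small retry wrapper saves manual babysitting. A sketch, assuming the image list above is saved one image per line in images.txt:

#!/bin/bash
# Retry each pull up to 5 times; record images that still fail in failed.txt
: > failed.txt
while read -r img; do
  ok=""
  for attempt in 1 2 3 4 5; do
    docker pull "$img" && { ok=1; break; }
    sleep 3
  done
  [ -z "$ok" ] && echo "$img" >> failed.txt
done < images.txt
cat failed.txt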

2.5 Retag the Images

Retag the images so they point at the private registry:

docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3 dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3 dockerhub.kubekey.local/kubesphereio/cni:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3 dockerhub.kubekey.local/kubesphereio/node:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14 dockerhub.kubekey.local/kubesphereio/alpine:3.14
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.1 dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0 dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0 dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11 dockerhub.kubekey.local/kubesphereio/fluent-bit:v1.8.11
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1 dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1 dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2 dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0 dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0 dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1 dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0 dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0 dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0 dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0 dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0 dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0 dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03 dockerhub.kubekey.local/kubesphereio/docker:19.03
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2 dockerhub.kubekey.local/kubesphereio/metrics-server:v0.4.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5 dockerhub.kubekey.local/kubesphereio/pause:3.5
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0 dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0 dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0 dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0 dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1 dockerhub.kubekey.local/kubesphereio/log-sidecar-injector:1.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4 dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2 dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2 dockerhub.kubekey.local/kubesphereio/cni:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2 dockerhub.kubekey.local/kubesphereio/node:v3.23.2
# redis and haproxy are needed by the push and save steps below
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine dockerhub.kubekey.local/kubesphereio/redis:5.0.14-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3 dockerhub.kubekey.local/kubesphereio/haproxy:2.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0 dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:latest dockerhub.kubekey.local/kubesphereio/busybox:latest
docker tag kubesphere/fluent-bit:v2.0.6 dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6  # could also be retagged as v1.8.11 to skip editing the fluent-bit YAML later; here we edit the YAML afterwards instead
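
Rather than maintaining one docker tag line per image, the retagging can also be driven by a loop that rewrites the Aliyun prefix to the private registry. A sketch; note it does not cover the handful of images above whose name or tag intentionally changes, so review its output:

#!/bin/bash
SRC=registry.cn-beijing.aliyuncs.com/kubesphereio
DST=dockerhub.kubekey.local/kubesphereio
docker images --format '{{.Repository}}:{{.Tag}}' | grep "^${SRC}/" | while read -r img; do
  docker tag "$img" "${DST}/${img#${SRC}/}"
done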

2.6 Push to the Private Harbor Registry
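
The push below assumes the kubesphereio project already exists in Harbor. The create_project_harbor.sh referenced later in this article can be as small as one call to Harbor's v2 REST API; a sketch using this article's default URL and credentials:

#!/usr/bin/env bash
# Create the kubesphereio project via Harbor's v2 API (-k tolerates the self-signed certificate)
url="https://dockerhub.kubekey.local"
user="admin"
passwd="Harbor12345"
curl -k -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" \
  "${url}/api/v2.0/projects" \
  -d '{"project_name": "kubesphereio", "public": true}'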

#!/bin/bash
# docker load < ks3.3.1-images.tar.gz
docker login -u admin -p Harbor12345 dockerhub.kubekey.local
docker push dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/alpine:3.14
docker push dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
docker push dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
docker push dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/cni:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/node:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0
docker push dockerhub.kubekey.local/kubesphereio/fluent-bit:v1.8.11
docker push dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
docker push dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
docker push dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2
docker push dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0
docker push dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0
docker push dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
docker push dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
docker push dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0
docker push dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
docker push dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0
docker push dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
docker push dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
docker push dockerhub.kubekey.local/kubesphereio/docker:19.03
docker push dockerhub.kubekey.local/kubesphereio/pause:3.5
docker push dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0
docker push dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
docker push dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0
docker push dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
docker push dockerhub.kubekey.local/kubesphereio/log-sidecar-injector:1.1
docker push dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
docker push dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
docker push dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
docker push dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
docker push dockerhub.kubekey.local/kubesphereio/redis:5.0.14-alpine
docker push dockerhub.kubekey.local/kubesphereio/haproxy:2.3
docker push dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0
docker push dockerhub.kubekey.local/kubesphereio/busybox:latest
docker push dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6

3. Deploy KubeSphere with kk

3.1 Remove the Podman Bundled with Kylin

Podman is the container engine that ships with Kylin. Uninstall it to avoid conflicts with Docker; otherwise coredns/nodelocaldns will fail to start later and you will run into assorted Docker permission problems.

yum remove podman

3.2 Download KubeKey

Download kubekey-v2.3.1-linux-arm64.tar.gz. Available KubeKey versions are listed on the KubeKey releases page ([1]).

  • Option 1

    cd ~
    mkdir kubesphere
    cd kubesphere/
    # Use the China download zone (when GitHub access is restricted)
    export KKZONE=cn
    # Download kk (network permitting, you may need to run this several times)
    curl -sfL https://get-kk.kubesphere.io/v2.3.1/kubekey-v2.3.1-linux-arm64.tar.gz | tar xzf -
  • Option 2

        On a local machine, download the release directly from GitHub: Releases · kubesphere/kubekey

        Upload it to /root/kubesphere on the server and extract it:

tar zxf kubekey-v2.3.1-linux-arm64.tar.gz
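
Whichever way you obtained it, verify the binary actually runs on the ARM host before proceeding:

./kk version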

3.3 Generate the Cluster Configuration File

Create the cluster configuration file. This example uses KubeSphere v3.3.1 and Kubernetes v1.22.12.

./kk create config -f kubesphere-v331-v12212.yaml --with-kubernetes v1.22.12 --with-kubesphere v3.3.1

After the command succeeds, a configuration file named kubesphere-v331-v12212.yaml is created in the current directory.

Note: the generated default configuration is long, so it is not reproduced in full here; see the official configuration example for all parameters.

The final cluster in this article uses 3 nodes that each serve as control-plane, etcd, and worker nodes.

Edit the configuration file kubesphere-v331-v12212.yaml, changing mainly the kind: Cluster and kind: ClusterConfiguration sections.

In the kind: Cluster section, adjust hosts, roleGroups, and related settings as follows.

  • hosts: set each node's IP, SSH user, SSH password, and SSH port. Make sure to set arch: arm64 explicitly, otherwise x86 packages will be installed during deployment.

  • roleGroups: specify 3 etcd and control-plane nodes, reusing the same machines as the 3 worker nodes.

  • internalLoadbalancer: enable the built-in HAProxy load balancer.

  • domain: a custom domain, opsman.top.

  • containerManager: containerd.

  • storage.openebs.basePath: new setting; sets the default storage path to /data/openebs/local.

The modified example below is the single-node configuration used on the deploy host (a sketch of the three-node variant follows the file):

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.200.7, internalAddress: 192.168.200.7, user: root, password: "123456", arch: arm64}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
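
The file above is the single-node configuration used on the deploy host. For the final three-node offline cluster, the hosts and roleGroups sections enumerate the three masters from the table at the beginning of this article instead; a sketch (user, password, and the enabled internalLoadbalancer are placeholders/assumptions to adapt):

spec:
  hosts:
  - {name: master-1, address: 192.168.10.2, internalAddress: 192.168.10.2, user: root, password: "YourPassword", arch: arm64}
  - {name: master-2, address: 192.168.10.3, internalAddress: 192.168.10.3, user: root, password: "YourPassword", arch: arm64}
  - {name: master-3, address: 192.168.10.4, internalAddress: 192.168.10.4, user: root, password: "YourPassword", arch: arm64}
  roleGroups:
    etcd:
    - master-1
    - master-2
    - master-3
    control-plane:
    - master-1
    - master-2
    - master-3
    worker:
    - master-1
    - master-2
    - master-3
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy   # built-in HAProxy for HA access to the apiservers
    domain: lb.kubesphere.local
    address: ""
    port: 6443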

3.4 Run the Installation

./kk create cluster -f kubesphere-v331-v12212.yaml

The reason we install KubeSphere on this node at all is that during installation kk creates a kubekey directory and downloads every dependency Kubernetes needs into it. The offline installation later relies mainly on that kubekey directory plus the scripts below, which together replace the usual KubeKey artifact.

4. Build the Offline Deployment Resources

4.1 Export the Kubernetes Base Dependency Packages

yum -y install openssl socat conntrack ipset ebtables chrony ipvsadm --downloadonly --downloaddir /root/kubesphere/k8s-init
# Package into a tarball
tar -czvf k8s-init-Kylin_V10-arm.tar.gz ./k8s-init/*

4.2 Export the KubeSphere Images

Export the KubeSphere images to ks3.3.1-images.tar:

docker save -o ks3.3.1-images.tar \
  dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.3 \
  dockerhub.kubekey.local/kubesphereio/cni:v3.27.3 \
  dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.27.3 \
  dockerhub.kubekey.local/kubesphereio/node:v3.27.3 \
  dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1 \
  dockerhub.kubekey.local/kubesphereio/alpine:3.14 \
  dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20 \
  dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1 \
  dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1 \
  dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1 \
  dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1 \
  dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12 \
  dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12 \
  dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12 \
  dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12 \
  dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0 \
  dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0 \
  dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0 \
  dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6 \
  dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1 \
  dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1 \
  dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2 \
  dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0 \
  dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0 \
  dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1 \
  dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0 \
  dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0 \
  dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0 \
  dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0 \
  dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0 \
  dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0 \
  dockerhub.kubekey.local/kubesphereio/docker:19.03 \
  dockerhub.kubekey.local/kubesphereio/metrics-server:v0.4.2 \
  dockerhub.kubekey.local/kubesphereio/pause:3.5 \
  dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0 \
  dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0 \
  dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z \
  dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z \
  dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0 \
  dockerhub.kubekey.local/kubesphereio/coredns:1.8.0 \
  dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4 \
  dockerhub.kubekey.local/kubesphereio/redis:5.0.14-alpine \
  dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12 \
  dockerhub.kubekey.local/kubesphereio/node:v3.23.2 \
  dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2 \
  dockerhub.kubekey.local/kubesphereio/cni:v3.23.2 \
  dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2 \
  dockerhub.kubekey.local/kubesphereio/haproxy:2.3 \
  dockerhub.kubekey.local/kubesphereio/busybox:latest \
  dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0

Compress it:

gzip ks3.3.1-images.tar

4.3 Export the kubesphere Directory

[root@node1 ~]# cd /root/kubesphere
[root@node1 kubesphere]# ls
create_project_harbor.sh  docker-24.0.7-arm.tar.gz  fluent-bit-daemonset.yaml  harbor-arm.tar.gz  harbor.tar.gz  install.sh  k8s-init-Kylin_V10-arm.tar.gz  ks3.3.1-images.tar.gz  ks3.3.1-offline  push-images.sh

# From /root, pack the whole directory:
tar -czvf kubeshpere.tar.gz ./kubesphere/*
Write an install.sh for one-click offline installation later:

#!/usr/bin/env bash
read -p "Please first edit the cluster config file ks3.3.1-offline/kubesphere-v331-v12212.yaml and set the correct IP addresses. Already done? (yes/no) " B

do_k8s_init(){
  echo "--------Initializing dependency packages------"
  yum remove podman -y
  tar zxf k8s-init-Kylin_V10-arm.tar.gz
  cd k8s-init && ./install.sh
  cd -
  rm -rf k8s-init
}

install_docker(){
  echo "--------Installing docker--------"
  tar zxf docker-24.0.7-arm.tar.gz
  cd docker && ./install.sh
  cd -
}

install_harbor(){
  echo "-------Installing harbor----------"
  tar zxf harbor-arm.tar.gz
  cd harbor && ./install.sh
  cd -
  echo "--------Pushing images----------"
  source create_project_harbor.sh
  source push-images.sh
  echo "--------Image push complete--------"
}

install_ks(){
  echo "--------Installing kubesphere--------"
  # tar zxf ks3.3.1-offline.tar.gz
  cd ks3.3.1-offline && ./install.sh
}

if [ "$B" = "yes" ] || [ "$B" = "y" ]; then
  do_k8s_init
  install_docker
  install_harbor
  install_ks
else
  echo "Please edit the cluster config file first"
  exit 1
fi

5. Install Kubernetes and KubeSphere in the Offline Environment

5.1 Remove Podman and Install the Kubernetes Dependencies

This step is required on every node.

Upload k8s-init-Kylin_V10-arm.tar.gz, extract it, and run install.sh. For a single-node offline deployment you can go straight to the next step.

yum remove podman -y

5.2 Install the KubeSphere Cluster

Upload kubeshpere.tar.gz and extract it. In the cluster configuration file ./kubesphere/ks3.3.1-offline/kubesphere-v331-v12212.yaml, update the IPs, passwords, and other details, then run install.sh. After roughly ten minutes you should see:

**************************************************
Waiting for all tasks to be completed ...
task alerting status is successful  (1/6)
task network status is successful  (2/6)
task multicluster status is successful  (3/6)
task openpitrix status is successful  (4/6)
task logging status is successful  (5/6)
task monitoring status is successful  (6/6)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.10.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-07-03 11:10:11
#####################################################

5.3 Other Adjustments

Because logging is enabled and the ARM Kylin build of fluent-bit keeps crashing, change its version to 2.0.6:

kubectl edit daemonsets fluent-bit -n kubesphere-logging-system
# change the fluent-bit image version from v1.8.11 to v2.0.6
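
Equivalently, without the interactive editor (assuming the container inside the DaemonSet is named fluent-bit):

kubectl -n kubesphere-logging-system set image daemonset/fluent-bit \
  fluent-bit=dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6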

If you do not need logging, you can disable the logging plugin in the cluster configuration file; the image set can then be trimmed further.

6. Verify the Deployment

6.1 Check Cluster Status

[root@node1 ~]# kubectl get nodes 
NAME    STATUS   ROLES                         AGE   VERSION
node1   Ready    control-plane,master,worker   25h   v1.22.12
node2   Ready    control-plane,master,worker   25h   v1.22.12
node3   Ready    control-plane,master,worker   25h   v1.22.12
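
Before logging in to the console, it is also worth confirming that no pod is stuck:

# Any pod that is not Running or Completed shows up here
kubectl get pods -A | grep -vE 'Running|Completed'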

Check in the web console

[Screenshot: KubeSphere web console]

7. Summary

This article demonstrated deploying Kubernetes and KubeSphere on ARM Kylin V10 servers in an online environment, then packaging the base dependencies, the required Docker images, Harbor, and everything KubeKey downloads during a KubeSphere install into a single bundle. With a few simple shell scripts around that bundle, the same installation can be replayed in an offline environment.

Key points of the offline installation:

  • Remove Podman

  • Install the Kubernetes dependency packages

  • Install Docker

  • Install Harbor

  • Push the images Kubernetes and KubeSphere need into Harbor

  • Deploy the cluster with kk
