Deploying an NFS Provisioner on Kubernetes with Helm

Contents

    • 1. Introduction
    • 2. Prerequisites
    • 3. Deploy NFS
    • 4. Deploy the NFS subdir external provisioner
      • 4.1 Configure a containerd proxy on the cluster
      • 4.2 Deploy via kubeconfig from a proxy-enabled bastion host
    • Deploy MinIO
      • Add the repository
      • Modify the configurable values
    • Access
      • NodePort
      • Ingress

1. Introduction

The NFS subdir external provisioner uses an existing, already-configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned with the name ${namespace}-${pvcName}-${pvName}.

Variable configuration:

Variable                             Value
nfs_provisioner_namespace            nfsstorage
nfs_provisioner_role                 nfs-provisioner-runner
nfs_provisioner_serviceaccount       nfs-provisioner
nfs_provisioner_name                 hpe.com/nfs
nfs_provisioner_storage_class_name   nfs
nfs_provisioner_server_ip            hpe2-nfs.am2.cloudra.local
nfs_provisioner_server_share         /k8s

Note: this repository was migrated from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. As part of the migration, the container image registry and name were changed to registry.k8s.io/sig-storage and nfs-subdir-external-provisioner respectively. To keep backward compatibility with earlier deployment files, the NFS Client Provisioner is still named nfs-client-provisioner in the deployment YAML.

2. Prerequisites

  • CentOS Linux release 7.9.2009 (Core)
  • a Kubernetes cluster
$ kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
master1   Ready    control-plane   275d   v1.25.0
node1     Ready    <none>          275d   v1.25.0
node2     Ready    <none>          275d   v1.25.0

3. Deploy NFS

  • Configure the NFS share service on Linux (a setup sketch follows the output below)
[root@master1 helm]# exportfs -s
/app/nfs/k8snfs  192.168.10.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
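
If the share has not been configured yet, a minimal setup sketch for CentOS 7 might look like the following (the export options mirror the exportfs output above; the package names and paths are assumptions to adapt to your environment):

$ yum install -y nfs-utils
$ mkdir -p /app/nfs/k8snfs
$ echo '/app/nfs/k8snfs 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)' >> /etc/exports
$ systemctl enable --now rpcbind nfs-server
$ exportfs -ra   # re-read /etc/exports without restarting the service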

4. Deploy the NFS subdir external provisioner

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs -n nfs-provisioner --create-namespace

This fails with: Error: INSTALLATION FAILED: failed to download "nfs-subdir-external-provisioner/nfs-subdir-external-provisioner"

I had forgotten to configure a proxy, so neither the Helm chart nor the image registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 could be pulled.

There are two ways around this, but both require a node that already has a working proxy.

4.1 Configure a containerd proxy on the cluster

$ vim /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.10.105:7890"
Environment="HTTPS_PROXY=http://192.168.10.105:7890"
Environment="NO_PROXY=localhost"

# reload systemd units and restart containerd
$ systemctl daemon-reload
$ systemctl restart containerd.service
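
To verify that containerd can now reach the registry through the proxy, you can try pulling the image directly on a node (assuming crictl is installed there):

$ crictl pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2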

That solves the image pull problem. Next, the problem of pulling the Helm chart.

Running the deployment again with --debug reveals which chart version is being pulled:

$ helm --debug install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs -n nfs-provisioner --create-namespace
Error: INSTALLATION FAILED: Get "https://objects.githubusercontent.com/github-production-release-asset-2e65be/250135810/33156d2f-3fef-4b00-bf34-1817d30653bc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230716T153116Z&X-Amz-Expires=300&X-Amz-Signature=7219da0622fe22795d526f742064ee0da00a5821c37a5e1fe1bb0eb6b046e3c0&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=250135810&response-content-disposition=attachment%3B%20filename%3Dnfs-subdir-external-provisioner-4.0.18.tgz&response-content-type=application%2Foctet-stream": read tcp 192.168.10.28:46032->192.168.10.105:7890: read: connection reset by peer
helm.go:84: [debug] Get "https://objects.githubusercontent.com/github-production-release-asset-2e65be/250135810/33156d2f-3fef-4b00-bf34-1817d30653bc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230716T153116Z&X-Amz-Expires=300&X-Amz-Signature=7219da0622fe22795d526f742064ee0da00a5821c37a5e1fe1bb0eb6b046e3c0&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=250135810&response-content-disposition=attachment%3B%20filename%3Dnfs-subdir-external-provisioner-4.0.18.tgz&response-content-type=application%2Foctet-stream": read tcp 192.168.10.28:46032->192.168.10.105:7890: read: connection reset by peer

Download nfs-subdir-external-provisioner-4.0.18.tgz manually.
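
On a machine whose proxy does work, the chart can be fetched with helm pull, or downloaded from the project's GitHub releases page (the exact release URL below is an assumption based on the repository's tag naming):

$ helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --version 4.0.18
# or:
$ wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/releases/download/nfs-subdir-external-provisioner-4.0.18/nfs-subdir-external-provisioner-4.0.18.tgz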


Then run the install again, pointing at the local chart package:

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner-4.0.18.tgz --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs -n nfs-provisioner --create-namespace

4.2 Deploy via kubeconfig from a proxy-enabled bastion host

Pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 on the bastion host, save it as a tarball, and copy it to the worker nodes:

$ podman pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Trying to pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2...
Getting image source signatures
Copying blob 528677575c0b done  
Copying blob 60775238382e done  
Copying config 932b0bface done  
Writing manifest to image destination
Storing signatures
932b0bface75b80e713245d7c2ce8c44b7e127c075bd2d27281a16677c8efef3
$ podman save -o nfs-subdir-external-provisioner-v4.0.2.tar  registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Getting image source signatures
Copying blob 1a5ede0c966b done  
Copying blob ad321585b8f5 done  
Copying config 932b0bface done  
Writing manifest to image destination
Storing signatures
$ scp nfs-subdir-external-provisioner-v4.0.2.tar root@192.168.10.62:/root
$ scp nfs-subdir-external-provisioner-v4.0.2.tar root@192.168.10.63:/root
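
The tarballs still need to be imported into containerd on each node, for example like this (assuming containerd's default k8s.io namespace):

$ ssh root@192.168.10.62 'ctr -n k8s.io images import /root/nfs-subdir-external-provisioner-v4.0.2.tar'
$ ssh root@192.168.10.63 'ctr -n k8s.io images import /root/nfs-subdir-external-provisioner-v4.0.2.tar'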

Configure the kubeconfig:

$ mkdir kubeconfig
$ vim kubeconfig/61cluster.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJeU1UQXhOREE1TURFeE9Gb1lEekl4TWpJd09USXdNRGt3TVRFNFdqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnR1WTAvblE1OTZXVm5pZFFOdmJpWFNRczJjSVh5WCthZVBMZ0ptUXVpb0pjeGlyQ2dxdStLT0hQTWcwamgra1MKT0RqWS80K3hvZlpjakhydFRDYlg0U1dpUUFqK0diSTJVdmd1ei91U29JVHhhZzNId2JCVnk0REZrUjdpSVUxOQpVVWd0Yy9VYlB6L2I0aGJnT3prYkcyVGo0eDF1b3U4aTErTUVyZnRZRmtyTjJ1bzNTU1RaMVhZejB5d08xbzZvCkxiYktudDB3TUthUmFqKzRKS3lPRkd2dHVMODhjTXRYSXN3KzZ5QndqNWVlYUFnZXVRbUZYcHZ3M1BNRWt3djIKWFN6RTVMRy9SUUhaWTNTeGpWdUNPVXU5SllvNFVWK2RwRUdncUdmRXJDOHNvWHAvcG9PSkhERVhKNFlwdnFDOApJSnErRldaUXE1VEhKYy8rMUFoenhRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQXFRd0R3WURWUjBUCkFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVMb0ZPcDZ1cFBHVldUQ1N3WWlpRkpqZkowOWd3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFFV2orRmcxTFNkSnRSM1FFdFBnKzdHbEJHNnJsZktCS3U2Q041TnJGeEN5Y3UwMwpNNG1JUEg3VXREYUMyRHNtQVNUSWwrYXMzMkUrZzBHWXZDK0VWK0F4dG40RktYaHhVSkJ2Smw3RFFsY2VWQTEyCjk0bDExYUk1VE5IOGN5WDVsQ3draXRRMks4ekxTdUgySFlKeG15cTVVK092UVBaS3J4ekN3NFBCdk5Rem1lSFMKR0VuKzdVUjFFamZQaGZ5UTZIdGh5VmZ2MWNtL283L2tCWkJ4OGJmQWt4T0drUnR4eHo4V1JVVTNOUkwwbUt4YwpIc2xPMm43a09BZnB4U3Jya2w3UFRXd0doSEN1VGtxRUdaOEsycW9wK285ajQyS3U5eldqUUlaMjJLcytLMXk2CjFmd3h0Zit2c2hFaFZURGZSU2ZoTDYyUEh3RnAxQklZTFZoVUhJcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.10.61:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGVENDQWYyZ0F3SUJBZ0lJWVNHaHV4c1poUWt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWdGdzB5TWpFd01UUXdPVEF4TVRoYUdBOHlNVEl5TURreU1EQTVNREV5TmxvdwpOREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhHVEFYQmdOVkJBTVRFR3QxWW1WeWJtVjBaWE10CllXUnRhVzR3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ25KeHVSd0FERGw5RkMKMGRtSWVmV05hcE9DL2R1OXUwWWIwTXA5Nzh2eW5IcFJXMEI4QWlTTitkOHZCelNwMi9GdmVZeGlPSUpwbDlTVgpwcTdtSXM0T1A3cXN5Znc0TTBXKzM5c2dEditGYlJ1OUVMUlV6cXg1T1RwZVlDZVRnaFplQXRSU0dOamhKS2N0Cmd1SzA5OHJoNkpSWnZhUk1TYkYzK21GZ0RrbHNpL0Z4c2s1Uzl1Rk9Zb3lxTWdTUjdGTjFlOHVRSmxwU09Zem8KQlBWc3NsQ2FUTUNoQ2RrVnFteThiRVVtdzFvRzhhTGwrYXRuaW1QdEFXaWNzMGZjMGV0Zm9MRUpDcno4Wlo4UApBSnRackVHaDcxM0d0czdGblpXNnJ6RFppc3Z0Zml1WGFyanFQd2Z3a0ZBekJhYlRiYUF1NlJIdWloSWZSZWJxCjB2djR0c2tCQWdNQkFBR2pTREJHTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUYKQlFjREFqQWZCZ05WSFNNRUdEQVdnQlF1Z1U2bnE2azhaVlpNSkxCaUtJVW1OOG5UMkRBTkJna3Foa2lHOXcwQgpBUXNGQUFPQ0FRRUF0akk4c2c3KzlORUNRaStwdDZ5bWVtWjZqOG5SQjFnbm5aU2dGN21GYk03NXdQSUQ0NDJYCkhENnIwOEF6bDZGei9sZEtxbkN0cDJ2QnJWQmxVaWl6Ry9naWVWQTVKa3NIVEtveFFpV1llWEwwYmxsVDA2RDcKV240V1BTKzUvcGZMWktmd25jL20xR0owVWtQQUJHQVdSVTFJSi9kK0dJUlFtNTJTck9VYktLUTIzbHhGa2xqMwpYaDYveEg0eVRUeGsxRjVEVUhwcnFSTVdDTXZRYkRkM0pUaEpvdWNpZWRtcCs1YWV0ZStQaGZLSUtCT1JoMC9OCnIyTWpCZjNNaENyMUMwK0dydGMyeC80eC9PejRwbGRGNmQ1a2c3NmZvOCtiTW1ISmVyaVV6MXZKSkU0bFYxUDEKK21wN1E5Y1BGUVBJdkpNakRBVXdBUkRGcHNNTEhYQ0FYZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcHljYmtjQUF3NWZSUXRIWmlIbjFqV3FUZ3YzYnZidEdHOURLZmUvTDhweDZVVnRBCmZBSWtqZm5mTHdjMHFkdnhiM21NWWppQ2FaZlVsYWF1NWlMT0RqKzZyTW44T0RORnZ0L2JJQTcvaFcwYnZSQzAKVk02c2VUazZYbUFuazRJV1hnTFVVaGpZNFNTbkxZTGl0UGZLNGVpVVdiMmtURW14ZC9waFlBNUpiSXZ4Y2JKTwpVdmJoVG1LTXFqSUVrZXhUZFh2TGtDWmFVam1NNkFUMWJMSlFta3pBb1FuWkZhcHN2R3hGSnNOYUJ2R2k1Zm1yClo0cGo3UUZvbkxOSDNOSHJYNkN4Q1FxOC9HV2ZEd0NiV2F4Qm9lOWR4cmJPeFoyVnVxOHcyWXJMN1g0cmwycTQKNmo4SDhKQlFNd1dtMDIyZ0x1a1I3b29TSDBYbTZ0TDcrTGJKQVFJREFRQUJBb0lCQUNoY29DS2NtMUtmaVM4NgpYdTIralZXZGc0c2c0M3U0Q2VEVGxPRytFcUE5dXFlRWdsaXZaOFpFck9pOU03RkVZOU5JSldiZVFGZGhDenNyCnFaWDJsNDBIUkh0T3RyR1haK01FU1BRL3l1R2NEQk9tUWZVc2hxY3E4M1l3ZjczMXJwTDYyZXdOQmVtdm9SS3oKUlN6dm5MVGFKV0JhRTU4OE9EZEJaVnY5ZHl0WFoxSkVqWHZTVUowaWY4bWZvMUlxNUdBa1FLZWZuMlVLcTRROApYYzJTTkd5QTZxUThGNGd0ZWJ1WGI2QVFLdko4K05KRlI1b2ppNG9hWVlkcE5yR0MzUnJ5VHVSc29ZNFIxRko5ClA5WjcwZGtCcnExcDlNOVA2aDFxWVlaT1FISDdNRklaaFBra3dHNllVLzdzRVBZS2h1R29LVVNJR0FsUXU1czIKOGFtM0toVUNnWUVBM1AwaDRRd0xoeXFXVDBnRC9CcFRPR21iWjd2enA2Z3B3NTNhQXppWDE1UVRHcjdwaWF5RApFSlI1c01vUkF5ckthdUVVZjR1MzkyeGc0NlY4eVJNN3ZlVzZzZ2ZDSnUva3N6Nkxqa3FWRXBXUktycWVQRzhKClIwZXQ2TXRIaExxRHBDSytIdTJIYXhtbWdzMzB6Wk9EQ0Vma2dOSGY3cmM0ZnlxY2pETEpOZHNDZ1lFQXdhSisKRmhQSmpTdTVBYlJ1d2dVUDJnd0REM2ZiQ09peW5VSHpweGhUMWhRcUNPNm14dE1VaUE4bFJraTgxb1NLVEN2eAoxd1VpcnMwYzVNVFRiUS9kekpSVEtTSlRGZFhWNUdxUXppclc3SE5meGcvS1RkTVUyNDRvZG9WY2E4M0Q5WjJ6CmxybVNQQkEvaS9SOVVSNTRnODdFbHBuVi9Cc21wSDcrbUlkQzZWTUNnWUI3dGZsUlVyemhYaVhuSEJtZTk5MisKcHVBb29qODBqQjlWTXZqbzlMV01LWWpJWURlOHFxWjBrYW5PSGxDSHhWeXJtSFV4TWJZNi9LRUF6NU9idlBpawp4Z1pOdzZvY3dnNzFpUDMzR2lsNXplRUdXcEphb280L0tSRmlVT29vazRFK1VYUzlPNXVqaVNoOThXNHA1M3BqCkdGd0RBWHFxMkViNGFaSlpxZFNhSVFLQmdRQ3A2TTdneW40cVhQcGJUNXQ4cm5wcForN3JqTTFyZE56K2R0ZTUKZ1BSWHZwdmYrS0hwaDJEVnZ3eURMdUpkRGpKWWdwc1VoVklZdHEwcTVMZHRWT1hZVlRMZnZsblBxREttMndlegprUTNFcjd5VGpGbUZqcm9YcWhkQllPWm5Sa2cwWnl3bUR6SU5lR2g2ZzQvUE5ZQ2trRFFhdm1SeGN0V210RFR0ClhJdFBOd0tCZ0RkTnlRRU5pNmptd0tEaDBMeUNJTXBlWVA4TEYyKzVGSHZPWExBSFBuSzFEb2I1djMrMjFZTVoKTmtibGNJNzNBd2RiRnJpRjhqbVBxYXZmdUowNlA4UUJZVGVEbGhiSjZBZW1nWG1kVlRaL2IwTnV1ZktiNFdvVgo0eHA3TUJYa0NYNTUxWVB6djloc2M2RTZkYm5KRHJCajV4M3RsbWdyV2ZmL00weUtTOEF4Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

Test it:

$ kubectl --kubeconfig kubeconfig/61cluster.yaml get node
NAME      STATUS   ROLES           AGE    VERSION
master1   Ready    control-plane   275d   v1.25.0
node1     Ready    <none>          275d   v1.25.0
node2     Ready    <none>          275d   v1.25.0

Deploy:

$ helm install --kubeconfig kubeconfig/61cluster.yaml nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs -n nfs-provisioner --create-namespace
NAME: nfs-subdir-external-provisioner
LAST DEPLOYED: Sun Jul 16 22:51:28 2023
NAMESPACE: nfs-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None

$ kubectl --kubeconfig kubeconfig/61cluster.yaml get all -n nfs-provisioner -owide
NAME                                                    READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod/nfs-subdir-external-provisioner-688456c5d9-f5xkt    1/1     Running   0          39m   100.108.11.220   node2   <none>           <none>

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                        IMAGES                                                                SELECTOR
deployment.apps/nfs-subdir-external-provisioner   1/1     1            1           39m   nfs-subdir-external-provisioner   registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   app=nfs-subdir-external-provisioner,release=nfs-subdir-external-provisioner

NAME                                                         DESIRED   CURRENT   READY   AGE   CONTAINERS                        IMAGES                                                                SELECTOR
replicaset.apps/nfs-subdir-external-provisioner-688456c5d9   1         1         1       39m   nfs-subdir-external-provisioner   registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   app=nfs-subdir-external-provisioner,pod-template-hash=688456c5d9,release=nfs-subdir-external-provisioner

$ kubectl --kubeconfig kubeconfig/61cluster.yaml get sc
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   37m
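
To confirm that dynamic provisioning actually works, you can create a throwaway PVC against the new storage class (this test manifest is my own, not part of the chart):

$ cat <<EOF | kubectl --kubeconfig kubeconfig/61cluster.yaml apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
$ kubectl --kubeconfig kubeconfig/61cluster.yaml get pvc test-claim   # STATUS should become Bound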

For charts whose images cannot be pulled like this, we can also package a customized Helm chart of our own, which is convenient for day-to-day testing; a sketch follows.
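
A minimal sketch of that customization (registry.example.com is a placeholder for a registry you can actually reach; push the image there first, e.g. with podman push):

$ helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --untar
$ sed -i 's#registry.k8s.io/sig-storage#registry.example.com/sig-storage#' nfs-subdir-external-provisioner/values.yaml
$ helm package nfs-subdir-external-provisioner   # produces a local .tgz that installs without touching registry.k8s.io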

Deploy MinIO

Add the repository

kubectl create ns minio
helm repo add minio https://helm.min.io/
helm repo update
helm search repo minio/minio

Modify the configurable values

helm show values minio/minio > values.yaml

Changes:

accessKey: 'minio'
secretKey: 'minio123'
persistence:enabled: truestorageCalss: 'nfs-client'VolumeName: ''accessMode: ReadWriteOncesize: 5Giservice:type: ClusterIPclusterIP: ~port: 9000# nodePort: 32000resources:requests:memory: 128M

If you want to see the final rendered manifests, you can use the helm template command.

helm template -f values.yaml --namespace minio minio/minio | tee -a  minio.yaml

Output:

---
# Source: minio/templates/post-install-prometheus-metrics-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-minio-update-prometheus-secret
  labels:
    app: minio-update-prometheus-secret
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
---
# Source: minio/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "release-name-minio"
  namespace: "minio"
  labels:
    app: minio
    chart: minio-8.0.10
    release: "release-name"
---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
type: Opaque
data:
  accesskey: "bWluaW8="
  secretkey: "bWluaW8xMjM="
---
# Source: minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
data:
  initialize: |-
    #!/bin/sh
    set -e ; # Have script exit in the event of a failed command.
    MC_CONFIG_DIR="/etc/minio/mc/"
    MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"

    # connectToMinio
    # Use a check-sleep-check loop to wait for Minio service to be available
    connectToMinio() {
      SCHEME=$1
      ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
      set -e ; # fail if we can't read the keys.
      ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
      set +e ; # The connections to minio are allowed to fail.
      echo "Connecting to Minio server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT" ;
      MC_COMMAND="${MC} config host add myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
      $MC_COMMAND ;
      STATUS=$? ;
      until [ $STATUS = 0 ]
      do
        ATTEMPTS=`expr $ATTEMPTS + 1` ;
        echo \"Failed attempts: $ATTEMPTS\" ;
        if [ $ATTEMPTS -gt $LIMIT ]; then
          exit 1 ;
        fi ;
        sleep 2 ; # 1 second intervals between attempts
        $MC_COMMAND ;
        STATUS=$? ;
      done ;
      set -e ; # reset `e` as active
      return 0
    }

    # checkBucketExists ($bucket)
    # Check if the bucket exists, by using the exit code of `mc ls`
    checkBucketExists() {
      BUCKET=$1
      CMD=$(${MC} ls myminio/$BUCKET > /dev/null 2>&1)
      return $?
    }

    # createBucket ($bucket, $policy, $purge)
    # Ensure bucket exists, purging if asked to
    createBucket() {
      BUCKET=$1
      POLICY=$2
      PURGE=$3
      VERSIONING=$4

      # Purge the bucket, if set & exists
      # Since PURGE is user input, check explicitly for `true`
      if [ $PURGE = true ]; then
        if checkBucketExists $BUCKET ; then
          echo "Purging bucket '$BUCKET'."
          set +e ; # don't exit if this fails
          ${MC} rm -r --force myminio/$BUCKET
          set -e ; # reset `e` as active
        else
          echo "Bucket '$BUCKET' does not exist, skipping purge."
        fi
      fi

      # Create the bucket if it does not exist
      if ! checkBucketExists $BUCKET ; then
        echo "Creating bucket '$BUCKET'"
        ${MC} mb myminio/$BUCKET
      else
        echo "Bucket '$BUCKET' already exists."
      fi

      # set versioning for bucket
      if [ ! -z $VERSIONING ] ; then
        if [ $VERSIONING = true ] ; then
          echo "Enabling versioning for '$BUCKET'"
          ${MC} version enable myminio/$BUCKET
        elif [ $VERSIONING = false ] ; then
          echo "Suspending versioning for '$BUCKET'"
          ${MC} version suspend myminio/$BUCKET
        fi
      else
        echo "Bucket '$BUCKET' versioning unchanged."
      fi

      # At this point, the bucket should exist, skip checking for existence
      # Set policy on the bucket
      echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
      ${MC} policy set $POLICY myminio/$BUCKET
    }

    # Try connecting to Minio instance
    scheme=http
    connectToMinio $scheme
---
# Source: minio/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "1Gi"
  storageClassName: "nfs-client"
---
# Source: minio/templates/post-install-prometheus-metrics-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: release-name-minio-update-prometheus-secret
  labels:
    app: minio-update-prometheus-secret
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - create
      - update
      - patch
    resourceNames:
      - release-name-minio-prometheus
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - create
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
    verbs:
      - get
    resourceNames:
      - release-name-minio
---
# Source: minio/templates/post-install-prometheus-metrics-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: release-name-minio-update-prometheus-secret
  labels:
    app: minio-update-prometheus-secret
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: release-name-minio-update-prometheus-secret
subjects:
  - kind: ServiceAccount
    name: release-name-minio-update-prometheus-secret
    namespace: "minio"
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
spec:
  type: NodePort
  ports:
    - name: http
      port: 9000
      protocol: TCP
      nodePort: 32000
  selector:
    app: minio
    release: release-name
---
# Source: minio/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0
  selector:
    matchLabels:
      app: minio
      release: release-name
  template:
    metadata:
      name: release-name-minio
      labels:
        app: minio
        release: release-name
      annotations:
        checksum/secrets: f48e042461f5cd95fe36906895a8518c7f1592bd568c0caa8ffeeb803c36d4a4
        checksum/config: 9ec705e3000d8e1f256b822bee35dc238f149dbb09229548a99c6409154a12b8
    spec:
      serviceAccountName: "release-name-minio"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: minio
          image: "minio/minio:RELEASE.2021-02-14T04-01-33Z"
          imagePullPolicy: IfNotPresent
          command:
            - "/bin/sh"
            - "-ce"
            - "/usr/bin/docker-entrypoint.sh minio -S /etc/minio/certs/ server /export"
          volumeMounts:
            - name: export
              mountPath: /export
          ports:
            - name: http
              containerPort: 9000
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: release-name-minio
                  key: accesskey
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: release-name-minio
                  key: secretkey
          resources:
            requests:
              memory: 1Gi
      volumes:
        - name: export
          persistentVolumeClaim:
            claimName: release-name-minio
        - name: minio-user
          secret:
            secretName: release-name-minio

Create the MinIO release:

helm install -f values.yaml minio  minio/minio -n minio

Output:

NAME: minio
LAST DEPLOYED: Wed Jul 19 10:56:23 2023
NAMESPACE: minio
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Minio can be accessed via port 9000 on the following DNS name from within your cluster:
minio.minio.svc.cluster.local

To access Minio from localhost, run the below commands:

  1. export POD_NAME=$(kubectl get pods --namespace minio -l "release=minio" -o jsonpath="{.items[0].metadata.name}")
  2. kubectl port-forward $POD_NAME 9000 --namespace minio

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access Minio server on http://localhost:9000. Follow the below steps to connect to Minio server with mc client:

  1. Download the Minio mc client - https://docs.minio.io/docs/minio-client-quickstart-guide
  2. Get the ACCESS_KEY=$(kubectl get secret minio -o jsonpath="{.data.accesskey}" | base64 --decode) and the SECRET_KEY=$(kubectl get secret minio -o jsonpath="{.data.secretkey}" | base64 --decode)
  3. mc alias set minio-local http://localhost:9000 "$ACCESS_KEY" "$SECRET_KEY" --api s3v4
  4. mc ls minio-local

Alternately, you can use your browser or the Minio SDK to access the server - https://docs.minio.io/categories/17

Check the MinIO status:

$ kubectl get pod -n minio
NAME                     READY   STATUS    RESTARTS   AGE
minio-66f8b9444b-lml5f   1/1     Running   0          62s

[root@master1 helm]# kubectl get all -n minio
NAME                         READY   STATUS    RESTARTS   AGE
pod/minio-66f8b9444b-lml5f   1/1     Running   0          73s

NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/minio   NodePort   10.96.0.232   <none>        9000:32000/TCP   73s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio   1/1     1            1           73s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-66f8b9444b   1         1         1       73s

$ kubectl get pv,pvc,sc -n minio
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
persistentvolume/pvc-667a9c76-7d14-484c-aeeb-6e07cffd2c10   1Gi        RWO            Delete           Bound    minio/minio   nfs-client              2m20s

NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/minio   Bound    pvc-667a9c76-7d14-484c-aeeb-6e07cffd2c10   1Gi        RWO            nfs-client     2m20s

NAME                                     PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   2d12h
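
On the NFS server you can also see the backing directory the provisioner created for this PV; as noted in the introduction, it is named ${namespace}-${pvcName}-${pvName}:

$ ls /app/nfs/k8snfs
# expect a directory like minio-minio-pvc-667a9c76-7d14-484c-aeeb-6e07cffd2c10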

Access

NodePort

Open the web UI at http://192.168.10.61:32000 and log in with the accessKey/secretKey configured earlier.

Ingress

Modify the service section of values.yaml:

service:
  type: ClusterIP
  clusterIP: ~
  port: 9000

Upgrade the release:

$ helm upgrade -f values.yaml minio  minio/minio -n minio
Release "minio" has been upgraded. Happy Helming!
NAME: minio
LAST DEPLOYED: Wed Jul 19 11:49:22 2023
NAMESPACE: minio
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Minio can be accessed via port 9000 on the following DNS name from within your cluster:
minio.minio.svc.cluster.local

$ kubectl get all -n minio
NAME                         READY   STATUS    RESTARTS   AGE
pod/minio-66f8b9444b-lml5f   1/1     Running   0          53m

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/minio   ClusterIP   10.96.0.232   <none>        9000/TCP   53m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio   1/1     1            1           53m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-66f8b9444b   1         1         1       53m

The Service has been changed from the NodePort type to ClusterIP.

Next, we need to configure a certificate and a domain name. This requires cert-manager to be deployed in the cluster.
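
For reference, a minimal letsencrypt-prod ClusterIssuer might look like the following (the email address and the HTTP-01/nginx solver are assumptions; Let's Encrypt can only complete validation for a publicly resolvable domain, so a private name like minio.demo.com may end up with a temporary self-signed certificate instead):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@demo.com          # placeholder address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx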

Check MinIO's TLS certificate secret:

$ kubectl get secret -n minio
NAME                          TYPE                 DATA   AGE
minio                         Opaque               2      58m
minio-letsencrypt-tls-fn4vt   Opaque               1      2m47s

Check the name of the ClusterIssuer that has already been created:

$ kubectl get ClusterIssuer
NAME               READY   AGE
letsencrypt-prod   True    33m
Then write the Ingress manifest (ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio
  namespace: minio
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod # issue the HTTPS certificate automatically
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - 'minio.demo.com'
      secretName: minio-letsencrypt-tls
  rules:
    - host: minio.demo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio
                port:
                  number: 9000

Create it:

kubectl apply -f ingress.yaml

Resolve the domain name (a quick check follows this list):

  • Linux: add 192.168.10.61 minio.demo.com to /etc/hosts
  • Windows: add 192.168.10.61 minio.demo.com to C:\Windows\System32\drivers\etc\hosts
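
Once the hosts entry is in place, a quick connectivity check (use -k to skip certificate verification in case a trusted certificate could not actually be issued for this private domain):

$ curl -k https://minio.demo.com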

References:

  • Deploying the NFS provisioner for Kubernetes
  • https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
  • Deploy MinIO for object storage

