Migrating Ceph RGW pools across clusters

1. Migrating RGW pools across Ceph clusters

In this case I am migrating the RGW pools.

Old environment

[root@ceph-1 data]# yum install s3cmd -y
[root@ceph-1 ~]# ceph config dump
WHO     MASK  LEVEL     OPTION                                  VALUE                                     RO
mon           advanced  auth_allow_insecure_global_id_reclaim  false
mgr           advanced  mgr/dashboard/ALERTMANAGER_API_HOST    http://20.3.10.91:9093                    *
mgr           advanced  mgr/dashboard/GRAFANA_API_PASSWORD     admin                                     *
mgr           advanced  mgr/dashboard/GRAFANA_API_SSL_VERIFY   false                                     *
mgr           advanced  mgr/dashboard/GRAFANA_API_URL          https://20.3.10.93:3000                   *
mgr           advanced  mgr/dashboard/GRAFANA_API_USERNAME     admin                                     *
mgr           advanced  mgr/dashboard/PROMETHEUS_API_HOST      http://20.3.10.91:9092                    *
mgr           advanced  mgr/dashboard/RGW_API_ACCESS_KEY       9UYWS54KEGHPTXIZK61J                      *
mgr           advanced  mgr/dashboard/RGW_API_HOST             20.3.10.91                                *
mgr           advanced  mgr/dashboard/RGW_API_PORT             8080                                      *
mgr           advanced  mgr/dashboard/RGW_API_SCHEME           http                                      *
mgr           advanced  mgr/dashboard/RGW_API_SECRET_KEY       MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8  *
mgr           advanced  mgr/dashboard/RGW_API_USER_ID          ceph-dashboard                            *
mgr           advanced  mgr/dashboard/ceph-1/server_addr       20.3.10.91                                *
mgr           advanced  mgr/dashboard/ceph-2/server_addr       20.3.10.92                                *
mgr           advanced  mgr/dashboard/ceph-3/server_addr       20.3.10.93                                *
mgr           advanced  mgr/dashboard/server_port              8443                                      *
mgr           advanced  mgr/dashboard/ssl                      true                                      *
mgr           advanced  mgr/dashboard/ssl_server_port          8443                                      *
[root@ceph-1 ~]# cat /root/.s3cfg 
[default]
access_key = 9UYWS54KEGHPTXIZK61J
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
connection_max_age = 5
connection_pooling = True
content_disposition =
content_type =
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = 20.3.10.91:8080
host_bucket = 20.3.10.91:8080%(bucket)
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_allow_unordered = False
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_copy_chunk_size_mb = 1024
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
public_url_use_https = False
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
ssl_client_cert_file =
ssl_client_key_file =
stats = False
stop_on_error = False
storage_class =
throttle_max = 100
upload_id =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
[root@ceph-1 ~]# s3cmd ls
2024-01-24 10:52  s3://000002
2024-02-01 08:20  s3://000010
2024-01-24 10:40  s3://cloudengine
2024-02-07 02:58  s3://component-000010
2024-01-24 10:52  s3://component-pub
2024-02-27 10:55  s3://deploy-2
2024-01-26 10:53  s3://digital-000002
2024-01-26 11:14  s3://digital-000010
2024-01-29 02:04  s3://docker-000010
2024-01-26 11:46  s3://docker-pub
2024-03-06 11:42  s3://warp-benchmark-bucket
[root@ceph-1 data]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       900 GiB     154 GiB     740 GiB      746 GiB         82.86
    TOTAL     900 GiB     154 GiB     740 GiB      746 GiB         82.86

POOLS:
    POOL                           ID     PGS     STORED      OBJECTS     USED        %USED      MAX AVAIL
    cephfs_data                     1      16         0 B           0         0 B          0           0 B
    cephfs_metadata                 2      16     1.0 MiB          23     4.7 MiB     100.00           0 B
    .rgw.root                       3      16     3.5 KiB           8     1.5 MiB     100.00           0 B
    default.rgw.control             4      16         0 B           8         0 B          0           0 B
    default.rgw.meta                5      16     8.5 KiB          31     5.4 MiB     100.00           0 B
    default.rgw.log                 6      16      64 KiB         207      64 KiB     100.00           0 B
    default.rgw.buckets.index       7      16     3.5 MiB         192     3.5 MiB     100.00           0 B
    default.rgw.buckets.data        8      16     146 GiB      51.55k     440 GiB     100.00           0 B
    default.rgw.buckets.non-ec      9      16     123 KiB          10     2.0 MiB     100.00           0 B

Migrating only default.rgw.buckets.data leaves you with no bucket information at all, so the following pools all have to be migrated (a quick way to check what each pool holds is shown right after this list):

default.rgw.buckets.data: the object data itself

default.rgw.meta: user metadata and bucket metadata

default.rgw.buckets.index: the bucket index (the mapping between buckets and their objects)
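
For a quick look at why default.rgw.meta matters, you can list its contents; user records and bucket entrypoints live there in separate RADOS namespaces (users.uid, root, and so on). This is just an inspection sketch using the pool names above:

# list every object in the meta pool across all namespaces
[root@ceph-1 data]# rados -p default.rgw.meta ls --all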

1. Export each pool with rados -p <pool_name> export --all <file>

[root@ceph-1 data]# rados -p default.rgw.buckets.data  export  --all   rgwdata
[root@ceph-1 data]# rados -p default.rgw.buckets.index  export   --all   rgwindex
[root@ceph-1 data]# rados -p default.rgw.meta  export   --all   rgwmeta
[root@ceph-1 data]# ls
rgwdata  rgwindex  rgwmeta
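
The three exports can also be scripted in one go. This is only a convenience sketch of the same commands; the output file names here are illustrative (above they were simply rgwdata, rgwindex and rgwmeta):

# export each RGW pool we care about into /data/<pool>.export
for p in default.rgw.buckets.data default.rgw.buckets.index default.rgw.meta; do
    rados -p "$p" export --all "/data/${p}.export"
done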

2. Record the user info from the old cluster, in particular the access_key and secret_key of ceph-dashboard, because the ceph-dashboard user happens to live in default.rgw.meta and will come along with the import.

[root@ceph-1 data]# radosgw-admin user list
["registry","ceph-dashboard"
]
[root@ceph-1 data]# radosgw-admin user info --uid=ceph-dashboard
{"user_id": "ceph-dashboard","display_name": "Ceph dashboard","email": "","suspended": 0,"max_buckets": 1000,"subusers": [],"keys": [{"user": "ceph-dashboard","access_key": "9UYWS54KEGHPTXIZK61J","secret_key": "MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8"}],"swift_keys": [],"caps": [],"op_mask": "read, write, delete","system": "true","default_placement": "","default_storage_class": "","placement_tags": [],"bucket_quota": {"enabled": true,"check_on_raw": false,"max_size": -1,"max_size_kb": 0,"max_objects": 1638400},"user_quota": {"enabled": false,"check_on_raw": false,"max_size": -1,"max_size_kb": 0,"max_objects": -1},"temp_url_keys": [],"type": "rgw","mfa_ids": []
}
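
If you prefer to capture those keys non-interactively (they are needed again for the dashboard fix later), a small sketch assuming jq is installed:

# pull the first key pair of the ceph-dashboard user into files
[root@ceph-1 data]# radosgw-admin user info --uid=ceph-dashboard | jq -r '.keys[0].access_key' > access_key
[root@ceph-1 data]# radosgw-admin user info --uid=ceph-dashboard | jq -r '.keys[0].secret_key' > secret_key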

New environment

Switch to the freshly built cluster.

[root@ceph-1 data]# ceph -s
  cluster:
    id:     d073f5d6-6b4a-4c87-901b-a0f4694ee878
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum ceph-1 (age 46h)
    mgr: ceph-1(active, since 46h)
    mds: cephfs:1 {0=ceph-1=up:active}
    osd: 2 osds: 2 up (since 46h), 2 in (since 8d)
    rgw: 1 daemon active (ceph-1.rgw0)

  task status:

  data:
    pools:   9 pools, 144 pgs
    objects: 61.83k objects, 184 GiB
    usage:   1.9 TiB used, 3.3 TiB / 5.2 TiB avail
    pgs:     144 active+clean

  io:
    client:   58 KiB/s rd, 7 op/s rd, 0 op/s wr

Test whether RGW works:

[root@ceph-1 data]# yum install s3cmd -y
[root@ceph-1 data]# ceph config dump
WHO     MASK  LEVEL     OPTION                                 VALUE                                     RO
global        advanced  mon_warn_on_pool_no_redundancy        false
mgr           advanced  mgr/dashboard/ALERTMANAGER_API_HOST   http://20.3.14.124:9093                   *
mgr           advanced  mgr/dashboard/GRAFANA_API_PASSWORD    admin                                     *
mgr           advanced  mgr/dashboard/GRAFANA_API_SSL_VERIFY  false                                     *
mgr           advanced  mgr/dashboard/GRAFANA_API_URL         https://20.3.14.124:3000                  *
mgr           advanced  mgr/dashboard/GRAFANA_API_USERNAME    admin                                     *
mgr           advanced  mgr/dashboard/PROMETHEUS_API_HOST     http://20.3.14.124:9092                   *
mgr           advanced  mgr/dashboard/RGW_API_ACCESS_KEY      9UYWS54KEGHPTXIZK61J                      *
mgr           advanced  mgr/dashboard/RGW_API_HOST            20.3.14.124                               *
mgr           advanced  mgr/dashboard/RGW_API_PORT            8090                                      *
mgr           advanced  mgr/dashboard/RGW_API_SCHEME          http                                      *
mgr           advanced  mgr/dashboard/RGW_API_SECRET_KEY      MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8  *
mgr           advanced  mgr/dashboard/RGW_API_USER_ID         ceph-dashboard                            *
mgr           advanced  mgr/dashboard/ceph-1/server_addr      20.3.14.124                               *
mgr           advanced  mgr/dashboard/server_port             8443                                      *
mgr           advanced  mgr/dashboard/ssl                     true                                      *
mgr           advanced  mgr/dashboard/ssl_server_port         8443                                      *
[root@ceph-1 data]# s3cmd ls
# create a bucket
[root@ceph-1 data]# s3cmd mb s3://test
Bucket 's3://test/' created
# upload a test file
[root@ceph-1 data]# s3cmd put test.txt s3://test -r
upload: 'test.txt' -> 's3://test/1234'  [1 of 1]
 29498 of 29498   100% in    0s   634.42 KB/s  done
# delete the objects in the bucket
[root@ceph-1 data]# s3cmd del s3://test --recursive --force
delete: 's3://test/1234'
# delete the bucket
[root@ceph-1 data]# s3cmd rb s3://test --recursive --force
Bucket 's3://test/' removed
[root@ceph-1 data]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    ssd       5.2 TiB     3.3 TiB     1.9 TiB      1.9 TiB         36.80
    TOTAL     5.2 TiB     3.3 TiB     1.9 TiB      1.9 TiB         36.80

POOLS:
    POOL                           ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data                     1      16      36 GiB       9.33k      36 GiB      1.16       3.0 TiB
    cephfs_metadata                 2      16     137 KiB          23     169 KiB         0       3.0 TiB
    .rgw.root                       3      16     1.2 KiB           4      16 KiB         0       3.0 TiB
    default.rgw.control             4      16         0 B           8         0 B         0       3.0 TiB
    default.rgw.meta                5      16     6.5 KiB          32     120 KiB         0       3.0 TiB
    default.rgw.log                 6      16     1.6 MiB         207     1.6 MiB         0       3.0 TiB
    default.rgw.buckets.index       7      16         0 B         192         0 B         0       3.0 TiB
    default.rgw.buckets.data        8      16         0 B      52.04k         0 B      4.52       3.0 TiB
    default.rgw.buckets.non-ec      9      16         0 B           0         0 B         0       3.0 TiB

1. Copy the export files above to the new cluster and import them:

[root@ceph-1 data]# ls
rgwdata  rgwindex  rgwmeta
[root@ceph-1 data]# rados -p default.rgw.buckets.data import rgwdata
Importing pool
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000078:head#
***Overwrite*** #-9223372036854775808:00000000:gc::gc.30:head#
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000070:head#
..............
[root@ceph-1 data]# rados -p default.rgw.buckets.index import rgwindex
Importing pool
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000078:head#
***Overwrite*** #-9223372036854775808:00000000:gc::gc.30:head#
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000070:head#
..............
[root@ceph-1 data]# rados -p default.rgw.meta import rgwmeta
Importing pool
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000078:head#
***Overwrite*** #-9223372036854775808:00000000:gc::gc.30:head#
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000070:head#
..............
[root@ceph-1 data]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    ssd       5.2 TiB     3.3 TiB     1.9 TiB      1.9 TiB         36.80
    TOTAL     5.2 TiB     3.3 TiB     1.9 TiB      1.9 TiB         36.80

POOLS:
    POOL                           ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data                     1      16      36 GiB       9.33k      36 GiB      1.16       3.0 TiB
    cephfs_metadata                 2      16     137 KiB          23     169 KiB         0       3.0 TiB
    .rgw.root                       3      16     1.2 KiB           4      16 KiB         0       3.0 TiB
    default.rgw.control             4      16         0 B           8         0 B         0       3.0 TiB
    default.rgw.meta                5      16     6.5 KiB          32     120 KiB         0       3.0 TiB
    default.rgw.log                 6      16     1.6 MiB         207     1.6 MiB         0       3.0 TiB
    default.rgw.buckets.index       7      16         0 B         192         0 B         0       3.0 TiB
    default.rgw.buckets.data        8      16     147 GiB      52.04k     147 GiB      4.52       3.0 TiB
    default.rgw.buckets.non-ec      9      16         0 B           0         0 B         0       3.0 TiB
[root@ceph-1 data]# 
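
A quick sanity check after the import is to compare per-pool object counts with the old cluster; run the same loop on both sides and compare the numbers (a sketch only):

# count objects in each imported pool, across all namespaces
for p in default.rgw.buckets.data default.rgw.buckets.index default.rgw.meta; do
    echo -n "$p: "
    rados -p "$p" ls --all | wc -l
done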

2. After the import, the dashboard web page broke and s3cmd stopped working.

1. The RGW_API_ACCESS_KEY and RGW_API_SECRET_KEY shown by ceph config dump no longer match what radosgw-admin user info --uid=ceph-dashboard reports.

radosgw-admin user info --uid=ceph-dashboard now returns the old cluster's access_key and secret_key, because the ceph-dashboard user was overwritten by the imported default.rgw.meta pool.

Point the dashboard at the imported ceph-dashboard user's access_key and secret_key:

[root@ceph-1 data]# radosgw-admin user info --uid=ceph-dashboard
[root@ceph-1 data]# echo 9UYWS54KEGHPTXIZK61J > access_key
[root@ceph-1 data]# echo MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8 > secret_key
[root@ceph-1 data]# ceph dashboard set-rgw-api-access-key -i access_key
[root@ceph-1 data]# ceph dashboard set-rgw-api-secret-key -i secret_key
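
To confirm the dashboard module picked up the new values, you can read them back (assuming this release also ships the matching get commands for these settings):

[root@ceph-1 data]# ceph dashboard get-rgw-api-access-key
[root@ceph-1 data]# ceph dashboard get-rgw-api-secret-key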

Now the two match.

The dashboard web page works again as well.

Then reconfigure s3cmd with the same access_key and secret_key.
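
A minimal sketch of the .s3cfg entries that matter on the new cluster, assuming the RGW endpoint 20.3.14.124:8090 shown in the ceph config dump above; everything else can stay as s3cmd --configure generates it:

[default]
access_key = 9UYWS54KEGHPTXIZK61J
secret_key = MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8
host_base = 20.3.14.124:8090
host_bucket = 20.3.14.124:8090/%(bucket)
use_https = False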

After the migration the registry no longer works: "failed to retrieve info about container registry (HTTP Error: 301: 301 Moved Permanently)". Together with 404 errors from RGW, this is caused by the zonegroup recorded in the bucket metadata not matching the cluster's zonegroup, and it is fixed by rewriting the bucket metadata with radosgw-admin metadata. Check the RGW log first:

[root@ceph-1 data]# vi /var/log/ceph/ceph-rgw-ceph-1.rgw0.log

Checking the zonegroups reveals a mismatch.

radosgw-admin zonegroup list reports default_info as

79ee051e-ac44-4677-b011-c7f3ad0d1d75

but the zonegroup inside radosgw-admin metadata get bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1 is

3ea718b5-ddfe-4641-8f80-53152066e03e

[root@ceph-1 data]# radosgw-admin zonegroup list
{"default_info": "79ee051e-ac44-4677-b011-c7f3ad0d1d75","zonegroups": ["default"]
}[root@ceph-1 data]# radosgw-admin metadata list bucket.instance
["docker-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.4","docker-pub:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.3","digital-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.1","digital-000002:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.2","000002:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.3","component-pub:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.1","cloudengine:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.2","deploy-2:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.5","warp-benchmark-bucket:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.6","registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1","000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.2","component-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.4"
][root@ceph-1 data]# radosgw-admin metadata get bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1
{"key": "bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1","ver": {"tag": "_CMGeYR69ptByuWSkghrYCln","ver": 1},"mtime": "2024-03-08 07:42:50.397826Z","data": {"bucket_info": {"bucket": {"name": "registry","marker": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1","bucket_id": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1","tenant": "","explicit_placement": {"data_pool": "","data_extra_pool": "","index_pool": ""}},"creation_time": "2024-01-24 10:36:55.798976Z","owner": "registry","flags": 0,"zonegroup": "3ea718b5-ddfe-4641-8f80-53152066e03e","placement_rule": "default-placement","has_instance_obj": "true","quota": {"enabled": false,"check_on_raw": true,"max_size": -1,"max_size_kb": 0,"max_objects": -1},"num_shards": 16,"bi_shard_hash_type": 0,"requester_pays": "false","has_website": "false","swift_versioning": "false","swift_ver_location": "","index_type": 0,"mdsearch_config": [],"reshard_status": 0,"new_bucket_instance_id": ""},"attrs": [{"key": "user.rgw.acl","val": "AgKTAAAAAwIYAAAACAAAAHJlZ2lzdHJ5CAAAAHJlZ2lzdHJ5BANvAAAAAQEAAAAIAAAAcmVnaXN0cnkPAAAAAQAAAAgAAAByZWdpc3RyeQUDPAAAAAICBAAAAAAAAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAICBAAAAA8AAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAAAAAAAAAAA"}]}
}
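
The registry bucket is not necessarily the only one affected. A small sketch that prints the zonegroup recorded in every bucket instance (assuming jq is installed), so you can see which entries still carry the old ID:

# compare each bucket instance's zonegroup against the cluster's default_info
for b in $(radosgw-admin metadata list bucket.instance | jq -r '.[]'); do
    echo "$b  $(radosgw-admin metadata get "bucket.instance:$b" | jq -r '.data.bucket_info.zonegroup')"
done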

Fix

1. Dump the registry bucket instance metadata to a file:

[root@ceph-1 data]# radosgw-admin metadata get bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1 > conf.json

2. Get the current cluster's zonegroup:

[root@ceph-1 data]# radosgw-admin zonegroup list
{"default_info": "79ee051e-ac44-4677-b011-c7f3ad0d1d75","zonegroups": ["default"]
}

3. Edit the zonegroup in conf.json, replacing it with the default_info from radosgw-admin zonegroup list.

The result looks like this:

[root@ceph-1 data]# cat conf.json
{
    "key": "bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
    "ver": {"tag": "_CMGeYR69ptByuWSkghrYCln", "ver": 1},
    "mtime": "2024-03-08 07:42:50.397826Z",
    "data": {
        "bucket_info": {
            "bucket": {"name": "registry", "marker": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1", "bucket_id": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1", "tenant": "", "explicit_placement": {"data_pool": "", "data_extra_pool": "", "index_pool": ""}},
            "creation_time": "2024-01-24 10:36:55.798976Z",
            "owner": "registry",
            "flags": 0,
            "zonegroup": "79ee051e-ac44-4677-b011-c7f3ad0d1d75",   # replaced with the default_info from radosgw-admin zonegroup list
            "placement_rule": "default-placement",
            "has_instance_obj": "true",
            "quota": {"enabled": false, "check_on_raw": true, "max_size": -1, "max_size_kb": 0, "max_objects": -1},
            "num_shards": 16,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 0,
            "new_bucket_instance_id": ""
        },
        "attrs": [{"key": "user.rgw.acl", "val": "AgKTAAAAAwIYAAAACAAAAHJlZ2lzdHJ5CAAAAHJlZ2lzdHJ5BANvAAAAAQEAAAAIAAAAcmVnaXN0cnkPAAAAAQAAAAgAAAByZWdpc3RyeQUDPAAAAAICBAAAAAAAAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAICBAAAAA8AAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAAAAAAAAAAA"}]
    }
}
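
Editing by hand works; a non-interactive alternative is to substitute one zonegroup ID for the other directly (a sketch using the two IDs from this example):

# replace the old zonegroup ID with the current cluster's default_info
[root@ceph-1 data]# sed -i 's/3ea718b5-ddfe-4641-8f80-53152066e03e/79ee051e-ac44-4677-b011-c7f3ad0d1d75/' conf.json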

4. Put the modified metadata back:

[root@ceph-1 data]# radosgw-admin metadata  put bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1 < conf.json

5. Check that "zonegroup": "79ee051e-ac44-4677-b011-c7f3ad0d1d75" now matches the current cluster:

[root@ceph-1 data]# radosgw-admin metadata list bucket.instance
["docker-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.4","docker-pub:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.3","digital-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.1","digital-000002:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.2","000002:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.3","component-pub:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.1","cloudengine:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.2","deploy-2:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.5","warp-benchmark-bucket:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.6","registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1","000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.2","component-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.4"
][root@ceph-1 data]# radosgw-admin metadata get bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1
{"key": "bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1","ver": {"tag": "_CMGeYR69ptByuWSkghrYCln","ver": 1},"mtime": "2024-03-08 07:42:50.397826Z","data": {"bucket_info": {"bucket": {"name": "registry","marker": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1","bucket_id": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1","tenant": "","explicit_placement": {"data_pool": "","data_extra_pool": "","index_pool": ""}},"creation_time": "2024-01-24 10:36:55.798976Z","owner": "registry","flags": 0,"zonegroup": "79ee051e-ac44-4677-b011-c7f3ad0d1d75","placement_rule": "default-placement","has_instance_obj": "true","quota": {"enabled": false,"check_on_raw": true,"max_size": -1,"max_size_kb": 0,"max_objects": -1},"num_shards": 16,"bi_shard_hash_type": 0,"requester_pays": "false","has_website": "false","swift_versioning": "false","swift_ver_location": "","index_type": 0,"mdsearch_config": [],"reshard_status": 0,"new_bucket_instance_id": ""},"attrs": [{"key": "user.rgw.acl","val": "AgKTAAAAAwIYAAAACAAAAHJlZ2lzdHJ5CAAAAHJlZ2lzdHJ5BANvAAAAAQEAAAAIAAAAcmVnaXN0cnkPAAAAAQAAAAgAAAByZWdpc3RyeQUDPAAAAAICBAAAAAAAAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAICBAAAAA8AAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAAAAAAAAAAA"}]}
}

6. Restart the registry; the problem is solved.
