DevOps: Deploying Harbor as a Private Docker Image Registry

Table of Contents

  • Overview
  • Download
  • Configuration
  • Installation
  • Files generated after installation
  • Using and maintaining Harbor
  • References

Overview

Harbor is an open-source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. Harbor is a CNCF Graduated project that delivers compliance, performance, and interoperability, helping you consistently and securely manage artifacts across cloud-native compute platforms such as Kubernetes and Docker. Harbor can be installed in any Kubernetes environment or on any system that supports Docker.

Download

Download the latest Harbor offline installer. Choose the version that matches your operating system; it is usually a .tar.gz archive. For example, the downloaded file might be named harbor-offline-installer-v2.5.0.tgz
Download link

Transfer the downloaded installer to the server, for example with scp (between Linux systems) or a tool such as WinSCP (between Windows and Linux); a minimal scp example is shown below.
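As a rough sketch, assuming the server is reachable at 192.168.1.100 (the example address used later in this article) and you are copying into /opt, which is a hypothetical target directory:

scp harbor-offline-installer-v2.5.0.tgz root@192.168.1.100:/opt/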

Configuration

Extract the archive, for example: tar -zxvf harbor-offline-installer-v2.5.0.tgz
Enter the extracted harbor directory, which contains harbor.yml.tmpl, Harbor's configuration template file.
Copy the template file and name it harbor.yml, for example:
cp harbor.yml.tmpl harbor.yml

Edit the harbor.yml file. The main configuration items include (a minimal example follows this list):
Hostname (hostname): set hostname to the server's IP address or domain name, e.g. hostname: 192.168.1.100. If you want to use HTTPS, you also need to configure the certificate-related options.
Storage path (data_volume): the data storage path can be changed; the default is /data. For example, to use /mnt/harbor-data instead, set data_volume: /mnt/harbor-data
Admin password (harbor_admin_password): set the password of the Harbor administrator account, e.g. harbor_admin_password: your-password
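
A minimal sketch of an HTTP-only harbor.yml after these edits (the values are the examples used in this article, not required defaults; the https block is commented out here because, as noted in the full template below, leaving it in without a certificate causes an error):

hostname: 192.168.1.100          # server IP address or domain name
http:
  port: 80
# https:                         # comment out the whole https block if you do not use HTTPS
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path
harbor_admin_password: your-password
data_volume: /mnt/harbor-data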

Other configuration (optional)

  • If you need an external database (the built-in database is used by default), LDAP authentication, and so on, configure them in the harbor.yml file following the official documentation.

The full configuration file is shown below:

# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: reg.mydomain.com

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config (if you do not need HTTPS, comment this block out, otherwise an error is reported)
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /your/certificate/path
  private_key: /your/private/key/path
  # enable strong ssl ciphers (default: false)
  # strong_ssl_ciphers: false

# # Harbor will set ipv4 enabled only by default if this block is not configured
# # Otherwise, please uncomment this block to configure your own ip_family stacks
# ip_family:
#   # ipv6Enabled set to true if ipv6 is enabled in docker network, currently it affected the nginx related component
#   ipv6:
#     enabled: false
#   # ipv4Enabled set to true by default, currently it affected the nginx related component
#   ipv4:
#     enabled: true

# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
# Password for the web console (default username: admin)
harbor_admin_password: Harbor12345

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900
  # The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's age.
  # The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  conn_max_lifetime: 5m
  # The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's idle time.
  # The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  conn_max_idle_time: 0

# Data mount directory
# The default data volume
data_volume: /data

# Harbor Storage settings by default is using /data dir on local filesystem
# Uncomment storage_service setting If you want to using external storage
# storage_service:
#   # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
#   # of registry's containers.  This is usually needed when the user hosts a internal storage with self signed certificate.
#   ca_bundle:

#   # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
#   # for more info about this configuration please refer https://distribution.github.io/distribution/about/configuration/
#   # and https://distribution.github.io/distribution/storage-drivers/
#   filesystem:
#     maxthreads: 100
#   # set disable to true when you want to disable registry redirect
#   redirect:
#     disable: false

# Trivy configuration
#
# Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
# It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
# in the local file system. In addition, the database contains the update timestamp so Trivy can detect whether it
# should download a newer version from the Internet or use the cached one. Currently, the database is updated every
# 12 hours and published as a new release to GitHub.
trivy:
  # ignoreUnfixed The flag to display only fixed vulnerabilities
  ignore_unfixed: false
  # skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
  #
  # You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
  # `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
  skip_update: false
  #
  # skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the
  # `/home/scanner/.cache/trivy/java-db/trivy-java.db` path
  skip_java_db_update: false
  #
  # The offline_scan option prevents Trivy from sending API requests to identify dependencies.
  # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
  # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
  # exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode.
  # It would work if all the dependencies are in local.
  # This option doesn't affect DB download. You need to specify "skip-update" as well as "offline-scan" in an air-gapped environment.
  offline_scan: false
  #
  # Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. Defaults to `vuln`.
  security_check: vuln
  #
  # insecure The flag to skip verifying registry certificate
  insecure: false
  #
  # timeout The duration to wait for scan completion.
  # There is upper bound of 30 minutes defined in scan job. So if this `timeout` is larger than 30m0s, it will also timeout at 30m0s.
  timeout: 5m0s
  #
  # github_token The GitHub access token to download Trivy DB
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  #
  # github_token: xxx

jobservice:
  # Maximum number of job workers in job service
  max_job_workers: 10
  # The jobLoggers backend name, only support "STD_OUTPUT", "FILE" and/or "DB"
  job_loggers:
    - STD_OUTPUT
    - FILE
    # - DB
  # The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
  logger_sweeper_duration: 1 #days

notification:
  # Maximum retry count for webhook job
  webhook_job_max_retry: 3
  # HTTP client timeout for webhook job
  webhook_job_http_client_timeout: 3 #seconds

# Log configurations
log:
  # options are debug, info, warning, error, fatal
  level: info
  # configs for logs in local storage
  local:
    # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
    rotate_count: 50
    # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
    # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
    # are all valid.
    rotate_size: 200M
    # The directory on your host that store log
    location: /var/log/harbor

  # Uncomment following lines to enable external syslog endpoint.
  # external_endpoint:
  #   # protocol used to transmit log to external endpoint, options is tcp or udp
  #   protocol: tcp
  #   # The host of external endpoint
  #   host: localhost
  #   # Port of external endpoint
  #   port: 5140

# This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.11.0

# Uncomment external_database if using external database.
# external_database:
#   harbor:
#     host: harbor_db_host
#     port: harbor_db_port
#     db_name: harbor_db_name
#     username: harbor_db_username
#     password: harbor_db_password
#     ssl_mode: disable
#     max_idle_conns: 2
#     max_open_conns: 0

# Uncomment redis if need to customize redis db
# redis:
#   # db_index 0 is for core, it's unchangeable
#   # registry_db_index: 1
#   # jobservice_db_index: 2
#   # trivy_db_index: 5
#   # it's optional, the db for harbor business misc, by default is 0, uncomment it if you want to change it.
#   # harbor_db_index: 6
#   # it's optional, the db for harbor cache layer, by default is 0, uncomment it if you want to change it.
#   # cache_layer_db_index: 7

# Uncomment external_redis if using external Redis server
# external_redis:
#   # support redis, redis+sentinel
#   # host for redis: <host_redis>:<port_redis>
#   # host for redis+sentinel:
#   #  <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
#   host: redis:6379
#   password: 
#   # Redis AUTH command was extended in Redis 6, it is possible to use it in the two-arguments AUTH <username> <password> form.
#   # there's a known issue when using external redis username ref:https://github.com/goharbor/harbor/issues/18892
#   # if you care about the image pull/push performance, please refer to this https://github.com/goharbor/harbor/wiki/Harbor-FAQs#external-redis-username-password-usage
#   # username:
#   # sentinel_master_set must be set to support redis+sentinel
#   #sentinel_master_set:
#   # db_index 0 is for core, it's unchangeable
#   registry_db_index: 1
#   jobservice_db_index: 2
#   trivy_db_index: 5
#   idle_timeout_seconds: 30
#   # it's optional, the db for harbor business misc, by default is 0, uncomment it if you want to change it.
#   # harbor_db_index: 6
#   # it's optional, the db for harbor cache layer, by default is 0, uncomment it if you want to change it.
#   # cache_layer_db_index: 7

# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
#   ca_file: /path/to/ca

# Global proxy
# Config http proxy for components, e.g. http://my.proxy.com:3128
# Components doesn't need to connect to each others via http proxy.
# Remove component from `components` array if want disable proxy
# for it. If you want use proxy for replication, MUST enable proxy
# for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add domain to the `no_proxy` field, when you want disable proxy
# for some special registry.
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy

# metric:
#   enabled: false
#   port: 9090
#   path: /metrics

# Trace related config
# only can enable one trace provider(jaeger or otel) at the same time,
# and when using jaeger as provider, can only enable it with agent mode or collector mode.
# if using jaeger collector mode, uncomment endpoint and uncomment username, password if needed
# if using jaeger agent mode uncomment agent_host and agent_port
# trace:
#   enabled: true
#   # set sample_rate to 1 if you wanna sampling 100% of trace data; set 0.5 if you wanna sampling 50% of trace data, and so forth
#   sample_rate: 1
#   # # namespace used to differentiate different harbor services
#   # namespace:
#   # # attributes is a key value dict contains user defined attributes used to initialize trace provider
#   # attributes:
#   #   application: harbor
#   # # jaeger should be 1.26 or newer.
#   # jaeger:
#   #   endpoint: http://hostname:14268/api/traces
#   #   username:
#   #   password:
#   #   agent_host: hostname
#   #   # export trace data by jaeger.thrift in compact mode
#   #   agent_port: 6831
#   # otel:
#   #   endpoint: hostname:4318
#   #   url_path: /v1/traces
#   #   compression: false
#   #   insecure: true
#   #   # timeout is in seconds
#   #   timeout: 10

# Enable purge _upload directories
upload_purging:
  enabled: true
  # remove files in _upload directories which exist for a period of time, default is one week.
  age: 168h
  # the interval of the purge operations
  interval: 24h
  dryrun: false

# Cache layer configurations
# If this feature enabled, harbor will cache the resource
# `project/project_metadata/repository/artifact/manifest` in the redis
# which can especially help to improve the performance of high concurrent
# manifest pulling.
# NOTICE
# If you are deploying Harbor in HA mode, make sure that all the harbor
# instances have the same behaviour, all with caching enabled or disabled,
# otherwise it can lead to potential data inconsistency.
cache:
  # not enabled by default
  enabled: false
  # keep cache for one day by default
  expire_hours: 24

# Harbor core configurations
# Uncomment to enable the following harbor core related configuration items.
# core:
#   # The provider for updating project quota(usage), there are 2 options, redis or db,
#   # by default is implemented by db but you can switch the updation via redis which
#   # can improve the performance of high concurrent pushing to the same project,
#   # and reduce the database connections spike and occupies.
#   # By redis will bring up some delay for quota usage updation for display, so only
#   # suggest switch provider to redis if you were ran into the db connections spike around
#   # the scenario of high concurrent pushing to same project, no improvement for other scenes.
#   quota_update_provider: redis # Or db
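
If you do enable the https block above, a certificate and key must exist at the configured paths. As a rough sketch for a test environment only (the openssl options and the CN value here are illustrative assumptions, not taken from this article), a self-signed certificate could be generated like this:

openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
  -keyout /your/private/key/path \
  -out /your/certificate/path \
  -subj "/CN=reg.mydomain.com"

For production, use a certificate issued by a trusted CA rather than a self-signed one.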

Installation

  1. Run the installation script

    • In the harbor directory, run the ./install.sh script. This script starts the Harbor containers based on the configuration in harbor.yml. Installation may take some time, because it needs to load and start multiple container components, such as Harbor's core services, the database, and Redis. A minimal sketch of the commands follows this list.
  2. Verify the installation

    • Once the installation completes, access Harbor's web UI by opening the configured hostname and port in a browser. For example, enter http://192.168.1.100:80 (adjust to your actual hostname and port). If the Harbor login page appears, the deployment succeeded. Log in with the default administrator account admin and the password set in harbor.yml, and start using Harbor to manage container images.
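
A minimal sketch of the installation and verification commands (the --with-trivy flag is optional and only needed if you want the Trivy scanner component; docker compose may be docker-compose on older hosts; the address and credentials are the examples from this article, and over plain HTTP the Docker client must first list the registry under insecure-registries, as described in the usage section below):

cd harbor
./install.sh                            # add --with-trivy to also deploy the Trivy scanner
docker compose ps                       # run in the harbor directory; all containers should be healthy
docker login 192.168.1.100 -u admin     # verify from a Docker client with the admin password from harbor.yml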

Files generated after installation

After install.sh completes, the harbor directory typically contains, in addition to the installer files, a generated docker-compose.yml and a common/ directory holding the rendered configuration for each component (the exact contents vary by Harbor version).
Access the management console at:
http://192.168.79.30/harbor/projects

Using and maintaining Harbor

  1. Push and pull images
    • On the client machine, add the Harbor server address to Docker's list of insecure registries. For example, if the Harbor server is 192.168.79.30:80, add the following to /etc/docker/daemon.json on the client (create the file if it does not exist):
      • {
          "insecure-registries": ["192.168.79.30:80"]
        }
        
    • Then restart the Docker service on the client machine. After that, you can push local images to the Harbor registry, for example docker push 192.168.79.30:80/your-image-name, and pull images from it, for example docker pull 192.168.79.30:80/your-image-name. A fuller sketch of this workflow follows this list.
  2. Back up and update Harbor
    Regularly back up Harbor's data, including the data stored under the data_volume path and database backups (if an external database is used). Also follow the release information on the official Harbor website and update Harbor according to the official upgrade guide, to keep the system secure and fully functional. A rough backup sketch also follows this list.
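
A minimal sketch of the client-side push/pull workflow (the restart command assumes a systemd host; library is Harbor's default public project; the image name is a placeholder):

sudo systemctl restart docker                       # reload /etc/docker/daemon.json

docker login 192.168.79.30:80 -u admin              # authenticate against Harbor first
docker tag your-image-name:latest 192.168.79.30:80/library/your-image-name:latest
docker push 192.168.79.30:80/library/your-image-name:latest
docker pull 192.168.79.30:80/library/your-image-name:latest

And a rough backup sketch (run from the harbor installation directory; docker compose may be docker-compose on older hosts; the tar path assumes the default data_volume of /data):

docker compose down                                 # stop Harbor before backing up
tar -czf harbor-data-$(date +%F).tar.gz /data       # archive the data_volume directory
docker compose up -d                                # start Harbor again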

References

https://goharbor.io/docs/2.12.0
