Hue 4.11 containerized deployment, integrated with Hive and Hadoop

Best read together with the companion post "Handling Errors During Hue Deployment".

Official connector configuration documentation:
https://docs.gethue.com/administrator/configuration/connectors/
Official hue.ini reference:
https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini

Docker deployment

Notes

  • For the first deployment, comment out the hue.ini volume mapping; once the container has started successfully, copy the configuration file into the target directory, then uncomment the mapping and redeploy.
    Alternatively (recommended), copy the reference file from the address below, create hue.ini from it, and place it under /data/hue/ before deploying; a shell sketch for pulling the file straight out of the image follows this list. https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini
  • If Hive and Hadoop are already deployed, it is best to finish the file-mapping configuration before deploying the Hue container.
  • If Hive and Hadoop are not deployed yet, ignore this for now; come back, adjust the configuration, and redeploy the Hue container when you need them.
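If you prefer to seed the configuration from the image itself rather than from GitHub, one possible approach is to dump the default file out of a throwaway container. This is only a sketch; the in-image path is simply the one used in the volume mapping below:

# Create the host directory and copy the default hue.ini shipped with the image
mkdir -p /data/hue
docker run --rm gethue/hue:4.11.0 cat /usr/share/hue/desktop/conf/hue.ini > /data/hue/hue.ini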
version: '3.3'  # Docker Compose file format version
services:  # service definitions
  hue:  # service name
    image: gethue/hue:4.11.0  # image and tag to use
    container_name: hue  # container name
    restart: always  # always restart the container
    privileged: true  # run the container in privileged mode
    hostname: hue  # container hostname
    ports:  # port mappings
      - "9898:8888"  # map container port 8888 to host port 9898
    environment:  # environment variables
      - TZ=Asia/Shanghai  # set the time zone to Asia/Shanghai
    volumes:  # volume mappings
      - /data/hue/hue.ini:/usr/share/hue/desktop/conf/hue.ini  # map the host hue.ini into the container
      - /etc/localtime:/etc/localtime  # keep the container clock in sync with the host
      - /opt/hive-3.1.3:/opt/hive-3.1.3  # map the host Hive directory into the container
      - /opt/hadoop-3.3.0:/opt/hadoop-3.3.0  # map the host Hadoop directory into the container
    networks:
      hue_default:
        ipv4_address: 172.15.0.2  # static IP address
networks:  # network definitions
  hue_default:
    driver: bridge  # use the bridge driver
    ipam:
      config:
        - subnet: 172.15.0.0/16  # subnet range
# 1. Create the Hue deployment file
vi docker-compose-hue.yml
# 2. Paste the compose content above into docker-compose-hue.yml
# 3. Deploy Hue with docker compose
docker compose -f docker-compose-hue.yml up -d
# 4. Verify the deployment: open Hue in a browser on the same LAN
# Since this is your first login, choose any username and password. Remember them: they become your Hue superuser credentials.
<host IP>:9898
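A few optional sanity checks after the first start; these assume Docker runs on the host you are checking from and that curl is installed there:

# The hue container should show as "Up"
docker ps --filter name=hue
# Watch the startup logs (Ctrl+C to stop following)
docker logs -f hue
# The web server should answer on the mapped port (any HTTP response is fine here)
curl -I http://127.0.0.1:9898/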

Hue initialization

  1. Edit the hue.ini configuration file
vi /data/hue/hue.ini
# Change the web server listen address
http_host=127.0.0.1
# Change the time zone
time_zone=Asia/Shanghai

# Configure the database that stores Hue's metadata
[[database]]
engine=mysql
host=192.168.10.75
port=3306
user=root
password=HoMf@123
name=hue

# Configure a Hue database connection for querying MySQL
[[interpreters]]
# Define the name and how to connect and execute the language.
# https://docs.gethue.com/administrator/configuration/editor/
[[[mysql]]]
name = MySQL
interface=sqlalchemy
## https://docs.sqlalchemy.org/en/latest/dialects/mysql.html
options='{"url": "mysql://root:HoMf@123@192.168.10.75:3306/hue_meta"}'
## options='{"url": "mysql://${USER}:${PASSWORD}@localhost:3306/hue"}'
  2. Create the databases
# Connect to MySQL
mysql -uroot -pHoMf@123

# Create the hue database
mysql> create database `hue` default character set utf8mb4 default collate utf8mb4_general_ci;
Query OK, 1 row affected (0.00 sec)

# Create the hue_meta database
mysql> create database `hue_meta` default character set utf8mb4 default collate utf8mb4_general_ci;
Query OK, 1 row affected (0.00 sec)
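Optionally, confirm from another machine (for example the Hue host) that both databases exist and that MySQL accepts remote connections with these credentials; this sketch assumes the mysql client is installed there:

mysql -h 192.168.10.75 -P 3306 -uroot -pHoMf@123 -e "SHOW DATABASES LIKE 'hue%';"
# Expected: both hue and hue_meta are listed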
  3. Apply the configuration, restart the container, enter it, and initialize the database
# Restart the hue container
docker restart hue
# Enter the hue container
docker exec -it hue bash
# Run the database initialization
/usr/share/hue/build/env/bin/hue syncdb
/usr/share/hue/build/env/bin/hue migrate
# Exit the container with Ctrl+D, or
exit
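If you prefer not to open an interactive shell, the same initialization can be run directly from the host. The --noinput flag is the standard Django option for skipping prompts; treat this as a sketch rather than the required procedure:

docker exec hue /usr/share/hue/build/env/bin/hue syncdb --noinput
docker exec hue /usr/share/hue/build/env/bin/hue migrate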
  • Detailed database initialization commands and output
hue@hue:/usr/share/hue$ /usr/share/hue/build/env/bin/hue syncdb
[22/Nov/2024 15:38:07 +0800] settings     INFO     Welcome to Hue 4.11.0
[22/Nov/2024 15:38:08 +0800] conf         WARNING  enable_extract_uploaded_archive is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  enable_new_create_table is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  force_hs2_metadata is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  show_table_erd is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[21/Nov/2024 23:38:08 -0800] backend      WARNING  mozilla_django_oidc module not found
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: BEGIN LOG
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: Using django-axes version 5.13.0
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: blocking by IP only.
[21/Nov/2024 23:38:09 -0800] api3         WARNING  simple_salesforce module not found
[21/Nov/2024 23:38:09 -0800] jdbc         WARNING  Failed to import py4j
[21/Nov/2024 23:38:10 -0800] schemas      INFO     Resource 'XMLSchema.xsd' is already loaded
No changes detected
hue@hue:/usr/share/hue$ /usr/share/hue/build/env/bin/hue migrate
[22/Nov/2024 15:38:33 +0800] settings     INFO     Welcome to Hue 4.11.0
[22/Nov/2024 15:38:33 +0800] conf         WARNING  enable_extract_uploaded_archive is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  enable_new_create_table is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  force_hs2_metadata is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  show_table_erd is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[21/Nov/2024 23:38:33 -0800] backend      WARNING  mozilla_django_oidc module not found
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: BEGIN LOG
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: Using django-axes version 5.13.0
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: blocking by IP only.
[21/Nov/2024 23:38:34 -0800] api3         WARNING  simple_salesforce module not found
[21/Nov/2024 23:38:34 -0800] jdbc         WARNING  Failed to import py4j
[21/Nov/2024 23:38:35 -0800] schemas      INFO     Resource 'XMLSchema.xsd' is already loaded
Operations to perform:
  Apply all migrations: auth, authtoken, axes, beeswax, contenttypes, desktop, jobsub, oozie, pig, sessions, sites, useradmin
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying auth.0012_alter_user_first_name_max_length... OK
  Applying authtoken.0001_initial... OK
  Applying authtoken.0002_auto_20160226_1747... OK
  Applying authtoken.0003_tokenproxy... OK
  Applying axes.0001_initial... OK
  Applying axes.0002_auto_20151217_2044... OK
  Applying axes.0003_auto_20160322_0929... OK
  Applying axes.0004_auto_20181024_1538... OK
  Applying axes.0005_remove_accessattempt_trusted... OK
  Applying axes.0006_remove_accesslog_trusted... OK
  Applying axes.0007_add_accesslog_trusted... OK
  Applying axes.0008_remove_accesslog_trusted... OK
  Applying beeswax.0001_initial... OK
  Applying beeswax.0002_auto_20200320_0746... OK
  Applying beeswax.0003_compute_namespace... OK
  Applying desktop.0001_initial... OK
  Applying desktop.0002_initial... OK
  Applying desktop.0003_initial... OK
  Applying desktop.0004_initial... OK
  Applying desktop.0005_initial... OK
  Applying desktop.0006_initial... OK
  Applying desktop.0007_initial... OK
  Applying desktop.0008_auto_20191031_0704... OK
  Applying desktop.0009_auto_20191202_1056... OK
  Applying desktop.0010_auto_20200115_0908... OK
  Applying desktop.0011_document2_connector... OK
  Applying desktop.0012_connector_interface... OK
  Applying desktop.0013_alter_document2_is_trashed... OK
  Applying jobsub.0001_initial... OK
  Applying oozie.0001_initial... OK
  Applying oozie.0002_initial... OK
  Applying oozie.0003_initial... OK
  Applying oozie.0004_initial... OK
  Applying oozie.0005_initial... OK
  Applying oozie.0006_auto_20200714_1204... OK
  Applying oozie.0007_auto_20210126_2113... OK
  Applying oozie.0008_auto_20210216_0216... OK
  Applying pig.0001_initial... OK
  Applying pig.0002_auto_20200714_1204... OK
  Applying sessions.0001_initial... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
  Applying useradmin.0001_initial... OK
  Applying useradmin.0002_userprofile_json_data... OK
  Applying useradmin.0003_auto_20200203_0802... OK
  Applying useradmin.0004_userprofile_hostname... OK
[21/Nov/2024 23:38:42 -0800] models       INFO     HuePermissions: 34 added, 0 updated, 0 up to date, 0 stale, 0 deleted
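As an aside, the superuser does not have to be created through the first-login page; Hue exposes the usual Django management command for it as well. A minimal sketch, run from the host:

# Prompts for username, e-mail and password inside the container
docker exec -it hue /usr/share/hue/build/env/bin/hue createsuperuser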
  4. Check that the tables were created, to confirm the database initialization completed
# Connect to MySQL
mysql -uroot -pHoMf@123
# Switch to the hue database
mysql> use hue;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
# List the tables in the hue database
mysql> show tables;
mysql> show tables;
+--------------------------------+
| Tables_in_hue                  |
+--------------------------------+
| auth_group                     |
| auth_group_permissions         |
| auth_permission                |
| auth_user                      |
| auth_user_groups               |
| auth_user_user_permissions     |
| authtoken_token                |
| axes_accessattempt             |
| axes_accesslog                 |
| beeswax_compute                |
| beeswax_metainstall            |
| beeswax_namespace              |
| beeswax_queryhistory           |
| beeswax_savedquery             |
| beeswax_session                |
| defaultconfiguration_groups    |
| desktop_connector              |
| desktop_defaultconfiguration   |
| desktop_document               |
| desktop_document2              |
| desktop_document2_dependencies |
| desktop_document2permission    |
| desktop_document_tags          |
| desktop_documentpermission     |
| desktop_documenttag            |
| desktop_settings               |
| desktop_userpreferences        |
| django_content_type            |
| django_migrations              |
| django_session                 |
| django_site                    |
| documentpermission2_groups     |
| documentpermission2_users      |
| documentpermission_groups      |
| documentpermission_users       |
| jobsub_checkforsetup           |
| jobsub_jobdesign               |
| jobsub_jobhistory              |
| jobsub_oozieaction             |
| jobsub_ooziedesign             |
| jobsub_ooziejavaaction         |
| jobsub_ooziemapreduceaction    |
| jobsub_ooziestreamingaction    |
| oozie_bundle                   |
| oozie_bundledcoordinator       |
| oozie_coordinator              |
| oozie_datainput                |
| oozie_dataoutput               |
| oozie_dataset                  |
| oozie_decision                 |
| oozie_decisionend              |
| oozie_distcp                   |
| oozie_email                    |
| oozie_end                      |
| oozie_fork                     |
| oozie_fs                       |
| oozie_generic                  |
| oozie_history                  |
| oozie_hive                     |
| oozie_java                     |
| oozie_job                      |
| oozie_join                     |
| oozie_kill                     |
| oozie_link                     |
| oozie_mapreduce                |
| oozie_node                     |
| oozie_pig                      |
| oozie_shell                    |
| oozie_sqoop                    |
| oozie_ssh                      |
| oozie_start                    |
| oozie_streaming                |
| oozie_subworkflow              |
| oozie_workflow                 |
| pig_document                   |
| pig_pigscript                  |
| useradmin_grouppermission      |
| useradmin_huepermission        |
| useradmin_ldapgroup            |
| useradmin_userprofile          |
+--------------------------------+
80 rows in set (0.01 sec)

Deployment check

At this point you should be able to log in to the Hue UI, and MySQL should appear in the Sources list.

Configure Hive

Add the file mappings first, so that the Hive configuration directory is also reachable inside the container.
Edit the hue.ini configuration file, then restart the container.
Once this is configured, Hive appears in the Sources list of the Hue UI.

# Edit the Hue configuration file
vi /data/hue/hue.ini

# Just uncomment the following entries
[[[hive]]]
name=Hive
interface=hiveserver2

[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
# HiveServer2 host address
hive_server_host=192.168.10.75
# Binary thrift port for HiveServer2.
# Should match hive.server2.thrift.port in hive-site.xml
hive_server_port=10000
# Hive configuration directory, where hive-site.xml is located.
# For a containerized deployment, this directory must be mapped into the container.
hive_conf_dir=/opt/hive-3.1.3/conf
# Timeout in seconds for thrift calls to Hive service
server_conn_timeout=120
# Override the default desktop username and password of the hue user used for authentications with other services.
# e.g. Used for LDAP/PAM pass-through authentication.
auth_username=hue
auth_password=root

[metastore]
# Flag to turn on the new version of the create table wizard.
enable_new_create_table=true
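Before restarting, it can save time to confirm that the HiveServer2 thrift port is reachable from inside the Hue container. This sketch uses bash's built-in /dev/tcp redirection so nothing extra needs to be installed; adjust the port if your hive.server2.thrift.port differs from hive_server_port above:

docker exec -it hue bash -c 'cat < /dev/null > /dev/tcp/192.168.10.75/10000 && echo "HiveServer2 port reachable"'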
  • My hive-site.xml, for reference
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive-3.1.3/warehouse</value>
    <description/>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://Linux-Master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
    <description>Metastore database connection</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description/>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description/>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>HoMf@123</value>
    <description/>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/home/hadoop/logs/hive-3.1.3/job-logs/${user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive-3.1.3/tmp</value>
  </property>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>11000</value>
  </property>
</configuration>

Configure Hadoop

Configure HDFS

Edit hue.ini

# Edit the Hue configuration file
vi /data/hue/hue.ini

# HDFS cluster configuration
[[hdfs_clusters]]
  # HA support by using HttpFs
  [[[default]]]
  fs_defaultfs=hdfs://192.168.10.75:9000
  webhdfs_url=http://192.168.10.75:9870/webhdfs/v1
  hadoop_conf_dir=/opt/hadoop-3.3.0/etc/hadoop

# YARN cluster configuration
[[yarn_clusters]]
  [[[default]]]
  # Enter the host on which you are running the ResourceManager
  resourcemanager_host=192.168.10.75
  # The port where the ResourceManager IPC listens on
  resourcemanager_port=8032
  # Whether to submit jobs to this cluster
  submit_to=True
  # Resource Manager logical name (required for HA)
  ## logical_name=
  # Change this if your YARN cluster is Kerberos-secured
  ## security_enabled=false
  # URL of the ResourceManager API
  resourcemanager_api_url=http://192.168.10.75:8088
  # URL of the ProxyServer API
  proxy_api_url=http://192.168.10.75:8088
  # URL of the HistoryServer API
  history_server_api_url=http://localhost:19888
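A quick, optional way to confirm the WebHDFS endpoint that webhdfs_url points at is to hit it with curl from the Hue host; the LISTSTATUS operation below lists the HDFS root, and the user.name parameter only applies to simple (non-Kerberos) authentication:

curl "http://192.168.10.75:9870/webhdfs/v1/?op=LISTSTATUS&user.name=root"
# A JSON FileStatuses document indicates WebHDFS is enabled and reachable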

Edit Hadoop's core-site.xml

<!-- Hosts allowed to access HDFS through httpfs -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<!-- Groups allowed to access HDFS through httpfs -->
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
  • My core-site.xml, for reference
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-3.3.0/tmp</value>
    <description>Abase for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <!-- The IP address is the master node's address -->
    <value>hdfs://192.168.10.75:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
  </property>
</configuration>
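After changing the proxy-user entries in core-site.xml, the settings must be picked up by the NameNode and ResourceManager. Restarting HDFS/YARN works; on a running, non-secured cluster the following admin commands can usually reload them without a restart (run as the Hadoop admin user; treat this as a sketch):

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration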

Edit Hadoop's hdfs-site.xml

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
  • My hdfs-site.xml, for reference
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- NameNode web UI address -->
  <property>
    <name>dfs.namenode.http-address</name>
    <!-- Change this to your master node's hostname -->
    <value>linux-master:9870</value>
  </property>
  <!-- Secondary NameNode web UI address -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <!-- Change this to your first worker node's hostname -->
    <value>linux-slave01:9868</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
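To double-check that HDFS actually sees WebHDFS as enabled (and that you edited the file the daemons read), the getconf helper can be run on any node with the Hadoop client configured:

hdfs getconf -confKey dfs.webhdfs.enabled
# Expected output: true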

Configure the YARN cluster

  • Configure Hadoop's yarn-site.xml
<!-- Enable log aggregation -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<!-- Log aggregation server (JobHistory) address -->
<property>
  <name>yarn.log.server.url</name>
  <!-- The IP address is the master node's address -->
  <value>http://192.168.10.75:19888/jobhistory/logs</value>
</property>
<!-- Keep aggregated logs for 7 days -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
  • My yarn-site.xml, for reference
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <!-- Site specific YARN configuration properties -->
  <!-- Use mapreduce_shuffle for MR -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- ResourceManager address -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <!-- Change this to your master node's hostname -->
    <value>linux-master</value>
  </property>
  <!-- Environment variables inherited by containers -->
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
  <!-- Enable log aggregation -->
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <!-- Log aggregation server (JobHistory) address -->
  <property>
    <name>yarn.log.server.url</name>
    <!-- The IP address is the master node's address -->
    <value>http://192.168.10.75:19888/jobhistory/logs</value>
  </property>
  <!-- Keep aggregated logs for 7 days -->
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
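yarn.log.server.url points at the MapReduce JobHistory server on port 19888, so that daemon must be running for the log links to resolve. A minimal sketch using the Hadoop 3.x daemon syntax, run on the master node:

# Start the JobHistory server
mapred --daemon start historyserver
# It should now appear in the JVM process list
jps | grep JobHistoryServer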
