[BigData:Hadoop]: Installation and Deployment

Table of Contents

  • 1: Configure passwordless key-pair login on machine 103
  • 2: Configure passwordless key-pair login on machine 102
  • 3: Install the Hadoop package on machine 103
    • 3.1: Download the Hadoop package with wget
    • 3.2: Extract to the target directory
      • 3.2.1: Errors when extracting to a target path
      • 3.2.2: Change to the target directory and extract there
      • 3.2.3: Installation succeeded; check the Hadoop version
        • 3.2.3.1: JAVA_HOME not found
      • 3.2.4: Configure Hadoop's Java environment variables
        • 3.2.4.1: Checking the environment shows no JAVA_HOME
        • 3.2.4.2: Fix: locate the Java install directory
      • 3.2.5: Check the Hadoop version again

1: Configure passwordless key-pair login on machine 103

[root@vboxnode3ccccccttttttchenyang ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:

[root@vboxnode3ccccccttttttchenyang ~]# ls
anaconda-ks.cfg before-calico.yaml bigdata calico.yaml logs recommended.yaml
[root@vboxnode3ccccccttttttchenyang ~]# cd /root
[root@vboxnode3ccccccttttttchenyang ~]# ls
anaconda-ks.cfg before-calico.yaml bigdata calico.yaml logs recommended.yaml

[root@vboxnode3ccccccttttttchenyang ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:iDJChgXvhOSh9xBSNORW8fW8UeeHgTxwEYpbu3MPGUc root@vboxnode3ccccccttttttchenyang
The key's randomart image is:
+---[RSA 2048]----+
|oBB o. . .o=+o |
|=*.+ . . + ++o o |
|++B . . * .E .|
|o= o . . o + . . |
|. + o . S o . . |
| . o . + |
| o + |
| o o |
| . |
+----[SHA256]-----+
[root@vboxnode3ccccccttttttchenyang ~]# cd /root/.ssh
[root@vboxnode3ccccccttttttchenyang .ssh]# ls
id_rsa id_rsa.pub
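For scripted setups, the interactive prompts above can be skipped entirely. A minimal sketch (the temporary directory here is only for illustration; on a real host you would write to ~/.ssh):

```shell
# Generate an RSA key pair non-interactively:
#   -q  quiet, -N ""  empty passphrase, -f  output file
keydir=$(mktemp -d)                     # stand-in for ~/.ssh in this sketch
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/id_rsa"
ls "$keydir"                            # id_rsa  id_rsa.pub
```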

2: Configure passwordless key-pair login on machine 102

[root@vboxnode3ccccccttttttchenyang .ssh]# ssh-copy-id chenyang-mine-vbox02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'chenyang-mine-vbox02 (192.168.56.102)' can't be established.
ECDSA key fingerprint is SHA256:SGpvRTxwvfuiJB6N+Gl0IRJZ0Bh4ggdISEqytykpPN8.
ECDSA key fingerprint is MD5:2e:91:01:39:bd:6f:b9:a8:3b:3d:9c:07:3c:81:bc:c7.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@chenyang-mine-vbox02's password:
Permission denied, please try again.
root@chenyang-mine-vbox02's password:
Permission denied, please try again.
root@chenyang-mine-vbox02's password:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
[root@vboxnode3ccccccttttttchenyang .ssh]# ssh chenyang-mine-vbox02
root@chenyang-mine-vbox02's password:
Last failed login: Sun Aug 27 22:58:22 CST 2023 from vboxnode3ccccccttttttchenyang on ssh:notty
There were 3 failed login attempts since the last successful login.
Last login: Sun Aug 27 22:52:26 2023 from 192.168.56.1
-bash: “export: command not found
-bash: /etc/kubernetes/admin.conf: No such file or directory
-bash: /etc/kubernetes/kubelet.conf: Permission denied
-bash: “export: command not found
[root@chenyang-mine-vbox02 ~]# ssh vboxnode3ccccccttttttchenyang
The authenticity of host 'vboxnode3ccccccttttttchenyang (192.168.56.103)' can't be established.
ECDSA key fingerprint is SHA256:SGpvRTxwvfuiJB6N+Gl0IRJZ0Bh4ggdISEqytykpPN8.
ECDSA key fingerprint is MD5:2e:91:01:39:bd:6f:b9:a8:3b:3d:9c:07:3c:81:bc:c7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'vboxnode3ccccccttttttchenyang,192.168.56.103' (ECDSA) to the list of known hosts.
root@vboxnode3ccccccttttttchenyang's password:
Last login: Sun Aug 27 22:48:56 2023 from 192.168.56.1
[root@vboxnode3ccccccttttttchenyang ~]# ls
anaconda-ks.cfg  before-calico.yaml  bigdata  calico.yaml  logs  recommended.yaml
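The three "Permission denied" attempts above were most likely mistyped passwords, but there is another common cause worth knowing: sshd silently refuses key authentication when ~/.ssh or authorized_keys on the target machine is too permissive. A hedged check to run on the target host if key login keeps falling back to passwords:

```shell
# sshd (with StrictModes, the default) ignores authorized_keys
# unless the permissions are strict enough:
chmod 700 ~/.ssh                    # directory: owner-only
chmod 600 ~/.ssh/authorized_keys    # key file: owner read/write only
```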

3: Install the Hadoop package on machine 103

3.1: Download the Hadoop package with wget

wget https://mirrors.aliyun.com/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz

[root@vboxnode3ccccccttttttchenyang ~]# wget https://mirrors.aliyun.com/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
--2023-08-27 22:59:48--  https://mirrors.aliyun.com/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 182.40.60.209, 140.249.32.209, 150.139.245.176, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|182.40.60.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 408587111 (390M) [application/octet-stream]
Saving to: 'hadoop-2.10.1.tar.gz'
100%[=================================================================================>] 408,587,111 7.86MB/s  in 37s
2023-08-27 23:00:26 (10.5 MB/s) - 'hadoop-2.10.1.tar.gz' saved [408587111/408587111]
[root@vboxnode3ccccccttttttchenyang ~]# cd /usr/local/home/
[root@vboxnode3ccccccttttttchenyang home]# ;ls
-bash: syntax error near unexpected token `;'
[root@vboxnode3ccccccttttttchenyang home]# ls
bigdata  docker  log.file  sentinel-dashboard-1.8.6.jar  server
[root@vboxnode3ccccccttttttchenyang home]# cd bigdata/
[root@vboxnode3ccccccttttttchenyang bigdata]# ls
[root@vboxnode3ccccccttttttchenyang bigdata]# pwd
/usr/local/home/bigdata
[root@vboxnode3ccccccttttttchenyang bigdata]# cd ~
[root@vboxnode3ccccccttttttchenyang ~]# ls
anaconda-ks.cfg  before-calico.yaml  bigdata  calico.yaml  hadoop-2.10.1.tar.gz  logs  recommended.yaml
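Before extracting a 390 MB download it is worth a quick integrity check. `gzip -t` reads the whole archive and verifies its internal checksums without writing anything; for full verification you could additionally compare against the .sha512 file published alongside the tarball on the Apache mirrors:

```shell
# Test the compressed archive's integrity; prints nothing and exits 0 on success.
gzip -t hadoop-2.10.1.tar.gz && echo "archive OK"
```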

3.2: Extract to the target directory

3.2.1: Errors when extracting to a target path

[root@vboxnode3ccccccttttttchenyang ~]# tar -zxvf hadoop-2.10.1.tar.gz //usr/local/home/bigdata
tar: //usr/local/home/bigdata: Not found in archive
tar: Exiting with failure status due to previous errors
[root@vboxnode3ccccccttttttchenyang ~]# tar -zxvf hadoop-2.10.1.tar.gz /usr/local/home/bigdata
tar: /usr/local/home/bigdata: Not found in archive
tar: Exiting with failure status due to previous errors
[root@vboxnode3ccccccttttttchenyang ~]# rm -rf bigdata/
[root@vboxnode3ccccccttttttchenyang ~]# ls
anaconda-ks.cfg  before-calico.yaml  calico.yaml  logs  recommended.yaml
[root@vboxnode3ccccccttttttchenyang ~]# wget https://mirrors.aliyun.com/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
--2023-08-27 23:04:49--  https://mirrors.aliyun.com/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 140.249.32.209, 150.139.245.176, 117.24.169.248, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|140.249.32.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 408587111 (390M) [application/octet-stream]
Saving to: 'hadoop-2.10.1.tar.gz'
100%[=================================================================================>] 408,587,111 15.4MB/s  in 50s
2023-08-27 23:05:39 (7.80 MB/s) - 'hadoop-2.10.1.tar.gz' saved [408587111/408587111]
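Rather than re-downloading into the target directory, the failed commands in 3.2.1 could have been fixed directly. tar interprets a bare trailing argument as the name of a member inside the archive (which is why it reported the path as not found in the archive); the extraction destination is set with -C. A sketch, using this article's paths:

```shell
# Extract into the target directory with -C instead of a bare path argument:
mkdir -p /usr/local/home/bigdata
tar -zxf hadoop-2.10.1.tar.gz -C /usr/local/home/bigdata
```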

3.2.2: Change to the target directory and extract there

The attempts above failed not because of any restriction on extracting in root's home directory: tar treats a bare trailing argument as a member name to look up inside the archive, not as a destination path. Changing into the target directory first (or passing the destination with -C) avoids the problem.

[root@vboxnode3ccccccttttttchenyang ~]# cd /usr/local/home/bigdata
[root@vboxnode3ccccccttttttchenyang bigdata]# ls
[root@vboxnode3ccccccttttttchenyang bigdata]# wget https://mirrors.aliyun.com/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
--2023-08-27 23:07:12--  https://mirrors.aliyun.com/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 182.40.60.209, 150.139.245.178, 150.139.245.179, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|182.40.60.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 408587111 (390M) [application/octet-stream]
Saving to: 'hadoop-2.10.1.tar.gz'
100%[=================================================================================>] 408,587,111 15.1MB/s  in 26s
2023-08-27 23:07:38 (15.1 MB/s) - 'hadoop-2.10.1.tar.gz' saved [408587111/408587111]
[root@vboxnode3ccccccttttttchenyang bigdata]# ls
hadoop-2.10.1.tar.gz
[root@vboxnode3ccccccttttttchenyang bigdata]# tar -zxvf hadoop-2.10.1.tar.gz
hadoop-2.10.1/
hadoop-2.10.1/bin/
hadoop-2.10.1/bin/hadoop
hadoop-2.10.1/bin/hadoop.cmd
hadoop-2.10.1/bin/rcc
hadoop-2.10.1/bin/hdfs
hadoop-2.10.1/bin/hdfs.cmd
hadoop-2.10.1/bin/container-executor
hadoop-2.10.1/bin/test-container-executor
hadoop-2.10.1/bin/yarn
hadoop-2.10.1/bin/yarn.cmd
hadoop-2.10.1/bin/mapred
hadoop-2.10.1/bin/mapred.cmd
hadoop-2.10.1/etc/
hadoop-2.10.1/etc/hadoop/
hadoop-2.10.1/etc/hadoop/core-site.xml
hadoop-2.10.1/etc/hadoop/ssl-client.xml.example
hadoop-2.10.1/etc/hadoop/ssl-server.xml.example
hadoop-2.10.1/etc/hadoop/hadoop-env.cmd
hadoop-2.10.1/etc/hadoop/hadoop-env.sh
hadoop-2.10.1/etc/hadoop/hadoop-metrics.properties
hadoop-2.10.1/etc/hadoop/hadoop-metrics2.properties
hadoop-2.10.1/etc/hadoop/hadoop-policy.xml
hadoop-2.10.1/etc/hadoop/log4j.properties
hadoop-2.10.1/etc/hadoop/hdfs-site.xml
hadoop-2.10.1/etc/hadoop/httpfs-log4j.properties
hadoop-2.10.1/etc/hadoop/httpfs-site.xml
hadoop-2.10.1/etc/hadoop/httpfs-env.sh
hadoop-2.10.1/etc/hadoop/httpfs-signature.secret
hadoop-2.10.1/etc/hadoop/kms-acls.xml
hadoop-2.10.1/etc/hadoop/kms-env.sh
hadoop-2.10.1/etc/hadoop/kms-log4j.properties
hadoop-2.10.1/etc/hadoop/kms-site.xml
hadoop-2.10.1/etc/hadoop/yarn-env.cmd
hadoop-2.10.1/etc/hadoop/yarn-site.xml
hadoop-2.10.1/etc/hadoop/container-executor.cfg
hadoop-2.10.1/etc/hadoop/slaves
hadoop-2.10.1/etc/hadoop/yarn-env.sh
hadoop-2.10.1/etc/hadoop/capacity-scheduler.xml
hadoop-2.10.1/etc/hadoop/configuration.xsl
hadoop-2.10.1/etc/hadoop/mapred-queues.xml.template
hadoop-2.10.1/etc/hadoop/mapred-env.cmd
hadoop-2.10.1/etc/hadoop/mapred-env.sh
hadoop-2.10.1/etc/hadoop/mapred-site.xml.template
hadoop-2.10.1/lib/
hadoop-2.10.1/lib/native/
hadoop-2.10.1/lib/native/examples/
hadoop-2.10.1/lib/native/examples/pipes-sort
hadoop-2.10.1/lib/native/examples/wordcount-nopipe
hadoop-2.10.1/lib/native/examples/wordcount-part
hadoop-2.10.1/lib/native/examples/wordcount-simple
hadoop-2.10.1/lib/native/libhadoop.a
hadoop-2.10.1/lib/native/libhadoop.so
hadoop-2.10.1/lib/native/libhadoop.so.1.0.0
hadoop-2.10.1/lib/native/libhdfs.a
hadoop-2.10.1/lib/native/libhdfs.so
hadoop-2.10.1/lib/native/libhdfs.so.0.0.0
hadoop-2.10.1/lib/native/libhadooputils.a
hadoop-2.10.1/lib/native/libhadooppipes.a
hadoop-2.10.1/libexec/
hadoop-2.10.1/libexec/hadoop-config.cmd
hadoop-2.10.1/libexec/hadoop-config.sh
hadoop-2.10.1/libexec/hdfs-config.cmd
hadoop-2.10.1/libexec/hdfs-config.sh
hadoop-2.10.1/libexec/httpfs-config.sh
hadoop-2.10.1/libexec/kms-config.sh
hadoop-2.10.1/libexec/yarn-config.cmd
hadoop-2.10.1/libexec/yarn-config.sh
hadoop-2.10.1/libexec/mapred-config.cmd
hadoop-2.10.1/libexec/mapred-config.sh
hadoop-2.10.1/sbin/
hadoop-2.10.1/sbin/FederationStateStore/
hadoop-2.10.1/sbin/FederationStateStore/MySQL/
hadoop-2.10.1/sbin/FederationStateStore/MySQL/FederationStateStoreDatabase.sql
hadoop-2.10.1/sbin/FederationStateStore/MySQL/FederationStateStoreStoredProcs.sql
hadoop-2.10.1/sbin/FederationStateStore/MySQL/FederationStateStoreUser.sql
hadoop-2.10.1/sbin/FederationStateStore/MySQL/dropDatabase.sql
hadoop-2.10.1/sbin/FederationStateStore/MySQL/dropStoreProcedures.sql
hadoop-2.10.1/sbin/FederationStateStore/MySQL/dropTables.sql
hadoop-2.10.1/sbin/FederationStateStore/MySQL/dropUser.sql
hadoop-2.10.1/sbin/FederationStateStore/MySQL/FederationStateStoreTables.sql
hadoop-2.10.1/sbin/FederationStateStore/SQLServer/
hadoop-2.10.1/sbin/FederationStateStore/SQLServer/FederationStateStoreStoreProcs.sql
hadoop-2.10.1/sbin/FederationStateStore/SQLServer/FederationStateStoreTables.sql
hadoop-2.10.1/sbin/stop-all.sh
hadoop-2.10.1/sbin/start-all.cmd
hadoop-2.10.1/sbin/stop-all.cmd
hadoop-2.10.1/sbin/hadoop-daemon.sh
hadoop-2.10.1/sbin/hadoop-daemons.sh
hadoop-2.10.1/sbin/slaves.sh
hadoop-2.10.1/sbin/start-all.sh
hadoop-2.10.1/sbin/distribute-exclude.sh
hadoop-2.10.1/sbin/hdfs-config.cmd
hadoop-2.10.1/sbin/start-dfs.cmd
hadoop-2.10.1/sbin/stop-dfs.cmd
hadoop-2.10.1/sbin/hdfs-config.sh
hadoop-2.10.1/sbin/refresh-namenodes.sh
hadoop-2.10.1/sbin/start-balancer.sh
hadoop-2.10.1/sbin/start-dfs.sh
hadoop-2.10.1/sbin/start-secure-dns.sh
hadoop-2.10.1/sbin/stop-balancer.sh
hadoop-2.10.1/sbin/stop-dfs.sh
hadoop-2.10.1/sbin/stop-secure-dns.sh
hadoop-2.10.1/sbin/httpfs.sh
hadoop-2.10.1/sbin/kms.sh
hadoop-2.10.1/sbin/start-yarn.cmd
hadoop-2.10.1/sbin/stop-yarn.cmd
hadoop-2.10.1/sbin/start-yarn.sh
hadoop-2.10.1/sbin/stop-yarn.sh
hadoop-2.10.1/sbin/yarn-daemon.sh
hadoop-2.10.1/sbin/yarn-daemons.sh
hadoop-2.10.1/sbin/mr-jobhistory-daemon.sh
hadoop-2.10.1/share/
hadoop-2.10.1/share/doc/
hadoop-2.10.1/share/doc/hadoop/
hadoop-2.10.1/share/doc/hadoop/common/
hadoop-2.10.1/share/doc/hadoop/common/api/

3.2.3: Installation succeeded; check the Hadoop version

3.2.3.1: JAVA_HOME not found

hadoop version
[root@vboxnode3ccccccttttttchenyang bigdata]# hadoop version
Error: JAVA_HOME is not set and could not be found.
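Setting JAVA_HOME in /etc/profile, as done below, fixes this for login shells. An alternative worth knowing: Hadoop also reads JAVA_HOME from its own etc/hadoop/hadoop-env.sh, which makes the daemons independent of the calling shell's environment. A sketch, assuming the install paths used in this article and the JDK location found in 3.2.4.2:

```shell
# Point Hadoop's own env file at the JDK so daemons don't depend on the shell:
HADOOP_DIR=/usr/local/home/bigdata/hadoop-2.10.1
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64|' \
    "$HADOOP_DIR/etc/hadoop/hadoop-env.sh"
```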

3.2.4: Configure Hadoop's Java environment variables


[root@vboxnode3ccccccttttttchenyang bigdata]# vi /etc/profile
[root@vboxnode3ccccccttttttchenyang bigdata]# java -version
openjdk version "1.8.0_362"
OpenJDK Runtime Environment (build 1.8.0_362-b08)
OpenJDK 64-Bit Server VM (build 25.362-b08, mixed mode)
[root@vboxnode3ccccccttttttchenyang bigdata]# javac
Usage: javac <options> <source files>
where possible options include:
  -g                         Generate all debugging info
  -g:none                    Generate no debugging info
  -g:{lines,vars,source}     Generate only some debugging info
  -nowarn                    Generate no warnings
  -verbose                   Output messages about what the compiler is doing
  -deprecation               Output source locations where deprecated APIs are used
  -classpath <path>          Specify where to find user class files and annotation processors
  -cp <path>                 Specify where to find user class files and annotation processors
  -sourcepath <path>         Specify where to find input source files
  -bootclasspath <path>      Override location of bootstrap class files
  -extdirs <dirs>            Override location of installed extensions
  -endorseddirs <dirs>       Override location of endorsed standards path
  -proc:{none,only}          Control whether annotation processing and/or compilation is done.
  -processor <class1>[,<class2>,<class3>...] Names of the annotation processors to run; bypasses default discovery process
  -processorpath <path>      Specify where to find annotation processors
  -parameters                Generate metadata for reflection on method parameters
  -d <directory>             Specify where to place generated class files
  -s <directory>             Specify where to place generated source files
  -h <directory>             Specify where to place generated native header files
  -implicit:{none,class}     Specify whether or not to generate class files for implicitly referenced files
  -encoding <encoding>       Specify character encoding used by source files
  -source <release>          Provide source compatibility with specified release
  -target <release>          Generate class files for specific VM version
  -profile <profile>         Check that API used is available in the specified profile
  -version                   Version information
  -help                      Print a synopsis of standard options
  -Akey[=value]              Options to pass to annotation processors
  -X                         Print a synopsis of nonstandard options
  -J<flag>                   Pass <flag> directly to the runtime system
  -Werror                    Terminate compilation if warnings occur
  @<filename>                Read options and filenames from file

3.2.4.1: Checking the environment shows no JAVA_HOME

[root@vboxnode3ccccccttttttchenyang bigdata]# export
declare -x HADOOP_HOME="/usr/local/home/bigdata/hadoop-2.10.1"
declare -x HISTCONTROL="ignoredups"
declare -x HISTSIZE="1000"
declare -x HOME="/root"
declare -x HOSTNAME="vboxnode3ccccccttttttchenyang"
declare -x KUBECONFIG="/etc/kubernetes/kubelet.conf"
declare -x LANG="zh_CN.UTF-8"
declare -x LESSOPEN="||/usr/bin/lesspipe.sh %s"
declare -x LOGNAME="root"
declare -x LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:"
declare -x MAIL="/var/spool/mail/root"
declare -x OLDPWD="/root"
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/home/bigdata/hadoop-2.10.1/bin:/usr/local/home/bigdata/hadoop-2.10.1/sbin"
declare -x PWD="/usr/local/home/bigdata"
declare -x SELINUX_LEVEL_REQUESTED=""
declare -x SELINUX_ROLE_REQUESTED=""
declare -x SELINUX_USE_CURRENT_RANGE=""
declare -x SHELL="/bin/bash"
declare -x SHLVL="1"
declare -x SSH_CLIENT="192.168.56.102 48468 22"
declare -x SSH_CONNECTION="192.168.56.102 48468 192.168.56.103 22"
declare -x SSH_TTY="/dev/pts/1"
declare -x TERM="xterm"
declare -x USER="root"
declare -x XDG_RUNTIME_DIR="/run/user/0"
declare -x XDG_SESSION_ID="3"

3.2.4.2: Fix: locate the Java install directory

About the ls options used below: -l selects the long listing format, -r reverses the sort order, and -t sorts by modification time, so `ls -lrt` shows the oldest entries first. Note that -r does not recurse; listing all files and subdirectories recursively uses the separate -R option, as in `ls -lR`.

[root@vboxnode3ccccccttttttchenyang bigdata]# which java
/usr/bin/java
[root@vboxnode3ccccccttttttchenyang bigdata]# ls -lr /usr/bin/java
lrwxrwxrwx. 1 root root 22 Apr  7 19:15 /usr/bin/java -> /etc/alternatives/java
[root@vboxnode3ccccccttttttchenyang bigdata]# ls -lrt /etc/alternatives/java
lrwxrwxrwx. 1 root root 73 Apr  7 19:15 /etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64/jre/bin/java
Install directory found: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64
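The two-step symlink chase above can be collapsed into a single command: readlink -f resolves every link in a path at once.

```shell
# Resolve the real java binary behind the /etc/alternatives indirection:
readlink -f "$(which java)"
# on this machine this prints the .../jre/bin/java path under /usr/lib/jvm
```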
[root@vboxnode3ccccccttttttchenyang ~]# echo $JAVA_HOME
[root@vboxnode3ccccccttttttchenyang ~]# vi /etc/profile
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
export HADOOP_HOME=/usr/local/home/bigdata/hadoop-2.10.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$JRE_HOME/bin
JAVA_HOME is now visible:
[root@vboxnode3ccccccttttttchenyang ~]# export
declare -x CLASSPATH="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64/lib:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64/jre/lib:"
declare -x HADOOP_HOME="/usr/local/home/bigdata/hadoop-2.10.1"
declare -x HISTCONTROL="ignoredups"
declare -x HISTSIZE="1000"
declare -x HOME="/root"
declare -x HOSTNAME="vboxnode3ccccccttttttchenyang"
declare -x JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64"
declare -x JRE_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64/jre"
declare -x KUBECONFIG="/etc/kubernetes/kubelet.conf"
declare -x LANG="zh_CN.UTF-8"
declare -x LESSOPEN="||/usr/bin/lesspipe.sh %s"
declare -x LOGNAME="root"
[root@vboxnode3ccccccttttttchenyang ~]# echo $JAVA_HOME
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.362.b08-1.el7_9.x86_64

3.2.5: Check the Hadoop version again

[root@vboxnode3ccccccttttttchenyang ~]# hadoop version
Hadoop 2.10.1
Subversion https://github.com/apache/hadoop -r 1827467c9a56f133025f28557bfc2c562d78e816
Compiled by centos on 2020-09-14T13:17Z
Compiled with protoc 2.5.0
From source with checksum 3114edef868f1f3824e7d0f68be03650
This command was run using /usr/local/home/bigdata/hadoop-2.10.1/share/hadoop/common/hadoop-common-2.10.1.jar
