This article offers a detailed walkthrough of what hdfs namenode -format does in HDFS 2.7.0. It is shared here as a practical reference; hopefully you will take something useful away from it.
Running hadoop namenode -format actually ends up executing
/root/hadoop-2.7.0-bin/bin/hdfs namenode -format
so the rest of this post walks through that hdfs script.
---
bin=`which $0`
bin=`dirname ${bin}`
bin=`cd "$bin" > /dev/null; pwd`
Printing it at this point gives
bin=/root/hadoop-2.7.0-bin/bin
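As a side note, those three assignments are a common shell idiom for a script to find its own directory. Here is a minimal standalone sketch of the same idiom (the file name and echo are made up for illustration, and it assumes the script is invoked by a name that which can resolve, as is the case when hdfs is started via its full path):

#!/bin/bash
# Resolve the directory this script lives in, mirroring the idiom at the top of the hdfs script.
bin=`which $0`                     # path of the invoked script, e.g. /root/hadoop-2.7.0-bin/bin/hdfs
bin=`dirname ${bin}`               # strip the file name, keeping the containing directory
bin=`cd "$bin" > /dev/null; pwd`   # normalize to an absolute path via cd + pwd
echo "this script lives in: $bin"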
---
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
Printing it gives
DEFAULT_LIBEXEC_DIR=/root/hadoop-2.7.0-bin/bin/../libexec
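The .. in that path gets resolved when the directory is actually entered; a quick hedged check, assuming the same /root/hadoop-2.7.0-bin layout as in this trace:

# bin/../libexec and libexec are the same directory once the path is normalized.
DEFAULT_LIBEXEC_DIR=/root/hadoop-2.7.0-bin/bin/../libexec
cd "$DEFAULT_LIBEXEC_DIR" && pwd -P   # expected to print /root/hadoop-2.7.0-bin/libexec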
---
cygwin=false
case "$(uname)" in
CYGWIN*) cygwin=true;;
esac
This branch is not taken here (we are not running under Cygwin), so it can be skipped.
---
Next, the script pulls in another script:
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
What actually gets sourced is
/root/hadoop-2.7.0-bin/libexec/hdfs-config.sh
which in turn calls yet another script. Which one? That is left for the reader to explore :)
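Two shell features carry this step: the ${VAR:-default} expansion, which uses the default only when the variable is unset or empty, and the . (source) command, which runs the target file in the current shell so the variables it sets stay visible afterwards. A minimal sketch with made-up file and variable names (demo-config.sh and DEMO_* are not part of Hadoop):

# Hypothetical stand-in for hdfs-config.sh, only to show the sourcing behavior.
cat > /tmp/demo-config.sh <<'EOF'
DEMO_SETTING="set by the sourced file"
EOF

DEFAULT_LIBEXEC_DIR=/tmp
DEMO_LIBEXEC_DIR=${DEMO_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}   # honor an existing override, else fall back
. "$DEMO_LIBEXEC_DIR/demo-config.sh"                         # '.' runs the file in the current shell
echo "$DEMO_SETTING"                                         # prints: set by the sourced file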
--- Back to the hdfs script
function print_usage(){
  echo "Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND"
  echo "       where COMMAND is one of:"
  echo "  dfs                  run a filesystem command on the file systems supported in Hadoop."
  echo "  classpath            prints the classpath"
  echo "  namenode -format     format the DFS filesystem"
  echo "  secondarynamenode    run the DFS secondary namenode"
  echo "  namenode             run the DFS namenode"
  echo "  journalnode          run the DFS journalnode"
  echo "  zkfc                 run the ZK Failover Controller daemon"
  echo "  datanode             run a DFS datanode"
  echo "  dfsadmin             run a DFS admin client"
  echo "  haadmin              run a DFS HA admin client"
  echo "  fsck                 run a DFS filesystem checking utility"
  echo "  balancer             run a cluster balancing utility"
  echo "  jmxget               get JMX exported values from NameNode or DataNode."
  echo "  mover                run a utility to move block replicas across"
  echo "                       storage types"
  echo "  oiv                  apply the offline fsimage viewer to an fsimage"
  echo "  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage"
  echo "  oev                  apply the offline edits viewer to an edits file"
  echo "  fetchdt              fetch a delegation token from the NameNode"
  echo "  getconf              get config values from configuration"
  echo "  groups               get the groups which users belong to"
  echo "  snapshotDiff         diff two snapshots of a directory or diff the"
  echo "                       current directory contents with a snapshot"
  echo "  lsSnapshottableDir   list all snapshottable dirs owned by the current user"
  echo "                       Use -help to see options"
  echo "  portmap              run a portmap service"
  echo "  nfs3                 run an NFS version 3 gateway"
  echo "  cacheadmin           configure the HDFS cache"
  echo "  crypto               configure HDFS encryption zones"
  echo "  storagepolicies      list/get/set block storage policies"
  echo "  version              print the version"
  echo ""
  echo "Most commands print help when invoked w/o parameters."
  # There are also debug commands, but they don't show up in this listing.
}

if [ $# = 0 ]; then
  print_usage
  exit
fi
This part is straightforward: print_usage is just a helper that lists what each sub-command does, and the check after it prints that usage and exits when the script is called with no arguments.
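That check relies on $#, the number of positional arguments; for hdfs namenode -format the script sees two arguments, so the usage text is not printed. A tiny illustrative sketch (the demo names are made up):

#!/bin/bash
# Print usage and stop when called without arguments, the same pattern as in the hdfs script.
if [ $# = 0 ]; then
  echo "Usage: demo COMMAND"
  exit
fi
echo "got $# argument(s); the first is: $1"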
---
Now comes the key moment: dispatching the command.
if [ "$COMMAND" = "namenode" ] ; then CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode' HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
At this point
HADOOP_OPTS= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/hadoop-2.7.0-bin/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-2.7.0-bin -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/hadoop-2.7.0-bin/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
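The trailing -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender flags most likely come in through HADOOP_NAMENODE_OPTS, which is typically set in etc/hadoop/hadoop-env.sh. A rough sketch of that concatenation, with example values rather than a verified trace:

# Illustrative only: how namenode-specific JVM flags are folded into HADOOP_OPTS.
HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dhadoop.root.logger=INFO,console"
HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender"
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"   # the concatenation done by the hdfs script
echo "$HADOOP_OPTS"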
---
The next chunk is Cygwin-specific, so it is ignored here.
---
export CLASSPATH=$CLASSPATH
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"
These are plain assignments, so there is not much to say about them.
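One thing is still worth noticing: HADOOP_SECURITY_LOGGER is not set in this trace, so the ${HADOOP_SECURITY_LOGGER:-INFO,NullAppender} expansion appends a second -Dhadoop.security.logger flag, which is why that flag shows up twice in the final command below (and, as far as I know, the JVM keeps the last occurrence of a repeated -D flag). A small sketch of the expansion:

# HADOOP_SECURITY_LOGGER is unset here, so the :- expansion falls back to INFO,NullAppender.
unset HADOOP_SECURITY_LOGGER
echo "-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"   # ...=INFO,NullAppender
HADOOP_SECURITY_LOGGER=INFO,RFAS
echo "-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"   # ...=INFO,RFAS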
---
The if-else statement that follows actually ends up in its final branch:
else
  # run it
  exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
fi
And here the true picture emerges. Printing the command that actually gets executed gives:
/usr/java/jdk1.8.0_45/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/hadoop-2.7.0-bin/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-2.7.0-bin -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/hadoop-2.7.0-bin/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.hdfs.server.namenode.NameNode -format
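Two details of that last line deserve a note: exec replaces the shell process with the JVM, so no wrapper shell hangs around, and -Dproc_namenode appears to be just a marker property that makes the process easy to spot in ps output. A hedged sketch of the same launch pattern, using placeholder values copied from this trace (the real script also exports CLASSPATH first, otherwise the class cannot be found):

#!/bin/bash
# Sketch of the final launch pattern; values are placeholders, not read from a real environment.
JAVA=/usr/java/jdk1.8.0_45/bin/java
JAVA_HEAP_MAX=-Xmx1000m
COMMAND=namenode
CLASS=org.apache.hadoop.hdfs.server.namenode.NameNode
HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"

# exec replaces this shell with the JVM; the java process takes over the current PID.
exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"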
That is all for this analysis of hdfs namenode -format in HDFS 2.7.0. Hopefully the walkthrough above is of some help; if you found the article worthwhile, please share it so more people can see it.