What are possible reasons for Hadoop daemons not showing in jps?

I am using Hadoop version 2.7.3, and after configuration the NameNode is not showing up in jps. Can anyone tell me the reason? I gave the correct permissions to the files concerned, I deleted and recreated the /tmp files, and after reformatting the NameNode I am still facing the same problem. Thanks in advance.

22561 Jps
21633 DataNode
21975 ResourceManager
22093 NodeManager
21821 SecondaryNameNode

core-site.xml:

<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property> 
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
 </property>
</configuration>

hdfs-site.xml:

<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
 </property>
 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
 <property>
  <name>dfs.permissions.enabled</name>
  <value>true</value>
 </property>
</configuration>

mapred-site.xml:

<configuration>
 <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
 </property>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>

yarn-site.xml:

<configuration>
 <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
 </property>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>

NameNode log file:

2017-08-30 16:49:29,764 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-08-30 16:49:29,771 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2017-08-30 16:49:30,131 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-08-30 16:49:30,246 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-08-30 16:49:30,246 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2017-08-30 16:49:30,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:54310
2017-08-30 16:49:30,250 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:54310 to access this namenode/service.
2017-08-30 16:49:37,330 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2017-08-30 16:49:37,414 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-08-30 16:49:37,426 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-08-30 16:49:37,432 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2017-08-30 16:49:37,438 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-08-30 16:49:37,441 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-08-30 16:49:37,441 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-08-30 16:49:37,441 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-08-30 16:49:37,582 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-08-30 16:49:37,584 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-08-30 16:49:37,606 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:753)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
        ... 8 more
2017-08-30 16:49:37,611 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2017-08-30 16:49:37,612 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2017-08-30 17:04:08,717 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2017-08-30 17:04:08,717 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: 0.0.0.0:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:753)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
        ... 8 more

There are 2 answers

Answer by Grzegorz Żur (accepted):

There is another process already listening on port 50070; stop it first.
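
To find out what is holding the port, something like the following should work on Linux (a sketch; output formats vary by distribution, and <pid> stands for whatever process ID you find):

sudo lsof -i :50070
# or equivalently
sudo netstat -tlnp | grep 50070

kill <pid>   # stop the stale process, then start the NameNode again

Often the culprit is a leftover NameNode from an earlier start that never shut down cleanly.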

Answer by Grzegorz Żur:

This situation happens if you don't format HDFS before running it.

Run

hdfs namenode -format
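
After formatting, restart the HDFS daemons and check jps again. A minimal sketch, assuming Hadoop's sbin scripts are on your PATH; note that formatting wipes all HDFS metadata, so only do it on a fresh or disposable cluster:

stop-dfs.sh              # stop any HDFS daemons still running
hdfs namenode -format    # re-initialize the NameNode metadata directory
start-dfs.sh             # start NameNode, DataNode and SecondaryNameNode
jps                      # NameNode should now appear in the list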