localhost: ERROR: Cannot set priority of datanode process 32156


I am trying to install Hadoop on Ubuntu 16.04, but when starting Hadoop I get the following error:

localhost: ERROR: Cannot set priority of datanode process 32156.
Starting secondary namenodes [it-OptiPlex-3020]
2017-09-18 21:13:48,343 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers

Could someone please tell me why I am getting this error? Thanks in advance.

15

There are 15 answers

1
moshimoshi On

I tried some of the methods above, but they didn't work out. Switching to Java 8 worked for me.
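
For reference, a minimal sketch of switching to Java 8 on Ubuntu; the package name and JVM path are typical defaults and may differ on your machine:

sudo apt-get install openjdk-8-jdk
sudo update-alternatives --config java     # pick the Java 8 entry
# then point JAVA_HOME in $HADOOP_HOME/etc/hadoop/hadoop-env.sh at the Java 8 install, e.g.:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64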

0
SaKondri On

I also encountered this error and found that it came from the core-site.xml file. I changed the file to this form:

<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master:9000</value>
    </property>     
</configuration>
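
Note that the hostname in fs.defaultFS (here master, taken from the value above) must resolve from every node. A quick sanity check:

getent hosts master                      # should print an IP; if not, add a mapping to /etc/hosts
hdfs getconf -confKey fs.defaultFS       # shows the value Hadoop actually picked up
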
1
XYZ On

I have encountered the same issues as well.

My problem was as follows: the datanode folder did not have the required permissions, so I changed them with sudo chmod 777 ./datanode/

My advice is to check all the relevant paths/folders and make them 777 first (this can be changed back afterwards).

There might be other reasons that lead to the datanode failing to start. Common ones are:

  1. wrong configuration in hdfs-site.xml
  2. the folder specified in hdfs-site.xml does not exist or does not have write permissions
  3. the log folder has no write permissions. The log folder is usually under $HADOOP_HOME; change the folder rights with e.g. sudo chmod ...
  4. the SSH configuration is not set up correctly or has been lost somehow; try ssh datanode1 to check

If everything has been checked and something still does not work, log in to the datanode server, go to the $HADOOP_HOME/logs folder, and check the log output to debug.
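
A hedged checklist in command form for points 2-4; the /hadoop_store/hdfs path and the datanode1 hostname are only examples, so use whatever your own configuration actually points to:

grep -A1 dfs.datanode.data.dir $HADOOP_HOME/etc/hadoop/hdfs-site.xml   # point 2: see which folder is configured
sudo mkdir -p /hadoop_store/hdfs/datanode && sudo chmod -R 777 /hadoop_store/hdfs   # point 2: create it and open permissions
sudo chmod -R 755 $HADOOP_HOME/logs        # point 3: fix log folder permissions if needed
ssh datanode1 'echo ssh ok'                # point 4: verify the ssh link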

0
Ogbonna Vitalis On

This can be caused by many things, usually a mistake in one of the configuration files, so it's best to check the log files.
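
For example, a quick look at the most recent datanode log (the exact file name depends on your user and hostname, so a glob is used here):

tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log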

0
Prastab Dkl On

PROBLEM: You might get a "cannot set priority" or "cannot start secondary namenode" error. Let me share what worked for me:

Diagnosis: I checked whether hdfs namenode -format gave any errors (which it did).

Fixed the errors:

  1. Folders didn't exist: while setting up the configuration in your .xml files (the 5 files that you set up and overwrite), make sure the directories you are pointing to exist. If a directory is not there, create it (see the sketch after this list).

  2. Didn't have permission to read/write/execute: change the permissions to 777 for all the directories you pointed to in the .xml files, as well as your Hadoop folder, using this command:

sudo chmod -R 777 /path_to_folders
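
For point 1, a minimal sketch of creating a missing directory; the path below is only a placeholder for whatever your .xml files actually reference:

mkdir -p /home/$USER/hadoop_data/hdfs/namenode /home/$USER/hadoop_data/hdfs/datanode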

1
Aramis NSR On

Problem solved here! (Neither of the two highly ranked answers worked for me.)

This issue happens when you run Hadoop (the namenode user, datanode user, ...) as a user that is not the owner of all your Hadoop files and folders.

Just run sudo chown -R YOURUSER:YOURUSER /home/YOURUSER/hadoop/*

0
Anurag Srivastava On

For me the other solutions didn't work. It was not related to directory permissions.

There is an entry JSVC_HOME in hadoop-env.sh that needs to be uncommented.

Download and make jsvc from here: http://commons.apache.org/proper/commons-daemon/jsvc.html

Alternatively, the jsvc jar is also present in the Hadoop directory.
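
As a hedged sketch, the relevant line in hadoop-env.sh might end up looking like this; the install location of the jsvc binary is an assumption and depends on where you built or unpacked it:

# in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export JSVC_HOME=/usr/local/bin   # directory that contains the jsvc binary (assumed path)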

0
RecharBao On

The solution in my situation was to add export HADOOP_SHELL_EXECNAME=root as the last line of $HADOOP_HOME/etc/hadoop/hadoop-env.sh; otherwise, the default value of this environment variable is hdfs.
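
For example, appending it from the shell:

echo 'export HADOOP_SHELL_EXECNAME=root' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh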

0
Fahad Siddiqui On

The issue with my system was that I was already running something on the ports Hadoop tries to run its services on. For example, port 8040 was in use. I found the culprit by first watching the logs:

tail -f /opt/homebrew/Cellar/hadoop/3.3.4/libexec/logs/*

And then stopping that particular service. You can also simply restart your system to see if that helps, unless your startup scripts spin the conflicting services up again.
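
To find out which process is holding a given port (8040 is just the example from above), something like:

sudo lsof -i :8040        # or: sudo netstat -tulpn | grep 8040
kill <PID>                # stop the conflicting process, then restart Hadoop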

The default ports

  • NameNode: 8020 (default), configurable via fs.defaultFS property in core-site.xml
  • Secondary NameNode: 50090 (default), configurable via dfs.secondary.http.address property in hdfs-site.xml
  • DataNode: 50010 (default), configurable via dfs.datanode.address property in hdfs-site.xml
0
ahajib On

I had to deal with the same issue and kept getting the following exception:

Starting namenodes on [localhost]
Starting datanodes
localhost: ERROR: Cannot set priority of datanode process 8944
Starting secondary namenodes [MBPRO-0100.local]
2019-07-22 09:56:53,020 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

As others have mentioned, you first need to make sure that all path parameters are set correctly, which is what I checked first. Then I followed these steps to solve the issue:

1- Stop dfs service and format hdfs:

sbin/stop-dfs.sh
sudo bin/hdfs namenode -format

2- Change permissions for the hadoop temp directory:

sudo chmod -R 777 /usr/local/Cellar/hadoop/hdfs/tmp

3- Start service again:

sbin/start-dfs.sh

Good luck

1
ohadinho On

I suggest you take a look at your hadoop datanode logs. This is probably a configuration issue.

In my case, the folders configured in dfs.datanode.data.dir didn't exist, so an exception was thrown and written to the log.
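
In other words, check which directories are configured and create any that are missing; the mkdir path below is only a placeholder:

hdfs getconf -confKey dfs.datanode.data.dir   # shows the configured folder(s)
mkdir -p /path/from/that/property             # placeholder: use the value printed above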

0
sunil On

I faced the same issue and flushed the datanode and namenode folders. I had put the folders in /hadoop_store/hdfs/namenode and /hadoop_store/hdfs/datanode

After deleting the folders, recreate them and then run the command hdfs namenode -format

Start Hadoop:
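
A minimal sketch of the full sequence (delete, recreate, format, start), assuming the folder locations above and that the start scripts live in $HADOOP_HOME/sbin:

rm -rf /hadoop_store/hdfs/namenode /hadoop_store/hdfs/datanode
mkdir -p /hadoop_store/hdfs/namenode /hadoop_store/hdfs/datanode
hdfs namenode -format
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh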

After the fix the logs look good:

Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [ip]
2019-02-11 09:41:30,426 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

jps:

21857 NodeManager
21697 ResourceManager
21026 NameNode
22326 Jps
21207 DataNode
21435 SecondaryNameNode
0
Suruchi_beck On

Just check the datanode logs and, towards the end of the file, read the error message; it tells you exactly where the error is. In my case the error was due to the datanode path being specified incorrectly in my hdfs-site.xml file. Once I corrected the path in that file, my datanode started.

0
ENDEESA On
  1. This can occur for various reasons; it's best to check the logs at $HADOOP_HOME/logs

  2. In my case the /etc/hosts file was misconfigured, i.e. my hostname was not resolving to localhost (a sketch follows below)

Bottom line: Check your namenode/datanode log files :)
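
For the /etc/hosts case, a minimal sketch of what a working file might contain; it-OptiPlex-3020 is the hostname from the question, so substitute the output of hostname on your own machine:

127.0.0.1    localhost
127.0.1.1    it-OptiPlex-3020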

5
stana.he On

I ran into the same error when installing Hadoop 3.0.0-RC0. In my situation all services started successfully except the datanode.

I found that some configs in hadoop-env.sh weren't correct in version 3.0.0-RC0, but were correct in version 2.x.

I ended up replacing my hadoop-env.sh with the official one and setting JAVA_HOME and HADOOP_HOME. Now the datanode is working fine.
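
For reference, the two exports as a sketch; both paths are assumptions for a typical Ubuntu install and should be replaced with your own locations:

# in hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop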