I am trying to run the open-source kNN-join MapReduce algorithm hbrj on Hadoop 2.6.0, installed on my laptop (OS X) as a single-node cluster in pseudo-distributed mode. (The source can be found here: http://www.cs.utah.edu/~lifeifei/knnj/.) The algorithm consists of two MapReduce phases, where the second phase uses the first phase's output files as its input. The first phase maps and reduces successfully, and when I look into its output files everything seems right. However, when I run the second phase, the job is reported as finishing successfully even though it never reduces; as far as I can tell, it never even enters the reduce stage.
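For orientation, here is a minimal sketch of how two jobs are typically chained with the old org.apache.hadoop.mapred API, which the log below suggests the code uses (mapred.FileInputFormat appears in it). This is not the actual hbrj driver; the class name, job names, paths, and the identity mapper/reducer are placeholders:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class TwoPhaseDriver {
        public static void main(String[] args) throws Exception {
            // Phase 1: reads the original input and writes to hbrj/output.
            JobConf phase1 = new JobConf(TwoPhaseDriver.class);
            phase1.setJobName("phase1");
            phase1.setMapperClass(IdentityMapper.class);   // hbrj uses its own classes
            phase1.setReducerClass(IdentityReducer.class);
            phase1.setOutputKeyClass(LongWritable.class);
            phase1.setOutputValueClass(Text.class);
            FileInputFormat.setInputPaths(phase1, new Path("hbrj/input"));
            FileOutputFormat.setOutputPath(phase1, new Path("hbrj/output"));
            JobClient.runJob(phase1); // blocks until phase 1 completes

            // Phase 2: consumes phase 1's output directory as its input.
            // Note: if a driver calls setNumReduceTasks(0), the job runs
            // map-only: the reducer is never instantiated and each map task
            // logs "numReduceTasks: 0", as in the output below.
            JobConf phase2 = new JobConf(TwoPhaseDriver.class);
            phase2.setJobName("phase2");
            phase2.setMapperClass(IdentityMapper.class);
            phase2.setReducerClass(IdentityReducer.class);
            phase2.setOutputKeyClass(LongWritable.class);
            phase2.setOutputValueClass(Text.class);
            FileInputFormat.setInputPaths(phase2, new Path("hbrj/output"));
            FileOutputFormat.setOutputPath(phase2, new Path("hbrj/output2"));
            JobClient.runJob(phase2);
        }
    }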
Here is what gets printed when I run phase 2 (I am including everything in the hope that it is useful):
2015-06-11 10:31:47.526 java[3918:305930] Unable to load realm info from SCDynamicStore
15/06/11 10:31:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/11 10:31:49 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/06/11 10:31:49 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/06/11 10:31:49 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
15/06/11 10:31:49 INFO mapred.FileInputFormat: Total input paths to process : 64
15/06/11 10:31:49 INFO mapreduce.JobSubmitter: number of splits:64
15/06/11 10:31:50 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1089761712_0001
15/06/11 10:31:50 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/06/11 10:31:50 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/06/11 10:31:50 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
15/06/11 10:31:50 INFO mapreduce.Job: Running job: job_local1089761712_0001
15/06/11 10:31:50 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/11 10:31:50 INFO mapred.LocalJobRunner: Starting task: attempt_local1089761712_0001_m_000000_0
15/06/11 10:31:50 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
15/06/11 10:31:50 INFO mapred.Task: Using ResourceCalculatorProcessTree : null
15/06/11 10:31:50 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/sasha/hbrj/output/part-00042:0+872
15/06/11 10:31:50 INFO mapred.MapTask: numReduceTasks: 0
15/06/11 10:31:50 INFO mapred.LocalJobRunner:
15/06/11 10:31:50 INFO mapred.Task: Task:attempt_local1089761712_0001_m_000000_0 is done. And is in the process of committing
15/06/11 10:31:50 INFO mapred.LocalJobRunner:
15/06/11 10:31:50 INFO mapred.Task: Task attempt_local1089761712_0001_m_000000_0 is allowed to commit now
15/06/11 10:31:50 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1089761712_0001_m_000000_0' to hdfs://localhost:9000/user/sasha/hbrj/output2/_temporary/0/task_local1089761712_0001_m_000000
15/06/11 10:31:50 INFO mapred.LocalJobRunner: hdfs://localhost:9000/user/sasha/hbrj/output/part-00042:0+872
15/06/11 10:31:50 INFO mapred.Task: Task 'attempt_local1089761712_0001_m_000000_0' done.
The log continues in this fashion until...
15/06/11 10:31:51 INFO mapred.LocalJobRunner: Finishing task: attempt_local1089761712_0001_m_000012_0
15/06/11 10:31:51 INFO mapred.LocalJobRunner: Starting task: attempt_local1089761712_0001_m_000013_0
15/06/11 10:31:51 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
15/06/11 10:31:51 INFO mapred.Task: Using ResourceCalculatorProcessTree : null
15/06/11 10:31:51 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/sasha/hbrj/output/part-00015:0+646
15/06/11 10:31:51 INFO mapred.MapTask: numReduceTasks: 0
15/06/11 10:31:51 INFO mapreduce.Job: Job job_local1089761712_0001 running in uber mode : false
15/06/11 10:31:51 INFO mapreduce.Job: map 100% reduce 0%
15/06/11 10:31:51 INFO mapred.LocalJobRunner:
15/06/11 10:31:51 INFO mapred.Task: Task:attempt_local1089761712_0001_m_000013_0 is done. And is in the process of committing
15/06/11 10:31:51 INFO mapred.LocalJobRunner:
15/06/11 10:31:51 INFO mapred.Task: Task attempt_local1089761712_0001_m_000013_0 is allowed to commit now
15/06/11 10:31:51 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1089761712_0001_m_000013_0' to hdfs://localhost:9000/user/sasha/hbrj/output2/_temporary/0/task_local1089761712_0001_m_000013
15/06/11 10:31:51 INFO mapred.LocalJobRunner: hdfs://localhost:9000/user/sasha/hbrj/output/part-00015:0+646
15/06/11 10:31:51 INFO mapred.Task: Task 'attempt_local1089761712_0001_m_000013_0' done.
15/06/11 10:31:51 INFO mapred.LocalJobRunner: Finishing task: attempt_local1089761712_0001_m_000013_0
15/06/11 10:31:51 INFO mapred.LocalJobRunner: Starting task: attempt_local1089761712_0001_m_000014_0
The Starting task ... Finishing task pattern repeats until the last task, shown below, and then the job is reported as completing successfully:
15/06/11 10:31:53 INFO mapred.MapTask: numReduceTasks: 0
15/06/11 10:31:53 INFO mapred.LocalJobRunner:
15/06/11 10:31:53 INFO mapred.Task: Task:attempt_local1089761712_0001_m_000063_0 is done. And is in the process of committing
15/06/11 10:31:53 INFO mapred.LocalJobRunner:
15/06/11 10:31:53 INFO mapred.Task: Task attempt_local1089761712_0001_m_000063_0 is allowed to commit now
15/06/11 10:31:53 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1089761712_0001_m_000063_0' to hdfs://localhost:9000/user/sasha/hbrj/output2/_temporary/0/task_local1089761712_0001_m_000063
15/06/11 10:31:53 INFO mapred.LocalJobRunner: hdfs://localhost:9000/user/sasha/hbrj/output/part-00004:0+178
15/06/11 10:31:53 INFO mapred.Task: Task 'attempt_local1089761712_0001_m_000063_0' done.
15/06/11 10:31:53 INFO mapred.LocalJobRunner: Finishing task: attempt_local1089761712_0001_m_000063_0
15/06/11 10:31:53 INFO mapred.LocalJobRunner: map task executor complete.
15/06/11 10:31:54 INFO mapreduce.Job: Job job_local1089761712_0001 completed successfully
15/06/11 10:31:54 INFO mapreduce.Job: Counters: 20
File System Counters
    FILE: Number of bytes read=96487226
    FILE: Number of bytes written=106993472
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=1157797
    HDFS: Number of bytes written=884212
    HDFS: Number of read operations=8576
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=4224
Map-Reduce Framework
    Map input records=793
    Map output records=793
    Input split bytes=6848
    Spilled Records=0
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=42
    Total committed heap usage (bytes)=12124160000
File Input Format Counters
    Bytes Read=28599
File Output Format Counters
    Bytes Written=21848
What I have done so far:
I found a similar question here: Hadoop WordCount example stuck at map 100% reduce 0%, and followed some of the advice given. In particular:
- At one point I configured YARN so that I could monitor the job at localhost:8088. All the mappers completed correctly, with no failures, and the job ended abruptly right after the last mapper succeeded; no reducers were ever started. The map stage was shown as 100% and reduce as 0%.
- Running the following command over the logs returned nothing:

    cat /path/to/logs/*.log | grep ERROR
- Since I can see the output of the mapper stage, and it looks correct, I believe the problem does not lie there.
- I have tried debugging by putting print statements in the configure method of the Reducer and in the reduce method itself; none of them were printed when I reran the job (a sketch of the instrumented reducer follows below).
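For reference, the instrumentation looked roughly like this. This is a sketch against the old mapred API; Phase2Reducer is a hypothetical name, and hbrj's actual reducer signature and types may differ:

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class Phase2Reducer extends MapReduceBase
            implements Reducer<Text, Text, Text, Text> {

        @Override
        public void configure(JobConf job) {
            // This never shows up in my runs, suggesting the reducer
            // is never even instantiated.
            System.err.println("Phase2Reducer.configure() called");
        }

        @Override
        public void reduce(Text key, Iterator<Text> values,
                           OutputCollector<Text, Text> output, Reporter reporter)
                throws IOException {
            // Never printed either; values are passed through unchanged.
            System.err.println("Phase2Reducer.reduce() key=" + key);
            while (values.hasNext()) {
                output.collect(key, values.next());
            }
        }
    }

In local mode these prints would go straight to the console, so their absence is consistent with the reduce percentage staying at 0%.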
Additional note: since the algorithm I am using was published and is supposed to work, I suspect the problem may stem from the fact that the code is three years old and was written for Hadoop 0.20.2, but I should not be too sure of this.
I understand that this is a specific question, but I hope someone can point me in the right direction. I would be happy to include anything else you might find useful. Any help is greatly appreciated!