I have written a Mapper and a Reducer in Python and have executed them successfully on Amazon's Elastic MapReduce (EMR) using Hadoop Streaming.
The final result folder contains the output in three different files: part-00000, part-00001, and part-00002. But I need the output as a single file. Is there a way I can do that?
Here is my code for the Mapper:
#!/usr/bin/env python
import sys

# Emit a tab-separated (word, 1) pair for every word on every input line.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print('%s\t%s' % (word, 1))
And here is my code for the Reducer:
#!/usr/bin/env python
from operator import itemgetter
import sys

current_word = None
current_count = 0
word = None
max_count = 0

# Input arrives sorted by key, so all counts for a given word are contiguous.
for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # Skip lines where the count is not a number.
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            if current_word[0] != '@':
                print('%s\t%d' % (current_word, current_count))
            if count > max_count:
                max_count = count
        current_count = count
        current_word = word

# Flush the last word, which the loop above never emits.
if current_word == word:
    print('%s\t%d' % (current_word, current_count))
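As an aside, the mapper and reducer can be sanity-checked locally before submitting to EMR, since Hadoop Streaming essentially pipes sorted mapper output into the reducer. A minimal sketch, assuming the scripts are saved as mapper.py and reducer.py and input.txt is a sample input file:

cat input.txt | python mapper.py | sort -k1,1 | python reducer.py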
Again, I need the output of this as a single file.
My solution to the above problem was to execute the following hdfs command:
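hadoop fs -getmerge /hdfs/path /local/path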
where /hdfs/path is the directory containing all the parts (part-*****) of the job output, and /local/path is whatever destination you choose on the local file system. The -getmerge option of hadoop fs merges all of the job output into a single file on the local file system.
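Alternatively, a job can be made to write a single part file in the first place by forcing a single reducer, at the cost of losing reduce-phase parallelism. A sketch of the streaming invocation, assuming hypothetical input/output paths and jar name (the property is mapreduce.job.reduces on newer Hadoop versions, mapred.reduce.tasks on older ones):

hadoop jar hadoop-streaming.jar -D mapreduce.job.reduces=1 -input /hdfs/input -output /hdfs/output -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py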