Mehraban

Reputation: 3324

mrjob bad --steps error using make_runner on Hadoop cluster

I'm trying to run a simple wordcount example programmatically, but I can't make the code work on a Hadoop cluster.

job in test_jobs.py:

from mrjob.job import MRJob
import re


WORD_RE = re.compile(r"[\w']+")

class MRWordFreqCount(MRJob):

    def mapper(self, _, line):
        for word in WORD_RE.findall(line):
            yield word.lower(), 1

    def combiner(self, word, counts):
        yield word, sum(counts)

    def reducer(self, word, counts):
        yield word, sum(counts)

runner in mr_job_test.py:

from test_jobs import MRWordFreqCount

def test_runner(in_args, input_dir):
    tmp_output = []
    args = in_args + input_dir
    mr_job = MRWordFreqCount(args.split())
    with mr_job.make_runner() as runner:
        runner.run()
        for line in runner.stream_output():
            tmp_output = tmp_output + [line]
    return tmp_output

if __name__ == '__main__':
    input_dir = 'hdfs:///test_input/'
    args = '-r hadoop '
    print test_runner(args, input_dir)

I can run this code locally (with the inline runner), but on Hadoop I get:

    Traceback (most recent call last):
      File "mr_job_tester.py", line 17, in <module>
        print test_runner(args, input_dir)
      File "mr_job_tester.py", line 8, in test_runner
        runner.run()
      File "/usr/local/lib/python2.7/dist-packages/mrjob/runner.py", line 458, in run
        self._run()
      File "/usr/local/lib/python2.7/dist-packages/mrjob/hadoop.py", line 239, in _run
        self._run_job_in_hadoop()
      File "/usr/local/lib/python2.7/dist-packages/mrjob/hadoop.py", line 295, in _run_job_in_hadoop
        for step_num in xrange(self._num_steps()):
      File "/usr/local/lib/python2.7/dist-packages/mrjob/runner.py", line 742, in _num_steps
        return len(self._get_steps())
      File "/usr/local/lib/python2.7/dist-packages/mrjob/runner.py", line 721, in _get_steps
        raise ValueError("Bad --steps response: \n%s" % stdout)
    ValueError: Bad --steps response:

Upvotes: 0

Views: 889

Answers (1)

Mehraban

Reputation: 3324

According to this, the way mrjob submits the job file and re-executes it remotely inside each mapper and reducer makes it necessary for the lines below to be in the job declaration file:

if __name__ == "__main__":
    MRWordFreqCount.run()
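The mechanism behind the error can be illustrated without Hadoop. mrjob re-invokes the job file as a separate script (roughly `python test_jobs.py --steps`) and parses its stdout; without the `__main__` guard the re-invoked script prints nothing, so mrjob sees an empty response and raises `Bad --steps response`. A minimal pure-Python sketch of that effect, with toy file contents and a hypothetical `run_as_script` helper (not part of mrjob):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A toy "job file" WITHOUT the __main__ guard: importable, but running
# it as a script produces no output on stdout.
without_guard = textwrap.dedent("""
    def run():
        print("steps: mapper,combiner,reducer")
""")

# The same file WITH the guard: invoked as a script, it now responds.
with_guard = without_guard + textwrap.dedent("""
    if __name__ == "__main__":
        run()
""")

def run_as_script(source):
    """Write the source to a temp file and run it as a child process,
    the way mrjob shells out to the job file, returning its stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        return result.stdout
    finally:
        os.remove(path)

print(repr(run_as_script(without_guard)))  # '' -> "Bad --steps response"
print(repr(run_as_script(with_guard)))     # the steps line appears
```

The inline runner masks the bug because it imports the class in-process instead of shelling out to the file, which is why the code worked locally.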

Upvotes: 1
