I have a bunch of large HTML files, and I want to run a Hadoop MapReduce job over them to find the most frequently used words. I wrote my mapper and reducer in Python and used Hadoop Streaming to run them.
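For reference, the job is launched with a Hadoop Streaming command roughly like the one below (the streaming jar location and the HDFS input/output paths are placeholders and vary by installation):

hadoop jar /path/to/hadoop-streaming.jar \
    -input /user/rohanbk/input \
    -output /user/rohanbk/output \
    -mapper mapper.py \
    -reducer reducer.py \
    -file /home/rohanbk/mapper.py \
    -file /home/rohanbk/reducer.py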

Here is my mapper:

#!/usr/bin/env python

import sys
import re
import string

def remove_html_tags(in_text):
    '''
    Remove any HTML tags that are found.
    '''
    global flag
    in_text=in_text.lstrip()
    in_text=in_text.rstrip()
    in_text=in_text+"\n"

    if flag==True: 
        in_text="<"+in_text
        flag=False
    if re.search('^<',in_text)!=None and re.search('(>\n+)$', in_text)==None: 
        in_text=in_text+">"
        flag=True
    p = re.compile(r'<[^<]*?>')
    in_text=p.sub('', in_text)
    return in_text

# input comes from STDIN (standard input)
global flag
flag=False
for line in sys.stdin:
    # remove leading and trailing whitespace, set to lowercase and remove HTML tags
    line = line.strip().lower()
    line = remove_html_tags(line)
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
       # write the results to STDOUT (standard output);
       # what we output here will be the input for the
       # Reduce step, i.e. the input for reducer.py
       #
       # tab-delimited; the trivial word count is 1
       if word =='': continue
       for c in string.punctuation:
           word= word.replace(c,'')

       print '%s\t%s' % (word, 1)

Here is my reducer:

#!/usr/bin/env python

from operator import itemgetter
import sys

# maps words to their counts
word2count = {}

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()

    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)
    # convert count (currently a string) to int
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:
        pass

sorted_word2count = sorted(word2count.iteritems(),
                           key=lambda (k, v): (v, k), reverse=True)

# write the results to STDOUT (standard output)
for word, count in sorted_word2count:
    print '%s\t%s'% (word, count)

Whenever I use just a small sample input, e.g. "Hello World Hello Hello World ...", I get the correct ranked-list output. But when I try it on a small HTML file, piping the HTML into my mapper with cat, I get the following error (input2 contains some HTML code):

rohanbk@hadoop:~$ cat input2 | /home/rohanbk/mapper.py | sort | /home/rohanbk/reducer.py
Traceback (most recent call last):
  File "/home/rohanbk/reducer.py", line 15, in <module>
    word, count = line.split('\t', 1)
ValueError: need more than 1 value to unpack

Can anyone explain why I'm getting this? Also, what is a good way to debug MapReduce jobs?


Solution

You can also reproduce the error with:

echo "hello - world" | ./mapper.py  | sort | ./reducer.py

The problem is here:

if word == '': continue
for c in string.punctuation:
    word = word.replace(c, '')

If word is a single punctuation token, as happens with the input above (after splitting), it gets turned into an empty string, and the mapper then prints a line that is nothing but a tab followed by 1. The reducer's strip() removes that leading tab, leaving just "1", so split('\t', 1) returns only one value and the unpacking fails with the ValueError you saw. The fix is simply to move the empty-string check to after the punctuation replacement.
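A minimal sketch of the reordered inner loop (only the position of the empty-word check changes):

for word in words:
    # strip punctuation first...
    for c in string.punctuation:
        word = word.replace(c, '')
    # ...then skip words that have become empty, so no bare "<tab>1" lines are emitted
    if word == '':
        continue
    print '%s\t%s' % (word, 1)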

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow