I am using Hadoop 2.6 Streaming in a YARN environment on a 3-node cluster. I can run the job successfully on a smaller input file. However, when I give the same MapReduce job a 15 or 24 GB data file, it fails once it reaches the reduce phase, with the following error:
15/08/16 18:58:55 INFO mapreduce.Job: map 69% reduce 20%
15/08/16 18:58:56 INFO mapreduce.Job: Task Id : attempt_1439307476930_0012_m_000094_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
The stderr output doesn't seem to contain anything helpful.
Here is my hadoop command:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar \
-D stream.map.output.field.separator=, \
-D stream.num.map.output.key.fields=5 \
-D mapreduce.map.output.key.field.separator=, \
-D mapreduce.partition.keypartitioner.options=-k1,2 \
-D log4j.configuration=/usr/hadoop/hadoop-2.6.0/etc/hadoop/log4j.properties \
-file /usr/hadoop/code/sgw/mapper_sgw_lgi.py \
-mapper 'python mapper_sgw_lgi.py 172.27.64.10' \
-file /usr/hadoop/code/sgw/reducer_sgw_lgi.py \
-reducer 'python reducer_sgw_lgi.py' \
-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
-input /input/172.27.64.10_sgw_1-150_06212015-nl.log \
-output output3
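For context, "subprocess failed with code 1" in Hadoop Streaming means the Python script itself exited non-zero, typically from an unhandled exception on some input line. The actual contents of mapper_sgw_lgi.py and reducer_sgw_lgi.py are not shown, so as a minimal sketch only: a streaming-style reducer compatible with the flags above (comma separator, keying on the first two fields per `-k1,2`) might be structured like this. All function names and the counting logic are illustrative assumptions, not my actual code.

```python
import sys

def reduce_stream(lines, sep=","):
    """Group comma-separated records by their first two fields
    (mirroring -D mapreduce.partition.keypartitioner.options=-k1,2)
    and count records per group. Malformed lines are skipped
    rather than raising, so the subprocess cannot die with code 1."""
    counts = {}
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        fields = line.split(sep)
        if len(fields) < 2:
            # log and skip instead of crashing the streaming task
            sys.stderr.write("skipping malformed line: %r\n" % line)
            continue
        key = sep.join(fields[:2])
        counts[key] = counts.get(key, 0) + 1
    return counts

if __name__ == "__main__":
    for key, count in sorted(reduce_stream(sys.stdin).items()):
        # streaming output: key, tab, value
        print("%s\t%d" % (key, count))
```

Wrapping the per-line work in a skip-and-log guard like this at least keeps one bad record in a 15-24 GB file from killing the whole attempt, and the stderr messages show up in the task logs.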