Incompatible argument to function error in a MapReduce program (Java)

I am developing a MapReduce Java project in Eclipse (on Ubuntu 14.04 LTS) that uses the Apache Avro serialization framework, for which I need the avro-tools-1.7.7.jar file. I downloaded this jar from the Apache website and wrote my Java code against it. When I run the program I get a java.lang.VerifyError. I read on several sites that this error is caused by a mismatch between the JDK version that compiled the class files inside a jar and the JDK version at runtime, so I checked the class file versions in the downloaded jar. They did not match my runtime JVM, so I downgraded my JDK from 1.7 to 1.6 and the mismatch went away: the classes compiled into the jar have major version 50, and so do my current project's class files (a rough sketch of how I checked this is included after the stack trace below). But I still get the error:

srimanth@srimanth-Inspiron-N5110:~$ hadoop jar Desktop/AvroMapReduceExamples.jar practice.AvroSort file:///home/srimanth/avrofile.avro file:///home/srimanth/sorted/ test.avro
15/04/19 22:14:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.VerifyError: (class: org/apache/hadoop/mapred/JobTrackerInstrumentation, method: create signature: (Lorg/apache/hadoop/mapred/JobTracker;Lorg/apache/hadoop/mapred/JobConf;)Lorg/apache/hadoop/mapred/JobTrackerInstrumentation;) Incompatible argument to function
    at org.apache.hadoop.mapred.LocalJobRunner.<init>(LocalJobRunner.java:420)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:455)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
    at practice.AvroSort.run(AvroSort.java:63)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at practice.AvroSort.main(AvroSort.java:67)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:622)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
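
For reference, this is roughly how I checked the class file major version inside the downloaded jar (a minimal standalone sketch; the jar path and entry name are only examples, any .class entry works):

import java.io.DataInputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Prints the class file version of one entry inside a jar.
// Major version 50 corresponds to Java 6, 51 to Java 7.
public class ClassVersionCheck {
    public static void main(String[] args) throws Exception {
        // e.g. java ClassVersionCheck avro-tools-1.7.7.jar org/apache/avro/Schema.class
        JarFile jar = new JarFile(args[0]);
        JarEntry entry = jar.getJarEntry(args[1]);
        DataInputStream in = new DataInputStream(jar.getInputStream(entry));
        try {
            int magic = in.readInt();             // 0xCAFEBABE for a valid class file
            int minor = in.readUnsignedShort();   // minor_version
            int major = in.readUnsignedShort();   // major_version
            System.out.printf("magic=%x minor=%d major=%d%n", magic, minor, major);
        } finally {
            in.close();
            jar.close();
        }
    }
}

With this check, both the classes inside avro-tools-1.7.7.jar and my own project classes report major version 50 (Java 6).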

Here is my Java program:

package practice;

import java.io.File;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.mapred.AvroCollector;
import org.apache.avro.mapred.AvroJob;
import org.apache.avro.mapred.AvroMapper;
import org.apache.avro.mapred.AvroReducer;
import org.apache.avro.mapred.Pair;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class AvroSort extends Configured implements Tool {

    // Emits each input datum as a pair whose key and value are both the
    // datum itself, so the shuffle sorts the records by their own value.
    static class SortMapper<K> extends AvroMapper<K, Pair<K, K>> {
        public void map(K datum, AvroCollector<Pair<K, K>> collector,
                Reporter reporter) throws IOException {
            collector.collect(new Pair<K, K>(datum, null, datum, null));
        }
    }

    // Emits every value for a key unchanged, so duplicates survive in the
    // sorted output.
    static class SortReducer<K> extends AvroReducer<K, K, K> {
        public void reduce(K key, Iterable<K> values,
                AvroCollector<K> collector,
                Reporter reporter) throws IOException {
            for (K value : values) {
                collector.collect(value);
            }
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 3) {
            System.err.printf(
                    "Usage: %s [generic options] <input> <output> <schema-file>\n",
                    getClass().getSimpleName());
            ToolRunner.printGenericCommandUsage(System.err);
            return -1;
        }
        String input = args[0];
        String output = args[1];
        String schemaFile = args[2];

        JobConf conf = new JobConf(getConf(), getClass());
        conf.setJobName("Avro sort");
        FileInputFormat.addInputPath(conf, new Path(input));
        FileOutputFormat.setOutputPath(conf, new Path(output));

        // Input and output both use the schema read from <schema-file>; the
        // intermediate map output is a pair of that schema with itself.
        Schema schema = new Schema.Parser().parse(new File(schemaFile));
        AvroJob.setInputSchema(conf, schema);
        Schema intermediateSchema = Pair.getPairSchema(schema, schema);
        AvroJob.setMapOutputSchema(conf, intermediateSchema);
        AvroJob.setOutputSchema(conf, schema);
        AvroJob.setMapperClass(conf, SortMapper.class);
        AvroJob.setReducerClass(conf, SortReducer.class);

        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new AvroSort(), args);
        System.exit(exitCode);
    }
}
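
For completeness, the <schema-file> argument that run() reads is parsed with Avro's Schema.Parser, so it just needs to contain an Avro schema in JSON form. A minimal standalone parse like the one below (hypothetical path; the real file is the test.avro passed on the command line above) is one way to sanity-check the schema file outside Hadoop with the same Avro jar:

import java.io.File;

import org.apache.avro.Schema;

// Standalone sanity check: parse the schema file with the same Avro jar
// the job uses. The path below is just an example.
public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(new File("/home/srimanth/test.avro"));
        System.out.println(schema.toString(true)); // pretty-prints the parsed schema
    }
}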

Additional info: JDK version: 1.6, Hadoop version: 2.6.0. I am not using Maven.

Please help me, I have been stuck on this all day. I would really appreciate your help.


0 Answers