
Java serialization problem when implementing Spark functions

I have trouble understanding how to implement Spark functions in Java. The documentation gives three ways of using functions in map/reduce:

  1. Lambda expressions
  2. Implementing Function/Function2 as inline (anonymous) classes
  3. Implementing Function/Function2 as inner classes
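
The lambda approach (1.) does work for me; a rough sketch of what that looks like (assuming Java 8 and the same lines RDD as in the code below):

JavaRDD<Integer> lineLengths = lines.map(s -> s.length());
int totalLength = lineLengths.reduce((a, b) -> a + b);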

The problem is that I can't get 2. or 3. to work. For example, this code:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;

public int countInline(String path) {

    String master = "local";
    SparkConf conf = new SparkConf().setAppName("charCounterInLine")
            .setMaster(master);
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<String> lines = sc.textFile(path);

    JavaRDD<Integer> lineLengths = lines
            .map(new Function<String, Integer>() {
                public Integer call(String s) {
                    return s.length();
                }
            });
    return lineLengths.reduce(new Function2<Integer, Integer, Integer>() {
        public Integer call(Integer a, Integer b) {
            return a + b;
        }
    }); // the line causing the error 
}

gives me this error:

14/07/09 11:23:20 INFO DAGScheduler: Failed to run reduce at CharCounter.java:42
[WARNING]
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:297)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: Hadoop.Spark.basique.CharCounter
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:770)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:713)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:697)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1176)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Now, I can avoid this problem by implementing Function/Function2 in a public outer class. However, that was more a lucky guess than an informed decision. Besides, since I can't get the documentation examples to work, I suppose there is something I don't understand.
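
What I mean, roughly (the class names here are illustrative, in the style of the documentation's examples):

// Top-level (outer) classes: no enclosing CharCounter instance is
// captured, so Spark only serializes these small objects themselves.
class GetLength implements Function<String, Integer> {
    public Integer call(String s) {
        return s.length();
    }
}

class Sum implements Function2<Integer, Integer, Integer> {
    public Integer call(Integer a, Integer b) {
        return a + b;
    }
}

// used as: lines.map(new GetLength()).reduce(new Sum())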

Finally, my questions are:

  • How can I get 2. and 3. to work?
  • Why do only lambdas work?
  • Are there other ways of using functions?

2 Answers

  1. # Answer 1

    Adding "implements Serializable" to the enclosing class solves the problem. Spark serializes the enclosing class because the inner classes are members of it, but the enclosing class apparently isn't serializable.
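
    A minimal sketch of that fix, assuming the enclosing class is the asker's CharCounter:

    import java.io.Serializable;

    // The enclosing instance captured by the inner classes can now be
    // shipped to the executors along with the functions.
    public class CharCounter implements Serializable {
        // ... countInline(path) exactly as in the question ...
    }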

  2. # Answer 2

    The relevant part of this stack trace is:

    Task not serializable: java.io.NotSerializableException: Hadoop.Spark.basique.CharCounter
    

    When you define your functions as inner classes, their enclosing object gets pulled into the function's closure and serialized along with them. If that class isn't serializable, or contains a non-serializable field, you will run into this error.

    Here you have a few options:

    • Mark the non-serializable fields of the enclosing object as transient
    • Define the functions as outer classes
    • Define the functions as static nested classes (see the sketch below)
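
    A minimal sketch of the last option (names are hypothetical): a static nested class keeps no reference to the enclosing instance, so nothing else is dragged into the closure.

    import org.apache.spark.api.java.function.Function;

    public class CharCounter {
        // Static nested class: unlike an inner class, it holds no hidden
        // reference to a CharCounter instance, so only this object itself
        // is serialized when Spark ships the function to the executors.
        static class GetLength implements Function<String, Integer> {
            public Integer call(String s) {
                return s.length();
            }
        }
        // used as: lines.map(new GetLength())
    }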