Spark Streaming reading from Kafka and applying Spark SQL aggregations in Java

I have a Spark job that reads data from a database and applies Spark SQL aggregations to it. The code is as follows (only the conf options are omitted):

    // Local Spark context plus a (pre-2.0 style) SQLContext
    SparkConf sparkConf = new SparkConf().setAppName(appName).setMaster("local");
    JavaSparkContext sc = new JavaSparkContext(sparkConf);
    sqlContext = new SQLContext(sc);

    // Load the source data from MongoDB, register it for SQL, and aggregate
    Dataset<Row> df = MongoSpark.read(sqlContext).options(readOptions).load();
    df.registerTempTable("data");
    df.cache();
    aggregators = sqlContext.sql(myQuery);

Now I want to create another job that reads messages from Kafka via Spark Streaming and then applies the same aggregations with Spark SQL. My code is as follows:

    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "192.168.99.100:9092");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", KafkaStatisticsPayloadDeserializer.class);
    kafkaParams.put("group.id", "Group1");
    kafkaParams.put("auto.offset.reset", "earliest");
    kafkaParams.put("enable.auto.commit", false);

    Collection<String> topics = Arrays.asList(topic);

    SparkConf conf = new SparkConf().setAppName(topic).setMaster("local");

    /*
     * Spark streaming context with a 2-second batch interval
     */
    JavaStreamingContext streamingContext = new JavaStreamingContext(conf, Durations.seconds(2));
    /*
     * Create an input DStream that receives data from Kafka
     */
    JavaInputDStream<ConsumerRecord<String, StatisticsRecord>> stream =
            KafkaUtils.createDirectStream(
                    streamingContext,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, StatisticsRecord>Subscribe(topics, kafkaParams)
            );

So far I can successfully read and deserialize the messages. My question is how to actually apply the Spark SQL aggregations to them. I tried the approach below, but it does not work; I think I first need to isolate the "value" field that contains the actual message (a sketch of that idea follows the snippet).

    SQLContext sqlContext = new SQLContext(streamingContext.sparkContext());
    stream.foreachRDD(rdd -> {
        Dataset<Row> df = sqlContext.createDataFrame(rdd.rdd(), StatisticsRecord.class);
        df.createOrReplaceTempView("data");
        df.cache();
        Dataset aggregators = sqlContext.sql(SQLContextAggregations.ORDER_TYPE_DB);
        aggregators.show();
    });
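
A minimal sketch of that idea, assuming StatisticsRecord is a plain serializable Java bean with getters and setters: map the stream down to the record values first, so createDataFrame receives a JavaRDD of StatisticsRecord beans instead of an RDD of ConsumerRecord objects.

    // Sketch only: isolate the Kafka record values before building the DataFrame.
    // Assumes StatisticsRecord is a serializable Java bean.
    JavaDStream<StatisticsRecord> values = stream.map(ConsumerRecord::value);

    values.foreachRDD(rdd -> {
        if (!rdd.isEmpty()) {
            // createDataFrame expects an RDD of beans, not of ConsumerRecord
            Dataset<Row> df = sqlContext.createDataFrame(rdd, StatisticsRecord.class);
            df.createOrReplaceTempView("data");
            sqlContext.sql(SQLContextAggregations.ORDER_TYPE_DB).show();
        }
    });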

2 Answers

  1. # Answer 1

    I solved this with the code below. Note that I now store the messages as JSON strings rather than as the actual objects, which lets spark.read().json() build a DataFrame with an inferred schema directly from the payloads.

        SparkConf conf = new SparkConf().setAppName(topic).setMaster("local");
        JavaStreamingContext streamingContext = new JavaStreamingContext(conf, Durations.seconds(2));
    
        SparkSession spark = SparkSession.builder().appName(topic).getOrCreate();
    
        /*
         * Kafka conf
         */
        Map<String, Object> kafkaParams = new HashMap<>();
    
        kafkaParams.put("bootstrap.servers", dbUri);
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "Group4");
        kafkaParams.put("auto.offset.reset", "earliest");
        kafkaParams.put("enable.auto.commit", false);
    
        Collection<String> topics = Arrays.asList("Statistics");
    
        /*
         * Create an input DStream that receives data from Kafka
         */
        JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                        streamingContext,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
                );
        /*
         * Keep only the actual message in JSON format
         */
        JavaDStream<String> recordStream = stream.flatMap(record -> Arrays.asList(record.value()).iterator());
        /*
         * Extract RDDs from stream and apply aggregation in each one
         */
        recordStream.foreachRDD(rdd -> {
            if (rdd.count() > 0) {
                Dataset<Row> df = spark.read().json(rdd.rdd());
                df.createOrReplaceTempView("data");
                df.cache();
    
            Dataset<Row> aggregators = spark.sql(SQLContextAggregations.ORDER_TYPE_DB);
                aggregators.show();
            }
        });
    
  2. # Answer 2

    You should obtain the context inside the function you apply to the stream, i.e. create or look up the session within foreachRDD rather than referencing one created outside it.
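
    A hedged sketch of that suggestion, reusing the stream and the StatisticsRecord bean from the question (both carried over as assumptions): obtain the SparkSession from the RDD's own SparkConf inside foreachRDD, a common pattern for combining DStreams with Spark SQL.

        stream.foreachRDD(rdd -> {
            // Look up (or lazily create) the session from the RDD's own
            // configuration instead of capturing a context built outside
            // this function.
            SparkSession spark = SparkSession.builder()
                    .config(rdd.context().getConf())
                    .getOrCreate();

            // Isolate the record values, then aggregate as in the question
            Dataset<Row> df = spark.createDataFrame(
                    rdd.map(ConsumerRecord::value), StatisticsRecord.class);
            df.createOrReplaceTempView("data");
            spark.sql(SQLContextAggregations.ORDER_TYPE_DB).show();
        });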