Spark Streaming: reading from Kafka and applying Spark SQL aggregations in Java
I have a Spark job that reads data from a database and applies Spark SQL aggregations. The code looks like this (only the conf options are omitted):
SparkConf sparkConf = new SparkConf().setAppName(appName).setMaster("local");
JavaSparkContext sc = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(sc);
Dataset<Row> df = MongoSpark.read(sqlContext).options(readOptions).load();
df.registerTempTable("data");
df.cache();
Dataset<Row> aggregators = sqlContext.sql(myQuery);
Now I want to create another job that reads messages from Kafka via Spark Streaming and then applies the same aggregations with Spark SQL. My code is as follows:
Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "192.168.99.100:9092");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", KafkaStatisticsPayloadDeserializer.class);
kafkaParams.put("group.id", "Group1");
kafkaParams.put("auto.offset.reset", "earliest");
kafkaParams.put("enable.auto.commit", false);
Collection<String> topics = Arrays.asList(topic);
SparkConf conf = new SparkConf().setAppName(topic).setMaster("local");
/*
* Spark streaming context
*/
JavaStreamingContext streamingContext = new JavaStreamingContext(conf, Durations.seconds(2));
/*
 * Create an input DStream receiving data from Kafka
 */
JavaInputDStream<ConsumerRecord<String, StatisticsRecord>> stream =
KafkaUtils.createDirectStream(
streamingContext,
LocationStrategies.PreferConsistent(),
ConsumerStrategies.<String, StatisticsRecord>Subscribe(topics, kafkaParams)
);
So far I have managed to read and deserialize the messages successfully. My question is how I can actually apply the Spark SQL aggregations on them. I tried the following, but it does not work. I think I first need to isolate the "value" field, which contains the actual message.
SQLContext sqlContext = new SQLContext(streamingContext.sparkContext());
stream.foreachRDD(rdd -> {
    Dataset<Row> df = sqlContext.createDataFrame(rdd.rdd(), StatisticsRecord.class);
    df.createOrReplaceTempView("data");
    df.cache();
    Dataset<Row> aggregators = sqlContext.sql(SQLContextAggregations.ORDER_TYPE_DB);
    aggregators.show();
});
# Answer 1
I solved this with the following code. Note that I now store the messages in JSON format instead of as the actual objects.
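A minimal sketch of what such a solution could look like (an illustration, not this answer's original code), assuming the Kafka value deserializer has been switched to a StringDeserializer so each record value is a JSON document, and reusing the ORDER_TYPE_DB query from the question:

// Assumed imports for this sketch
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// stream is assumed here to be a JavaInputDStream<ConsumerRecord<String, String>>,
// i.e. each Kafka value is a JSON string rather than a StatisticsRecord object
stream.foreachRDD(rdd -> {
    // Obtain a SparkSession from this RDD's own SparkContext, inside the streaming function
    SparkSession spark = SparkSession.builder()
            .config(rdd.context().getConf())
            .getOrCreate();

    // Keep only the Kafka message payloads (the JSON strings)
    JavaRDD<String> json = rdd.map(ConsumerRecord::value);

    // Let Spark infer the schema from the JSON documents
    Dataset<Row> df = spark.read().json(json);

    df.createOrReplaceTempView("data");
    spark.sql(SQLContextAggregations.ORDER_TYPE_DB).show();
});

Obtaining the session from the RDD's own context inside foreachRDD avoids shipping a driver-side context into the streaming closure, and spark.read().json() infers the schema directly from the JSON payloads.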
# Answer 2
You should call the context inside the function that is applied to the stream.
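A minimal sketch of that suggestion, based on the code in the question and assuming StatisticsRecord is a plain JavaBean: obtain the SQLContext inside foreachRDD and isolate the record values before building the DataFrame.

stream.foreachRDD(rdd -> {
    // Get (or create) the SQLContext from this RDD's SparkContext, inside the function applied to the stream
    SQLContext sqlContext = SQLContext.getOrCreate(rdd.context());

    // Isolate the "value" field of each ConsumerRecord before building the DataFrame
    JavaRDD<StatisticsRecord> records = rdd.map(ConsumerRecord::value);

    Dataset<Row> df = sqlContext.createDataFrame(records, StatisticsRecord.class);
    df.createOrReplaceTempView("data");
    sqlContext.sql(SQLContextAggregations.ORDER_TYPE_DB).show();
});

The difference from the attempt in the question is that the DataFrame is built from the StatisticsRecord payloads rather than from the ConsumerRecord objects themselves, and the SQL context is obtained per batch inside the closure instead of being captured from the driver.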