<p>I looked at <a href="https://stackoverflow.com/questions/39699107/spark-rdd-to-dataframe-python">spark-rdd to dataframe</a>.</p>
<p>I read a gzipped JSON file into an RDD:</p>
<pre><code>rdd1 =sc.textFile('s3://cw-milenko-tests/Json_gzips/ticr_calculated_2_2020-05-27T11-59-06.json.gz')
</code></pre>
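<p>A minimal sketch of one way this RDD-of-JSON-strings can be turned into a DataFrame, assuming a local <code>SparkSession</code>; the <code>parallelize</code> call with one hard-coded line stands in for the S3 file (note that <code>sc.textFile</code> decompresses <code>.gz</code> files transparently, yielding one JSON string per line):</p>
<pre><code>import json

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.master("local[1]").appName("rdd-to-df").getOrCreate()
sc = spark.sparkContext

# Stand-in for the lines read from the .json.gz file: one JSON string per record.
lines = sc.parallelize(['{"code_event": "1092406", "event_no": 1}'])

# Parse each line into a Row so toDF() can pick up the column names
# from the JSON keys instead of needing a hand-written schema:
df = lines.map(lambda s: Row(**json.loads(s))).toDF()
df.show()
</code></pre>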
<p>I want to convert it into a Spark DataFrame. The first approach from the linked SO question does not work. This is the first line of the file:</p>
<pre><code>{"code_event": "1092406", "code_event_system": "LOTTO", "company_id": "2", "date_event": "2020-05-27 12:00:00.000", "date_event_real": "0001-01-01 00:00:00.000", "ecode_class": "", "ecode_event": "183", "eperiod_event": "", "etl_date": "2020-05-27", "event_no": 1, "group_no": 0, "name_event": "Ungaria Putto - 8/20", "name_event_short": "Ungaria Putto - 8/20", "odd_coefficient": 1, "odd_coefficient_entry": 1, "odd_coefficient_user": 1, "odd_ekey": "11", "odd_name": "11", "odd_status": "", "odd_type": "11", "odd_voidfactor": 0, "odd_win_types": "", "special_bet_value": "", "ticket_id": "899M-E2X93P", "id_update": 8000001036823656, "topic_group": "cwg5", "kafka_key": "899M-E2X93P", "kafka_epoch": 1590580609424, "kafka_partition": 0, "kafka_topic": "tickets-calculated_2"}
</code></pre>
<p>How can the schema be inferred?</p>
<p>One answer suggests this:</p>
<pre><code>schema = StructType([StructField(str(i), StringType(), True) for i in range(32)])
</code></pre>
<p>Why <code>range(32)</code>?</p>
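<p>That <code>range(32)</code> just manufactures 32 numbered columns (<code>"0"</code> … <code>"31"</code>), presumably one per expected field, but it types everything as <code>StringType</code> and throws away the real key names. A sketch of letting Spark infer the schema instead, assuming a local <code>SparkSession</code>; the hypothetical temp file with one sample record stands in for the S3 path (<code>spark.read.json</code> also decompresses <code>.json.gz</code> transparently, so the same call works on the original file):</p>
<pre><code>import json
import os
import tempfile

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").appName("infer-schema").getOrCreate()

# Hypothetical stand-in for the S3 .json.gz file: one JSON record per line.
sample = {"code_event": "1092406", "event_no": 1, "ticket_id": "899M-E2X93P"}
path = os.path.join(tempfile.mkdtemp(), "sample.json")
with open(path, "w") as f:
    f.write(json.dumps(sample) + "\n")

# spark.read.json infers both the field names and the field types
# from the JSON keys, so no hand-built StructType is needed:
df = spark.read.json(path)
df.printSchema()
</code></pre>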