<p>Another approach is:</p>
<pre><code>df1 = sqlContext.createDataFrame(
[(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
("x1", "x2", "x3"))
df2 = sqlContext.createDataFrame(
[(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x4"))
df = df1.join(df2, ['x1','x2'])
df.show()
</code></pre>
<p>which outputs:</p>
<pre><code>+---+---+---+---+
| x1| x2| x3| x4|
+---+---+---+---+
| 2| b|3.0|0.0|
+---+---+---+---+
</code></pre>
<p>The main advantage of this form is that the columns the tables are joined on are not duplicated in the output, which reduces the risk of running into errors such as <code>org.apache.spark.sql.AnalysisException: Reference 'x1' is ambiguous, could be: x1#50L, x1#57L.</code></p>
<hr/>
<p>Whenever the two tables use different names for these columns (say, in the example above, <code>df2</code> instead had columns <code>y1</code>, <code>y2</code>, and <code>y4</code>), you can rename them on the fly with the following syntax:</p>
<pre><code>df = df1.join(df2.withColumnRenamed('y1','x1').withColumnRenamed('y2','x2'), ['x1','x2'])
</code></pre>
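<p>In plain-Python terms, the rename step simply rewrites the right-hand table's key names before the name-based join; <code>rename_keys</code> below is a hypothetical helper for illustration, not a Spark API:</p>

```python
# Sketch of the rename-before-join step: map old column names to the
# names the join expects, leaving all other columns untouched.
def rename_keys(rows, mapping):
    """Return new row dicts with keys renamed according to `mapping`."""
    return [{mapping.get(k, k): v for k, v in row.items()} for row in rows]

df2 = [{"y1": 1, "y2": "f", "y4": -1.0},
       {"y1": 2, "y2": "b", "y4": 0.0}]

# Equivalent of df2.withColumnRenamed('y1','x1').withColumnRenamed('y2','x2')
renamed = rename_keys(df2, {"y1": "x1", "y2": "x2"})
print(renamed[0])
# {'x1': 1, 'x2': 'f', 'y4': -1.0}
```

<p>After the rename, both tables share the column names <code>x1</code> and <code>x2</code>, so the deduplicating name-based join shown above applies unchanged.</p>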