I have a DataFrame like this:
a = spark.createDataFrame([['Alice', '2020-03-03', '1'], ['Bob', '2020-03-03', '1'], ['Bob', '2020-03-05', '2']], ['name', 'dt', 'hits'])
a.show()
+-----+----------+----+
| name| dt|hits|
+-----+----------+----+
|Alice|2020-03-03| 1|
| Bob|2020-03-03| 1|
| Bob|2020-03-05| 2|
+-----+----------+----+
I want to aggregate the dt and hits columns into a single map per name:
+-----+------------------------------------+
| name|                                 map|
+-----+------------------------------------+
|Alice|                   {'2020-03-03': 1}|
|  Bob|  {'2020-03-03': 1, '2020-03-05': 2}|
+-----+------------------------------------+
But this code raises an exception:
from pyspark.sql import functions as F
a = a.groupBy(F.col('name')).agg(F.create_map(F.col('dt'), F.col('hits')))
Py4JJavaError: An error occurred while calling o2920.agg.
: org.apache.spark.sql.AnalysisException: expression '`dt`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;;
Aggregate [name#1329], [name#1329, map(dt#1330, hits#1331) AS map(dt, hits)#1361]
+- LogicalRDD [name#1329, dt#1330, hits#1331], false
What am I doing wrong?
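Edit: the following sketch seems to work for me (assuming Spark 2.4+, since it relies on map_from_entries), but I'd still like to understand why the create_map version fails:

```python
from pyspark.sql import functions as F

# collect_list IS an aggregate function, so it is legal inside agg();
# map_from_entries (Spark 2.4+) then turns the collected array of
# (dt, hits) structs into one map per name
result = a.groupBy('name').agg(
    F.map_from_entries(
        F.collect_list(F.struct('dt', 'hits'))
    ).alias('map')
)
result.show(truncate=False)
```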