Welcome to Vigges Developer Community - Open, Learning, Share
Welcome To Ask or Share your Answers For Others


0 votes
307 views
in Technique by (71.8m points)

apache spark - Date format in Pyspark Dataframe

My data:

[Row(ID=2887628, Date_Time='11/01/2019 05:00:00 PM'),

My code:

from pyspark.sql import functions as F
df = df.withColumn('Date_Time',F.to_date(F.unix_timestamp('Date_Time', 'MM/dd/yyyy HH:mm:ss a').cast('timestamp')))

But the resulting Date_Time is wrong:

[Row(ID=2887628, Date_Time=None),

What's the problem here?



1 Answer

0 votes
by (71.8m points)

Use lower-case `hh` (12-hour clock) instead of upper-case `HH` (24-hour clock): the `a` AM/PM marker only makes sense together with a 12-hour pattern. See the Spark docs on datetime patterns for correct usage.

df2 = df.withColumn(
    'Date_Time',
    F.to_date(F.unix_timestamp('Date_Time', 'MM/dd/yyyy hh:mm:ss a').cast('timestamp'))
)

Your code can also be simplified: there is no need for unix_timestamp, since to_date accepts a format string directly.

df2 = df.withColumn(
    'Date_Time',
    F.to_date('Date_Time', 'MM/dd/yyyy hh:mm:ss a')
)
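As a side note, the same 12-hour vs. 24-hour distinction exists in plain Python's `strptime` (`%I` with `%p` vs. `%H`), so you can reproduce the mismatch outside Spark. This is just an illustrative sketch of why the pattern matters, not Spark's own parser:

```python
from datetime import datetime

s = '11/01/2019 05:00:00 PM'

# %I = 12-hour clock, %p = AM/PM marker (analogous to 'hh ... a' in Spark)
correct = datetime.strptime(s, '%m/%d/%Y %I:%M:%S %p')
print(correct.hour)  # 17

# With the 24-hour %H, the AM/PM marker is parsed but ignored,
# so the hour comes out wrong -- the same kind of pattern mismatch
# that makes Spark's 'HH ... a' parse fail.
wrong = datetime.strptime(s, '%m/%d/%Y %H:%M:%S %p')
print(wrong.hour)  # 5
```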
