I was trying to handle null values with Imputer, but I ran into a problem.
I have some null values, and to fill them I used Imputer. Then I wanted to check the new values the Imputer produced, so I used the filter function and Spark SQL to look at them, but both methods return 0 rows.
My question is: why can't I see these rows?
from pyspark.ml.feature import Imputer
imputer = Imputer()
imputer.setInputCols(["Life_expectancy","Adult_Mortality","Hepatitis_B","GDP","BMI","Polio"]) \
.setOutputCols(["Life_expectancy_Mean","Adult_Mortality_Mean","Hepatitis_B_Mean","GDP_Mean","BMI_Mean","Polio_Mean"])
model = imputer.fit(df)
from pyspark.sql import SQLContext
sqlContext = SQLContext(spark)
spark_df = spark.createDataFrame(df2)
spark_df.createTempView("table")
spark.sql("""
select * from table where Country = "Bahamas"
order by "Bahamas"
""").toPandas().head() # pyspark saying 0 raw 24 columns when ı use pyspark on spark
spark_df.filter(col("Country") == "Bahamas").toPandas().head(50) # pyspark saying 0 raw 28 columns when ı use filter
![code screenshot](https://i.stack.imgur.com/Bs1AK.png)
![code screenshot](https://i.stack.imgur.com/jYqQh.png)
I tried using both a pandas DataFrame and a Spark DataFrame, but it didn't work, and I couldn't find a solution to this problem.
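For reference, here is a minimal sketch of the full Imputer flow on a toy DataFrame (the values below are made up for illustration). Note that fit() only estimates the fill values; transform() must be called on the fitted model to actually produce the imputed columns:
from pyspark.ml.feature import Imputer
# Toy data with a null to impute (illustrative only)
toy_df = spark.createDataFrame([(1.0,), (3.0,), (None,)], ["GDP"])
imputer = Imputer(inputCols=["GDP"], outputCols=["GDP_Mean"])
model = imputer.fit(toy_df)           # estimates the mean of GDP
imputed_df = model.transform(toy_df)  # adds GDP_Mean with the null filled
imputed_df.show()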
I want to convert a date column into an integer using Spark SQL.
I'm following this code, but I want to use Spark SQL rather than the PySpark DataFrame API.
To reproduce the example:
from pyspark.sql.types import *
import pyspark.sql.functions as F
# DUMMY DATA
simpleData = [("James",34,"2006-01-01","true","M",3000.60),
("Michael",33,"1980-01-10","true","F",3300.80),
("Robert",37,"1992-07-01","false","M",5000.50)
]
columns = ["firstname","age","jobStartDate","isGraduated","gender","salary"]
df = spark.createDataFrame(data = simpleData, schema = columns)
df = df.withColumn("jobStartDate", df['jobStartDate'].cast(DateType()))
df = df.withColumn("jobStartDateAsInteger1", F.unix_timestamp(df['jobStartDate']))
display(df)
What I want is to do the same transformation, but using Spark SQL. I am using the following code:
df.createOrReplaceTempView("date_to_integer")
%sql
select
seg.*,
CAST (jobStartDate AS INTEGER) as JobStartDateAsInteger2 -- returns null values
from date_to_integer seg
How can I solve this?
First you need to CAST your jobStartDate to DATE, and then use UNIX_TIMESTAMP to transform it into a Unix-time integer.
SELECT
seg.*,
UNIX_TIMESTAMP(CAST (jobStartDate AS DATE)) AS JobStartDateAsInteger2
FROM date_to_integer seg
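If you are running this from PySpark rather than a %sql cell, the same statement can be passed to spark.sql (a sketch assuming the date_to_integer temp view created above):
result = spark.sql("""
    SELECT seg.*,
           UNIX_TIMESTAMP(CAST(jobStartDate AS DATE)) AS JobStartDateAsInteger2
    FROM date_to_integer seg
""")
result.show()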
Hi Stack Overflow folks,
I am new to PySpark and trying to learn as much as I can. But for now, I want to convert GUIDs into integers in PySpark. I can currently run the following statement in SQL to convert a GUID into an int:
CHECKSUM(HASHBYTES('sha2_512',GUID)) AS int_value_wanted
I wanted to do the same thing in PySpark, so I created a temporary table from a Spark DataFrame and added the statement above to the SQL query, but the code keeps throwing "Undefined function: 'CHECKSUM'". Is there a way to add the CHECKSUM function to PySpark, or to do the same thing in another PySpark way?
from awsglue.context import GlueContext
from pyspark import SparkContext
from pyspark.sql import SQLContext
glueContext = GlueContext(SparkContext.getOrCreate())
spark_session = glueContext.spark_session
sqlContext = SQLContext(spark_session.sparkContext, spark_session)
spark_df = spark_session.createDataFrame(
[("2540f487-7a29-400a-98a0-c03902e67f73", "1386172469"),
("0b32389a-ce01-4e6a-855c-15940cc91e9e", "-2013240275")],
("GUDI","int_value_wanted")
)
spark_df.show(truncate=False)
spark_df.registerTempTable('temp')
new_df = sqlContext.sql("SELECT *, CHECKSUM(HASHBYTES('sha2_512', GUDI)) AS detail_id FROM temp")
new_df.show(truncate=False)
+------------------------------------+----------------+
|GUDI |int_value_wanted|
+------------------------------------+----------------+
|2540f487-7a29-400a-98a0-c03902e67f73|1386172469 |
|0b32389a-ce01-4e6a-855c-15940cc91e9e|-2013240275 |
+------------------------------------+----------------+
Thanks
There is a sha2 built-in function, which returns the checksum for the SHA-2 family as a hex string. SHA-512 is also supported.
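A minimal sketch of how that could look, assuming the spark_df and temp view from the question (note that sha2 returns a hex string, so if you really need an integer you would still have to convert or truncate the digest yourself):
from pyspark.sql import functions as F
# Built-in sha2: the second argument selects the variant (224, 256, 384 or 512)
hashed_df = spark_df.withColumn("sha512_hex", F.sha2(F.col("GUDI"), 512))
hashed_df.show(truncate=False)
# The same thing through SQL on the registered temp table
sqlContext.sql("SELECT *, sha2(GUDI, 512) AS detail_hex FROM temp").show(truncate=False)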
I want to add a column with a default date ('1901-01-01') to an existing DataFrame using PySpark.
I used the code snippet below:
from pyspark.sql import functions as F
strRecordStartTime="1970-01-01"
recrodStartTime = hashNonKeyData.withColumn("RECORD_START_DATE_TIME",
    F.lit(strRecordStartTime).cast("timestamp")
)
It gives me the following error:
org.apache.spark.sql.AnalysisException: cannot resolve '1970-01-01'
Any pointer is appreciated.
Try using Python's native datetime with lit; sorry, I don't have access to a machine to test right now.
import datetime
recrodStartTime = hashNonKeyData.withColumn('RECORD_START_DATE_TIME', lit(datetime.datetime(1970, 1, 1)))
I have created a Spark DataFrame:
from pyspark.sql.types import StringType
df1 = spark.createDataFrame(["Ravi","Gaurav","Ketan","Mahesh"], StringType()).toDF("Name")
Now let's add a new column to the existing DataFrame:
from pyspark.sql.functions import lit
import dateutil.parser
yourdate = dateutil.parser.parse('1901-01-01')
df2 = df1.withColumn('Age', lit(yourdate))  # addition of the new column
df2.show()  # print the DataFrame
You can validate your schema using the command below.
df2.printSchema()
Hope that helps.
from pyspark.sql import functions as F
strRecordStartTime = "1970-01-01"
recrodStartTime = hashNonKeyData.withColumn("RECORD_START_DATE_TIME", F.to_date(F.lit(strRecordStartTime)))
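As a quick check (assuming the same hashNonKeyData DataFrame from the question), printing the schema should show the new column with type date:
recrodStartTime.printSchema()
# RECORD_START_DATE_TIME should appear with type "date"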
Using Spark, I'm reading a CSV and want to apply a function to one of its columns. I have some code that works, but it's very hacky. What is the proper way to do this?
My code:
import sys
from pyspark import SparkContext
from pyspark.sql import SparkSession, Row
SparkContext().addPyFile("myfile.py")
spark = SparkSession\
.builder\
.appName("myApp")\
.getOrCreate()
from myfile import myFunction
df = spark.read.csv(sys.argv[1], header=True,
mode="DROPMALFORMED",)
a = df.rdd.map(lambda line: Row(id=line[0], user_id=line[1], message_id=line[2], message=myFunction(line[3]))).toDF()
I would like to be able to just call the function on the column name instead of mapping each row to line and then calling the function on line[index].
I'm using Spark version 2.0.1
You can simply use a user-defined function (udf) combined with withColumn:
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import udf
udf_myFunction = udf(myFunction, IntegerType()) # if the function returns an int
df = df.withColumn("message", udf_myFunction("_3")) #"_3" being the column name of the column you want to consider
This will add a new column to the dataframe df containing the result of myFunction(line[3]).
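Since the CSV is read with header=True, you can also refer to the real column names instead of positional ones. A sketch under that assumption (the column name "message" and the string return type are guesses; adjust both to your actual data and to whatever myFunction really returns):
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
# Wrap myFunction as a UDF and apply it to the named column
udf_myFunction = udf(myFunction, StringType())
df = df.withColumn("message", udf_myFunction(df["message"]))
df.show()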
I'm trying to use a Spark SQL DataFrame to read some data in and apply a bunch of text clean-up functions to each row.
import langid
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf
from pyspark.sql import HiveContext
hsC = HiveContext(sc)
df = hsC.sql("select * from sometable")
def check_lang(data_str):
    language = langid.classify(data_str)
    # only english
    record = ''
    if language[0] == 'en':
        # probability of correctly id'ing the language greater than 90%
        if language[1] > 0.9:
            record = data_str
    return record
check_lang_udf = udf(lambda x: check_lang(x), StringType())
clean_df = df.select("Field1", check_lang_udf("TextField"))
However, when I attempt to run this I get the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o31.select.
: java.lang.AssertionError: assertion failed: Unable to evaluate PythonUDF. Missing input attributes
I've spent a good deal of time trying to gather more information on this, but I can't find anything.
As a side note, I know the code below works, but I'd like to stick with DataFrames.
removeNonEn = data.map(lambda record: (record[0], check_lang(record[1])))
I haven't tried this code, but the API docs suggest this should work:
hsC.registerFunction("check_lang", check_lang)
clean_df = df.selectExpr("Field1", "check_lang(TextField)")
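In Spark 2.x and later, the same registration can be done through the SparkSession with an explicit return type; a sketch assuming the same check_lang function and a df with Field1 and TextField columns:
from pyspark.sql.types import StringType
# Register the Python function for use inside SQL expressions
spark.udf.register("check_lang", check_lang, StringType())
clean_df = df.selectExpr("Field1", "check_lang(TextField) AS TextField_en")
clean_df.show()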