How to correctly import pyspark.sql.functions? - apache-spark

from pyspark.sql.functions import isnan, when, count, sum, ...
It is very tiresome adding them all one by one. Is there a way to import all of them at once?

You can use from pyspark.sql.functions import *, but this pollutes the namespace: for example, pyspark's sum function shadows Python's built-in sum function.
A safer approach is import pyspark.sql.functions as F, then call F.sum.

Simply:
import pyspark.sql.functions as f
And use it as:
f.sum(...)
f.to_date(...)

Related

Cast all columns from StringType to DoubleType

This process is quite long, and I wanted to know if I can reduce it to one line.
I'd like to cast all the columns from string to double.
Use a list comprehension in Python.
Example:
from pyspark.sql.types import *
from pyspark.sql.functions import *
new_df1 = df.select([col(c).cast("double") for c in df.columns])

NameError: name 'average' is not defined

How do I use avg on a dataframe column in order to replace null values?
When running this snippet of code:
df.select(colname).agg(avg(colname))
I receive this exception:
NameError: name 'avg' is not defined
what other command can I use?
Got it. I had to use:
from pyspark.sql.functions import *
from pyspark.sql.types import *
A cleaner solution would be:
import pyspark.sql.functions as F
df.select(colname).agg(F.avg(colname))

How do I add a new date column with constant value to a Spark DataFrame (using PySpark)?

I want to add a column with a default date ('1901-01-01') to an existing dataframe using pyspark.
I used the below code snippet:
from pyspark.sql import functions as F
strRecordStartTime="1970-01-01"
recrodStartTime = hashNonKeyData.withColumn("RECORD_START_DATE_TIME",
    lit(strRecordStartTime).cast("timestamp")
)
It gives me the following error:
org.apache.spark.sql.AnalysisException: cannot resolve '1970-01-01'
Any pointer is appreciated.
Try using a native Python datetime with lit (sorry, I don't have access to a machine right now):
recrodStartTime = hashNonKeyData.withColumn('RECORD_START_DATE_TIME', lit(datetime.datetime(1970, 1, 1)))
I have created one spark dataframe:
from pyspark.sql.types import StringType
df1 = spark.createDataFrame(["Ravi","Gaurav","Ketan","Mahesh"], StringType()).toDF("Name")
Now let's add a new column to the existing dataframe:
from pyspark.sql.functions import lit
import dateutil.parser
yourdate = dateutil.parser.parse('1901-01-01')
df2 = df1.withColumn('Age', lit(yourdate))  # add the new column
df2.show()  # print the dataframe
You can validate your schema with the command below.
df2.printSchema()
Hope that helps.
from pyspark.sql import functions as F
strRecordStartTime = "1970-01-01"
recrodStartTime = hashNonKeyData.withColumn("RECORD_START_DATE_TIME", F.to_date(F.lit(strRecordStartTime)))

Join RDD using python conditions

I have two RDD. First one contains information related IP address (see col c_ip):
[Row(unic_key=1608422, idx=18, s_date='2016-12-31', s_time='15:00:07', c_ip='119.228.181.78', c_session='3hyj0tb434o23uxegpnmvzr0', origine_file='inFile', process_date='2017-03-13'),
Row(unic_key=1608423, idx=19, s_date='2016-12-31', s_time='15:00:08', c_ip='119.228.181.78', c_session='3hyj0tb434o23uxegpnmvzr0', origine_file='inFile', process_date='2017-03-13'),
]
And another RDD which is IP geolocation.
network,geoname_id,registered_country_geoname_id,represented_country_geoname_id,is_anonymous_proxy,is_satellite_provider,postal_code,latitude,longitude,accuracy_radius
1.0.0.0/24,2077456,2077456,,0,0,,-33.4940,143.2104,1000
1.0.1.0/24,1810821,1814991,,0,0,,26.0614,119.3061,50
1.0.2.0/23,1810821,1814991,,0,0,,26.0614,119.3061,50
1.0.4.0/22,2077456,2077456,,0,0,,-33.4940,143.2104,1000
I would like to match these two, but the problem is that there is no exact-match column between the two RDDs.
I would like to use the Python3 Package ipaddress and do a check like this:
> import ipaddress
> ipaddress.IPv4Address('1.0.0.5') in ipaddress.ip_network('1.0.0.0/24')
True
Is it possible to use a Python function to perform the join (a left outer join, so as not to exclude any rows from my first RDD)? How can I do that?
When using Apache Spark 1.6, you can still use a UDF as a predicate in a join. After generating some test data:
import ipaddress
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType, StructField, StructType, BooleanType, ArrayType, IntegerType
sessions = sc.parallelize([(1608422, '119.228.181.78'),
                           (1608423, '119.228.181.78')]).toDF(['unic_key', 'c_ip'])
geo_ip = sc.parallelize([('1.0.0.0/24', 2077456, 2077456),
                         ('1.0.1.0/24', 1810821, 1814991),
                         ('1.0.2.0/23', 1810821, 1814991),
                         ('1.0.4.0/22', 2077456, 2077456)]).toDF(['network', 'geoname_id', 'registered_country_geoname_id'])
You can create the UDF predicate as follows:
def ip_range(ip, network_range):
    # unicode() is Python 2 (Spark 1.6 era); on Python 3 plain str works
    return ipaddress.IPv4Address(unicode(ip)) in ipaddress.ip_network(unicode(network_range))

pred = udf(ip_range, BooleanType())
And then you can use the UDF in the following join:
sessions.join(geo_ip).where(pred(sessions.c_ip, geo_ip.network))
Unfortunately this currently doesn't work in Spark 2.x, see https://issues.apache.org/jira/browse/SPARK-19728
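On Python 3, where str is already unicode, the membership check needs no conversion at all. A minimal sketch of just the predicate (runs without Spark):

```python
import ipaddress

def in_network(ip, network):
    # True if the IPv4 address falls inside the CIDR network range
    return ipaddress.IPv4Address(ip) in ipaddress.ip_network(network)

print(in_network('1.0.0.5', '1.0.0.0/24'))  # True
print(in_network('1.0.2.7', '1.0.0.0/24'))  # False
```

It could then be wrapped with udf(in_network, BooleanType()) exactly as above.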

How to register UDF with no argument in Pyspark

I have tried a Spark UDF with a parameter, using a lambda function, and registered it. But how can I create a UDF with no argument and register it? I tried this; my sample code is expected to show the current time:
from datetime import datetime
from pyspark.sql.functions import udf
def getTime():
    timevalue = datetime.now()
    return timevalue

udfGateTime = udf(getTime, TimestampType())
But PySpark is showing
NameError: name 'TimestampType' is not defined
which probably means my UDF is not registered
I was comfortable with this format
spark.udf.register('GATE_TIME', lambda():getTime(), TimestampType())
But can a lambda function take no arguments? I haven't tried it, so I am a bit confused. How should I write the code to register this getTime() function?
A lambda expression can be nullary. You're just using incorrect syntax:
spark.udf.register('GATE_TIME', lambda: getTime(), TimestampType())
There is nothing special about lambda expressions in the context of Spark. You can use getTime directly:
spark.udf.register('GetTime', getTime, TimestampType())
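A quick pure-Python check that a nullary lambda behaves as expected (no Spark needed):

```python
from datetime import datetime

# Nullary lambda: the parameter list is empty, call it with no arguments
get_time = lambda: datetime.now()

print(isinstance(get_time(), datetime))  # True
```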
There is no need for an inefficient udf at all; Spark provides the required function out of the box:
spark.sql("SELECT current_timestamp()")
or
from pyspark.sql.functions import current_timestamp
spark.range(0, 2).select(current_timestamp())
I made a small tweak here and it is working well for now:
import datetime
from pyspark.sql.types import *

def getTime():
    timevalue = datetime.datetime.now()
    return timevalue

def GetVal(x):
    if True:
        timevalue = getTime()
        return timevalue

spark.udf.register('GetTime', lambda x: GetVal(x), TimestampType())
spark.sql("select GetTime('currenttime') as value").show()
Any value can be passed instead of 'currenttime'; it will return the current date and time.
The error "NameError: name 'TimestampType' is not defined" is due to the missing import:
from pyspark.sql.types import TimestampType
For more info regarding TimestampType, see this answer: https://stackoverflow.com/a/30992905/5088142
