I have a pyspark dataframe where occasionally the columns will have a wrong value that matches another column. It would look something like this:
| Date       | Latitude   |
|------------|------------|
| 2017-01-01 | 43.4553    |
| 2017-01-02 | 42.9399    |
| 2017-01-03 | 43.0091    |
| 2017-01-04 | 2017-01-04 |
Obviously, the last Latitude value is incorrect. I need to remove any and all rows that are like this. I thought about using .isin() but I can't seem to get it to work. If I try
df['Date'].isin(['Latitude'])
I get:
Column<(Date IN (Latitude))>
Any suggestions?
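One direct approach (a minimal sketch, using the column names from the example above) is to compare the two columns and keep only the rows where they differ:
from pyspark.sql import functions as F

# keep rows where the Date value does not equal the Latitude value
df = df.filter(F.col("Date") != F.col("Latitude"))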
If you're more comfortable with SQL syntax, here is an alternative way using a pyspark-sql condition inside the filter():
df = df.filter("Date NOT IN (Latitude)")
Or equivalently using pyspark.sql.DataFrame.where():
df = df.where("Date NOT IN (Latitude)")
I'm trying to obtain the following:
+---------+---------+
|work_time|day_shift|
+---------+---------+
| 00:45:40|       No|
| 10:05:47|      Yes|
| 15:25:28|      Yes|
| 19:38:52|       No|
+---------+---------+
where I classify the "work_time" into "day_shift".
"Yes" - if the time falls between 09:00:00 and 18:00:00
"No" - otherwise
My "work_time" is in datetime format showing only the time. I tried the following, but I'm just getting "No" for everything.
df = df.withColumn(
    'day_shift',
    when(
        (df.work_time >= to_timestamp(lit('09:00:00'), 'HH:mm:ss'))
        & (df.work_time <= to_timestamp(lit('18:00:00'), 'HH:mm:ss')),
        'Yes'
    ).otherwise('No')
)
You can use the Column class method between. It works for both timestamps and strings in the format "HH:mm:ss". Use this:
F.col("work_time").between("09:00:00", "18:00:00")
Full test:
from pyspark.sql import functions as F
df = spark.createDataFrame([('00:45:40',), ('10:05:47',), ('15:25:28',), ('19:38:52',)], ['work_time'])
day_shift = F.col("work_time").between("09:00:00", "18:00:00")
df = df.withColumn("day_shift", F.when(day_shift, "Yes").otherwise("No"))
df.show()
# +---------+---------+
# |work_time|day_shift|
# +---------+---------+
# | 00:45:40| No|
# | 10:05:47| Yes|
# | 15:25:28| Yes|
# | 19:38:52| No|
# +---------+---------+
First of all, Spark doesn't have a so-called "Time" data type; it only supports either TimestampType or DateType. Therefore, I believe the work_time in your dataframe is a string.
Secondly, when you inspect func.to_timestamp(func.lit('09:00:00'), 'HH:mm:ss') in a select statement, it shows:
+--------------------------------+
|to_timestamp(09:00:00, HH:mm:ss)|
+--------------------------------+
|1970-01-01 09:00:00 |
+--------------------------------+
only showing top 1 row
The best way to achieve this is either to split your work_time column into separate hour, minute, and second columns and filter on those, or to prepend a date to work_time before doing any timestamp filtering.
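A sketch of the second option (prepending a fixed date), assuming work_time is a string column in "HH:mm:ss" format:
from pyspark.sql import functions as F

# build a full timestamp by prefixing an arbitrary date, then compare
work_ts = F.to_timestamp(
    F.concat(F.lit('1970-01-01 '), F.col('work_time')), 'yyyy-MM-dd HH:mm:ss'
)
df = df.withColumn(
    'day_shift',
    F.when(
        (work_ts >= F.to_timestamp(F.lit('1970-01-01 09:00:00')))
        & (work_ts <= F.to_timestamp(F.lit('1970-01-01 18:00:00'))),
        'Yes'
    ).otherwise('No')
)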
I'm using the Spark DataFrame API.
I'm trying to give sum() a list parameter containing column names as strings.
When I put the column names directly into the function, the script works.
When I try to provide it to the function as a parameter of type list, I get the error:
"py4j.protocol.Py4JJavaError: An error occurred while calling o155.sum.
: java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.lang.String"
Using the same kind of list parameter for groupBy() works.
This is my script:
groupBy_cols = ['date_expense_int', 'customer_id']
agged_cols_list = ['total_customer_exp_last_m','total_customer_exp_last_3m']
df = df.groupBy(groupBy_cols).sum(agged_cols_list)
When I write it like this, it works:
df = df.groupBy(groupBy_cols).sum('total_customer_exp_last_m','total_customer_exp_last_3m')
I also tried to give sum() a list of Column objects by using
agged_cols_list2 = []
for i in agged_cols_list:
    agged_cols_list2.append(col(i))
but that didn't work either.
Unpack your list using the asterisk notation:
df = df.groupBy(groupBy_cols).sum(*agged_cols_list)
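If you also want friendlier column names than sum(total_customer_exp_last_m), a variant using agg with aliases (a sketch based on the names from the question; the sum_ prefix is just illustrative):
from pyspark.sql import functions as F

df = df.groupBy(groupBy_cols).agg(
    *[F.sum(c).alias(f'sum_{c}') for c in agged_cols_list]
)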
If you have a df like the one below and want to aggregate a list of fields
df.show(5,truncate=False)
+---+---------+----+
|id |subject |mark|
+---+---------+----+
|100|English |45 |
|100|Maths |63 |
|100|Physics |40 |
|100|Chemistry|94 |
|100|Biology |74 |
+---+---------+----+
only showing top 5 rows
from pyspark.sql.functions import col, sum

agged_cols_list = ['subject', 'mark']
df.groupBy("id").agg(*[sum(col(c)) for c in agged_cols_list]).show(5, truncate=False)
+---+------------+---------+
|id |sum(subject)|sum(mark)|
+---+------------+---------+
|125|null |330.0 |
|124|null |332.0 |
|155|null |304.0 |
|132|null |382.0 |
|154|null |300.0 |
+---+------------+---------+
Note that sum(subject) becomes null as it is a string column.
In this case you may want to apply count to subject and sum to mark, so you can use a dictionary:
summary = {"subject": "count", "mark": "sum"}
df.groupBy("id").agg(summary).show(5,truncate=False)
+---+--------------+---------+
|id |count(subject)|sum(mark)|
+---+--------------+---------+
|125|5 |330.0 |
|124|5 |332.0 |
|155|5 |304.0 |
|132|5 |382.0 |
|154|5 |300.0 |
+---+--------------+---------+
only showing top 5 rows
I have two pandas DataFrames:
df1 from database A with connection parameters {"host":"hostname_a","port": "5432", "dbname":"database_a", "user": "user_a", "password": "secret_a"}. The column key is the primary key.
df1:
| | key | create_date | update_date |
|---:|------:|:-------------|:--------------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 |
| 1 | 57248 | | 2018-01-21 |
| 2 | 57249 | 1992-12-22 | 2016-01-31 |
| 3 | 57250 | | 2015-01-21 |
| 4 | 57251 | 1991-12-23 | 2015-01-21 |
| 5 | 57262 | | 2015-01-21 |
| 6 | 57263 | | 2014-01-21 |
df2 from database B with connection parameters {"host": "hostname_b","port": "5433", "dbname":"database_b", "user": "user_b", "password": "secret_b"}. The column id is the primary key (these values are originally the same as the ones in the key column of df1; it's only a renaming of the primary key column of df1).
df2:
| | id | create_date | update_date | user |
|---:|------:|:-------------|:--------------|:------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 | |
| 1 | 57248 | | 2018-01-21 | |
| 2 | 57249 | 1992-12-24 | 2020-10-11 | klm |
| 3 | 57250 | 2001-07-14 | 2019-21-11 | ptl |
| 4 | 57251 | 1991-12-23 | 2015-01-21 | |
| 5 | 57262 | | 2015-01-21 | |
| 6 | 57263 | | 2014-01-21 | |
Notice that row[2] and row[3] in df2 have more recent update_date values (2020-10-11 and 2019-21-11 respectively) than their counterparts in df1 (where id = key), because their create_date values have been modified (by the given users).
I would like to update the rows of df1 (concretely, the create_date and update_date values) where update_date in df2 is more recent than its original value in df1 (for the same primary keys).
This is how I'm tackling this for the moment, using sqlalchemy and psycopg2 + the .to_sql() method of pandas' DataFrame:
import psycopg2
from sqlalchemy import create_engine

# creator must be a callable that returns a new DBAPI connection
connector = lambda: psycopg2.connect(**database_parameters_dictionary)
engine = create_engine('postgresql+psycopg2://', creator=connector)

df1.update(df2)  # 1) maybe there is something better to do here?

with engine.connect() as connection:
    df1.to_sql(
        name="database_table_name",
        con=connection,
        schema="public",
        if_exists="replace",  # 2) maybe there is also something better to do here?
        index=True
    )
The problem I have is that, according to the documentation, the if_exists argument can only do three things:
if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’
Therefore, to update these two rows, I have to:
1) use the .update() method on df1 with df2 as an argument, together with
2) replacing the whole table inside the .to_sql() method, which means "drop + recreate".
As the tables are really large (more than 500'000 entries), I have the feeling that this will need a lot of unnecessary work!
How could I efficiently update only those two newly updated rows? Do I have to generate custom SQL queries that compare the dates for each row and only take the ones that have really changed? Here again, I have the intuition that looping through all rows to compare the update dates will take a lot of time. What is the most efficient way to do that? (It would have been easier in pure SQL if the two tables were on the same host/database, but unfortunately that's not the case.)
Pandas can't do partial updates of a table, no. There is a longstanding open bug for supporting sub-whole-table-granularity updates in .to_sql(), but you can see from the discussion there that it's a very complex feature to support in the general case.
However, limiting it to just your situation, I think there's a reasonable approach you could take.
Instead of using df1.update(df2), put together an expression that yields only the changed records with their new values (I don't use pandas often so I don't know this offhand); then iterate over the resulting dataframe and build the UPDATE statements yourself (or with the SQLAlchemy expression layer, if you're using that). Then, use the connection to DB A to issue all the UPDATEs as one transaction. With an indexed PK, it should be as fast as this would ever be expected to be.
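For illustration, a rough sketch of what that could look like (the merge logic is an assumption on my part; the table name and connection parameters are taken from the question and would need adjusting):
from sqlalchemy import create_engine, text

# keep only the records whose update_date in df2 is newer than in df1
merged = df1.merge(df2, left_on="key", right_on="id", suffixes=("_old", "_new"))
changed = merged[merged["update_date_new"] > merged["update_date_old"]]

# connection to database A, built from the parameters in the question
engine_a = create_engine(
    "postgresql+psycopg2://user_a:secret_a@hostname_a:5432/database_a"
)

update_stmt = text(
    "UPDATE database_table_name "
    "SET create_date = :create_date, update_date = :update_date "
    "WHERE key = :key"
)

# issue all UPDATEs as one transaction
with engine_a.begin() as connection:
    for row in changed.itertuples(index=False):
        connection.execute(
            update_stmt,
            {
                "key": row.key,
                "create_date": row.create_date_new,
                "update_date": row.update_date_new,
            },
        )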
BTW, I don't think df1.update(df2) is exactly correct - from my reading, that would update all rows with any differing fields, not just when updated_date > prev updated_date. But it's a moot point if updated_date in df2 is only ever more recent than those in df1.
Let's say that I have a pandas DataFrame df like so:
| ID  | Value |
|-----|-------|
| 12  | 32    |
| 354 | 43    |
Values are random.
I want to split this data into different folds using sklearn.model_selection.StratifiedKFold. What I want is to add another column df['kfold'] which will correspond to the particular fold number that was obtained by splitting the dataframe. Is it possible?
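One common pattern looks like the sketch below (not from the original thread; it assumes the column you stratify on, here Value, holds discrete class labels):
from sklearn.model_selection import StratifiedKFold

# shuffle the rows, then record for each row the fold in which it is a validation sample
df = df.sample(frac=1, random_state=42).reset_index(drop=True)
df["kfold"] = -1

skf = StratifiedKFold(n_splits=5)
for fold, (_, val_idx) in enumerate(skf.split(X=df, y=df["Value"])):
    df.loc[val_idx, "kfold"] = fold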
I would like to change the following dataframe:
+----+--------+-----------+
| id | rating | timestamp |
+----+--------+-----------+
|  0 |    5.0 | 231312231 |
|  1 |    3.0 | 192312311 |   # epoch time (seconds since 1 January 1970)
+----+--------+-----------+
to the following dataframe:
+----+--------+-----------+
| id | rating | timestamp |
+----+--------+-----------+
|  0 |    5.0 |        05 |
|  1 |    3.0 |        04 |   # month of year
+----+--------+-----------+
How can I do that?
It's easy using built-in functions:
import org.apache.spark.sql.functions._;
import spark.implicits._
val newDF = dataset.withColumn("timestamp", month(from_unixtime('timestamp)));
Note that DataFrames are immutable, so you can create a new DataFrame but not modify an existing one. Of course, you can assign this new Dataset to the same variable.
Note number 2: DataFrame = Dataset[Row], which is why I use both names.
If you are coming from Scala, you can use the sql.functions methods inside DataFrame.select or DataFrame.withColumn; for your case I think the method month(e: Column): Column can perform the change you want. It will be something like this:
import org.apache.spark.sql.functions.{col, from_unixtime, month}

// convert the epoch seconds to a timestamp first, then extract the month
df.withColumn("timestamp", month(from_unixtime(col("timestamp"))))
I do believe that there's an equivalent way in Java, Python and R
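For example, a rough PySpark equivalent (a sketch, assuming the timestamp column holds epoch seconds as in the question):
from pyspark.sql import functions as F

# from_unixtime converts epoch seconds to a timestamp string, month extracts the month
df = df.withColumn("timestamp", F.month(F.from_unixtime(F.col("timestamp"))))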