NetworkX find_cliques error using PySpark - apache-spark

I'm trying to use the find_cliques functionality to locate the maximal cliques in each subgroup.
I'm using the following implementation with a pandas_udf, grouped by each connected component:
import pandas as pd
import networkx as nx
from networkx import find_cliques

def pd_create_subgroups(pdf):
    index = pdf.component.unique()[0]
    try:
        # building the graph
        gnx = nx.from_pandas_edgelist(pdf, "src", "dst")
        bic = list(find_cliques(gnx))
        if len(bic) <= 1:
            return pd.DataFrame(data={"cliques": [[f"issue_{index}"]]})
        bic_sorted = sorted(map(sorted, bic))
        bic_sorted = [b for b in bic_sorted if len(b) >= 3]
        if len(bic_sorted) == 0:
            return pd.DataFrame(data={"cliques": [[f"issue_{index}"]]})
        return pd.DataFrame([bic_sorted]).transpose().rename(columns={0: "cliques"})
    except Exception:
        return pd.DataFrame(data={"cliques": [[f"issue_{index}"]]})
pdf is a pandas dataframe containing the fields src, dst, and component;
it has around 200M-300M undirected edges in total,
and the job fails with the following error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 12.0 failed 4 times, most recent failure: Lost task 0.3 in stage 12.0 (TID 331) (executor 9): java.lang.IndexOutOfBoundsException: index: 2147483628, length: 36 (expected: range(0, 2147483648))
When running on smaller graphs it works properly.
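For completeness, a grouped pandas UDF like this is typically applied per component with groupBy().applyInPandas; the sketch below shows roughly how it would be wired up. The DataFrame name edges_df and the output schema are illustrative assumptions, not taken from the original post.

# hypothetical wiring of the grouped UDF (names and schema are assumptions)
result = (
    edges_df                      # columns: src, dst, component
    .groupBy("component")
    .applyInPandas(pd_create_subgroups, schema="cliques array<string>")
)
result.show(truncate=False)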

Related

Python Pandas indexing provides KeyError: (slice(None, None, None), )

I am indexing and slicing my data using Pandas in Python 3 to calculate spatial statistics.
When I run a for loop over the range of latitudes and longitudes using .loc, it raises KeyError: (slice(None, None, None), ) for any combination of latitude and longitude for which no values are available in the input file. Instead of skipping those values, it raises the error and stops the code. Following is my code.
import numpy as np
import pandas as pd
from scipy import stats

filename = 'input.txt'
df = pd.read_csv(filename, delim_whitespace=True, header=None,
                 names=['year', 'month', 'lat', 'lon', 'aod'],
                 index_col=['year', 'month', 'lat', 'lon'])
idx = pd.IndexSlice
for i in range(1, 13):
    for lat0 in np.arange(0., 40.25, 0.25, dtype=float):
        for lon0 in np.arange(20.0, 75.25, 0.25, dtype=float):
            tmp = df.loc[idx[:, i, lat0, lon0], :]
            if len(tmp) <= 0:
                continue
            tmp2 = tmp.index.tolist()
In the code above, if I run tmp = df.loc[idx[:,1,0.0,34.0],:], it works well and gives the following output, which I use for further calculations.
aod
year month lat lon
2003 1 0.0 34.0 0.032000
2006 1 0.0 34.0 0.114000
2007 1 0.0 34.0 0.035000
2008 1 0.0 34.0 0.026000
2011 1 0.0 34.0 0.097000
2012 1 0.0 34.0 0.106333
2013 1 0.0 34.0 0.081000
2014 1 0.0 34.0 0.038000
2015 1 0.0 34.0 0.278500
2016 1 0.0 34.0 0.033000
2017 1 0.0 34.0 0.036333
2019 1 0.0 34.0 0.064333
2020 1 0.0 34.0 0.109500
But when I run the same code for tmp = df.loc[idx[:,1,0.0,32.75],:], there are no values available in the input file for that latitude and longitude. Instead of skipping it, pandas gives me the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/pandas/core/indexing.py", line 925, in __getitem__
return self._getitem_tuple(key)
File "/usr/lib/python3/dist-packages/pandas/core/indexing.py", line 1100, in _getitem_tuple
return self._getitem_lowerdim(tup)
File "/usr/lib/python3/dist-packages/pandas/core/indexing.py", line 822, in _getitem_lowerdim
return self._getitem_nested_tuple(tup)
File "/usr/lib/python3/dist-packages/pandas/core/indexing.py", line 906, in _getitem_nested_tuple
obj = getattr(obj, self.name)._getitem_axis(key, axis=axis)
File "/usr/lib/python3/dist-packages/pandas/core/indexing.py", line 1157, in _getitem_axis
locs = labels.get_locs(key)
File "/usr/lib/python3/dist-packages/pandas/core/indexes/multi.py", line 3347, in get_locs
indexer = _update_indexer(
File "/usr/lib/python3/dist-packages/pandas/core/indexes/multi.py", line 3296, in _update_indexer
raise KeyError(key)
KeyError: (slice(None, None, None), 1, 0.0, 32.75)
I tried replacing .loc with .iloc, but that gave a too many indexers error. I also tried solutions from the internet using .to_numpy(), .values and .as_matrix(), but nothing worked.
The idiomatic Pandas solution would be to write this as a groupby. Example:
# split df into groups by the keys month, lat, and lon
for index, tmp in df.groupby(['month', 'lat', 'lon']):
    # tmp is a dataframe where all rows have identical month, lat, and lon values
    # ... do something with the tmp dataframe ...
This has three benefits.
Speed. A groupby will be faster because it only needs to loop over the dataframe once, rather than searching the whole dataframe for everything matching the first group, then searching for the second group, etc.
Simplicity. One groupby replaces the three nested loops, the IndexSlice lookup, and the empty-result check.
Robustness. From a robustness perspective, if a dataframe doesn't have, for example, any rows matching "month=1,lat=0.0,lon=32.75", then it will not create that group.
More information: User guide on grouping
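To make the robustness point concrete, here is a minimal sketch (with made-up values standing in for the AOD data) showing that a missing month/lat/lon combination raises a KeyError with .loc but is simply never yielded by groupby:

import pandas as pd

# toy stand-in for the AOD data; the values are made up for illustration
toy = pd.DataFrame({'year': [2003, 2006], 'month': [1, 1],
                    'lat': [0.0, 0.0], 'lon': [34.0, 34.0],
                    'aod': [0.032, 0.114]}).set_index(['year', 'month', 'lat', 'lon'])

# toy.loc[pd.IndexSlice[:, 1, 0.0, 32.75], :]   # raises KeyError: combination not present

for (month, lat, lon), tmp in toy.groupby(['month', 'lat', 'lon']):
    print(month, lat, lon, len(tmp))            # only (1, 0.0, 34.0) appears; nothing to skip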
Remark about groupby aggregation functions
You'll also sometimes see groupby used with aggregation functions. For example, suppose you wanted to get the sum of each column within each group.
>>> l = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])
>>> df.groupby(by=["b"]).sum()
a c
b
1.0 2 3
2.0 2 5
These aggregation functions are faster and easier to use, but sometimes I need something that is custom and unusual, so I'll write a loop. But if you're doing something common, like getting the average of a group, consider looking for an aggregation function.
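For example, the common case from this question, the mean aod per month/lat/lon combination, needs no loop at all; a one-line sketch using the question's column names:

# mean of every group in one call, no loop and no KeyError handling needed
monthly_means = df.groupby(['month', 'lat', 'lon'])['aod'].mean()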

Why I am getting matrices are not aligned error for DataFrame dot function?

I am trying to implement simple linear regression in Python using NumPy and Pandas, but I am getting ValueError: matrices are not aligned when calling the dot function, which essentially performs matrix multiplication, as the documentation says. Following is the code snippet:
import numpy as np
import pandas as pd
#initializing the matrices for X, y and theta
#dataset = pd.read_csv("data1.csv")
dataset = pd.DataFrame([[6.1101,17.592],[5.5277,9.1302],[8.5186,13.662],[7.0032,11.854],[5.8598,6.8233],[8.3829,11.886],[7.4764,4.3483],[8.5781,12]])
X = dataset.iloc[:, :-1]
y = dataset.iloc[:, -1]
X.insert(0, "x_zero", np.ones(X.size), True)
print(X)
print(f"\n{y}")
theta = pd.DataFrame([[0],[1]])
temp = pd.DataFrame([[1],[1]])
print(X.shape)
print(theta.shape)
print(X.dot(theta))
And this is the output for the same:
x_zero 0
0 1.0 6.1101
1 1.0 5.5277
2 1.0 8.5186
3 1.0 7.0032
4 1.0 5.8598
5 1.0 8.3829
6 1.0 7.4764
7 1.0 8.5781
0 17.5920
1 9.1302
2 13.6620
3 11.8540
4 6.8233
5 11.8860
6 4.3483
7 12.0000
Name: 1, dtype: float64
(8, 2)
(2, 1)
Traceback (most recent call last):
File "linear.py", line 16, in <module>
print(X.dot(theta))
File "/home/tejas/.local/lib/python3.6/site-packages/pandas/core/frame.py", line 1063, in dot
raise ValueError("matrices are not aligned")
ValueError: matrices are not aligned
As you can see from the shape attributes of both, the second axis of X has the same dimension (2) as the first axis of theta, so the dot function should return an 8x1 DataFrame. Then why the error?
This misalignment does not come from the shapes, but from the pandas indexes. You have 2 options to fix your problem:
Tweak theta assignment:
theta = pd.DataFrame([[0],[1]], index=X.columns)
So the indexes you multiply will match.
Remove the index alignment entirely by converting the second DataFrame to numpy:
X.dot(theta.to_numpy())
This behaviour is actually useful in pandas: it tries to match the indexes intelligently. Your case is just a fairly specific one where that becomes counterproductive ;)
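A tiny self-contained sketch of that alignment rule (two rows only, values made up), showing the failing call and both fixes:

import numpy as np
import pandas as pd

X = pd.DataFrame({"x_zero": [1.0, 1.0], 0: [6.1101, 5.5277]})   # columns: ["x_zero", 0]
theta = pd.DataFrame([[0], [1]])                                 # index: [0, 1]

# X.dot(theta)                                                   # ValueError: matrices are not aligned
print(X.dot(pd.DataFrame([[0], [1]], index=X.columns)))          # option 1: indexes now match the columns
print(X.dot(theta.to_numpy()))                                   # option 2: bypass index alignment entirely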

calibration_and_holdout_data: AttributeError: 'int' object has no attribute 'n'

I'm trying to run a BG/NBD model using the lifetimes library.
All my analysis is based on the following example, but with my own data:
https://towardsdatascience.com/whats-a-customer-worth-8daf183f8a4f
Somehow I receive the following error, and after reading 50+ Stack Overflow posts without finding an answer, I'd like to ask my own question:
What am I doing wrong? :(
Thanks in Advance! :)
I tried changing the type of all columns in my dataframe, but nothing changed.
df2 = df
df2.head()
person_id effective_date accounting_sales_total
0 219333 2018-08-04 1049.89
1 333219 2018-12-21 4738.97
2 344405 2018-07-16 253.99
3 455599 2017-07-14 2199.96
4 766665 2017-08-15 1245.00
from lifetimes.utils import calibration_and_holdout_data

summary_cal_holdout = calibration_and_holdout_data(df2, 'person_id', 'effective_date',
                                                   calibration_period_end='2017-12-31',
                                                   observation_period_end='2018-12-31')
print(summary_cal_holdout.head())
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-85-cdcb400098dc> in <module>()
7 summary_cal_holdout = calibration_and_holdout_data(df2, 'person_id', 'effective_date',
8 calibration_period_end='2017-12-31',
----> 9 observation_period_end='2018-12-31')
10
11 print(summary_cal_holdout.head())
/usr/local/envs/py3env/lib/python3.5/site-packages/lifetimes/utils.py in calibration_and_holdout_data(transactions, customer_id_col, datetime_col, calibration_period_end, observation_period_end, freq, datetime_format, monetary_value_col)
122 combined_data.fillna(0, inplace=True)
123
--> 124 delta_time = (to_period(observation_period_end) - to_period(calibration_period_end)).n
125 combined_data["duration_holdout"] = delta_time
126
AttributeError: 'int' object has no attribute 'n'
This actually runs fine as it is :)
import pandas as pd
from lifetimes.utils import calibration_and_holdout_data

data = {'person_id': [219333, 333219, 344405, 455599, 766665],
        'effective_date': ['2018-08-04', '2018-12-21', '2018-07-16', '2017-07-14', '2017-08-15'],
        'accounting_sales_total': [1049.89, 4738.97, 253.99, 2199.96, 1245.00]}
df2 = pd.DataFrame(data)

summary_cal_holdout = calibration_and_holdout_data(df2, 'person_id', 'effective_date',
                                                   calibration_period_end='2017-12-31',
                                                   observation_period_end='2018-12-31')
print(summary_cal_holdout.head())
Returns:
frequency_cal recency_cal T_cal frequency_holdout \
person_id
455599 0.0 0.0 170.0 0.0
766665 0.0 0.0 138.0 0.0
duration_holdout
person_id
455599 365
766665 365
Which means your issue is probably with package versioning, try:
pip install lifetimes --upgrade
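If you want to double-check which version is actually installed before and after upgrading (the post does not say which lifetimes release fixes this), a quick check from Python works on 3.8+; on older interpreters, pip show lifetimes gives the same information.

from importlib.metadata import version   # standard library, Python 3.8+
print(version("lifetimes"))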

Weird error when selecting more than 100 spark udf columns

Starting with a simple spark dataframe with only one value, I create N simple udf columns.
import pyspark.sql.functions

N = 100
df = sqlContext.createDataFrame([{'value': 0}])
udf_columns = [pyspark.sql.functions.udf(lambda x: 0)('value') for _ in range(N)]
df.select(udf_columns).take(1)
For N <= 100 this code works perfectly.
But as soon as N >= 101, I get the following error:
Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.execution.EvaluatePython.takeAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 34.0 failed 1 times, most recent failure: Lost task 0.0 in stage 34.0 (TID 50, localhost): java.lang.UnsupportedOperationException: Cannot evaluate expression: PythonUDF#<lambda>(input[0, LongType])
at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.genCode(Expression.scala:239)
at org.apache.spark.sql.execution.PythonUDF.genCode(python.scala:44)
at org.apache.spark.sql.catalyst.expressions.Expression$$anonfun$gen$2.apply(Expression.scala:104)

PySpark join two RDD results in an empty RDD

I'm a Spark newbie trying to edit and apply this movie recommendation tutorial (https://databricks-training.s3.amazonaws.com/movie-recommendation-with-mllib.html) to my dataset, but it keeps throwing this error:
ValueError: Can not reduce() empty RDD
This is the function that computes the Root Mean Squared Error of the model :
from math import sqrt
from operator import add

def computeRmse(model, data, n):
    """
    Compute RMSE (Root Mean Squared Error).
    """
    predictions = model.predictAll(data.map(lambda x: (x[0], x[1])))
    print predictions.count()
    print predictions.first()
    print "predictions above"
    print data.count()
    print data.first()
    print "validation data above"
    # LINE 56 below (the join reported in the logs)
    predictionsAndRatings = predictions.map(lambda x: ((x[0], x[1]), x[2])) \
        .join(data.map(lambda line: line.split(',')).map(lambda x: ((x[0], x[1]), x[2]))) \
        .values()
    print predictionsAndRatings.count()
    print "predictions And Ratings above"
    # LINE 63 below (the reduce that fails)
    return sqrt(predictionsAndRatings.map(lambda x: (x[0] - x[1]) ** 2).reduce(add) / float(n))
model = ALS.train(training, rank, numIter, lambda). data is the validation data set.
training and validation set originally from a ratings.txt file in the format of : userID,productID,rating,ratingopID
These are parts of the output :
879
...
Rating(user=0, product=656, rating=4.122132631144641)
predictions above
...
1164
...
(u'640085', u'1590', u'5')
validation data above
...
16/08/26 12:47:18 INFO DAGScheduler: Registering RDD 259 (join at /path/myapp/MyappALS.py:56)
16/08/26 12:47:18 INFO DAGScheduler: Got job 20 (count at /path/myapp/MyappALS.py:59) with 12 output partitions
16/08/26 12:47:18 INFO DAGScheduler: Final stage: ResultStage 238 (count at /path/myapp/MyappALS.py:59)
16/08/26 12:47:18 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 237)
16/08/26 12:47:18 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 237)
16/08/26 12:47:18 INFO DAGScheduler: Submitting ShuffleMapStage 237 (PairwiseRDD[259] at join at /path/myapp/MyappALS.py:56), which has no missing parents
....
0
predictions And Ratings above
...
Traceback (most recent call last):
File "/path/myapp/MyappALS.py", line 130, in <module>
validationRmse = computeRmse(model, validation, numValidation)
File "/path/myapp/MyappALS.py", line 63, in computeRmse
return sqrt(predictionsAndRatings.map(lambda x: (x[0] - x[1]) ** 2).reduce(add) / float(n))
File "/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 805, in reduce
ValueError: Can not reduce() empty RDD
So from the count() calls I'm sure the initial RDDs are not empty.
Then does the INFO log Registering RDD 259 (join at /path/myapp/MyappALS.py:56) mean that the join job was launched?
Is there something I'm missing?
Thank you.
That error disappeared when I added int() to:
predictionsAndRatings = predictions.map(lambda x: ((x[0], x[1]), x[2])) \
    .join(data.map(lambda x: ((int(x[0]), int(x[1])), int(x[2])))) \
    .values()
We think it's because the predictions come from the predictAll method, which gives tuples of numbers, while the other data was parsed manually and its values were still strings.
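A minimal sketch of that key-type mismatch (assuming an existing SparkContext sc; the sample values are illustrative):

left = sc.parallelize([((640085, 1590), 4.1)])           # numeric keys, like predictAll output
right = sc.parallelize([((u'640085', u'1590'), u'5')])   # string keys, as parsed from the text file

print(left.join(right).collect())                        # [] -- keys never match, so the join is empty

right_fixed = right.map(lambda kv: ((int(kv[0][0]), int(kv[0][1])), float(kv[1])))
print(left.join(right_fixed).collect())                  # [((640085, 1590), (4.1, 5.0))]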