Featuretools' dfs() method fails to run on my entity set after upgrading from v0.1.21 to v0.2.x and v0.3.0.
The error is raised when the pandas backend calculates the aggregate features in _calculate_agg_features(). In particular:
--> 442 to_merge.reset_index(1, drop=True, inplace=True)
...
IndexError: Too many levels: Index has only 1 level, not 2
This was working fine in v0.1.x, and the entity set hasn't changed since the upgrade. The entity set is composed of 7 entities and 6 relationships, and each entity (dataframe) is added via entity_from_dataframe.
Try this:
df.columns = df.columns.droplevel(0)
where df is the dataframe in question. Dropping the extra level from a multi-level column index may solve the problem.
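If a multi-level column index is indeed the culprit, here is a minimal, illustrative sketch of how one can sneak in (for example via a pandas groupby/agg) and how the droplevel fix is applied before the frame is added with entity_from_dataframe; the frame and column names below are made up:
import pandas as pd

# Illustrative frame only: a groupby/agg like this silently produces
# a two-level column index
raw = pd.DataFrame({'customer_id': [1, 1, 2], 'amount': [10, 20, 5]})
df = raw.groupby('customer_id')[['amount']].agg(['sum', 'mean'])

print(df.columns.nlevels)             # 2 -- an extra level that can trip up dfs()
df.columns = df.columns.droplevel(0)  # keep only 'sum' and 'mean'
df = df.reset_index()                 # columns: customer_id, sum, mean

# df can now be added to the entity set via entity_from_dataframe as before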
This is my first time posting to stackoverflow, so I hope I am doing this correctly. I am currently finishing up a 'jumpstart' introduction to data analytics. We are utilizing Python with a few different packages, such as pandas, seaborn, folium, etc. For part of our final project/presentation, I am trying to make a zipcode choropleth map. I have successfully imported folium and have my map displayed; the choropleth concept is new to me and completely extracurricular, so I'm trying to challenge myself.
I found an example of creating a choropleth map here that I am trying to use: https://medium.com/#saidakbarp/interactive-map-visualization-with-folium-in-python-2e95544d8d9b. I believe I correctly substituted the object names for the data frame and map that I am working with. For the GeoJSON data, I found this: https://github.com/OpenDataDE/State-zip-code-GeoJSON. I opened the GeoJSON file in Atom and found the property name for what I believe to be the five-digit zipcode: 'ZCTA5CE10'.
Here is my code:
folium.Choropleth(geo_data='../data/tn_tennessee_zip_codes_geo.min.json',
                  data=slow_to_resolve,
                  columns=['zipcode'],
                  key_on='feature.properties.ZCTA5CE10',
                  fill_color='BuPu', fill_opacity=0.7, line_opacity=0.2,
                  legend_name='Zipcode').add_to(nash_map)
folium.LayerControl().add_to(nash_map)
nash_map
When I try to run the code, I get this error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-114-a2968de30f1b> in <module>
----> 1 folium.Choropleth(geo_data='../data/tn_tennessee_zip_codes_geo.min.json',
2 data=slow_to_resolve,
3 columns=['zipcode'],
4 key_on='feature.properties.ZCTA5CE10',
5 fill_color='BuPu', fill_opacity=0.7, line_opacity=0.2,
~\anaconda3\lib\site-packages\folium\features.py in __init__(self, geo_data, data, columns, key_on, bins, fill_color, nan_fill_color, fill_opacity, nan_fill_opacity, line_color, line_weight, line_opacity, name, legend_name, overlay, control, show, topojson, smooth_factor, highlight, **kwargs)
1198 if hasattr(data, 'set_index'):
1199 # This is a pd.DataFrame
-> 1200 color_data = data.set_index(columns[0])[columns[1]].to_dict()
1201 elif hasattr(data, 'to_dict'):
1202 # This is a pd.Series
IndexError: list index out of range
Prior to this error, I had two columns from my dataframe specified, but I got an 'isnan' error that I'm pretty sure was due to string-type data in the second column, so I removed it. Now I'm trying to figure out the error posted above.
Can someone point me in the right direction? Please keep in mind that aside from this three week jumpstart program, I have zero programming knowledge or experience - so I am still learning terminology and concepts.
Thank you!
You got IndexError: list index out of range because you passed the columns parameter just one column, columns=['zipcode']. It has to be two, like this: columns=['zipcode', 'columnName_to_color_map'].
The first column, 'zipcode', must match the corresponding property in the GeoJSON data (note that the format/type must also match: the string '11372' IS NOT the integer 11372). The second column, 'columnName_to_color_map' or whatever name you use, should be the column that defines the choropleth colors.
Also note that key_on should point to the GeoJSON property whose values match the first column, 'zipcode'.
So the code should look like this:
folium.Choropleth(geo_data='../data/tn_tennessee_zip_codes_geo.min.json',
                  data=slow_to_resolve,
                  columns=['zipcode', 'columnName_to_color_map'],
                  key_on='feature.properties.ZCTA5CE10',
                  fill_color='BuPu', fill_opacity=0.7, line_opacity=0.2,
                  legend_name='Zipcode').add_to(nash_map)
folium.LayerControl().add_to(nash_map)
nash_map
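One practical follow-up on the type caveat: the ZCTA5CE10 values in that GeoJSON are typically strings, so if your zipcode column is numeric, nothing will be matched and no regions get colored. A minimal sketch with toy data ('price' is a placeholder for whatever numeric column you color by):
import pandas as pd

# Toy stand-in for slow_to_resolve; 'price' is a placeholder value column
slow_to_resolve = pd.DataFrame({'zipcode': [37203, 37206, 37211],
                                'price': [350000, 289000, 310000]})

# Cast the join key to string so it matches the GeoJSON ZCTA5CE10 values,
# i.e. '37203' rather than 37203
slow_to_resolve['zipcode'] = slow_to_resolve['zipcode'].astype(str)

# Then pass columns=['zipcode', 'price'] to folium.Choropleth as above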
I am attempting to load data from Azure Synapse DW into a dataframe as shown in the image.
However, I'm getting the following error:
AttributeError: 'DataFrameReader' object has no attribute 'sqlanalytics'
Traceback (most recent call last):
AttributeError: 'DataFrameReader' object has no attribute 'sqlanalytics'
Any thoughts on what I'm doing wrong?
That particular method has been renamed to synapsesql (as per the notes here) and is currently Scala-only as I understand it. The correct syntax would therefore be:
%%spark
val df = spark.read.synapsesql("yourDb.yourSchema.yourTable")
It is possible to share the Scala dataframe with Python via the createOrReplaceTempView method, although I'm not sure how efficient that is. Mixing and matching languages is described here, so for your example you could combine Scala and Python like this:
Cell 1
%%spark
// Get table from dedicated SQL pool and assign it to a dataframe with Scala
val df = spark.read.synapsesql("yourDb.yourSchema.yourTable")
// Save the dataframe as a temp view so it's accessible from PySpark
df.createOrReplaceTempView("someTable")
Cell 2
%%pyspark
## The temp view registered in Scala is now accessible from PySpark
df = spark.sql("select * from someTable")
## !!TODO do some work in PySpark
## ...
The above linked example shows how to write the dataframe back to the dedicated SQL pool too if required.
This is a good article on importing/exporting data with Synapse notebooks, and the limitation is described in the Constraints section:
https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/synapse-spark-sql-pool-import-export#constraints
I am extremely new to Python. I've created a DataFrame from a csv file. The file comes from a complex nested json, so the header values are at the lowest level of granularity.
[Example] df.columns = [ID1, fullID2, total.count, total.value, seedValue.id, seedValue.value1, seedValue.value2, seedValue.largeFile.id, seedValue.largeFile.value1, seedValue.largeFile.value2......]
Requirement: I have to create multiple smaller csvs, one for each granular group of columns, and each must also include ID1 and fullID2.
The approach I figured out is to split on the header values and save the smaller slices.
Problem 1: I am not able to split the values correctly or traverse to the first element for comparison.
[Example]
I'm using df.columns.str.split('.').tolist(). Suppose I get the value listed below; I want to compare the seedValue of id with the seedValue of value1, and pull out this entire part as a new df.
[['seedValue', 'id'], ['seedValue', 'value1'], ['seedValue', 'value2']]
Problem 2: Adding ID1 and fullID2 to this new df.
Any help or direction to achieve this would be super helpful!
[Final output]
df.columns = [ID1, fullID2, total.count, total.value, seedValue.id, seedValue.value1, seedValue.value2, seedValue.largeFile.id, seedValue.largeFile.value1, seedValue.largeFile.value2......]
post-processing the file -
seedValue.columns = ID1,fullID2,id,value1,value2
total.columns = ID1,fullID2,count,value
seedValue.largeFile.columns = ID1,fullID2,id,value1,value2
While I do not have your complex data to hand to provide a more specific solution, I was able to reproduce a similar case with sample data, which shows how to achieve what you are aiming for with your own data.
In order to save each ID to a different file, we need to loop through the IDs. Also, since there may be duplicate IDs, the script saves each group of IDs into its own .csv file. Below is the script, with sample data already included:
import pandas as pd

my_dict = {'ids': [11, 11, 33, 55, 55],
           'info': ["comment_1", "comment_2", "comment_3", "comment_4", "comment_5"],
           'other_column': ["something", "something", "something", "", "something"]}

# Creating a dataframe from the sample dictionary (in your case, from the .csv file)
df = pd.DataFrame(my_dict)

# Sorting the values by id
df = df.sort_values('ids')
df

# Looping through each group of ids and saving it into its own file
for id, g in df.groupby('ids'):
    g.to_csv('id_{}.csv'.format(id), index=False)
And the output files:
id_11.csv
id_33.csv
id_55.csv
For instance, within id_11.csv:
ids info other_column
11 comment_1 something
11 comment_2 something
Notice that we use the ids field in the name of each file. Moreover, index=False means that the DataFrame index won't be written as an extra column in each file.
ADDITIONAL INFO: I used a Notebook in AI Platform within GCP to execute and test the code.
Compared to the more widely known pd.read_csv, pandas offers more granular json support through pd.json_normalize, which lets you specify how to unnest the data, which metadata to use, and so on.
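As a small illustration (the record below is hypothetical, shaped to mirror the column names in the question), json_normalize flattens nested fields into the dotted column names you are seeing:
import pandas as pd

# Hypothetical nested record shaped like the question's column names
record = {'ID1': 1,
          'fullID2': 'A-1',
          'total': {'count': 2, 'value': 10},
          'seedValue': {'id': 7, 'value1': 'x', 'value2': 'y'}}

flat = pd.json_normalize(record)
print(flat.columns.tolist())
# columns like: ['ID1', 'fullID2', 'total.count', 'total.value',
#                'seedValue.id', 'seedValue.value1', 'seedValue.value2']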
Apart from this, reading nested fields from a csv into a two-dimensional dataframe might not be the ideal solution here, and having nested objects inside a dataframe can often be tricky to work with.
Try to read the file as a pure dictionary or a list of dictionaries. You can then loop through the keys and write custom logic to decide how many more levels to go down, how to return the values, and so on. Once you are at a lower level and would prefer to have that part inside a dataframe, create a new temporary dataframe for it, then append these parts together inside the loop.
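A rough sketch of that approach, assuming the raw json is a list of records shaped like the column names above (the file name and exact structure are assumptions):
import json
import pandas as pd

# 'data.json' and the record layout below are assumptions for illustration
with open('data.json') as f:
    records = json.load(f)  # a list of dicts

rows = []
for rec in records:
    seed = rec.get('seedValue', {})  # go one level down
    rows.append({'ID1': rec.get('ID1'),
                 'fullID2': rec.get('fullID2'),
                 'id': seed.get('id'),
                 'value1': seed.get('value1'),
                 'value2': seed.get('value2')})

# ID1, fullID2, id, value1, value2 -- matching the desired seedValue output
seed_df = pd.DataFrame(rows)
seed_df.to_csv('seedValue.csv', index=False)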
I want to generate a graph. First I will query Snowflake to fetch the data for the credits/resources consumed by each warehouse over a year, and I want to use this data to generate a line graph showing the trend of how each warehouse has consumed costs/resources over the past year. For example, if I have 5 warehouses, I want to see a line for each of them showing the trend for the past year.
I am new to graphing in Python and need help with this.
Regards
Vivek
You can do this using matplotlib, pandas, and the snowflake-connector-python module (Python 3.x).
You'll need to build a query that aggregates your warehouse metering history in the way you need, using the WAREHOUSE_METERING_HISTORY account usage view or equivalent. The example below uses a query that aggregates by month.
With the query results in a Pandas DataFrame, you can then use pivot to format the data such that each warehouse series can appear as a line of its own.
import matplotlib.pyplot as plt
import pandas as pd
from snowflake import connector

# Establish your connection here
con = connector.connect(…)
q = """
select
warehouse_name as warehouse,
date_trunc('month', end_time)::date in_month,
sum(credits_used) credits
from snowflake.account_usage.warehouse_metering_history
where warehouse_id != 0
group by warehouse, in_month
order by warehouse, in_month;
"""
df = con.cursor().execute(q).fetch_pandas_all()
# Explicitly specify datatypes for all columns so they behave well
df['IN_MONTH'] = pd.to_datetime(df['IN_MONTH'])
tdf = df.astype({'WAREHOUSE': 'string', 'CREDITS': 'float'})
pdf = tdf.pivot(index='IN_MONTH', columns='WAREHOUSE', values='CREDITS')
pdf.plot()
plt.show()
This yields a line chart with one line per warehouse.
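If you want to sanity-check the pivot step without a Snowflake connection, a toy stand-in frame (values made up) behaves the same way:
import pandas as pd

toy = pd.DataFrame({'IN_MONTH': pd.to_datetime(['2023-01-01', '2023-01-01',
                                                '2023-02-01', '2023-02-01']),
                    'WAREHOUSE': ['WH_A', 'WH_B', 'WH_A', 'WH_B'],
                    'CREDITS': [12.5, 3.0, 9.75, 4.25]})

# One column per warehouse, one row per month; each column then plots as its own line
print(toy.pivot(index='IN_MONTH', columns='WAREHOUSE', values='CREDITS'))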
P.S. You can alternatively try the native Snowsight features in Snowflake to plot interactive charts right from its SQL editor interface.
I am using PySpark to perform SparkSQL on my Hive tables.
records = sqlContext.sql("SELECT * FROM my_table")
which retrieves the contents of the table.
When I use the filter argument as a string, it works okay:
records.filter("field_i = 3")
However, when I try to use the filter method, as documented here
records.filter(records.field_i == 3)
I am encountering this error
py4j.protocol.Py4JJavaError: An error occurred while calling o19.filter.
: org.apache.spark.sql.AnalysisException: resolved attributes field_i missing from field_1,field_2,...,field_i,...field_n
even though this field_i column clearly exists in the DataFrame object.
I prefer to use the second way because I need to use Python functions to perform record and field manipulations.
I am using Spark 1.3.0 in Cloudera Quickstart CDH-5.4.0 and Python 2.6.
From the Spark DataFrame documentation:
In Python it’s possible to access a DataFrame’s columns either by attribute (df.age) or by indexing (df['age']). While the former is convenient for interactive data exploration, users are highly encouraged to use the latter form, which is future proof and won’t break with column names that are also attributes on the DataFrame class.
It seems that the name of your field may be a reserved word or collide with an existing DataFrame attribute, so try:
records.filter(records['field_i'] == 3)
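Another option that avoids attribute access entirely is to build the column expression with pyspark.sql.functions.col (which I believe is also available in 1.3); it should behave the same as the bracket form:
from pyspark.sql.functions import col

# Build the filter from the column name rather than a DataFrame attribute
records.filter(col('field_i') == 3)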
What I did was upgrade my Spark from 1.3.0 to 1.4.0 in Cloudera Quickstart CDH-5.4.0, and the second filtering approach now works, although I still can't explain why 1.3.0 had problems with it.