I have a use case with multiple line plots (with legends), and I need to update the line plots based on a column condition. Below is an example with two data sets; based on the country, the column data source changes. The issue I am facing is that the number of columns is not fixed for the data source, and even the types can vary. So, when I update the data source in a callback after a new country is selected, I get this error:
Error: attempted to retrieve property array for nonexistent field 'pay_conv_7d.content'.
I am guessing this is because the pay_conv_7d.content column doesn't exist in the new data source, but those lines were already in my plot. I have been trying to fix this issue by various means (making common columns for all country selections, adding the missing columns to the data source in the callback), but I still get issues.
Is there a clean way to have multiple line plots updating via a callback, without a lot of hackish workarounds? Any insights or help would be really appreciated. Thanks much in advance! :)
def setup_multiline_plots(x_axis, y_axis, title_text, data_source, plot):
    num_categories = len(data_source.data['categories'])
    legends_list = list(data_source.data['categories'])
    colors_list = Spectral11[0:num_categories]
    # xs = [data_source.data['%s.'%x_axis].values] * num_categories
    # ys = [data_source.data[('%s.%s')%(y_axis,column)] for column in data_source.data['categories']]
    # data_source.data['x_series'] = xs
    # data_source.data['y_series'] = ys
    # plot.multi_line('x_series', 'y_series', line_color=colors_list, legend='categories', line_width=3, source=data_source)
    plot_list = []
    for (colr, leg, column) in zip(colors_list, legends_list, data_source.data['categories']):
        xs, ys = '%s.' % x_axis, '%s.%s' % (y_axis, column)
        plot.line(xs, ys, source=data_source, color=colr, legend=leg, line_width=3, name=ys)
        plot_list.append(ys)
    data_source.data['plot_names'] = data_source.data.get('plot_names', []) + plot_list
    plot.title.text = title_text
def update_plot(country, timeseries_df, timeseries_source,
                aggregate_df, aggregate_source, category,
                plot_pay_7d, plot_r_pay_90d):
    aggregate_metrics = aggregate_df.loc[aggregate_df.country == country]
    aggregate_metrics = aggregate_metrics.nlargest(10, 'cost')
    category_types = list(aggregate_metrics[category].unique())
    timeseries_df = timeseries_df[timeseries_df[category].isin(category_types)]
    timeseries_multi_line_metrics = get_multiline_column_datasource(timeseries_df, category, country)
    # len_series = len(timeseries_multi_line_metrics.data['time.'])
    # previous_legends = timeseries_source.data['plot_names']
    # current_legends = timeseries_multi_line_metrics.data.keys()
    # common_legends = list(set(previous_legends) & set(current_legends))
    # additional_legends_list = list(set(previous_legends) - set(current_legends))
    # for legend in additional_legends_list:
    #     zeros = pd.Series(np.array([0] * len_series), name=legend)
    #     timeseries_multi_line_metrics.add(zeros, legend)
    # timeseries_multi_line_metrics.data['plot_names'] = previous_legends
    timeseries_source.data = timeseries_multi_line_metrics.data
    aggregate_source.data = aggregate_source.from_df(aggregate_metrics)
def get_multiline_column_datasource(df, category, country):
    df_country = df[df.country == country]
    df_pivoted = pd.DataFrame(df_country.pivot_table(index='time', columns=category, aggfunc=np.sum).reset_index())
    df_pivoted.columns = df_pivoted.columns.to_series().str.join('.')
    categories = list(set([column.split('.')[1] for column in list(df_pivoted.columns)]))[1:]
    data_source = ColumnDataSource(df_pivoted)
    data_source.data['categories'] = categories
    return data_source  # the caller in update_plot expects the new ColumnDataSource back
Recently I had to update the data on a MultiLine glyph; check my question if you want to take a look at my algorithm.
I think you can update a ColumnDataSource in at least three ways:
You can create a dataframe to instantiate a new CDS
cds = ColumnDataSource(df_pivoted)
data_source.data = cds.data
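A minimal self-contained sketch of this first approach (the frame, the empty starting source, and the column names here are made up for illustration):
import pandas as pd
from bokeh.models import ColumnDataSource

# hypothetical replacement data for a newly selected country
df_pivoted = pd.DataFrame({'time': [1, 2, 3], 'sales': [10.0, 20.0, 30.0]})

data_source = ColumnDataSource(data=dict(time=[], sales=[]))  # the CDS your glyphs use
cds = ColumnDataSource(df_pivoted)  # throwaway CDS built from the frame
data_source.data = cds.data         # swap the whole .data dict in one go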
You can create a dictionary and assign it to the data attribute directly
d = {
'xs0': [[7.0, 986.0], [17.0, 6.0], [7.0, 67.0]],
'ys0': [[79.0, 69.0], [179.0, 169.0], [729.0, 69.0]],
'xs1': [[17.0, 166.0], [17.0, 116.0], [17.0, 126.0]],
'ys1': [[179.0, 169.0], [179.0, 1169.0], [1729.0, 169.0]],
'xs2': [[27.0, 276.0], [27.0, 216.0], [27.0, 226.0]],
'ys2': [[279.0, 269.0], [279.0, 2619.0], [2579.0, 2569.0]]
}
data_source.data = d
Here, if you need columns of different lengths or empty columns, you can fill the gaps with NaN values in order to keep the column sizes equal. I think this is the solution to your question:
import numpy as np
d = {
'xs0': [[7.0, 986.0], [17.0, 6.0], [7.0, 67.0]],
'ys0': [[79.0, 69.0], [179.0, 169.0], [729.0, 69.0]],
'xs1': [[17.0, 166.0], [np.nan], [np.nan]],
'ys1': [[179.0, 169.0], [np.nan], [np.nan]],
'xs2': [[np.nan], [np.nan], [np.nan]],
'ys2': [[np.nan], [np.nan], [np.nan]]
}
data_source.data = d
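Applied to your original error, the same idea can pad whatever columns the new country's frame is missing before the assignment. A rough sketch (pad_missing_columns and the commented usage line are my own, not from your code):
import numpy as np

def pad_missing_columns(new_data, expected_columns):
    """Give every column the renderers were built with a NaN stub,
    so fields like 'pay_conv_7d.content' always exist in the source."""
    n_rows = len(next(iter(new_data.values()), []))
    for col in expected_columns:
        if col not in new_data:
            new_data[col] = [np.nan] * n_rows
    return new_data

# e.g. inside the callback, before the assignment:
# timeseries_source.data = pad_missing_columns(new_dict, timeseries_source.data.keys())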
Or if you only need to modify a few values, you can use the patch method. Check the documentation here.
The following example patches a slice of one column and individual elements of another:
source = ColumnDataSource(data=dict(foo=[10, 20, 30], bar=[100, 200, 300]))
patches = {
'foo' : [ (slice(2), [11, 12]) ],
'bar' : [ (0, 101), (2, 301) ],
}
source.patch(patches)
After this operation, the value of the source.data will be:
dict(foo=[11, 12, 30], bar=[101, 200, 301])
NOTE: It is important to make the update in one go to avoid performance issues
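For instance (an illustrative sketch, not from the docs), collecting the changes locally and assigning .data once keeps it to a single sync with the browser:
# build the new dict first, then assign once
new_data = dict(source.data)
new_data['foo'] = [11, 12, 30]
new_data['bar'] = [101, 200, 301]
source.data = new_data  # one update event instead of one per column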
I'm trying to add variables to a list that I created. I got a result from a session.execute.
I've done this:
import datetime
import pytz
from sqlalchemy import text

def machine_id(session, machine_serial):
    stmt_raw = '''
        SELECT
            id
        FROM
            machine
        WHERE
            machine.serial = :machine_serial_arg
    '''
    utc_now = datetime.datetime.utcnow()
    utc_now_iso = pytz.utc.localize(utc_now).isoformat()
    utc_start = datetime.datetime.utcnow() - datetime.timedelta(days=30)
    utc_start_iso = pytz.utc.localize(utc_start).isoformat()
    stmt_args = {
        'machine_serial_arg': machine_serial,
    }
    stmt = text(stmt_raw).columns(
        # ts_insert = ISODateTime
    )
    result = session.execute(stmt, stmt_args)
    ts = utc_now_iso
    ts_start = utc_start_iso
    ID = []
    for row in result:
        ID.append({
            'id': row[0],
            'ts': ts,
            'ts_start': ts_start,
        })
    return ID
I'm trying to get the result over the API like this:
def form_response(response, session):
    result_machine_id = machine_id(session, machine_serial)
    if not result_machine_id:
        # German: "serial number not present/found"
        response['Error'] = 'Seriennummer nicht vorhanden/gefunden'
        return
    response['id_timerange'] = result_machine_id
The output looks fine:
{
    "id_timerange": [
        {
            "id": 1,
            "ts": "2020-08-13T08:32:25.835055+00:00",
            "ts_start": "2020-07-14T08:32:25.835089+00:00"
        }
    ]
}
Now I only want the id from it as a parameter for another function. The problem is, I think, that it's not a list: I can't select the first element, and result_machine_id[0] looks just like the posted output. I think in my first function I only add ts & ts_start to the first row? Is it possible to add empty rows and then add 'ts': ts as a value?
Help would be nice.
If I have understood your question correctly ...
Your output looks like a dict, so access its id_timerange key, which gives you a list. Access the first element of that list, which gives you another dict. On this dict you have an id key:
result_machine_id["id_timerange"][0]["id"]
I have been working on a COVID-19 analysis for a dashboard and am using a JSON data source. I have converted the JSON to a dataframe. I am plotting a bar chart of "Days to reach deaths" over a "States" x-axis (categorical values), and I am trying to use a function to update the plot from the slider value. Upon running bokeh serve with --log-level=DEBUG, I am getting an error.
Can someone give me any direction as to what might be causing the issue, or suggest an alternative? I am new to Python, so any help is appreciated.
Please find the code below:
import math
import requests
import numpy as np
import pandas as pd
from bokeh.io import curdoc
from bokeh.layouts import row
from bokeh.models import ColumnDataSource, HoverTool, Panel, Slider, Tabs
from bokeh.plotting import figure

cases_summary = requests.get('https://api.rootnet.in/covid19-in/stats/history')
json_data = cases_summary.json()
# Data cleaning
cases_summary = pd.json_normalize(json_data['data'], record_path='regional', meta='day')
cases_summary['loc'] = np.where(cases_summary['loc'] == 'Nagaland#', 'Nagaland', cases_summary['loc'])
cases_summary['loc'] = np.where(cases_summary['loc'] == 'Madhya Pradesh#', 'Madhya Pradesh', cases_summary['loc'])
cases_summary['loc'] = np.where(cases_summary['loc'] == 'Jharkhand#', 'Jharkhand', cases_summary['loc'])
# Calculate cumulative days since 1st case for each state
cases_summary['day_count'] = (cases_summary['day'].groupby(cases_summary['loc']).cumcount()) + 1
# Initial plot for default slider value=35
days_reach_death_count = cases_summary.loc[(cases_summary['deaths'] >= 35)].groupby(cases_summary['loc']).head(1).reset_index()
slider = Slider(start=10, end=max(cases_summary['deaths']), value=35, step=10, title="Total Deaths")
source = ColumnDataSource(data=dict(days_reach_death_count[['loc', 'day_count', 'deaths']]))
q = figure(x_range=days_reach_death_count['loc'], plot_width=1200, plot_height=600, sizing_mode="scale_both")
q.title.align = 'center'
q.title.text_font_size = '17px'
q.xaxis.axis_label = 'State'
q.yaxis.axis_label = 'Days since 1st Case'
q.xaxis.major_label_orientation = math.pi / 2
q.vbar('loc', top='day_count', width=0.9, source=source)
deaths = slider.value
q.title.text = 'Days to reach %d Deaths' % deaths
hover = HoverTool(line_policy='next')
hover.tooltips = [('State', '@loc'),
                  ('Days since 1st Case', '@day_count'),  # @$name gives the value corresponding to the legend
                  ('Deaths', '@deaths')
                  ]
q.add_tools(hover)

def update(attr, old, new):
    days_death_count = cases_summary.loc[(cases_summary['deaths'] >= slider.value)].groupby(cases_summary['loc']).head(1).reindex()
    source.data = [ColumnDataSource().from_df(days_death_count)]

slider.on_change('value', update)
layout = row(q, slider)
tab = Panel(child=layout, title="New Confirmed Cases since Day 1")
tabs = Tabs(tabs=[tab])
curdoc().add_root(tabs)
Your code has 2 issues:
(critical) source.data must be a dictionary, but you're assigning it an array
(minor) from_df is a class method; you don't need to construct an instance to call it
Try using source.data = ColumnDataSource.from_df(days_death_count) instead.
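Put together, the callback could look roughly like this (a sketch assuming the cases_summary frame and source from the question; trimming to the three plotted columns is my addition):
def update(attr, old, new):
    days_death_count = (cases_summary
                        .loc[cases_summary['deaths'] >= new]  # `new` is the slider value
                        .groupby(cases_summary['loc'])
                        .head(1))
    # from_df is a class method that returns a plain dict of columns
    source.data = ColumnDataSource.from_df(days_death_count[['loc', 'day_count', 'deaths']])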
I am trying to create a bar graph in pygal that uses the Hacker News API and charts the most active stories based on comments. I posted my code below, but I cannot figure out why my graph keeps saying "No data". Any suggestions? Thanks!
import requests
import pygal
from pygal.style import LightColorizedStyle as LCS, LightenStyle as LS
from operator import itemgetter

# Make an API call, and store the response.
url = 'https://hacker-news.firebaseio.com/v0/topstories.json'
r = requests.get(url)
print("Status code:", r.status_code)

# Process information about each submission.
submission_ids = r.json()
submission_dicts = []
for submission_id in submission_ids[:30]:
    # Make a separate API call for each submission.
    url = ('https://hacker-news.firebaseio.com/v0/item/' +
           str(submission_id) + '.json')
    submission_r = requests.get(url)
    print(submission_r.status_code)
    response_dict = submission_r.json()
    submission_dict = {
        'comments': int(response_dict.get('descendants', 0)),
        'title': response_dict['title'],
        'link': 'http://news.ycombinator.com/item?id=' + str(submission_id),
    }
    submission_dicts.append(submission_dict)

# Visualization
my_style = LS('#336699', base_style=LCS)
my_config = pygal.Config()
my_config.show_legend = False
my_config.title_font_size = 24
my_config.label_font_size = 14
my_config.major_label_font_size = 18
my_config.show_y_guides = False
my_config.width = 1000
chart = pygal.Bar(my_config, style=my_style)
chart.title = 'Most Active News on Hacker News'
chart.add('', submission_dicts)
chart.render_to_file('hn_submissons_repos.svg')
The values in the array passed to the add function need to be either numbers or dicts that contain the key 'value' (or a mixture of the two). The simplest solution would be to change the keys used when creating submission_dict:
submission_dict = {
    'value': int(response_dict.get('descendants', 0)),
    'label': response_dict['title'],
    'xlink': 'http://news.ycombinator.com/item?id=' + str(submission_id),
}
Notice that link has become xlink; this is one of the optional parameters defined in the Value Configuration section of the pygal docs.
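As a hypothetical follow-up (not part of the original fix), the label values could also double as x-axis labels, since label otherwise only shows up in the tooltips:
chart.x_labels = [d['label'] for d in submission_dicts]
chart.add('', submission_dicts)
chart.render_to_file('hn_submissons_repos.svg')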
I have 8 dataframes which contain a date and a stock return. They are of different lengths (some 5 years, some 8, etc.). Some of them are randomly non-defined (it depends on what the user does beforehand, i.e. which countries she chooses). What I want to do is merge by date only those dataframes which have data, and then find the correlation between the columns. Please advise me on this issue. I use
the command below to merge the data frames, but because some of them do not exist (this is random) I cannot merge them.
merged =pd.concat([germany_return, france_return, usa_return, hongkong_return, india_return, japan_return, england_return, china_return], axis=1)
Below I present the last part of the code:
# Define stock price column
stock_germany = data_germany['Settle']
stock_france = data_france['Settle']
stock_usa = data_usa['Value']
stock_hongkong = data_hongkong['Last Traded']
stock_india = data_india['Close']
stock_japan = data_japan['Close']  # was data_india['Close'], presumably a copy-paste slip
stock_england = data_england['Settle']
stock_china = data_china['Value']

# Calculate monthly returns
germany_return = stock_germany.pct_change(21)
france_return = stock_france.pct_change(21)
usa_return = stock_usa.pct_change(21)
hongkong_return = stock_hongkong.pct_change(21)
india_return = stock_india.pct_change(21)
japan_return = stock_japan.pct_change(21)
england_return = stock_england.pct_change(21)
china_return = stock_china.pct_change(21)

merged = pd.concat([germany_return, france_return, usa_return, hongkong_return, india_return, japan_return, england_return, china_return], axis=1)
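One way to express the stated goal (a sketch under the assumption that the *_return variables may simply never have been created, so existence is tested by name at module level) is to concat only the frames that exist and then take the correlations:
import pandas as pd

candidates = ['germany_return', 'france_return', 'usa_return', 'hongkong_return',
              'india_return', 'japan_return', 'england_return', 'china_return']
# keep only the return series that were actually defined
available = {name: globals()[name] for name in candidates if name in globals()}
merged = pd.concat(available, axis=1, join='inner')  # align on dates shared by all
print(merged.corr())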
I am using a data frame to create a marked point process with the as.ppp function. I get the error Error: is.numeric(x) is not TRUE. The data I am using is as follows:
dput(head(pointDataUTM[,1:2]))
structure(list(POINT_X = c(439845.0069, 450018.3603, 451873.2925,
446836.5498, 445040.8974, 442060.0477), POINT_Y = c(4624464.56,
4629024.646, 4624579.758, 4636291.222, 4614853.993, 4651264.579
)), .Names = c("POINT_X", "POINT_Y"), row.names = c(NA, -6L), class = c("tbl_df",
"tbl", "data.frame"))
I can see that the first two columns are numeric, so I do not know why this is a problem.
> str(pointDataUTM)
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 5028 obs. of 31 variables:
$ POINT_X : num 439845 450018 451873 446837 445041 ...
$ POINT_Y : num 4624465 4629025 4624580 4636291 4614854 ...
Then I also checked for NA, which shows no NA
> sum(is.na(pointDataUTM$POINT_X))
[1] 0
> sum(is.na(pointDataUTM$POINT_Y))
[1] 0
Even when I tried only the first two columns of the data.frame, the error I get from as.ppp is this:
Error: is.numeric(x) is not TRUE
5.stop(sprintf(ngettext(length(r), "%s is not TRUE", "%s are not all TRUE"), ch), call. = FALSE, domain = NA)
4.stopifnot(is.numeric(x))
3.ppp(X[, 1], X[, 2], window = win, marks = marx, check = check)
2.as.ppp.data.frame(pointDataUTM[, 1:2], W = studyWindow)
1.as.ppp(pointDataUTM[, 1:2], W = studyWindow)
Could someone tell me what the mistake is here and why I get the not-numeric error?
Thank you.
The critical check is whether PointDataUTM[,1] is numeric, rather than PointDataUTM$POINT_X.
Since PointDataUTM is a tbl object, and tbl is a function from the dplyr package, what is probably happening is that the subset operator for the tbl class returns a data frame rather than a numeric vector when a single column is extracted, whereas the $ operator returns a numeric vector.
I suggest you convert your data to data.frame using as.data.frame() before calling as.ppp.
In the next version of spatstat we will make our code more robust against this kind of problem.
I'm on the phone, so I can't check, but I think it happens because you have a tibble and not a data.frame. Please try converting to a data.frame using as.data.frame first.