I'm trying to get monthly cost data from Azure using the Azure SDK for Python, but the Microsoft documentation seems confusing and outdated, with no examples. I need to build a monthly cost-evolution chart outside the Azure Portal.
What is the right way to retrieve this monthly cost information from Azure?
I have already tried the BillingManagementClient class, the get_for_billing_period_by_billing_account method on ConsumptionManagementClient.balances, and now I'm trying the usage_details.list method of ConsumptionManagementClient, but I'm getting strangely duplicated data:
from azure.mgmt.consumption import ConsumptionManagementClient

consumption_client = ConsumptionManagementClient(self.credential, self.subscription_id)
start_date = "2022-11-19T00:00:00.0000000Z"
end_date = "2022-11-20T00:00:00.0000000Z"
filters = f"properties/usageStart eq '{start_date}' and properties/usageEnd eq '{end_date}'"
consumption_list = consumption_client.usage_details.list(
    f"/subscriptions/{self.subscription_id}", filter=filters
)
for consumption_data in consumption_list:
    print(f"date: {consumption_data.date} \nstart_date: {consumption_data.billing_period_start_date} \nend_date: {consumption_data.billing_period_end_date}\ncost: {consumption_data.cost} \n")
Script output:
date: 2022-11-20 00:00:00+00:00
start_date: 2022-11-11 00:00:00+00:00
end_date: 2022-12-10 00:00:00+00:00
cost: 0.658392
date: 2022-11-19 00:00:00+00:00
start_date: 2022-11-11 00:00:00+00:00
end_date: 2022-12-10 00:00:00+00:00
cost: 0.658392
date: 2022-11-19 00:00:00+00:00
start_date: 2022-11-11 00:00:00+00:00
end_date: 2022-12-10 00:00:00+00:00
cost: 0.67425593616
date: 2022-11-20 00:00:00+00:00
start_date: 2022-11-11 00:00:00+00:00
end_date: 2022-12-10 00:00:00+00:00
cost: 0.67425593616
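The rows above are not necessarily true duplicates: usage_details.list typically returns one line item per meter/resource, so the same date can appear several times, and the per-day total is the sum over those rows. A minimal sketch of that aggregation, using stand-in objects in place of the SDK's UsageDetail results (the values below are copied from the output above):

```python
from collections import defaultdict
from datetime import datetime
from types import SimpleNamespace

# Stand-ins for the UsageDetail objects returned by usage_details.list();
# each row represents one meter/resource line item for a given day.
rows = [
    SimpleNamespace(date=datetime(2022, 11, 19), cost=0.658392),
    SimpleNamespace(date=datetime(2022, 11, 19), cost=0.67425593616),
    SimpleNamespace(date=datetime(2022, 11, 20), cost=0.658392),
    SimpleNamespace(date=datetime(2022, 11, 20), cost=0.67425593616),
]

# Sum the line items into one cost per calendar day.
daily_cost = defaultdict(float)
for row in rows:
    daily_cost[row.date.date()] += row.cost
```

A monthly series for a chart can then be built by keying on `(date.year, date.month)` instead of the full date.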
I've got the following dataset, where:
customer id represents a unique customer
each customer has multiple invoices
each invoice is marked by a unique identifier (Invoice)
each invoice has multiple items (rows)
I want to determine the time difference between invoices for a customer, i.e. the time between one invoice and the next. Is this possible, and how should I do it with DiffDatetime?
Here is how I am setting up the entities:
es = ft.EntitySet(id="data")
es = es.add_dataframe(
    dataframe=df,
    dataframe_name="items",
    index="items",
    make_index=True,
    time_index="InvoiceDate",
)
es.normalize_dataframe(
    base_dataframe_name="items",
    new_dataframe_name="invoices",
    index="Invoice",
    copy_columns=["Customer ID"],
)
es.normalize_dataframe(
    base_dataframe_name="invoices",
    new_dataframe_name="customers",
    index="Customer ID",
)
I tried:
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="invoices",
    agg_primitives=[],
    trans_primitives=["diff_datetime"],
    verbose=True,
)
I also tried changing the target dataframe to items or customers, but none of those worked.
The df that I am trying to work on looks like this:
es["invoices"].head()
And what I want can be done with pandas like this:
es["invoices"].groupby("Customer ID")["first_items_time"].diff()
which returns:
489434 NaT
489435 0 days 00:01:00
489436 NaT
489437 NaT
489438 NaT
...
581582 0 days 00:01:00
581583 8 days 01:05:00
581584 0 days 00:02:00
581585 10 days 20:41:00
581586 14 days 02:27:00
Name: first_items_time, Length: 40505, dtype: timedelta64[ns]
Thank you for your question.
You can use the groupby_trans_primitives argument in the call to dfs.
Here is an example:
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="invoices",
    agg_primitives=[],
    groupby_trans_primitives=["diff_datetime"],
    return_types="all",
    verbose=True,
)
The return_types argument is required since DiffDatetime returns a Feature with Timedelta logical type. Without specifying return_types="all", DeepFeatureSynthesis will only return Features with numeric, categorical, and boolean data types.
I would like to create a [function that returns] a pandas series of datetime values for the Nth calendar day of each month for the current year. An added wrinkle is I would also need it to be the previous business day if it happens to fall on the weekend. Bonus would be to check against known holidays as well.
For example, I'd like the output to look like this for the [business day prior to or equal to the] 14th day of the month
0 2021-01-14
1 2021-02-12
2 2021-03-12
3 2021-04-14
4 2021-05-14
5 2021-06-14
6 2021-07-14
7 2021-08-13
8 2021-09-14
9 2021-10-14
10 2021-11-12
11 2021-12-14
I've tried using pd.date_range() and pd.bdate_range() and did not get the desired results. Example:
pd.date_range("2021-01-14","2021-12-14", periods=12)
>> DatetimeIndex(['2021-01-14 00:00:00',
'2021-02-13 08:43:38.181818182',
'2021-03-15 17:27:16.363636364',
'2021-04-15 02:10:54.545454546',
'2021-05-15 10:54:32.727272728',
'2021-06-14 19:38:10.909090910',
'2021-07-15 04:21:49.090909092',
'2021-08-14 13:05:27.272727272',
'2021-09-13 21:49:05.454545456',
'2021-10-14 06:32:43.636363640',
'2021-11-13 15:16:21.818181820',
'2021-12-14 00:00:00'],
dtype='datetime64[ns]', freq=None)>>
Additionally, this requires knowing the first and last month-days to use as the start and end. Analogous tests with pd.bdate_range() mostly resulted in errors.
A similar approach to Pandas Date Range Monthly on Specific Day of Month, but subtract a BDay to get the previous business day. Also start at 12/31 of the previous year to get all values for the current year:
def get_date_range(day_of_month, year=pd.Timestamp.now().year):
    return (
        pd.date_range(start=pd.Timestamp(year=year - 1, month=12, day=31),
                      periods=12, freq='MS') +
        pd.Timedelta(days=day_of_month) -
        pd.tseries.offsets.BDay()
    )
Usage for year:
get_date_range(14)
DatetimeIndex(['2021-01-14', '2021-02-12', '2021-03-12', '2021-04-14',
'2021-05-14', '2021-06-14', '2021-07-14', '2021-08-13',
'2021-09-14', '2021-10-14', '2021-11-12', '2021-12-14'],
dtype='datetime64[ns]', freq=None)
Or for another year:
get_date_range(14, 2020)
DatetimeIndex(['2020-01-14', '2020-02-14', '2020-03-13', '2020-04-14',
'2020-05-14', '2020-06-12', '2020-07-14', '2020-08-14',
'2020-09-14', '2020-10-14', '2020-11-13', '2020-12-14'],
dtype='datetime64[ns]', freq=None)
With Holidays (this is non-vectorized so it will raise a PerformanceWarning):
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay

bday_us = CustomBusinessDay(calendar=USFederalHolidayCalendar())

def get_date_range(day_of_month, year=pd.Timestamp.now().year):
    return (
        pd.date_range(start=pd.Timestamp(year=year - 1, month=12, day=31),
                      periods=12, freq='MS') +
        pd.Timedelta(days=day_of_month) -
        bday_us
    )
get_date_range(25)
DatetimeIndex(['2021-01-25', '2021-02-25', '2021-03-25', '2021-04-23',
'2021-05-25', '2021-06-25', '2021-07-23', '2021-08-25',
'2021-09-24', '2021-10-25', '2021-11-24', '2021-12-23'],
dtype='datetime64[ns]', freq=None)
You can use the month's start and then add a Timedelta to get to the day you want. For your example it would be:
pd.date_range(start=pd.Timestamp("2020-12-14"), periods=12, freq='MS') + pd.Timedelta(days=13)
Output:
DatetimeIndex(['2021-01-14', '2021-02-14', '2021-03-14', '2021-04-14',
'2021-05-14', '2021-06-14', '2021-07-14', '2021-08-14',
'2021-09-14', '2021-10-14', '2021-11-14', '2021-12-14'],
dtype='datetime64[ns]', freq=None)
To move to the previous business day use (see: Pandas offset DatetimeIndex to next business if date is not a business day and Most recent previous business day in Python):
(pd.date_range(start=pd.Timestamp("2021-06-04"), periods=12, freq='MS') + pd.Timedelta(days=4)).map(lambda x: x - pd.tseries.offsets.BDay())
output:
DatetimeIndex(['2021-07-02', '2021-08-05', '2021-09-03', '2021-10-04',
'2021-11-04', '2021-12-03', '2022-01-06', '2022-02-04',
'2022-03-04', '2022-04-04', '2022-05-05', '2022-06-03'],
dtype='datetime64[ns]', freq=None)
I am struggling to feed data to a tf.estimator.DNNClassifier after reloading it through tf.contrib.predictor.from_saved_model. I would very much appreciate your help.
I found this link and this one, but I am getting an error. Below is my implementation:
Saving Model:
feature_spec = tf.feature_column.make_parse_example_spec(feat_cols)
export_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
tuned_model.export_savedmodel('./model_dir/saved_models/', export_fn)
This successfully saves the model with the following info:
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: ['serving_default', 'classification']
INFO:tensorflow:Signatures INCLUDED in export for Regress: ['regression']
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Restoring parameters from /nimble/kdalal/model_dir/model.ckpt-28917
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: ./model_dir/saved_models/temp-b'1556819228'/saved_model.pb
Reloading For Predictions:
predict_prod = tf.contrib.predictor.from_saved_model('./model_dir/saved_models/1556819228')
predict_prod(dict(X_test))
I get the following error:
ValueError: Got unexpected keys in input_dict: {'DOW', 'JOB_FUNCTION', 'ACC_SIZE', 'answered_20D', 'MatchType', 'CONTACT_STATE', 'SEASONS', 'called_20D', 'st_cb_ans_20D', 'JOB_ROLE', 'st_cb_called_20D', 'CALL_BLOCKS'} expected: {'inputs'}
My X_test is a data frame that I'm trying to get predictions for.
[EDITED]:
My input dict looks like as follows:
{'JOB_ROLE': 714859 Manager-Level
714860 Manager-Level
714861 Manager-Level
714862 Manager-Level
714863 Director-Level
Name: JOB_ROLE, dtype: object,
'JOB_FUNCTION': 714859 Information Technology
714860 Information Technology
714861 Information Technology
714862 Information Technology
714863 Information Technology
Name: JOB_FUNCTION, dtype: object,
'MatchType': 714859 Work Phone
714860 Work Phone
714861 Work Phone
714862 Work Phone
714863 Account Main Phone
Name: MatchType, dtype: object,
'CALL_BLOCKS': 714859 17_18
714860 17_18
714861 17_18
714862 17_18
714863 17_18
Name: CALL_BLOCKS, dtype: object,
'ACC_SIZE': 714859 StartUps
714860 StartUps
714861 Small
714862 StartUps
714863 Small
Name: ACC_SIZE, dtype: object,
'CONTACT_STATE': 714859 WA
714860 CA
714861 CA
714862 CA
714863 CA
Name: CONTACT_STATE, dtype: object,
'SEASONS': 714859 Spring
714860 Spring
714861 Spring
714862 Spring
714863 Spring
Name: SEASONS, dtype: object,
'DOW': 714859 Monday
714860 Monday
714861 Monday
714862 Monday
714863 Monday
Name: DOW, dtype: object,
'called_20D': 714859 0.038760
714860 0.077519
714861 0.217054
714862 0.046512
714863 0.038760
Name: called_20D, dtype: float64,
'answered_20D': 714859 0.000000
714860 0.086957
714861 0.043478
714862 0.000000
714863 0.130435
Name: answered_20D, dtype: float64,
'st_cb_called_20D': 714859 0.050233
714860 0.282496
714861 0.282496
714862 0.282496
714863 0.282496
Name: st_cb_called_20D, dtype: float64,
'st_cb_ans_20D': 714859 0.059761
714860 0.314741
714861 0.314741
714862 0.314741
714863 0.314741
Name: st_cb_ans_20D, dtype: float64}
I am a beginner with TensorFlow and I don't know how to pass data frames to the model so that I can call the predict method and get predictions.
Also, should I be converting my input data to some other dtype?
I found the answer. Please refer to the link to understand how to feed data to the imported estimator model.
ValueError: Cannot feed value of shape (75116, 12) for Tensor 'input_example_tensor:0', which has shape '(?,)'
About this question: your model looks like it predicts one item at a time.
You can only feed one item, like {'inputs': X_test.values[0]}.
You can change the model to predict a batch of items.
Good luck
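The "expected: {'inputs'}" error above suggests the serving signature produced by build_parsing_serving_input_receiver_fn expects serialized tf.train.Example protos under a single 'inputs' key, not a raw feature dict. A minimal sketch of converting rows into that format; the row values and dict structure here are hypothetical stand-ins for X_test.to_dict("records"):

```python
import tensorflow as tf

def row_to_serialized_example(row):
    # Encode one feature dict as a serialized tf.train.Example: strings as
    # bytes features, floats as float features, which is what a parsing
    # serving_input_receiver parses out of the 'inputs' tensor.
    feature = {}
    for name, value in row.items():
        if isinstance(value, float):
            feature[name] = tf.train.Feature(
                float_list=tf.train.FloatList(value=[value]))
        else:
            feature[name] = tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[str(value).encode("utf-8")]))
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    return example.SerializeToString()

# Hypothetical rows standing in for X_test.to_dict("records")
rows = [{"JOB_ROLE": "Manager-Level", "called_20D": 0.03876},
        {"JOB_ROLE": "Director-Level", "called_20D": 0.077519}]
serialized = [row_to_serialized_example(r) for r in rows]
# predictions = predict_prod({"inputs": serialized})
```

The final call is commented out because it needs the reloaded predictor from the question; the key point is passing a list of serialized examples under 'inputs'.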
Do you know if it is possible to make groupers by hour?
I know that you can by day:
context="{'group_by': 'my_datetime:day'}"
I mean Odoo filters like this:
<filter name="booking_group" string="Group by Booking" context="{'group_by': 'booking_id'}"/>
No, it is not possible. The implemented values are 'day', 'week', 'month', 'quarter' or 'year' (see <path_to_v12>/odoo/models.py, lines 1878 to 1899):
1878 @api.model
1879 def read_group(self, domain, fields, groupby, offset=0, limit=None, orderby=False, lazy=True):
1880 """
1881 Get the list of records in list view grouped by the given ``groupby`` fields
1882
1883 :param domain: list specifying search criteria [['field_name', 'operator', 'value'], ...]
1884 :param list fields: list of fields present in the list view specified on the object
1885 :param list groupby: list of groupby descriptions by which the records will be grouped.
1886 A groupby description is either a field (then it will be grouped by that field)
1887 or a string 'field:groupby_function'. Right now, the only functions supported
1888 are 'day', 'week', 'month', 'quarter' or 'year', and they only make sense for
1889 date/datetime fields.
1890 :param int offset: optional number of records to skip
1891 :param int limit: optional max number of records to return
1892 :param list orderby: optional ``order by`` specification, for
1893 overriding the natural sort ordering of the
1894 groups, see also :py:meth:`~osv.osv.osv.search`
1895 (supported only for many2one fields currently)
1896 :param bool lazy: if true, the results are only grouped by the first groupby and the
1897 remaining groupbys are put in the __context key. If false, all the groupbys are
1898 done in one call.
1899 :return: list of dictionaries(one dictionary for each record) containing:
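As a workaround, a common pattern is to add a stored computed field holding the hour (e.g. a hypothetical my_datetime_hour integer computed from my_datetime) and group by that plain field instead. The filter would then look like this (a sketch, assuming such a field exists on the model):

```xml
<filter name="hour_group" string="Group by Hour"
        context="{'group_by': 'my_datetime_hour'}"/>
```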
I am trying to store my users' age in MongoDB, and I want to calculate the age of the user dynamically and update it autonomously when possible.
I'm thinking of two approaches. One is to store the date of birth and, on query, use today's date as a reference to find the difference; the problem with this is that the age will not be updated on the schema in MongoDB. How do I solve this?
The other is to set a hook for an API and trigger that API to update the details.
My schema looks something like this. Though I am computing the age when saving, it won't get updated as and when needed. Also, as mentioned, taking today's date as a reference is affecting our analytics.
dateOfBirth: Date,
age: Number
You can store the DOB and project the age while querying, using the aggregation framework:
db.getCollection('callmodels').aggregate([{
    $project: {
        name: 1,
        email: 1,
        // approximate age in whole years (divides by 365 days, ignoring leap days)
        age: { $trunc: { $divide: [{ $subtract: [new Date(), '$dateOfBirth'] },
                                   1000 * 60 * 60 * 24 * 365] } }
    }
}])
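For the analytics concern, note that dividing by 365 days drifts around leap years and birthdays. If the age is instead computed application-side before storing or exporting, an exact whole-year calculation is straightforward. A minimal sketch in plain Python, independent of the Mongo schema above:

```python
from datetime import date

def age_on(dob, today=None):
    # Whole-year age: subtract birth years, minus 1 if this year's
    # birthday has not happened yet (exact, unlike dividing by 365 days).
    today = today or date.today()
    before_birthday = (today.month, today.day) < (dob.month, dob.day)
    return today.year - dob.year - before_birthday

age_on(date(2000, 3, 1), date(2022, 2, 28))  # 21: birthday not reached yet
```

A scheduled job (or the API hook mentioned in the question) could use this to refresh the stored age field once a day.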