How can one save a new LCIA method in Brightway?

I have a list of characterization factors in the following format:
[(('biosphere3', key), characterization_factor)]
Here is a quick excerpt:
my_cfs = [(('biosphere3', 'e259263c-d1f1-449f-bb9b-73c6d0a32a00'), 1.0),
(('biosphere3', '16eeda8a-1ea2-408e-ab37-2648495058dd'), 1.0),
(('biosphere3', 'aa7cac3a-3625-41d4-bc54-33e2cf11ec46'), 1.0)
]
How do I save my_cfs to my Brightway Methods?

The procedure should be very similar to writing a new Database, as they share a lot of code.
Create and register a new Method:
my_method = Method(("some", "name"))
my_metadata = {"unit": "some unit", "something else": "goes here"}
my_method.register(**my_metadata)
Then write the data (list of CFs):
my_method.write(my_cfs)
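Putting the registration and write steps together, here is a minimal end-to-end sketch. It assumes a Brightway2/bw2data setup with an existing project that already contains the biosphere3 database; the project name, method name, and metadata values are placeholders:

from bw2data import Method, methods, projects

projects.set_current("my_project")  # assumed existing project

my_cfs = [
    (('biosphere3', 'e259263c-d1f1-449f-bb9b-73c6d0a32a00'), 1.0),
    (('biosphere3', '16eeda8a-1ea2-408e-ab37-2648495058dd'), 1.0),
    (('biosphere3', 'aa7cac3a-3625-41d4-bc54-33e2cf11ec46'), 1.0),
]

my_method = Method(("my category", "my method"))          # any tuple of strings works as a name
my_method.register(unit="kg CO2-eq", description="demo")  # metadata is free-form keyword arguments
my_method.write(my_cfs)                                    # persist the characterization factors

print(("my category", "my method") in methods)  # the method now appears in the methods store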

Related

ML Studio language studio failing to detect the source language

I am running a Python program to detect a language and translate it to English using Azure Machine Learning Studio. The code block below throws an error when trying to detect the language.
Error 0002: Failed to parse parameter.
def sample_detect_language():
    print(
        "This sample statement will be translated to english from any other foreign language"
    )
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.textanalytics import TextAnalyticsClient

    endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
    key = os.environ["AZURE_LANGUAGE_KEY"]
    text_analytics_client = TextAnalyticsClient(endpoint=endpoint)
    documents = [
        """
        The feedback was awesome
        """,
        """
        la recensione è stata fantastica
        """
    ]
    result = text_analytics_client.detect_language(documents)
    reviewed_docs = [doc for doc in result if not doc.is_error]
    print("Check the languages we got review")
    for idx, doc in enumerate(reviewed_docs):
        print("Number#{} is in '{}', which has ISO639-1 name '{}'\n".format(
            idx, doc.primary_language.name, doc.primary_language.iso6391_name
        ))
        if doc.is_error:
            print(doc.id, doc.error)
    print(
        "Storing reviews and mapping to their respective ISO639-1 name "
    )
    review_to_language = {}
    for idx, doc in enumerate(reviewed_docs):
        review_to_language[documents[idx]] = doc.primary_language.iso6391_name

if __name__ == '__main__':
    sample_detect_language()
Any help to solve the issue is appreciated.
The issue is caused by a missing parameter when the client is constructed. For language detection you need to supply both the endpoint and the key credential. In the code above, the endpoint is provided, but the AzureKeyCredential is missing.
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(endpoint=endpoint)
Replace the last line above with the following:
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
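For reference, a minimal corrected version of the client setup and the detection call (same environment variables and sample documents as above; only the fields already used in the question are printed):

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

# Both the endpoint and the key credential are required.
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = ["The feedback was awesome", "la recensione è stata fantastica"]
result = text_analytics_client.detect_language(documents)
for doc in result:
    if not doc.is_error:
        print(doc.primary_language.name, doc.primary_language.iso6391_name)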

Saving a complex object to dynamoDb using python

I am fairly new to Python. I am trying to save a complex object to a DynamoDB table, using boto3 as my interface with DynamoDB.
This is what the table I am trying to insert the object into looks like:
email(primaryKey) | id | lastlogin(unix timestamp)
--------------------------------------------------------
myEmail#domain.come | someId | 923123122
--------------------------------------------------------
Now I am trying to add an object which looks like this in json :
{
"uri":"http://www.edamam.com/ontologies/edamam.owl#recipe_f6fa738e5eee260004693a9fae610bb2",
"label":"Sous Vide Chicken Breast Recipe",
"image":"https://www.edamam.com/web-img/e3d/e3d9d9a141a06d0c6adfb4e49b1224a5.jpg",
"source":"Serious Eats","url":"http://www.seriouseats.com/recipes/2015/07/sous-vide-chicken-breast-recipe.html",
"shareAs":"http://www.edamam.com/recipe/sous-vide-chicken-breast-recipe-f6fa738e5eee260004693a9fae610bb2/breast",
"dietLabels":["Low-Carb"],
"healthLabels":["Sugar-Conscious","Peanut-Free","Tree-Nut-Free","Alcohol-Free"],
"cautions":[],
"ingredientLines":["2 bone-in, skin-on chicken breast halves","Kosher salt and freshly ground black pepper","4 sprigs thyme or rosemary (optional)"],"ingredients":[{"text":"2 bone-in, skin-on chicken breast halves","weight":174},{"text":"Kosher salt and freshly ground black pepper","weight":0},{"text":"Kosher salt and freshly ground black pepper","weight":0.522}],
"calories":300.59022,
"totalWeight":174.522,
"totalTime":165,
"totalNutrients":
{"ENERC_KCAL":{"label":"Energy","quantity":300.59022,"unit":"kcal"},
"FAT":{"label":"Fat","quantity":16.1120172,"unit":"g"},
"FASAT":{"label":"Saturated","quantity":4.63566624,"unit":"g"},
"FATRN":{"label":"Trans","quantity":0.1827,"unit":"g"},"FAMS":{"label":"Monounsaturated","quantity":6.65065758,"unit":"g"},"FAPU":{"label":"Polyunsaturated","quantity":3.41560956,"unit":"g"},"CHOCDF":{"label":"Carbs","quantity":0.33381900000000003,"unit":"g"},"FIBTG":{"label":"Fiber","quantity":0.13206600000000002,"unit":"g"},"SUGAR":{"label":"Sugars","quantity":0.0033408000000000005,"unit":"g"},"PROCNT":{"label":"Protein","quantity":36.333235800000004,"unit":"g"},"CHOLE":{"label":"Cholesterol","quantity":111.36,"unit":"mg"},"NA":{"label":"Sodium","quantity":109.7244,"unit":"mg"},"CA":{"label":"Calcium","quantity":21.452460000000002,"unit":"mg"},"MG":{"label":"Magnesium","quantity":44.39262,"unit":"mg"},"K":{"label":"Potassium","quantity":389.73738000000003,"unit":"mg"},"FE":{"label":"Iron","quantity":1.3382862,"unit":"mg"},"ZN":{"label":"Zinc","quantity":1.3982118000000001,"unit":"mg"},"P":{"label":"Phosphorus","quantity":303.58476,"unit":"mg"},"VITA_RAE":{"label":"Vitamin A","quantity":41.90094,"unit":"µg"},"THIA":{"label":"Thiamin (B1)","quantity":0.11018375999999999,"unit":"mg"},"RIBF":{"label":"Riboflavin (B2)","quantity":0.14883960000000002,"unit":"mg"},"NIA":{"label":"Niacin (B3)","quantity":17.245886459999998,"unit":"mg"},"VITB6A":{"label":"Vitamin B6","quantity":0.9237190200000001,"unit":"mg"},"FOLDFE":{"label":"Folate equivalent (total)","quantity":7.04874,"unit":"µg"},"FOLFD":{"label":"Folate (food)","quantity":7.04874,"unit":"µg"},"VITB12":{"label":"Vitamin B12","quantity":0.5916,"unit":"µg"},"VITD":{"label":"Vitamin D","quantity":27.84,"unit":"IU"},"TOCPHA":{"label":"Vitamin E","quantity":0.47522880000000006,"unit":"mg"},"VITK1":{"label":"Vitamin K","quantity":0.8545140000000001,"unit":"µg"},"WATER":{"label":"Water","quantity":120.92544119999998,"unit":"g"}},"totalDaily":{"ENERC_KCAL":{"label":"Energy","quantity":15.029511,"unit":"%"},"FAT":{"label":"Fat","quantity":24.787718769230768,"unit":"%"},"FASAT":{"label":"Saturated","quantity":23.1783312,"unit":"%"},"CHOCDF":{"label":"Carbs","quantity":0.11127300000000001,"unit":"%"},"FIBTG":{"label":"Fiber","quantity":0.5282640000000001,"unit":"%"},"PROCNT":{"label":"Protein","quantity":72.66647160000001,"unit":"%"},"CHOLE":{"label":"Cholesterol","quantity":37.12,"unit":"%"},"NA":{"label":"Sodium","quantity":4.57185,"unit":"%"},"CA":{"label":"Calcium","quantity":2.145246,"unit":"%"},"MG":{"label":"Magnesium","quantity":10.569671428571429,"unit":"%"},"K":{"label":"Potassium","quantity":8.292284680851065,"unit":"%"},"FE":{"label":"Iron","quantity":7.434923333333334,"unit":"%"},"ZN":{"label":"Zinc","quantity":12.711016363636363,"unit":"%"},"P":{"label":"Phosphorus","quantity":43.36925142857143,"unit":"%"},"VITA_RAE":{"label":"Vitamin A","quantity":4.65566,"unit":"%"},"THIA":{"label":"Thiamin (B1)","quantity":9.181980000000001,"unit":"%"},"RIBF":{"label":"Riboflavin (B2)","quantity":11.449200000000001,"unit":"%"},"NIA":{"label":"Niacin (B3)","quantity":107.78679037499998,"unit":"%"},"VITB6A":{"label":"Vitamin B6","quantity":71.05530923076924,"unit":"%"},"FOLDFE":{"label":"Folate equivalent (total)","quantity":1.7621849999999997,"unit":"%"},"VITB12":{"label":"Vitamin B12","quantity":24.650000000000002,"unit":"%"},"VITD":{"label":"Vitamin D","quantity":185.6,"unit":"%"},"TOCPHA":{"label":"Vitamin E","quantity":3.1681920000000003,"unit":"%"},"VITK1":{"label":"Vitamin K","quantity":0.712095,"unit":"%"}}
}
The way I am trying to add it to dynamodb right now is converting it from json to a namedtuple and then adding it to the table using boto3 api:
rawRecipe = '{"uri":"http://www.edamam.com/ontologies/edamam.owl#recipe_f6fa738e5eee260004693a9fae610bb2","label":"Sous Vide Chicken Breast Recipe","image":"https://www.edamam.com/web-img/e3d/e3d9d9a141a06d0c6adfb4e49b1224a5.jpg","source":"Serious Eats","url":"http://www.seriouseats.com/recipes/2015/07/sous-vide-chicken-breast-recipe.html","shareAs":"http://www.edamam.com/recipe/sous-vide-chicken-breast-recipe-f6fa738e5eee260004693a9fae610bb2/breast","dietLabels":["Low-Carb"],"healthLabels":["Sugar-Conscious","Peanut-Free","Tree-Nut-Free","Alcohol-Free"],"cautions":[],"ingredientLines":["2 bone-in, skin-on chicken breast halves","Kosher salt and freshly ground black pepper","4 sprigs thyme or rosemary (optional)"],"ingredients":[{"text":"2 bone-in, skin-on chicken breast halves","weight":174},{"text":"Kosher salt and freshly ground black pepper","weight":0},{"text":"Kosher salt and freshly ground black pepper","weight":0.522}],"calories":300.59022,"totalWeight":174.522,"totalTime":165,"totalNutrients":{"ENERC_KCAL":{"label":"Energy","quantity":300.59022,"unit":"kcal"},"FAT":{"label":"Fat","quantity":16.1120172,"unit":"g"},"FASAT":{"label":"Saturated","quantity":4.63566624,"unit":"g"},"FATRN":{"label":"Trans","quantity":0.1827,"unit":"g"},"FAMS":{"label":"Monounsaturated","quantity":6.65065758,"unit":"g"},"FAPU":{"label":"Polyunsaturated","quantity":3.41560956,"unit":"g"},"CHOCDF":{"label":"Carbs","quantity":0.33381900000000003,"unit":"g"},"FIBTG":{"label":"Fiber","quantity":0.13206600000000002,"unit":"g"},"SUGAR":{"label":"Sugars","quantity":0.0033408000000000005,"unit":"g"},"PROCNT":{"label":"Protein","quantity":36.333235800000004,"unit":"g"},"CHOLE":{"label":"Cholesterol","quantity":111.36,"unit":"mg"},"NA":{"label":"Sodium","quantity":109.7244,"unit":"mg"},"CA":{"label":"Calcium","quantity":21.452460000000002,"unit":"mg"},"MG":{"label":"Magnesium","quantity":44.39262,"unit":"mg"},"K":{"label":"Potassium","quantity":389.73738000000003,"unit":"mg"},"FE":{"label":"Iron","quantity":1.3382862,"unit":"mg"},"ZN":{"label":"Zinc","quantity":1.3982118000000001,"unit":"mg"},"P":{"label":"Phosphorus","quantity":303.58476,"unit":"mg"},"VITA_RAE":{"label":"Vitamin A","quantity":41.90094,"unit":"µg"},"THIA":{"label":"Thiamin (B1)","quantity":0.11018375999999999,"unit":"mg"},"RIBF":{"label":"Riboflavin (B2)","quantity":0.14883960000000002,"unit":"mg"},"NIA":{"label":"Niacin (B3)","quantity":17.245886459999998,"unit":"mg"},"VITB6A":{"label":"Vitamin B6","quantity":0.9237190200000001,"unit":"mg"},"FOLDFE":{"label":"Folate equivalent (total)","quantity":7.04874,"unit":"µg"},"FOLFD":{"label":"Folate (food)","quantity":7.04874,"unit":"µg"},"VITB12":{"label":"Vitamin B12","quantity":0.5916,"unit":"µg"},"VITD":{"label":"Vitamin D","quantity":27.84,"unit":"IU"},"TOCPHA":{"label":"Vitamin E","quantity":0.47522880000000006,"unit":"mg"},"VITK1":{"label":"Vitamin 
K","quantity":0.8545140000000001,"unit":"µg"},"WATER":{"label":"Water","quantity":120.92544119999998,"unit":"g"}},"totalDaily":{"ENERC_KCAL":{"label":"Energy","quantity":15.029511,"unit":"%"},"FAT":{"label":"Fat","quantity":24.787718769230768,"unit":"%"},"FASAT":{"label":"Saturated","quantity":23.1783312,"unit":"%"},"CHOCDF":{"label":"Carbs","quantity":0.11127300000000001,"unit":"%"},"FIBTG":{"label":"Fiber","quantity":0.5282640000000001,"unit":"%"},"PROCNT":{"label":"Protein","quantity":72.66647160000001,"unit":"%"},"CHOLE":{"label":"Cholesterol","quantity":37.12,"unit":"%"},"NA":{"label":"Sodium","quantity":4.57185,"unit":"%"},"CA":{"label":"Calcium","quantity":2.145246,"unit":"%"},"MG":{"label":"Magnesium","quantity":10.569671428571429,"unit":"%"},"K":{"label":"Potassium","quantity":8.292284680851065,"unit":"%"},"FE":{"label":"Iron","quantity":7.434923333333334,"unit":"%"},"ZN":{"label":"Zinc","quantity":12.711016363636363,"unit":"%"},"P":{"label":"Phosphorus","quantity":43.36925142857143,"unit":"%"},"VITA_RAE":{"label":"Vitamin A","quantity":4.65566,"unit":"%"},"THIA":{"label":"Thiamin (B1)","quantity":9.181980000000001,"unit":"%"},"RIBF":{"label":"Riboflavin (B2)","quantity":11.449200000000001,"unit":"%"},"NIA":{"label":"Niacin (B3)","quantity":107.78679037499998,"unit":"%"},"VITB6A":{"label":"Vitamin B6","quantity":71.05530923076924,"unit":"%"},"FOLDFE":{"label":"Folate equivalent (total)","quantity":1.7621849999999997,"unit":"%"},"VITB12":{"label":"Vitamin B12","quantity":24.650000000000002,"unit":"%"},"VITD":{"label":"Vitamin D","quantity":185.6,"unit":"%"},"TOCPHA":{"label":"Vitamin E","quantity":3.1681920000000003,"unit":"%"},"VITK1":{"label":"Vitamin K","quantity":0.712095,"unit":"%"}}}'
def _json_object_hook(d): return namedtuple('Recipe', d.keys())(*d.values())
def json2obj(data): return json.loads(data, object_hook=_json_object_hook)
recipe = json2obj(rawRecipe) #Convert to namedtuple
userTable = dynamodb.Table('foodNutrition_users_table') # Add to table
userTable.update_item(
    Key={
        'email': userEmail
    },
    UpdateExpression='Set liked_recipes = :i',
    ExpressionAttributeValues={
        ':i': [recipe],
    }
)
But when I run this on AWS Lambda I get this error:
{
"errorMessage": "Unsupported type \"<class 'likeRecipe.Recipe'>\" for value \"Recipe(uri='http://www....
I am not sure if I am doing something wrong or if this can't be achieved by boto3, as I wasn't able to find much to help me in their documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
Not sure if I am not understanding your requirements, but it seems like you might be over-complicating it - this is an example of what I use:
import uuid
import boto3
import json

def lambda_handler(event, context):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('<myTableName>')
    body = json.loads(event['body'])  # parse the incoming request body from the Lambda event
    print("--- body ---")
    print(body)
    if "Id" not in body:
        body['Id'] = str(uuid.uuid4())
    response = table.put_item(Item=body)
In my case, the primary key is 'Id', and if the item doesn't come in with one, I assign one. Pretty simple: straight JSON saved to DynamoDB. Perhaps you can build on this to make it work for you.
Edit: and I should add, this works with complex JSON objects no problem - that's how I use it.
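Applied to the question, a sketch of the same idea: skip the namedtuple conversion and store the parsed dicts directly. One assumption worth flagging: the boto3 Table resource does not accept Python float values, so the JSON is parsed with Decimal numbers here. rawRecipe, userEmail, and the table name are taken from the question above.

import json
from decimal import Decimal
import boto3

dynamodb = boto3.resource('dynamodb')
userTable = dynamodb.Table('foodNutrition_users_table')

# Parse straight into dicts/lists; parse_float=Decimal avoids the
# "Float types are not supported" error from the DynamoDB serializer.
recipe = json.loads(rawRecipe, parse_float=Decimal)

userTable.update_item(
    Key={'email': userEmail},
    UpdateExpression='SET liked_recipes = :i',
    ExpressionAttributeValues={':i': [recipe]},
)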

When working with the Stripe API, is it better to sort each request or store locally and perform queries?

This is my first post, I've been lurking for a while.
Some context to my question;
I'm working with the Stripe API to pull transaction data and match these with booking numbers from another API source. (property reservations --> funds received for reconciliation)
I started by just making calls to the API and sorting the data in place using Python 3, but it started to get very complicated, so I thought I should persist the data in a MongoDB running on localhost. I began doing this, but decided that storing the sorted data was still just as complicated and the request times were getting quite long, so I thought: maybe I should pull all the Stripe data, store it locally, and then query whatever I needed.
So here I am, with a bunch of code I've written for both approaches and still not a lot of progress. I'm a bit lost about the next move. I feel like I should probably pick a path and stick with it. I'm a little unsure what the "best practice" is when working with APIs; usually I would turn to YouTube, but I haven't been able to find a video which covers this specific scenario. The amount of data being pulled from the API would be around 100 kB per request.
Here is the original code which would grab each query. Recently I've learnt I can use the expand method (I think this is what it's called) so I don't need to dig down so many levels in my for loop.
The goal was to get just the metadata, which contains the booking reference numbers that can then be matched against a response from my property management system's API. My code is a bit embarrassing; I've kinda just learnt it over the last little while in my downtime from work.
import csv
import datetime
import os
import pymongo
import stripe

"""
We need to find a Valid reservation_ref or reservation_id in the booking.com Metadata.
Then we need to match this to a property ID from our list of properties in the book file.
"""

myclient = pymongo.MongoClient("mongodb://localhost:27017/")
mydb = myclient["mydatabase"]
stripe_payouts = mydb["stripe_payouts"]
stripe.api_key = "sk_live_thisismyprivatekey"

r = stripe.Payout.list(limit=4)
payouts = []
for data in r['data']:
    if data['status'] == 'paid':
        p_id = data['id']
        amount = data['amount']
        meta = []
        txn = stripe.BalanceTransaction.list(payout=p_id)
        amount_str = str(amount)
        amount_dollar = str(amount / 100)
        txn_len = len(txn['data'])
        for x in range(txn_len):
            if x != 0:
                charge = (txn['data'][x]['source'])
                if charge.startswith("ch_"):
                    meta_req = stripe.Charge.retrieve(charge)
                    meta = list(meta_req['metadata'])
                elif charge.startswith("re_"):
                    meta_req = stripe.Refund.retrieve(charge)
                    meta = list(meta_req['metadata'])
        if stripe_payouts.find({"_id": p_id}).count() == 0:
            payouts.append(
                {
                    "_id": str(p_id),
                    "payout": str(p_id),
                    "transactions": txn['data'],
                    "metadata": {
                        charge: [meta]
                    }
                }
            )

# TODO: Add error exception to check for po id already in the database.
if len(payouts) != 0:
    x = stripe_payouts.insert_many(payouts)
    print("Inserted into Database ", len(x.inserted_ids), x.inserted_ids)
else:
    print("No entries made")
"_id": str(p_id),
"payout": str(p_id),
"transactions": txn['data'],
"metadata": {
charge: [meta]
This last section doesn't work properly; this is kinda where I stopped and started calling all the data and storing it in MongoDB locally.
I appreciate if you've read this wall of text this far.
Thanks
EDIT:
I'm unsure what the best practice is for adding additional information, but I've messed with the code below per the answer given. I'm now getting a "Key error" when trying to insert the entries into the database. I feel like it's duplicating keys somehow.
payouts = []

def add_metadata(payout_id, transaction_type):
    transactions = stripe.BalanceTransaction.list(payout=payout_id, type=transaction_type, expand=['data.source'])
    for transaction in transactions.auto_paging_iter():
        meta = [transaction.source.metadata]
        if stripe_payouts.Collection.count_documents({"_id": payout_id}) == 0:
            payouts.append(
                {
                    transaction.id: transaction
                }
            )

for data in r['data']:
    p_id = data['id']
    add_metadata(p_id, 'charge')
    add_metadata(p_id, 'refund')

# TODO: Add error exception to check for po id already in the database.
if len(payouts) != 0:
    x = stripe_payouts.insert_many(payouts)
    # print(payouts)
    print("Inserted into Database ", len(x.inserted_ids), x.inserted_ids)
else:
    print("No entries made")
To answer your high level question. If you're frequently accessing the same data and that data isn't changing much then it can make sense to try to keep your local copy of the data in sync and make your frequent queries against your local data.
No need to be embarrassed by your code :) we've all been new at something at some point.
Looking at your code I noticed a few things:
Rather than fetch all payouts, then use an if statement to skip all except paid, instead you can pass another filter to only query those paid payouts.
r = stripe.Payout.list(limit=4, status='paid')
You mentioned the expand [B] feature of the API, but didn't use it, so I wanted to share how you can do that here with an example. In this case, you're making 1 API call to get the list of payouts, then 1 API call per payout to get the transactions, then 1 API call per charge or refund to get its metadata. This results in roughly 1 + (n payouts) + (n × m charges or refunds) API calls, which is a pretty big number. To cut this down, let's pass expand=['data.source'] when fetching transactions, which will include all of the metadata about the charge or refund along with the transaction.
transactions = stripe.BalanceTransaction.list(payout=p_id, expand=['data.source'])
Fetching the BalanceTransaction list like this will only work as long as your results fit on one "page" of results. The API returns paginated [A] results, so if you have more than 10 transactions per payout, this will miss some. Instead, you can use an auto-pagination feature of the stripe-python library to iterate over all results from the BalanceTransaction list.
for transaction in transactions.auto_paging_iter():
I'm not quite sure why we're skipping over index 0 with if x != 0: so that may need to be addressed elsewhere :D
I didn't see how or where amount_str or amount_dollar was actually used.
Rather than determining the type of the object by checking the ID prefix like ch_ or re_ you'll want to use the type attribute. Again in this case, it's better to filter by type so that you only get exactly the data you need from the API:
transactions = stripe.BalanceTransaction.list(payout=p_id, type='charge', expand=['data.source'])
I'm unable to test because I lack the same database that you have, but wanted to share a refactoring of your code that you may consider.
r = stripe.Payout.list(limit=4, status='paid')
payouts = []
for data in r['data']:
    p_id = data['id']
    amount = data['amount']
    meta = []
    amount_str = str(amount)
    amount_dollar = str(amount / 100)

    transactions = stripe.BalanceTransaction.list(payout=p_id, type='charge', expand=['data.source'])
    for transaction in transactions.auto_paging_iter():
        meta = list(transaction.source.metadata)
        if stripe_payouts.find({"_id": p_id}).count() == 0:
            payouts.append(
                {
                    "_id": str(p_id),
                    "payout": str(p_id),
                    "transactions": transactions,
                    "metadata": {
                        transaction.source.id: [meta]  # key the metadata by the charge id
                    }
                }
            )

    transactions = stripe.BalanceTransaction.list(payout=p_id, type='refund', expand=['data.source'])
    for transaction in transactions.auto_paging_iter():
        meta = list(transaction.source.metadata)
        if stripe_payouts.find({"_id": p_id}).count() == 0:
            payouts.append(
                {
                    "_id": str(p_id),
                    "payout": str(p_id),
                    "transactions": transactions,
                    "metadata": {
                        transaction.source.id: [meta]  # key the metadata by the refund id
                    }
                }
            )

# TODO: Add error exception to check for po id already in the database.
if len(payouts) != 0:
    x = stripe_payouts.insert_many(payouts)
    print("Inserted into Database ", len(x.inserted_ids), x.inserted_ids)
else:
    print("No entries made")
Here's a further refactoring, using a function to encapsulate just the part that adds to the database:
r = stripe.Payout.list(limit=4, status='paid')
payouts = []

def add_metadata(payout_id, transaction_type):
    transactions = stripe.BalanceTransaction.list(payout=payout_id, type=transaction_type, expand=['data.source'])
    for transaction in transactions.auto_paging_iter():
        meta = list(transaction.source.metadata)
        if stripe_payouts.find({"_id": payout_id}).count() == 0:
            payouts.append(
                {
                    "_id": str(payout_id),
                    "payout": str(payout_id),
                    "transactions": transactions,
                    "metadata": {
                        transaction.source.id: [meta]  # key the metadata by the charge/refund id
                    }
                }
            )

for data in r['data']:
    p_id = data['id']
    add_metadata(p_id, 'charge')
    add_metadata(p_id, 'refund')

# TODO: Add error exception to check for po id already in the database.
if len(payouts) != 0:
    x = stripe_payouts.insert_many(payouts)
    print("Inserted into Database ", len(x.inserted_ids), x.inserted_ids)
else:
    print("No entries made")
[A] https://stripe.com/docs/api/pagination
[B] https://stripe.com/docs/api/expanding_objects

Linear model subset selection goodness-of-fit with k-fold cross validation

I am studying 'An Introduction to Statistical Learning' by James et al. (2015). In the experiment section, a script calculates the goodness-of-fit of different subsets using the k-fold cross-validation method.
When I try to plot the error coefficients, I get the error:
Error in UseMethod("predict") : no applicable method for 'predict' applied to an object of class "regsubsets"
The script makes too little sense to me to know what I'm doing wrong. Can anyone help me interpret it?
library(leaps)
library(ISLR)
k=10
set.seed(1)
folds=sample(1:k,nrow(Hitters),replace=TRUE)
cv.errors=matrix(NA,k,19, dimnames=list(NULL, paste(1:19)))
for(j in 1:k){
  best.fit=regsubsets(Salary~.,data=Hitters[folds!=j,],nvmax=19)
  for(i in 1:19){
    pred=predict(best.fit,Hitters[folds==j,],id=i)
    cv.errors[j,i]=mean((Hitters$Salary[folds==j]-pred)^2)
  }
}
mean.cv.errors=apply(cv.errors,2,mean)
mean.cv.errors
par(mfrow=c(1,1))
plot(mean.cv.errors,type='b')
reg.best=regsubsets(Salary~.,data=Hitters, nvmax=19)
coef(reg.best,11)
I ran into the problem too. Hope you found the answer. If not, here is the answer.
First, make sure you have defined the function below; the error means R cannot find a predict() method for regsubsets objects.
predict.regsubsets <- function(object, newdata, id, ...) {
  form <- as.formula(object$call[[2]])
  mat <- model.matrix(form, newdata)
  coefi <- coef(object, id = id)
  xvars <- names(coefi)
  mat[, xvars] %*% coefi
}
Now you have to change pred=predict(best.fit,Hitters[folds==j,],id=i) to pred <- predict.regsubsets(best.fit, Hitters[folds == j, ], id = i)
Hope it helped.

How to parse a list in python3

So I get this list back from interactive brokers. (API 9.73 using this repo)
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=2)
data = ib.positions()
print((data))
print(type(data))
The data comes back as a list, and here is the response:
`[Position(account='DUC00074', contract=Contract(conId=43645865, symbol='IBKR', secType='STK', exchange='NASDAQ', currency='USD', localSymbol='IBKR', tradingClass='NMS'), position=2800.0, avgCost=39.4058383), Position(account='DUC00074', contract=Contract(conId=72063691, symbol='BRK B', secType='STK',exchange='NYSE', currency='USD', localSymbol='BRK B', tradingClass='BRK B'), position=250.0, avgCost=163.4365424)]`
I have got this far:
for d in data:
    for i in d:
        print(i)
But I have no idea how I would parse anything after Position(... and then dump it into a DB. So to be really clear, I don't know how I would parse this the way I would in PHP with JSON.
Okay, I'm new to Python, not new to programming. The response from Interactive Brokers threw me off; I'm so used to JSON responses. Regardless, what it comes down to is that this is a list of objects (the example above). That might be simple, but I missed it. Once I figured it out, it became a little easier.
Here is the final code, hopefully this will help someone else down the line.
for obj in data:
    # the line below just basically dumps the object
    print("obj={} ".format(obj))
    # looking for the account number
    # "contract" is an object inside an object, so I had to extract that.
    if getattr(obj, 'account') == '123456':
        print("x={} ".format(getattr(obj, 'account')))
        account = getattr(obj, 'account')
        contract = getattr(obj, 'contract')
        contractId = getattr(contract, 'conId')
        symbol = getattr(contract, 'symbol')
        secType = getattr(contract, 'secType')
        exdate = getattr(contract, 'lastTradeDateOrContractMonth')
        strike = getattr(contract, 'strike')
        opt_type = getattr(contract, 'right')
        localSymbol = getattr(contract, 'localSymbol')
        position = getattr(obj, 'position')
        avgCost = getattr(obj, 'avgCost')
        sql = "insert into IB_opt (account, contractId, symbol, secType, exdate, opt_type, localSymbol, position, avgCost, strike) values (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)"
        args = (account, contractId, symbol, secType, exdate, opt_type, localSymbol, position, avgCost, strike)
        cursor.execute(sql, args)
Of all the sites I looked at this one was pretty helpful.
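Since each Position and its nested Contract are plain objects, the getattr calls can also be written as direct attribute access. A shorter sketch of the same insert, assuming the attribute names shown in the output above and the same cursor and account filter as the original:

for pos in data:
    if pos.account != '123456':
        continue
    c = pos.contract  # Contract is nested inside Position
    sql = ("insert into IB_opt (account, contractId, symbol, secType, exdate, "
           "opt_type, localSymbol, position, avgCost, strike) "
           "values (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)")
    args = (pos.account, c.conId, c.symbol, c.secType,
            c.lastTradeDateOrContractMonth, c.right, c.localSymbol,
            pos.position, pos.avgCost, c.strike)
    cursor.execute(sql, args)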
