pytrends fails to get US city-level data - python-3.x

According to the API docs at https://github.com/GeneralMills/pytrends, interest_by_region supports a city-level resolution. What I want is to get data for California and Texas for the keyword Corona Virus from 15-Jan-2020 to 15-Feb-2020:
from pytrends.request import TrendReq
pytrend = TrendReq(hl='en-US', tz=360)
searchkey = ['Corona Virus']
city = ['California', 'Texas']
pytrend.build_payload(searchkey, timeframe='2020-01-15 2020-02-15', geo='US')
region = pytrend.interest_by_region(resolution='CITY', inc_low_vol=True, inc_geo_code=True)
Then I receive the error below:
KeyError: "['geoCode'] not in index"
Any help, please?

Please follow the link below and make the changes to the "lib\site-packages\pytrends\request.py" file in your installation.
Link: KeyError: "['geoCode'] not in index" #316
It worked for me.
pytrend = TrendReq(hl='en-US', geo='US', tz=360)
data = pytrend.interest_by_region(resolution='CITY', inc_low_vol=True, inc_geo_code=False)
Note: once you have changed the code, keep inc_geo_code set to False.
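For reference, a minimal end-to-end sketch of the call sequence once the request.py patch from the linked issue is applied; the keyword and date range are taken from the question, and the exact columns returned depend on your pytrends version:

from pytrends.request import TrendReq

pytrend = TrendReq(hl='en-US', tz=360)
pytrend.build_payload(['Corona Virus'], timeframe='2020-01-15 2020-02-15', geo='US')

# City-level breakdown; keep inc_geo_code=False as noted above.
city_df = pytrend.interest_by_region(resolution='CITY', inc_low_vol=True, inc_geo_code=False)
print(city_df.head())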

Related

biomaRt package error: error in function useDataset

I am trying to use the biomaRt package to access data from Ensembl; however, I get an error message when using the useDataset() function. My code is shown below.
library(httr)
library(biomaRt)
listMarts()
ensembl = useMart("ENSEMBL_MART_ENSEMBL")
listDatasets(ensembl)
ensembl = useDataset("hsapiens_gene_ensembl", mart = ensembl)
When I call the useDataset() function I get an error message like this:
> ensembl = useDataset("hsapiens_gene_ensembl",mart = ensembl)
Ensembl site unresponsive, trying asia mirror
Error in textConnection(text, encoding = "UTF-8") :
invalid 'text' argument
and sometimes a different error message shows up:
> ensembl = useDataset("hsapiens_gene_ensembl",mart = ensembl)
Ensembl site unresponsive, trying asia mirror
Error in textConnection(bmResult) : invalid 'text' argument
It seems like the mirror automatically changes to asia, useast or uswest, but the error message still shows up over and over again, and I don't know what to do.
Could anyone help me with this? I would be very grateful for any help or suggestions.
Kind regards, Riley Qiu, Dongguan, China

ML Studio language studio failing to detect the source language

I am running a program in Python to detect a language and translate it to English using Azure Machine Learning Studio. The code block below throws an error when trying to detect the language:
Error 0002: Failed to parse parameter.
import os

def sample_detect_language():
    print(
        "This sample statement will be translated to english from any other foreign language"
    )
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.textanalytics import TextAnalyticsClient

    endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
    key = os.environ["AZURE_LANGUAGE_KEY"]
    text_analytics_client = TextAnalyticsClient(endpoint=endpoint)
    documents = [
        """
        The feedback was awesome
        """,
        """
        la recensione è stata fantastica
        """
    ]
    result = text_analytics_client.detect_language(documents)
    reviewed_docs = [doc for doc in result if not doc.is_error]

    print("Check the languages we got review")
    for idx, doc in enumerate(reviewed_docs):
        print("Number#{} is in '{}', which has ISO639-1 name '{}'\n".format(
            idx, doc.primary_language.name, doc.primary_language.iso6391_name
        ))
        if doc.is_error:
            print(doc.id, doc.error)

    print(
        "Storing reviews and mapping to their respective ISO639-1 name "
    )
    review_to_language = {}
    for idx, doc in enumerate(reviewed_docs):
        review_to_language[documents[idx]] = doc.primary_language.iso6391_name

if __name__ == '__main__':
    sample_detect_language()
Any help to solve the issue is appreciated.
The issue was raised because a required parameter was missing from the client call. When doing language detection in Machine Learning Studio, we need to supply both the endpoint and the key credential. In the code above, the endpoint was supplied, but the AzureKeyCredential was missing:
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(endpoint=endpoint)
Replace the last line above with the line below:
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
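For completeness, a minimal sketch of the corrected client in context, assuming the same environment variables as the question:

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

# Pass both the endpoint and the key credential when constructing the client.
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

result = text_analytics_client.detect_language(["la recensione è stata fantastica"])
for doc in result:
    if not doc.is_error:
        print(doc.primary_language.name, doc.primary_language.iso6391_name)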

Odoo 13: Creating an invoice from a purchase order via the API

I am brand new to Odoo! On Odoo 13 EE I am trying to create and confirm a vendor bill after importing a purchase order and the item receipts. I can create an invoice directly, but I haven't been able to link it to the PO/receipt.
Sadly, on purchase.order the method action_create_invoice seems to be hidden from the API:
order_id = PurchaseOrder.create(po)
purchaseorder = PurchaseOrder.browse([order_id])
print("Before validating:", purchaseorder.name, purchaseorder.state)  # draft

odoo.env.context['check_move_validity'] = True
purchaseorder.button_confirm()

purchaseorder = PurchaseOrder.browse([order_id])
picking_count = purchaseorder.picking_count
print("After Post:", purchaseorder.name, purchaseorder.state, "picking_count =", purchaseorder.picking_count)

if picking_count == 0:
    print("Nothing to receive. Straight to Billing.")  # ok so far
    tryme = purchaseorder.action_view_invoice()
    ## Error => odoorpc.error.RPCError: type object 'purchase.order' has no attribute 'action_create_invoice'
So I tried overriding/extending it this way:
class PurchaseOrder(models.Model):
    _inherit = 'purchase.order'

    @api.model
    def create_invoice(self, context=None):
        # try 1 => odoorpc.error.RPCError: 'super' object has no attribute 'action_create_invoice'
        rtn = super().action_create_invoice(self)
        # try 2 => odoorpc.error.RPCError: name 'action_create_invoice' is not defined
        # rtn = action_create_invoice(self)
        # try 3 => Error %s 'super' object has no attribute 'action_create_invoice'
        # rtn = super(models.Model, self).action_create_invoice(self)
        return rtn
I hope somebody can suggest a solution! Thank you.
Please don't customize it without having functional knowledge of Odoo. In Odoo, if you go to the Purchase settings you will find the billing options under Invoicing, with two choices: ordered quantities and received quantities. If it is set to ordered quantities, you can create the invoice right after confirming the purchase order. If it is set to received quantities, then after confirming the purchase order an incoming shipment is created, and only after that incoming shipment is processed will the Create Invoice button appear on the purchase order.
If you can do it from the browser client, then you should just look at what API calls the browser sends to the Odoo server (in Chrome, enable the developer tools with F12 and look at the Network tab), so that you only need to replay that communication.
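A rough sketch of that replay with odoorpc, assuming hypothetical connection details and PO reference; button_confirm is the confirmation method already used in the question, but the invoice-creation call below is only a placeholder for whatever method the Network tab shows your Odoo 13 build invoking when you press Create Bill:

import odoorpc

odoo = odoorpc.ODOO('localhost', port=8069)   # hypothetical host/port
odoo.login('mydb', 'admin', 'admin')          # hypothetical database and credentials

PurchaseOrder = odoo.env['purchase.order']
order_ids = PurchaseOrder.search([('name', '=', 'P00042')])  # hypothetical PO reference

# Confirm the order, then replay the server-side method the web client calls.
odoo.execute('purchase.order', 'button_confirm', order_ids)
odoo.execute('purchase.order', 'action_view_invoice', order_ids)  # placeholder: use the method seen in the Network tab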

Followed IG scraping Tutorial and stuck on XPath/other issue

I've been working off this tutorial here: https://medium.com/swlh/tutorial-web-scraping-instagrams-most-precious-resource-corgis-235bf0389b0c
When I try to create a simpler version of the function "insta_details", which would get the likes and comments of an Instagram photo post, I can't tell what's gone wrong with the code. I think I'm using the XPaths wrongly (first time using them), but the error that comes back is a "NoSuchElementException".
from selenium.webdriver import Chrome

def insta_details(urls):
    browser = Chrome()
    post_details = []
    for link in urls:
        browser.get(link)
        likes = browser.find_element_by_partial_link_text('likes').text
        age = browser.find_element_by_css_selector('a time').text
        xpath_comment = '//*[@id="react-root"]/section/main/div/div/article/div[2]/div[1]/ul/li[1]/div/div/div'
        comment = browser.find_element_by_xpath(xpath_comment).text
        insta_link = link.replace('https://www.instagram.com/p', '')
        post_details.append({'link': insta_link, 'likes/views': likes, 'age': age, 'comment': comment})
    return post_details

urls = ['https://www.instagram.com/p/CFdNu1lnCmm/', 'https://www.instagram.com/p/CFYR2OtHDbD/']
insta_details(urls)
Error Message:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"partial link text","selector":"likes"}
Copying and pasting the code from the tutorial hasn't worked for me yet. Am I calling the function wrongly or is there something else in the code?
Looking at the tutorial it seems like your code is incomplete.
Here, try this:
import time
import re
from selenium.webdriver.chrome.options import Options
from selenium.webdriver import Chrome

def find_mentions_or_hashtags(comment, pattern):
    mentions = re.findall(pattern, comment)
    if (len(mentions) > 1) & (len(mentions) != 1):
        return mentions
    elif len(mentions) == 1:
        return mentions[0]
    else:
        return ""

def insta_link_details(url):
    chrome_options = Options()
    chrome_options.add_argument("--headless")
    browser = Chrome(options=chrome_options)
    browser.get(url)
    try:
        # This captures the standard like count.
        likes = browser.find_element_by_xpath(
            """/html/body/div[1]/section/main/div/div/article/
            div[3]/section[2]/div/div/button/span""").text.split()[0]
        post_type = 'photo'
    except:
        # This captures the like count for videos which is stored
        likes = browser.find_element_by_xpath(
            """/html/body/div[1]/section/main/div/div/article/
            div[3]/section[2]/div/span/span""").text.split()[0]
        post_type = 'video'
    age = browser.find_element_by_css_selector('a time').text
    comment = browser.find_element_by_xpath(
        """/html/body/div[1]/section/main/div/div[1]/article/
        div[3]/div[1]/ul/div/li/div/div/div[2]/span""").text
    hashtags = find_mentions_or_hashtags(comment, '#[A-Za-z]+')
    mentions = find_mentions_or_hashtags(comment, '@[A-Za-z]+')
    post_details = {'link': url, 'type': post_type, 'likes/views': likes,
                    'age': age, 'comment': comment, 'hashtags': hashtags,
                    'mentions': mentions}
    time.sleep(10)
    return post_details

for url in ['https://www.instagram.com/p/CFdNu1lnCmm/', 'https://www.instagram.com/p/CFYR2OtHDbD/']:
    print(insta_link_details(url))
Output:
{'link': 'https://www.instagram.com/p/CFdNu1lnCmm/', 'type': 'photo', 'likes/views': '4', 'age': '6h', 'comment': 'Natural ingredients for natural skincare is the best way to go, with:\n\n🌿The Body Shop @thebodyshopaust\n☘️The Beauty Chef @thebeautychef\n\nWalk your body to a happier, healthier you with The Body Shop’s fair trade, high quality products. Be a powerhouse of digestive health with The Beauty Chef’s ingenious food supplements. 💪 Even at our busiest, there’s always a way to take care of our health. 💙\n\n5% rebate on all online purchases with #sosure. T&Cs apply. All rates for limited time only.', 'hashtags': '#sosure', 'mentions': ['@thebodyshopaust', '@thebeautychef']}
{'link': 'https://www.instagram.com/p/CFYR2OtHDbD/', 'type': 'photo', 'likes/views': '10', 'age': '2 DAYS AGO', 'comment': 'The weather can dry out your skin and hair this season, and there’s no reason to suffer through more when there’s so much going on! 😘 Look better, feel better and brush better with these great offers for haircare, skin rejuvenation and beauty 💋 Find 5% rewards for purchases at:\n\n💙 Shaver Shop\n💙 Fresh Fragrances\n💙 Happy Hair Brush\n💕 & many more online at our website bio 👆!\n\nSoSure T&Cs apply. All rates for limited time only.\n.\n.\n.\n#sosure #sosureapp #haircare #skincare #perfume #beauty #healthylifestyle #shavershop #freshfragrances #happyhairbrush #onlineshopping #deals #melbournelifestyle #australia #onlinedeals', 'hashtags': ['#sosure', '#sosureapp', '#haircare', '#skincare', '#perfume', '#beauty', '#healthylifestyle', '#shavershop', '#freshfragrances', '#happyhairbrush', '#onlineshopping', '#deals', '#melbournelifestyle', '#australia', '#onlinedeals'], 'mentions': ''}
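A side note, not from the tutorial: Instagram's markup changes often and elements are sometimes still loading when find_element runs, which is one common cause of NoSuchElementException. Wrapping lookups in an explicit wait is usually more robust; a small sketch of the idea, using the 'a time' selector from the code above:

from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = Chrome()
browser.get('https://www.instagram.com/p/CFdNu1lnCmm/')

# Wait up to 10 seconds for the timestamp element before reading it.
age = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, 'a time'))
).text
print(age)
browser.quit()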

How to parse a list in python3

So I get this list back from Interactive Brokers (API 9.73, using this repo):
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=2)
data = ib.positions()
print((data))
print(type(data))
The data comes back as a list, but here is the response:
`[Position(account='DUC00074', contract=Contract(conId=43645865, symbol='IBKR', secType='STK', exchange='NASDAQ', currency='USD', localSymbol='IBKR', tradingClass='NMS'), position=2800.0, avgCost=39.4058383), Position(account='DUC00074', contract=Contract(conId=72063691, symbol='BRK B', secType='STK',exchange='NYSE', currency='USD', localSymbol='BRK B', tradingClass='BRK B'), position=250.0, avgCost=163.4365424)]`
I have got this far:
for d in data:
    for i in d:
        print(i)
But I have no idea how I would parse anything after Position(... and then dump it into a DB. So to be really clear, I don't know how I would parse this the way I would in PHP with JSON.
Okay, I'm new to Python, not new to programming, so the response from Interactive Brokers threw me off; I'm so used to JSON responses. Regardless, what it comes down to is that this is a list of objects (the example above). That might be simple, but I missed it. Once I figured it out, it became a little easier.
Here is the final code, hopefully this will help someone else down the line.
for obj in data:
    # The line below just basically dumps the object.
    print("obj={} ".format(obj))
    # Looking for the account number.
    # "contract" is an object inside an object, so I had to extract that.
    if getattr(obj, 'account') == '123456':
        print("x={} ".format(getattr(obj, 'account')))
        account = getattr(obj, 'account')
        contract = getattr(obj, 'contract')
        contractId = getattr(contract, 'conId')
        symbol = getattr(contract, 'symbol')
        secType = getattr(contract, 'secType')
        exdate = getattr(contract, 'lastTradeDateOrContractMonth')
        strike = getattr(contract, 'strike')
        opt_type = getattr(contract, 'right')
        localSymbol = getattr(contract, 'localSymbol')
        position = getattr(obj, 'position')
        avgCost = getattr(obj, 'avgCost')
        # cursor is assumed to come from an existing DB connection (e.g. pymysql).
        sql = ("insert into IB_opt (account, contractId, symbol, secType, exdate, opt_type, "
               "localSymbol, position, avgCost, strike) values (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)")
        args = (account, contractId, symbol, secType, exdate, opt_type, localSymbol, position, avgCost, strike)
        cursor.execute(sql, args)
Of all the sites I looked at this one was pretty helpful.
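As a shorter alternative sketch (assuming the same Position and Contract attributes shown in the example response, and that pandas is installed), the list of objects can also be flattened into plain dictionaries or a DataFrame without a getattr() call per field:

import pandas as pd

rows = []
for pos in data:
    c = pos.contract
    rows.append({
        'account': pos.account,
        'conId': c.conId,
        'symbol': c.symbol,
        'secType': c.secType,
        'localSymbol': c.localSymbol,
        'position': pos.position,
        'avgCost': pos.avgCost,
    })

# One row per position; from here it can be inspected or written to a DB.
df = pd.DataFrame(rows)
print(df)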
