I wrote this web scraping program that extracts retail trading sentiment data for 28 forex pairs from IG Markets.
The output in the console looks like:
AUD-CAD: 64% of client accounts are short.
AUD-CHF: 54% of client accounts are long.
AUD-JPY: 60% of client accounts are short.
AUD-NZD: 60% of client accounts are long.
AUD-USD: 53% of client accounts are short.
CAD-CHF: 56% of client accounts are long.
CAD-JPY: 56% of client accounts are long.
CHF-JPY: 68% of client accounts are short.
EUR-AUD: 68% of client accounts are long.
EUR-CAD: 65% of client accounts are short.
EUR-CHF: 66% of client accounts are long.
EUR-GBP: 53% of client accounts are short.
EUR-JPY: 57% of client accounts are short.
EUR-NZD: 55% of client accounts are long.
EUR-USD: 54% of client accounts are short.
GBP-AUD: 73% of client accounts are long.
GBP-CAD: 66% of client accounts are long.
GBP-CHF: 63% of client accounts are long.
GBP-JPY: 52% of client accounts are short.
GBP-NZD: 57% of client accounts are long.
GBP-USD: 59% of client accounts are short.
SPOT-FX-NZDCAD: 68% of client accounts are short.
NZD-CHF: 59% of client accounts are short.
NZD-JPY: 57% of client accounts are short.
NZD-USD: 72% of client accounts are short.
USD-CAD: 69% of client accounts are long.
USD-CHF: 79% of client accounts are long.
USD-JPY: 58% of client accounts are long.
I would like to export this data to a Google Sheet named "GsheetTest", but I'm stuck and have no idea how to do it.
The Google API is enabled, I've created the credentials, and I have the service account JSON key.
I'm able to write simple text to the Google Sheet "GsheetTest" using pygsheets and a pandas DataFrame.
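For reference, here's roughly what that test write looks like (a minimal sketch; the key-file path and DataFrame contents are placeholders):
import pygsheets
import pandas as pd

# Authorize with the service account key (path is a placeholder)
gc = pygsheets.authorize(service_file='path-to-your-json.json')
sh = gc.open('GsheetTest')
wks = sh.sheet1

# Write a small test DataFrame starting at cell A1
df = pd.DataFrame({'test': ['hello', 'world']})
wks.set_dataframe(df, (1, 1))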
Here's the scraping code:
import bs4, requests

def getIGsentiment(pairUrl):
    res = requests.get(pairUrl)
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    elems = soup.select('.price-ticket__sentiment')
    return elems[0].get_text(" ", strip=True)

pair_list = ['aud-cad', 'aud-chf', 'aud-jpy', 'aud-nzd', 'aud-usd', 'cad-chf', 'cad-jpy',
             'chf-jpy', 'eur-aud', 'eur-cad', 'eur-chf', 'eur-gbp', 'eur-jpy', 'eur-nzd',
             'eur-usd', 'gbp-aud', 'gbp-cad', 'gbp-chf', 'gbp-jpy', 'gbp-nzd', 'gbp-usd',
             'spot-fx-nzdcad', 'nzd-chf', 'nzd-jpy', 'nzd-usd', 'usd-cad', 'usd-chf',
             'usd-jpy']

for i in range(len(pair_list)):
    retail_positions = getIGsentiment('https://www.ig.com/us/forex/markets-forex/' + pair_list[i])
    pair = pair_list[i]
    print(pair.upper() + ': ' + retail_positions[0:32].rstrip() + '.')
First, open the JSON key file and find the service account's email address, then share the spreadsheet with that email and give it edit rights. After that, use the following code.
# Import these libraries (pip install them if not installed already)
from oauth2client.service_account import ServiceAccountCredentials
import gspread
import bs4, requests

def getIGsentiment(pairUrl):
    res = requests.get(pairUrl)
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    elems = soup.select('.price-ticket__sentiment')
    return elems[0].get_text(" ", strip=True)

pair_list = ['aud-cad', 'aud-chf', 'aud-jpy', 'aud-nzd', 'aud-usd', 'cad-chf', 'cad-jpy',
             'chf-jpy', 'eur-aud', 'eur-cad', 'eur-chf', 'eur-gbp', 'eur-jpy', 'eur-nzd',
             'eur-usd', 'gbp-aud', 'gbp-cad', 'gbp-chf', 'gbp-jpy', 'gbp-nzd', 'gbp-usd',
             'spot-fx-nzdcad', 'nzd-chf', 'nzd-jpy', 'nzd-usd', 'usd-cad', 'usd-chf',
             'usd-jpy']

# Now you need to authenticate and set the scope
scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('path-to-your-json.json', scope)

# Auth
gc = gspread.authorize(creds)
sh = gc.open('Spreadsheet name you want to open')
worksheet = sh.add_worksheet('name of sheet to be added', rows=30, cols=2)  # pick dimensions to fit your data

for i in range(len(pair_list)):
    retail_positions = getIGsentiment('https://www.ig.com/us/forex/markets-forex/' + pair_list[i])
    pair = pair_list[i]
    foo = pair.upper() + ': ' + retail_positions[0:32].rstrip() + '.'
    worksheet.insert_row([foo], 1)  # insert_row expects a list of cell values
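Note that insert_row makes one API call per pair, which can run into Google's per-minute quota. A sketch of a batched alternative (same worksheet object as above): collect the rows first, then write them in a single call with append_rows.
rows = []
for pair in pair_list:
    retail_positions = getIGsentiment('https://www.ig.com/us/forex/markets-forex/' + pair)
    rows.append([pair.upper(), retail_positions[0:32].rstrip()])

worksheet.append_rows(rows)  # one API call for all pairs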
If you have a list of strings and you want to write each string into a single cell, which I think is how you want it:
import pygsheets

pair_data = []  # data from your code, one string per pair

gc = pygsheets.authorize(service_file='path-to-your-json.json')  # path is a placeholder

# Open spreadsheet and then worksheet
sh = gc.open('GsheetTest')
wks = sh.sheet1

# update_values expects a 2D list, so wrap each string to place it in its own cell
wks.update_values('A1', [[s] for s in pair_data])
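For example, pair_data could be built from the scraping loop in your question before the update_values call (a sketch reusing your getIGsentiment function and pair_list):
pair_data = []
for pair in pair_list:
    retail_positions = getIGsentiment('https://www.ig.com/us/forex/markets-forex/' + pair)
    pair_data.append(pair.upper() + ': ' + retail_positions[0:32].rstrip() + '.')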
Related
How can I change the take profit or stop loss of an order already created via ccxt python in Binance futures?
For an already created order, I would like to be able to change the stop loss, as if I did it from the Binance web UI. Is there some way? I create my order like this:
exchange.create_order(symbol=par, type='limit', side=side, price=precio, amount=monto, params={})
When a certain pattern is detected, I would like to update the SL and TP. Is that possible?
I have not found information in the ccxt wiki
There is an edit_order function that you may want to try.
import ccxt

# USD-M futures client (Binance USDT-margined)
exchange = ccxt.binanceusdm()
exchange.apiKey = 'YOUR_API_KEY'
exchange.secret = 'YOUR_API_SECRET'

symbol = 'BTC/USDT'
order_id = 'ORDER_ID'  # id of the existing order to modify
order_type = 'limit'
side = 'buy'
amount = 0.001
price = 16000
stop_loss = 15000
take_profit = 17000

exchange.edit_order(order_id, symbol, order_type, side, amount, price,
                    {'stopLossPrice': stop_loss, 'takeProfitPrice': take_profit})
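If edit_order turns out not to be supported for your order type, a common fallback is to cancel the existing order and place a replacement. A minimal sketch using the same variables as above (not tested against Binance):
# Cancel the existing order, then re-create it with the new SL/TP params
exchange.cancel_order(order_id, symbol)
new_order = exchange.create_order(symbol, order_type, side, amount, price,
                                  {'stopLossPrice': stop_loss, 'takeProfitPrice': take_profit})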
I am trying to find the behavior of Azure Blob Storage with AD authentication when an upload takes more than 90 minutes for a single big file. Unfortunately my internet is quite fast and my disk can't fit a TB-scale file, so I am trying to simulate a slow upload.
I tried the following code:
import os
import time
from io import BufferedReader, FileIO
from azure.identity import ClientSecretCredential
from azure.storage.blob import ContainerClient

class ProgressFile(BufferedReader):
    # For binary opening only
    def __init__(self, filename, read_callback):
        f = FileIO(file=filename, mode='r')
        self._read_callback = read_callback
        super().__init__(raw=f)
        # I prefer pathlib but this should still support 2.x
        self.length = os.stat(filename).st_size

    def read(self, size=None):
        calc_sz = size
        if not calc_sz:
            calc_sz = self.length - self.tell()
        self._read_callback(position=self.tell(), read_size=calc_sz, total=self.length)
        return super(ProgressFile, self).read(size)

def my_callback(position, read_size, total):
    if position > 0 and position <= 4194304:
        time.sleep(5520)  # stall ~92 minutes inside the first 4 MB to outlast the token lifetime
    print("position: {position}, read_size: {read_size}, total: {total}".format(position=position,
                                                                                read_size=read_size,
                                                                                total=total))

myfile = ProgressFile(filename='./testfile', read_callback=my_callback)

token_credential = ClientSecretCredential(
    # tenant id, client id and client secret omitted here
)

oauth_url = "https://<storage-account>.blob.core.windows.net"
container_client = ContainerClient(oauth_url, "containername", token_credential)

def upload(filename):
    # note: the argument is unused; the global myfile object is uploaded
    blob_client = container_client.get_blob_client("myfile")
    blob_client.upload_blob(myfile, blob_type="BlockBlob")
    print("finish uploading")

upload(int(time.time()))
However, I don't see a token expiration error, even after 90 minutes.
Under what circumstances does token expiration appear?
As you are using azure.identity.ClientSecretCredential, it renews the token when it is close to expiration.
(I work on the Microsoft Azure SDK team.)
Here is the code:
import time
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

token_credential = ClientSecretCredential(
    "",  # tenant id
    "",  # active directory application id
    "",  # active directory application secret
)

oauth_url = "https://<storage-account>.blob.core.windows.net"
blob_service_client = BlobServiceClient(account_url=oauth_url, credential=token_credential)

def listcontainer():
    con = blob_service_client.list_containers()
    for x in con:
        print(x)

start = int(time.time())
while True:
    end = int(time.time())
    if end - start > 4800:
        break
    else:
        print("run time in minutes: ", (end - start) / 60)
    try:
        listcontainer()
    except Exception as e:
        print("exception reached")
        print(e)
        break
    time.sleep(60)
I set up the BlobServiceClient once, and I expected an exception to be raised after 90 minutes.
However, I don't see that happening.
In this doc
https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-configurable-token-lifetimes
The default lifetime of an access token is variable. When issued, an access token's default lifetime is assigned a random value ranging between 60-90 minutes (75 minutes on average). The default lifetime also varies depending on the client application requesting the token or if conditional access is enabled in the tenant. For more information, see Access token lifetime.
What does expiration pertain to in this case?
The token does expire; however, the SDK takes care of renewing it automatically when that happens. As a user, generally speaking, you need not worry about it.
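If you want to see the expiration for yourself, you can request a token directly and inspect its expires_on timestamp; a minimal sketch (credential arguments are placeholders):
import time
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")

# get_token returns an AccessToken whose expires_on is a Unix timestamp
token = credential.get_token("https://storage.azure.com/.default")
print("token expires in", (token.expires_on - time.time()) / 60, "minutes")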
I need to check if a user has liked a new post without exceeding Twitter's rate limit.
Basically, I'm just making a fun bot to prank my friend. It will detect when he likes a new post and send that post to him with some snarky comment. It's all love between me and him, and I've made it obvious who made the bot.
I understand what the rate limit is and why it's there, and I have wait_on_rate_limit=True, but that stops the code from working.
Here's what I currently have.
import tweepy, random

comments = open('dumbcomments.txt', 'r')

# Authenticate to Twitter
auth = tweepy.OAuthHandler("authkey", "securityauthkey")
auth.set_access_token("accesstoken", "securityaccesstoken")

# Create API object
api = tweepy.API(auth, wait_on_rate_limit=True)

lines = comments.readlines()  # assumes dumbcomments.txt has at least 24 lines
friendUser = 'friendUsername'  # your friend's screen name
friend = api.get_user(friendUser)
likes = api.favorites(friendUser, count=1)
lastlike = likes

while True:
    likes = api.favorites(friendUser, count=1)
    if likes != lastlike:
        comment = random.randint(0, 23)
        api.send_direct_message(friend.id, lines[comment] + '\n' + str(likes[0].text))
        lastlike = likes

comments.close()
The code works, so long as I haven't exceeded the rate limit, which happens quickly.
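One workaround is to poll at a fixed interval instead of in a tight loop; a sketch assuming the same api object as above (the 75-requests-per-15-minute figure for favorites is an assumption about the v1.1 limits):
import time

# Polling every 15 seconds keeps usage near 60 calls per 15-minute window,
# under the (assumed) 75-per-window limit for favorites/list
while True:
    likes = api.favorites(friendUser, count=1)
    if likes != lastlike:
        comment = random.randint(0, 23)
        api.send_direct_message(friend.id, lines[comment] + '\n' + str(likes[0].text))
        lastlike = likes
    time.sleep(15)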
I have a long program (430 lines) that simulates an energy market following specific guidelines.
Four different processes: Home, Market, Weather, External.
Each process has a specific task listed below:
Home has a production and a consumption float value plus a trade policy as an integer, and calculates energy exchanges between homes (multiple Home processes are created for the simulation).
Market calculates the current energy price based on the production and consumption and external factors.
Weather determines random variables of temperature and season to be used in Market.
External is a child process of Market and provides random external factors I have created.
I have an issue in my code where I create a new thread to display the results of each day of the simulation (days pass every 2 seconds), but I feel my code doesn't launch the thread properly, and I'm quite lost as to where the issue is occurring exactly and why. I have used various print(".") statements to show where the program goes and identify where it doesn't, and I still can't see why the thread doesn't launch properly.
I am on Windows and not Linux; if that could be the issue, please tell me. I will show a code snippet below of where the issue seems to be. The full code, as well as a PDF explaining in more detail how the project should run, is in a GitHub link (430 lines of code doesn't seem reasonable to post here).
def terminal(number_of_homes, market_queue, home_counter, clock_ready, energy_exchange_queue, console_connection, console_connection_2):
    day = 1
    while clock_ready.wait(1.5 * delay):  # delay is defined elsewhere in the full code
        req1, req2, req3, req4 = ([] for i in range(4))
        for i in range(number_of_homes):
            a = market_queue.get()
            req1.append(a)
        req1 = sorted(req1)
        for i in range(number_of_homes):
            a = market_queue.get()
            req2.append(a)
        req2 = sorted(req2)
        for i in range(number_of_homes):
            a = market_queue.get()
            req3.append(a)
        req3 = sorted(req3)
        req1 = req1 + req2 + req3
        for i in range(energy_exchange_queue.qsize()):
            b = energy_exchange_queue.get()
            req4.append(b)
        req4 = sorted(req4)
        thread = threading.Thread(target=console_display,
                                  args=(number_of_homes, req1, day, req4,
                                        console_connection.recv(), console_connection_2.recv()))
        thread.start()
        thread.join()
        day += 1
Github link: https://github.com/MaxMichel2/Energy-Market-Project