How can I change the take profit or stop loss of an already created order via ccxt (Python) on Binance futures?
I would like to be able to change the stop loss of an order that has already been created, the way I can from the Binance web UI. Is there some way to do this? I create my order like this:
exchange.create_order(symbol=par, type='limit', side=side, price=precio, amount=monto, params={})
When a certain pattern is detected, I would like to update the SL and TP. Is this possible?
I have not found any information in the ccxt wiki.
There is an edit_order function that you may want to try.
import ccxt

# Authenticate against Binance USDT-margined futures
exchange = ccxt.binanceusdm()
exchange.apiKey = 'YOUR_API_KEY'
exchange.secret = 'YOUR_API_SECRET'

# The order to modify
symbol = 'BTC/USDT'
order_id = 'ORDER_ID'
order_type = 'limit'
side = 'buy'
amount = 0.001
price = 16000
stop_loss = 15000
take_profit = 17000

# Pass the new SL/TP through the params dictionary
exchange.edit_order(order_id, symbol, order_type, side, amount, price,
                    {'stopLossPrice': stop_loss, 'takeProfitPrice': take_profit})
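If you don't have the order id handy, one way is to look it up first. A minimal sketch, assuming the order is still open on the same symbol (fetch_open_orders is the unified ccxt call):
open_orders = exchange.fetch_open_orders(symbol)
order_id = open_orders[0]['id']  # pick out the order you want to edit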
I wrote this web scraping program that extracts retail trading sentiment data for 28 forex pairs from IG Markets.
The output in the console looks like:
AUD-CAD: 64% of client accounts are short.
AUD-CHF: 54% of client accounts are long.
AUD-JPY: 60% of client accounts are short.
AUD-NZD: 60% of client accounts are long.
AUD-USD: 53% of client accounts are short.
CAD-CHF: 56% of client accounts are long.
CAD-JPY: 56% of client accounts are long.
CHF-JPY: 68% of client accounts are short.
EUR-AUD: 68% of client accounts are long.
EUR-CAD: 65% of client accounts are short.
EUR-CHF: 66% of client accounts are long.
EUR-GBP: 53% of client accounts are short.
EUR-JPY: 57% of client accounts are short.
EUR-NZD: 55% of client accounts are long.
EUR-USD: 54% of client accounts are short.
GBP-AUD: 73% of client accounts are long.
GBP-CAD: 66% of client accounts are long.
GBP-CHF: 63% of client accounts are long.
GBP-JPY: 52% of client accounts are short.
GBP-NZD: 57% of client accounts are long.
GBP-USD: 59% of client accounts are short.
SPOT-FX-NZDCAD: 68% of client accounts are short.
NZD-CHF: 59% of client accounts are short.
NZD-JPY: 57% of client accounts are short.
NZD-USD: 72% of client accounts are short.
USD-CAD: 69% of client accounts are long.
USD-CHF: 79% of client accounts are long.
USD-JPY: 58% of client accounts are long.
I would like to export this data to a Google Sheet named "GsheetTest", but I'm stuck and have no idea how to do it.
The Google API is enabled, I've created the credentials, and I have the service account JSON key.
I'm able to write simple text to the Google Sheet "GsheetTest" using pygsheets and a pandas DataFrame.
Here's the code:
import bs4, requests

def getIGsentiment(pairUrl):
    res = requests.get(pairUrl)
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    elems = soup.select('.price-ticket__sentiment')
    return elems[0].get_text(" ", strip=True)

pair_list = ['aud-cad', 'aud-chf', 'aud-jpy', 'aud-nzd', 'aud-usd', 'cad-chf', 'cad-jpy',
             'chf-jpy', 'eur-aud', 'eur-cad', 'eur-chf', 'eur-gbp', 'eur-jpy', 'eur-nzd',
             'eur-usd', 'gbp-aud', 'gbp-cad', 'gbp-chf', 'gbp-jpy', 'gbp-nzd', 'gbp-usd',
             'spot-fx-nzdcad', 'nzd-chf', 'nzd-jpy', 'nzd-usd', 'usd-cad', 'usd-chf',
             'usd-jpy']

for i in range(len(pair_list)):
    retail_positions = getIGsentiment('https://www.ig.com/us/forex/markets-forex/' + pair_list[i])
    pair = pair_list[i]
    print(pair.upper() + ': ' + retail_positions[0:32].rstrip() + '.')
First, open the JSON key file and find the service account's email address. Share the spreadsheet with that email, granting edit rights, and then use the following code:
# Import these libraries (pip install them if not installed already)
from oauth2client.service_account import ServiceAccountCredentials
import gspread
import bs4, requests

def getIGsentiment(pairUrl):
    res = requests.get(pairUrl)
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    elems = soup.select('.price-ticket__sentiment')
    return elems[0].get_text(" ", strip=True)

pair_list = ['aud-cad', 'aud-chf', 'aud-jpy', 'aud-nzd', 'aud-usd', 'cad-chf', 'cad-jpy',
             'chf-jpy', 'eur-aud', 'eur-cad', 'eur-chf', 'eur-gbp', 'eur-jpy', 'eur-nzd',
             'eur-usd', 'gbp-aud', 'gbp-cad', 'gbp-chf', 'gbp-jpy', 'gbp-nzd', 'gbp-usd',
             'spot-fx-nzdcad', 'nzd-chf', 'nzd-jpy', 'nzd-usd', 'usd-cad', 'usd-chf',
             'usd-jpy']

# Now you need to authorize with the right scopes
scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('path-to-your-json.json', scope)

# Auth
gc = gspread.authorize(creds)
sh = gc.open('Spread Sheet name You want to open')
worksheet = sh.add_worksheet('sheet to be added name', rows=30, cols=1)

for i in range(len(pair_list)):
    retail_positions = getIGsentiment('https://www.ig.com/us/forex/markets-forex/' + pair_list[i])
    pair = pair_list[i]
    foo = pair.upper() + ': ' + retail_positions[0:32].rstrip() + '.'
    worksheet.insert_row([foo], 1)  # insert_row expects a list of cell values
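Note that insert_row at index 1 costs one API call per pair and ends up writing the rows in reverse order. If that matters, you can collect everything and write it in one batch; a minimal sketch, assuming the same worksheet object (the argument order of update may differ between gspread versions):
rows = []
for pair in pair_list:
    retail_positions = getIGsentiment('https://www.ig.com/us/forex/markets-forex/' + pair)
    rows.append([pair.upper() + ': ' + retail_positions[0:32].rstrip() + '.'])
worksheet.update('A1', rows)  # one write for all 28 rows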
If you have a list of strings and you want to write each string into a single cell, which I think is how you want it:
import pygsheets

pair_data = []  # data from your code, one string per pair

# Authorize with your service account key file
gc = pygsheets.authorize(service_file='path-to-your-json.json')

# Open spreadsheet and then worksheet
sh = gc.open('GsheetTest')
wks = sh.sheet1

# update_values expects a 2D matrix, so wrap each string in its own row
wks.update_values('A1', [[s] for s in pair_data])
I have to export incident data from the ServiceNow REST API. The incident state is one of New, In Progress, Pending, Resolved, or Closed. I am able to fetch the records that are in an active state, but I am not able to apply the correct filter, and the output shows an extra character 'b' on every line. How do I remove that extra character?
Input:
import requests

URL = 'https://instance_name.service-now.com/incident.do?CSV&sysparm_query=active=true'
user = 'user_name'
password = 'password'
headers = {"Accept": "application/xml"}
response = requests.get(URL, auth=(user, password), headers=headers)
if response.status_code != 200:
    print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:', response.content)
    exit()
print(response.content.splitlines())
Output:
[b'"number","short_description","state"', b'"INC0010001","Test incident creation through REST","New"', b'"INC0010002","incident creation","Closed"', b'"INC0010004","test","In Progress"']
Those are bytes literals (for more info, please refer to https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals).
To remove the b prefix, we need to decode the bytes into strings.
Just give this a try:
new_list = []
s = [b'"number","short_description","state"',
     b'"INC0010001","Test incident creation through REST","New"',
     b'"INC0010002","incident creation","Closed"',
     b'"INC0010004","test","In Progress"']
for ele in s:
    ele = ele.decode("utf-8")
    new_list.append(ele)
print(new_list)
Output:
['"number","short_description","state"', '"INC0010001","Test incident creation through REST","New"', '"INC0010002","incident creation","Closed"', '"INC0010004","test","In Progress"']
Hope it works!
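Alternatively, requests can do the decoding for you: response.text is already a str, so response.text.splitlines() yields the same rows without the b prefix. As for the state filter, ServiceNow encoded queries accept an IN operator on the numeric state values; a sketch, assuming the default mapping of 1=New, 2=In Progress, 6=Resolved, 7=Closed (check the choice list on your instance):
URL = 'https://instance_name.service-now.com/incident.do?CSV&sysparm_query=stateIN1,2,6,7'
response = requests.get(URL, auth=(user, password))
print(response.text.splitlines())  # plain strings, no b prefix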
I'm new to Google Slides API and am trying to build a slide deck for daily news headlines by replacing image and text placeholders (for your reference, see https://www.youtube.com/watch?v=8LSUbKZq4ZY and http://wescpy.blogspot.com/2016/11/using-google-slides-api-with-python.html).
But when I try to run my modified program, I get an error message saying that no file or directory exists called "client_secret.json" (which is included in the API tutorial's code). The tutorial code is from two years ago, so I'm not sure whether there have been any updates to the Google Slides API, but I'd really appreciate help navigating this issue. Below is my code (note: scrapedlist is a list of dictionaries, each containing a value for the keys "headline" and "imgURL").
from __future__ import print_function
from apiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools
from datetime import date
from scrapef2 import scrape
scrapedlist = scrape()
TMPLFILE = 'CrimsonTemplate' # use your own!
SCOPES = (
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/presentations',
)
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
HTTP = creds.authorize(Http())
DRIVE = discovery.build('drive', 'v3', http=HTTP)
SLIDES = discovery.build('slides', 'v1', http=HTTP)
rsp = DRIVE.files().list(q="name='%s'" % TMPLFILE).execute().get('files')[0]
DATA = {'name': '[DN] '+ str(date.today())}
print('** Copying template %r as %r' % (rsp['name'], DATA['name']))
DECK_ID = DRIVE.files().copy(body=DATA, fileId=rsp['id']).execute().get('id') # TO DO: How to copy into a specific folder
for i in range(3):
    print('** Get slide objects, search for image placeholder')
    slide = SLIDES.presentations().get(presentationId=DECK_ID,
                                       fields='slides').execute().get('slides')[i]
    obj = None
    for obj in slide['pageElements']:
        if obj['shape']['shapeType'] == 'RECTANGLE':
            break
    print('** Replacing placeholder text and icon')
    reqs = [
        {'replaceAllText': {
            'containsText': {'text': '{{Headline}}'},
            'replaceText': scrapedlist[i]["headline"]
        }},
        {'createImage': {
            'url': scrapedlist[i]["imgURL"],
            'elementProperties': {
                'pageObjectId': slide['objectId'],
                'size': obj['size'],
                'transform': obj['transform'],
            }
        }},
        {'deleteObject': {'objectId': obj['objectId']}},
    ]
    SLIDES.presentations().batchUpdate(body={'requests': reqs},
                                       presentationId=DECK_ID).execute()
print('DONE')
I've never used the Python Google API, but the error indicates that you don't have your 'client_secret.json' file, or it is in the wrong place.
Scenario 1 - you don't have the 'client_secret.json' file
This file is used by the API to verify that you are you; with it, all API calls are made on your behalf. To get this file:
go to the Google API console
open your project (or create a new one)
click "Enable APIs and services" to find and enable the Google Slides API
click "Credentials" in the left menu, then "Create credentials" -> "OAuth client ID"
choose "Web application" and accept all the dialogs
now you should see the new credentials in the list; click on them and there will be a button in the top menu named "Download JSON", where you will obtain your credentials (which, as the name suggests, are secret, so keep them somewhere safe)
Scenario 2 - your 'client_secret.json' file is in the wrong place
In this case I can't be very helpful; just try to inspect the library to see where it looks for the file and put it there (the library directory or the project root directory, it's hard to tell). One workaround is sketched below.
Let me know if it worked, as Google APIs and their libraries sometimes act unexpectedly.
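One way to take the working directory out of the equation is to resolve the path relative to the script itself. A minimal sketch, assuming client_secret.json sits next to your .py file:
import os
# Build an absolute path so the lookup works no matter where the script is launched from
CLIENT_SECRET = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'client_secret.json')
flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES)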
I can successfully access info about a user with this command:
curl http://gitlab.$INTERNAL_SERVER.com/api/v3/\
users/$USER_ID\?private_token\=$GITLAB_TOKEN
However, I can not find the API endpoint for getting a list of the commits that the user has pushed to the GitLab server. Does a URL with this info exist?
To the best of my knowledge, such an API endpoint does not exist. Essentially the best I've been able to come up with is this flow:
find all the projects the user is involved with (not 100% simple in itself)
then get commits for that project
THEN filter those commits based on the user's email.
I am using java-gitlab-api to access the GitLab server, so I don't have curl samples handy (sorry!); a rough Python sketch of the same flow is below.
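For anyone who wants to try this flow over plain REST, here is an outline against the v4 API. The endpoints come from the GitLab docs, but treat it as a sketch (pagination is ignored for brevity, and the host, token, and email are placeholders):
import requests

host = "https://gitlab.example.com"
headers = {"PRIVATE-TOKEN": "YOUR_TOKEN"}
user_email = "user@example.com"

# 1. Find candidate projects (here: those the token's user is a member of)
projects = requests.get(host + "/api/v4/projects",
                        params={"membership": True}, headers=headers).json()

# 2 + 3. Pull each project's commits and keep the ones authored by the user
for project in projects:
    commits = requests.get(host + "/api/v4/projects/%d/repository/commits" % project["id"],
                           headers=headers).json()
    for commit in commits:
        if commit.get("author_email") == user_email:
            print(project["name"], commit["id"], commit["title"])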
It looks like you can get a list of commits by using the Events endpoint:
data = requests.get(host + "/api/v4/users/{id}/events".format(id=user_id),
                    params={"action": "pushed"})
And you can chain that by updating the params with
params.update({"before": before_date})
where before_date is taken from the last element in data; you can loop continuously to get all commits by the user back to a specific date.
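A loose sketch of that loop, assuming host, user_id, and an auth headers dict are already set up; the before parameter takes a date, and created_at is an ISO timestamp, per the Events API docs:
import requests

params = {"action": "pushed"}
all_events = []
while True:
    events = requests.get(host + "/api/v4/users/{id}/events".format(id=user_id),
                          params=params, headers=headers).json()
    if not events:
        break
    all_events.extend(events)
    # Page backwards: ask only for events dated before the oldest one seen so far
    params.update({"before": events[-1]["created_at"][:10]})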
I have written a Python script that does what @demaniak suggests. Enjoy:
import requests
import ujson as json

header = {...}  # your auth header, e.g. a private token

def get_all_commits_gitlab(project_id, username):
    json_loads_of_commit = []
    f_date = "2022-01-01T00:00:42.000+01:00"
    params = {"until": f_date}
    url_p = "https://gitlab.xxx.xx/api/v4/projects/%d/repository/commits" % project_id
    r = requests.get(url_p, params=params, headers=header)
    c = 0
    while r.status_code == 200:
        jsLoad = json.loads(r.content)
        newDate = jsLoad[-1]["committed_date"]
        # Stop once paging no longer moves the date cursor
        if params["until"] == newDate:
            break
        user_commits = []
        for cm in jsLoad:
            if cm["author_name"] == username:
                user_commits.append(cm)
                c += 1
        json_loads_of_commit.append(user_commits)
        # Page backwards by asking for commits until the oldest date seen
        params["until"] = newDate
        r = requests.get(url_p, params=params, headers=header)
    print("project %d: %d commits by user %s, the first one %s"
          % (project_id, c, username, newDate))
    return json_loads_of_commit
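Hypothetical usage, with a made-up project id and username:
commits_by_page = get_all_commits_gitlab(42, "jane.doe")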