Using ctrader-fix to download historical data from cTrader - python-3.x

I am using the Python package ctrader-fix (https://pypi.org/project/ctrader-fix/) to download historical price data from cTrader's API (https://help.ctrader.com/fix/).
The code does not make clear to me where exactly I declare the symbol (e.g. 'NatGas') through its SymbolID code number (for 'NatGas' the SymbolID is 10055) for which I request historical data, nor where I specify the timeframe I am interested in (e.g. 'H' for hourly data) or the number of records I want to retrieve.
[Screenshot: section of cTrader where the FIX SymbolID number of 'NatGas' is provided]
The code provided is the following (I have filled in the values except the username).
# Assumed imports from the ctrader-fix package sample (not shown in the original post)
from twisted.internet import reactor
from ctrader_fix import *

config = {
    'Host': '',
    'Port': 5201,
    'SSL': False,
    'Username': '****************',
    'Password': '3672075',
    'BeginString': 'FIX.4.4',
    'SenderCompID': 'demo.pepperstoneuk.3672025',
    'SenderSubID': 'QUOTE',
    'TargetCompID': 'cServer',
    'TargetSubID': 'QUOTE',
    'HeartBeat': '30'
}
client = Client(config["Host"], config["Port"], ssl=config["SSL"])

def send(request):
    deferred = client.send(request)
    # "\x01" is the FIX SOH field delimiter; it is replaced with "|" for readable logging
    deferred.addCallback(lambda _: print("\nSent: ", request.getMessage(client.getMessageSequenceNumber()).replace("\x01", "|")))

def onMessageReceived(client, responseMessage):  # Callback for receiving all messages
    print("\nReceived: ", responseMessage.getMessage().replace("\x01", "|"))
    # We get the message type field value
    messageType = responseMessage.getFieldValue(35)
    # We send a security list request after we receive the logon message response
    if messageType == "A":
        securityListRequest = SecurityListRequest(config)
        securityListRequest.SecurityReqID = "A"
        securityListRequest.SecurityListRequestType = 0
        send(securityListRequest)
    # After receiving the security list we send a market order request using a symbol from the security list
    elif messageType == "y":
        # We use getFieldValue to get all symbol IDs; it returns a list in this case
        # because the symbol ID field is repetitive
        symbolIds = responseMessage.getFieldValue(55)
        if config["TargetSubID"] == "TRADE":
            newOrderSingle = NewOrderSingle(config)
            newOrderSingle.ClOrdID = "B"
            newOrderSingle.Symbol = symbolIds[1]
            newOrderSingle.Side = 1
            newOrderSingle.OrderQty = 1000
            newOrderSingle.OrdType = 1
            newOrderSingle.Designation = "From Jupyter"
            send(newOrderSingle)
        else:
            marketDataRequest = MarketDataRequest(config)
            marketDataRequest.MDReqID = "a"
            marketDataRequest.SubscriptionRequestType = 1
            marketDataRequest.MarketDepth = 1
            marketDataRequest.NoMDEntryTypes = 1
            marketDataRequest.MDEntryType = 0
            marketDataRequest.NoRelatedSym = 1
            marketDataRequest.Symbol = symbolIds[1]
            send(marketDataRequest)
    # After receiving the new order request response we stop the reactor
    # and we will be disconnected from the API
    elif messageType == "8" or messageType == "j":
        print("We are done, stopping the reactor")
        reactor.stop()

def disconnected(client, reason):  # Callback for client disconnection
    print("\nDisconnected, reason: ", reason)

def connected(client):  # Callback for client connection
    print("Connected")
    logonRequest = LogonRequest(config)
    send(logonRequest)

# Setting client callbacks
client.setConnectedCallback(connected)
client.setDisconnectedCallback(disconnected)
client.setMessageReceivedCallback(onMessageReceived)
# Starting the client service
client.startService()
# Run the Twisted reactor, imported earlier
reactor.run()
Can you explain the code to me and provide instructions on how to get, for example, hourly data for NatGas (1,000 observations)?
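For reference, the instrument in the snippet above is selected wherever the Symbol field (FIX tag 55) is set on a request, so a minimal sketch of pointing the market data request at NatGas by its SymbolID would look like the following (assumptions: that the SymbolID 10055 from the question is passed directly as the tag 55 value, and that it is accepted as a string). Note that a FIX 4.4 MarketDataRequest subscribes to live quotes; a timeframe such as 'H' and a record count are not standard fields on this message type, so only the symbol selection is shown:

# Minimal sketch (assumption: the numeric SymbolID is passed directly as tag 55)
marketDataRequest = MarketDataRequest(config)
marketDataRequest.MDReqID = "natgas"            # any unique request ID
marketDataRequest.SubscriptionRequestType = 1   # 1 = snapshot plus updates (subscribe)
marketDataRequest.MarketDepth = 1               # top of book only
marketDataRequest.NoMDEntryTypes = 1
marketDataRequest.MDEntryType = 0               # 0 = bid
marketDataRequest.NoRelatedSym = 1
marketDataRequest.Symbol = "10055"              # NatGas SymbolID from the question
send(marketDataRequest)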

Related

How to ignore videos with like count disabled while using YouTube Data API V3

In a YouTube search result, some videos have their likes hidden. My code fails when it reaches those videos:
KeyError: 'likeCount'.
Is there a way to ignore such videos and continue with my iteration?
request = youtube.videos().list(part = "snippet,statistics", id = LIST)
A = request.execute()
for j in range(len(A['items'])):
    Data.append({
        'Views': A['items'][j]['statistics']['viewCount'],
        'Likes': A['items'][j]['statistics']['likeCount'],
        'Dislikes': A['items'][j]['statistics']['dislikeCount']
    })
I would rewrite your original code as below:
request = youtube.videos().list(
    part = "snippet,statistics",
    id = LIST
)
response = request.execute()

for item in response.get('items', []):
    stat = item['statistics']
    # items without 'like' and 'dislike'
    # counts get those counts as 0
    Data.append({
        'Views': stat.get('viewCount', 0),
        'Likes': stat.get('likeCount', 0),
        'Dislikes': stat.get('dislikeCount', 0)
    })
Notice that all statistics under the statistics property of the response are accessed with the get method, so that if a property does not exist, its associated value is taken to be 0.
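As a quick illustration of the get pattern on a made-up statistics dict:

stat = {'viewCount': '42'}       # hypothetical video with likes hidden
print(stat.get('likeCount', 0))  # prints 0 instead of raising KeyError
print(stat.get('viewCount', 0))  # prints 42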
Another possibility for your for loop above would look as follows:
for item in response.get('items', []):
    stat = item['statistics']
    # items without 'like' and 'dislike'
    # counts are being ignored
    if stat.get('likeCount') is None or \
       stat.get('dislikeCount') is None:
        continue
    Data.append({
        'Views': stat['viewCount'],
        'Likes': stat['likeCount'],
        'Dislikes': stat['dislikeCount']
    })

How to listen for incoming emails in python 3?

With the objective of having an application that runs in Python 3 and reads incoming emails on a specific Gmail account, how would one listen for the reception of these emails?
What it should do is wait until a new mail arrives in the inbox, read the subject and body of the email, and get the text from the body (without formatting).
This is what I got so far:
import imaplib
import email
import datetime
import time

mail = imaplib.IMAP4_SSL('imap.gmail.com', 993)
mail.login(user, password)
mail.list()
mail.select('inbox')
status, data = mail.search(None, 'ALL')
for num in data[0].split():
    status, data = mail.fetch(num, '(RFC822)')
    email_msg = data[0][1]
    email_msg = email.message_from_bytes(email_msg)
    maintype = email_msg.get_content_maintype()
    if maintype == 'multipart':
        for part in email_msg.get_payload():
            if part.get_content_maintype() == 'text':
                print(part.get_payload())
    elif maintype == 'text':
        print(email_msg.get_payload())
But this has a couple of problems: when the message is multipart, each part is printed, and sometimes the last part is basically the whole message again but in HTML format.
Also, this prints all the messages in the inbox. How would one listen for new emails with imaplib, or with another library?
I'm not sure about the synchronous way of doing that, but if you don't mind having an async loop and defining unread emails as your target then it could work.
(I didn't implement the IMAP polling loop, only the email fetching loop)
My changes
Replace the IMAP search filter 'ALL' with '(UNSEEN)' to fetch only unread emails.
Change the parsing policy to policy.SMTP from the default policy.Compat32.
Use the message's walk() method (new API) to iterate over and filter message parts.
Replace the legacy email API calls with the new ones as described in the docs, and demonstrated in these examples.
The resulting code
import imaplib, email, getpass
from email import policy

imap_host = 'imap.gmail.com'
imap_user = 'example@gmail.com'

# init imap connection
mail = imaplib.IMAP4_SSL(imap_host, 993)
rc, resp = mail.login(imap_user, getpass.getpass())

# select only unread messages from inbox
mail.select('Inbox')
status, data = mail.search(None, '(UNSEEN)')

# for each e-mail message, print the text content
for num in data[0].split():
    # get a single message and parse it by policy.SMTP (RFC compliant)
    status, data = mail.fetch(num, '(RFC822)')
    email_msg = data[0][1]
    email_msg = email.message_from_bytes(email_msg, policy=policy.SMTP)

    print("\n----- MESSAGE START -----\n")

    print("From: %s\nTo: %s\nDate: %s\nSubject: %s\n\n" % (
        str(email_msg['From']),
        str(email_msg['To']),
        str(email_msg['Date']),
        str(email_msg['Subject'])))

    # print only message parts that contain text data
    for part in email_msg.walk():
        if part.get_content_type() == "text/plain":
            for line in part.get_content().splitlines():
                print(line)

    print("\n----- MESSAGE END -----\n")
Have you checked the script below (3_emailcheck.py), posted by GitHub user nickoala? It's a Python 2 script; in Python 3 you need to decode the bytes with the email content first.
import time
from itertools import chain
import email
import imaplib

imap_ssl_host = 'imap.gmail.com'  # imap.mail.yahoo.com
imap_ssl_port = 993
username = 'USERNAME or EMAIL ADDRESS'
password = 'PASSWORD'

# Restrict mail search. Be very specific.
# Machine should be very selective to receive messages.
criteria = {
    'FROM':    'PRIVILEGED EMAIL ADDRESS',
    'SUBJECT': 'SPECIAL SUBJECT LINE',
    'BODY':    'SECRET SIGNATURE',
}
uid_max = 0

def search_string(uid_max, criteria):
    # Produce search string in IMAP format:
    #   e.g. (FROM "me@gmail.com" SUBJECT "abcde" BODY "123456789" UID 9999:*)
    c = list(map(lambda t: (t[0], '"'+str(t[1])+'"'), criteria.items())) + [('UID', '%d:*' % (uid_max+1))]
    return '(%s)' % ' '.join(chain(*c))

def get_first_text_block(msg):
    type = msg.get_content_maintype()

    if type == 'multipart':
        for part in msg.get_payload():
            if part.get_content_maintype() == 'text':
                return part.get_payload()
    elif type == 'text':
        return msg.get_payload()

server = imaplib.IMAP4_SSL(imap_ssl_host, imap_ssl_port)
server.login(username, password)
server.select('INBOX')

result, data = server.uid('search', None, search_string(uid_max, criteria))

uids = [int(s) for s in data[0].split()]
if uids:
    # Initialize `uid_max`. Any UID less than or equal to `uid_max` will be ignored subsequently.
    uid_max = max(uids)

server.logout()

# Keep checking messages ...
# I don't like using IDLE because Yahoo does not support it.
while 1:
    # Have to login/logout each time because that's the only way to get fresh results.
    server = imaplib.IMAP4_SSL(imap_ssl_host, imap_ssl_port)
    server.login(username, password)
    server.select('INBOX')

    result, data = server.uid('search', None, search_string(uid_max, criteria))
    uids = [int(s) for s in data[0].split()]

    for uid in uids:
        # Have to check again because Gmail sometimes does not obey UID criterion.
        if uid > uid_max:
            result, data = server.uid('fetch', uid, '(RFC822)')  # fetch entire message
            msg = email.message_from_string(data[0][1])
            uid_max = uid

            text = get_first_text_block(msg)
            print 'New message :::::::::::::::::::::'
            print text

    server.logout()
    time.sleep(1)
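For Python 3, a sketch of the lines that need changing (the rest of the script can stay as-is): the fetched payload is bytes, so it must be parsed with message_from_bytes, and the print statements become function calls:

# Python 3 replacements for the parse and print lines above
msg = email.message_from_bytes(data[0][1])  # data[0][1] is bytes in Python 3
print('New message :::::::::::::::::::::')
print(get_first_text_block(msg))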

In flask i am not able fetch the session variables from the facebook messenger

I am using Flask sessions to achieve multi-user concurrency in my chatbot, which is embedded in FB Messenger with the backend implemented in Python.
Below is the code:
#app.route("/", methods=['GET','POST'])
def webhook():
state = session.get('state')
session.modified = True
print("state:",state)
q_number = session.get('q_number')
age = session.get('age')
print('age:',age)
sex = session.get('sex')
symptoms = session.get('symptoms', [])
prev = session.get('prev')
conditions = session.get('conditions')
data = request.get_json()
##pprint(data["entry"])
sender_id = data["entry"][0]["messaging"][0]["sender"]["id"]
# get user info
r = requests.get('https://graph.facebook.com/v3.1/'+sender_id+
'?fields=first_name,last_name&access_token='+ACCESS_TOKEN)
if data["object"] == "page":
user_input = data['entry'][0]['messaging'][0]['message']['text']
#print(type(user_input))
user_input = user_input.lower()
#print(user_input)
question = get_question(sender_id,user_input,state, q_number, age, sex, symptoms, prev, conditions)
send_message(sender_id, question)
return "ok", 200
session.get is not working in Flask and I am unable to fetch the values, so I can't send the state, age, sex, q_number, symptoms, prev, and conditions to the other function. Please help me to overcome this.
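One thing worth noting here: Flask's default session is stored in a browser cookie, and webhook calls arrive from Facebook's servers rather than from each user's browser, so per-user session data cannot be distinguished that way. A minimal sketch of keying the state server-side by sender_id instead (the in-memory dict is an assumption for illustration; a real bot would persist this in a database):

from flask import Flask, request

app = Flask(__name__)

# Per-user state keyed by Messenger sender_id instead of Flask's cookie-based session
user_state = {}

@app.route("/", methods=['GET', 'POST'])
def webhook():
    data = request.get_json()
    sender_id = data["entry"][0]["messaging"][0]["sender"]["id"]
    # create a default record on first contact, then reuse it on every webhook call
    state = user_state.setdefault(sender_id, {'state': None, 'q_number': 0,
                                              'age': None, 'sex': None,
                                              'symptoms': [], 'prev': None,
                                              'conditions': None})
    # ... pass state['state'], state['age'], etc. to get_question() as before ...
    return "ok", 200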

Lambda/boto3/python loop

This code acts as an early warning system for ADFS failures, and it works fine when run locally. The problem is that when I run it in Lambda, it loops non-stop.
In short:
lambda_handler() runs pagecheck()
pagecheck() produces the info needed then passes 2 lists (msgdet_list, error_list) and an int (error_count) to notification().
notification() collates and prints the output. The output is two key variables (notificationheader and notificationbody).
I've commented out the SNS piece which would usually email the info, and am using print() instead to send the info to the CloudWatch logs until I can get the loop sorted. Logs:
[Screenshot: CloudWatch logs]
If I run this locally, it produces a clean single output. In Lambda, the function will loop until it times out. It's almost like every time the lists are updated, they're passed to the notification() module and it's run. I can limit the function time, but would rather fix the code!
Cheers,
tac
# This python/boto3/lambda script sends a request to an Office 365 landing page, parses the return details to confirm
# a successful redirect to the organisation's ADFS homepage, authenticates that the homepage is correct, raises any
# errors, and sends a consolidated report to an AWS SNS topic.
# Run once to produce pageserver and htmlchar values for the global variables.

# Import required modules
import boto3
import urllib.request
from urllib.request import Request, urlopen
from datetime import datetime
import time
import re
import sys

# Global variables to be set
url = "https://outlook.com/CONTOSSO.com"
adfslink = "https://sts.CONTOSSO.com/adfs/ls/?client-request-id="

# Input after first run
pageserver = "Microsoft-HTTPAPI/2.0 Microsoft-HTTPAPI/2.0"
htmlchar = 18600

# Input AWS SNS ARN
snsarn = 'arn:aws:sns:ap-southeast-2:XXXXXXXXXXXXX:Daily_Check_Notifications_CONTOSSO'
sns = boto3.client('sns')

def pagecheck():
    # Present the request to the webpage as if coming from a user in a browser
    user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)'
    values = {'name': 'user'}
    headers = {'User-Agent': user_agent}
    data = urllib.parse.urlencode(values)
    data = data.encode('ascii')

    # "Null" the Message Detail and Error lists
    msgdet_list = []
    error_list = []

    request = Request(url)
    req = urllib.request.Request(url, data, headers)
    response = urlopen(request)
    with urllib.request.urlopen(request) as response:

        # Get the URL. This gets the real URL.
        acturl = response.geturl()
        msgdet_list.append("\nThe Actual URL is:")
        msgdet_list.append(str(acturl))
        if adfslink not in acturl:
            error_list.append(str("Redirect Fail"))

        # Get the HTTP response code
        httpcode = response.code
        msgdet_list.append("\nThe HTTP code is: ")
        msgdet_list.append(str(httpcode))
        if httpcode//200 != 1:
            error_list.append(str("No HTTP 2XX Code"))

        # Get the Headers as a dictionary-like object
        headers = response.info()
        msgdet_list.append("\nThe Headers are:")
        msgdet_list.append(str(headers))
        if response.info() == "":
            error_list.append(str("Header Error"))

        # Get the date of the request and compare it to UTC (DD MMM YYYY HH MM)
        date = response.info()['date']
        msgdet_list.append("The Date is: ")
        msgdet_list.append(str(date))
        returndate = str(date.split( )[1:5])
        returndate = re.sub(r'[^\w\s]', '', returndate)
        returndate = returndate[:-2]
        currentdate = datetime.utcnow()
        currentdate = currentdate.strftime("%d %b %Y %H%M")
        if returndate != currentdate:
            date_error = ("Date Error. Returned Date: ", returndate, "Expected Date: ", currentdate, "Times in UTC (DD MMM YYYY HH MM)")
            date_error = str(date_error)
            date_error = re.sub(r'[^\w\s]', '', date_error)
            error_list.append(str(date_error))

        # Get the server
        headerserver = response.info()['server']
        msgdet_list.append("\nThe Server is: ")
        msgdet_list.append(str(headerserver))
        if pageserver not in headerserver:
            error_list.append(str("Server Error"))

        # Get all HTML data and confirm no major change to content size by character length (global var: htmlchar).
        html = response.read()
        htmllength = len(html)
        msgdet_list.append("\nHTML Length is: ")
        msgdet_list.append(str(htmllength))
        msgdet_list.append("\nThe Full HTML is: ")
        msgdet_list.append(str(html))
        msgdet_list.append("\n")
        if htmllength // htmlchar != 1:
            error_list.append(str("Page HTML Error - incorrect # of characters"))
        if adfslink not in str(acturl):
            error_list.append(str("ADFS Link Error"))

    error_list.append("\n")
    error_count = len(error_list)
    if error_count == 1:
        error_list.insert(0, 'No Errors Found.')
    elif error_count == 2:
        error_list.insert(0, 'Error Found:')
    else:
        error_list.insert(0, 'Multiple Errors Found:')

    # Pass the completed results and data to the notification() module
    notification(msgdet_list, error_list, error_count)

# Use AWS SNS to create a notification email with the additional data generated
def notification(msgdet_list, error_list, errors):
    datacheck = str("\n".join(msgdet_list))
    errorcheck = str("\n".join(error_list))
    notificationbody = str(errorcheck + datacheck)
    if errors > 1:
        result = 'FAILED!'
    else:
        result = 'passed.'
    notificationheader = ('The daily ADFS check has been marked as ' + result + ' ' + str(errors) + ' ' + str(error_list))
    if result != 'passed.':
        # message = sns.publish(
        #     TopicArn = snsarn,
        #     Subject = notificationheader,
        #     Message = notificationbody
        # )
        # Output result to CloudWatch logstream
        print('Response: ' + notificationheader)
    else:
        print('passed')
    sys.exit()

# Trigger the Lambda handler
def lambda_handler(event, context):
    aws_account_ids = [context.invoked_function_arn.split(":")[4]]
    pagecheck()
    return "Successful"
    sys.exit()
Your CloudWatch logs contain the following error message:
Process exited before completing request
This is caused by invoking sys.exit() in your code. Locally, your Python interpreter will simply terminate when it encounters such a sys.exit().
AWS Lambda, on the other hand, expects a Python function to just return, and treats sys.exit() as an error. As your function was probably invoked asynchronously, AWS Lambda retries executing it twice.
To solve your problem, you can replace the occurrences of sys.exit() with return, or better yet, just remove the sys.exit() calls, as there would already be implicit returns in the places where you use sys.exit().
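Applied to the code above, a sketch of the two affected functions (details elided in the question are left out here too):

def notification(msgdet_list, error_list, errors):
    result = 'FAILED!' if errors > 1 else 'passed.'
    notificationheader = 'The daily ADFS check has been marked as ' + result
    if result != 'passed.':
        print('Response: ' + notificationheader)
    else:
        print('passed')
    # no sys.exit(); the function simply returns

def lambda_handler(event, context):
    pagecheck()
    return "Successful"  # the sys.exit() that followed this return was unreachable anyway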

Boto3/Lambda - Join multiple outputs from a loop and send in one email using AWS SNS

New to Python/Boto3, this should be an easy one but still learning :)
I have a Lambda function which creates a number of snapshots and works fine:
def create_snapshot():
    volumes = ec2_client.describe_volumes(
        Filters=[
            {'N'...
    ...
    for volume in volumes...
        ....
        snap_name = 'Backup of ' + snap_desc
        ....
        snap = ec2_client.create_snapshot(
            VolumeId=vol_id,
            Description=snap_desc
        )
I then want to receive an email from AWS SNS to let me know which snapshots the function created, which I do using:
message = sns.publish(
    TopicArn=SNSARN,
    Subject=("Function executed"),
    Message=("%s created" % snap_name)
)
The issue is that this creates an email for each snapshot, instead of one email listing all the snapshots. Should I create another function that calls all values produced by snap_desc, or can I send all values for snap_desc in the function? And most importantly what's the best way of doing this?
Cheers!
Scott
####################### UPDATE (Thanks @omuthu) #######################
I set up a list outside the loop, appended to it inside the loop, and put the joined string into the message. This produced the following being sent in one message:
The following snapshots have been created:
['vol-0e0b9a5dfb8379fc0 (Instance 1 - /dev/sda1)', 'vol-03aac6b65df64661e (Instance 4 - /dev/sda1)', 'vol-0fdde765dfg452631 (Instance 2 - /dev/sda1)', 'vol-0693a9568b11f625f (Instance 3 - /dev/sda1)', etc.
Okay got it sorted, finally!
def create_snapshot():
    volumes = ec2_client.describe_volumes(
        Filters=[
            {'N'...
    ...
    inst_list = []
    for volume in volumes...
        vol_id = volume['VolumeId']
        ....
        snap_desc = vol_id
        for name in volume['Tags']:
            tag_key = name['Key']
            tag_val = name['Value']
            if tag_key == 'Name':
                snap_desc = vol_id + ' (' + tag_val + ')'
        ....
        ....
        ....
        if backup_mod is False or (current_hour + 10) % backup_mod != 0:
            ...
            continue
        else:
            print("%s is scheduled this hour" % vol_id)
            for name in volume['Tags']:
                inst_tag_key = name['Key']
                inst_tag_val = name['Value']
                if inst_tag_key == 'Name':
                    inst_list.append(inst_tag_val)
            snap = ec2_client.create_snapshot(
                VolumeId=vol_id,
                Description=snap_desc,
            )
            print("%s created" % snap['SnapshotId'])
    msg = str("\n".join(inst_list))
    if len(inst_list) != 0:
        message = sns.publish(
            TopicArn=SNSARN,
            Subject=("Daily Lambda snapshot function complete"),
            Message=("The following snapshots have been created today:\n\n" + msg + "\n")
        )
        print("Response: {}".format(message))
