How can I change the time constantly - python-3.x

Hi guys, I am very new to Python and would very much appreciate some help on this matter. I have been trying to record the current time with each record I send to the database, but because the script runs in a loop, every record seems to get the time from when the script started.
Could anybody help me out with this?
How can I get the real time instead of the same time being repeated on every iteration?
# Author: Aswin Ramamchandran
# Version: 1.1
from time import sleep
import datetime
import pymongo
import time

# This URL provides the connection to the database
uri = blahblah

# Initialising the pymongo client
client = pymongo.MongoClient(uri)

# Database where the records will be saved - reference to the database
db = client.Kostenanalyse

# Accessing the collection "latenz" from the database
coll = db.latenz

# Defining the start time
start_time = datetime.datetime.now()
start_time = start_time.isoformat()
end = time.perf_counter()

def create_latenz_data() -> dict:
    return {
        "Temperature": "",
        "Time when packet was sent": start_time,
        "Sensor A reading": "",
        "Latency": end,
    }

# While loop
while True:
    data = create_latenz_data()
    start = time.perf_counter()
    coll.insert_one(data)
    end = time.perf_counter() - start
    print('{:.6f}s for the calculation'.format(end))
    print(str(start_time) + ' Wrote data sample {} to collection {}'.format(data, 'latenz'))
    sleep(0.5)

Your script sets the start_time variable once at load time and never changes it. Since that same variable is used inside the while loop and inside create_latenz_data(), replace start_time with a direct call to datetime.datetime.now().isoformat(), so a fresh time is picked up every time the function is called.
from time import sleep
import datetime
import pymongo
import time

# This URL provides the connection to the database
uri = blahblah

# Initialising the pymongo client
client = pymongo.MongoClient(uri)

# Database where the records will be saved - reference to the database
db = client.Kostenanalyse

# Accessing the collection "latenz" from the database
coll = db.latenz

# Initialise the latency so the first call to create_latenz_data()
# does not raise a NameError before the loop has measured anything
end = 0.0

def create_latenz_data() -> dict:
    return {
        "Temperature": "",
        "Time when packet was sent": datetime.datetime.now().isoformat(),
        "Sensor A reading": "",
        "Latency": end,
    }

# While loop
while True:
    data = create_latenz_data()
    start = time.perf_counter()
    coll.insert_one(data)
    end = time.perf_counter() - start
    print('{:.6f}s for the calculation'.format(end))
    print(str(datetime.datetime.now().isoformat()) + ' Wrote data sample {} to collection {}'.format(data, 'latenz'))
    sleep(0.5)
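If you prefer to avoid the module-level end variable altogether, a minimal sketch of an alternative (my addition, not part of the original answer) is to pass the measured latency into the function as a parameter; the timestamp is still taken at call time:

import datetime

def create_latenz_data(latency: float) -> dict:
    # The timestamp is evaluated on every call, so each document
    # gets a fresh time rather than the script's start time
    return {
        "Temperature": "",
        "Time when packet was sent": datetime.datetime.now().isoformat(),
        "Sensor A reading": "",
        "Latency": latency,
    }

print(create_latenz_data(0.003512))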

Related

Python Discord Bot - VPS Reboot Behaviors?

A friend and I created a simple Discord bot in Python. First, let me explain how it works:
We created 366 different usernames, one for each day of the year. Each day at 0:01 AM the bot should automatically post a message with:
The current date (day, month, year) and the username we associated with it
The bot should also rename its own username to the username of the day
Here is the code we made:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import discord
from discord.ext import commands
from dotenv import load_dotenv
from datetime import datetime

load_dotenv()
TOKEN = os.getenv('DISCORD_TOKEN')
GUILD = os.getenv('DISCORD_GUILD')

client = discord.Client()
channel = client.get_channel(CHANNELID)

bissextileSpec = datetime.today().strftime('%m-%d') # To handle bissextile years
nickFile = open("nicknames.txt", "r")
nickList = nickFile.readlines()
dayNumber = datetime.now().timetuple().tm_yday

# We also made special dates
if bissextileSpec == '06-01':
    nickOfTheDay = 'SpecialNick1'
elif bissextileSpec == '07-14':
    nickOfTheDay = 'SpecialNick2'
elif bissextileSpec == '30-12':
    nickOfTheDay = 'SpecialNick3'
elif bissextileSpec == '17-06':
    nickOfTheDay = 'SpecialNick4'
elif bissextileSpec == '05-04':
    nickOfTheDay = 'SpecialNick5'
else:
    nickOfTheDay = nickList[dayNumber - 1]

await channel.send('MSG CONTENT', nickOfTheDay, 'MSG CONTENT')
await client.user.edit(username=nickOfTheDay)
We know our way around Python a bit, but we don't really know how Discord bots work:
We are not quite sure how to instruct it to auto-post at midnight each day. We thought of a while loop with a sleep(50) at its end, BUT:
How is it going to handle random VPS reboots? If the VPS reboots mid-sleep, is it going to reset the sleep and shift the next post time past 0:00?
On the other hand, if we don't use a while loop but instead use cron on Linux to start the script every day at midnight, does that mean the bot will be shown offline on Discord 23h59 out of 24 and only come online to post the message? We want to add a few more features later, so we need the bot to run 24/7.
As well, do not hesitate to point out anything we did wrong in the code ( ͡° ͜ʖ ͡°)
You can make a loop that iterates every 24 hours and changes the nickname of the bot. You can get the seconds until midnight with some simple math and sleep for those seconds:
import asyncio
from discord.ext import tasks
from datetime import datetime

@tasks.loop(hours=24)
async def change_nickname(guild):
    """Loops every 24 hours and changes the bots nick"""
    nick = "" # Get the nick of the corresponding day
    await guild.me.edit(nick=nick)

@change_nickname.before_loop
async def before_change_nickname(guild):
    """Delays the `change_nickname` loop to start at 00:00"""
    hour, minute = 0, 0
    now = datetime.now()
    future = datetime(now.year, now.month, now.day + 1, now.month, now.day, hour, minute)
    delta = (future - now).seconds
    await asyncio.sleep(delta)
To start it you need to pass a discord.Guild instance (the main guild where the nickname should be changed)
change_nickname.start(guild) # You can start it in the `on_ready` event or some command or in the global scope, don't forget to pass the guild instance
No matter what hour the bot starts, the loop will change the bot's nick at 00:00 every day.
Reference:
tasks.loop
Loop.before_loop
Loop.start
Łukasz's code has a tiny flaw: the future variable is wrongly initialized, but everything else works as intended! This should do the trick:
import asyncio
from discord.ext import tasks
from datetime import datetime

@tasks.loop(hours=24)
async def change_nickname(guild):
    nick = ""
    await guild.me.edit(nick=nick)

@change_nickname.before_loop
async def before_change_nickname():
    hour, minute = 0, 0
    now = datetime.now()
    future = datetime(now.year, now.month, now.day + 1, hour, minute)
    delta = (future - now).seconds
    await asyncio.sleep(delta)
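One remaining edge case worth noting (my observation, not from either answer): datetime(now.year, now.month, now.day + 1, hour, minute) still raises a ValueError on the last day of a month, because now.day + 1 can exceed the number of days in that month. A minimal month-safe sketch using timedelta:

import asyncio
from datetime import datetime, timedelta

async def sleep_until_midnight():
    # Add one day, then zero out the time of day; unlike day + 1,
    # this is safe across month and year boundaries
    now = datetime.now()
    midnight = (now + timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)
    await asyncio.sleep((midnight - now).total_seconds())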

How to get the list of followers from an Instagram account without getting banned?

I am trying to scrape all the followers of some particular Instagram accounts. I am using Python 3.8.3 and the latest version of the Instaloader library. The code I have written is given below:
# Import the required libraries:
import instaloader
import time
from random import randint

# Start time:
start = time.time()

# Create an instance of instaloader:
loader = instaloader.Instaloader()

# Credentials & target account:
user_id = USERID
password = PASSWORD
target = TARGET # Account whose list of followers needs to be scraped

# Login or load the session:
loader.login(user_id, password)

# Obtain the profile metadata of the target:
profile = instaloader.Profile.from_username(loader.context, target)

# Print the list of followers and save it in a text file:
try:
    # The list to store the collected user handles of the followers:
    followers_list = []

    # Variables used to apply pauses to slow down scraping:
    count = 0
    short_counter = 1
    short_pauser = randint(19, 24)
    long_counter = 1
    long_pauser = randint(4900, 5000)

    # Fetch the followers one by one:
    for follower in profile.get_followers():
        sleeper = randint(840, 1020)

        # Short pause for the process:
        if (short_counter % short_pauser == 0):
            short_counter = 0
            short_pauser = randint(19, 24)
            print('\nShort Pause.\n')
            time.sleep(1)

        # Long pause for the process:
        if (long_counter % long_pauser == 0):
            long_counter = 0
            long_pauser = randint(4900, 5000)
            print('\nLong pause.\n')
            time.sleep(sleeper)

        # Append the list and print the follower's user handle:
        followers_list.append(follower.username)
        print(count, '', followers_list[count])

        # Increment the counters accordingly:
        count = count + 1
        short_counter = short_counter + 1
        long_counter = long_counter + 1

    # Store the followers list in a txt file:
    txt_file = target + '.txt'
    with open(txt_file, 'a+') as f:
        for the_follower in followers_list:
            f.write(the_follower)
            f.write('\n')

except Exception as e:
    print(e)

# End time:
end = time.time()
total_time = end - start

# Print the time taken for execution:
print('Time taken for complete execution:', total_time, 's.')
I am getting the following error after scraping some data:
HTTP Error 400 (Bad Request) on GraphQL Query. Retrying with shorter page length.
HTTP Error 400 (Bad Request) on GraphQL Query. Retrying with shorter page length.
400 Bad Request
In fact, the error occurs when Instagram detects unusual activity, temporarily disables the account, and prompts the user to change the password.
I have tried:
(1) Slowing down the scraping process.
(2) Adding pauses in between to make the program behave more like a human.
Still, no progress.
How can I bypass such restrictions and get the complete list of all the followers?
If getting the entire list is not possible, what is the best way to collect a list of at least 20,000 followers (from multiple accounts) without getting banned, having the account disabled, or facing similar inconveniences?
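As a side note on the code above: the comment says "Login or load the session", but the script performs a fresh login on every run, and repeated logins are one of the things that tends to look suspicious to Instagram. A minimal sketch of persisting and reusing a session with Instaloader's save_session_to_file / load_session_from_file (the credentials below are hypothetical placeholders):

import instaloader

USER_ID = "my_username"    # hypothetical placeholder
PASSWORD = "my_password"   # hypothetical placeholder

loader = instaloader.Instaloader()
try:
    # Reuse a previously saved session instead of logging in again
    loader.load_session_from_file(USER_ID)
except FileNotFoundError:
    # First run: log in once, then persist the session for later runs
    loader.login(USER_ID, PASSWORD)
    loader.save_session_to_file()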

How can I return a string from a Google BigQuery row iterator object?

My task is to write a Python script that takes results from BigQuery and emails them out. I've written code that successfully sends an email, but I am having trouble including the results of the BigQuery query in the actual email. The query results are correct, but the object I am returning from the query (results) always comes back as NoneType.
For example, the email should look like this:
Hello,
You have the following issues that have been "open" for more than 7 days:
-List issues here from bigquery code
Thanks.
The code reads in contacts from a contacts.txt file, and it reads the email message template from a message.txt file. I tried to convert the BigQuery object into a string, but it still results in an error.
from google.cloud import bigquery
import warnings
warnings.filterwarnings("ignore", "Your application has authenticated using end user credentials")
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from string import Template

def query_emailtest():
    client = bigquery.Client(project=("analytics-merch-svcs-thd"))
    query_job = client.query("""
        select dept, project_name, reset, tier, project_status, IssueStatus, division, store_number, top_category,
        DATE_DIFF(CURRENT_DATE(), in_review, DAY) as days_in_review
        from `analytics-merch-svcs-thd.MPC.RESET_DETAILS`
        where in_review IS NOT NULL
        AND IssueStatus = "In Review"
        AND DATE_DIFF(CURRENT_DATE(), in_review, DAY) > 7
        AND ready_for_execution IS NULL
        AND project_status = "Active"
        AND program_name <> "Capital"
        AND program_name <> "SSI - Capital"
        LIMIT 50
    """)
    results = query_job.result() # Waits for job to complete.
    return results # THIS IS A NONETYPE

def get_queryresults(results): # created new method to put query results into a for loop and store it in a variable
    for i, row in enumerate(results, 1):
        bq_data = (i, '. ' + str(row.dept) + " " + row.project_name + ", Reset #: " + str(row.reset) + ", Store #: " + str(row.store_number) + ", " + row.IssueStatus + " for " + str(row.days_in_review) + " days")
        print(bq_data)

def get_contacts(filename):
    names = []
    emails = []
    with open(filename, mode='r', encoding='utf-8') as contacts_file:
        for a_contact in contacts_file:
            names.append(a_contact.split()[0])
            emails.append(a_contact.split()[1])
    return names, emails

def read_template(filename):
    with open(filename, 'r', encoding='utf-8') as template_file:
        template_file_content = template_file.read()
    return Template(template_file_content)

names, emails = get_contacts('mycontacts.txt') # read contacts
message_template = read_template('message.txt')
results = query_emailtest()
bq_results = get_queryresults(query_emailtest())

import smtplib

# set up the SMTP server
s = smtplib.SMTP(host='smtp-mail.outlook.com', port=587)
s.starttls()
s.login('email', 'password')

# For each contact, send the email:
for name, email in zip(names, emails):
    msg = MIMEMultipart() # create a message
    # bq_data = get_queryresults(query_emailtest())

    # add in the actual person name to the message template
    message = message_template.substitute(PERSON_NAME=name.title())
    message = message_template.substitute(QUERY_RESULTS=bq_results) # SUBSTITUTE QUERY RESULTS IN MESSAGE TEMPLATE. This is where I am having trouble because the Row Iterator object results in Nonetype.

    # setup the parameters of the message
    msg['From'] = 'email'
    msg['To'] = 'email'
    msg['Subject'] = "This is TEST"

    # body = str(get_queryresults(query_emailtest())) # get query results from method to put into message body
    # add in the message body
    # body = MIMEText(body)
    # msg.attach(body)
    msg.attach(MIMEText(message, 'plain'))
    # query_emailtest()
    # get_queryresults(query_emailtest())

    # send the message via the server set up earlier.
    s.send_message(msg)
    del msg
Message template:
Dear ${PERSON_NAME},
Hope you are doing well. Please find the following alert for Issues that have been "In Review" for greater than 7 days.
${QUERY_RESULTS}
If you would like more information, please visit this link that contains a complete dashboard view of the alert.
ISE Services
The BQ result() function returns a generator, so I think you need to change your return to yield from.
I'm far from a Python expert, but the following pared-down code worked for me.
from google.cloud import bigquery
import warnings
warnings.filterwarnings("ignore", "Your application has authenticated using end user credentials")

def query_emailtest():
    client = bigquery.Client(project=("my_project"))
    query_job = client.query("""
        select field1, field2 from `my_dataset.my_table` limit 5
    """)
    results = query_job.result()
    yield from results # NOTE THE CHANGE HERE

results = query_emailtest()
for row in results:
    print(row.field1, row.field2)
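Since the original goal was to substitute the rows into the email template as a string, the generator above can be collapsed into one newline-separated string before calling substitute(). A minimal sketch building on the pared-down query_emailtest() above (the field names are the placeholder ones, not the asker's real schema); note that both placeholders are filled in a single substitute() call, since each call re-renders the template from scratch:

# Join every row into one string for the ${QUERY_RESULTS} placeholder
bq_results = "\n".join(
    "{}. {} {}".format(i, row.field1, row.field2)
    for i, row in enumerate(query_emailtest(), 1)
)
message = message_template.substitute(PERSON_NAME=name.title(), QUERY_RESULTS=bq_results)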

Python 3 MQTT client storing received payload in Sqlite - Open DB once, store many times, finally close db?

I have a Python 3.6 script that connects to MQTT and subscribes to a topic. Every time the callback function "on_message" gets triggered, it instantiates a class whose single method does the following: opens the db file, saves the received data, and closes the db file.
The Python script described above works almost fine. It receives about 7 MQTT messages per second, so for each message it needs to [Open_DB - Save_Data - Close_DB]. Some messages get a PUBACK but are not saved, perhaps due to so many unnecessary operations, so I want to improve it:
I spent a lot of time (I am not an expert) trying to create a class that would open the db once, write to it many thousands of times, and only close the db file when done - a class with three methods:
1. MyDbClass.open_db_file()
2. MyDbClass.save_data()
3. MyDbClass.close_db_file()
The problem, as you may guess, is that I could not manage to call MyDbClass.save_data() from within the "on_message" callback, even with the object placed in a global variable. Here is the non-working code with the proposed idea, cleaned up for easier reading:
# -----------------------------
# This code has been cleaned up for faster reading

import paho.mqtt.client as mqtt
import time
import json
import sqlite3

# Global variables
db_object = ""

class MyDbClass():
    def __init__(self):
        pass

    def open_db_file(self, db_file):
        self.db_conn = sqlite3.connect(db_file)
        return self.db_conn

    def save_data(self, json_data):
        self.time_stamp = time.strftime('%Y%m%d%H%M%S')
        self.data = json.loads(json_data)
        self.sql = '''INSERT INTO trans_reqs (received, field_a, field_b, field_c) \
                      VALUES (?, ?, ?, ?)'''
        self.fields_values = (self.time_stamp, self.data['one'], self.data['two'], self.data['three'])
        self.cur = self.db_conn.cursor()
        self.cur.execute(self.sql, self.fields_values)
        self.db_conn.commit()

    def close_db_file(self):
        self.cur.close()
        self.db_conn.close()

def on_mqtt_message(client, userdata, msg):
    global db_object
    m_decode = msg.payload.decode("utf-8", "ignore")
    db_object.save_data(m_decode)

def main():
    global db_object

    # Database to use - trying to create an object to manage DB tasks (from MyDbClass)
    db_file = "my_filename.sqlite"
    db_object = MyDbClass.open_db_file(db_file)

    # MQTT -- Set variables
    broker_address = "..."
    port = 1883
    client_id = "..."
    sub_topic = "..."
    sub_qos = 1

    # MQTT -- Instantiate the MQTT Client class and set callbacks
    # (on_mqtt_connect, on_mqtt_disconnect and on_mqtt_log were removed in this clean-up)
    client = mqtt.Client(client_id)
    client.on_connect = on_mqtt_connect
    client.on_disconnect = on_mqtt_disconnect
    client.on_message = on_mqtt_message
    client.on_log = on_mqtt_log
    client.clean_session = True
    #client.username_pw_set(usr, password=pwd) # set username and password

    print('Will connect to broker ', broker_address)
    client.connect(broker_address, port=port, keepalive=45)
    client.loop_start()
    client.subscribe(sub_topic, sub_qos)

    try:
        while True:
            time.sleep(.1)
    except KeyboardInterrupt:
        # Disconnects MQTT
        client.disconnect()
        client.loop_stop()
        print("....................................")
        print("........ User Interrupted ..........")
        print("....................................")

    db_object.close_db_file()
    client.loop_stop()
    client.disconnect()

if __name__ == "__main__":
    main()
Any help on how to do this will be greatly appreciated!
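For what it's worth, one thing that stands out in main() is that open_db_file() is called on the class itself rather than on an instance. A minimal sketch of the instance-based pattern, building on MyDbClass as defined above (with one assumption flagged in the comments):

# Instantiate the class once and call the methods on the instance
db_object = MyDbClass()

# Assumption: because paho-mqtt fires on_message from its own network
# thread while the connection is opened in the main thread, sqlite3.connect()
# inside open_db_file() would also need check_same_thread=False, e.g.:
#     self.db_conn = sqlite3.connect(db_file, check_same_thread=False)
db_object.open_db_file("my_filename.sqlite")

# ... run the MQTT loop; on_mqtt_message calls db_object.save_data(...) ...

db_object.close_db_file()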

Lambda/boto3/python loop

This code acts as an early warning system for ADFS failures and works fine when run locally. The problem is that when I run it in Lambda, it loops non-stop.
In short:
lambda_handler() runs pagecheck()
pagecheck() produces the info needed, then passes two lists (msgdet_list, error_list) and an int (error_count) to notification().
notification() collates and prints the output. The output is two key variables (notificationheader and notificationbody).
I've #commentedOut the SNS piece, which would usually email the info, and am using print() instead to send the info to the CloudWatch logs until I can get the loop sorted. Logs:
[screenshot: CloudWatch logs]
If I run this locally, it produces a clean single output. In Lambda, the function will loop until it times out. It's almost like every time the lists are updated, they're passed to the notification() module and it's run. I can limit the function time, but would rather fix the code!
Cheers,
tac
# This Python/boto3/Lambda script sends a request to an Office 365 landing page, parses the return details to
# confirm a successful redirect to the organisation's ADFS homepage, authenticates that the homepage is correct,
# raises any errors, and sends a consolidated report to an AWS SNS topic.
# Run once to produce the pageserver and htmlchar values for the global variables.
# Import required modules
import boto3
import urllib.request
from urllib.request import Request, urlopen
from datetime import datetime
import time
import re
import sys

# Global variables to be set
url = "https://outlook.com/CONTOSSO.com"
adfslink = "https://sts.CONTOSSO.com/adfs/ls/?client-request-id="

# Input after first run
pageserver = "Microsoft-HTTPAPI/2.0 Microsoft-HTTPAPI/2.0"
htmlchar = 18600

# Input AWS SNS ARN
snsarn = 'arn:aws:sns:ap-southeast-2:XXXXXXXXXXXXX:Daily_Check_Notifications_CONTOSSO'
sns = boto3.client('sns')

def pagecheck():
    # Present the request to the webpage as if coming from a user in a browser
    user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)'
    values = {'name': 'user'}
    headers = {'User-Agent': user_agent}
    data = urllib.parse.urlencode(values)
    data = data.encode('ascii')

    # "Null" the Message Detail and Error lists
    msgdet_list = []
    error_list = []

    request = Request(url)
    req = urllib.request.Request(url, data, headers)
    response = urlopen(request)
    with urllib.request.urlopen(request) as response:

        # Get the URL. This gets the real URL.
        acturl = response.geturl()
        msgdet_list.append("\nThe Actual URL is:")
        msgdet_list.append(str(acturl))
        if adfslink not in acturl:
            error_list.append(str("Redirect Fail"))

        # Get the HTTP response code
        httpcode = response.code
        msgdet_list.append("\nThe HTTP code is: ")
        msgdet_list.append(str(httpcode))
        if httpcode//200 != 1:
            error_list.append(str("No HTTP 2XX Code"))

        # Get the Headers as a dictionary-like object
        headers = response.info()
        msgdet_list.append("\nThe Headers are:")
        msgdet_list.append(str(headers))
        if response.info() == "":
            error_list.append(str("Header Error"))

        # Get the date of request and compare to UTC (DD MMM YYYY HH MM)
        date = response.info()['date']
        msgdet_list.append("The Date is: ")
        msgdet_list.append(str(date))
        returndate = str(date.split()[1:5])
        returndate = re.sub(r'[^\w\s]', '', returndate)
        returndate = returndate[:-2]
        currentdate = datetime.utcnow()
        currentdate = currentdate.strftime("%d %b %Y %H%M")
        if returndate != currentdate:
            date_error = ("Date Error. Returned Date: ", returndate, "Expected Date: ", currentdate, "Times in UTC (DD MMM YYYY HH MM)")
            date_error = str(date_error)
            date_error = re.sub(r'[^\w\s]', '', date_error)
            error_list.append(str(date_error))

        # Get the server
        headerserver = response.info()['server']
        msgdet_list.append("\nThe Server is: ")
        msgdet_list.append(str(headerserver))
        if pageserver not in headerserver:
            error_list.append(str("Server Error"))

        # Get all HTML data and confirm no major change to content size by character length (global var: htmlchar).
        html = response.read()
        htmllength = len(html)
        msgdet_list.append("\nHTML Length is: ")
        msgdet_list.append(str(htmllength))
        msgdet_list.append("\nThe Full HTML is: ")
        msgdet_list.append(str(html))
        msgdet_list.append("\n")
        if htmllength // htmlchar != 1:
            error_list.append(str("Page HTML Error - incorrect # of characters"))
        if adfslink not in str(acturl):
            error_list.append(str("ADFS Link Error"))

        error_list.append("\n")
        error_count = len(error_list)
        if error_count == 1:
            error_list.insert(0, 'No Errors Found.')
        elif error_count == 2:
            error_list.insert(0, 'Error Found:')
        else:
            error_list.insert(0, 'Multiple Errors Found:')

    # Pass completed results and data to the notification() module
    notification(msgdet_list, error_list, error_count)

# Use AWS SNS to create a notification email with the additional data generated
def notification(msgdet_list, error_list, errors):
    datacheck = str("\n".join(msgdet_list))
    errorcheck = str("\n".join(error_list))
    notificationbody = str(errorcheck + datacheck)
    if errors > 1:
        result = 'FAILED!'
    else:
        result = 'passed.'
    notificationheader = ('The daily ADFS check has been marked as ' + result + ' ' + str(errors) + ' ' + str(error_list))
    if result != 'passed.':
        # message = sns.publish(
        #     TopicArn = snsarn,
        #     Subject = notificationheader,
        #     Message = notificationbody
        # )
        # Output result to CloudWatch logstream
        print('Response: ' + notificationheader)
    else:
        print('passed')
        sys.exit()

# Trigger the Lambda handler
def lambda_handler(event, context):
    aws_account_ids = [context.invoked_function_arn.split(":")[4]]
    pagecheck()
    return "Successful"

sys.exit()
sys.exit()
Your CloudWatch logs contain the following error message:
Process exited before completing request
This is caused by invoking sys.exit() in your code. Locally your Python interpreter will just terminate when encountering such a sys.exit().
AWS Lambda, on the other hand, expects a Python function to simply return, and treats sys.exit() as an error. Since your function probably got invoked asynchronously, AWS Lambda retries executing it twice.
To solve your problem, you can replace the occurrences of sys.exit() with return, or even better, just remove the sys.exit() calls, as there are already implicit returns in the places where you use sys.exit().
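A minimal sketch of what that looks like (assuming the rest of the script stays as above):

def notification(msgdet_list, error_list, errors):
    # ... build notificationheader and notificationbody as before ...
    if errors > 1:
        print('Response: ' + notificationheader)
    else:
        print('passed')   # simply fall through; no sys.exit()

def lambda_handler(event, context):
    pagecheck()
    return "Successful"   # and no module-level sys.exit() after this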
