How to receive ParseHub project result in Zapier Webhook?

I created a project in #parsehub and am sending its results to #zapier. A call arrives in Zapier, but without the JSON itself. Example of the JSON expected from ParseHub: (screenshot omitted)
Zapier configuration: (screenshot omitted)
Response received in Zapier itself:
start_running_time: 2022-02-17T04:13:53.829982
status: running
start_template: main_template
data_ready: 0
webhook: https://hooks.zapier.com/hooks/catch/9583384/brhtvvh/
options_json:
    outputType: csv
    loadJs: true
    sendEmail: true
    webhook: https://hooks.zapier.com/hooks/catch/9583384/brhtvvh/
    rotateIPs: false
    maxWorkers: 0
    maxPages: 0
    startValue:
    startTemplate: main_template
    startUrl: https://www.dice.com/jobs?q=salesforce%20developer&location=Austin,%20TX,%20USA&latitude=30.267153&longitude=-97.7430608&countryCode=US&locationPrecision=City&adminDistrictCode=TX&radius=30&radiusUnit=mi&page=1&pageSize=100&filters.postedDate=ONE&language=en&eid=S2Q_,qA_3
    customProxies:
    proxyAllowInsecure: false
    allowPerfectSimulation: false
    proxyDisableAdblock: false
    proxyCustomRotationHybrid: false
    preserveOrder: false
    recoveryRules:
    allowReselection: false
    ignoreDisabledElements: true
is_empty: False
custom_proxies:
pages: 0
project_token: tNbDh2np_ZA6
start_url: https://www.dice.com/jobs?q=salesforce%20developer&location=Austin,%20TX,%20USA&latitude=30.267153&longitude=-97.7430608&countryCode=US&locationPrecision=City&adminDistrictCode=TX&radius=30&radiusUnit=mi&page=1&pageSize=100&filters.postedDate=ONE&language=en&eid=S2Q_,qA_3
start_time: 2022-02-17T04:13:53
start_value:
run_token: tchOcTzTQbYf
querystring:
And no JSON data is received.

Do you see any improvement with the Catch Raw Hook in the Webhooks by Zapier trigger step instead?
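For context, the payload above is ParseHub's run-status notification: status is running and data_ready is 0, so no scraped data existed yet at that point. The results are normally fetched from ParseHub's REST API once the run finishes. A minimal Python sketch, where the API key value is a placeholder:

import requests

API_KEY = "YOUR_PARSEHUB_API_KEY"  # placeholder: your ParseHub API key
run_token = "tchOcTzTQbYf"  # run_token from the webhook payload above

# Fetch the finished run's results from ParseHub's run-data endpoint
response = requests.get(
    f"https://www.parsehub.com/api/v2/runs/{run_token}/data",
    params={"api_key": API_KEY, "format": "json"},
)
response.raise_for_status()
print(response.json())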

Related

QLDB Python Driver Error Handling with Lambda and SQS

We have a QLDB ingestion process that consists of a Lambda function triggered by SQS.
We want to make sure our pipeline is airtight, so that if a failure or error occurs during driver execution, we don't lose data that fails to commit to QLDB.
In our testing we noticed that if there's a failure within the Lambda itself, it automatically resends the message to the queue, but if the driver fails, the data is lost.
I understand that the default behavior for the driver is to retry four times after the initial failure. My question is, if I wrap qldb_driver.execute_lambda() in a try statement, will that allow the driver to retry upon failure or will it instantly return as a failure and be handled by the except statement?
Here is how I've written the first half of the function:
import json
import boto3
import datetime
from pyqldb.driver.qldb_driver import QldbDriver
from utils import upsert, resend_to_sqs, delete_from_sqs

queue_url = 'https://sqs.XXX/'
sqs = boto3.client('sqs', region_name='us-east-1')
ledger = 'XXXXX'
table = 'XXXXX'
qldb_driver = QldbDriver(ledger_name=ledger, region_name='us-east-1')

def lambda_handler(event, context):
    # Simple counter to identify messages
    i = 0
    # Error flag
    error = False
    # Empty list to store message send status as well as body or receipt_handle
    batch_messages = []
    for record in event['Records']:
        payload = json.loads(record["body"])
        payload['update_ts'] = str(datetime.datetime.now())
        try:
            qldb_driver.execute_lambda(
                lambda executor: upsert(executor, ledger=ledger, table_name=table, data=payload))
            # If the message commits successfully, give it status 200 and add the
            # receipt_handle to our list, so in case an error occurs later we can
            # delete this message from the queue.
            message_info = {f'message_{i}': 200, 'receiptHandle': record['receiptHandle']}
            batch_messages.append(message_info)
        except Exception as e:
            print(e)
            # Flip error flag to True
            error = True
            # If the commit fails, set status 400 and add the message's body to our
            # list. This will allow us to send the message back to the queue during
            # error handling.
            message_info = {f'message_{i}': 400, 'body': record['body']}
            batch_messages.append(message_info)
        i += 1
Assuming that this try/except allows the driver to retry upon failure, I've written an additional process to record message data from our batch to delete successful commits and send failures back to the queue:
# Begin error handling
if error:
    count = 0
    for j in range(len(batch_messages)):
        # If a message was sent successfully, delete it from the queue
        if batch_messages[j][f'message_{j}'] == 200:
            receipt_handle = batch_messages[j]['receiptHandle']
            delete_from_sqs(sqs, queue_url, receipt_handle)
        # If the message failed to commit to QLDB, send it back to the queue
        else:
            body = batch_messages[j]['body']
            resend_to_sqs(sqs, queue_url, body)
            count += 1
    print(f"ERROR(S) DETECTED - {count} MESSAGES RETURNED TO QUEUE")
else:
    print("BATCH PROCESSING SUCCESSFUL")
Thank you for your insight!
The QLDB Python driver can be configured for more or fewer retries if you need. I'm not sure if you wanted it to try only once, or if you were asking whether the driver will try the transaction 4 times before triggering the try/except. The driver will still try up to 4 times before throwing the exception.
You can follow the example here to modify the retry amount. Also note that the default retry timeout is a random millisecond jitter, not exponential backoff. With QLDB you shouldn't need to wait long periods between retries, since it uses optimistic concurrency control.
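A minimal sketch of that configuration using pyqldb's RetryConfig (the retry_limit value here is illustrative):

from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Lower the driver's retry limit from the default of 4 (value is illustrative)
retry_config = RetryConfig(retry_limit=2)
qldb_driver = QldbDriver(ledger_name='XXXXX', region_name='us-east-1',
                         retry_config=retry_config)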
Also, with your design of throwing the failed message back into the queue, you might want to consider a dead-letter queue instead. Dead-letter queues prevent problem messages from retrying indefinitely, unless that's your goal.
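A sketch of attaching a dead-letter queue via a redrive policy (the DLQ ARN and maxReceiveCount values are placeholders):

import json
import boto3

sqs = boto3.client('sqs', region_name='us-east-1')
# After maxReceiveCount failed receives, SQS moves the message to the DLQ
sqs.set_queue_attributes(
    QueueUrl='https://sqs.XXX/',
    Attributes={
        'RedrivePolicy': json.dumps({
            'deadLetterTargetArn': 'arn:aws:sqs:us-east-1:123456789012:my-dlq',
            'maxReceiveCount': '5'
        })
    }
)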
Edit: note that the QLDB driver exhausts its retries before raising an exception.

Why can't I get all contributors of a GitHub repository using the GitHub API?

Following is my code and every time it only returns 60 contributors:
import requests

base_url = "https://api.github.com"
owner = "http4s"
repo = "http4s"
# Set the headers for the request
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": "token xxxxxx"
}
url = f"{base_url}/repos/{owner}/{repo}/contributors?per_page=100"
more_pages = True
# Initialize a set to store the contributors
contributors = set()
while more_pages:
    # Send the request to the API endpoint
    response = requests.get(url, headers=headers)
    # Check the status code of the response
    if response.status_code == 200:
        # If the request is successful, add the list of contributors to the `contributors` set
        contributors.update([contributor["login"] for contributor in response.json()])
        # Check the `Link` header in the response to see if there are more pages to fetch
        link_header = response.headers.get("Link")
        if link_header:
            # The `Link` header has the format `<url>; rel="name"`, so we split it on `;` to get the URL
            next_url = link_header.split(";")[0][1:-1]
            # Check if the `rel` parameter is "next" to determine if there are more pages to fetch
            if "rel=\"next\"" in link_header:
                url = next_url
            else:
                more_pages = False
        else:
            more_pages = False
    else:
        # If the request is not successful, print the status code and the error message
        print(f"Failed to get contributors: {response.status_code} {response.text}")
        more_pages = False
Can someone tell me how I can fix my code?
I have tried many ways to fetch more contributors, but every time it only returns 60 distinct contributors. I want to get the full list of contributors.
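For comparison, here is a sketch of the same loop using the Link-header parsing built into requests (response.links). Note that link_header.split(";")[0] always takes the first URL in the header, and on pages after the first that URL can be the "prev" link rather than "next":

import requests

base_url = "https://api.github.com"
owner = "http4s"
repo = "http4s"
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": "token xxxxxx"
}

contributors = set()
url = f"{base_url}/repos/{owner}/{repo}/contributors?per_page=100"
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    contributors.update(c["login"] for c in response.json())
    # requests parses the Link header into response.links;
    # no "next" entry means this was the last page
    url = response.links.get("next", {}).get("url")
print(len(contributors))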

Pause/Delay sending of new batch of users from swarm

I have a test case where I need to spawn 1000 websocket connections and sustain a conversation over them through a Locust task (it has a predefined send/receive process for the websocket connections). I can successfully do it with the following setup in Locust:
Max Number of Users: 1000
Hatch rate: 1000
However, this setup opens 1000 connections every second. Even if I lower the hatch rate, there comes a time when it continues to spawn 1000 websocket connections per second. Is there a way to spawn 1000 users instantly and then halt/delay the swarm from sending a new batch of 1000 connections for some time?
I am trying to test whether my server can handle 1000 users sending and receiving messages over websocket connections. I have tried a multiprocessing approach in Python, but I'm having a hard time spawning connections as fast as I can with Locust.
import json
import random
import time

from locust import HttpUser, TaskSet, task, constant, events
from websocket import create_connection

class UserBehavior(TaskSet):
    statements = [
        "Do you like coffee?",
        "What's your favorite book?",
        "Do you invest in crypto?",
        "Who will host the Superbowl next year?",
        "Have you listened to the new Adele?",
        "Coldplay released a new album",
        "I watched the premiere of Succession season 3 last night",
        "Who is your favorite team in the NBA?",
        "I want to buy the new Travis Scott x Jordan shoes",
        "I want a Lamborghini Urus",
        "Have you been to the Philippines?",
        "Did you sign up for a Netflix account?"
    ]

    def on_start(self):
        pass

    def on_quit(self):
        pass

    @task
    def send_convo(self):
        end = False
        ws_url = "ws://xx.xx.xx.xx:8080/websocket"
        self.ws = create_connection(ws_url)
        body = json.dumps({"text": "start blender"})
        self.ws.send(body)
        while True:
            # print("Waiting for response..")
            response = self.ws.recv()
            if response is not None:
                if "Sorry, this world closed" in response:
                    end = True
                break
        if not end:
            body = json.dumps({"text": "begin"})
            self.ws.send(body)
            while True:
                # print("Waiting for response..")
                response = self.ws.recv()
                if response is not None:
                    # print("[BOT]: ", response)
                    if "Sorry, this world closed" in response:
                        end = True
                        self.ws.close()
                    break
        if not end:
            body = json.dumps({"text": random.choice(self.statements)})
            start_at = time.time()
            self.ws.send(body)
            while True:
                response = self.ws.recv()
                if response is not None:
                    if "Sorry, this world closed" not in response:
                        response_time = int((time.time() - start_at) * 1000)
                        print(f"[BOT]Response: {response}")
                        response_length = len(response)
                        events.request_success.fire(
                            request_type='Websocket Recv',
                            name='test/ws/echo',
                            response_time=response_time,
                            response_length=response_length,
                        )
                    else:
                        end = True
                        self.ws.close()
                    break
        if not end:
            body = json.dumps({"text": "[DONE]"})
            self.ws.send(body)
            while True:
                response = self.ws.recv()
                if response is not None:
                    if "Sorry, this world closed" in response:
                        end = True
                        self.ws.close()
                    break
        if not end:
            time.sleep(1)
            body = json.dumps({"text": "EXIT"})
            self.ws.send(body)
            time.sleep(1)
            self.ws.close()

class WebsiteUser(HttpUser):
    tasks = [UserBehavior]
    wait_time = constant(2)
    host = "ws://xx.xx.xx.xx:8080/websocket"
For this particular test, I set the maximum users to 1 and the hatch rate to 1, and clearly Locust keeps sending 1 request per second, as seen in the following responses:
[BOT]Response: {"text": "No, I don't have a netflix account. I do have a Hulu account, though.", "quick_replies": null}
[BOT]Response: {"text": "I have not, but I would love to go. I have always wanted to visit the Philippines.", "quick_replies": null
[BOT]Response: {"text": "No, I don't have a netflix account. I do have a Hulu account, though.", "quick_replies": null}
[BOT]Response: {"text": "I think it's going to be New Orleans. _POTENTIALLY_UNSAFE__", "quick_replies": null}
My expectation is that after I set the maximum users to 1 and a hatch rate of 1, there would instantly be 1 websocket connection sending a random message and receiving 1 main response from the websocket server. But what happens is that it keeps repeating the task every second until I explicitly hit the stop button on the Locust dashboard.
I would debug your logic. Put more print statements at various places in each if block and between the blocks. When dealing with a long chain of decisions, it's easy for things to get tripped up.
In this case, you only want to sleep in one very specific situation, but that's not happening. Most likely you're setting end = True when you're not expecting to, so you never sleep and immediately get a new user.
EDIT:
Reviewing your question and issue description again, it sounds like you expect Locust to send a single request and then never send another one. That's not how Locust works. Locust runs your task code for a user; when the task finishes, that user goes away, Locust waits for the configured time (you appear to have it set to 2 seconds), then spawns another user and starts the task over again. The idea is that it tries to keep the number of running users near the constant you specify. By default, it will not just run 1000 users once and then end the test.
If you want to keep all 1000 users running, you need to make them continue to execute code. For example, you could put everything in your task in another while loop with another way to break out and end. That way even after making your socket connection and sending the single message you expect, the user will stay alive in the loop and won't end because it ran out of things to do. Doing it this way requires a lot more work and coordination but is possible. There may be other questions on SO about different approaches if this isn't exactly what you're looking for.
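Here is a minimal sketch of that outer-loop idea; run_conversation is a hypothetical placeholder for the send/receive logic from the question, not Locust API:

from locust import HttpUser, TaskSet, task, constant

class UserBehavior(TaskSet):
    @task
    def send_convo(self):
        # The outer loop keeps this simulated user alive: instead of the
        # task ending (and Locust spawning a replacement after wait_time),
        # the same user repeats the conversation until we decide to stop.
        while True:
            if self.run_conversation():
                break

    def run_conversation(self):
        # Hypothetical placeholder for the websocket send/receive logic
        # from the question; return True once the server closes the world.
        return True

class WebsiteUser(HttpUser):
    tasks = [UserBehavior]
    wait_time = constant(2)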

Azure Python SDK Service Bus: receive message

I am a bit confused about the Azure Python Service Bus library.
I have a Service Bus TOPIC and SUBSCRIPTION that listen for specific messages, and I have code to receive those messages, which are then processed by AWS Comprehend.
Following the Microsoft documentation, the basic code to receive the message works and I am able to print it, but when I integrate the same logic with Comprehend it fails.
Here is the example, this is the bit of code from Microsoft documentation:
with servicebus_client:
    # get the Queue Receiver object for the queue
    receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5)
    with receiver:
        for msg in receiver:
            print("Received: " + str(msg))
            # complete the message so that the message is removed from the queue
            receiver.complete_message(msg)
and the output is this
{"ModuleId":"123458", "Text":"This is amazing."}
Receive is done.
My first thought was that the received message was a JSON object, so I started writing code to read the data from the JSON output, as follows:
servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STR)
with servicebus_client:
    receiver = servicebus_client.get_subscription_receiver(
        topic_name=TOPIC_NAME,
        subscription_name=SUBSCRIPTION_NAME
    )
    with receiver:
        received_msgs = receiver.receive_messages(max_message_count=10, max_wait_time=5)
        for msg in received_msgs:
            # print(str(msg))
            message = json.dumps(msg)
            text = message['Text']
            # passing the text to comprehend
            result_json = json.dumps(comprehend.detect_sentiment(Text=text, LanguageCode='en'),
                                     sort_keys=True, indent=4)
            result = json.loads(result_json)  # converting json to python dictionary
            # extracting the sentiment value
            sentiment = result["Sentiment"]
            # extracting the sentiment score
            if sentiment == "POSITIVE":
                value = round(result["SentimentScore"]["Positive"] * 100, 2)
            elif sentiment == "NEGATIVE":
                value = round(result["SentimentScore"]["Negative"] * 100, 2)
            elif sentiment == "NEUTRAL":
                value = round(result["SentimentScore"]["Neutral"] * 100, 2)
            elif sentiment == "MIXED":
                value = round(result["SentimentScore"]["Mixed"] * 100, 2)
            # store the text, sentiment and value in a dictionary and convert it to JSON
            output = {'Text': text, 'Sentiment': sentiment, 'Value': value}
            output_json = json.dumps(output)
            print('Text: ', text, '\nSentiment: ', sentiment, '\nValue: ', value)
            print('In JSON format\n', output_json)
            receiver.complete_message(msg)
print("Receive is done.")
But when I run this I get the following error:
TypeError: Object of type ServiceBusReceivedMessage is not JSON serializable
Has this ever happened to anybody who can help me understand what type of object Service Bus returns from the receive?
Thank you so much everyone
Has this ever happened to anybody who can help me understand what type of object Service Bus returns from the receive?
The type of the received message is ServiceBusReceivedMessage, which is derived from ServiceBusMessage. The contents of the message can be fetched from its body property.
Can you please try something like:
message = json.dumps(msg.body)
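If that still fails, here is a minimal parsing sketch, assuming azure-servicebus v7 and a UTF-8 JSON body. Note that json.loads, not json.dumps, is what turns a JSON string into a Python dict (json.dumps goes the other way), and str(msg) is what the working docs example printed:

import json

def message_to_dict(msg):
    # str(msg) returns the message body decoded as a string in
    # azure-servicebus v7 (as the docs example's print shows);
    # json.loads then parses that JSON string into a dict.
    return json.loads(str(msg))

# usage inside the receive loop (sketch):
# payload = message_to_dict(msg)
# text = payload['Text']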

Sending Mail in Python - SMTPDataError: (451, b'4.3.0 Mail server temporarily rejected message')

I am able to send mail without issue when I implement the same code plainly, without using any functions. But when I try to send mail using a function, I get mail saying my sender account is disabled, and it blocks my mail and account.
I have to implement the mail-sending code in a function, so that mail can be sent whenever certain filters are met.
My code snippet:
import smtplib

def setup_mail():
    global smtpObj, sender, receivers, message
    sender = 'sample@gmail.com'
    receivers = ['sample1@gmail.com', 'sample2@gmail.com']
    message = """From: From Person <from@fromdomain.com>
    To: To Person <to@todomain.com>
    Subject: 1 MIN candle 15 points alert
    TIME: watch out next 2 minutes
    """
    smtpObj = smtplib.SMTP_SSL('smtp.gmail.com', 465)
    smtpObj.login("sample@gmail.com", "samplepassword")

setup_mail()

def sending_mails(smtpObj):
    smtpObj.sendmail(sender, receivers, message)

for i in range(3):
    sending_mails(smtpObj);
    print("Successfully sent email")
The error I got:
Traceback (most recent call last):
  File "C:/Users/Arun/PycharmProjects/Astro/venv/Scripts/ZERODHA_DEV/sendmail.py", line 22, in <module>
    sending_mails(smtpObj);
  File "C:/Users/Arun/PycharmProjects/Astro/venv/Scripts/ZERODHA_DEV/sendmail.py", line 20, in sending_mails
    smtpObj.sendmail(sender, receivers, message)
  File "C:\Python\lib\smtplib.py", line 888, in sendmail
    raise SMTPDataError(code, resp)
smtplib.SMTPDataError: (451, b'4.3.0 Mail server temporarily rejected message. s77sm14888153pfc.164 - gsmtp')
However, it works with simple code like:
import smtplib

global smtpObj, sender, receivers, message
sender = 'sample@gmail.com'
receivers = ['sample1@gmail.com', 'sample2@gmail.com']
message = """From: From Person <from@fromdomain.com>
To: To Person <to@todomain.com>
Subject: 1 MIN candle 15 points alert
TIME: watch out next 2 minutes
"""
smtpObj = smtplib.SMTP_SSL('smtp.gmail.com', 465)
smtpObj.login("sample@gmail.com", "samplepassword")

for i in range(3):
    smtpObj.sendmail(sender, receivers, message)
    print("Successfully sent email")
The immediate problem is that your message is indented: the leading whitespace makes each subsequent line a folded continuation of the first header, and a valid SMTP message also needs two literal adjacent newlines (a blank line) to demarcate the headers from the body.
While you can assemble simple email messages from ASCII strings, this is not tenable for real-world modern email messages. You really want to use the email library (or a higher level third-party wrapper or replacement) to handle all the corner cases of email message encapsulation, especially if you are not an expert on email and MIME yourself.
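A minimal sketch with the standard library's email.message.EmailMessage, reusing the placeholder addresses and credentials from the question; send_message handles header formatting, so indentation inside a function is no longer a hazard:

import smtplib
from email.message import EmailMessage

def send_alert():
    msg = EmailMessage()
    msg['From'] = 'sample@gmail.com'
    msg['To'] = 'sample1@gmail.com, sample2@gmail.com'
    msg['Subject'] = '1 MIN candle 15 points alert'
    msg.set_content('TIME: watch out next 2 minutes')
    # SMTP_SSL as in the question; send_message serializes headers correctly
    with smtplib.SMTP_SSL('smtp.gmail.com', 465) as smtp:
        smtp.login('sample@gmail.com', 'samplepassword')
        smtp.send_message(msg)

send_alert()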
