How to get messages from Google's Pub/Sub system using the current Pub/Sub subscriber - python-3.x

I need to receive published messages from Google's Pub/Sub system using a Python-based subscriber.
For this I did the following steps:
On the web console I created a project, a registry, a telemetry topic, and a device, and attached a subscription topic to the telemetry topic.
At the moment my code can publish messages over the MQTT bridge and also via the publish functionality of the Pub/Sub library.
I can pull these messages from the terminal using the following command:
gcloud pubsub subscriptions pull --auto-ack projects/{project_id}/subscriptions/{subscription_topic}
Below you see the relevant snippet of my code. It is based on the git examples, but some functions do not seem to exist anymore in version 0.39.1 of the google-cloud-pubsub package. One example is the subscriber.subscription_path() method.
def receive_messages(subscription_path, service_account_json):
    import time
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient(credentials=service_account_json)
    # subscription_path = subscriber.subscription_path(
    #     project_id, subscription_name)

    def callback(message):
        print('Received message: {}'.format(message))
        message.ack()

    subscriber.subscribe(subscription_path, callback=callback)
    print('Listening for messages on {}'.format(subscription_path))
    while True:
        time.sleep(60)
When I run this function, countless threads are started in the background bit by bit, but none of them ever seem to quit or invoke the callback function.
I believe I have installed all requirements:
pip3 freeze
asn1crypto==0.24.0
cachetools==3.0.0
certifi==2018.11.29
cffi==1.11.5
chardet==3.0.4
cryptography==2.4.2
google-api-core==1.7.0
google-api-python-client==1.7.5
google-auth==1.6.2
google-auth-httplib2==0.0.3
google-auth-oauthlib==0.2.0
google-cloud-bigquery==1.8.1
google-cloud-core==0.29.1
google-cloud-datastore==1.7.3
google-cloud-monitoring==0.31.1
google-cloud-pubsub==0.39.1
google-resumable-media==0.3.2
googleapis-common-protos==1.5.6
grpc-google-iam-v1==0.11.4
grpcio==1.17.1
httplib2==0.12.0
idna==2.8
keyring==10.1
keyrings.alt==1.3
oauthlib==3.0.0
paho-mqtt==1.4.0
protobuf==3.6.1
pyasn1==0.4.5
pyasn1-modules==0.2.3
pycparser==2.19
pycrypto==2.6.1
pycurl==7.43.0
pygobject==3.22.0
PyJWT==1.6.4
python-apt==1.4.0b3
pytz==2018.9
pyxdg==0.25
redis==3.0.1
requests==2.21.0
requests-oauthlib==1.2.0
RPi.GPIO==0.6.5
rsa==4.0
SecretStorage==2.3.1
six==1.12.0
unattended-upgrades==0.1
uritemplate==3.0.0
urllib3==1.24.1
virtualenv==16.2.0
I run this code on Debian as well as on Windows 10, and I updated gcloud:
gcloud components update
For the past week, I've been trying different solutions and the seemingly obsolete Google examples. The documentation, which seems even older than the code examples, did not help either. So I hope someone here can help me finally receive messages from the Pub/Sub system with a Python-based client.
I hope I have provided the most important information, and I thank you in advance for your effort to help me.

The examples maintained on the Python documentation site here should be up to date. Make sure that you've followed all the steps in the "In order to use this library, you first need to go through the following steps" section before running any code. In particular, you may not have set up authentication properly; I don't believe you should be passing the credentials path manually.
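If authentication is the problem, the usual approach is Application Default Credentials: point the standard environment variable at your key file instead of passing the path to the client. A minimal sketch (the path below is a placeholder, not taken from the question):

```python
import os

# Placeholder path: point this at your downloaded service account key file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service_account.json"

# With the variable set, the client library finds the credentials itself,
# so no credentials= argument is needed:
# subscriber = pubsub_v1.SubscriberClient()
```

Setting the variable in your shell before launching Python works just as well.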

from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

# Setup as in the linked sample; fill in your own IDs.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    print(f"Received {message}.")
    message.ack()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
print(f"Listening for messages on {subscription_path}..\n")

try:
    streaming_pull_future.result(timeout=timeout)
except TimeoutError:
    streaming_pull_future.cancel()  # Trigger the shutdown.
    streaming_pull_future.result()  # Block until the shutdown is complete.

Related

How to increase the AWS lambda to lambda connection timeout or keep the connection alive?

I am using the boto3 lambda client to invoke a lambda_S from a lambda_M. My code looks something like:

cfg = botocore.config.Config(retries={'max_attempts': 0},
                             read_timeout=840,
                             connect_timeout=600)  # tried also by including
                                                   # region_name="us-east-1"
lambda_client = boto3.client('lambda', config=cfg)  # even tried without config
invoke_response = lambda_client.invoke(
    FunctionName=lambda_name,
    InvocationType='RequestResponse',
    Payload=json.dumps(request)
)
Lambda_S is supposed to run for about 6 minutes, and I want lambda_M to still be alive to get the response back from lambda_S, but lambda_M is timing out after logging a CloudWatch message like:
"Failed to connect to proxy URL: http://aws-proxy..."
I searched and found something like: configure your HTTP client, SDK, firewall, proxy, or operating system to allow long connections with timeout or keep-alive settings. But the issue is I have no idea how to do any of these with Lambda. Any help is highly appreciated.
I would approach this a bit differently. Lambdas charge you by the second, so in general you should avoid waiting in them. One way to do that is to create an SNS topic and use it as the messenger to trigger another lambda.
The workflow goes like this:
SNS-A -> triggers lambda-A
SNS-B -> triggers lambda-B
So if your lambda-B wants to send something to lambda-A to process and needs the results back, then from lambda-B you send a message to the SNS-A topic and quit.
SNS-A triggers lambda-A, which does its work and at the end sends a message to SNS-B.
SNS-B triggers lambda-B.
AWS has example documentation on what policies you should put in place; here is one.
I don't know how you are automating the deployment of native assets like SNS and lambda; assuming you use CloudFormation:
you create your AWS::Lambda::Function
you create your AWS::SNS::Topic
and in its definition you add a 'Subscription' property and point it to your lambda.
So in our example, your SNS-A will have a subscription defined for lambda-A.
Lastly, you grant SNS permission to trigger the lambda: AWS::Lambda::Permission.
When these 3 are in place, you are all set to send messages to the SNS topic, which will now be able to trigger the lambda.
You will find SO answers to questions on how to do this in CloudFormation (example), but you can also read up on the AWS CloudFormation documentation.
If you are not worried about automating the stuff and want to test these manually, then aws-cli is your friend.
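The three resources above can be sketched in a minimal CloudFormation template. This is illustrative only: the resource names, runtime, and inline handler are assumptions, and the IAM role for the function is referenced but not shown.

```yaml
Resources:
  LambdaA:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: lambda-A
      Runtime: python3.9
      Handler: index.handler
      Role: !GetAtt LambdaARole.Arn   # IAM execution role, defined elsewhere
      Code:
        ZipFile: |
          def handler(event, context):
              return "ok"

  SnsA:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: sns-A
      Subscription:
        - Protocol: lambda
          Endpoint: !GetAtt LambdaA.Arn

  # Without this permission, SNS cannot invoke the function.
  SnsInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref LambdaA
      Principal: sns.amazonaws.com
      SourceArn: !Ref SnsA
```

A mirror pair (SNS-B / lambda-B) completes the round trip described above.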

FMETP WebGL Unity Build. Emscripten error when FMNetworkManager activated in Hierarchy

So I have been working on a project that streams a video feed from an Oculus Quest to a WebGL build running on a remote server (DigitalOcean).
I have two issues currently:
1. When I build to WebGL and push the update online, it will only run if I disable the FMNetworkManager.
If I run the app locally, it has no issues, and I have been able to have video sent from the Quest headset to the receiver app.
Part of the response is as follows:
An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:
uncaught exception: abort("To use dlopen, you need to use Emscripten's linking support, see https://github.com/kripken/emscripten/wiki/Linking") at jsStackTrace (Viewer.wasm.framework.unityweb:8:15620)
stackTrace (Viewer.wasm.framework.unityweb:8:15791)
onAbort#https://curtin-cooking-control-nr9un.ondigitalocean.app/Build/UnityLoader.js:4:11199
abort (Viewer.wasm.framework.unityweb:8:500966)
_dlopen (Viewer.wasm.framework.unityweb:8:181966)
#blob:https://***/de128118-3923-4c88-8092-7a9945d90746 line 8 > WebAssembly.instantiate:wasm-function[60882]:0x1413efb (blob:***/de128118-3923-4c88-8092-7a9945d90746 line 8 > WebAssembly.instantiate:wasm-function[62313]:0x1453761)
...
...
...WebAssembly.instantiate:wasm-function[63454]:0x148b9a9)
UnityModule [UnityModule/Module.dynCall_v] (Viewer.wasm.framework.unityweb:8:484391)
browserIterationFunc (Viewer.wasm.framework.unityweb:8:186188)
runIter (Viewer.wasm.framework.unityweb:8:189261)
Browser_mainLoop_runner (Viewer.wasm.framework.unityweb:8:187723)
So I understand there is an issue relating to (wasm) Emscripten, and I have scoured the internet looking for solutions to no avail.
As mentioned, I have had video streaming from one device to another, but only locally, with a node.js server also running on DigitalOcean. The server appears to be functioning: both devices are registered by it at runtime. In each app I see what appears to be data transferring (Last Sent Time keeps updating, and FM Web Socket Network_debug pushes [connected: True] to a text UI), but the IsConnected and Found Server checkboxes inside FM Client (script) never get checked.
FMNetworkManager
I'm by no means an expert in Unity programming, WebGL, or web server setup, so while attempting the little changes that some solutions suggest, I have mostly found myself looking at irrelevant answers, or staring blank-eyed into space wondering where I would even implement them.
Any guidance would be great; a step-by-step solution would be fantastic.
[Edit - Detailed Error]
UnityLoader.js:1150 wasm streaming compile failed: TypeError: Could not download wasm module
printErr # UnityLoader.js:1150
Promise.catch (async)
doNativeWasm # 524174d7-d893-4b91-8…0-aa564a23702d:1176
(anonymous) # 524174d7-d893-4b91-8…0-aa564a23702d:1246
(anonymous) # 524174d7-d893-4b91-8…-aa564a23702d:20166
UnityLoader.loadCode.Module # UnityLoader.js:889
script.onload # UnityLoader.js:854
load (async)
loadCode # UnityLoader.js:849
processWasmFrameworkJob # UnityLoader.js:885
job.callback # UnityLoader.js:475
setTimeout (async)
job.complete # UnityLoader.js:490
(anonymous) # UnityLoader.js:951
decompressor.worker.onmessage # UnityLoader.js:89
Thanks in advance
Aaron
You are incorrectly combining FMNetworkUDP and FMWebsocket together.
For a WebGL build, UDP is not allowed, which causes the error as expected.
Your websocket server is reachable because it is exposed via your public IP.
But please try not to expose your server IP on a public forum like Stack Overflow, where anyone could connect to your server at any time in the future.
You should remove FMNetworkManager completely, keeping only the FMWebsocket components for WebGL streaming.
You may test it with their websocket streaming example scene in a WebGL build.

AZURE FUNCTIONS: PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH? for pdf2image

I am getting the error "Result: Failure Exception: PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?" for Azure Functions.
I am using the pdf2image library's convert_from_path() to convert my PDF to images. This works fine when I test locally. When publishing the function to Azure, the poppler-utils package also gets installed there, but the error still occurs. I saw a lot of threads related to this error and tried their suggestions, but wanted to know if anyone has experienced this specifically with Azure Functions.
A suggestion for this issue has been provided in the thread:
"You should try to troubleshoot it by simply having a function that opens a process and prints the help of pdftoppm (poppler). You will be able to get a different message that might be more relevant.
Something like this:

import subprocess

def main():
    p = subprocess.Popen(["pdftoppm", "-h"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    print(out, err)

As a general recommendation, I would bundle the poppler utilities with your package to avoid installing them in the function environment. You can then call the function with poppler_path."
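The bundling recommendation can be sketched as follows. The "poppler_bin" folder name is hypothetical (it would be a directory of poppler binaries shipped with your function code), but pdf2image's convert_from_path does accept a poppler_path argument, so PATH never needs to be modified:

```python
import os

# Hypothetical layout: poppler binaries bundled in a "poppler_bin" folder
# deployed alongside the function code (using the working directory here
# for simplicity).
poppler_path = os.path.join(os.getcwd(), "poppler_bin")

# pdf2image accepts the location directly:
# from pdf2image import convert_from_path
# images = convert_from_path("input.pdf", poppler_path=poppler_path)
print(poppler_path)
```

This sidesteps the question of whether poppler-utils actually landed on the function host's PATH at all.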

RabbitMQ: keep requests after stopping the RabbitMQ process and queue

I made a connection app with RabbitMQ. It works fine, but when I stop the RabbitMQ process, all of my requests get lost. Even after killing the RabbitMQ service, I want my requests to be saved, and after restarting the service, I want all of them returned to their own places.
Here is my rabitmq.py:
import pika
import SimilarURLs

data = ''
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

def rabit_mq_start(Parameter):
    channel.queue_declare(queue='req')
    a = (take(datas=Parameter.decode()))
    channel.basic_publish(exchange='',
                          routing_key='req',
                          body=str(a))
    print(" [x] Sent {}".format(a))
    return a
    channel.start_consuming()

def take(datas):
    returns = SimilarURLs.start(data=datas)
    return returns
In addition, I'm sorry for writing mistakes in my question.
You need to enable publisher confirms (via the confirm_delivery method on your channel object). Then your application must keep track of what messages have been confirmed as published, and what messages have not. You will have to implement this yourself. When RabbitMQ is stopped and started again, your application can re-publish the messages that weren't confirmed.
It would be best to use the asynchronous publisher example as a guide. If you use BlockingConnection you won't get the async notifications when a message is confirmed, defeating their purpose.
If you need further assistance after trying to implement this yourself I suggest following up on the pika-python mailing list.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
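The bookkeeping described above ("keep track of what messages have been confirmed") can be sketched in plain Python, independent of pika. The class name is hypothetical, and the sequence numbers stand in for RabbitMQ's publish sequence numbers, which count from 1 per channel:

```python
class PublishTracker:
    """Track which published messages the broker has confirmed."""

    def __init__(self):
        self._pending = {}   # sequence number -> message body
        self._next_seq = 1   # publish sequence numbers start at 1

    def record(self, body):
        """Call just before publishing; returns the message's sequence number."""
        seq = self._next_seq
        self._next_seq += 1
        self._pending[seq] = body
        return seq

    def confirm(self, seq, multiple=False):
        """Handle a broker ack; multiple=True acks everything up to seq."""
        if multiple:
            for s in [s for s in self._pending if s <= seq]:
                del self._pending[s]
        else:
            self._pending.pop(seq, None)

    def unconfirmed(self):
        """Messages to re-publish after a broker restart."""
        return list(self._pending.values())


tracker = PublishTracker()
tracker.record("msg-1")
tracker.record("msg-2")
tracker.record("msg-3")
tracker.confirm(2, multiple=True)   # broker confirmed msg-1 and msg-2
print(tracker.unconfirmed())        # ['msg-3']
```

In a real publisher you would call record() before each basic_publish, confirm() from the ack callback, and re-publish everything in unconfirmed() after reconnecting.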

Steam 2FA throwing "Failed to get a web session" consistently

I'm using Steam's Python library to create an app for generating 2FA codes. The issue is that Steam throws this error whenever I try to add a phone number or add 2FA to the account:
RuntimeError: Failed to get a web session. Try again in a few minutes
Fri Mar 30 08:39:04 2018 <Greenlet at 0x6e64580: handle_after_logon> failed with RuntimeError
Yes, I've waited a few minutes; I have been trying for hours.
Here's the code that I'm using to attempt this:
from steam import SteamClient
from steam.enums.emsg import EMsg
from steam.guard import SteamAuthenticator

# Create our SteamClient instance
client = SteamClient()

@client.on("logged_on")
def handle_after_logon():
    print("You are now logged in.")
    # Set up the authenticator for our SteamClient instance "client".
    # "client" is our logged-in SteamClient instance, as required by the documentation.
    sa = SteamAuthenticator(medium=client)
    # My account has no phone number
    print(sa.has_phone_number())
    # Adding a phone number because I know ahead of time I don't have one
    sa.add_phone_number("myphonenumber with area code")
    sa.add()  # SMS code will be sent to the account's phone number
    sa.secrets  # dict with authenticator secrets
    # We're gonna need these
    print(sa.secrets)
    sa.finalize(str(input("SMS CODE: ")))  # activate the authenticator
    sa.get_code()  # generate 2FA code for login
    sa.remove()  # removes the authenticator from the account

try:
    # Log in to our SteamClient instance
    client.cli_login("myusername", "mypassword")
    # client.on("logged_on") doesn't trigger without this
    client.run_forever()
# Allow us to log out using keyboard interrupt
except KeyboardInterrupt:
    if client.connected:
        client.logout()
Due to a lack of example code on 2FA in particular, I've followed the documentation as best I could. I've looked at all of the below:
http://steam.readthedocs.io/en/latest/api/steam.client.html
http://steam.readthedocs.io/en/latest/api/steam.guard.html
https://github.com/ValvePython/steam/blob/master/recipes/1.Login/persistent_login.py
I feel like there's simply a silly error in my code, but reading through the documentation doesn't appear to be helping me solve it.
Thanks for your help.
I debugged for a while, and it's an issue with gevent.
I fixed it by adding these lines at the beginning of my script:
from gevent import monkey
monkey.patch_all(thread=False)
