I'm trying to use Python 2.7 with Qpid Proton (0.17.0) on Debian (4.8.4-1) on Google Cloud to connect to the service-side AMQPS 1.0 endpoints of an Azure IoT Hub.
I'm using a shared access policy and SAS token pair (which work successfully with AMQP.Net Lite under C#) for SASL PLAIN. For Azure IoT Hub, you reference the shared access policy as the username in the format policyName@sas.root.IoTHubName
When I use these (obfuscated) credentials with Qpid Proton as follows:
import sys
from proton import Messenger, Message
base = "amqps://iothubowner#sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner#HubName.azure-devices.net:5671"
entityName = "/messages/servicebound/feedback"
messenger = Messenger()
messenger.subscribe("%s%s" % ( base, entityName))
messenger.start()
I get the following error
(debug print of connection string being passed) amqps://iothubowner@sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671
Traceback (most recent call last):
  File "readProton2.py", line 27, in <module>
    messenger.subscribe("%s%s" % ( base, entityName))
  File "/usr/lib/python2.7/dist-packages/proton/__init__.py", line 496, in subscribe
    self._check(pn_error_code(pn_messenger_error(self._mng)))
  File "/usr/lib/python2.7/dist-packages/proton/__init__.py", line 300, in _check
    raise exc("[%s]: %s" % (err, pn_error_text(pn_messenger_error(self._mng))))
proton.MessengerException: [-2]: CONNECTION ERROR (sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671): getaddrinfo(sas.root.HubName, SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671): Servname not supported for ai_socktype
The final error message suggests the connection string is being mangled during parsing (on a quick look, pni_default_rewrite in qpid-proton/messenger.c appears to split the connection string at the first occurrence of @, which could be the problem).
However, I'm new to AMQP and Proton, so before I raise a bug I want to check whether others have successfully used Proton with IoT Hub, or whether I've missed something.
You need to URL-encode the AMQP username and password. The username
iothubowner@sas.root.HubName
becomes
iothubowner%40sas.root.HubName
and the password
SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner
becomes
SharedAccessSignature%20sr%3DHubName.azure-devices.net%252fmessages%26sig%3DPERCENT_ENCODED_SIG%26se%3D1493454429%26st%3D1490861529%26skn%3Diothubowner
The server name should be HubName.azure-devices.net:5671, but as you can see in the error message, the unencoded @ and : characters make the parser treat sas.root.HubName as the server name and the rest of the token, SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671, as the port.
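For illustration, here is a minimal sketch of doing that encoding in Python 2 before building the address. The hub name, policy name, and token values are the placeholders from the question, not working credentials:
from urllib import quote
from proton import Messenger

# Placeholder values from the question; substitute your own.
username = "iothubowner@sas.root.HubName"
password = ("SharedAccessSignature sr=HubName.azure-devices.net%2fmessages"
            "&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner")

# Percent-encode the userinfo so '@', ':', ' ', '%', '&' and '=' survive URL parsing.
base = "amqps://%s:%s@HubName.azure-devices.net:5671" % (
    quote(username, safe=""), quote(password, safe=""))
entityName = "/messages/servicebound/feedback"

messenger = Messenger()
messenger.subscribe("%s%s" % (base, entityName))
messenger.start()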
I am connecting to my GCP BigTable instance using Python (the google-cloud-bigtable library) after setting up "GOOGLE_APPLICATION_CREDENTIALS" in my environment variables, and this works successfully.
However, my requirement is to pass the credentials at run time while creating the BigTable Client object, as shown below:
client = bigtable.Client(credentials='82309281204023049', project='xjkejfkx')
I have followed the GCP BigTable Client Documentation to connect to GCP BigTable, but I am getting this error:
Traceback (most recent call last):
  File "D:/testingonlyinformix/bigtable.py", line 14, in <module>
    client = bigtable.Client(credentials="9876543467898765", project="xjksjkdn", admin=True)
  File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\bigtable\client.py", line 196, in __init__
    project=project, credentials=credentials, client_options=client_options,
  File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\client\__init__.py", line 320, in __init__
    self, credentials=credentials, client_options=client_options, _http=_http
  File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\client\__init__.py", line 167, in __init__
    raise ValueError(_GOOGLE_AUTH_CREDENTIALS_HELP)
ValueError: This library only supports credentials from google-auth-library-python. See https://google-auth.readthedocs.io/en/latest/ for help on authentication with this library.
Can someone please suggest which fields/attributes the Client object expects at run time when making a connection to GCP BigTable?
Thanks
After two hours of searching I finally landed on these pages; please check them out in order:
BigTable Authentication
Using end-user authentication
OAuth Scopes for BigTable
from google_auth_oauthlib import flow

appflow = flow.InstalledAppFlow.from_client_secrets_file(
    "client_secrets.json",
    scopes=["https://www.googleapis.com/auth/bigtable.admin"])
appflow.run_console()
credentials = appflow.credentials
The credentials in the previous step will need to be provided to the BigTable client object:
client = bigtable.Client(credentials=credentials, project='xjkejfkx')
This solution worked for me; if anyone has any other suggestions, please do pitch in.
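One other option, in case end-user OAuth is not desirable: a service-account key file can also be loaded into a google-auth credentials object that bigtable.Client accepts. A minimal sketch, assuming a (hypothetical) key file named service_account.json and the placeholder project ID from the question:
from google.cloud import bigtable
from google.oauth2 import service_account

# Build google-auth credentials from a service-account key file
# ("service_account.json" is a placeholder file name).
credentials = service_account.Credentials.from_service_account_file(
    "service_account.json",
    scopes=["https://www.googleapis.com/auth/bigtable.admin"])

client = bigtable.Client(credentials=credentials, project="xjkejfkx", admin=True)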
I am running into a strange issue when using Google Translate API with a JSON authorization key. I can run the file without any issue directly in my editor of choice (VSCode). No errors.
But when I run the same file via terminal, the file executes up until after loading the Google Translate credentials from disk. The call then throws the error message below.
Since this only ever happens when running the file from the terminal, I find this bug hard to tackle.
The goal is then to have this file collect data, translate some of the fields using Google services, and store the data in a database.
Below is the error message:
Error: 503 DNS resolution failed
Traceback (most recent call last):
File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 690, in __call__
File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 592, in _end_unary_response_blocking
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed"
debug_error_string = "{"created":"@1584629013.091398712","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3934,"referenced_errors":[{"created":"@1584629013.091395769","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/resolving_lb_policy.cc","file_line":262,"referenced_errors":[{"created":"@1584629013.091394954","description":"DNS resolution failed","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":370,"grpc_status":14,"referenced_errors":[{"created":"@1584629013.091389655","description":"C-ares status is not ARES_SUCCESS: Could not contact DNS servers","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244,"referenced_errors":[{"created":"@1584629013.091380513","description":"C-ares status is not ARES_SUCCESS: Could not contact DNS servers","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244}]}]}]}]}"
>
This is the relevant part of my code:
Dedicated CRON file (I use this two-step setup in many other projects without any issue)
#! anaconda3/bin/python
import os
os.chdir(r'/home/MY-USER-NAME/path-to-project')
import code_file  # placeholder name; hyphens are not valid in Python module names
Code file (simple .py script)
[...]
from google.oauth2 import service_account
from google.cloud import translate
key_path = 'path-to-file/credentials.json'
credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=['https://www.googleapis.com/auth/cloud-platform'])
def translate_to_en(contents, credentials=None, verbose=False, use_pinyin=False):
    [...]
    client = translate.TranslationServiceClient(credentials=credentials)
    parent = client.location_path(credentials.project_id, 'global')
    [...]
    response = client.translate_text(
        parent=parent,
        contents=[contents],
        mime_type='text/plain',  # mime types: text/plain, text/html
        target_language_code='en-US')
    [...]

[...]
for c in trans_cols:
    df[f'{c}__trans'] = df[c].apply(lambda x: translate_to_en(contents=x, credentials=credentials, verbose=False))
[...]
If there is anyone with a good idea to solve this, your help is greatly appreciated.
It seems the issue is related to the gRPC bug reported on GitHub. For gRPC name resolution, you can follow this doc on GitHub.
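For example, one knob that documentation describes is the GRPC_DNS_RESOLVER environment variable, which switches gRPC from its bundled c-ares resolver to the system resolver. A minimal sketch of trying it from the cron-driven script, assuming the c-ares resolver is indeed the culprit:
import os

# Must be set before any gRPC channel is created, i.e. before the
# google.cloud client objects are constructed.
os.environ['GRPC_DNS_RESOLVER'] = 'native'

from google.oauth2 import service_account
from google.cloud import translate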
I'm trying to build a Slack bot with this tutorial. I managed with all the modules except the Slack one.
When I try to connect to the Slack RTM API, I get the error below. I'm using Python 3.7.5 and slackclient==1.3.1, and I'm using a proper app token. I've been stuck here for a long time; please help!
Failed RTM connect
Traceback (most recent call last):
  File "E:\Geeksters-Slack-Chatbot-master\venv\lib\site-packages\slackclient\client.py", line 140, in rtm_connect
    self.server.rtm_connect(use_rtm_start=with_team_state, **kwargs)
  File "E:\Geeksters-Slack-Chatbot-master\venv\lib\site-packages\slackclient\server.py", line 163, in rtm_connect
    raise SlackLoginError(reply=reply)
slackclient.server.SlackLoginError
Connection Failed
Check my code:
from slackclient import SlackClient

SLACK_API_TOKEN = "Using_proper_token_here"

client = SlackClient(SLACK_API_TOKEN)

def say_hello(data):
    if 'Hello' in data['text']:
        channel_id = data['channel']
        thread_ts = data['ts']
        user = data['user']
        client.api_call('chat.postMessage',
                        channel=channel_id,
                        text="Hi <@{}>!".format(user),
                        thread_ts=thread_ts
                        )

if client.rtm_connect():
    while client.server.connected is True:
        for data in client.rtm_read():
            if "type" in data and data["type"] == "message":
                say_hello(data)
else:
    print("Connection Failed")
There is an issue with RTM when using OAuth tokens with the version of slackclient you are using. I suggest reverting to a legacy token, which can be found here. For more on this issue, see the GitHub issue.
It might be related to the Slack app. RTM isn't supported for the new Slack app granular scopes (see Python client issue #584 and Node client issue #921). If you want to use RTM, you can create a classic Slack app with the OAuth scope bot. Note that a similar question has been asked before.
All,
I inadvertently uninstalled the EventHub SDK V1.3.3 and installed the latest V5.0.0. I had to change the code I had, but I never got it to work. Below is the code:
from azure.eventhub import EventHubProducerClient, EventData
connection_str = 'Endpoint=sb://eventhubns20.servicebus.windows.net/;SharedAccessKeyName=eventhub20_pol;SharedAccessKey=xxxxxxxxxx=;EntityPath=eventhub20'
eventhub_name = 'eventhub20'
client = EventHubProducerClient.from_connection_string(connection_str, eventhub_name=eventhub_name)
event_data_batch = client.create_batch()
Below is the error:
Traceback:
  File "C:\Users\grajee\AppData\Local\Programs\Python\Python38-32\lib\site-packages\uamqp\authentication\cbs_auth.py", line 69, in create_authenticator
    self._cbs_auth = c_uamqp.CBSTokenAuth(
  File ".\src/cbs.pyx", line 73, in uamqp.c_uamqp.CBSTokenAuth.__cinit__
ValueError: Unable to open CBS link.
The error is on this line - event_data_batch = client.create_batch().
Has anyone experienced this error? If it is too much work I might have to roll back to V1.3.3. The Python version is V3.8.1 on Windows.
Your connection string is incorrect. The one you are using is an Event Hub instance connection string, but you should use an Event Hub namespace connection string.
The correct Eventhub Namespace connection string looks like this: Endpoint=sb://xxx.servicebus.windows.net/;SharedAccessKeyName=policyName;SharedAccessKey=xxxx.
Please follow the steps below to get the Event Hub namespace connection string (not the Event Hub instance connection string).
Navigate to the Azure portal -> your Azure EventHub namespace -> Shared access policies -> click the policy you want to use -> then copy the connection string from Connection string–primary key.
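With the namespace-level connection string, the client creation from the question would then look roughly like this (the connection string and entity names are the placeholders from the question):
from azure.eventhub import EventHubProducerClient

# Namespace connection string: no EntityPath, so pass eventhub_name explicitly.
connection_str = 'Endpoint=sb://eventhubns20.servicebus.windows.net/;SharedAccessKeyName=eventhub20_pol;SharedAccessKey=xxxxxxxxxx='
eventhub_name = 'eventhub20'

client = EventHubProducerClient.from_connection_string(connection_str, eventhub_name=eventhub_name)
event_data_batch = client.create_batch()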
Your hostname doesn't resolve. Please make sure you are using an active Event Hubs namespace.
>ping eventhubns20.servicebus.windows.net
Ping request could not find host eventhubns20.servicebus.windows.net. Please check the name and try again.
OS and version used: Ubuntu 18.04
SDK version used: Release Dec. 13, 2018
Target: ESP32.
Description of the issue:
I am trying to connect the ESP32 to my Blob storage. I am getting an HTTP error 401 (unauthorized access).
I am using the example: iothub_client_sample_upload_to_blob_mb.
I tried connecting using just the Shared Access Key in my connection string, but this did not work (no connection). After that I generated a SAS token in Azure (Storage Accounts -> -> Shared Access Signature) and plugged that into my connection string.
My connection string looks like this:
static const char* connectionString = "HostName=<Host name>;DeviceId=<Device ID>;SharedAccessSignature=<inserted here without the "?" at the beginning>";
Q1: Why is there a "?" in front of the token? When I look at the connection string, at SharedAccessSignature=.. I don't see the "?".
I also set up the Endpoint in Azure under IoT Hub -> Upload files.
In the example, I am using the option SET_TRUSTED_CERT_IN_SAMPLES.
Q2: What does that mean? I am not so familiar with basic encryption and should probably read up on that.
Q3: Why am I getting an 401 error? What could be a possible solution?
Log:
Initializing SNTP
ESP platform sntp inited!
Time is not set yet. Connecting to WiFi and getting time over NTP. timeinfo.tm_year:70
Waiting for system time to be set... tm_year:0[times:1]
Starting the IoTHub client sample upload to blob with multiple blocks...
Info: Waiting for TLS connection
Info: Waiting for TLS connection
Info: Waiting for TLS connection
Info: Waiting for TLS connection
Error: Time:Thu Jan 17 22:06:00 2019 File:/home/julian/eclipse-workspace/chaze-esp32/components/esp-azure/azure-iot-sdk-c/iothub_client/src/iothub_client_ll_uploadtoblob.c Func:send_http_request Line:142 HTTP code was 401
Error: Time:Thu Jan 17 22:06:00 2019 File:/home/julian/eclipse-workspace/chaze-esp32/components/esp-azure/azure-iot-sdk-c/iothub_client/src/iothub_client_ll_uploadtoblob.c Func:IoTHubClient_LL_UploadToBlob_step1and2 Line:494 unable to HTTPAPIEX_ExecuteRequest
Error: Time:Thu Jan 17 22:06:00 2019 File:/home/julian/eclipse-workspace/chaze-esp32/components/esp-azure/azure-iot-sdk-c/iothub_client/src/iothub_client_ll_uploadtoblob.c Func:IoTHubClient_LL_UploadMultipleBlocksToBlob_Impl Line:768 error in IoTHubClient_LL_UploadToBlob_step1
Received unexpected result FILE_UPLOAD_ERROR
hello world failed to upload
Press any key to continue
Here is the link to the GitHub Repo.
The example can be found here.
I generated a SAS token in Azure (Storage Accounts -> -> Shared Access Signature) and plugged that into my connection string. My connection string looks like this:
static const char* connectionString = "HostName=<Host name>;DeviceId=<DeviceID>;SharedAccessSignature=<inserted here without the "?" at the beginning>";
Q1: Why is there a "?" in front of the token? When I look at the connection string, at SharedAccessSignature=.. I don't see the "?".
After registering a device on IoT Hub you will need to retrieve its connection string to use in this example. See here an example of how to register a device and retrieve its connection string on IoT Hub.
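For reference, a device connection string retrieved from the IoT Hub portal generally has this shape (all values are placeholders):
HostName=<YourIoTHubName>.azure-devices.net;DeviceId=<YourDeviceId>;SharedAccessKey=<device primary key>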
I also set up the Endpoint in Azure under IoT Hub -> Upload files. In the example, I am using the option SET_TRUSTED_CERT_IN_SAMPLES.
Q2: What does that mean? I am not so familiar with basic encryption and should probably read up on that.
That flag is used when compiling the SDK for your device. See the CMake file:
# Conditionally use the SDK trusted certs in the samples
if(${use_sample_trusted_cert})
    add_definitions(-DSET_TRUSTED_CERT_IN_SAMPLES)
    include_directories(${PROJECT_SOURCE_DIR}/certs)
    set(iothub_client_sample_upload_to_blob_mb_c_files
        ${iothub_client_sample_upload_to_blob_mb_c_files} ${PROJECT_SOURCE_DIR}/certs/certs.c)
endif()
Q3: Why am I getting an 401 error? What could be a possible solution?
Make sure you configure file upload on Azure IoT Hub correctly (https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-file-upload) and use the correct connection string in the sample. Also leverage the ESP8266 sample, which should have similar steps to the ESP32 configuration.
To get rid of the 401 error: use the MSFT Baltimore certificate in the code.
To get rid of the panic on the ESP: Look at this GitHub issue.