Not able to connect with slack_rtm - python-3.x

I'm trying to build a Slack bot following a tutorial. I managed to set up all the modules except the Slack connection: when I try to connect with slack_rtm I get the error below. I'm using Python 3.7.5 and slackclient==1.3.1, and I'm using the proper app token. I've been stuck here for a long time, please help!
Failed RTM connect
Traceback (most recent call last):
  File "E:\Geeksters-Slack-Chatbot-master\venv\lib\site-packages\slackclient\client.py", line 140, in rtm_connect
    self.server.rtm_connect(use_rtm_start=with_team_state, **kwargs)
  File "E:\Geeksters-Slack-Chatbot-master\venv\lib\site-packages\slackclient\server.py", line 163, in rtm_connect
    raise SlackLoginError(reply=reply)
slackclient.server.SlackLoginError
Connection Failed
Here is my code:
from slackclient import SlackClient

SLACK_API_TOKEN = "Using_proper_token_here"
client = SlackClient(SLACK_API_TOKEN)

def say_hello(data):
    if 'Hello' in data['text']:
        channel_id = data['channel']
        thread_ts = data['ts']
        user = data['user']
        client.api_call('chat.postMessage',
                        channel=channel_id,
                        text="Hi <@{}>!".format(user),
                        thread_ts=thread_ts
                        )

if client.rtm_connect():
    while client.server.connected is True:
        for data in client.rtm_read():
            if "type" in data and data["type"] == "message":
                say_hello(data)
else:
    print("Connection Failed")

There is an issue with RTM when using OAuth tokens with the version of slackclient you are using. I suggest reverting to the legacy token, which can be found here. For more on this issue, see the GitHub issue.

It might be related to the Slack app. RTM isn't supported for the new Slack app granular scopes (see python client issue #584 and node client issue #921). If you want to use RTM, you can create a classic Slack app with the OAuth scope bot. Note that a similar question has been asked before.
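Before opening the RTM socket, it can also help to call the Web API method auth.test, which reports whether the token itself is accepted and why not if it isn't. The sketch below is a hypothetical helper (the name check_token and the FakeClient stub are mine), assuming a client that exposes api_call() like slackclient 1.x:

```python
# Hypothetical helper: verify the token with auth.test before rtm_connect,
# so a rejected token yields a readable reason instead of a bare SlackLoginError.
def check_token(client):
    reply = client.api_call("auth.test")
    if reply.get("ok"):
        return "token accepted for user {}".format(reply.get("user"))
    return "token rejected: {}".format(reply.get("error", "unknown_error"))

# Stub standing in for a real SlackClient, only to show the reply shape.
class FakeClient:
    def __init__(self, reply):
        self._reply = reply
    def api_call(self, method):
        return self._reply

print(check_token(FakeClient({"ok": False, "error": "invalid_auth"})))
# token rejected: invalid_auth
```

With a real SlackClient instance in place of the stub, an "invalid_auth" or "missing_scope" reply here points to a token problem rather than an RTM problem.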

Related

How to connect to GCP BigTable using Python

I am connecting to my GCP BigTable instance using Python (the google-cloud-bigtable library) after setting up GOOGLE_APPLICATION_CREDENTIALS in my environment variables, and that works fine.
However, my requirement is to pass the credentials at run time while creating the BigTable Client object, as shown below:
client = bigtable.Client(credentials='82309281204023049', project='xjkejfkx')
I have followed the GCP BigTable Client Documentation to connect to GCP BigTable, but I am getting this error:
Traceback (most recent call last):
  File "D:/testingonlyinformix/bigtable.py", line 14, in <module>
    client = bigtable.Client(credentials="9876543467898765", project="xjksjkdn", admin=True)
  File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\bigtable\client.py", line 196, in __init__
    project=project, credentials=credentials, client_options=client_options,
  File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\client\__init__.py", line 320, in __init__
    self, credentials=credentials, client_options=client_options, _http=_http
  File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\client\__init__.py", line 167, in __init__
    raise ValueError(_GOOGLE_AUTH_CREDENTIALS_HELP)
ValueError: This library only supports credentials from google-auth-library-python. See https://google-auth.readthedocs.io/en/latest/ for help on authentication with this library.
Can someone please suggest what fields/attributes a Client object expects at run time when making a connection to GCP BigTable?
Thanks
After 2 hours of searching I finally landed on these pages; please check them out in order:
BigTable Authentication
Using end-user authentication
OAuth Scopes for BigTable
from google_auth_oauthlib import flow

appflow = flow.InstalledAppFlow.from_client_secrets_file(
    "client_secrets.json",
    scopes=["https://www.googleapis.com/auth/bigtable.admin"])
appflow.run_console()
credentials = appflow.credentials
The credentials in the previous step will need to be provided to the BigTable client object:
client = bigtable.Client(credentials=credentials, project='xjkejfkx')
This solution worked for me; if anyone has any other suggestions, please do pitch in.
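If end-user OAuth isn't required, a service-account key file can also be loaded at run time instead. The point of the ValueError is that Client wants a google-auth Credentials object, never a raw string. The helper below is a hypothetical sketch of mine making that distinction explicit (the name coerce_credentials and the file name sa-key.json are assumptions, not part of the library):

```python
# The library rejects plain strings: credentials must be a google-auth
# Credentials object. A small hypothetical helper making that explicit.
def coerce_credentials(value, loader):
    """Treat a string as a path to a key file and build credentials with
    `loader`; pass any non-string (assumed Credentials object) through."""
    if isinstance(value, str):
        return loader(value)
    return value

# Real usage would look like this (requires google-auth and
# google-cloud-bigtable installed):
#   from google.oauth2 import service_account
#   from google.cloud import bigtable
#   creds = coerce_credentials(
#       "sa-key.json",
#       service_account.Credentials.from_service_account_file)
#   client = bigtable.Client(credentials=creds, project="xjkejfkx", admin=True)

print(coerce_credentials("sa-key.json", lambda path: ("loaded", path)))
# ('loaded', 'sa-key.json')
```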

How to provide serializer, deserializer, and config arguments when instantiating Databricks in Azure Python SDK? [duplicate]

[Previously in this post I asked how to provision a Databricks service without any workspace. Now I'm asking how to provision a service with a workspace, as the first scenario seems unfeasible.]
As a cloud admin I'm asked to write a script using the Azure Python SDK which will provision a Databricks service for one of our big data dev teams.
I can't find much online about Databricks within the Azure Python SDK other than https://azuresdkdocs.blob.core.windows.net/$web/python/azure-mgmt-databricks/0.1.0/azure.mgmt.databricks.operations.html
and
https://azuresdkdocs.blob.core.windows.net/$web/python/azure-mgmt-databricks/0.1.0/azure.mgmt.databricks.html
These appear to offer some help provisioning a workspace, but I am not quite there yet.
What am I missing?
EDITS:
Thanks to @Laurent Mazuel and @Jim Xu for their help.
Here's the code I'm running now, and the error I'm receiving:
client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get("example_rg_name", "example_databricks_workspace_name")
WorkspacesOperations.create_or_update(
    workspace_obj,
    "example_rg_name",
    "example_databricks_workspace_name",
    custom_headers=None,
    raw=False,
    polling=True
)
error:
TypeError: create_or_update() missing 1 required positional argument: 'workspace_name'
I'm a bit puzzled by that error as I've provided the workspace name as the third parameter, and according to this documentation, that's just what this method requires.
I also tried the following code:
client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get("example_rg_name", "example_databricks_workspace_name")
client.workspaces.create_or_update(
    workspace_obj,
    "example_rg_name",
    "example_databricks_workspace_name"
)
Which results in:
Traceback (most recent call last):
  File "./build_azure_visibility_core.py", line 112, in <module>
    ca_databricks.create_or_update_databricks(SUB_PREFIX)
  File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/expd_az_databricks.py", line 34, in create_or_update_databricks
    self.databricks_workspace_name
  File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/azure/mgmt/databricks/operations/workspaces_operations.py", line 264, in create_or_update
    **operation_config
  File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/azure/mgmt/databricks/operations/workspaces_operations.py", line 210, in _create_or_update_initial
    body_content = self._serialize.body(parameters, 'Workspace')
  File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/msrest/serialization.py", line 589, in body
    raise ValidationError("required", "body", True)
msrest.exceptions.ValidationError: Parameter 'body' can not be None.
ERROR: Job failed: exit status 1
So line 589 in serialization.py raises the error, but I don't see what in my code is causing it. Thanks to all who have been generous enough to assist!
You need to create a Databricks client; workspaces are attached to it:
client = DatabricksClient(credentials, subscription_id)
workspace = client.workspaces.get(resource_group_name, workspace_name)
I don't think creating the service without a workspace is even possible: if you try to create a Databricks service on the portal, you will see that a workspace name is required as well.
So, using the SDK, I would look at the doc for client.workspaces.create_or_update.
(I work at MS in the SDK team)
With help from @Laurent Mazuel and support engineers at Microsoft, I have a solution:
managed_resource_group_ID = ("/subscriptions/" + sub_id + "/resourceGroups/" + managed_rg_name)

client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get(rg_name, databricks_workspace_name)
client.workspaces.create_or_update(
    {
        "managedResourceGroupId": managed_resource_group_ID,
        "sku": {"name": "premium"},
        "location": location
    },
    rg_name,
    databricks_workspace_name
).wait()
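The managedResourceGroupId in that parameters dict is just an Azure resource ID string for the resource group Databricks manages on your behalf. A small sketch of how it is assembled (the subscription ID and group name below are placeholders of mine):

```python
def managed_rg_id(sub_id, managed_rg_name):
    # Azure resource ID of the managed resource group the Databricks
    # workspace deploys its cluster resources into.
    return "/subscriptions/{}/resourceGroups/{}".format(sub_id, managed_rg_name)

print(managed_rg_id("00000000-1111-2222-3333-444444444444", "databricks-managed-rg"))
# /subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/databricks-managed-rg
```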

Error: 503 DNS resolution failed in Google Translate API but ONLY when executing the Python file via terminal

I am running into a strange issue when using the Google Translate API with a JSON authorization key. I can run the file without any issue directly in my editor of choice (VS Code), with no errors.
But when I run the same file via the terminal, the file executes up until after loading the Google Translate credentials from disk. The call then throws the error message below.
Since this only ever happens when running the file from the terminal, I find this bug hard to tackle.
The goal is to have this file collect data, translate some of the fields using Google services, and then store the data in a database.
Below is the error message:
Error: 503 DNS resolution failed
Traceback (most recent call last):
  File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
  File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 690, in __call__
  File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 592, in _end_unary_response_blocking
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "DNS resolution failed"
    debug_error_string = "{"created":"#1584629013.091398712","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3934,"referenced_errors":[{"created":"#1584629013.091395769","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/resolving_lb_policy.cc","file_line":262,"referenced_errors":[{"created":"#1584629013.091394954","description":"DNS resolution failed","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":370,"grpc_status":14,"referenced_errors":[{"created":"#1584629013.091389655","description":"C-ares status is not ARES_SUCCESS: Could not contact DNS servers","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244,"referenced_errors":[{"created":"#1584629013.091380513","description":"C-ares status is not ARES_SUCCESS: Could not contact DNS servers","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244}]}]}]}]}"
>
This is the relevant part of my code:
Dedicated CRON file (I use this two-step setup in many other projects without any issue)
#! anaconda3/bin/python
import os
os.chdir(r'/home/MY-USER-NAME/path-to-project')
import code-file
Code file (simple .py script)
[...]
from google.oauth2 import service_account
from google.cloud import translate

key_path = 'path-to-file/credentials.json'
credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=['https://www.googleapis.com/auth/cloud-platform'])

def translate_to_en(contents, credentials=None, verbose=False, use_pinyin=False):
    [...]
    client = translate.TranslationServiceClient(credentials=credentials)
    parent = client.location_path(credentials.project_id, 'global')
    [...]
    response = client.translate_text(
        parent=parent,
        contents=[contents],
        mime_type='text/plain',  # mime types: text/plain, text/html
        target_language_code='en-US')
    [...]

[...]
for c in trans_cols:
    df[f'{c}__trans'] = df[c].apply(lambda x: translate_to_en(contents=x, credentials=credentials, verbose=False))
[...]
If there is anyone with a good idea to solve this, your help is greatly appreciated.
It seems the issue is related to the gRPC bug reported on GitHub. For gRPC name resolution, you can follow this doc on GitHub.
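One workaround that doc discusses is switching gRPC from its bundled c-ares resolver to the operating system's resolver via the GRPC_DNS_RESOLVER environment variable. Whether this helps depends on the environment the cron job actually runs in; a minimal sketch:

```python
import os

# Ask gRPC to use the OS's native DNS resolver instead of the bundled
# c-ares resolver. Must be set before any gRPC channel is created, i.e.
# before constructing TranslationServiceClient.
os.environ["GRPC_DNS_RESOLVER"] = "native"

# ...then create the client as usual, e.g.:
# client = translate.TranslationServiceClient(credentials=credentials)

print(os.environ["GRPC_DNS_RESOLVER"])
# native
```

Since the failure only occurs under cron/terminal, it is also worth confirming that the cron environment has working DNS at all (resolv.conf, proxies) before blaming the client library.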

Python 3 LDAP3 - Connect to AD with user from a different domain

In Python 3, using the ldap3 module, is it possible to connect to an AD using a user from a different AD?
I tried to set the connection string to use the account in Domain2 in order to connect to Domain1, but it's not working: I'm getting an error stating that the credentials aren't valid, even though they work when I connect to Domain2 using that same account.
Traceback (most recent call last):
  File "mailer.py", line 47, in <module>
    conn = Connection(server, domain["connection_string"], domain["password"], auto_bind=True)
  File "/usr/lib/python3/dist-packages/ldap3/core/connection.py", line 321, in __init__
    self.do_auto_bind()
  File "/usr/lib/python3/dist-packages/ldap3/core/connection.py", line 349, in do_auto_bind
    raise LDAPBindError(self.last_error)
ldap3.core.exceptions.LDAPBindError: automatic bind not successful - invalidCredentials
Is this possible at all, or do ldap3 connections only work with users from the same domain?
Edit:
This is what I'm trying to use to connect when I get the error message above.
server = Server("domain1.example.com")
conn = Connection(server, "CN=BindAccount,OU=Users,DC=domain2,DC=example,DC=com", '#Password1', auto_bind=True)
With ldap3 you can bind to any LDAP server; there are no restrictions related to the Active Directory domain. Actually, LDAP is not aware of AD at all. Probably you are providing the wrong credentials.
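For AD binds it is often the user formatting, not the domain, that trips things up: besides a full DN, ldap3 also supports NTLM authentication, which expects the account as DOMAIN\username. A hypothetical sketch (server name, domain, and account below are placeholders of mine):

```python
def ntlm_user(netbios_domain, account):
    # NTLM binds in ldap3 take the account in "DOMAIN\\username" form.
    return "{}\\{}".format(netbios_domain, account)

# Real usage would look like this (requires the ldap3 package):
#   from ldap3 import Server, Connection, NTLM
#   server = Server("domain1.example.com")
#   conn = Connection(server, user=ntlm_user("DOMAIN2", "BindAccount"),
#                     password="#Password1", authentication=NTLM,
#                     auto_bind=True)

print(ntlm_user("DOMAIN2", "BindAccount"))
# DOMAIN2\BindAccount
```

A SIMPLE bind with the account's userPrincipalName (e.g. BindAccount@domain2.example.com) is another format worth trying if the DN form is rejected.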

Proton connection to azure iothub amqps endpoints

I'm trying to use Python 2.7 qpid-proton (0.17.0) on Debian 4.8.4-1 on Google Cloud to connect to the service AMQPS 1.0 endpoints on an Azure IoT hub.
I'm using a shared access policy and a SAS token pair (which successfully work in AMQP.Net Lite under C#) for SASL PLAIN. For Azure IoT Hub, you reference the shared access policy as the username in the format policyName@sas.root.IoTHubName
When I use these (obfuscated) credentials with qpid proton as follows
import sys
from proton import Messenger, Message

base = "amqps://iothubowner@sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671"
entityName = "/messages/servicebound/feedback"

messenger = Messenger()
messenger.subscribe("%s%s" % (base, entityName))
messenger.start()
I get the following error
(debug print of the connection string being passed) amqps://iothubowner@sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671
Traceback (most recent call last):
  File "readProton2.py", line 27, in <module>
    messenger.subscribe("%s%s" % ( base, entityName))
  File "/usr/lib/python2.7/dist-packages/proton/__init__.py", line 496, in subscribe
    self._check(pn_error_code(pn_messenger_error(self._mng)))
  File "/usr/lib/python2.7/dist-packages/proton/__init__.py", line 300, in _check
    raise exc("[%s]: %s" % (err, pn_error_text(pn_messenger_error(self._mng))))
proton.MessengerException: [-2]: CONNECTION ERROR (sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671): getaddrinfo(sas.root.HubName, SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671): Servname not supported for ai_socktype
The final error message suggests that parsing of the connection string is being mangled (on a quick look, pni_default_rewrite in qpid-proton/messenger.c appears to use the first occurrence of @ to split the connection string, which could be the problem).
However, I'm new to AMQP and Proton, so before I raise a bug I want to check whether others have successfully used Proton with IoT Hub, or whether I've missed something.
You need to URL-encode the AMQP username and password. The username

iothubowner@sas.root.HubName

becomes

iothubowner%40sas.root.HubName

and the password

SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner

becomes

SharedAccessSignature%20sr%3DHubName.azure-devices.net%252fmessages%26sig%3DPERCENT_ENCODED_SIG%26se%3D1493454429%26st%3D1490861529%26skn%3Diothubowner

The server name should be HubName.azure-devices.net:5671, but as you can see in the error message, the server name changed to sas.root.HubName, SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671
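That encoding can be produced with the standard library. A minimal sketch using the same placeholder credentials as in the question (shown with Python 3's urllib.parse; on Python 2.7 urllib.quote behaves the same way):

```python
from urllib.parse import quote

user = "iothubowner@sas.root.HubName"
password = ("SharedAccessSignature sr=HubName.azure-devices.net%2fmessages"
            "&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner")

# safe="" forces every reserved character (@ : / = & space %) to be escaped,
# so the credentials cannot be mistaken for URL structure when the
# connection string is split on @ and :.
enc_user = quote(user, safe="")
enc_password = quote(password, safe="")

print(enc_user)
# iothubowner%40sas.root.HubName

url = "amqps://{}:{}@HubName.azure-devices.net:5671".format(enc_user, enc_password)
```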
