python3 exchangelib credentials - python-3.x

I'm trying to log into an Exchange server with exchangelib.
When I try to run the script it gives me this error:
File "/usr/local/lib/python3.5/dist-packages/exchangelib/protocol.py", line 61, in __init__
    assert isinstance(credentials, Credentials)
AssertionError
From what I can understand, it says my credentials variable is not of the right type. I have tried both with and without autodiscover enabled; I get the same error.
Here is the relevant code:
credents = Credentials(username='domain\\aaa.fh', password='password'),
config = Configuration(server='domain.aaa.no', credentials=credents)
account = Account(
    primary_smtp_address='fh#domain.no',
    config=config,
    autodiscover=True,
    access_type=DELEGATE)

Try this way:
config = Configuration(
    server='mail.example.com',
    credentials=Credentials(username='Domain\\username', password='password'),
    auth_type=NTLM
)
account = Account(primary_smtp_address='Emailaddress#domain.com', config=config,
                  access_type=DELEGATE)

This is due to the dreaded Python trailing comma in the first line, which "helpfully" turns your credents variable into a tuple of Credentials.
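The tuple trap is easy to reproduce in plain Python; here is a minimal sketch using a stand-in value instead of a real Credentials object:

```python
# A trailing comma after an assignment silently wraps the value in a 1-tuple.
creds = ("user", "password")   # stands in for a Credentials object
wrapped = creds,               # note the trailing comma

print(type(wrapped))           # <class 'tuple'> -- a tuple containing creds
print(wrapped[0] is creds)     # True
```

Any isinstance(wrapped, Credentials) check then fails, which is exactly the AssertionError seen above.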

Related

Google cloud function (python) does not deploy - Function failed on loading user code

I'm calling a simple Python function in Google Cloud but cannot get it to deploy. It shows this error:
"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."
The logs don't seem to show much that would indicate an error in the code. I followed this guide: https://blog.thereportapi.com/automate-a-daily-etl-of-currency-rates-into-bigquery/
with the only differences being the environment variables and the endpoint I'm using.
The code is below; it is just a GET request followed by a push of the data into a table.
import requests
import json
import time
import os

from google.cloud import bigquery

# Set any default values for these variables if they are not found in the environment variables
PROJECT_ID = os.environ.get("PROJECT_ID", "xxxxxxxxxxxxxx")
EXCHANGERATESAPI_KEY = os.environ.get("EXCHANGERATESAPI_KEY", "xxxxxxxxxxxxxxx")
REGIONAL_ENDPOINT = os.environ.get("REGIONAL_ENDPOINT", "europe-west1")
DATASET_ID = os.environ.get("DATASET_ID", "currency_rates")
TABLE_NAME = os.environ.get("TABLE_NAME", "currency_rates")
BASE_CURRENCY = os.environ.get("BASE_CURRENCY", "SEK")
SYMBOLS = os.environ.get("SYMBOLS", "NOK,EUR,USD,GBP")

def hello_world(request):
    latest_response = get_latest_currency_rates()
    write_to_bq(latest_response)
    return "Success"

def get_latest_currency_rates():
    PARAMS = {'access_key': EXCHANGERATESAPI_KEY, 'symbols': SYMBOLS, 'base': BASE_CURRENCY}
    response = requests.get("https://api.exchangeratesapi.io/v1/latest", params=PARAMS)
    print(response.json())
    return response.json()

def write_to_bq(response):
    # Instantiates a client
    bigquery_client = bigquery.Client(project=PROJECT_ID)

    # Prepares a reference to the dataset and table
    dataset_ref = bigquery_client.dataset(DATASET_ID)
    table_ref = dataset_ref.table(TABLE_NAME)
    table = bigquery_client.get_table(table_ref)

    # Get the current timestamp so we know how fresh the data is
    timestamp = time.time()
    jsondump = json.dumps(response)  # Returns a string

    # Ensure the response is a string, not JSON
    rows_to_insert = [{"timestamp": timestamp, "data": jsondump}]
    errors = bigquery_client.insert_rows(table, rows_to_insert)  # API request
    print(errors)
    assert errors == []
I tried just the part that does the GET request in an offline editor and I can confirm the response works fine. I suspect it might have something to do with permissions or the way the script tries to access the database.

400 Caller's project doesn't match parent project

I have this block of code that translates text from one language to another using the Cloud Translation API. The problem is that this code always throws the error "Caller's project doesn't match parent project". What could be the problem?
translation_separator = "translated_text: "
language_separator = "detected_language_code: "
translate_client = translate.TranslationServiceClient()
# parent = translate_client.location_path(
#     self.translate_project_id, self.translate_location
# )
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = (
    os.getcwd() + "/translator_credentials.json"
)
# Text can also be a sequence of strings, in which case this method
# will return a sequence of results for each text.
try:
    result = str(
        translate_client.translate_text(
            request={
                "contents": [text],
                "target_language_code": self.target_language_code,
                "parent": f'projects/{self.translate_project_id}/'
                          f'locations/{self.translate_location}',
                "model": self.translate_model
            }
        )
    )
    print(result)
except Exception as e:
    print("error here>>>>>", e)
Your issue seems to be related to the authentication method you are using in your application; please follow the guide on authentication methods for the Translate API. If you are trying to pass the credentials in code, you can explicitly point to your service account file with:
def explicit():
    from google.cloud import storage

    # Explicitly use service account credentials by specifying the private
    # key file.
    storage_client = storage.Client.from_service_account_json(
        'service_account.json')
Also, there is a codelab for getting started with the Translation API in Python; it is a great step-by-step guide for running the Translate API with Python.
If the issue persists, you can open an issue in Google's Public Issue Tracker for support.
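One detail worth checking in the question's code: GOOGLE_APPLICATION_CREDENTIALS is set after the TranslationServiceClient has already been constructed, and Application Default Credentials are only read at construction time. A minimal sketch of the corrected ordering (the file name is the one from the question; the client lines are commented out because they need the google-cloud-translate package):

```python
import os

# Point Application Default Credentials at the service account file
# BEFORE any Google client is constructed -- clients read this variable
# only when they are created.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(
    os.getcwd(), "translator_credentials.json")

# Only now create the client (requires the google-cloud-translate package):
# from google.cloud import translate
# translate_client = translate.TranslationServiceClient()

print(os.environ["GOOGLE_APPLICATION_CREDENTIALS"])
```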

Azure AD Python - Unexpected polling state invalid_client error

store_token = context.acquire_token_with_device_code(resource_uri, code, client_id)
  File "/Users/jyao/Desktop/azureblobtest/lib/python3.7/site-packages/adal/authentication_context.py", line 273, in acquire_token_with_device_code
    return self._acquire_token(token_func)
  File "/Users/jyao/Desktop/azureblobtest/lib/python3.7/site-packages/adal/authentication_context.py", line 109, in _acquire_token
    return token_func(self)
  File "/Users/jyao/Desktop/azureblobtest/lib/python3.7/site-packages/adal/authentication_context.py", line 266, in token_func
    token = token_request.get_token_with_device_code(user_code_info)
  File "/Users/jyao/Desktop/azureblobtest/lib/python3.7/site-packages/adal/token_request.py", line 398, in get_token_with_device_code
    token = client.get_token_with_polling(oauth_parameters, interval, expires_in)
  File "/Users/jyao/Desktop/azureblobtest/lib/python3.7/site-packages/adal/oauth2_client.py", line 345, in get_token_with_polling
    wire_response)
adal.adal_error.AdalError: Unexpected polling state invalid_client
How can I get rid of this error after entering the device code and signing in successfully?
Steps:
1. From a Python interactive prompt, run this code [1] (all modules are already loaded), where:
authority_url = 'https://login.microsoftonline.com/my_tenant_id'
resource_uri = "https://storage.azure.com/"

context = adal.AuthenticationContext(authority_url)
code = context.acquire_user_code(resource_uri, client_id)
print(code['message'])
store_token = context.acquire_token_with_device_code(resource_uri, code, client_id)
credentials = AADTokenCredentials(store_token, client_id)
2. Open the URL https://microsoft.com/devicelogin in a browser.
3. Enter the code E8B2DVT67.
4. Confirm the application's name; it is correct.
5. Authenticate using the user's credentials.
6. Get a message in the browser saying "You have signed in to the TEST-APP application on your device. You may now close this window."
7. Get the error shown in my previous message in the Python interactive prompt.
I use a native app, and I updated the manifest to set the "allowPublicClient": true permission.
Your code works fine.
If we set allowPublicClient: false, we encounter this error. After updating allowPublicClient to true, it works. Note: there may be some delay before the configuration takes effect.

How to save the output of Azure-cli commands in a variable

When using azure-cli in Python 3.5 and calling the commands from a script, I have no control over the output in the console.
When a command is executed it prints the result to the console, but I'm struggling to just take the result and put it in a variable to analyze it.
from azure.cli.core import get_default_cli

class AzureCmd():
    def __init__(self, username, password):
        self.username = username
        self.password = password

    def login(self, tenant):
        login_successfull = get_default_cli().invoke(['login',
                                                      '--tenant', tenant,
                                                      '--username', self.username,
                                                      '--password', self.password]) == 0
        return login_successfull

    def list_vm(self, tenant):
        list_vm = get_default_cli().invoke(['vm', 'list', '--output', 'json'])
        print(list_vm)

tenant = 'mytenant.onmicrosoft.com'
cmd = AzureCmd('login', 'mypassword')
cmd.login(tenant)
cmd.list_vm(tenant)
Here is my script attempt.
What I want to achieve: no output when cmd.login(tenant) is executed.
Instead of getting 0 (success) or 1 (failure) in my variables login_successfull and list_vm, I want to save the output of get_default_cli().invoke() in them.
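As a general Python pattern (not specific to azure-cli), anything a call prints through sys.stdout can be captured with contextlib.redirect_stdout; whether it catches all of the CLI's output depends on whether invoke() writes through sys.stdout. A minimal sketch with a stand-in for get_default_cli().invoke():

```python
import io
from contextlib import redirect_stdout

def call_and_capture(func, *args, **kwargs):
    """Run func, returning (return_value, captured_stdout)."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        ret = func(*args, **kwargs)
    return ret, buf.getvalue()

# Stand-in for get_default_cli().invoke(...), which prints and returns an exit code:
def fake_invoke(args):
    print('{"result": "ok"}')
    return 0

code, output = call_and_capture(fake_invoke, ['login'])
print(code)    # 0
print(output)  # the JSON the call would have printed to the console
```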
I ran into the same problem and found a solution. Many people offer the standard solution that normally works in most cases, but they didn't verify it works for this scenario, and it turns out az cli is an edge case.
I think the issue has something to do with the fact that az cli is itself based on Python.
Win10CommandPrompt:\> where az
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin\az.cmd
If you look in that file you'll see something like this and discover that Azure CLI is just python:
python.exe -IBm azure.cli
So to do what you want to do, try this (it works for me):
import subprocess

out = subprocess.run(['python', '-IBm', 'azure.cli', '-h'], stdout=subprocess.PIPE).stdout.decode('utf-8')
print(out)
# this is equivalent to 'az -h'
The above syntax won't work unless every single arg is passed as a separate string in a list. I found a syntax I like a lot more after reading how to do multiple args with Python Popen:
import subprocess
azcmd = "az ad sp create-for-rbac --name " + SPName + " --scopes /subscriptions/" + subscriptionid
out = subprocess.run(azcmd, shell=True, stdout=subprocess.PIPE).stdout.decode('utf-8')
print(out)
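If you prefer to keep the list form without shell=True, shlex.split can turn the command string into the argument list subprocess expects (the service principal name and subscription below are placeholder values):

```python
import shlex
import subprocess

# shlex.split handles quoting and whitespace, producing the argv list
# subprocess.run wants -- no shell needed:
azcmd = "az ad sp create-for-rbac --name my-sp --scopes /subscriptions/0000"
args = shlex.split(azcmd)
print(args[:4])  # ['az', 'ad', 'sp', 'create-for-rbac']

# out = subprocess.run(args, stdout=subprocess.PIPE).stdout.decode('utf-8')
```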
I faced the same problem while trying to save the log of an Azure Container Instance. None of the above solutions worked exactly as they are. After debugging the Azure CLI Python code
(file: \Python39\Lib\site-packages\azure\cli\command_modules\container\custom.py, function container_logs()), I see that the container logs are just printed to the console but not returned. If you want to save the logs to a variable, add the return line (not exactly a great solution, but it works for now). Hoping MS Azure updates their CLI in upcoming versions.
def container_logs(cmd, resource_group_name, name, container_name=None, follow=False):
    """Tail a container instance log. """
    container_client = cf_container(cmd.cli_ctx)
    container_group_client = cf_container_groups(cmd.cli_ctx)
    container_group = container_group_client.get(resource_group_name, name)

    # If container name is not present, use the first container.
    if container_name is None:
        container_name = container_group.containers[0].name

    if not follow:
        log = container_client.list_logs(resource_group_name, name, container_name)
        print(log.content)
        # Return the log
        return log.content
    else:
        _start_streaming(
            terminate_condition=_is_container_terminated,
            terminate_condition_args=(container_group_client, resource_group_name, name, container_name),
            shupdown_grace_period=5,
            stream_target=_stream_logs,
            stream_args=(container_client, resource_group_name, name, container_name, container_group.restart_policy))
With this modification, along with the solutions given above (using get_default_cli), we can store the log of the Azure Container Instance in a variable.
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    res = cli.invoke(args)
    if cli.result.result:
        jsondata = cli.result.result
        return jsondata
    elif cli.result.error:
        print(cli.result.error)
I think you can use subprocess and call the az cli to get the output, instead of using get_default_cli.
import subprocess
import json

process = subprocess.Popen(['az', 'network', 'ddos-protection', 'list'], stdout=subprocess.PIPE)
out, err = process.communicate()
d = json.loads(out)
print(d)
Well, we can execute Azure CLI commands in Python as shown below.
Here, the res variable usually stores an integer exit code, so we cannot access the JSON response through it. To store the response in a variable, we need to use cli.result.result.
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    res = cli.invoke(args)
    if cli.result.result:
        jsondata = cli.result.result
        return jsondata
    elif cli.result.error:
        print(cli.result.error)

Google datalab fails to query and create table

I'm trying to query a large amount of data in BigQuery and then upload the table to the desired dataset (datasetxxx) using datalab in PyCharm as the IDE. Below is my code:
query = bq.Query(sql=myQuery)
job = query.execute_async(
    output_options=bq.QueryOutput.table('datasetxxx._tmp_table', mode='overwrite',
                                        allow_large_results=True))
job.result()
However, I ended up with "No project ID found". The project ID is imported through a .json file as os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = path to the file. I also tried to explicitly declare the project ID as follows.
self.project_id = 'xxxxx'
query = bq.Query(sql=myQuery, context=self.project_id)
This time I ended up with the following error:
TypeError: __init__() got an unexpected keyword argument 'context'.
I'm also using an up-to-date version. Thanks for your help.
Re: The project ID is specified in the "FROM" clause, and I'm also able to see the path to the .json file using the "echo" command. Below is the stack trace:
Traceback (most recent call last):
  File "xxx/Queries.py", line 265, in <module>
    brwdata._extract_gbq()
  File "xxx/Queries.py", line 206, in _extract_gbq
    , allow_large_results=True))
  File "xxx/.local/lib/python3.5/site-packages/google/datalab/bigquery/_query.py", line 260, in execute_async
    table_name = _utils.parse_table_name(table_name, api.project_id)
  File "xxx/.local/lib/python3.5/site-packages/google/datalab/bigquery/_api.py", line 47, in project_id
    return self._context.project_id
  File "xxx/.local/lib/python3.5/site-packages/google/datalab/_context.py", line 62, in project_id
    raise Exception('No project ID found. Perhaps you should set one by running'
Exception: No project ID found. Perhaps you should set one by running "%datalab project set -p <project-id>" in a code cell.
So, if you run "echo $GOOGLE_APPLICATION_CREDENTIALS" you can see the path of your JSON file.
Could you make sure the "FROM" clause in the query specifies the right external project?
Also, if your QueryOutput destination is in the very same project, you are doing it right:
table('dataset.table'...)
But in the other case you should specify:
table('project.dataset.table'...)
I don't know exactly how you are doing the query, but the error might be there.
I reproduced this and it worked fine to me:
import google.datalab
from google.datalab import bigquery as bq
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "./bqauth.json"

myQuery = "SELECT * FROM `MY_EXAMPLE_PROJECT.MY_EXAMPLE_DATASET.MY_EXAMPLE_TABLE` LIMIT 1000"
query = bq.Query(sql=myQuery)
job = query.execute_async(
    output_options=bq.QueryOutput.table('MY_EXAMPLE_PROJECT.MY_EXAMPLE_DATASET2.MY_EXAMPLE_TABLE2',
                                        mode='overwrite', allow_large_results=True))
job.result()
Here's the updated way, if someone is in need:
You can now use the Context in the latest version as:
from google.datalab import bigquery as bq
from google.datalab import Context as ctx
ctx.project_id = 'PROJECT_ID'
df = bq.Query(query).execute()
...
