Disable Authentication in OpenStack Swift

I want everyone (even unauthenticated users) to be able to store/read objects from my test Swift server. Is there a way to disable authentication entirely? I'm authorized with the following user (proxy-server.conf):
[filter:tempauth]
use = egg:swift#tempauth
user_test_tester = testing .admin
but I also want to allow non-users to make requests to my server.

It depends on what kind of requests you want to make and what auth middleware you are using. If you are using keystone, you are limited to container-level permissions. You can set a container's permissions to be public:
curl -X POST -i \
-H "X-Auth-Token: abcdeftoken" \
-H "X-Container-Read: .r:*" \
-H "X-Container-Write: .r:*" \
http://swift.example.com/v1/AUTH_testing/container
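With those headers set, anonymous reads no longer need a token, e.g. (hypothetical host, container, and object names):
curl -i http://swift.example.com/v1/AUTH_testing/container/object
Note that .r:* by itself only covers object reads; listing the container additionally requires .rlistings in X-Container-Read.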

You can configure your proxy-server pipeline with no authentication middleware at all, with tempauth, or with keystoneauth. With the first option you don't need to provide any credentials; with the second you have user, group, and password set in your configuration; and the last one contacts the Keystone server for identification.
Example:
[pipeline:main]
### no pass
# pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
### tempauth
# pipeline = catch_errors gatekeeper healthcheck proxy-logging cache listing_formats container_sync bulk tempurl ratelimit tempauth copy container-quotas account-quotas slo dlo versioned_writes symlink proxy-logging proxy-server
### keystoneauth
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user
# https://docs.openstack.org/keystonemiddleware/latest/middlewarearchitecture.html
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS # change this
delay_auth_decision = True
log_level = debug
service_token_roles_required = True
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service
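With tempauth in the pipeline, clients get a token from the /auth/v1.0 endpoint before making requests. A minimal sketch, assuming the proxy listens on swift.example.com:8080 and the user_test_tester line above (account test, user tester, key testing):
curl -i http://swift.example.com:8080/auth/v1.0 \
-H "X-Auth-User: test:tester" \
-H "X-Auth-Key: testing"
The X-Auth-Token and X-Storage-Url response headers are then used for subsequent requests.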

Query on Microsoft Graph API Python

I want to pull emails from a client's inbox using the Graph API in Python.
I started with a tutorial and successfully experimented with my personal inbox.
My problem:
Every time, my code generates an authorization URL. I have to open it (using the webbrowser library), sign in with my credentials, and copy-paste the authorization code to generate an access token.
That is a lot of manual work every time.
Question:
Is there a way to automate the whole process of token generation, such that my client only shares his application id and client secret, and email is pulled without his sign-in credentials?
My code is attached below:
import msal
import webbrowser
import requests
import pandas as pd

APPLICATION_ID = "app id"
CLIENT_SECRET = "client secret"
authority_url = 'https://login.microsoftonline.com/common/'
base_url = 'https://graph.microsoft.com/v1.0/'
endpoint_url = base_url + 'me'
SCOPES = ['Mail.Read', 'Mail.ReadBasic']

client_instance = msal.ConfidentialClientApplication(
    client_id=APPLICATION_ID,
    client_credential=CLIENT_SECRET,
    authority=authority_url)

# Browse the authorization request URL to retrieve the authorization code.
authorization_request_url = client_instance.get_authorization_request_url(SCOPES)
webbrowser.open(authorization_request_url, new=True)

# Manually paste the authorization code from the redirect.
authorization_code = 'authorization code from authorization URL'
token_response = client_instance.acquire_token_by_authorization_code(
    code=authorization_code, scopes=SCOPES)
access_token_id = token_response['access_token']
# The rest of the code hits the endpoint and retrieves the messages.
Any help or code suggestions would be much appreciated.
Thanks in advance.
If you would like to authenticate only with a clientId and clientSecret, without any user context, you should leverage a client credentials flow.
You can check this official MS sample that uses the same MSAL library to handle the client credentials flow. It is quite straightforward, as you can see below:
import sys  # For simplicity, we'll read config file from 1st CLI param sys.argv[1]
import json
import logging

import requests
import msal

# Optional logging
# logging.basicConfig(level=logging.DEBUG)

config = json.load(open(sys.argv[1]))

# Create a preferably long-lived app instance which maintains a token cache.
app = msal.ConfidentialClientApplication(
    config["client_id"], authority=config["authority"],
    client_credential=config["secret"],
    # token_cache=...  # Default cache is in memory only.
    # You can learn how to use SerializableTokenCache from
    # https://msal-python.rtfd.io/en/latest/#msal.SerializableTokenCache
    )

# The pattern to acquire a token looks like this.
result = None

# Firstly, looks up a token from cache.
# Since we are looking for a token for the current app, NOT for an end user,
# notice we give the account parameter as None.
result = app.acquire_token_silent(config["scope"], account=None)

if not result:
    logging.info("No suitable token exists in cache. Let's get a new one from AAD.")
    result = app.acquire_token_for_client(scopes=config["scope"])

if "access_token" in result:
    # Calling graph using the access token
    graph_data = requests.get(  # Use token to call downstream service
        config["endpoint"],
        headers={'Authorization': 'Bearer ' + result['access_token']}, ).json()
    print("Graph API call result: ")
    print(json.dumps(graph_data, indent=2))
else:
    print(result.get("error"))
    print(result.get("error_description"))
    print(result.get("correlation_id"))  # You may need this when reporting a bug
The sample retrieves a list of users from MS Graph, but it should be just a matter of adapting it to retrieve the emails of a specific user by changing the "endpoint" parameter in the parameters.json file to:
"endpoint": "https://graph.microsoft.com/v1.0/users/{id | userPrincipalName}/messages"
You can find more information regarding the MS Graph request to list emails here.
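For reference, a sketch of the parameters.json the sample reads (all values are placeholders; with the client credentials flow the scope must be the /.default scope):
{
    "authority": "https://login.microsoftonline.com/<tenant_id>",
    "client_id": "<application_id>",
    "secret": "<client_secret>",
    "scope": ["https://graph.microsoft.com/.default"],
    "endpoint": "https://graph.microsoft.com/v1.0/users/<userPrincipalName>/messages"
}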
Register your app, get your tenant id from the Azure portal, and disable MFA on the account.
import requests

application_id = "xxxxxxxxxx"
client_secret = "xxxxxxxxxxxxx"
authority_url = 'xxxxxxxxxxxxxxxxxxxx'
base_url = "https://graph.microsoft.com/v1.0/"
endpoint = base_url + "me"
scopes = ["User.Read"]
tenant_id = "xxxxxxxxxxxx"

token_url = 'https://login.microsoftonline.com/' + tenant_id + '/oauth2/token'
token_data = {
    'grant_type': 'password',
    'client_id': application_id,
    'client_secret': client_secret,
    'resource': 'https://graph.microsoft.com',
    'scope': 'https://graph.microsoft.com',
    'username': 'xxxxxxxxxxxxxxxx',  # account with no MFA
    'password': 'xxxxxxxxxxxxxxxx',
}
token_r = requests.post(token_url, data=token_data)
token = token_r.json().get('access_token')
print(token)
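With the token in hand, calling Graph is a plain HTTP request. A minimal sketch continuing from the variables above:
headers = {'Authorization': 'Bearer ' + token}
me = requests.get(endpoint, headers=headers).json()
print(me)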

Adding a Proxy to Twilio

I need to use Twilio in some code and I would like to know where or how, in the example below, I add the proxy settings.
from twilio.rest import Client

# Your Account SID from twilio.com/console
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# Your Auth Token from twilio.com/console
auth_token = "your_auth_token"

client = Client(account_sid, auth_token)

message = client.messages.create(
    to="+15558675309",
    from_="+15017250604",
    body="Hello from Python!")

print(message.sid)
Please advise.
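One option, as a sketch rather than an official Twilio recipe: the Python helper library makes its HTTP calls with requests, which honors the standard proxy environment variables, so you can set them before creating the client. The proxy URL below is a placeholder.
import os

# hypothetical proxy; replace with your own
os.environ['HTTP_PROXY'] = 'http://proxy.example.com:3128'
os.environ['HTTPS_PROXY'] = 'http://proxy.example.com:3128'

from twilio.rest import Client

client = Client(account_sid, auth_token)  # account_sid/auth_token as in the question
Recent versions of the helper library also accept a custom http_client (twilio.http.http_client.TwilioHttpClient) that can be configured with a proxy; check the signature of your installed version.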

How to efficiently download whole directories from an azure data lake via the Python API?

I have some data (several GB) in an Azure data lake, spread across multiple files of 2 MB each. I would like to write a download script to fetch the full directory. So far I have been trying an approach similar to the tutorial:
import os
from azure.storage.filedatalake import DataLakeServiceClient

service_client = DataLakeServiceClient.from_connection_string(azure_connection_string)
file_system_client = service_client.get_file_system_client(file_system="my-file-system")
parent_directory_client = file_system_client.get_directory_client("my-directory")

for file_path in azure_all_files:
    file_client = parent_directory_client.get_file_client(file_path)
    download = file_client.download_file()
    downloaded_bytes = download.readall()
    target_path = os.path.join(self.local_data_directory, file_path)
    with open(target_path, 'wb') as file:
        file.write(downloaded_bytes)
But this is extremely slow, about 1 minute per file, i.e. 30 seconds per MB (no, it is not my internet connection). What am I missing here? Is the Python API just not the appropriate tool? Are some of the calls above redundant? Could it be parallelized?
I think you can use the ADLDownloader class in the azure.datalake.store package to increase the download rate. It launches multiple threads for efficient downloading, with a chunk assigned to each. The remote path can be a single file, a directory of files, or a glob pattern. The example is here.
The pseudo-code is as follows:
from azure.datalake.store import core, lib, multithread

tenant_id = '<your Azure AD tenant id>'
username = '<your username in AAD>'
password = '<your password>'
store_name = '<your ADL name>'

token = lib.auth(tenant_id, username, password)
# Or you can register an app to get client_id and client_secret to get a token.
# If you want to use this code in your application, I recommend authenticating by client:
# client_id = '<client id of your app registered in Azure AD, like xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx>'
# client_secret = '<your client secret>'
# token = lib.auth(tenant_id, client_id=client_id, client_secret=client_secret)

adl = core.AzureDLFileSystem(token, store_name=store_name)
multithread.ADLDownloader(adl, file_path, target_path, nthreads=None, chunksize=268435456,
                          buffersize=4194304, blocksize=4194304, client=None, run=True,
                          overwrite=False, verbose=False, progress_callback=None, timeout=0)
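Note that azure.datalake.store targets Data Lake Storage Gen1. If you want to stay on the Gen2 DataLakeServiceClient from the question, a minimal sketch (reusing the question's parent_directory_client, azure_all_files, and local data directory, which are assumptions here) that parallelizes the per-file downloads with a thread pool:
import os
from concurrent.futures import ThreadPoolExecutor

def download_one(file_path):
    # each worker downloads one remote file and writes it locally
    file_client = parent_directory_client.get_file_client(file_path)
    target_path = os.path.join(local_data_directory, file_path)
    with open(target_path, 'wb') as f:
        f.write(file_client.download_file().readall())

# 16 workers is just a starting point; tune to your bandwidth and file count.
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(download_one, azure_all_files))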

Automatically or manually refreshing access token with flask_client on Google App Engine

I am successfully able to authorize my application with a 3rd-party OAuth2 provider (Xero), but have been unable to refresh the token, either automatically or manually.
The documentation suggests Authlib can do this automatically. I have tried two different approaches from the Authlib documentation: the Flask client docs give an example of "Auto Update Token via Signal", and the web client docs register an "update_token" function.
Using either approach, no attempt is ever made to refresh the token; the request is passed to Xero with the expired token, I receive an error, and the only way to continue is to manually re-authorize the application with Xero.
Here is the relevant code for the "update_token" method from the web client docs:
# this never ends up getting called
def save_xero_token(name, token, refresh_token=None, access_token=None, tenant_id=None):
    logging.info('Called save xero token.')
    # removed irrelevant code that stores the token in NDB here

cache = Cache()
oauth = OAuth(app, cache=cache)

oauth.register(
    name='xero',
    client_id=Meta.xero_consumer_client_id,
    client_secret=Meta.xero_consumer_secret,
    access_token_url='https://identity.xero.com/connect/token',
    authorize_url='https://login.xero.com/identity/connect/authorize',
    fetch_token=fetch_xero_token,
    update_token=save_xero_token,
    client_kwargs={'scope': ' '.join(Meta.xero_oauth_scopes)},
)

xero_tenant_id = 'abcd-123-placeholder-for-stackoverflow'
url = 'https://api.xero.com/api.xro/2.0/Invoices/ABCD-123-PLACEHOLDER-FOR-STACKOVERFLOW'
headers = {'Xero-tenant-id': xero_tenant_id, 'Accept': 'application/json'}
response = oauth.xero.get(url, headers=headers)  # works fine until the token is expired
I am storing my token in the following NDB model:
class OAuth2Token(ndb.Model):
    name = ndb.StringProperty()
    token_type = ndb.StringProperty()
    access_token = ndb.StringProperty()
    refresh_token = ndb.StringProperty()
    expires_at = ndb.IntegerProperty()
    xero_tenant_id = ndb.StringProperty()

    def to_token(self):
        return dict(
            access_token=self.access_token,
            token_type=self.token_type,
            refresh_token=self.refresh_token,
            expires_at=self.expires_at
        )
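For context, a sketch of the fetch_xero_token referenced in oauth.register (the question doesn't show it, so this is an assumption), returning the stored token dict via to_token():
def fetch_xero_token(name):
    # hypothetical loader matching the NDB model above
    item = OAuth2Token.query(OAuth2Token.name == name).get()
    return item.to_token() if item else None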
For completeness, here's how I store the initial response from Xero (which works fine):
@app.route('/XeroOAuthRedirect')
def xeroOAuthLanding():
    token = oauth.xero.authorize_access_token()
    connections_response = oauth.xero.get('https://api.xero.com/connections')
    connections = connections_response.json()
    for tenant in connections:
        print('saving first org, this app currently supports one xero org only.')
        save_xero_token('xero', token, tenant_id=tenant['tenantId'])
    return 'Authorized application with Xero'
How can I get automatic refreshing to work, and how can I manually trigger a refresh request when using the flask client, in the event automatic refreshing fails?
I believe I've found the problem here, and the root of it was the passing of a Cache (for temporary credential storage) when initializing OAuth:
cache = Cache()
oauth = OAuth(app,cache=cache)
When the cache is passed, it appears to preempt the update_token (and possibly fetch_token) parameters.
It should be simply:
oauth = OAuth(app)

oauth.register(
    name='xero',
    client_id=Meta.xero_consumer_client_id,
    client_secret=Meta.xero_consumer_secret,
    access_token_url='https://identity.xero.com/connect/token',
    authorize_url='https://login.xero.com/identity/connect/authorize',
    fetch_token=fetch_xero_token,
    update_token=save_xero_token,
    client_kwargs={'scope': ' '.join(Meta.xero_oauth_scopes)},
)
In addition, the parameters of my "save_xero_token" function needed to be adjusted to match the documentation; however, this was not relevant to the original problem the question was addressing.
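For reference, a sketch of save_xero_token with the signature documented for update_token, adapted to the NDB model from the question (the tenant_id handling is left out; treat this as an assumption, not the author's exact code):
def save_xero_token(name, token, refresh_token=None, access_token=None):
    if refresh_token:
        item = OAuth2Token.query(OAuth2Token.name == name,
                                 OAuth2Token.refresh_token == refresh_token).get()
    elif access_token:
        item = OAuth2Token.query(OAuth2Token.name == name,
                                 OAuth2Token.access_token == access_token).get()
    else:
        return
    # overwrite the stored token with the refreshed one
    item.access_token = token['access_token']
    item.refresh_token = token.get('refresh_token')
    item.expires_at = token['expires_at']
    item.put()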

HashiCorp Vault Python hvac read

I would like to read my secret from a pod with Python.
I tried this:
import hvac

f = open('/var/run/secrets/kubernetes.io/serviceaccount/token')
jwt = f.read()

client = hvac.Client(url='https://vault.mydomain.internal')
client.auth_kubernetes("default", jwt)
print(client.read('secret/pippo/pluto'))
I'm sure that secret/pippo/pluto exists.
I'm sure that I'm properly authenticated.
But my print always returns "None".
Where can I look to solve this?
Thanks a lot
If you read a KV value from Vault, you need the mount point and the path.
Example:
vault_client.secrets.kv.v1.read_secret(
    path=path,
    mount_point=mount_point
)
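Applied to the question's secret/pippo/pluto, a sketch (assuming the default "secret" mount and the jwt from the question): mount_point is "secret" and path is "pippo/pluto". If the mount is KV version 2, the raw path also needs a data/ segment, which the versioned helper adds for you:
import hvac

client = hvac.Client(url='https://vault.mydomain.internal')
client.auth_kubernetes("default", jwt)

# KV v1
print(client.secrets.kv.v1.read_secret(path='pippo/pluto', mount_point='secret'))
# KV v2 (equivalent to client.read('secret/data/pippo/pluto'))
print(client.secrets.kv.v2.read_secret_version(path='pippo/pluto', mount_point='secret'))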
I've tried the method you provided in my k8s Python 3 pod, and I can get the Vault secret data successfully.
You need to specify the correct vault token parameter in your hvac.Client and disable the client.auth_kubernetes method.
Give it a shot, and remember your code should run in a k8s Python container rather than on your host machine.
import hvac
f = open('/var/run/secrets/kubernetes.io/serviceaccount/token')
jwt = f.read()
print("jwt:", jwt)
f.close()
client = hvac.Client(url='http://vault:8200', token='your_vault_token')
# res = client.auth_kubernetes("envelope-creator", jwt)
res = client.is_authenticated()
print("res:", res)
hvac_secrets_data_k8s = client.read('secret/data/compliance')
print("hvac_secrets_data_k8s:", hvac_secrets_data_k8s)
Below is the result:
92:qfedu shawn$ docker exec -it 202a119367a4 bash
airflow#airflow-858d8c6fcf-bgmwn:~$ ls
airflow-webserver.pid airflow.cfg config dags logs test_valut_in_webserver.py unittests.cfg webserver_config.py
airflow#airflow-858d8c6fcf-bgmwn:~$ python test_valut_in_webserver.py
jwt: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia
res: True
hvac_secrets_data_k8s: {'request_id': '80caf0cb-8c12-12d2-6517-530eecebd1e0', 'lease_id': '', 'renewable': False, 'lease_duration': 0, 'data': {'data': {'s3AccessKey': 'XXXX', 's3AccessKeyId': 'XXXX', 'sftpPassword': 'XXXX', 'sftpUser': 'XXXX'}, 'metadata': {'created_time': '2020-02-07T14:04:26.7986128Z', 'deletion_time': '', 'destroyed': False, 'version': 4}}, 'wrap_info': None, 'warnings': None, 'auth': None}
As @shawn mentioned above, the commands below work for me as well:
import hvac
vault_url = 'https://<vault url>:8200/'
vault_token = '<vault token>'
ca_path = '/run/secrets/kubernetes.io/serviceaccount/ca.crt'
secret_path = '<secret path in vault>'
client = hvac.Client(url=vault_url, token=vault_token, verify=ca_path)
client.is_authenticated()
read_secret_result = client.read(secret_path)
print(read_secret_result)
print(read_secret_result['data']['username'])
print(read_secret_result['data']['password'])
Note: ca_path is where the pod stores k8s CA and usually it should be found under "/run/secrets/kubernetes.io/serviceaccount/ca.crt"
I found it easier to use hvac for authentication, and then use the API directly.
You can skip this step and use a root/dev token for testing:
import hvac as h
import getpass

client = h.Client(url='https://<vault url>:8200/')
username = input("username")
password = getpass.getpass()
# log in to obtain a token (userpass is an assumption here; use your auth method)
client.auth.userpass.login(username=username, password=password)
print(client.token)
del username, password
Get the list of mounts:
import requests, json

vault_url = 'https://<vault url>:8200/'
vault_token = '<vault token>'
headers = {
    'X-Vault-Token': vault_token
}
response = requests.get(vault_url + 'v1/sys/mounts', headers=headers)
json.loads(response.text).keys()  # The ones ending with / are your mount names
Then get the password (you have to create one first):
mount = '<mount name>'
secret = '<secret name>'
response = requests.get(vault_url+'v1/'+mount+'/'+secret, headers=headers)
response.text
For the username/password login to get access to a password created by root, you have to add the path in the JSON under Policies.
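For example, a minimal policy sketch in HCL (assuming the KV mount is named "secret"; attach the policy to the userpass user):
path "secret/*" {
  capabilities = ["read", "list"]
}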
