Dataproc python API error permission denied - python-3.x

I am trying to create a Dataproc cluster via the Python API, authenticating with a JSON file containing service account credentials.
app = Flask(__name__)

# Explicitly use service account credentials by specifying the private key
# file.
credentials_gcp = service_account.Credentials.from_service_account_file('credentials.json')
client = dataproc_v1.ClusterControllerClient(credentials=credentials_gcp)

clustertest = {
    "project_id": "xxxx",
    "cluster_name": "testcluster",
    "config": {}
}

# launch cluster on Dataproc
@app.route('/cluster/<project_id>/<region>/<clustername>', methods=['POST'])
def cluster(project_id, region, clustername):
    response = client.create_cluster(project_id, 'regions/europe-west1-b',
                                     clustertest)
    response.add_done_callback(callback)
    result = response.metadata()
    return jsonify(result)
I get the following error
google.api_core.exceptions.PermissionDenied: 403 Permission denied on 'locations/regions/europe-west1' (or it may not exist)
I don't know whether I am missing the required permissions or whether there is an error in my syntax.

I managed to solve the issue by specifying the regional endpoint when instantiating the client:
your_region = "europe-west1"
client_cluster = dataproc_v1.ClusterControllerClient(credentials = credentials_gcp, client_options = {'api_endpoint': f'{your_region}-dataproc.googleapis.com:443'})

That error indicates your project cannot use that region, or that the region does not exist. However, I think the real issue is how you specify the Dataproc region: regions/europe-west1-b combines a resource prefix with a zone name. Instead, please try europe-west1.
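Putting both suggestions together, here is a minimal sketch (the project ID and cluster spec reuse the placeholders from the question, and it assumes a client-library version that accepts positional arguments, as above):

your_region = "europe-west1"
client = dataproc_v1.ClusterControllerClient(
    credentials=credentials_gcp,
    # use the regional endpoint that matches the region you create the cluster in
    client_options={'api_endpoint': f'{your_region}-dataproc.googleapis.com:443'}
)
# pass the bare region name, not 'regions/<zone>'
operation = client.create_cluster('xxxx', your_region, clustertest)
result = operation.result()  # blocks until cluster creation finishes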

Related

Cannot use API 'https://www.googleapis.com/drive/v3/files/<file_id>/export?mimeType=image%2Fjpeg' to get image from google drive

My problem is that I can't get an image from Google Drive when using 'https://www.googleapis.com/drive/v3/files/<file_id>/export?mimeType=image%2Fjpeg'.
Here is my code:
from pydrive import auth, drive
import requests
gauth = auth.GoogleAuth()
scope = ["https://www.googleapis.com/auth/drive"]
gauth.credentials = auth.ServiceAccountCredentials.from_json_keyfile_name('my_json.json', scope)
drv = drive.GoogleDrive(gauth)
access_token = drv.auth.credentials.get_access_token().access_token
url = 'https://www.googleapis.com/drive/v3/files/' + file_id + '/export?mimeType=image%2Fjpeg'
res = requests.get(url, headers={'Authorization': 'Bearer ' + access_token})
Response error (403):
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "dailyLimitExceededUnreg",
        "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup.",
        "extendedHelp": "https://code.google.com/apis/console"
      }
    ],
    "code": 403,
    "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup."
  }
}
I only made 10-20 requests, so this error response seems wrong.
How can I fix the above code to get a proper response?
Thanks in advance for your help!
I believe your goal is as follows.
From "My original file is *.jpg. I used mimeType=image/jpeg" and your script, you want to download a JPEG file from Google Drive using the service account.
Modification points:
When you want to download a file that is already a JPEG, I think the endpoint needs to be modified: files.export is for converting Google Workspace files, while binary files such as JPEGs are downloaded with alt=media.
I think that when no access token is sent to the endpoint, the error Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup. occurs, whereas an invalid access token produces an Invalid Credentials error instead. With the script you posted, I don't think the Daily Limit for Unauthenticated Use Exceeded error should occur even when the access token is invalid, so I'm worried that the script you are showing might be different from the script you tested. Please confirm this again.
To download a JPEG file from Google Drive using the service account, how about the following modification?
Modified script:
from pydrive import auth, drive
import requests
file_id = "###" # Please set the file ID of the JPEG file.
gauth = auth.GoogleAuth()
scope = ["https://www.googleapis.com/auth/drive"]
gauth.credentials = auth.ServiceAccountCredentials.from_json_keyfile_name('my_json.json', scope)
drv = drive.GoogleDrive(gauth)
access_token = drv.auth.credentials.get_access_token().access_token
url = "https://www.googleapis.com/drive/v3/files/" + file_id + "?alt=media"
res = requests.get(url, headers={"Authorization": "Bearer " + access_token})
Note:
In this case, it is assumed that your service account can access the JPEG file on Google Drive. Please be careful about this. If an error like File not found occurs, please check this again.
For example, when you want to download a JPEG file using pydrive, you can also use the following script.
from pydrive import auth, drive
import requests
file_id = "###" # Please set the file ID of the JPEG file.
gauth = auth.GoogleAuth()
scope = ["https://www.googleapis.com/auth/drive"]
gauth.credentials = auth.ServiceAccountCredentials.from_json_keyfile_name('my_json.json', scope)
drv = drive.GoogleDrive(gauth)
drv.CreateFile({"id": file_id}).GetContentFile("sample.jpg")
References:
Download files
PyDrive

SecretManagerServiceClient in Google Cloud Run and authentication via service account

I can create a SecretManagerServiceClient without using a key file successfully in Google Cloud Shell:
from google.cloud import secretmanager
from google.oauth2 import service_account
from google.auth.exceptions import DefaultCredentialsError
import logging
import sys
import os
def list_secrets(client, project_id):
    """
    Retrieve all secrets associated with a project
    :param project_id: the alpha-numeric name of the project
    :return: a generator of Secrets
    """
    try:
        secret_list = client.list_secrets(request={"parent": "projects/{}".format(project_id)})
    except Exception as e:
        sys.exit("Did not successfully retrieve secret list.")
    return secret_list

def set_env_secrets(client, secret_ids, label=None):
    """
    Sets secrets retrieved from Google Secret Manager in the runtime environment
    of the Python process
    :param secret_ids: a generator of Secrets
    :param label: Secrets with this label will be set in the environment
    """
    for s in secret_ids:
        # we only want secrets with matching labels (or all of them if label wasn't specified)
        if not label or label in s.labels:
            version = client.access_secret_version(request={'name': '{}/versions/latest'.format(s.name)})
            payload_str = version.payload.data.decode("UTF-8")
            os.environ[s.name.split('/')[-1]] = payload_str

if __name__ == "__main__":
    client = secretmanager.SecretManagerServiceClient()
    secrets = list_secrets(client, "myprojectid-123456")
    set_env_secrets(client, secrets)
    print(os.getenv("DATA_DB_HOST"))
However, when I use similar code as the basis for an entry point of a container in Google Cloud Run, the attempt to retrieve a client using the default service account's credentials fails with
File "entry_point.py", line 27, in get_client
client = secretmanager.SecretManagerServiceClient()
File "/usr/local/lib/python3.6/site-packages/google/cloud/secretmanager_v1/services/secret_manager_service/client.py", line 274, in __init__
client_info=client_info,
File "/usr/local/lib/python3.6/site-packages/google/cloud/secretmanager_v1/services/secret_manager_service/transports/grpc.py", line 162, in __init__
scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id
File "/usr/local/lib/python3.6/site-packages/google/auth/_default.py", line 340, in default
credentials, project_id = checker()
File "/usr/local/lib/python3.6/site-packages/google/auth/_default.py", line 186, in _get_explicit_environ_credentials
os.environ[environment_vars.CREDENTIALS]
File "/usr/local/lib/python3.6/site-packages/google/auth/_default.py", line 97, in load_credentials_from_file
"File {} was not found.".format(filename)
google.auth.exceptions.DefaultCredentialsError: File was not found.
The default service account has the Editor and Secret Manager Admin roles (thanks to @DanielOcando for his comment). Why is it that the ADC library, as described here, does not pick up the permissions of the default service account and use them to instantiate the client?
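The last frames of that traceback come from _get_explicit_environ_credentials, which only runs when GOOGLE_APPLICATION_CREDENTIALS is set, so one quick sanity check (purely a debugging sketch) is to print that variable from inside the container:

import os
# If this prints a path that does not exist in the container image, ADC fails here
# before it ever falls back to the Cloud Run service account's built-in credentials.
print(os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"))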
Update 1
@guillaumeblaquiere asked about dependencies. The container is built with Python 3.6.12 and the following libraries:
Django==2.1.15
django-admin-rangefilter==0.3.7
django-extensions==2.1.2
django-ipware==1.1.6
pytz==2017.3
psycopg2==2.7.3.2
waitress==1.4.1
geoip2==2.6
gunicorn==19.9.0
social-auth-app-django==3.1.0
semver==2.8.1
sentry-sdk==0.6.9
google-api-core==1.23.0
google-auth==1.23.0
google-cloud-secret-manager==2.0.0
I created a custom service account, added Editor and Secret Manager Admin roles to it, and then used the Console to deploy a new revision with that account, but the same error resulted.
Update 2
Thinking that matching the CPython version in Cloud Shell would do the trick, I rebuilt the image with Python 3.7. No luck.
Update 3
Taking a different tack, I added the Service Account Token Creator role to the project's default service account, then created a Terraform file configured for service account impersonation. I also ran gcloud auth application-default login in the shell prior to invoking Terraform.
provider "google" {
alias = "tokengen"
}
data "google_client_config" "default" {
provider = google.tokengen
}
data "google_service_account_access_token" "sa" {
provider = "google.tokengen"
target_service_account = "XXXXXXXXXXXX-compute#developer.gserviceaccount.com"
lifetime = "600s"
scopes = [
"https://www.googleapis.com/auth/cloud-platform",
]
}
provider "google" {
project = "myprojectid-123456"
region = "us-central1"
zone = "us-central1-f"
#impersonate_service_account = "XXXXXXXXXXXX-compute#developer.gserviceaccount.com
}
resource "google_cloud_run_service" "default" {
name = "myprojectid-123456"
location = "us-central1"
template {
spec {
containers {
image = "us.gcr.io/myprojectid-123456/testimage"
}
}
}
traffic {
percent = 100
latest_revision = true
}
}
This did work to create the service, but again, when the endpoint attempted to instantiate SecretManagerServiceClient, the same error resulted.

How to send a GraphQL query to AppSync from python?

How do we post a GraphQL request through AWS AppSync using boto?
Ultimately I'm trying to mimic a mobile app accessing our stackless/cloudformation stack on AWS, but with python. Not javascript or amplify.
The primary pain point is authentication; I've tried a dozen different ways already. This is the current one, which generates a "401" response with "UnauthorizedException" and "Permission denied", which is actually pretty good considering some of the other messages I've had. I'm now using the 'aws_requests_auth' library to do the signing part. I assume it authenticates me using the stored ~/.aws/credentials from my local environment, or does it?
I'm a little confused as to where and how cognito identities and pools will come into it. eg: say I wanted to mimic the sign-up sequence?
Anyways the code looks pretty straightforward; I just don't grok the authentication.
import requests
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

APPSYNC_API_KEY = 'inAppsyncSettings'
APPSYNC_API_ENDPOINT_URL = 'https://aaaaaaaaaaaavzbke.appsync-api.ap-southeast-2.amazonaws.com/graphql'

headers = {
    'Content-Type': "application/graphql",
    'x-api-key': APPSYNC_API_KEY,
    'cache-control': "no-cache",
}

query = """{
    GetUserSettingsByEmail(email: "john@washere"){
        items {name, identity_id, invite_code}
    }
}"""

def test_stuff():
    # Use the library to generate auth headers.
    auth = BotoAWSRequestsAuth(
        aws_host='aaaaaaaaaaaavzbke.appsync-api.ap-southeast-2.amazonaws.com',
        aws_region='ap-southeast-2',
        aws_service='appsync')
    # Create an http graphql request.
    response = requests.post(
        APPSYNC_API_ENDPOINT_URL,
        json={'query': query},
        auth=auth,
        headers=headers)
    print(response)
    # this didn't work:
    # response = requests.post(APPSYNC_API_ENDPOINT_URL, data=json.dumps({'query': query}), auth=auth, headers=headers)
Yields
{
  "errors" : [ {
    "errorType" : "UnauthorizedException",
    "message" : "Permission denied"
  } ]
}
It's quite simple--once you know. There are some things I didn't appreciate:
I've assumed IAM authentication (OpenID appended way below)
There are a number of ways for appsync to handle authentication. We're using IAM so that's what I need to deal with, yours might be different.
Boto doesn't come into it.
We want to issue a request like any regular punter, they don't use boto, and neither do we. Trawling the AWS boto docs was a waste of time.
Use the AWS4Auth library
We are going to send a regular http request to aws, so whilst we can use python requests they need to be authenticated--by attaching headers.
And, of course, AWS auth headers are special and different from all others.
You can try to work out how to do it
yourself, or you can go looking for someone else who has already done it: Aws_requests_auth, the one I started with, probably works just fine, but I have ended up with AWS4Auth. There are many others of dubious value; none endorsed or provided by Amazon (that I could find).
Specify appsync as the "service"
What service are we calling? I didn't find any examples of anyone doing this anywhere; all the examples are trivial S3 or EC2 or even EB, which left uncertainty. Should we be talking to the api-gateway service? What's more, you feed this detail into the AWS4Auth routine as authentication data. Obviously, in hindsight, the request is hitting AppSync, so it will be authenticated by AppSync, so specify "appsync" as the service when putting together the auth headers.
It comes together as:
import requests
from requests_aws4auth import AWS4Auth

# Use AWS4Auth to sign a requests session
session = requests.Session()
session.auth = AWS4Auth(
    # An AWS 'ACCESS KEY' associated with an IAM user.
    'AKxxxxxxxxxxxxxxx2A',
    # The 'secret' that goes with the above access key.
    'kwWxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxgEm',
    # The region you want to access.
    'ap-southeast-2',
    # The service you want to access.
    'appsync'
)

# As found in AWS Appsync under Settings for your endpoint.
APPSYNC_API_ENDPOINT_URL = ('https://nqxxxxxxxxxxxxxxxxxxxke'
                            '.appsync-api.ap-southeast-2.amazonaws.com/graphql')

# Use JSON format string for the query. It does not need reformatting.
query = """
    query foo {
        GetUserSettings (
            identity_id: "ap-southeast-2:8xxxxxxb-7xx4-4xx4-8xx0-exxxxxxx2"
        ){
            user_name, email, whatever
        }}"""

# Now we can simply post the request...
response = session.request(
    url=APPSYNC_API_ENDPOINT_URL,
    method='POST',
    json={'query': query}
)
print(response.text)
Which yields
# Your answer comes as a JSON formatted string in the text attribute, under data.
{"data":{"GetUserSettings":{"user_name":"0xxxxxxx3-9102-42f0-9874-1xxxxx7dxxx5"}}}
Getting credentials
To get rid of the hardcoded key/secret you can consume the local AWS ~/.aws/config and ~/.aws/credentials, and it is done this way...
# Use AWS4Auth to sign a requests session
session = requests.Session()
credentials = boto3.session.Session().get_credentials()
session.auth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    boto3.session.Session().region_name,
    'appsync',
    session_token=credentials.token
)
...<as above>
This does seem to respect the environment variable AWS_PROFILE for assuming different roles.
Note that STS.get_session_token is not the way to do it, as it may try to assume a role from a role, depending on where it keyword-matched the AWS_PROFILE value. Profile names in the credentials file will work because the keys are right there, but profile names found in the config file do not work, as those already assume a role.
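If you need a specific profile rather than relying on AWS_PROFILE, a small variation is to name the profile explicitly when building the boto3 session ('my-profile' is just a placeholder):

import boto3
# 'my-profile' must exist in ~/.aws/credentials
credentials = boto3.session.Session(profile_name='my-profile').get_credentials()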
OpenID
In this scenario, all the complexity is transferred to the conversation with the openid connect provider. The hard stuff is all the auth hoops you jump through to get an access token, and thence using the refresh token to keep it alive. That is where all the real work lies.
Once you finally have an access token, assuming you have configured the "OpenID Connect" Authorization Mode in appsync, then you can, very simply, drop the access token into the header:
response = requests.post(
    url="https://nc3xxxxxxxxxx123456zwjka.appsync-api.ap-southeast-2.amazonaws.com/graphql",
    headers={"Authorization": ACCESS_TOKEN},
    json={'query': "query foo{GetStuff{cat, dog, tree}}"}
)
You can set up an API key on the AppSync end and use the code below. This works for my case.
import requests

# establish a session with requests session
session = requests.Session()

# As found in AWS Appsync under Settings for your endpoint.
APPSYNC_API_ENDPOINT_URL = 'https://vxxxxxxxxxxxxxxxxxxy.appsync-api.ap-southeast-2.amazonaws.com/graphql'

# setup the query string (optional)
query = """query listItemsQuery {listItemsQuery {items {correlation_id, id, etc}}}"""

# Now we can simply post the request...
response = session.request(
    url=APPSYNC_API_ENDPOINT_URL,
    method='POST',
    headers={'x-api-key': '<APIKEYFOUNDINAPPSYNCSETTINGS>'},
    json={'query': query}
)
print(response.json()['data'])
Building off Joseph Warda's answer you can use the class below to send AppSync commands.
# fileName: AppSyncLibrary
import requests

class AppSync():
    def __init__(self, data):
        endpoint = data["endpoint"]
        self.APPSYNC_API_ENDPOINT_URL = endpoint
        self.api_key = data["api_key"]
        self.session = requests.Session()

    def graphql_operation(self, query, input_params):
        response = self.session.request(
            url=self.APPSYNC_API_ENDPOINT_URL,
            method='POST',
            headers={'x-api-key': self.api_key},
            json={'query': query, 'variables': {"input": input_params}}
        )
        return response.json()
For example in another file within the same directory:
from AppSyncLibrary import AppSync

APPSYNC_API_ENDPOINT_URL = {YOUR_APPSYNC_API_ENDPOINT}
APPSYNC_API_KEY = {YOUR_API_KEY}

init_params = {"endpoint": APPSYNC_API_ENDPOINT_URL, "api_key": APPSYNC_API_KEY}
app_sync = AppSync(init_params)

mutation = """mutation CreatePost($input: CreatePostInput!) {
    createPost(input: $input) {
        id
        content
    }
}
"""

input_params = {
    "content": "My first post"
}

response = app_sync.graphql_operation(mutation, input_params)
print(response)
Note: This requires you to activate API access for your AppSync API. Check this AWS post for more details.
graphql-python/gql supports AWS AppSync since version 3.0.0rc0.
It supports queries, mutation and even subscriptions on the realtime endpoint.
The documentation is available here
Here is an example of a mutation using the API Key authentication:
import asyncio
import os
import sys
from urllib.parse import urlparse

from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport
from gql.transport.appsync_auth import AppSyncApiKeyAuthentication

# Uncomment the following lines to enable debug output
# import logging
# logging.basicConfig(level=logging.DEBUG)


async def main():

    # Should look like:
    # https://XXXXXXXXXXXXXXXXXXXXXXXXXX.appsync-api.REGION.amazonaws.com/graphql
    url = os.environ.get("AWS_GRAPHQL_API_ENDPOINT")
    api_key = os.environ.get("AWS_GRAPHQL_API_KEY")

    if url is None or api_key is None:
        print("Missing environment variables")
        sys.exit()

    # Extract host from url
    host = str(urlparse(url).netloc)

    auth = AppSyncApiKeyAuthentication(host=host, api_key=api_key)

    transport = AIOHTTPTransport(url=url, auth=auth)

    async with Client(
        transport=transport, fetch_schema_from_transport=False,
    ) as session:

        query = gql(
            """
mutation createMessage($message: String!) {
  createMessage(input: {message: $message}) {
    id
    message
    createdAt
  }
}"""
        )

        variable_values = {"message": "Hello world!"}

        result = await session.execute(query, variable_values=variable_values)
        print(result)

asyncio.run(main())
I am unable to add a comment due to low rep, but I just want to add that I tried the accepted answer and it didn't work. I was getting an error saying my session_token is invalid, probably because I was using AWS Lambda.
I got it to work pretty much exactly as above, but by adding the session token to the session_token parameter of the AWS4Auth object. Here's the full piece:
import requests
import os
from requests_aws4auth import AWS4Auth

def AppsyncHandler(event, context):
    # These are env vars that are always present in an AWS Lambda function
    # If not using AWS Lambda, you'll need to add them manually to your env.
    access_id = os.environ.get("AWS_ACCESS_KEY_ID")
    secret_key = os.environ.get("AWS_SECRET_ACCESS_KEY")
    session_token = os.environ.get("AWS_SESSION_TOKEN")
    region = os.environ.get("AWS_REGION")

    # Your AppSync Endpoint
    api_endpoint = os.environ.get("AppsyncConnectionString")
    resource = "appsync"

    session = requests.Session()
    session.auth = AWS4Auth(access_id,
                            secret_key,
                            region,
                            resource,
                            session_token=session_token)
The rest is the same.
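For completeness, a rough sketch of "the rest", mirroring the earlier answer (the query string is only a placeholder):

    response = session.request(
        url=api_endpoint,
        method='POST',
        json={'query': "query foo{GetStuff{cat, dog, tree}}"}
    )
    return response.json()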
Hope this Helps Everyone
import requests
import json
import os
from dotenv import load_dotenv

load_dotenv(".env")

class AppSync(object):
    def __init__(self, data):
        endpoint = data["endpoint"]
        self.APPSYNC_API_ENDPOINT_URL = endpoint
        self.api_key = data["api_key"]
        self.session = requests.Session()

    def graphql_operation(self, query, input_params):
        response = self.session.request(
            url=self.APPSYNC_API_ENDPOINT_URL,
            method='POST',
            headers={'x-api-key': self.api_key},
            json={'query': query, 'variables': {"input": input_params}}
        )
        return response.json()

def main():
    APPSYNC_API_ENDPOINT_URL = os.getenv("APPSYNC_API_ENDPOINT_URL")
    APPSYNC_API_KEY = os.getenv("APPSYNC_API_KEY")
    init_params = {"endpoint": APPSYNC_API_ENDPOINT_URL, "api_key": APPSYNC_API_KEY}
    app_sync = AppSync(init_params)
    mutation = """
    query MyQuery {
        getAccountId(id: "5ca4bbc7a2dd94ee58162393") {
            _id
            account_id
            limit
            products
        }
    }
    """
    input_params = {}
    response = app_sync.graphql_operation(mutation, input_params)
    print(json.dumps(response, indent=3))

main()

How to connect to redshift jdbc url using python?

I have a database url that looks like this:
jdbc:redshift://<database_name>.company.com:5439/<database_name>?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory
How do I connect to this jdbc url using python? What is a jdbc url anyway? Can I connect to this using:
import psycopg2

con = psycopg2.connect(
    dbname='jdbc:redshift://<database_name>.<company>.com:5439/<database_name>?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory',
    host='host',
    port='5439',
    user='user',
    password='pwd'
)
I am using a better way of connecting to Redshift via Python.
Please follow these steps -
Create an IAM policy for getting credentials - DOCUMENTATION
Where to attach this policy?
a. If you run the Python code on EC2 or any other service -> attach the IAM policy to an IAM role and attach that role to the particular service.
b. Local machine -> attach the policy to the AWS user you have configured on your local system (via the aws configure CLI command, by providing an Access Key and Secret Access Key).
Let's use a config.ini (as a central place to store any static values) -
My Redshift JDBC URL is like -
jdbc:redshift://dev.<some_value_like_company>.us-west-2.redshift.amazonaws.com:5439/dev_database
My Config.ini File is like -
[Redshift]
port = 5439
username = dev_user
database_name = dev_database
cluster_id = dev
url = dev.<some_value_like_company>.<region>.redshift.amazonaws.com
region = us-west-2
Create a connection -
# All imports
import logging
import psycopg2
import boto3
import configparser

def db_connection():
    logger = logging.getLogger(__name__)

    parser = configparser.ConfigParser()
    parser.read('config.ini')

    RS_PORT = parser.get('Redshift', 'port')
    RS_USER = parser.get('Redshift', 'username')
    DATABASE = parser.get('Redshift', 'database_name')
    CLUSTER_ID = parser.get('Redshift', 'cluster_id')
    RS_HOST = parser.get('Redshift', 'url')
    REGION_NAME = parser.get('Redshift', 'region')

    client = boto3.client('redshift', region_name=REGION_NAME)
    cluster_creds = client.get_cluster_credentials(DbUser=RS_USER,
                                                   DbName=DATABASE,
                                                   ClusterIdentifier=CLUSTER_ID,
                                                   AutoCreate=False)

    try:
        conn = psycopg2.connect(
            host=RS_HOST,
            port=RS_PORT,
            user=cluster_creds['DbUser'],
            password=cluster_creds['DbPassword'],
            database=DATABASE
        )
        print("pass")
        print(conn)
        return conn
    except psycopg2.Error:
        logger.exception('Failed to open database connection.')
        print("Failed")

db_connection()
Import and Call the function where-ever necessary.
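For example, a minimal usage sketch (the query is only illustrative):

conn = db_connection()
cur = conn.cursor()
cur.execute("SELECT current_date;")  # any query against your cluster
print(cur.fetchone())
cur.close()
conn.close()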
I would prefer the above instead of hard-coding the username and password for any user, because -
it is simply not good practice, and
if you use a public repo (GitHub), it makes the username and password public, which might be a nightmare if someone uses them for the wrong reasons.
Using IAM is free and secure :p.
Do let me know if this helps. If you still need to connect to Redshift the way you originally wanted, I will post an answer later after trying it out myself.
Sample IAM Policy for Get_credentials -
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials",
                "redshift:CreateClusterUser",
                "redshift:JoinGroup"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:<account_number>:dbname:dev/dev_database",
                "arn:aws:redshift:us-west-2:<account_number>:dbuser:dev/dev",
                "arn:aws:redshift:us-west-2:<account_number>:dbuser:dev/dev_read"
            ]
        }
    ]
}

HashiCorp Vault Python hvac read

I would like to read my secret from a pod with python.
I try with this:
import os
import hvac
f = open('/var/run/secrets/kubernetes.io/serviceaccount/token')
jwt = f.read()
client = hvac.Client()
client = hvac.Client(url='https://vault.mydomain.internal')
client.auth_kubernetes("default", jwt)
print(client.read('secret/pippo/pluto'))
I'm sure that secret/pippo/pluto exists.
I'm sure that I'm properly authenticated
But I always receive "None" in response to my print.
Where can I look to solve this?
Thanks a lot
If you read a KV value from Vault, you need both the mount point and the path.
Example:
vault_client.secrets.kv.v1.read_secret(
path=path,
mount_point=mount_point
)
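For example, a minimal sketch against the path from the question, assuming vault_client is an already-authenticated hvac.Client and the KV v1 engine is mounted at secret/:

print(
    vault_client.secrets.kv.v1.read_secret(
        path='pippo/pluto',
        mount_point='secret'
    )
)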
I've tried the method you provided in my k8s Python 3 pod, and I can get the Vault secret data successfully.
You need to specify the correct Vault token parameter in hvac.Client and drop the client.auth_kubernetes call.
Give it a shot, and remember that your code should run in a k8s Python container instead of on your host machine.
import hvac
f = open('/var/run/secrets/kubernetes.io/serviceaccount/token')
jwt = f.read()
print("jwt:", jwt)
f.close()
client = hvac.Client(url='http://vault:8200', token='your_vault_token')
# res = client.auth_kubernetes("envelope-creator", jwt)
res = client.is_authenticated()
print("res:", res)
hvac_secrets_data_k8s = client.read('secret/data/compliance')
print("hvac_secrets_data_k8s:", hvac_secrets_data_k8s)
Below is the result:
92:qfedu shawn$ docker exec -it 202a119367a4 bash
airflow#airflow-858d8c6fcf-bgmwn:~$ ls
airflow-webserver.pid airflow.cfg config dags logs test_valut_in_webserver.py unittests.cfg webserver_config.py
airflow#airflow-858d8c6fcf-bgmwn:~$ python test_valut_in_webserver.py
jwt: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia
res: True
hvac_secrets_data_k8s: {'request_id': '80caf0cb-8c12-12d2-6517-530eecebd1e0', 'lease_id': '', 'renewable': False, 'lease_duration': 0, 'data': {'data': {'s3AccessKey': 'XXXX', 's3AccessKeyId': 'XXXX', 'sftpPassword': 'XXXX', 'sftpUser': 'XXXX'}, 'metadata': {'created_time': '2020-02-07T14:04:26.7986128Z', 'deletion_time': '', 'destroyed': False, 'version': 4}}, 'wrap_info': None, 'warnings': None, 'auth': None}
As @shawn mentioned above, the commands below work for me as well:
import hvac

vault_url = 'https://<vault url>:8200/'
vault_token = '<vault token>'
ca_path = '/run/secrets/kubernetes.io/serviceaccount/ca.crt'
secret_path = '<secret path in vault>'

client = hvac.Client(url=vault_url, token=vault_token, verify=ca_path)
client.is_authenticated()

read_secret_result = client.read(secret_path)
print(read_secret_result)
print(read_secret_result['data']['username'])
print(read_secret_result['data']['password'])
Note: ca_path is where the pod stores k8s CA and usually it should be found under "/run/secrets/kubernetes.io/serviceaccount/ca.crt"
I found it easier to use hvac for authentication, and then use the API directly.
You can skip this step and use a root/dev token for testing.
import hvac as h
import getpass

client = h.Client(url='https://<vault url>:8200/')
username = input("username")
password = getpass.getpass()
# Log in with the userpass auth method (assumed here); without a login call the client has no token
client.auth.userpass.login(username=username, password=password)
print(client.token)
del username, password
Get the list of mounts
import requests,json
vault_url = 'https://<vault url>:8200/'
vault_token = '<vault token>'
headers = {
'X-Vault-Token': vault_token
}
response = requests.get(vault_url+'v1/sys/mounts', headers=headers)
json.loads(response.text).keys()  # The ones ending with / are your mount names
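For example, to keep only the mount names:

mounts = [k for k in json.loads(response.text) if k.endswith('/')]
print(mounts)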
Then get the password (you have to create one first)
mount = '<mount name>'
secret = '<secret name>'
response = requests.get(vault_url+'v1/'+mount+'/'+secret, headers=headers)
response.text
For the username/password user to get access to a password created by root, you have to add the secret's path in the JSON under Policies, i.e. grant read on that path in the policy attached to the user.
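As a rough sketch of what that can look like through hvac (the policy name and path are placeholders, and it needs a token that is allowed to manage policies, e.g. the root/dev token mentioned above):

policy_name = "read-my-secret"  # placeholder policy name
policy_body = '''
path "<mount name>/<secret name>" {
  capabilities = ["read"]
}
'''
client.sys.create_or_update_policy(name=policy_name, policy=policy_body)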
