What's the best practice to grant application code (during local development) access to Google Cloud resources,
without generating and downloading a Service Account JSON key file (and setting the GOOGLE_APPLICATION_CREDENTIALS environment variable)
without giving my Google user account direct access to the resources
I have the following two options I could potentially use:
gcloud auth application-default login together with the --impersonate-service-account=SERVICE_ACCOUNT_EMAILS flag
OR, use gcloud auth application-default login normally --> authenticate with MY Google user account to generate Application Default Credentials --> and update my application code as follows, as in this reference code:
from google.auth import default, impersonated_credentials
from google.cloud import storage

# default() returns a (credentials, project) tuple
source_credentials, project_id = default()
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal='SERVICE_ACCOUNT_EMAIL',
    target_scopes=['https://www.googleapis.com/auth/cloud-platform'])
client = storage.Client(credentials=target_credentials)
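For comparison, with the first option the application code stays generic; a minimal sketch, assuming ADC was set up with the --impersonate-service-account flag:
import google.auth
from google.cloud import storage

# ADC transparently returns the impersonated credentials created by
# `gcloud auth application-default login --impersonate-service-account=...`
credentials, project_id = google.auth.default()
client = storage.Client(credentials=credentials, project=project_id)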
Are there any pros and cons to each approach?
We've recently adopted the GitLab dependency proxy for our project on a self-hosted GitLab instance.
This works fine for normal users, but fails for pipelines created via the API using a project or group access token, regardless of access level.
We've tried with a project access token that has API permission and the Developer role, as well as a group access token with the same permission and role.
We also tried granting the tokens the read_registry and write_registry permissions, to no avail.
The outcome is always the same: any pipeline triggered by a token/bot user runs into a wall where it says you're not authenticated to access the dependency proxy because no credentials were specified. If I restart the very same job as a human user from the UI, everything works just fine.
How do I need to configure my access tokens so that their corresponding bot users can access the dependency proxy?
The issue is most likely that the wrong credentials are being used.
According to the documentation, it won't work with a project access token or a group access token – besides username & password, only a personal access token or a group deploy token is supported.
Now let's say we want to use a group deploy token. The docker-machine executor usually uses $CI_DEPENDENCY_PROXY_USER & $CI_DEPENDENCY_PROXY_PASSWORD (source) to authenticate to $CI_DEPENDENCY_PROXY_SERVER, as those variables are set up automatically.
Those credentials are the same as $CI_REGISTRY_USER & $CI_REGISTRY_PASSWORD (source) – the password in both cases is the job token. The job token has the same permissions as the user, and as mentioned above, group access tokens and project access tokens do not have access to the dependency proxy.
According to the deploy token documentation, you should instead authenticate to the dependency proxy using the username of the group deploy token & the token itself. To achieve that, I think the only option would be to embed the deploy username & token inside the $DOCKER_AUTH_CONFIG CI/CD variable.
I have not tried such a scenario, but I think it should work.
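For example, a small helper to build the value of $DOCKER_AUTH_CONFIG could look like the sketch below (gitlab.example.com and the token values are placeholders for your own dependency proxy host and group deploy token):
import base64
import json

# Rough sketch: build the JSON value for the DOCKER_AUTH_CONFIG CI/CD variable
# from a group deploy token (username, token and registry below are placeholders).
username = "gitlab+deploy-token-123"
token = "<deploy-token-secret>"
registry = "gitlab.example.com"  # your $CI_DEPENDENCY_PROXY_SERVER host

auth = base64.b64encode(f"{username}:{token}".encode()).decode()
print(json.dumps({"auths": {registry: {"auth": auth}}}))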
For docker-in-docker, you should be able to set DEPLOY_TOKEN_USERNAME & DEPLOY_TOKEN_TOKEN in CI/CD variables (using values from group deploy token) and then just login with those:
before_script:
  - echo $DEPLOY_TOKEN_TOKEN | docker login -u $DEPLOY_TOKEN_USERNAME --password-stdin $CI_DEPENDENCY_PROXY_SERVER
script:
  - docker pull $CI_DEPENDENCY_PROXY_DIRECT_GROUP_IMAGE_PREFIX/alpine
In Python, I'm trying to call the Gmail API via a service account with delegated domain-wide authority, without using a SERVICE_ACCOUNT_FILE.
My objective is to avoid creating a secret key for the service account. Instead, I gave the Service Account Token Creator role to the process owner (me in local dev, the App Engine service account in prod).
In the code below I successfully get and use an access token for the service account, without any SERVICE_ACCOUNT_FILE.
from google.cloud.iam_credentials_v1 import IAMCredentialsClient
from google.oauth2.credentials import Credentials
import googleapiclient.discovery

# service_id is the service account's email; body is the Gmail message payload
tk = IAMCredentialsClient().generate_access_token(
    name=f'projects/-/serviceAccounts/{service_id}',
    scope=['https://www.googleapis.com/auth/gmail.insert'],
    # subject='admin@my.domain' doesn't work here :'(
)
service = googleapiclient.discovery.build('gmail', 'v1', credentials=Credentials(tk.access_token))
response = service.users().messages().insert(userId='user@my.domain', body=body).execute()
Problem is, after granting permissions to the service account in Google Admin of my.domain, I get the following error:
{'message': 'Precondition check failed.', 'domain': 'global', 'reason': 'failedPrecondition'}
I suspect that what I am missing is the subject, i.e. the email of an admin at my.domain.
I know I can provide the subject by constructing the credentials differently:
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(key_file, scopes=scopes)
delegated_credentials = credentials.with_subject('admin@my.domain')  # <- I need this
service = googleapiclient.discovery.build('gmail', 'v1', credentials=delegated_credentials)
But this requires the creation of a secret key, which I'd like to avoid.
Is there a way to pass the subject in the first code example above?
AFAIK this is not possible
Would be happy to be proved wrong on this!
I place this as a disclaimer because you are right: the documentation does not explicitly deny the possibility. It's not generally a requirement for documentation to spell out everything you cannot do with it, though I do see the potential for confusion here, so it might be worth "Sending Feedback" on the documentation page.
What the instructions say
Preparing to make an authorized API call
After you obtain the client email address and private key from the API Console ...
source
Nowhere does it say "this is an optional step". The same is true in other places, like the Directory API instructions on domain-wide delegation or the Reports API instructions.
The Recommendation to not use Keys
The following is from the article you linked:
Use service account keys only if there is no viable alternative
A service account key lets an application authenticate as a service
account, similar to how a user might authenticate with a username and
password.
source
When you attach a service account, for example to an App Engine instance, the service account serves as a sort of identity for that instance, much like a user account does for a user. More info on that in this other StackOverflow thread.
So the only way for a service account to act as if it were another account is with a key. Likewise, the only way for a user to act as if it were a service account is with a key. Therefore, if you are trying to enable domain-wide delegation, which presupposes that you want a service account to act as if it were other accounts, having a key is essential, whether the service account is attached to an App Engine instance or not.
Granted, it's not mentioned explicitly that you cannot achieve this without a key, though it doesn't mention that you can either. In the tests I have run, I got results similar to yours, so it would seem that you just can't. As mentioned in the disclaimer, I would be happy to be proved wrong; however, I suspect that would be due to a vulnerability, not intended behavior.
You haven't mentioned why you need domain-wide delegation of authority, so you may want to evaluate whether you really need it. Usually all that is needed is the OAuth flow, in which an app acts on behalf of a user, not as if it were the user.
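For reference, that standard OAuth consent flow looks roughly like this in Python with google-auth-oauthlib (client_secret.json and the Gmail scope are placeholders):
from google_auth_oauthlib.flow import InstalledAppFlow

# The app acts on behalf of the user who consents in the browser;
# no domain-wide delegation and no service account key involved.
flow = InstalledAppFlow.from_client_secrets_file(
    'client_secret.json',
    scopes=['https://www.googleapis.com/auth/gmail.insert'])
credentials = flow.run_local_server(port=0)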
References
Access to Google APIs with Service Accounts
How to Authenticate Service Accounts
Directory API instructions on domain-wide delegation
Reports API instructions on domain-wide delegation
"How to assign multiple service accounts to cloud functions" (you can't).
I have a pretty standard application written in Java which also runs queries against a DB. The application resides on GCP and the DB on Atlas.
For understandable reasons, I don't want to keep the username and password for the DB in the code.
So option number 1 that I had in mind is to pass the username and password as environment variables to the application container in GCP.
Option number 2 is to use Secret Manager in GCP to store my username and password, and to pass the GCP credentials as an environment variable to the application container in GCP.
My question is: what is the added value of option number 2, if any? It seems that option 2 is even worse from a security standpoint, since if some hacker gets the Google credentials, they have access to all of the secrets stored in Secret Manager.
I don't know what the best practices are and what is advised in such cases. Thank you for your help.
Keeping credentials in GCP Secret Manager helps you track all your secrets and their changes in a centralized location and access them globally from any of your apps.
For a standard application where a single Java app connects to a DB, it may not add much value.
You may look into Kubernetes Secrets for that reason.
If your application resides on GCP, you don't need a service account key file (which is your security concern, and you are right; I wrote an article on this).
TL;DR: use ADC (Application Default Credentials) to automatically get the service account credential that is provided on Google Cloud components (look at the metadata server for more details).
Then grant this component's identity (the default or a user-defined one, where supported), i.e. the service account email, access to your secrets.
And that's all! You have no secrets in your code or your environment variables: neither the login/password nor a service account key file.
If you have difficulties using ADC in Java, don't hesitate to share your code; I will be able to help you achieve this.
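As a minimal sketch of the pattern (shown in Python; the Java client library follows the same shape, and the project/secret names are placeholders):
from google.cloud import secretmanager

# ADC picks up the attached service account automatically on GCP;
# no key file or password is set in the environment.
client = secretmanager.SecretManagerServiceClient()
name = 'projects/my-project/secrets/db-password/versions/latest'
db_password = client.access_secret_version(request={'name': name}).payload.data.decode('UTF-8')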
To use Secret Manager on Google Cloud you need to install the Secret Manager Java SDK libraries. This documentation shows how to get started with the Cloud Client Libraries for the Secret Manager API; you only need to go to the Java section.
These libraries help you access your secrets so they can be used by your app.
The following link shows how to get details about a secret by viewing its metadata. Keep in mind that viewing a secret's metadata requires the Secret Viewer role (roles/secretmanager.viewer) on the secret, project, folder, or organization.
I recommend creating a dedicated service account to hold exactly the permissions your app needs, because if you don't define one, the default service account is what will make the requests, and that is not secure. You can learn more about how to create a service account in this link.
On the other hand, the following guide contains a good example of finding your credentials automatically, which is more convenient and secure than passing credentials manually.
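For illustration, viewing a secret's metadata with the client library looks roughly like this (a Python sketch; the Java flow is analogous, and my-project / my-db-secret are placeholders):
from google.cloud import secretmanager

# Requires roles/secretmanager.viewer (or higher) on the secret.
client = secretmanager.SecretManagerServiceClient()
name = client.secret_path('my-project', 'my-db-secret')
secret = client.get_secret(request={'name': name})
print(secret.name, secret.create_time)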
Is it possible to set limits on Google API credentials, to restrict an API key or token to a specific spreadsheet or folder, so that those credentials can't access all the information in the account but only the specified files?
Sure!
The Google Cloud Platform has robust tools to manage access to all sorts of things, including API credential access.
GCP IAM - Cloud Permissions and Access
You can create a cloud service that responds with your key, only authorizing certain services to request/receive the key.
Here's the GCP IAM documentation. Follow their instructions, either via the Cloud Console frontend or the command-line tools, to set a policy for your API key service.
Here's the gist of what you'll do for IAM:
1. Authorize the various Google APIs for your project.
2. Create a service account for the key service, e.g. my-api-key@see-the-docs-for-google-service-domain.
3. For each of your apps that needs the service key, create another service account, i.e. my-app@see-the-docs-for...
4. Give each app service account your chosen access level/permission on the service account you created for your API key service. You're authorizing each app to access the api-key-service.
5. Deploy a simple Flask service to send your API key, running as your api-key-service account (see the sketch after this list).
6. Access the API credentials within your apps, which have been given IAM permissions of their own (remember, you authorized your apps in step 4).
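A very rough sketch of what such a key-serving Flask service could look like (the allow-listed service account email, the header handling, and where the key actually comes from are all illustrative assumptions, not a hardened implementation):
from flask import Flask, abort, request
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

app = Flask(__name__)

# Hypothetical allow-list of the app service accounts from step 3.
ALLOWED_ACCOUNTS = {'my-app@my-project.iam.gserviceaccount.com'}

@app.route('/api-key')
def get_api_key():
    # Callers present a Google-signed ID token for their service account.
    auth_header = request.headers.get('Authorization', '')
    if not auth_header.startswith('Bearer '):
        abort(401)
    claims = id_token.verify_oauth2_token(
        auth_header.split(' ', 1)[1], google_requests.Request())
    if claims.get('email') not in ALLOWED_ACCOUNTS:
        abort(403)
    return {'api_key': '<fetched from your secret store>'}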
On disk
For credentials stored on disk, it's best to encrypt them and only decrypt them on demand in the app.
See this SO answer. If you encrypt your keys, go ahead and add them to version control; otherwise, don't.
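One way to do that in Python is with the cryptography package's Fernet (just an illustration of the idea, not necessarily what the linked answer uses):
from cryptography.fernet import Fernet

# The Fernet key itself must live outside version control (e.g. an env var).
key = Fernet.generate_key()
f = Fernet(key)

encrypted = f.encrypt(b'my-google-api-credentials')  # safe to store / commit
credentials = f.decrypt(encrypted).decode()          # decrypt on demand in-app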
Secret Manager or Berglas
However, I recommend you use either the open-source Berglas tool or Google's managed Secret Manager product. You'll essentially hand your API key to the secrets manager to store, then fetch it when necessary in-app or at load time.
Adapted from the Google Cloud Documentation, almost verbatim:
# Import the Secret Manager client library.
from google.cloud import secretmanager

# GCP project in which to store secrets in Secret Manager.
project_id = 'YOUR_PROJECT_ID'
# ID of the secret to create.
secret_id = 'YOUR_SECRET_ID'

# Create the Secret Manager client.
client = secretmanager.SecretManagerServiceClient()

# Build the parent name from the project.
parent = f'projects/{project_id}'

# Create the parent secret.
secret = client.create_secret(
    request={
        'parent': parent,
        'secret_id': secret_id,
        'secret': {'replication': {'automatic': {}}},
    })

# Add the api key as a new secret version.
version = client.add_secret_version(
    request={'parent': secret.name, 'payload': {'data': b'my-google-api-credentials'}})

# Access the api key.
response = client.access_secret_version(request={'name': version.name})

# Now you have your decoded api credentials you can use for authentication.
payload = response.payload.data.decode('UTF-8')
I changed some comments in the above but be sure to check Google's documentation and their github examples.
If you're more adventurous, the Berglas library is fantastic and I use it directly in several projects, via its Go client locally and its docker image within deployed services.
I created an Azure Mobile App Service which is currently accessible 'Anonymously'
Anonymous access is enabled on the App Service app. Users will not be prompted for login.
To make it secure I can enable App Service Authentication which will ask users to log in
But this is not what I want: the data in this app is only accessed by the application itself, without each and every user having to log in to my app before using it.
So you might say that, in this case, anonymous access is fine, but I want to restrict it with at least something like an API key, so that only my app can use the API to access the data. Right now anyone can just open Postman and start getting data without any authentication.
So in short, I don't want individual user authentication, but at least an API key to ensure that only requests made from my app are accepted and nothing else.
I am using the following in my mobile app to create a connection (I'm also doing offline sync, etc.):
MobileServiceClient client = new MobileServiceClient(applicationURL);
Any idea how I do that?
FYI, my server-side backend is in C#.
Since you are using Azure Mobile Apps, you could leverage Custom Authentication for your requirement: build a CustomAuthController to log in and generate the JWT token for a specific user without user interaction. The core code snippet for logging in would look like the following:
MobileServiceClient client = new MobileServiceClient("https://{your-mobileapp-name}.azurewebsites.net/");
await client.LoginAsync("custom", JObject.FromObject(new { Username = "***", Password = "***" }));
Note: as the above tutorial mentions:
You must turn on Authentication / Authorization in your App Service. Set the Action to take when request is not authenticated to Allow Request (no action) and do not configure any of the supported authentication providers.
And you must explicitly add the [Authorize] attribute to the controllers/actions that require authorized access. For details, you can follow Authentication in the Backend.