I want to know whether Google Cloud provides functionality similar to AWS named profiles, i.e. I want to pass credentials to Google Cloud's gsutil commands at run time.
For example, in the scenario below, I want to pass different configurations/credentials while copying data (using gsutil) into two different Google Cloud buckets.
-- Google Cloud (desired, but not working)
gsutil cp -r C:/Users/Downloads/crm.txt gs://bucket_crm_123 --configuration=config1
gsutil cp -r C:/Users/Downloads/payroll.txt gs://bucket_payroll_123 --configuration=config2
-- AWS (working)
aws s3 cp C:/Users/Desktop/GCFR/testfiles/ s3://hive1s3/ --profile User1 --recursive
aws s3 cp C:/Users/Desktop/GCFR/testfiles1/ s3://hive2s3/ --profile User2 --recursive
So, in a nutshell, I want to make sure that a user/configuration can only access a Google Cloud bucket if it has privileges on that bucket.
Do you mean you want something like a "run as" flag?
If so, you can do the same thing with a service account.
Create service accounts with the appropriate permissions (i.e. permissions to access the bucket you want).
Then add the -i flag to your gsutil command, passing the service account you created.
Refer here for more information about the -i flag.
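A sketch of the question's two copies rewritten with -i impersonation (the service account names and project are hypothetical; each account would need Storage permissions on its own bucket only):

```shell
# -i is a top-level gsutil option, so it goes BEFORE the subcommand.
# crm-sa may only access bucket_crm_123; payroll-sa only bucket_payroll_123.
gsutil -i crm-sa@my-project.iam.gserviceaccount.com \
    cp -r C:/Users/Downloads/crm.txt gs://bucket_crm_123
gsutil -i payroll-sa@my-project.iam.gserviceaccount.com \
    cp -r C:/Users/Downloads/payroll.txt gs://bucket_payroll_123
```

If a service account lacks permission on the target bucket, the copy fails with a 403, which gives you exactly the per-credential isolation the question asks for.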
Alternatively, if you must act as a specific user, you can change the account that runs the command:
gcloud config set account USER_ACCOUNT
Run the gsutil command after switching accounts with this command.
Note: you must register each user account before you can select it with "gcloud config set account".
Run "gcloud auth login" and authenticate the user account.
Only user accounts registered through this process can be used.
To see which user accounts are available, run "gcloud auth list".
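Putting the account-switching steps together, the flow looks roughly like this (the account names and bucket are placeholders from the question):

```shell
# One-time: register each account so gcloud can use it later.
gcloud auth login user1@example.com
gcloud auth login user2@example.com

# See which accounts are registered and which one is active.
gcloud auth list

# Switch to the account that has access to the payroll bucket, then copy.
gcloud config set account user2@example.com
gsutil cp -r C:/Users/Downloads/payroll.txt gs://bucket_payroll_123
```

Unlike the -i approach, this changes the active account globally for the gcloud configuration, so it is not suitable for running two commands as different users concurrently.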
As I am running my unit tests through Bitbucket Pipelines, the following error occurs in some of the tests:
Uncaught Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
at GoogleAuth.getApplicationDefaultAsync (node_modules/google-auth-library/build/src/auth/googleauth.js:183:19)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at GoogleAuth.getClient (node_modules/google-auth-library/build/src/auth/googleauth.js:565:17)
at GrpcClient._getCredentials (node_modules/google-gax/src/grpc.ts:202:20)
at GrpcClient.createStub (node_modules/google-gax/src/grpc.ts:404:19)
This error occurs only for the test cases that test Cloud Functions using the Logging service, which is imported from @google-cloud/logging.
Please note that my project is initialized with a service account key:
const serviceAccount = environment.FirebaseCredentials as admin.ServiceAccount;

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: `https://${projectId}.firebaseio.com`,
  storageBucket: `${projectId}.appspot.com`,
});
When running the unit tests locally they usually work; however, they return an error when run through Bitbucket Pipelines.
I would like to try this solution: https://stackoverflow.com/a/42059661, but I think it would only work when done manually, since you have to choose an email to log in with.
I would like to know how to run this command, or an alternative, in the Bitbucket pipeline to solve my problem.
From the documentation on deploying to Google Cloud:
1. Create a Google service account key. For more guidance, see Google's guide to creating service keys.
2. Once you have your key file, open a terminal and browse to the location of the key file.
3. Encode the file in base64 format (base64 -w 0) and copy the output of the command.
Note: on some versions of macOS the -w 0 is not necessary.
4. In your repo, go to Repository settings, then under Pipelines select Repository variables, create a new variable named KEY_FILE, and paste the encoded service account credentials.
5. Configure the pipe and add the variables PROJECT and KEY_FILE.
Activate the service account as follows:
gcloud auth activate-service-account --key-file gcloud-api-key.json
Activated service account credentials for: [service-account-name@project-id.iam.gserviceaccount.com]
echo "$(gcloud auth list)"
To set the active account, run: $ gcloud config set account ACCOUNT
ACTIVE ACCOUNT
service-account-name@project-id.iam.gserviceaccount.com
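In the pipeline script itself, the base64-encoded KEY_FILE variable has to be decoded back into a key file before activation. A minimal sketch, assuming KEY_FILE is the repository variable from step 4 and gcloud-api-key.json is the file name used above:

```shell
# Decode the base64-encoded repository variable back into a JSON key file,
# then activate the service account so gcloud can use it.
echo "$KEY_FILE" | base64 -d > gcloud-api-key.json
gcloud auth activate-service-account --key-file gcloud-api-key.json

# Point Application Default Credentials at the same key so client libraries
# such as @google-cloud/logging (the source of the original error) find it.
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/gcloud-api-key.json"
```

Setting GOOGLE_APPLICATION_CREDENTIALS is what makes the "Could not load the default credentials" error go away for libraries that rely on Application Default Credentials rather than on admin.initializeApp.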
Additionally, you may have a look at these Stack Overflow answers: link1 & link2.
How can we pull ACR images from the Azure Government cloud if we are working in the commercial cloud?
You should be able to accomplish this using token authentication against your ACR. Be advised, though, that this is currently a preview feature and requires the Premium SKU for ACR. That said, here are the steps:
Generate an authentication token for your ACR in Azure Government, specifying _repositories_pull for the scope map. Make sure to generate the password too. You can do this after you create the token - just click on the token in the portal and there will be an option to generate a password.
After you generate the password, copy the Docker login command that is generated. It will look something like docker login -u token1 -p 3AP3Gf...wJ <youracr>.azurecr.us
From your terminal, where you have access to your AKS cluster in commercial, log in to Docker using the docker command from step 2. Note: you will probably have to run this as sudo. This generates a file at ~/.docker/config.json that contains the password to authenticate to your ACR in Azure Gov.
Use the config.json from step 3 to create a secret based on existing Docker credentials in your cluster.
Finally, use an imagePullSecret in your pod spec to reference the secret you generated in step 4. Also, be sure to update your image to reference the full path of your container image in ACR. Example here.
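A sketch of steps 4 and 5 with kubectl (the secret name acr-gov-secret and the image path are placeholders):

```shell
# Step 4: create a Kubernetes secret from the Docker config file written
# by 'docker login' against the Azure Gov registry.
kubectl create secret generic acr-gov-secret \
    --from-file=.dockerconfigjson=$HOME/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson

# Step 5: reference the secret in the pod spec (fragment, shown as comments):
#   spec:
#     imagePullSecrets:
#       - name: acr-gov-secret
#     containers:
#       - name: app
#         image: <youracr>.azurecr.us/myrepo/myimage:latest
```

The secret type kubernetes.io/dockerconfigjson is what tells the kubelet to treat the secret as registry credentials during image pulls.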
We actually have multiple Azure accounts (for a valid reason), and I want to be able to run azure-cli commands for different accounts at the same time from the same machine.
The problem is that once I log in to one Azure account with azure login, the token is stored in the ~/.azure directory, so I am not sure I can be logged in to another account at exactly the same time on that machine.
Is there any way to tell azure-cli not to store the token in the local profile, so that I can use azure-cli to connect to multiple accounts at the same time from the same machine?
If you are using a Windows or Mac machine, the tokens are stored in the Windows Credential Manager or the macOS Keychain respectively. Only on Linux systems are the tokens stored in ~/.azure/azureProfile.json.
However, you should still be able to log in with multiple accounts on Windows, Mac, or Linux machines.
azure account set "subscription-name" sets that subscription as your default, and all the commands you execute will run against it.
Every command has a -s or --subscription switch where you can explicitly specify the subscription id. Even if the subscription belongs to a different account, it should still work as long as you have authenticated with that account.
On Linux, I would suggest creating multiple OS user accounts and running the CLI from those accounts, since there could be a race condition when two commands from different accounts try to access ~/.azure/azureProfile.json.
The latest update is that the environment variable AZURE_CONFIG_DIR has been introduced; it can be set to a different directory in each environment before az login is called.
export AZURE_CONFIG_DIR=/tmp1
az login
and in another window:
export AZURE_CONFIG_DIR=/tmp2
az login
Reference: configure AZURE_CONFIG_DIR to fix the concurrency issue.
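Once each directory holds its own login, AZURE_CONFIG_DIR can also be set per command, so two accounts can be used concurrently even from the same shell (directories follow the example above):

```shell
# Each invocation picks up the credentials stored in its own config dir,
# without changing the variable for the rest of the shell session.
AZURE_CONFIG_DIR=/tmp1 az account show
AZURE_CONFIG_DIR=/tmp2 az account show
```

The per-command VAR=value prefix scopes the variable to that single command, which avoids the window-juggling of exporting it globally.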
For Windows, here are the steps:
1. Go to environment variables and add AZURE_CONFIG_DIR with the value of the new config folder (e.g. C:\Users\YourUser\.azure-personal).
2. Restart your CLI, then run az login --use-device-code.
3. Use the code given in step 2 with any browser to log in to the new Azure account.
Now one of your accounts' config lives in the default Azure config folder (C:\Users\YourUser\.azure) and the new one lives in the place you specified in step 1.
If you want to switch between them, flip that environment variable to point at whichever config you want.
I've been successful at creating and executing a snapshot script when I use gcloud auth with my personal account, but if the cron job runs as root or as another selected user, nothing happens.
I used this script https://gist.github.com/peihsinsu/73cb7e28780b137c2bcd and it works great; as the author notes, you must "install gcloud and auth first".
My problem is that it uses my personal account and not a service account.
When you execute gcloud auth login, you get a very important warning:
Your credentials may be visible to others with access to this
virtual machine. Are you sure you want to authenticate with
your personal account?
Any thoughts or suggestions on how to avoid this security risk?
It took some time to figure out. The script is valid; the tricky part is user permissions. There are two user types involved: the OS user and the GCE user.
gcloud uses the GCE user, which is most likely something like blabla@gmail.com. You need to figure out which OS user can use the GCE credentials. In my particular case (I set up the VM instance using Bitnami) that user was bitnami (NOT root!).
You need to make sure that:
- you set the default gcloud account to your GCE user (gcloud config set account blabla@gmail.com)
- your script file is executable (chmod +x)
- your script file's owner is the user that has the GCE credentials
- you set up cron for that particular user (in my case, sudo -u bitnami crontab -e)
- you include the full path to the script inside the crontab
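Putting the checklist together, a hypothetical setup for the Bitnami case (script path, schedule, and log file are examples, not values from the original post):

```shell
# Make the snapshot script executable and owned by the user that holds
# the GCE credentials (bitnami, not root).
sudo chown bitnami /home/bitnami/snapshot.sh
sudo chmod +x /home/bitnami/snapshot.sh

# Edit that user's crontab...
#   sudo -u bitnami crontab -e
# ...and add a line using the FULL path to the script, e.g. daily at 02:00:
#   0 2 * * * /home/bitnami/snapshot.sh >> /home/bitnami/snapshot.log 2>&1
```

Redirecting stdout and stderr to a log file makes it much easier to see why a cron job silently "does nothing", which is usually a permissions or PATH problem.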