Deployed GAE Instance Does Not Have Permissions - node.js

I have successfully deployed apollo-server-express on a GAE instance; however, the instance is unable to fetch secrets from Google Secret Manager.
Error from GAE Logs:
Error: 7 PERMISSION_DENIED: The caller does not have permission
index.ts
// @note `SECRET_NAMES` is a comma-separated string of the secrets' paths
const secretNames = process.env.SECRET_NAMES?.split(',') ?? []
for (const secretName of secretNames) {
// @note `loadSecret` will use the Google Secret Manager SDK to download the payload
// @note `secretName` is the fully qualified path to the secret in the Google Secret Manager API
Object.assign(process.env, await loadSecret(secretName))
}
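For context, a minimal sketch of what `loadSecret` could look like, assuming the @google-cloud/secret-manager package (the helper itself is the question's own convention, not a published API):

const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');
const client = new SecretManagerServiceClient();

// `secretName` is a fully qualified path such as
// projects/<project>/secrets/<name>/versions/latest
async function loadSecret(secretName) {
  const [version] = await client.accessSecretVersion({name: secretName});
  // Assume each secret payload is a JSON object of env-var names to values.
  return JSON.parse(version.payload.data.toString('utf8'));
}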
app.yaml
runtime: nodejs16
service: <service-name>
instance_class: F1
env_variables:
  SECRET_NAMES: '<path-1>/versions/latest,<path-2>/versions/latest'
On my local machine, I would use google-service-key.json to have the server run with a service account's credentials by default. This service account has the following roles:
Secret Manager Secret Accessor
Cloud SQL Client
However, once I run gcloud app deploy, the server no longer looks for google-service-key.json and instead uses admin.credential.applicationDefault() to authenticate.
However, I'm not certain that the default credentials of the GAE instance are the same credentials that I referenced in google-service-key.json.
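For reference, on App Engine the application default credentials resolve to the App Engine default service account (YOUR_PROJECT_ID@appspot.gserviceaccount.com), which is a different identity from the one behind google-service-key.json unless it has been granted the same roles. A hedged fix, with the project ID as a placeholder, is to grant that account the accessor role:

gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member "serviceAccount:YOUR_PROJECT_ID@appspot.gserviceaccount.com" \
  --role "roles/secretmanager.secretAccessor"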

Related

Can't create google_storage_bucket via Terraform

I'd like to create the following resource via Terraform:
resource "google_storage_bucket" "tf_state_bucket" {
name = var.bucket-name
location = "EUROPE-WEST3"
storage_class = "STANDARD"
versioning {
enabled = true
}
force_destroy = false
public_access_prevention = "enforced"
}
Unfortunately, during the execution of terraform apply, I got the following error:
googleapi: Error 403: X@gmail.com does not have storage.buckets.create access to the Google Cloud project. Permission 'storage.buckets.create' denied on resource (or it may not exist)., forbidden
Here's the list of things I tried and checked:
Verified that Google Cloud Storage (JSON) API is enabled on my project.
Checked the IAM roles and permissions: X@gmail.com has the Owner and the Storage Admin roles.
I can create a bucket manually via the Google Console.
Terraform is generally authorised to create resources, for example, I can create a VM using it.
What else can be done to authenticate Terraform to create Google Storage Buckets?
I think you run the Terraform code in a shell session on your local machine and use a User identity instead of a Service Account identity.
In this case, to solve your issue from your local machine (see the sketch after these steps):
Create a Service Account for Terraform in the GCP IAM console with the Storage Admin role.
Download a Service Account token key from IAM.
Set the GOOGLE_APPLICATION_CREDENTIALS env var in your shell session to the Service Account token key file path.
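As a sketch of those three steps, assuming a project my-project and a service account named terraform (both placeholder names):

gcloud iam service-accounts create terraform --project my-project
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:terraform@my-project.iam.gserviceaccount.com" \
  --role "roles/storage.admin"
gcloud iam service-accounts keys create terraform-key.json \
  --iam-account terraform@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/terraform-key.json"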
If you run your Terraform code somewhere else, you need to check that Terraform is correctly authenticated to GCP.
The use of a token key is not recommended because it's not the most secure way; that's why it is better to launch Terraform from a CI tool like Cloud Build instead of launching it from your local machine.
From Cloud Build there is no need to download and set a token key.
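As a sketch, a minimal cloudbuild.yaml for that (the Terraform image tag is an assumption; pin whatever version you use) could look like:

steps:
- name: 'hashicorp/terraform:1.5'
  args: ['init']
- name: 'hashicorp/terraform:1.5'
  args: ['apply', '-auto-approve']

Cloud Build then runs Terraform with its own service account, so Application Default Credentials work without any key file on disk.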

How to keep GCS key json file Safe?

When I set up a bucket, a key file was downloaded and it said to keep this file safe.
Now I cannot use .env to hide it, because in the following code you have to link the JSON file directly to gain access to the GCS bucket.
const {Storage} = require('@google-cloud/storage');
const storage = new Storage({
  keyFilename: path.join(__dirname, '/<keyfilename>.json'),
  projectId: '<project ID>'
});
Now I am concerned that when I deploy my app on App Engine this file may be accessed by someone somehow.
That is a serious threat because it gives direct access to my GCS bucket.
Should I be concerned about that file being accessed by anyone?
Instead of using the service account JSON file in App Engine, you can use the App Engine default service account to access GCS buckets or any other service in GCP. By default, the App Engine default service account has the Editor role in the project, so any user account with sufficient permissions to deploy changes to the Cloud project can also run code with read/write access to all resources within that project. However, you can change the service account's permissions through the Console.
Open the Cloud Console.
In the Members list, locate the ID of the App Engine default service account.
The App Engine default service account uses the member ID: YOUR_PROJECT_ID@appspot.gserviceaccount.com
Use the dropdown menu to modify the roles assigned to the service account.
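With the default service account in place, the client can be constructed without a key file, so there is no JSON file to leak. A minimal sketch:

const {Storage} = require('@google-cloud/storage');

// No keyFilename: on App Engine the client authenticates through
// Application Default Credentials as the App Engine default service account.
const storage = new Storage();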

How to access Files in Google Cloud Storage through GKE pods

I'm trying to get image files from Google Cloud Storage (GCS) in my Node.js application using the Axios client. In development mode on my PC I pass a Bearer token and everything works properly.
But I need this to work in production, in a cluster hosted on Google Kubernetes Engine (GKE).
I followed the recommended tutorials to create a Google service account (GSA), then linked it to a Kubernetes service account (KSA) via the Workload Identity approach, but when I try to get files through an endpoint in my app, I receive:
{"statusCode":401,"message":"Unauthorized"}
What is missing?
Update: What I've done:
Create Google Service Account
https://cloud.google.com/iam/docs/creating-managing-service-accounts
Create Kubernetes Service Account
# gke-access-gcs.ksa.yaml file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gke-access-gcs
kubectl apply -f gke-access-gcs.ksa.yaml
Relate KSAs and GSAs
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:cluster_project.svc.id.goog[k8s_namespace/ksa_name]" \
  gsa_name@gsa_project.iam.gserviceaccount.com
Annotate the KSA to complete the link between the KSA and the GSA:
kubectl annotate serviceaccount \
  --namespace k8s_namespace \
  ksa_name \
  iam.gke.io/gcp-service-account=gsa_name@gsa_project.iam.gserviceaccount.com
Set Read and Write role:
gcloud projects add-iam-policy-binding project-id \
  --member=serviceAccount:gsa-account@project-id.iam.gserviceaccount.com \
  --role=roles/storage.objectAdmin
Test access:
kubectl run -it \
  --image google/cloud-sdk:slim \
  --serviceaccount ksa-name \
  --namespace k8s-namespace \
  workload-identity-test
The above command works correctly. Note that --serviceaccount and workload-identity-test were passed. Is this necessary for GKE?
PS: I don't know if this has any influence, but I am using Cloud SQL with a proxy in the project.
EDIT
The issue portrayed in the question is related to the fact that the axios client does not use the Application Default Credentials (ADC) mechanism (as the official Google libraries do) that Workload Identity takes advantage of. ADC checks:
If the environment variable GOOGLE_APPLICATION_CREDENTIALS is set, ADC uses the service account file that the variable points to.
If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set, ADC uses the default service account that Compute Engine, Google Kubernetes Engine, App Engine, Cloud Run, and Cloud Functions provide.
-- Cloud.google.com: Authentication: Production
This means that axios client will need to fall back to the Bearer token authentication method to authenticate against Google Cloud Storage.
The authentication with Bearer token is described in the official documentation as following:
API authentication
To make requests using OAuth 2.0 to either the Cloud Storage XML API or JSON API, include your application's access token in the Authorization header in every request that requires authentication. You can generate an access token from the OAuth 2.0 Playground.
Authorization: Bearer OAUTH2_TOKEN
The following is an example of a request that lists objects in a bucket.
JSON API
Use the list method of the Objects resource.
GET /storage/v1/b/example-bucket/o HTTP/1.1
Host: www.googleapis.com
Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg
-- Cloud.google.com: Storage: Docs: Api authentication
I've included a basic example of a code snippet using Axios to query Cloud Storage (requires $ npm install axios):
const Axios = require('axios');

// OAUTH2_TOKEN should hold the OAuth 2.0 access token. Note the backticks:
// a template literal is required for the ${} interpolation to work.
const config = {
  headers: { Authorization: `Bearer ${OAUTH2_TOKEN}` }
};

Axios.get(
  'https://storage.googleapis.com/storage/v1/b/BUCKET-NAME/o/',
  config
).then(
  (response) => {
    console.log(response.data.items);
  },
  (err) => {
    console.log('Oh no. Something went wrong :(');
    // console.log(err) <-- Get the full output!
  }
);
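If pasting a token from the OAuth 2.0 Playground is impractical, one hedged alternative (assuming the google-auth-library package, which the official Google clients use under the hood) is to let Application Default Credentials, and therefore Workload Identity, mint the token for Axios:

const {GoogleAuth} = require('google-auth-library');

// Resolves credentials via ADC: on GKE with Workload Identity this is the
// Google service account bound to the pod's Kubernetes service account.
const auth = new GoogleAuth({
  scopes: 'https://www.googleapis.com/auth/devstorage.read_only',
});

async function getOauth2Token() {
  const client = await auth.getClient();
  const {token} = await client.getAccessToken();
  return token; // Use as `Bearer ${token}` in the Authorization header.
}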
I left below an example of a Workload Identity setup with an official Node.js library code snippet, as it could be useful to other community members.
Posting this answer as I've managed to use Workload Identity and a simple Node.js app to send and retrieve data from a GCP bucket.
I included some bullet points for troubleshooting potential issues.
Steps:
Check if the GKE cluster has Workload Identity enabled.
Check if your Kubernetes service account is associated with your Google service account.
Check if the example workload is using the correct Google service account when connecting to the APIs.
Check if your Google service account has the correct permissions to access your bucket.
You can also follow the official documentation:
Cloud.google.com: Kubernetes Engine: Workload Identity
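For the first check, a hedged one-liner (cluster name and zone are placeholders):

gcloud container clusters describe CLUSTER_NAME --zone ZONE \
  --format "value(workloadIdentityConfig.workloadPool)"

A non-empty PROJECT_ID.svc.id.goog value means Workload Identity is enabled on the cluster.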
Assuming that:
Project (ID) named: awesome-project <- it's only an example
Kubernetes namespace named: bucket-namespace
Kubernetes service account named: bucket-service-account
Google service account named: google-bucket-service-account
Cloud Storage bucket named: workload-bucket-example <- it's only an example
I've included the commands:
$ kubectl create namespace bucket-namespace
$ kubectl create serviceaccount --namespace bucket-namespace bucket-service-account
$ gcloud iam service-accounts create google-bucket-service-account
$ gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser --member "serviceAccount:awesome-project.svc.id.goog[bucket-namespace/bucket-service-account]" google-bucket-service-account@awesome-project.iam.gserviceaccount.com
$ kubectl annotate serviceaccount --namespace bucket-namespace bucket-service-account iam.gke.io/gcp-service-account=google-bucket-service-account@awesome-project.iam.gserviceaccount.com
Using the guide linked above, check the service account authenticating to the APIs:
$ kubectl run -it --image google/cloud-sdk:slim --serviceaccount bucket-service-account --namespace bucket-namespace workload-identity-test
The output of $ gcloud auth list should show:
Credentialed Accounts
ACTIVE ACCOUNT
* google-bucket-service-account@AWESOME-PROJECT.iam.gserviceaccount.com
To set the active account, run:
$ gcloud config set account `ACCOUNT`
The Google service account created earlier should be present in the output!
It's also required to add permissions for the service account on the bucket. You can either:
Use Cloud Console
Run: $ gsutil iam ch serviceAccount:google-bucket-service-account@awesome-project.iam.gserviceaccount.com:roles/storage.admin gs://workload-bucket-example
To download a file from the workload-bucket-example bucket, the following code can be used:
// Copyright 2020 Google LLC
/**
 * This application demonstrates how to perform basic operations on files with
 * the Google Cloud Storage API.
 *
 * For more information, see the README.md under /storage and the documentation
 * at https://cloud.google.com/storage/docs.
 */
const path = require('path');
const cwd = path.join(__dirname, '..');

function main(
  bucketName = 'workload-bucket-example',
  srcFilename = 'hello.txt',
  destFilename = path.join(cwd, 'hello.txt')
) {
  // [START storage_download_file]
  const {Storage} = require('@google-cloud/storage');

  // Creates a client
  const storage = new Storage();

  async function downloadFile() {
    const options = {
      // The path to which the file should be downloaded, e.g. "./file.txt"
      destination: destFilename,
    };

    // Downloads the file
    await storage.bucket(bucketName).file(srcFilename).download(options);

    console.log(
      `gs://${bucketName}/${srcFilename} downloaded to ${destFilename}.`
    );
  }

  downloadFile().catch(console.error);
  // [END storage_download_file]
}

main(...process.argv.slice(2));
The code is an exact copy from:
Googleapis.dev: NodeJS: Storage
Github.com: Googleapis: Nodejs-storage: downloadFile.js
Running this code should produce an output:
root@ubuntu:/# nodejs app.js
gs://workload-bucket-example/hello.txt downloaded to /hello.txt.
root@ubuntu:/# cat hello.txt
Hello there!

Access issues with Google Cloud Functions read access to Google Cloud Firestore Collection in Default Database

I am trying to write a cloud function in python that would read a collection in Google Cloud Firestore (Native) [not the Realtime Database or Datastore].
I have created a Service Account that has below Roles for the project:
- Project Owner
- Firebase Admin
- Service Account User
- Cloud Functions Developer
- Project Editor
When running locally, I set the service account credential in my environment via GOOGLE_APPLICATION_CREDENTIALS.
My cloud function is able to access Cloud Storage. I am only having issues with Cloud Firestore.
I have tried using both the Client Python SDK and the Admin SDK (Python). The Admin SDK seems to only be available for the realtime database as it requires a Database URL to connect.
I have tried running both from my dev machine and as a cloud function.
I also changed the Firestore access rules to below for unrestricted access:
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}
I am trying to run the same code as in the Google documentation.
from google.cloud import firestore

def process_storage_file(data, context):
    # Add a new document
    db = firestore.Client()
    doc_ref = db.collection(u'users').document(u'alovelace')
    doc_ref.set({
        u'first': u'Ada',
        u'last': u'Lovelace',
        u'born': 1815
    })

    # Then query for documents
    users_ref = db.collection(u'users')
    docs = users_ref.get()
    for doc in docs:
        print(u'{} => {}'.format(doc.id, doc.to_dict()))
I am not able to get the Cloud Function to connect to Google Cloud Firestore. I get the error:
line 3, in raise_from google.api_core.exceptions.PermissionDenied: 403 Missing or insufficient permissions.
Both the cloud function and Firestore are in the same GCP Project.
The service account you specified in the Cloud Function's UI configuration needs to have the Cloud Datastore User role.
First, check if you uploaded the service account's credential JSON file along with your code, and that the GOOGLE_APPLICATION_CREDENTIALS environment variable is also set in your Cloud Function's configuration page. (I know uploading credentials is a bad idea, but you need to put the JSON file somewhere if you don't want to use the Compute Engine default service account.)
Second, you might want to give your service account the Cloud Datastore User role (or a similar one) instead of Firebase Admin. It seems that the new Firestore can be accessed with the Cloud Datastore roles rather than the Firebase ones.
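A hedged example of that grant, with the project ID and service account email as placeholders:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/datastore.user"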

Nodejs firebase-admin authenticate with application default credentials in Google Kubernetes engine

I am trying to integrate the firebase-admin SDK into my Kubernetes cluster, but I am getting the following error on my pod. The cluster should have the needed permissions.
FIREBASE WARNING: Provided authentication credentials for the app named
"[DEFAULT]" are invalid. This usually indicates your app was not
initialized correctly. Make sure the "credential" property provided to
initializeApp() is authorized to access the specified "databaseURL" and
is from the correct project.
Initialization code:
var admin = require("firebase-admin");

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: "https://<DATABASE_NAME>.firebaseio.com"
});
In my development environment initialization works just fine; gcloud is authenticated against my project.
How are Application Default Credentials enabled on Kubernetes Engine?
Thanks in advance.
Have you configured a service account within the Kubernetes/Container cluster to authenticate to your database/other Google Cloud Platform services?
Service accounts can not only be used for providing the required authorization to use Google Cloud Platform APIs but Kubernetes/Container Engine apps can also use them to authenticate to other services.
You can import the credentials created by the service account into the container cluster so that applications you run in Kubernetes can make use of them.
The Kubernetes Secret resource type enables you to store the credentials/key inside the container cluster so that applications deployed on the cluster can use them directly.
As Hiranya points out in his comment, the GOOGLE_APPLICATION_CREDENTIALS environment variable needs to then point to the key.
Take a look at this page, in particular steps 3, 4 and 5 for more details on how to do this.
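A minimal sketch of that setup, assuming a downloaded key file key.json and a container named app (all names are placeholders):

kubectl create secret generic google-application-credentials \
  --from-file=key.json=./key.json

# Fragment of the Deployment pod spec:
containers:
- name: app
  image: gcr.io/PROJECT_ID/app:latest
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
  volumeMounts:
  - name: google-credentials
    mountPath: /var/secrets/google
    readOnly: true
volumes:
- name: google-credentials
  secret:
    secretName: google-application-credentials

With the secret mounted and the env var set, admin.credential.applicationDefault() picks up the key at startup.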
