There are many ways to provide the AWS SDK with credentials to perform operations.
I want to make sure that one of those methods has successfully configured the client before I attempt my operation on our continuous deployment system.
How can I check if AWS SDK was able to find credentials?
You can access them via the config.credentials property on the main client. All AWS service libraries included in the SDK have a config property.
Class: AWS.Config
The main configuration class used by all service objects to set the region, credentials, and other options for requests.
By default, credentials and region settings are left unconfigured. This should be configured by the application before using any AWS service APIs.
// Using S3
var AWS = require('aws-sdk');

var s3 = new AWS.S3();
console.log(s3.config.credentials);
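If you want to verify that the provider chain actually resolved usable credentials (rather than just inspecting the property), a minimal sketch using the SDK's getCredentials helper, assuming the AWS SDK for JavaScript v2, could look like this:
var AWS = require('aws-sdk');

// getCredentials resolves the credential provider chain and reports an
// error if no credentials could be found.
AWS.config.getCredentials(function (err) {
  if (err) {
    console.error('No AWS credentials found:', err.message);
    process.exit(1); // fail the deployment step
  } else {
    console.log('Credentials loaded for access key:', AWS.config.credentials.accessKeyId);
  }
});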
When I created a bucket, a key file was downloaded and it said to keep this file safe.
Now I cannot use a .env file to protect it, because in the following code you have to link the JSON file directly to gain access to the GCS bucket.
const path = require('path');
const {Storage} = require('@google-cloud/storage');

const storage = new Storage({
  keyFilename: path.join(__dirname, '/<keyfilename>.json'),
  projectId: '<project ID>'
});
Now I am concerned that when I deploy my app on App Engine, this file may somehow be accessed by someone.
That is a serious threat because it gives direct access to my GCS bucket.
Should I be concerned about that file being accessed by anyone?
Instead of using the service account JSON file in App Engine, you can use the App Engine default service account to access GCS buckets or any other service in GCP. By default, the App Engine default service account has the Editor role in the project, so any user account with sufficient permissions to deploy changes to the Cloud project can also run code with read/write access to all resources within that project. However, you can change the service account's permissions through the Console:
Open the Cloud Console.
In the Members list, locate the ID of the App Engine default service account. It uses the member ID:
YOUR_PROJECT_ID@appspot.gserviceaccount.com
Use the dropdown menu to modify the roles assigned to the service account.
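With the default service account in place, the client can be constructed without a key file at all. A minimal sketch, assuming the App Engine default service account has been granted access to the bucket:
// On App Engine the client picks up the default service account
// automatically, so no keyFilename is needed.
const {Storage} = require('@google-cloud/storage');

const storage = new Storage(); // uses Application Default Credentials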
I have written an AWS Lambda Node.js function for creating a stack in CloudFormation, using a CloudFormation template and input parameters given from a UI.
When I run my Lambda function with the respective inputs, a stack is successfully created, and resources such as EC2, RDS, and VPC are also created and work perfectly.
Now I want to make this function public and let users run it with their own AWS credentials.
When a public user invokes my function with their AWS credentials, the resources should be created in their account, and the user should not be able to see my template code.
How can I achieve this?
You may be better served by the AWS Cloud Development Kit (CDK) than by using CloudFormation directly for this purpose. Although the CDK cannot be used directly within Lambda, a workaround is mentioned here.
AWS CloudFormation will create resources in the AWS Account that is associated with the credentials used to create the stack.
The person who creates the stack will need to provide (upload) a template file, or they can reference a template stored in Amazon S3 that is accessible to their credentials (meaning it is either public, or their credentials have been given permission to access the template in S3).
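As a rough illustration, a Lambda handler could pass the caller's credentials straight into a CloudFormation client and reference a template kept in your own S3 bucket. The bucket name, parameter names, and the idea of accepting credentials in the event payload are assumptions for the sketch, not a recommendation for how to transport secrets:
// A minimal sketch (AWS SDK for JavaScript v2); all names are illustrative.
var AWS = require('aws-sdk');

exports.handler = function (event, context, callback) {
  var cloudformation = new AWS.CloudFormation({
    region: event.region,
    accessKeyId: event.accessKeyId,        // caller's credentials
    secretAccessKey: event.secretAccessKey
  });

  var params = {
    StackName: event.stackName,
    // A template URL that CloudFormation can read on the caller's behalf
    // (for example a pre-signed URL), so the template body itself is never
    // handed to the user.
    TemplateURL: 'https://s3.amazonaws.com/my-template-bucket/template.yaml',
    Parameters: event.parameters           // e.g. [{ParameterKey: 'InstanceType', ParameterValue: 't2.micro'}]
  };

  cloudformation.createStack(params, function (err, data) {
    if (err) return callback(err);
    callback(null, data.StackId);
  });
};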
My existing locally hosted server loads its IoT identity and credentials like so:
var awsIot = require('aws-iot-device-sdk');

function initIot() {
  var device = awsIot.device({
    keyPath: './iot_credentials/ident-private.pem.key',
    certPath: './iot_credentials/ident-certificate.pem.crt',
    caPath: './iot_credentials/rootca.pem',
    clientId: 'iot-server-1',
    host: endpoint // the IoT endpoint hostname, defined elsewhere
  });
}
...and I don't commit the private key and cert anywhere; they live securely on the server's disk.
How would I securely migrate this to a serverless Cloud9 setup running on CodeStar? Assuming I trust my AWS team, can I just store them in the project's files?
Keep sensitive data out of the code regardless of the IDE. There are a few options you can consider (see the sketch after this list).
Use environment variables in Lambda to store the file contents.
Store them in a private S3 bucket with restricted access and retrieve them in code.
Have your CI/CD pipeline inject the configuration at deploy time.
You can also use AWS KMS to encrypt the sensitive data.
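As a rough sketch of the first option (the environment variable names are made up for the example), the PEM material can be kept in Lambda environment variables, written to /tmp at cold start, and then referenced through the usual *Path options:
var fs = require('fs');
var awsIot = require('aws-iot-device-sdk');

// Write the PEM contents from environment variables to the Lambda /tmp area.
fs.writeFileSync('/tmp/ident-private.pem.key', process.env.IOT_PRIVATE_KEY);
fs.writeFileSync('/tmp/ident-certificate.pem.crt', process.env.IOT_CERTIFICATE);
fs.writeFileSync('/tmp/rootca.pem', process.env.IOT_ROOT_CA);

var device = awsIot.device({
  keyPath: '/tmp/ident-private.pem.key',
  certPath: '/tmp/ident-certificate.pem.crt',
  caPath: '/tmp/rootca.pem',
  clientId: 'iot-server-1',
  host: process.env.IOT_ENDPOINT
});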
As long as those files are properly restricted from public access, I think that's fine.
I am trying to integrate the firebase-admin SDK into a Kubernetes cluster, but I am getting the following error on my pod. The cluster should have the needed permissions.
FIREBASE WARNING: Provided authentication credentials for the app named
"[DEFAULT]" are invalid. This usually indicates your app was not
initialized correctly. Make sure the "credential" property provided to
initializeApp() is authorized to access the specified "databaseURL" and
is from the correct project.
Initialization code:
var admin = require("firebase-admin");
admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: "https://<DATABASE_NAME>.firebaseio.com"
});
In my development environment, initialization works fine; gcloud is authenticated against my project.
How are Application Default Credentials enabled on Kubernetes Engine?
Thanks in advance.
Have you configured a service account within the Kubernetes/Container cluster to authenticate to your database/other Google Cloud Platform services?
Service accounts can not only be used to provide the required authorization for Google Cloud Platform APIs, but Kubernetes/Container Engine apps can also use them to authenticate to other services.
You can import the credentials created by the service account into the container cluster so that applications you run in Kubernetes can make use of them.
The Kubernetes Secret resource type enables you to store the credentials/key inside the container cluster so that applications deployed on the cluster can use them directly.
As Hiranya points out in his comment, the GOOGLE_APPLICATION_CREDENTIALS environment variable needs to then point to the key.
Take a look at this page, in particular steps 3, 4 and 5 for more details on how to do this.
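For illustration, assuming the service account key has been stored as a Kubernetes Secret and mounted into the pod (the mount path below is made up), the app can either point GOOGLE_APPLICATION_CREDENTIALS at that path in the pod spec, or load the key explicitly:
var admin = require("firebase-admin");

// /var/secrets/google/key.json is an assumed mount path for the Secret;
// admin.credential.cert() also accepts a path to a service account key file.
admin.initializeApp({
  credential: admin.credential.cert("/var/secrets/google/key.json"),
  databaseURL: "https://<DATABASE_NAME>.firebaseio.com"
});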
$ aws configure set region CrossRegion-US
$ aws iam get-user
Could not connect to the endpoint URL: https://iam.CrossRegion-US.amazonaws.com/
Is this happening because I have set an incorrect region or is Softlayer in progress of improving the API support?
I have also used the region from authentication endpoints. Still, I get the same error.
Setting custom endpoints is not possible within the ~/.aws/config or ~/.aws/credentials files; instead, the endpoint must be passed as an argument to each command. In your example above, you were trying to connect to AWS because a custom endpoint was not provided to tell the CLI where to connect.
For example, to list the contents of bucket-1:
aws --endpoint-url=https://{endpoint} s3 ls s3://bucket-1/
In the case of IBM Cross-Region object storage, the default endpoint would be s3-api.us-geo.objectstorage.softlayer.net. (In this case, the region would be us-standard, although this is not necessary to explicitly declare as it is the only region currently offered.)
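Putting the two together, listing the same bucket against the Cross-Region endpoint would look like this (the bucket name is illustrative):
aws --endpoint-url=https://s3-api.us-geo.objectstorage.softlayer.net s3 ls s3://bucket-1/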
For more information, the documentation has information on both using the AWS CLI and connecting to endpoints.
All that said, user information is not accessible using the implementation of the S3 API. Some user information can be accessed using the SoftLayer API, but generally speaking user information isn't directly used by the object storage system in this release, as permissions are issued at the storage account level.