Configure AWS credentials to work with both the CLI and SDKs - node.js

In trying to automate some deploy tasks to S3, I noticed that the credentials I provided via aws configure are not picked up by the Node.js SDK. How can I get the shell and a gulp task to reference the same file?

After lots of searching, it was this excerpt from an article that produced the eureka moment:
If you've been using the AWS CLI, you might already have a credentials
file, which is in the same location as the new credentials file, but
is named config. If so, the CLI will continue to use that file.
However, if you create a new credentials file, the CLI will use that
one instead. (Be aware that the aws configure command that you can
use to set credentials from the command line will put the credentials
in the config file, not the credentials file.)
By moving ~/.aws/config to ~/.aws/credentials, both the CLI and the SDK now read from the same location. Sadly, I haven't found any interface for maintaining ~/.aws/credentials other than hand-editing it just yet.

Related

How to put Google Pub/Sub service key file path in a config file and not in an environment variable?

I was going through the documentation for Google Cloud Pub/Sub and found that the key file has to be referenced through an environment variable. https://cloud.google.com/pubsub/docs/quickstart-client-libraries I want to store it in a config.js file so that I don't have to play with environment variables again when I deploy it on Cloud Run. How can I do that?
My answer isn't exactly what you might expect! In fact, if you run your container on Cloud Run, you don't need a service account key file at all.
Firstly, a key file isn't secure.
Secondly, you can do almost everything with ADC (Application Default Credentials).
There are some limitations; I wrote an article on this, and another article narrowing these limitations further is under review.
So, when you deploy your Cloud Run revision, use the --service-account parameter to specify the service account email to use, and that's all!
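For example (the service name, image, and service account email below are placeholders for your own values):

```shell
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --service-account my-sa@my-project.iam.gserviceaccount.com
```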
So, to really answer your question: if you have the key material in config.js, you can load it manually and pass it to the library
const {auth} = require('google-auth-library');
// config.js exports the parsed contents of the service account key file
const keys = require('./config.js');
const client = auth.fromJSON(keys);
If you are running on your local Windows machine, you can create an environment variable named GOOGLE_APPLICATION_CREDENTIALS and set it to the complete path of the service account key JSON file, e.g. C:/keyfolder/sakey.json.
Or you can use command line given in the example of your link.
To get a service account key file, go to Service Accounts in the GCP console and create a service account. If you already have one, just download the key JSON file by clicking ... in the Actions column of Service Accounts.

How to suppress shared aws credentials in development-mode app

In order to use the aws command-line tool, I have aws credentials stored in ~/.aws/credentials.
When I run my app locally, I want it to require the correct IAM permissions for the app; I want it to read these permissions from environment variables.
What has happened is that even without those environment variables defined - even without the permissions defined - my app allows calls to aws which should not be allowed, because it's running on a system with developer credentials.
How can I run my app on my system (not in a container), without blowing away the credentials I need for the aws command-line, but have the app ignore those credentials? I've tried setting the AWS_PROFILE environment variable to a non-existent value in my start script but that did not help.
I like to use named profiles and run two sets of credentials, e.g. DEV and PROD.
When you want to use the production profile, run export AWS_PROFILE=PROD.
Then return to the DEV credentials in the same way.
The trick here is to have no default credentials at all, and to use only named profiles. Remove the profile named default from the credentials file and replace it with only named profiles.
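With that change, ~/.aws/credentials would contain only named profiles, something like this (the key values are placeholders):

```ini
[DEV]
aws_access_key_id = AKIA-DEV-PLACEHOLDER
aws_secret_access_key = dev-secret-placeholder

[PROD]
aws_access_key_id = AKIA-PROD-PLACEHOLDER
aws_secret_access_key = prod-secret-placeholder
```

With no [default] section, any tool that is run without AWS_PROFILE set simply finds no credentials, which is exactly the behavior the question asks for.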
See
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html

Deploy to Azure from CircleCI

I'm using CircleCI for the first time and having trouble publishing to Azure.
The docs don't have an example for Azure, they have an example for AWS and a note for Azure saying "To deploy to Azure, use a similar job to the above example that uses an appropriate command."
If anybody has an example YAML file that would be great, if not a nudge in the right direction would be handy. So far I think I've worked out the following.
I need a config that will install the Azure CLI
I need to put my Azure deployment credentials in an environment variable and
I need to run a deploy command in the YAML file to zip up all the right files and deploy to my Azure app service.
I have no idea if the above is correct, or how to do it, but that's my understanding right now.
I've also posted this on the CircleCI forum.
EDIT: Just to add a little more info, the AWS version of the config file used the following command:
- run:
    name: Deploy to S3
    command: aws s3 sync jekyll/_site/docs s3://circle-production-static-site/docs/ --delete
So I guess I'm looking for the Azure equivalent.
The easiest way is to set up deployment from source control in the Azure management console. You can follow these two links:
https://medium.com/@strid/automatic-deploy-to-azure-web-app-with-circle-ci-v2-0-1e4bda0626e5
https://www.bradleyportnoy.com/how-to-set-up-continuous-deployment-to-azure-from-circle-ci/
If you want to copy the files from CI to the IIS server or to Azure yourself, you will need SSH access, keys, etc. In the deployment section of circle.yml you can have something like this:
deployment:
  production:
    branch: master
    commands:
      - scp -r circle-pushing/* username@my-server:/path-to-put-files-on-server/
"circle-pushing" is your repo name (whatever it's called in GitHub or Bitbucket), and the rest is the hostname and file path of the server you want to upload files to.
and probably this could help you understand it better
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/copy-files-to-linux-vm-using-scp
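For a CircleCI 2.0 job closer to the AWS example in the question, a sketch might look like the following. The resource group, app name, and the AZURE_* variable names are assumptions; store the service principal credentials as CircleCI environment variables, and this uses the Azure CLI's az login and az webapp deployment source config-zip commands:

```yaml
version: 2
jobs:
  deploy:
    docker:
      - image: mcr.microsoft.com/azure-cli
    steps:
      - checkout
      - run:
          name: Deploy to Azure App Service
          command: |
            az login --service-principal \
              -u "$AZURE_SP_APP_ID" -p "$AZURE_SP_PASSWORD" \
              --tenant "$AZURE_SP_TENANT"
            zip -r site.zip jekyll/_site/docs
            az webapp deployment source config-zip \
              --resource-group my-resource-group \
              --name my-app-service \
              --src site.zip
```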

Error when deploying from codeship to amazon aws

I have a local git repo and I am trying to do continuous integration and deployment using Codeship. https://documentation.codeship.com
I have the github hooked up to the continuous integration and it seems to work fine.
I have an AWS account and a bucket on there with my access keys and permissions.
When I run the deploy script I get this error:
How can I fix the error?
I had this very issue when using aws-cli and relying on the following files to hold AWS credentials and config for the default profile:
~/.aws/credentials
~/.aws/config
I suspect there is an issue with this technique; as reported in github: Unable to locate credentials
I ended up using codeship project's Environment Variables for the following:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Now, this is not ideal. However my AWS-IAM user has very limited access to perform the specific task of uploading to the bucket being used for the deployment.
Alternatively, depending on your need, you could also check out the Codeship Pro platform; it allows you to have an encrypted file with environment variables that are decrypted at runtime, during your build.
On both the Basic and Pro platforms, if you want or need to use credentials in a file, you can store the credentials in environment variables (as suggested by Nigel) and then echo them into the file as part of your test setup.
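A minimal sketch of that setup step, assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY have been set in the Codeship project's Environment Variables page:

```shell
# Recreate the default AWS profile from environment variables during test
# setup. Call write_aws_credentials before any step that needs the file.
write_aws_credentials() {
  mkdir -p "$HOME/.aws"
  cat > "$HOME/.aws/credentials" <<EOF
[default]
aws_access_key_id = $AWS_ACCESS_KEY_ID
aws_secret_access_key = $AWS_SECRET_ACCESS_KEY
EOF
}
```

This keeps the secrets out of the repository while still producing the file layout the aws-cli expects.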

AWS Elastic Beanstalk NodeJS Project access S3 Bucket

I want to access an S3 bucket from my NodeJS application without writing and committing the credentials for this bucket in my application. I see that it is possible to set a .config file in the .elasticbeanstalk folder where you can specify RDS databases. In the application you can then use this RDS without setting any credentials, via variables like process.env.RDS_HOSTNAME. I want the same with an S3 bucket, but process.env.S3_xxx doesn't work. How should the .config look?
Alternatively, you can explicitly set an environment variable from Elastic Beanstalk at http://console.aws.amazon.com
Step 1: Go to the above URL, log in, and open your Elastic Beanstalk app.
Step 2: Open the Configuration tab, and in it open Software Configuration.
Step 3: Scroll to Environment Properties and add your variable there, i.e. Property Name: S3_xxx, Property Value: "whatever value".
Now you can access this variable in your app using process.env.S3_xxx, without any .config in your app.
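A small sketch of reading such a property at runtime. S3_BUCKET and S3_REGION are assumed property names (use whatever you added in step 3); on Elastic Beanstalk the SDK gets its credentials from the instance profile, so only non-secret settings like the bucket name need to be environment properties:

```javascript
// Reads S3 settings from Elastic Beanstalk environment properties and
// fails fast when the deployment forgot to set them.
function s3Settings(env = process.env) {
  if (!env.S3_BUCKET) {
    throw new Error('S3_BUCKET environment property is not set');
  }
  return {
    bucket: env.S3_BUCKET,
    region: env.S3_REGION || 'us-east-1', // fall back to a default region
  };
}

module.exports = s3Settings;
```

The resulting object can then be passed to the S3 client without any credentials ever appearing in the repository.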

Resources