Error when deploying from codeship to amazon aws - node.js

I have a local git repo and I am trying to do continuous integration and deployment using Codeship. https://documentation.codeship.com
I have GitHub hooked up to the continuous integration and it seems to work fine.
I have an AWS account and a bucket on there with my access keys and permissions.
When I run the deploy script I get this error:
How can I fix the error?

I had this very issue when using aws-cli and relying on the following files to hold AWS credentials and config for the default profile:
~/.aws/credentials
~/.aws/config
I suspect there is an issue with this technique, as reported on GitHub: Unable to locate credentials
I ended up using the Codeship project's Environment Variables for the following:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Now, this is not ideal. However, my AWS IAM user has very limited access, only enough to perform the specific task of uploading to the bucket used for the deployment.
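For what it's worth, a minimal sketch of the deploy script under that setup; the build directory (./build) and bucket name (my-deploy-bucket) are placeholders, and the aws CLI reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment on its own:

# deploy step configured in the Codeship project; credentials come from the
# project's environment variables, so no ~/.aws/credentials or ~/.aws/config
# file is needed on the build machine
export AWS_DEFAULT_REGION=us-east-1          # adjust to your bucket's region
aws s3 sync ./build s3://my-deploy-bucket/ --delete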

Alternatively, depending on your needs, you could also check out the Codeship Pro platform; it allows you to have an encrypted file with environment variables that are decrypted at runtime, during your build.
On both the Basic and Pro platforms, if you want or need to use credentials in a file, you can store the credentials in environment variables (as suggested by Nigel) and then echo them into the file as part of your test setup.
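Something like this in the setup commands would do it (a rough sketch; it simply recreates the default profile from the two environment variables above):

# recreate ~/.aws/credentials from environment variables during setup
mkdir -p ~/.aws
echo "[default]" > ~/.aws/credentials
echo "aws_access_key_id = ${AWS_ACCESS_KEY_ID}" >> ~/.aws/credentials
echo "aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}" >> ~/.aws/credentials
chmod 600 ~/.aws/credentials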

Related

Can I define terraform cloud credentials in environment variables instead of .terraformrc file?

I understand that I need to define terraform cloud credentials in the .terraformrc file, as explained here:
https://www.terraform.io/docs/commands/cli-config.html#credentials-1
Is there any way to avoid the .terraformrc file and set the credentials and token in environment variables instead?
PS:
Just a side question, do we have a StackOverflow tag for Terraform Enterprise or Cloud?
The answer to this is both yes and no.
If the question is about authenticating the TFE provider with environment variables, then the answer is yes. That change was made in this PR to enable TFE_TOKEN and TFE_HOSTNAME for authenticating the TFE provider as an alternative to the Terraform CLI config file. You can then interact with your TFE/Terraform Cloud instance through that provider, authenticating with environment variables.
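As a rough sketch of that first case, assuming the tfe provider block in your configuration does not set an explicit token argument, so the provider falls back to the environment:

# set these in your CI system's protected environment variables
export TFE_HOSTNAME="app.terraform.io"       # or your TFE hostname
export TFE_TOKEN="<user or team API token>"
terraform init
terraform apply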
If the question is about authenticating TFE interactions via the Terraform CLI with environment variables, then the answer is no. TFE authentication is not among the listed environment variables for the Terraform CLI. I have also verified in a quick test that the provider authentication environment variables do not similarly function for the CLI. For that, you must use a terraform.rc, .terraformrc, or credentials.tfrc.json.
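If you do have to go through the CLI config file, a common workaround is to generate it at build time from an environment variable so the token never lands in the repository. A hedged sketch, where TF_CLOUD_TOKEN is a hypothetical variable you would define in your CI settings:

# write a minimal CLI config containing only the credentials block
cat > ~/.terraformrc <<EOF
credentials "app.terraform.io" {
  token = "${TF_CLOUD_TOKEN}"
}
EOF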
The lookup of credentials in the CLI configuration is the default way Terraform handles credentials, but you can override that behavior by configuring a credentials helper, which is an external program Terraform will run in order to obtain credentials, instead of consulting the configuration files directly.
Credentials helpers are arbitrary programs that happen to support a particular protocol over their stdin/stdout, and so they can in principle do anything, including checking environment variables. I previously wrote terraform-credentials-env as a credentials helper which does exactly that, and so configuring that helper might be sufficient to get what you needed here, or if not you could potentially use it as an example to write your own credentials helper.
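A sketch of wiring such a helper up; the exact binary name and install location are whatever the helper's README specifies (the usual CLI plugins directory is assumed here), so treat this as illustrative only:

# install the helper binary where the Terraform CLI discovers plugins
mkdir -p ~/.terraform.d/plugins
cp terraform-credentials-env ~/.terraform.d/plugins/
# tell the CLI to ask the "env" helper for credentials instead of
# reading a static credentials block from the config file
cat > ~/.terraformrc <<'EOF'
credentials_helper "env" {
  args = []
}
EOF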
Note that Terraform's model of credentials is host-oriented rather than service-oriented, so in setting this up we're configuring Terraform to use the given credentials for all services on app.terraform.io. That includes both the Terraform Cloud/Enterprise-specific remote backend and the other services that Terraform Cloud is just one implementation of, like the module registry protocol.

GitHub Actions for Terraform - How to provide "terraform.tfvars" file with aws credentials

I am trying to set up GitHub Actions to execute a Terraform template.
My confusion is: how do I provide a *.tfvars file which has AWS credentials? (I can't check these files in.)
What's the best practice for sharing the variable values expected by terraform commands like plan or apply when they need aws_access_key and aws_secret_key?
Here is my GitHub project - https://github.com/samtiku/terraform-ec2
Any guidance here...
You don't need to provide all variables through a *.tfvars file. Apart from the -var-file option, the terraform command also provides the -var parameter, which you can use for passing secrets.
In general, secrets are passed to scripts through environment variables. CI tools give you an option to define environment variables in project configuration. It's a manual step, because as you have already noticed, secrets cannot be stored in the repository.
I haven't used GitHub Actions in particular, but after setting environment variables, all you need to do is run terraform with the secrets read from them:
$ terraform apply -var-file=some.tfvars -var "aws-secret=${AWS_SECRET_ENVIRONMENT_VARIABLE}"
This way no secrets are ever stored in the repository code. If you'd like to run terraform locally, you'll first need to export these variables in your shell:
$ export AWS_SECRET_ENVIRONMENT_VARIABLE="..."
Although Terraform allows providing credentials to some providers via their configuration arguments for flexibility in complex situations, the recommended way to pass credentials to providers is via some method that is standard for the vendor in question.
For AWS in particular, the main standard mechanisms are either a credentials file or via environment variables. If you configure the action to follow what is described in one of those guides then Terraform's AWS provider will automatically find those credentials and use them in the same way that the AWS CLI does.
It sounds like environment variables will be the easier way to go within GitHub Actions, in which case you can just set the necessary environment variables directly and the AWS provider should use them automatically. If you are using the S3 state storage backend then it will also automatically use the standard AWS environment variables.
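A minimal sketch of that in a CI step; the variable names are the standard AWS ones, and the values would come from your CI secret store rather than being typed inline like this:

# picked up automatically by both the AWS provider and the S3 state backend
export AWS_ACCESS_KEY_ID="<from your secret store>"
export AWS_SECRET_ACCESS_KEY="<from your secret store>"
export AWS_DEFAULT_REGION="us-east-1"        # adjust as needed
terraform init
terraform plan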
If your system includes multiple AWS accounts then you may wish to review the Terraform documentation guide Multi-account AWS Architecture for some ideas on how to model that. The summary of what that guide recommends is to have a special account set aside only for your AWS users and their associated credentials, and then configure your other accounts to allow cross-account access via roles, and then you can use a single set of credentials to run Terraform but configure each instance of the AWS provider to assume the appropriate role for whatever account that provider instance should interact with.

How to suppress shared aws credentials in development-mode app

In order to use the aws command-line tool, I have aws credentials stored in ~/.aws/credentials.
When I run my app locally, I want it to require the correct IAM permissions for the app; I want it to read these permissions from environment variables.
What has happened is that even without those environment variables defined - even without the permissions defined - my app allows calls to aws which should not be allowed, because it's running on a system with developer credentials.
How can I run my app on my system (not in a container), without blowing away the credentials I need for the aws command-line, but have the app ignore those credentials? I've tried setting the AWS_PROFILE environment variable to a non-existent value in my start script but that did not help.
I like to use named profiles and keep two sets of credentials, e.g. DEV and PROD.
When you want to use the production profile, run export AWS_PROFILE=PROD.
Then switch back to the DEV credentials in the same way (export AWS_PROFILE=DEV).
The trick here is to have no default credentials at all and only use named profiles. Remove the credentials named default in the credentials file and replace them with only named profiles.
See
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
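Concretely, a sketch of that setup with placeholder values; note there is intentionally no [default] section, so nothing gets credentials unless it opts in via AWS_PROFILE:

# recreate ~/.aws/credentials with named profiles only (placeholder values)
cat > ~/.aws/credentials <<'EOF'
[DEV]
aws_access_key_id = <dev access key>
aws_secret_access_key = <dev secret>

[PROD]
aws_access_key_id = <prod access key>
aws_secret_access_key = <prod secret>
EOF
export AWS_PROFILE=DEV       # switch to PROD the same way when needed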

Deploy to Azure from CircleCI

I'm using CircleCI for the first time and having trouble publishing to Azure.
The docs don't have an example for Azure, they have an example for AWS and a note for Azure saying "To deploy to Azure, use a similar job to the above example that uses an appropriate command."
If anybody has an example YAML file that would be great, if not a nudge in the right direction would be handy. So far I think I've worked out the following.
I need a config that will install the Azure CLI
I need to put my Azure deployment credentials in an environment variable and
I need to run a deploy command in the YAML file to zip up all the right files and deploy to my Azure app service.
I have no idea if the above is correct, or how to do it, but that's my understanding right now.
I've also posted this on the CircleCI forum.
EDIT: Just to add a little more info, the AWS version of the config file used the following command:
- run:
    name: Deploy to S3
    command: aws s3 sync jekyll/_site/docs s3://circle-production-static-site/docs/ --delete
So I guess I'm looking for the Azure equivalent.
The easiest way is to set up deployment from source control in the Azure management console; you can follow these two links:
https://medium.com/@strid/automatic-deploy-to-azure-web-app-with-circle-ci-v2-0-1e4bda0626e5
https://www.bradleyportnoy.com/how-to-set-up-continuous-deployment-to-azure-from-circle-ci/
If you want to copy the files yourself from CI to an IIS server or an Azure VM, you will need SSH access (keys, etc.). In the deployment section of circle.yml you can have a block such as this:
deployment:
  production:
    branch: master
    commands:
      - scp -r circle-pushing/* username@my-server:/path-to-put-files-on-server/
“circle-pushing” is your repo name, which is whatever it’s called in GitHub or Bitbucket, and the rest is the hostname and filepath of the server you want to upload files to.
This could also help you understand it better:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/copy-files-to-linux-vm-using-scp
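Putting the pieces from the question together, here is a hedged sketch of a CLI-based zip deploy run as a CircleCI step; the service-principal variables, the resource group (my-rg), the app name (my-app-service) and the site path are placeholders you would define yourself:

# install the Azure CLI in the build container (Debian/Ubuntu images)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# log in with a service principal stored in CircleCI environment variables
az login --service-principal \
  --username "$AZURE_SP_APP_ID" \
  --password "$AZURE_SP_PASSWORD" \
  --tenant "$AZURE_TENANT_ID"
# zip the built site and push it to the App Service
zip -r site.zip jekyll/_site
az webapp deployment source config-zip \
  --resource-group my-rg \
  --name my-app-service \
  --src site.zip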

Configure AWS credentials to work with both the CLI and SDKs

In trying to automate some deploy tasks to S3, I noticed that the credentials I provided via aws configure are not picked up by the Node.js SDK. How can I get the shell and a gulp task to reference the same file?
After lots of searching, it was the excerpt from this article that caused a eureka moment.
If you've been using the AWS CLI, you might already have a credentials file, which is in the same location as the new credentials file, but is named config. If so, the CLI will continue to use that file. However, if you create a new credentials file, the CLI will use that one instead. (Be aware that the aws configure command that you can use to set credentials from the command line will put the credentials in the config file, not the credentials file.)
By moving ~/.aws/config to ~/.aws/credentials, both the CLI and the SDK now read from the same location. Sadly, I haven't found any interface for maintaining ~/.aws/credentials other than hand-editing it just yet.
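In shell terms the fix boiled down to this (the sts call is just a quick way to confirm the CLI still resolves credentials):

# aws configure had written the keys to ~/.aws/config; the Node.js SDK
# looks in ~/.aws/credentials, so move the file where both will find it
mv ~/.aws/config ~/.aws/credentials
aws sts get-caller-identity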
