In my unit tests, I'm using aws-sdk to test SES, which needs credentials, and we are facing a problem accessing the secrets with GitHub Actions.
At the beginning I tried to write the values to ~/.aws/credentials using a run command in the GitHub workflow:
# .github/workflows/nodejs.yml
steps:
  ...
  - name: Unit Test
    run: |
      mkdir -p ~/.aws
      touch ~/.aws/credentials
      echo "[default]
      aws_access_key_id = ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws_secret_access_key = ${{ secrets.AWS_SECRET_KEY_ID }}
      region = ${AWS_DEFAULT_REGION}
      [github]
      role_arn = arn:aws:iam::{accountID}:role/{role}
      source_profile = default" > ~/.aws/credentials
      npm test
    env:
      AWS_DEFAULT_REGION: us-east-1
      CI: true
My original test file:
// ses.test.js
const AWS = require("aws-sdk")
const credentials = new AWS.SharedIniFileCredentials({ profile: "github" })
AWS.config.update({ credentials })
...
I also tried another way to get the credentials in my tests, which doesn't work either:
const AWS = require("aws-sdk")
const credentials = new AWS.ChainableTemporaryCredentials({
  params: { RoleArn: "arn:aws:iam::{accountID}:role/{role}" },
  masterCredentials: new AWS.EnvironmentCredentials("AWS")
})
AWS.config.update({ credentials })
Finally I tried to create a custom action (using the actions JS libraries @actions/core, @actions/io, @actions/exec) to read the AWS env values and write them to ~/.aws/credentials, but that also doesn't work as expected.
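The idea was roughly along these lines (a simplified sketch, not the exact action; writing the credentials file is the relevant part):

// index.js — simplified sketch of the custom action
const core = require("@actions/core")
const io = require("@actions/io")
const fs = require("fs")
const os = require("os")
const path = require("path")

async function run() {
  const awsDir = path.join(os.homedir(), ".aws")
  await io.mkdirP(awsDir)
  // Assumes the workflow exposes the secrets as env vars for this step
  const credentials = [
    "[default]",
    `aws_access_key_id = ${process.env.AWS_ACCESS_KEY_ID}`,
    `aws_secret_access_key = ${process.env.AWS_SECRET_ACCESS_KEY}`,
    ""
  ].join("\n")
  fs.writeFileSync(path.join(awsDir, "credentials"), credentials)
}

run().catch((err) => core.setFailed(err.message))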
One way that worked was exposing AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY directly (without using GitHub Actions secrets), which is not ideal for security purposes.
Does anyone have any idea how AWS credentials could work in GitHub Actions with secrets?
Thanks a lot for your attention.
Luckily, the aws-sdk should automatically detect credentials set as environment variables and use them for requests.
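For instance, with those variables exported to the step, a test can construct an SES client directly and never touch a credentials file. A minimal sketch, assuming aws-sdk v2 and a Jest-style test runner (the region fallback here is an assumption):

// ses.test.js — the SDK resolves AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
// from the environment on its own, no AWS.config wiring needed
const AWS = require("aws-sdk")

const ses = new AWS.SES({ region: process.env.AWS_DEFAULT_REGION || "us-east-1" })

test("SES credentials resolve from env vars", async () => {
  // A cheap read-only call is enough to confirm the credentials work
  const quota = await ses.getSendQuota().promise()
  expect(quota.Max24HourSend).toBeGreaterThanOrEqual(0)
})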
To get access to secrets in your action, you need to set them in the repo. Then you can expose them to the step as an env var.
For more details see GitHub Encrypted secrets
On GitHub, navigate to the main page of the repository
Under your repository name, click the ⚙ Settings tab
In the left sidebar, click Secrets
Type a name for your secret in the "Name" input box
Type the value for your secret
Click Add secret
In your case you will want to add secrets for both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Now that those are set you can pass those values into the action via the workflow yaml:
steps:
  ...
  - name: Unit Test
    uses: ...
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    run: ...
Avoid using long-term, hard-coded credentials.
The configure-aws-credentials action provides a mechanism to configure AWS credential and region environment variables for use in other GitHub Actions. The environment variables will be detected by both the AWS SDKs and the AWS CLI to determine the credentials and region to use for AWS API calls.
I recommend configuring configure-aws-credentials to use OpenID Connect (OIDC). This allows your GitHub Actions workflows to access resources in AWS, without needing to store the AWS credentials as long-lived GitHub secrets. The GitHub Configuring OpenID Connect in AWS post walks through setting this up.
To give you a practical example, I set up a pipeline to upload dummy data to an S3 bucket. First, set up an OpenID Connect provider and a role for GitHub to federate into in your AWS account. The examples in configure-aws-credentials are written in CloudFormation, but I've translated them to the Python Cloud Development Kit (CDK) below. Make sure to change the role condition to match your repository.
github_oidc_provider = iam.OpenIdConnectProvider(
    self,
    "GithubOIDC",
    url="https://token.actions.githubusercontent.com",
    thumbprints=["a031c46782e6e6c662c2c87c76da9aa62ccabd8e"],
    client_ids=[
        "sts.amazonaws.com"
    ]
)

github_actions_role = iam.Role(
    self,
    "DeployToBucketRole",
    max_session_duration=cdk.Duration.seconds(3600),
    role_name="github-actions-role",
    description="Github actions deployment role to S3",
    assumed_by=iam.FederatedPrincipal(
        federated=github_oidc_provider.open_id_connect_provider_arn,
        conditions={
            "StringLike": {
                # <GITHUB USERNAME>/<YOUR REPO NAME>
                "token.actions.githubusercontent.com:sub": "repo:arbitraryrw/cdk-github-actions-demo:*"
            }
        },
        assume_role_action="sts:AssumeRoleWithWebIdentity"
    )
)

bucket = s3.Bucket(
    self,
    "example_bucket",
    bucket_name="cdk-github-actions-demo",
    encryption=s3.BucketEncryption.S3_MANAGED,
    enforce_ssl=True,
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
    removal_policy=cdk.RemovalPolicy.DESTROY,
    auto_delete_objects=True
)

# Give the role permissions to read / write to the bucket
bucket.grant_read_write(github_actions_role)
You can then reference this in your pipeline and run AWS CLI / SDK commands using these credentials. Notice that the snippet references GitHub Encrypted Secrets; I recommend leveraging this functionality:
name: Example CDK Pipeline
on:
  push:
    branches: [ main ]
jobs:
  build:
    name: Emulate build step
    runs-on: ubuntu-latest
    steps:
      - name: Checking out repository
        uses: actions/checkout@v2
      - name: "Upload artifacts"
        uses: actions/upload-artifact@v2
        with:
          name: build-artifacts
          path: ${{ github.workspace }}/resources
  deploy:
    needs: build
    name: Deploy build artifacts to S3
    runs-on: ubuntu-latest
    # These permissions are needed to interact with GitHub's OIDC Token endpoint.
    permissions:
      id-token: write
      contents: read
    steps:
      - name: "Download build artifacts"
        uses: actions/download-artifact@v2
        with:
          name: build-artifacts
          path: ${{ github.workspace }}/resources
      - name: Configure AWS credentials from Test account
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: us-east-1
          role-to-assume: ${{ secrets.AWS_ROLE_FOR_GITHUB }}
          role-session-name: GitHubActions
      - run: aws sts get-caller-identity
      - name: Copy files to the test website with the AWS CLI
        run: |
          aws s3 sync ./resources s3://${{ secrets.BUCKET_NAME }}
For a full example on how to set this up using the CDK you can take a look at the cdk-github-actions-demo repo I set up.
Take a look at: https://github.com/aws-actions/configure-aws-credentials
It allows you to configure AWS credential and region environment variables for use in other GitHub Actions. The environment variables will be detected by both the AWS SDKs and the AWS CLI to determine the credentials and region to use for AWS API calls.
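A minimal sketch of using it with access-key secrets (the input names below come from the action's README; the secret names are whatever you stored in the repo):

steps:
  - uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: us-east-1
  - name: Unit Test
    run: npm test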
I was hitting my head against the wall on the same thing for a while.
In my case, the setting profile = default was the issue.
I was able to remove that from my script and keep only the env block; if I had both, it would throw an error.
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'us-east-1'
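If you prefer to be explicit in the test code rather than relying on auto-detection, a small sketch using the SDK's env-var credential provider (aws-sdk v2 API):

const AWS = require("aws-sdk")

// "AWS" is the env-var prefix: this reads AWS_ACCESS_KEY_ID,
// AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN if present
AWS.config.credentials = new AWS.EnvironmentCredentials("AWS")
AWS.config.update({ region: process.env.AWS_DEFAULT_REGION })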
If running aws from the command line is acceptable for you, you can set the following ENV vars and just use aws commands without needing to run aws configure:
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: us-east-1
  AWS_DEFAULT_OUTPUT: json
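For example, a step like this (sts get-caller-identity is just a cheap way to confirm the credentials resolve):

- name: Verify AWS credentials
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
    AWS_DEFAULT_OUTPUT: json
  run: aws sts get-caller-identity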
Related
I have a GitHub Action that uses azure/docker-login@v1 for building and pushing images to the Azure image registry, and it works.
Now, I want to pass GITHUB_TOKEN using Docker's secret flag, but it only accepts a file, and I don't know how to create a file using this action.
Is it possible?
For example, with docker/build-push-action I can do this below:
- name: Build docker image
  uses: docker/build-push-action@v2
  with:
    context: .
    secrets: |
      "github_token=${{ secrets.GITHUB_TOKEN }}"
How can I secure my image using azure/docker-login?
As the readme.md of the azure/docker-login action suggests:
Use this GitHub Action to log in to a private container registry such as Azure Container registry. Once login is done, the next set of actions in the workflow can perform tasks such as building, tagging and pushing containers.
You can setup your workflow so that it logs in using azure/docker-login and builds and pushes the image using docker/build-push-action, like this:
- uses: azure/docker-login@v1
  with:
    login-server: contoso.azurecr.io
    username: ${{ secrets.ACR_USERNAME }}
    password: ${{ secrets.ACR_PASSWORD }}
- uses: docker/build-push-action@v2
  with:
    push: true
    context: .
    secrets: |
      "github_token=${{ secrets.GITHUB_TOKEN }}"
We have our frontend and our backend in Azure, and we linked the production environment to our backend successfully. As we deploy this SWA using the default GitHub Action, we were wondering how one would specify the Backend Resource Name. We can link it manually in Azure, but only for static environments. How would one do this for all environments? The docs don't really mention anything here.
- name: Build And Deploy
  id: builddeploy
  uses: Azure/static-web-apps-deploy@v1
  with:
    deployment_environment: 'develop'
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    repo_token: ${{ secrets.GITHUB_TOKEN }}
    action: "upload"
We are using a Zoom API in our React app, and we want to use the GitHub and Heroku secrets to push and deploy our web application.
In our YAML file, we define the GitHub as well as the Heroku keys:
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # auto-generated
      HEROKU_KEY: ${{ secrets.HEROKU_API_KEY }}
      HEROKU_EMAIL: ${{ secrets.HEROKU_EMAIL }}
      HEROKU_APP_NAME: ${{ secrets.HEROKU_APP_NAME }}
      REACT_APP_ZOOM_KEY: ${{ secrets.ZOOM_API_KEY }}
      REACT_APP_ZOOM_SECRET: ${{ secrets.ZOOM_API_SECRET }}
Now we want to access the Zoom Key and Secret in our config file to generate the signature (used for generating a Zoom meeting).
We wanted to access them with process.env, like this:
sdkKey: isProduction() ? process.env.ZOOM_Key : process.env.REACT_APP_ZOOM_KEY,
sdkSecret: isProduction() ? process.env.ZOOM_Secret : process.env.REACT_APP_ZOOM_SECRET,
If I do this, I get an error that my Zoom key and secret are required and need to be strings.
I already tried JSON.stringify(process.env.REACT_APP_ZOOM_KEY), but I get the same error.
Maybe it's also important to mention that if I hardcode the key and secret directly in the config file, it works, so I assume the error is in accessing the environment variables from the YAML file.
We would really appreciate it if someone could help us :)
Initially, I had a working git repo with a workflow set up to upload some NodeJS source code on a git push, which all worked fine. However, I had a Steam API key in an .env file that I did not want public (so I removed the .env altogether), and I wanted to use GitHub Secrets to store the STEAM_API_KEY (along with other variables such as BASE_URL) as env variables used in the workflow as follows:
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      BASE_URL: ${{ secrets.BASE_URL }}
      STEAM_API_KEY: ${{ secrets.STEAM_API_KEY }}
    steps:
      - name: Checkout Repo v2
        uses: actions/checkout@v2
      - name: Use Node.js 14.x
        uses: actions/setup-node@v1
        with:
          node-version: 14.x
      - name: Running CI Installation
        run: npm ci
      - name: Running Application/Server Unit Tests
        run: npm test
I then accessed them in my code with process.env.<variable_name> (following How can I use Github secrets in JS files):
module.exports = new SteamAuth({
  realm: `${process.env.BASE_URL}/steam/user/auth`,
  returnUrl: `${process.env.BASE_URL}/steam/user/auth`,
  apiKey: process.env.STEAM_API_KEY
});
But it throws this error on Heroku:
Error: Missing realm, returnURL or apiKey parameter(s). These are required.
This does not happen if I simply hard-code the strings directly into realm, returnUrl and apiKey.
Upon further troubleshooting:
var url1 = `${process.env.BASE_URL}/steam/user/auth`; // BASE_URL = "https://<app_name>.herokuapp.com"
var url2 = "https://<app_name>.herokuapp.com/steam/user/auth";
console.log(url1 === url2);
console.log(url1);
console.log(url2);
Outputs:
true
***/steam/user/auth
***/steam/user/auth
Here url1 uses process.env.BASE_URL and gets masked. But url2 gets masked too, since its value matches BASE_URL?? Is this a flaw with GitHub Actions?
At this point I am out of ideas. I am doing something wrong but don't know where to go from here. Does anyone have any idea how to properly use GitHub secrets in .js code?
PS: Github secrets/workflows are very new to me, please be indulgent with my lack of knowledge/understanding.
I figured the problem out:
The env variables are only available while the GitHub Action is running, NOT when the app executes on Heroku.
Still doesn't explain the last part though with url2, although it looks like GitHub Actions simply masks any log output that matches a secret's value, which would account for url2 being printed as *** as well.
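For the Heroku side, the values have to be set as Heroku config vars instead, e.g. via the Heroku CLI (app name and key are placeholders):

heroku config:set BASE_URL=https://<app_name>.herokuapp.com STEAM_API_KEY=<your_key> --app <app_name>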
I get a "no cfnRole" warning, and unnecessary files are created after deploy:
Serverless: Safeguards Processing...
Serverless: Safeguards Results:
Summary --------------------------------------------------
passed - no-unsafe-wildcard-iam-permissions
passed - framework-version
warned - require-cfn-role
passed - allowed-runtimes
passed - no-secret-env-vars
passed - allowed-regions
passed - allowed-stages
Details --------------------------------------------------
1) Warned - no cfnRole set
details: http://slss.io/sg-require-cfn-role
Require the cfnRole option, which specifies a particular role for CloudFormation to assume while deploying.
I went to the site given in the warning details (http://slss.io/sg-require-cfn-role), but I still don't know how to fix it.
s_hello.py and s_hello2.py are always generated after deploy.
This is my serverless.yaml file:
service: myapp
app: sample-app
org: xxx
provider:
  name: aws
  runtime: python3.7
  stage: dev
  region: us-east-1
package:
  individually: true
functions:
  hello:
    handler: src/handler/handler.hello
  hello2:
    handler: src/handler2/handler2.hello2
This always happens even though I followed this site. For each Lambda function, an s_xxx.py file is created (where xxx is the name of the handler.py file).
I solved this issue by creating a cfn-role in AWS IAM, following these steps:
Roles -> Create Role -> AWS Service -> Select CloudFormation from the list
Next: Permissions
You need to choose all the policies you need to deploy your Lambda function (S3FullAccess, SQSFullAccess, LambdaFullAccess...)
There is one that is mandatory, AWSConfigRole, which allows the Serverless Framework to use this role.
After setting up the role, copy its ARN and add cfnRole at the provider level, like this:
provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  profile: ${self:custom.deploy-profile.${opt:stage, 'dev'}}
  region: us-west-2
  environment:
    TEMP: "/tmp"
  cfnRole: arn:aws:iam::xxxxxxxxxx:role/cfn-Role
That works for me; I hope it helps!