How to SSO login to AWS in Docker container (using aws-sdk v3) - node.js

When developing locally, I need to have access to an S3 bucket.
The access is provided via SSO.
I'm using aws-sdk v3 and node.js.
When running the same node.js app without docker, I get access and everything works fine.
Here's what I do:
aws configure sso
aws sso login --profile **profile name**
And here's what my code looks like:
const { S3Client } = require('@aws-sdk/client-s3');
const { fromSSO } = require('@aws-sdk/credential-provider-sso');

const credentials = fromSSO({
  profile: process.env.AWS_PROFILE,
  ssoStartUrl: process.env.AWS_SSO_START_URL,
  ssoAccountId: process.env.AWS_ACCOUNT_ID,
  ssoRegion: process.env.AWS_REGION,
  ssoRoleName: process.env.AWS_SSO_ROLE_NAME,
});

const client = new S3Client({ credentials });
However, when running the same app in docker (using docker compose), I keep getting the error
The SSO session associated with this profile is invalid. To refresh this SSO session run aws sso login with the corresponding profile.
I'm using the node:18-alpine image, and to add the aws-cli to the container, I do:
docker compose run api sh
apk update && apk add --no-cache curl gcompat zip && \
curl -s https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.1.39.zip -o awscliv2.zip && \
unzip awscliv2.zip && ./aws/install
/usr/local/bin/aws configure sso
/usr/local/bin/aws sso login --profile **my profile**
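(For reference, the same installation baked into the image instead of run in a started container; a rough, untested Dockerfile sketch based on the commands above:)
FROM node:18-alpine
RUN apk update && apk add --no-cache curl gcompat zip && \
    curl -s https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.1.39.zip -o awscliv2.zip && \
    unzip awscliv2.zip && ./aws/install && \
    rm -rf awscliv2.zip aws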
I've checked the env variables, they're OK. However, it keeps crashing my app with the error above.
Also, here's the contents of my docker-compose.yml just in case.
What am I missing or doing wrong?
I feel this is a completely incorrect way to do this, but is there a better way?
SSO is my only option and I'm fine with the flow without Docker, but also really need to make this work with Docker.
I'm seeing at least 2 problems:
add aws-cli installation to docker-compose.yml
figure out why SSO sessions keep being invalid.

The problem was that the SSO session information was not being properly persisted within the container.
To fix that, I had to mount the .aws directory in the container and also add AWS_CONFIG_FILE=/root/.aws/config to environment:
api:
  image: node:18-alpine
  env_file:
    - .env
  environment:
    - AWS_CONFIG_FILE=/root/.aws/config
  volumes:
    - ./api:/usr/src/app
    - ~/.aws:/root/.aws
With this setup everything works as it should.
When the SSO session expires, I can re-run aws sso login --profile MyProfile and restart my docker container.
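For completeness, with the host's ~/.aws mounted into the container, the explicit SSO options from the question shouldn't be needed any more; a minimal sketch that relies only on the profile name (assuming AWS_PROFILE is set in .env) looks like:
const { S3Client } = require('@aws-sdk/client-s3');
const { fromSSO } = require('@aws-sdk/credential-provider-sso');

// The SSO start URL, account, role and region are resolved for the given
// profile from the mounted ~/.aws/config.
const client = new S3Client({
  credentials: fromSSO({ profile: process.env.AWS_PROFILE }),
});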

Related

How do you deploy Nodejs using Zeit now?

I'm trying to deploy my back-end nodejs server using now by Zeit
I installed it using the npm i -g now command
and I used the now command to deploy, but I'm getting this error:
Now CLI 17.1.1
Error! The content of "~\AppData\Roaming\now\Data\auth.json" is invalid. No `token` property found inside. Run `now login` to authorize.
I'm confused on what I did wrong, any suggestions?
I had to run the command:
now login
then it asks for my Zeit account info. Afterwards, I navigated to the directory where my server files are and used the command:
now
to initiate and deploy the backend.
Once it was done, it gave me a URL which I can use to access the backend within my front-end code.

Reading from a private Nuget feed - .NET Core 3.1 Windows Docker Container

Does anyone have any experience with developing microservices in Azure with .NET Core 3.1 using Windows containers? I am running into an issue when
I am trying to make my Dockerfile read from a private Nuget feed. Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1809 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1809 AS build
WORKDIR /src
COPY ./Nuget.config ./
COPY ["MyService/MyService.csproj", "MyService/"]
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV DOTNET_SYSTEM_NET_HTTP_USESOCKETSHTTPHANDLER=0
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\":\"my_private_feed\", \"password\":\"my_personal_access_token\"}]}"
RUN dotnet restore "MyService/MyService.csproj"
COPY . .
WORKDIR "/src/MyService"
RUN dotnet build "MyService.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyService.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyService.dll"]
This dockerfile gives me a 401 Unauthenticated error even though I know the credentials I am providing are correct.
I've also tried setting the user and password in my nuget.config file and that seems to work, but I don't want to have
to make a code change to update the password each time a token expires.
Any advice on how to move forward from here? Am I just not formatting the setting of the VSS_NUGET_EXTERNAL_FEED_ENDPOINTS variable properly?
I ran into the same problem in the past. I stopped using VSS_NUGET_EXTERNAL_FEED_ENDPOINTS and created the nuget.config on the fly.
First step: Create a small helper executable
using System;
using System.IO;

namespace DockerBuild
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length != 4)
            {
                Console.WriteLine("Use it: dotnet dockerbuild.dll <username> <pat> <sourceUrl> <target>");
                return;
            }

            var user = args[0];
            var pat = args[1];
            var source = args[2];
            var target = args[3];

            // Write a nuget.config that registers the feed and its credentials.
            var xml = $"<?xml version=\"1.0\" encoding=\"utf-8\"?><configuration><packageSources><add key=\"YourFeedName\" value=\"{source}\" /></packageSources><packageSourceCredentials><YourFeedName><add key=\"Username\" value=\"{user}\" /><add key=\"ClearTextPassword\" value=\"{pat}\" /></YourFeedName></packageSourceCredentials></configuration>";

            using (var file = new StreamWriter(target))
            {
                file.WriteLine(xml);
            }
        }
    }
}
The output of this console program will be: DockerBuild.exe
Second step: Run the executable during the docker build
# build args supplied with --build-arg user=... pat=... nuget_source=...
ARG user
ARG pat
ARG nuget_source
ENV devops_user=$user
ENV devops_pat=$pat
ENV devops_nuget_source=$nuget_source
RUN DockerBuild.exe %devops_user% %devops_pat% %devops_nuget_source% nuget.config
Now you have a nuget.config file in your folder structure and you only need to use (or copy) it and run your dotnet build/publish command(s).
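For example, the restore step can be pointed at the generated file (the --configfile flag just tells dotnet restore which config to use; the project path is taken from the question):
RUN dotnet restore "MyService/MyService.csproj" --configfile nuget.config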
Attention:
This example was done on a Windows machine running Docker for Windows. On Linux, the call to DockerBuild.exe must be changed to use the environment variables without % and the Linux equivalent of the executable.
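A rough Linux equivalent of that call (assuming the helper is published as a framework-dependent DockerBuild.dll) would be:
RUN dotnet DockerBuild.dll $devops_user $devops_pat $devops_nuget_source nuget.config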
Your docker image probably doesn't have the Azure Artifacts NuGet credential provider installed. Its repo readme has a link to a sample dockerfile, which includes this line:
# download and install latest credential provider. Not required after https://github.com/dotnet/dotnet-docker/issues/878
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
Since your dockerfile is using a Windows base image, not Linux, you'll need to adapt it, but conceptually it's the same issue: the credential provider isn't installed (or NuGet doesn't know how to find it), so NuGet doesn't know how to authenticate to your feed.
edit: there's an issue asking how to install on a Windows docker image: https://github.com/microsoft/artifacts-credprovider/issues/169
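For a Linux image (like the linked sample), a rough sketch of the relevant build-stage lines (the credential provider install from above plus the endpoint variable from the question; the feed URL and PAT are placeholders) might be:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED=true
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"https://pkgs.dev.azure.com/<org>/_packaging/<feed>/nuget/v3/index.json\", \"password\":\"<personal_access_token>\"}]}"
COPY ["MyService/MyService.csproj", "MyService/"]
RUN dotnet restore "MyService/MyService.csproj"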

How to reuse successfully built docker images in Azure ML?

In our company I use Azure ML and I have the following issue. I specify a conda_requirements.yaml file with the PyTorch estimator class, like so (... are placeholders so that I do not have to type everything out):
from azureml.train.dnn import PyTorch
est = PyTorch(source_directory='.', script_params=..., compute_target=..., entry_script=..., conda_dependencies_file_path='conda_requirements.yaml', environment_variables=..., framework_version='1.1')
The conda_requirements.yaml (shortened version of the pip part) looks like this:
dependencies:
- conda=4.5.11
- conda-package-handling=1.3.10
- python=3.6.2
- cython=0.29.10
- scikit-learn==0.21.2
- anaconda::cloudpickle==1.2.1
- anaconda::cffi==1.12.3
- anaconda::mxnet=1.1.0
- anaconda::psutil==5.6.3
- anaconda::pip=19.1.1
- anaconda::six==1.12.0
- anaconda::mkl==2019.4
- conda-forge::openmpi=3.1.2
- conda-forge::pycparser==2.19
- tensorboard==1.13.1
- tensorflow==1.13.1
- pip:
  - torch==1.1.0
  - torchvision==0.2.1
This successfully builds on Azure. Now, in order to reuse the resulting docker image, I pass it via the custom_docker_image parameter to the Estimator:
from azureml.train.estimator import Estimator
est = Estimator(source_directory='.', script_params=..., compute_target=..., entry_script=..., custom_docker_image='<container registry name>.azurecr.io/azureml/azureml_c3a4f...', environment_variables=...)
But now Azure somehow seems to rebuild the image again and when I run the experiment it cannot install torch. So it seems to only install the conda dependencies and not the pip dependencies, but actually I do not want Azure to rebuild the image. Can I solve this somehow?
I attempted to build a docker image from my Dockerfile myself and then add it to the registry. I can do az login, and according to https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication I should then also be able to do an acr login and push. This does not work.
Even using the credentials from
az acr credential show --name <container registry name>
and then doing a
docker login <container registry name>.azurecr.io -u <username from credentials above> -p <password from credentials above>
does not work.
The error message is authentication required even though I used
az login
successfully. Would also be happy if someone could explain that to me in addition to how to reuse docker images when using Azure ML.
Thank you!
AzureML should actually cache your docker image once it has been created. The service will hash the base docker info and the contents of the conda.yaml file and will use that as the hash key -- unless you change any of that information, the docker image should come from the ACR.
As for the custom docker usage, did you set the parameter user_managed=True? Otherwise, AzureML will consider your docker to be a base image on top of which it will create the conda environment per your yaml file.
There is an example of how to use a custom docker image in this notebook:
https://github.com/Azure/MachineLearningNotebooks/blob/4170a394edd36413edebdbab347afb0d833c94ee/how-to-use-azureml/training-with-deep-learning/how-to-use-estimator/how-to-use-estimator.ipynb
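A minimal sketch of that suggestion (assuming this SDK version accepts user_managed on the Estimator, as the answer implies; all other arguments mirror the question):
from azureml.train.estimator import Estimator

# Sketch only: with user_managed=True, AzureML is told to use the image as-is
# instead of building a conda environment on top of it.
est = Estimator(source_directory='.',
                script_params=...,
                compute_target=...,
                entry_script=...,
                custom_docker_image='<container registry name>.azurecr.io/azureml/azureml_c3a4f...',
                user_managed=True,
                environment_variables=...)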

Is there a way to avoid storing the AWS_SECRET_KEY on the .ebextensions?

I'm deploying a Django based project on AWS Elastic Beanstalk.
I have been following the Amazon example, where I add my credentials (ACCESS_KEY/SECRET) to my app.config under the .ebextensions directory.
The same config file has:
container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
    leader_only: true
Problem is that this is forcing me to store my credentials under version control, and I would like to avoid that.
I tried to remove the credentials and then add them with eb setenv, but the problem is that the two django commands require these settings to be set in the environment.
I'm using the v3 cli:
eb create -db -c foo bar --profile foobar
where foobar is the name of the profile under ~/.aws/credentials, and where I want to keep my secret credentials.
What are the best security practices for handling AWS credentials with EB?
One solution is to keep the AWS credentials, but create a policy that ONLY allows them to POST objects on the one bucket used for /static.
I ended up removing the collectstatic step from the config file, and simply took care of uploading statics on the build side.
After that, all credentials can be removed and all other boto commands will grab the credentials from the security role on the EC2 instance.
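For the scoped-credentials option mentioned above, a minimal sketch of such a policy (the bucket name is a placeholder) could look like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-static-bucket/*"
    }
  ]
}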

Expecting an auth URL via either error thrown openstack

ubuntu@ubuntu-14-lts:~$ export OS_USERNAME=admin
ubuntu@ubuntu-14-lts:~$ export OS_TENANT_NAME=admin
ubuntu@ubuntu-14-lts:~$ export OS_PASSWORD=admin
ubuntu@ubuntu-14-lts:~$ export OS_AUTH_URL=http://localhost:35357/v2.0/
Executed the command to create the Admin tenant
ubuntu@ubuntu-14-lts:~$ sudo keystone tenant-create --name admin --description "Admin Tenant"
got the below error
Expecting an auth URL via either --os-auth-url or env[OS_AUTH_URL]
modified the URL
ubuntu@ubuntu-14-lts:~$ export OS_AUTH_URL="http://localhost:35357/v2.0/"
re-ran the same command and the same error was thrown
ubuntu@ubuntu-14-lts:~$ sudo keystone tenant-create --name admin --description "Admin Tenant"
Expecting an auth URL via either --os-auth-url or env[OS_AUTH_URL]
Are there any issues with running the command?
The issue is probably with sudo - sudo may not maintain environment variables. Depends on configuration.
Why do you need sudo anyway? The keystone command does not require it. Either drop sudo, or add
--os-auth-url http://localhost:35357/v2.0/
to your command. You can also do
sudo -E keystone ...
You have failed to create a new user or tenant because you have no access to keystone... just as you need to log in to MySQL to create new tables, the same applies here. The following steps will help you through:
# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
# keystone --os-username=ADMIN_USERNAME --os-password=ADMIN_PASSWORD --os-auth-url=http://controller:35357/v2.0 token-get
# source admin_creds //this is the file where you have saved the admin credentials
# keystone token-get
# source creds // this is the other file where you have backed up your admin credentials
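For reference, the sourced credentials file is just a set of exports like the ones at the top of the question; a minimal sketch of admin_creds (values are placeholders) would be:
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0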
now you can run your keystone commands normally. Please put a tick mark if it helped you! lol
