Accessing AWS credentials set as environment variables in the docker run command - Linux

Is there a way to use AWS credentials, passed as environment variables to the docker run command, to get the caller identity details while the container is running?
This is the docker run command being executed by the application (the {user_credentials[...]} values are interpolated from a Python f-string):
docker run --rm -e AWS_ACCESS_KEY_ID={user_credentials["AccessKeyId"]} -e AWS_SECRET_ACCESS_KEY={user_credentials["SecretAccessKey"]} -e AWS_SESSION_TOKEN={user_credentials["SessionToken"]} image_name

The answer is actually simple, but definitely something I was not aware of.
I retrieved the credentials inside the container using the os module, initialized an STS client with them, and then made a call to get the caller identity details. The scope of my application is very limited, hence using the credentials only to get the user account details. This is what worked for me:
import os, boto3

sts_client = boto3.client('sts', aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
                          aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
                          aws_session_token=os.environ['AWS_SESSION_TOKEN'])
print(sts_client.get_caller_identity())  # returns UserId, Account and Arn
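As a quick sanity check, the same lookup can be done without any application code at all, assuming the AWS CLI is installed in the image and the entrypoint allows overriding the command (credentials elided):
docker run --rm \
  -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... -e AWS_SESSION_TOKEN=... \
  image_name aws sts get-caller-identity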

Related

Azure App Service - How to use environment variables in docker container?

I'm using an Azure App Service to run my docker image. Running my docker container requires using a couple of environment variables. Locally, when I run my container I just do something like:
docker run -e CREDENTIAL -e USERNAME myapp
However, in the Azure App Service, after defining CREDENTIAL and USERNAME as Application Settings, I'm unsure how to pass these to the container. I see from the logs that on startup Azure passes some of its own environment variables, but if I add a startup command with my environment variables, it gets tacked onto the end of the command generated by Azure, creating an invalid command. How can I pass mine to the container?
As I understand it, you want to set environment variables in that docker container with the -e option.
You don't need to use a startup command for that. Pass these variables as Application Settings:
Application Settings are exposed as environment variables for access by your application at runtime.
Documentation
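For completeness, the same settings can also be created from the CLI; a minimal sketch with placeholder resource group and app names:
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name myAppService \
  --settings CREDENTIAL=<value> USERNAME=<value>
Each setting then shows up as an environment variable inside the container on the next restart.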

Which POST REST API commands can we execute inside an Azure container instance?

I have built a Linux Docker image which includes Azure CLI, kubectl and Terraform installations. I have pushed the image to Azure Container Registry and created a container instance manually with that image. My container is running successfully and I am able to connect to it from the Azure portal.
But my requirement is to run some REST API commands, provided by Microsoft, to perform certain actions on the container. I followed the Microsoft documentation below for executing the REST API command.
Link: https://learn.microsoft.com/en-us/rest/api/container-instances/containers/execute-command#code-try-0
I just provided my container details and added the body as below:
{
  "command": "/bin/bash",
  "terminalSize": {
    "rows": 12,
    "cols": 12
  }
}
I received a 200 response after running the above command. But when I tried running some different commands, I still got a 200 response and the output did not change. Can someone please share what commands I can execute in an Azure container instance through the REST API?
Actually, this REST API is for the exec command, which executes a command in the container. It creates a socket session to communicate with the container; the response contains the web socket URI for that session and a password.
Also, this REST API (like the exec command) can only execute a single command without arguments, such as ls or /bin/bash. If you pass arguments, as in ls -al or curl $url, it will fail; ACI does not support running commands with arguments through the REST API or the exec command. The solution is to use the Azure CLI command az container exec to run the shell /bin/bash. It will create a socket session for you, like an SSH connection, and then you can run commands inside the container.
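For reference, a curl sketch of the REST call described above (placeholder subscription, resource group, container group and container names; the api-version may differ from your deployment):
curl -X POST \
  "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerInstance/containerGroups/<group>/containers/<container>/exec?api-version=2018-10-01" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{"command": "/bin/bash", "terminalSize": {"rows": 12, "cols": 12}}'
The JSON response is expected to contain the web socket URI and the password for the interactive session, not the command output itself.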

How do I run multiple commands when deploying a container group?

I am deploying a Container group with the template https://learn.microsoft.com/en-us/azure/templates/microsoft.containerinstance/2018-10-01/containergroups
It has a command parameter, but it is just a string and runs one command. I would like to run multiple commands when deploying. Is it possible?
If not, is there a way to run those commands to the container after it has been deployed, using PowerShell?
My usecase:
I need an SFTP server in Azure for customers to be able to send us data. I then poll that with a Logic App.
What I have done:
I found this template to be good for my needs, as it is easier to poll Azure Storage File Share.
https://github.com/Azure/azure-quickstart-templates/blob/master/201-aci-sftp-files
My problem is I have multiple users. Everyone needs their own username/password and their own file share or sub-directory in that share. I also can't figure out how to configure multiple users through the environment variable. I tried separating them with ;. It deploys, but the server doesn't respond to requests at all.
I can deploy multiple containers, one for each user, but that doesn't sound like a good idea when the number of customers rises.
Unfortunately, it seems that you cannot run multiple commands at one time. See the restrictions of the exec command for ACI:
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
I suggest that, after you create the ACI, you run the command below to open an interactive session with the container instance, where you can then execute commands continuously.
For Linux:
az container exec -g groupName -n containerName --exec-command "/bin/bash"
For Windows:
az container exec -g groupName -n containerName --exec-command "cmd.exe"
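Once the interactive session is open, argument passing and command chaining work as usual; a short sketch with placeholder group and container names:
az container exec -g myResourceGroup -n myContainerGroup --exec-command "/bin/bash"
# inside the session, for example:
ls -al /home
mkdir -p /home/user1/upload && ls /home/user1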

How to log in as a non-root user in a Kubernetes pod/container

I am trying to log into a Kubernetes pod using the kubectl exec command. I am successful, but it logs me in as the root user. I have created some other users too as part of the system build.
The command being used is "kubectl exec -it <pod-name> -- /bin/bash". I guess this means: run /bin/bash on the pod, which results in a shell inside the container.
Can someone please guide me on the following -
How to logon using a non-root user?
Is there a way to disable root user login?
How can I bind our organization's ldap into the container?
Please let me know if more information is needed from my end to answer this.
Thanks,
Anurag
You can use su - <USERNAME> to log in as a non-root user.
Run cat /etc/passwd to get a list of all available users, then identify a user with a valid login shell, e.g.
/bin/bash or /bin/sh
Users with /bin/nologin or /bin/false as their set shell are used by system processes, and as such you can't log in as them.
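A short sketch of the whole flow (the non-root user appuser is hypothetical; substitute one that exists in your image):
kubectl exec -it <pod-name> -- /bin/bash
# inside the container, list accounts that have a login shell:
grep -E ':/bin/(ba)?sh$' /etc/passwd
su - appuser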
I think it's because the container user is root; that is why, when you kubectl exec into it, the default user is root. If you run your container or pod as a non-root user, then kubectl exec will not be root.
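A minimal sketch of running a pod as a non-root user via securityContext (hypothetical pod name, image and UID):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo
spec:
  securityContext:
    runAsUser: 1000    # kubectl exec will now enter as UID 1000
  containers:
  - name: app
    image: ubuntu:22.04
    command: ["sleep", "infinity"]
EOF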
In most cases, there is only one process that runs in a Docker container inside a Kubernetes Pod. There are no other processes that can provide authentication or authorization features. You can try to run a wrapper with several nested processes in one container, but this way you spoil the containerization idea to run an immutable application code with minimum overhead.
kubectl exec runs another process in the same container environment as the main process, and there is no option to set the user ID for this process.
However, you can do it by using docker exec with the additional option:
--user , -u Username or UID (format: <name|uid>[:<group|gid>])
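A sketch of using it from the node that hosts the pod (assumes a Docker container runtime and shell access to the node):
# on the Kubernetes node running the pod, find the container ID:
docker ps | grep <pod-name>
# then exec into it as a specific non-root UID:
docker exec -it -u 1001 <container-id> /bin/bash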
In any case, these two articles might be helpful for you to run IBM MQ in a Kubernetes cluster:
Availability and scalability of IBM MQ in containers
Administering Kubernetes

Azure Docker Container - how to pass startup commands to a docker run?

I have managed to easily deploy a Rails app to Azure on the Docker container App Service, but logging is a pain, since the only way to access logs is through FTP.
Has anyone figured out a good way of running the docker run command inside Azure so that it essentially accepts any params?
In this case it's simply trying to log to a remote service. If anyone also has other suggestions for retrieving logs besides FTP, I would massively appreciate it.
No, at the time of writing this is not possible; you can only pass in what you would normally put after the container name in docker run container:tag %YOUR_STARTUP_COMMAND_WILL_GO_HERE_AS_IS%.
TL;DR: you cannot pass any startup parameters to a Linux Web App except for the command to be run in the container. Let's say you want to run your container called MYPYTHON using the PROD tag and run some Python code; you would do something like this:
Startup Command = /usr/bin/python3 /home/code/my_python_entry_point.py
and that would get appended (AT THE VERY END ONLY) to the actual docker command:
docker run -t username/MYPYTHON:PROD /usr/bin/python3 /home/code/my_python_entry_point.py
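As for retrieving logs without FTP: the App Service CLI can stream the container log; a sketch with placeholder names, assuming container logging has been enabled for the app:
# enable docker container logging to the App Service filesystem
az webapp log config --resource-group myResourceGroup --name myapp --docker-container-logging filesystem
# then stream the live log
az webapp log tail --resource-group myResourceGroup --name myapp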
