docker - cannot find aws credentials in container although they exist - linux

Running the following docker command works on Mac, but on Linux (Ubuntu) the container cannot find the AWS CLI credentials. It returns the following message: Unable to locate credentials
Completed 1 part(s) with ... file(s) remaining
The command runs an image, mounts a data volume, copies a file from an S3 bucket, and then starts a bash shell in the docker container.
sudo docker run -it --rm -v ~/.aws:/root/.aws username/docker-image sh -c 'aws s3 cp s3://bucketname/filename.tar.gz /home/emailer && cd /home/emailer && tar zxvf filename.tar.gz && /bin/bash'
What am I missing here?
This is my Dockerfile:
FROM ubuntu:latest
#install node and npm
RUN apt-get update && \
apt-get -y install curl && \
curl -sL https://deb.nodesource.com/setup | sudo bash - && \
apt-get -y install python build-essential nodejs
#install and set-up aws-cli
RUN sudo apt-get -y install \
git \
nano \
unzip && \
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && \
unzip awscli-bundle.zip
RUN sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /home/emailer && cp -a /tmp/node_modules /home/emailer/

Mounting $HOME/.aws/ into the container should work. Make sure to mount it as read-only.
It is also worth mentioning that if you have several profiles in your ~/.aws/config, you must also provide the AWS_PROFILE environment variable, e.g. via docker run -e AWS_PROFILE=xxx ..., otherwise you'll get the same error message (Unable to locate credentials).
Update: Added example of the mount command
docker run -v ~/.aws:/root/.aws …
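Putting the mount and the profile variable together, a full invocation might look like this (a sketch; xxx is a placeholder profile name, and the image name is taken from the question):
docker run -it --rm \
  -v ~/.aws:/root/.aws:ro \
  -e AWS_PROFILE=xxx \
  username/docker-image \
  aws s3 ls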

You can use environment variables instead of copying ~/.aws/credentials and the config file into the container for the AWS CLI:
docker run \
-e AWS_ACCESS_KEY_ID=AXXXXXXXXXXXXE \
-e AWS_SECRET_ACCESS_KEY=wXXXXXXXXXXXXY \
-e AWS_DEFAULT_REGION=us-west-2 \
<image-name>
Ref: AWS CLI Doc
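If you prefer to keep the keys out of your shell history, docker run also accepts an env file. A small sketch, where aws.env is a hypothetical file containing the three variables above as KEY=value lines, and assuming the AWS CLI is installed in the image:
docker run --env-file ./aws.env <image-name> aws s3 ls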

What do you see if you run
ls -l ~/.aws/config
inside your docker container?

The only solution that worked for me in this case was:
volumes:
  - ${USERPROFILE}/.aws:/root/.aws:ro

There are a few things that could be wrong. One, as mentioned previously, you should check that your ~/.aws/config file is set up correctly. If not, you can follow this link to set it up. Once you have done that you can map the ~/.aws folder using the -v flag on docker run.
If your ~/.aws folder is mapped correctly, check the permissions on the files under ~/.aws so that the process inside the container is able to read them. If you are running as the user process, running chmod 444 ~/.aws/* should do the trick: it gives read permission to every user. If you also want write permissions you can add whatever other mode bits you need; just make sure the read bit is set for the corresponding user and/or group.
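For example, a quick sketch of the permission fix described above (note that mode 444 makes the files world-readable; use a stricter mode if that is a concern):
chmod 444 ~/.aws/*
ls -l ~/.aws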

The issue I had was that I was running Docker as root. When running as root it was unable to locate my credentials at ~/.aws/credentials, even though they were valid.
Directions for running Docker without root on Ubuntu are here: https://askubuntu.com/a/477554/85384
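For reference, the gist of that answer is adding your user to the docker group; you need to log out and back in (or run newgrp docker) for the change to take effect:
sudo groupadd docker          # the group usually already exists
sudo usermod -aG docker $USER
docker run hello-world        # re-test without sudo after logging back in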

You just have to pass the profile name via AWS_PROFILE; if you do not pass anything it will use the default profile, but if you want you can copy the default and add your desired credentials under a named profile.
In your credentials file:
[profile_dev]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
output = json
region = eu-west-1
In your docker-compose.yml:
version: "3.8"
services:
cenas:
container_name: cenas_app
build: .
ports:
- "8080:8080"
environment:
- AWS_PROFILE=profile_dev
volumes:
- ~/.aws:/app/home/.aws:ro
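Note that mounting to /app/home/.aws only helps if HOME inside the container is /app/home, since the CLI looks under $HOME/.aws (unless AWS_SHARED_CREDENTIALS_FILE is set). Once the service is up, a quick check, assuming the AWS CLI is installed in the image (cenas is the service name from the compose file above):
docker compose exec cenas aws configure list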

Related

Nextcloud docker install with SSH access enabled

I’m trying to install SSH (and enable the service) on top of my Nextcloud installation in Docker, and have it work on reboot. Having run through many Dockerfile and docker-compose combinations, I can’t seem to get this to work. I’ve tried using entrypoint.sh scripts with the Dockerfile, but it wants a CMD at the end and then it doesn’t execute the “normal” nextcloud start-up.
entrypoint.sh:
#!/bin/sh
# Start the ssh server
service ssh start
# Execute the CMD
exec "$#"
Dockerfile:
FROM nextcloud:latest
RUN apt update -y && apt-get install ssh -y
RUN apt-get install python3 -y && apt-get install sudo -y
RUN echo 'ansible ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN useradd -m ansible -s /bin/bash
RUN sudo -u ansible mkdir /home/ansible/.ssh
RUN mkdir -p /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
Any help would be much appreciated. Thank you
In general I'd say - break the problem you're having down into smaller parts - it'll help isolate the source of the problem.
Here's how I'd approach the reported issue.
First - replace (in your Dockerfile)
apt-get install -y ssh
with the recommended
apt install -y openssh-server
Then - test just the required parts of your Dockerfile addressing the issue - simplify it just to the following:
FROM nextcloud:latest
RUN apt update
RUN apt install -y openssh-server
Then build a test image using this Dockerfile via the command
docker build . -t test_nextcloud
This will build the image - giving it the name (tag) of test_nextcloud.
Then run a container from this newly built image via the docker run command
docker run -p 8080:80 -d --name nextcloud test_nextcloud
This will run the container on port 8080 in detached mode, and give the associated container the name nextcloud.
Then - with the container running - you should be able to enter it as root using the following command
docker container exec -u 0 -it nextcloud bash
Now that you are in, you should be able to startup the ssh server via the command
service ssh start
Having followed a set of steps like this to confirm that you can indeed start up an ssh server in the nextcloud container, begin adding back in your additional logic (beginning with the original Dockerfile).
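Once sshd starts cleanly, one way to wire your goal back in is a wrapper entrypoint that starts sshd and then hands off to the image's own startup. A minimal sketch, assuming the official nextcloud image's entrypoint is /entrypoint.sh with default command apache2-foreground (confirm with docker image inspect nextcloud:latest); the wrapper is deliberately given a different name so it does not overwrite the image's own /entrypoint.sh:
#!/bin/sh
# custom-entrypoint.sh (illustrative name)
# start the ssh daemon, then hand off to the original nextcloud entrypoint with the default CMD
service ssh start
exec /entrypoint.sh "$@"
In the Dockerfile you would COPY this script in, chmod +x it, set ENTRYPOINT ["/custom-entrypoint.sh"] and keep CMD ["apache2-foreground"].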

Why is this container missing one file in its volume mount?

Title is the question.
I'm hosting many docker containers on a rather large linux ec2 instance. One container in particular needs access to a file that gets transferred to the host before run time. The file in question is copied from a windows file server to the ec2 instance using control-m.
When the container image runs, we give it -v to specify a volume mount with a path on the host to that transferred file.
The file is not found in the container. If I make a new file in the container, the new file appears on the host. When I make a file on the host, it appears in the container. When I make a copy of the transferred file using cp -p the copied file DOES show up in the container, but the original still does not.
I don't understand why this is happening. My suspicion is that it has something to do with the file being on a Windows server before Control-M copies it to the EC2 instance.
Details:
The file lives in the path /folder_path/project_name/resources/file.txt
Its permissions are -rwxrwxr-x 1 pyadmin pyadmin, where pyadmin maps to the container's root user.
It's approximately 38mb in size and when I run file file.txt I get the output ASCII text, with CRLF line terminators.
The repo also has a resources folder with files already in it when it is cloned, but none of their names conflict.
Docker Version: 20.10.13
Dockerfile:
FROM python:3.9.11-buster
SHELL ["/bin/bash", "-c"]
WORKDIR /folder_path/project_name
RUN apt-get auto-clean && apt-get update && apt-get install -y unixodbc unixodbc-dev && apt-get upgrade -y
RUN python -m pip install --upgrade pip poetry
COPY . .
RUN python -m pip install --upgrade pip poetry && \
poetry config virtualenvs.create false && \
poetry install
ENTRYPOINT [ "python" ]
Command to start container:
docker run --pull always --rm \
-v /folder_path/project_name/logs:/folder_path/project_name/logs \
-v /folder_path/project_name/extracts:/folder_path/project_name/extracts \
-v /folder_path/project_name/input:/folder_path/project_name/input \
-v /folder_path/project_name/output:/folder_path/project_name/output \
-v /folder_path/project_name/resources:/folder_path/project_name/resources \
my-registry.com/folder_path/project_name:image_tag

Docker: files missing after build

I'm trying to build a docker container that runs a Python script. I want the code to be cloned from git when I build the image. I'm using this Dockerfile as a base and added the following BEFORE the first line:
FROM debian:buster-slim AS intermediate
RUN apt-get update
RUN apt-get install -y git
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/
RUN echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan [git hostname] >> /root/.ssh/known_hosts
RUN git clone git@...../myApp.git
... then added the following directly after the first line:
# Copy only the repo from the intermediate image
COPY --from=intermediate /myApp /myApp
... then at the end I added this to install some dependencies:
RUN set -ex; \
apt-get update; \
apt-get install -y gcc g++ unixodbc-dev libpq-dev; \
\
pip install pyodbc; \
pip install paramiko; \
pip install psycopg2
And I changed the command to run to:
CMD ["python3 /myApp/main.py"]
If, at the end of the dockerfile before the CMD, I add the command "RUN ls -l /myApp" it lists all the files I would expect during the build. But when I use "docker run" to run the image, it gives me the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "python3 /myApp/main.py": stat python3 /myApp/main.py: no such file or directory": unknown.
My build command is:
docker build --file ./Dockerfile --tag my_app --build-arg SSH_PRIVATE_KEY="$(cat sshkey)" .
Then run with docker run my_app
There is probably some docker fundamental that I am misunderstanding, but I can't seem to figure out what it is.
This is hard to answer without your command line or your docker-compose.yml (if any). A recurrent mistake is to map a volume from the host into the container at a non-empty location; in that case, your container files are hidden by the content of the host folder.
The other issue is the CMD: in the exec (JSON array) form each element is a separate argument, so "python3 /myApp/main.py" is treated as a single executable path, which does not exist (hence the stat error). The last CMD should be like this:
CMD ["python3", "/myApp/main.py"]

/bin/sh: /usr/sbin/sshd-keygen: No such file or directory

I'm completely new to Linux and Docker concepts.
On my Windows machine I boot up CentOS 7 in VirtualBox.
While running docker-compose build I get
/bin/sh: /usr/sbin/sshd-keygen: No such file or directory
How do I rectify it?
I tried to create a remote user.
docker-compose.yml
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - "$PWD/jenkins_home:/var/jenkins_home"
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos7
    networks:
      - net
networks:
  net:
Dockerfile
FROM centos
RUN yum -y install openssh-server
RUN useradd remote_user && \
echo "Thevenus987$" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D
In the Dockerfile, change
RUN /usr/sbin/sshd-keygen
(CentOS 8 doesn't ship this command) to
RUN ssh-keygen -A
which works. I hope this solution works for you.
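For background, ssh-keygen -A generates any missing host keys of the default types under /etc/ssh, which is why it can stand in for sshd-keygen here. A quick sketch, assuming openssh-server was already installed in an earlier layer:
RUN ssh-keygen -A && ls -l /etc/ssh/ssh_host_*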
Change the FROM line to centos:7 and install the packages marked below (passwd and initscripts); with those installed, RUN /usr/sbin/sshd-keygen works.
The Dockerfile should be like this:
FROM centos:7
RUN yum -y install openssh-server && \
    yum install -y passwd && \
    yum install -y initscripts    # passwd and initscripts added
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
#CMD /usr/sbin/sshd -D
CMD ["/usr/sbin/sshd", "-D"]
Just use FROM centos:7 (instead of the centos:8 base image)
and
yum install -y initscripts
Note: an updated initscripts bug fix and enhancement package, which fixes several bugs and adds one enhancement, is available for Red Hat Enterprise Linux 6/7.
You don't need to remove or tweak the line below at all:
RUN /usr/sbin/sshd-keygen
It will work.
To learn more about initscripts bug fix enhancement:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/6.5_technical_notes/initscripts
Change the base image FROM centos to FROM centos:7 and it will work
The problem is with this line in your Dockerfile:
RUN /usr/sbin/sshd-keygen
This is what you get when this line gets executed: /etc/rc.d/init.d/functions: No such file or directory.
/usr/sbin/sshd-keygen: command not found.
This init.d/functions file differs between Linux distributions; it's specific to whatever distribution you're running, and it contains functions used by most or all of the shell scripts stored in the /etc/init.d directory.
To try this yourself simply pull the CentOS:7 image from docker hub and test your RUN steps from your Dockerfile as follows:
docker container run -i -t -d --name test centos:7
docker exec -it test bash
cd /etc/rc.d/init.d
ls -a
There is no file called functions in this directory.
In CentOS:7 Docker image you have to simply install the package initscripts in order for this script to be installed, so add these lines to your Dockerfile:
FROM centos:7
RUN yum install -y initscripts
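To double-check the fix, a quick sketch (the image tag is illustrative) that builds the image and confirms the functions script is present:
docker build -t test_sshd .
docker run --rm test_sshd ls -l /etc/rc.d/init.d/functions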
FROM centos pulls the latest by default which does not include sshd-keygen.
You need to change your Dockerfile to:
FROM centos:7
...
&& yum install -y initscripts \
&& /usr/sbin/sshd-keygen
CMD ["/usr/sbin/sshd", "-D"]
Just change FROM centos to
FROM centos:7
That error happens because the centos image used to resolve to CentOS 7 and now resolves to CentOS 8.
Try the command below instead of RUN /usr/sbin/sshd-keygen, and also, as others pointed out, use:
FROM centos:7
RUN ssh-keygen -A
1)
in Dockerfile change:
RUN /usr/sbin/sshd-keygen
to
RUN /usr/bin/ssh-keygen
2) or try
RUN sshd-keygen
If that binary is included and exists anywhere in your $PATH, it will execute.

Ubuntu Docker container immediately stops, issue with Dockerfile?

I'm pretty new to Docker, and completely baffled as to why my container exits upon start.
I've built an Ubuntu image which starts Apache and fail2ban upon boot. I'm unsure whether it's an issue with the Dockerfile or with the command I am running to start the container.
I've tried:
docker run -d -p 127.0.0.1:80:80 image
docker run -d -ti -p 127.0.0.1:80:80 image
docker run -d -ti -p 127.0.0.1:80:80 image /bin/bash
The Dockerfile is as follows:
FROM ubuntu:latest
RUN \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y iptables && \
apt-get install -y software-properties-common && \
apt-get install -y apache2 fail2ban && \
rm -rf /etc/fail2ban/jail.conf
ADD index.html /var/www/html/
ADD jail.conf /etc/fail2ban/
ENV HOME /root
WORKDIR /root
EXPOSE 80 443
ENTRYPOINT service apache2 start && service fail2ban start
CMD ["bash"]
I can jump into the container itself with:
docker exec -it image /bin/bash
But the moment I try to run it whilst staying within the host, it fails. Help?
Considering your question, where you mention "upon boot" I think it would be useful to read https://docs.docker.com/config/containers/multi-service_container/.
In a nutshell, docker containers do not "boot" like a normal system; they start a single process and run it until it exits.
So, if you want to start two processes you can use a wrapper script, as explained at the link above and sketched below.
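A minimal sketch of such a wrapper for this case (the script name is illustrative): fail2ban is started as a background service and Apache is kept in the foreground so the container stays alive.
#!/bin/bash
# start.sh - start fail2ban, then run Apache in the foreground as the container's main process
service fail2ban start
exec apache2ctl -D FOREGROUND
In the Dockerfile you would COPY start.sh in, mark it executable, and use CMD ["/start.sh"] in place of the ENTRYPOINT/CMD pair above.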
Remove the following line from your Dockerfile:
CMD ["bash"]
Also, when you want to get a shell into your container instead of its normal entrypoint, you have to override the ENTRYPOINT definition of your Dockerfile (note that --entrypoint is a docker run flag, not a docker exec flag):
docker run -it --entrypoint "/bin/bash" image
See the Dockerfile "ENTRYPOINT" documentation for more details.
