Installing sshfs on Docker with Debian Image - issues - linux

I have the following problem.
Here is a Dockerfile for a Python app where I need to install sshfs in order to mount files from an SFTP server over SSH.
# Set base image
FROM python:3.9
# Copy files
COPY id_rsa requirements.txt app.py /app/
COPY known_hosts /root/.ssh/
COPY ssh_config /etc/ssh
# Set working directory
WORKDIR app
# Install libraries
RUN pip install -U pip \
&& pip install -r requirements.txt \
&& mkdir source \
&& chmod 600 id_rsa \
&& apt-get -y upgrade \
&& apt-get -y update
COPY fuse.conf /etc/
RUN dpkg --configure -a \
&& apt-get install -y sshfs \
&& sshfs user@ip_address:/C:/folder_name /app/source -o IdentityFile=/app/id_rsa,auto_cache,reconnect,transform_symlinks,follow_symlinks
# Run daemon
CMD ["python", "app.py"]
When I build the Docker image, it gives me this error:
Configuration file '/etc/fuse.conf'
==> File on system created by you or by a script.
==> File also in package provided by package maintainer.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** fuse.conf (Y/I/N/O/D/Z) [default=N] ? dpkg: error processing package fuse (--configure):
end of file on stdin at conffile prompt
Processing triggers for libc-bin (2.31-13+deb11u2) ...
Errors were encountered while processing:
fuse
Here is what my fuse.conf looks like:
# The file /etc/fuse.conf allows for the following parameters:
#
# user_allow_other - Using the allow_other mount option works fine as root, in
# order to have it work as user you need user_allow_other in /etc/fuse.conf as
# well. (This option allows users to use the allow_other option.) You need
# allow_other if you want users other than the owner to access a mounted fuse.
# This option must appear on a line by itself. There is no value, just the
# presence of the option.
user_allow_other
# mount_max = n - this option sets the maximum number of mounts.
# Currently (2014) it must be typed exactly as shown
# (with a single space before and after the equals sign).
mount_max = 1000
I don't know whether the problem is that the default answer (N) is somehow not applied when the image is built.
As evidence: if I run the container in interactive mode (--privileged), I can install sshfs inside it and mount without problems.
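Not part of the original post, but the "end of file on stdin at conffile prompt" error suggests dpkg is waiting for an answer that a non-interactive docker build can never supply. A minimal sketch of one way to avoid the prompt, keeping the rest of the Dockerfile as above, is to tell dpkg up front how to resolve conffile conflicts:
# Sketch only: resolve conffile prompts automatically during the build
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -y \
-o Dpkg::Options::="--force-confdef" \
-o Dpkg::Options::="--force-confold" \
sshfs
With --force-confold, dpkg keeps the fuse.conf already copied into the image instead of stopping to ask.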

Related

Why is this container missing one file in its volume mount?

Title is the question.
I'm hosting many Docker containers on a rather large Linux EC2 instance. One container in particular needs access to a file that gets transferred to the host before run time. The file in question is copied from a Windows file server to the EC2 instance using Control-M.
When the container image runs, we give it -v to specify a volume mount with a path on the host to that transferred file.
The file is not found in the container. If I make a new file in the container, the new file appears on the host. When I make a file on the host, it appears in the container. When I make a copy of the transferred file using cp -p, the copy DOES show up in the container, but the original still does not.
I don't understand why this is. My suspicion is that it has something to do with the file being on a Windows server before Control-M copies it to the EC2 instance.
Details:
The file lives in the path /folder_path/project_name/resources/file.txt
Its permissions are -rwxrwxr-x 1 pyadmin pyadmin, where pyadmin maps to the container's root user.
It's approximately 38 MB in size, and when I run file file.txt I get the output ASCII text, with CRLF line terminators.
The repo also has a resources folder with files already in it when it is cloned, but none of their names conflict.
Docker Version: 20.10.13
Dockerfile:
FROM python:3.9.11-buster
SHELL ["/bin/bash", "-c"]
WORKDIR /folder_path/project_name
RUN apt-get auto-clean && apt-get update && apt-get install -y unixodbc unixodbc-dev && apt-get upgrade -y
RUN python -m pip install --upgrade pip poetry
COPY . .
RUN python -m pip install --upgrade pip poetry && \
poetry config virtualenvs.create false && \
poetry install
ENTRYPOINT [ "python" ]
Command to start container:
docker run --pull always --rm \
-v /folder_path/project_name/logs:/folder_path/project_name/logs \
-v /folder_path/project_name/extracts:/folder_path/project_name/extracts \
-v /folder_path/project_name/input:/folder_path/project_name/input \
-v /folder_path/project_name/output:/folder_path/project_name/output \
-v /folder_path/project_name/resources:/folder_path/project_name/resources \
my-registry.com/folder_path/project_name:image_tag
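Not in the original question, but one way to narrow this down would be to check whether the host path and the container path resolve to the same file, for example by comparing inodes (the container name here is assumed):
# On the host, using the path from the question
stat -c '%i %s %y %n' /folder_path/project_name/resources/file.txt
# Inside the running container
docker exec <container_name> stat -c '%i %s %y %n' /folder_path/project_name/resources/file.txt
If the two commands disagree, the bind mount is not showing the directory Control-M actually wrote to (for example because another mount is layered over the host path).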

Attempting to host a flutter project on Azure App Services using a docker image; local image and cloud image behave differently

I am having trouble with Azure and Docker where my local machine image is behaving differently than the image I push to ACR. While trying to deploy to the web, I get this error:
ERROR - failed to register layer: error processing tar file(exit status 1): Container ID 397546 cannot be mapped to a host IDErr: 0, Message: mapped to a host ID
So in trying to fix it, I have come to find out that Azure has a limit on UID numbers of 65000. Easy enough, just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
This works great locally for changing the UIDs of the affected files from 397546 to 0. I then run a command in the CLI of the container:
find / -uid 397546
It finds none of the same files it found before. Yay! I even navigate to the directories where the affected files are and do a quick ls -n to double-confirm they are fine, and sure enough the UIDs are now 0 on all of them. Good to go?
Next step, push to cloud. When I push and reset the app service, I still continue to get the same exact error above. I have confirmed on multiple fronts that it is indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped, guys. Please help.
The Dockerfile is as below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]
So basically we have made a shell script to build web BEFORE building the Docker image. We then use the static JS from the build/web folder and host that on the server. No need to download all of Flutter. It makes pipelines a little harder, but at least it works.
New Dockerfile:
FROM ubuntu:20.04 as build-env
RUN apt-get update && \
apt-get install -y --no-install-recommends apt-utils && \
apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preseed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
RUN cd /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cd /etc/nginx/sites-enabled
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
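A note that is not in the original answer, but is consistent with how image layers work: a chown in a later RUN adds a new layer without rewriting the earlier layer that created the files, so the pushed image still carries a layer whose entries are owned by UID 397546, which would explain why the cloud error persists even though find looks clean in the final filesystem. A sketch of keeping the original Flutter-based approach while never committing a layer containing the high UIDs (steps taken from the first Dockerfile, combined into one RUN):
# Sketch: clone, warm the Flutter caches and fix ownership in ONE layer,
# so no committed layer ever contains files owned by UID 397546
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter \
&& /usr/local/flutter/bin/flutter config --enable-web \
&& /usr/local/flutter/bin/flutter doctor -v \
&& chown -R root:root /usr/local/flutter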

CentOS with Python as base image

I created a Dockerfile and then built it for my team to use. Currently I am pulling from the CentOS:latest image, then building the latest version of Python and saving the image to a .tar file. The idea is for my colleagues to use this image to add their Python projects to the /pyscripts folder. Is this the recommended way of building a base image or is there a better way I can go about doing it?
# Filename: Dockerfile
FROM centos
RUN yum -y update && yum -y install gcc openssl-devel bzip2-devel libffi-devel wget make && yum clean all
RUN cd /opt && wget --no-check-certificate https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz && tar xzf Python-3.8.3.tgz && cd Python-3.8*/ && ./configure --enable-optimizations && make altinstall && rm -rf /opt/Python* && mkdir /pyscripts
Many thanks!
Yes, this is the standard and recommended way of building a base image from a parent image (CentOS in this example), if what you need is Python 3.8.3 (the latest version at the time of writing) on a CentOS system.
Alternatively, you can pull a generic Python image with the latest Python version (currently 3.8.3), based on another Linux distribution (Debian), from the Docker Hub registry by running:
docker pull python:latest
And then build a base image from it, where you will just need to create the directory /pyscripts.
So the Dockerfile would look like this:
FROM python:latest
RUN mkdir /pyscripts
Or you can pull an already-built CentOS/Python image (with the lower version 3.6) from the Docker Hub registry by running:
docker pull centos/python-36-centos7
And then build a base image from it, where you will just need to create the directory /pyscripts.
So the Dockerfile would look like this:
FROM centos/python-36-centos7:latest
USER root
RUN mkdir /pyscripts
Remember to add this line just after the first line so that the commands run as root:
USER root
Otherwise you will get a Permission denied error message.
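For completeness (not part of the original answers), the save-to-.tar workflow described in the question would look roughly like this, with image and script names made up for illustration:
# Build and export the base image
docker build -t centos-python:3.8.3 .
docker save -o centos-python-3.8.3.tar centos-python:3.8.3
# A colleague loads it and builds their project on top of it
docker load -i centos-python-3.8.3.tar
A colleague's Dockerfile could then be as small as:
FROM centos-python:3.8.3
COPY . /pyscripts
CMD ["python3.8", "/pyscripts/main.py"]
(python3.8 being the binary that make altinstall puts under /usr/local/bin).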

How to get npm to use cache

UPDATE
I changed directions from this question and ended up taking advantage of Docker image layers to cache the npm install unless there are changes to package.json, see here.
Note, in relation to this question: I still build my AngularJS Docker image in a slave Jenkins Docker image, but I no longer run the npm install in the Docker slave. Instead, I copy my app files to my AngularJS Docker image and run the npm install in the AngularJS Docker image, thus getting a Docker cache layer for the npm install; inspiration from this great idea/answer here.
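A minimal sketch of that layer-caching approach (base image and paths assumed, not taken from the linked answer):
FROM node:6
WORKDIR /usr/src/app
# This COPY, and the npm install layer below, are reused from the build cache
# unless package.json changes
COPY package.json ./
RUN npm install
# Copying the rest of the source afterwards does not invalidate the cached install
COPY . .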
-------------------------------END UPDATE------------------------------
OK, I should add the caveat that I am in a Docker container, but that really shouldn't matter much: I do not stop the container, and I have volumes for both the npm cache folder and the /home folder for the user running npm commands.
The purpose of the Docker container, with npm installed, is that it is a build slave, spun up by Jenkins to build an AngularJs application. The problem is that it is incredibly slow, downloading all the needed npm packages, every time.
jenkins is the user: a jenkins account on a build server is who runs npm install.
I have volumes for both the npm folder of the user running the npm install command, /home/jenkins/.npm, and also the folder that the command npm config get cache says is my cache directory: /root/.npm. Not that container volumes should even matter, because I have not stopped the container after running npm install.
OK, to start debugging, I first "bash into the container" with this command:
docker exec -it <container_id> bash
All commands from this point forward are run inside the running container with npm installed.
echo "$HOME" results in /root
npm config get cache results in /root/.npm
Any time jenkins runs npm install in this container, after that command finishes successfully, I run npm cache ls, which always comes back empty, nothing cached: ~/.npm
Many packages were downloaded, however, as we can see with ls -a /home/jenkins/.npm/:
So I tried setting cache-min to a very long expiration time: npm config set cache-min 9999999. That didn't help.
I am not sure what else to do; it just seems that none of my npm packages are being cached. How do I get npm to cache packages?
Here is a truncated npm install output:
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-48_binding.node
Download complete
Binary saved to /home/jenkins/workspace/tsl.frontend.development/node_modules/node-sass/vendor/linux-x64-48/binding.node
Caching binary to /home/jenkins/.npm/node-sass/4.5.3/linux-x64-48_binding.node
Binary found at /home/jenkins/workspace/tsl.frontend.development/node_modules/node-sass/vendor/linux-x64-48/binding.node
Testing binary
Binary is fine
typings WARN deprecated 3/24/2017: "registry:dt/core-js#0.9.7+20161130133742" is deprecated (updated, replaced or removed)
+-- app (global)
`-- core-js (global)
And here is my Dockerfile:
FROM centos:7
MAINTAINER Brian Ogden
RUN yum update -y && \
yum clean all
#############################################
# Jenkins Slave setup
#############################################
RUN yum install -y \
git \
openssh-server \
java-1.8.0-openjdk \
sudo \
make && \
yum clean all
# gen dummy keys, centos doesn't autogen them like ubuntu does
RUN /usr/bin/ssh-keygen -A
# Set SSH Configuration to allow remote logins without /proc write access
RUN sed -ri 's/^session\s+required\s+pam_loginuid.so$/session optional pam_loginuid.so/' /etc/pam.d/sshd
# Create Jenkins User
RUN useradd jenkins -m -s /bin/bash
# Add public key for Jenkins login
RUN mkdir /home/jenkins/.ssh
COPY /files/id_rsa.pub /home/jenkins/.ssh/authorized_keys
#setup permissions for the new folders and files
RUN chown -R jenkins /home/jenkins
RUN chgrp -R jenkins /home/jenkins
RUN chmod 600 /home/jenkins/.ssh/authorized_keys
RUN chmod 700 /home/jenkins/.ssh
# Add the jenkins user to sudoers
RUN echo "jenkins ALL=(ALL) ALL" >> /etc/sudoers
#############################################
# Expose SSH port and run SSHD
EXPOSE 22
#Technically, the Docker Plugin enforces this call when it starts containers by overriding the entry command.
#I place this here because I want this build slave to run locally as it would if it was started in the build farm.
CMD ["/usr/sbin/sshd","-D"]
#############################################
# Docker and Docker Compose Install
#############################################
#install required packages
RUN yum install -y \
yum-utils \
device-mapper-persistent-data \
lvm2 \
curl && \
yum clean all
#add Docker CE stable repository
RUN yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
#Update the yum package index.
RUN yum makecache fast
#install Docker CE
RUN yum install -y docker-ce-17.06.0.ce-1.el7.centos
#install Docker Compose 1.14.0
#download Docker Compose binary from github repo
RUN curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
#Apply executable permissions to the binary
RUN chmod +x /usr/local/bin/docker-compose
#############################################
ENV NODE_VERSION 6.11.1
#############################################
# NodeJs Install
#############################################
RUN yum install -y \
wget
#Download NodeJs package
RUN wget https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz
#extract the binary package into our system's local package hierarchy with the tar command.
#The archive is packaged within a versioned directory, which we can get rid of by passing the --strip-components 1 option.
#We will specify the target directory of our command with the -C command:
#This will install all of the components within the /usr/local branch
RUN tar --strip-components 1 -xzvf node-v* -C /usr/local
#############################################
#############################################
# npm -setup volume for package cache
# this will speed up builds
#############################################
RUN mkdir /home/jenkins/.npm
RUN chown jenkins /home/jenkins/.npm .
RUN mkdir /root/.npm
RUN chown jenkins /root/.npm .
#for npm cache, this cannot be expressed in docker-compose.yml
#the reason for this is that Jenkins spins up slave containers using
#the docker plugin, this means that there
VOLUME /home/jenkins/.npm
VOLUME /root/.npm
#############################################
When you run docker exec -it <container> bash you access the Docker container as the root user. npm install thus saves the cache to /root/.npm, which isn't a volume saved by the container. Jenkins, on the other hand, uses the jenkins user, which saves to /home/jenkins/.npm, which is being cached. So in order to emulate the functionality of the actual Jenkins workflow, you need to su jenkins before you can npm install.
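Roughly, to reproduce what the Jenkins builds actually see (container id as above, jenkins user from the Dockerfile):
docker exec -it <container_id> bash
su - jenkins
npm config get cache   # should now report /home/jenkins/.npm rather than /root/.npm
npm install            # populates the cache that the jenkins-run builds will reuse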
That being said, the npm cache is not a perfect solution (especially if you have a ton of automated Jenkins builds). Some things to look into that would be better long-term solutions:
Install a local NPM Cache like sinopia. I found this guide to be particularly helpful.
Use Docker to build your app (which would work fine with Docker-in-Docker). Docker would cache each build step, saving the repeated fetching of dependencies.

docker - cannot find aws credentials in container although they exist

Running the following docker command works on Mac, but on Linux (running Ubuntu) it cannot find the AWS CLI credentials. It returns the following message: Unable to locate credentials
Completed 1 part(s) with ... file(s) remaining
The command runs an image, mounts a data volume, copies a file from an S3 bucket, and then starts the bash shell in the Docker container.
sudo docker run -it --rm -v ~/.aws:/root/.aws username/docker-image sh -c 'aws s3 cp s3://bucketname/filename.tar.gz /home/emailer && cd /home/emailer && tar zxvf filename.tar.gz && /bin/bash'
What am I missing here?
This is my Dockerfile:
FROM ubuntu:latest
#install node and npm
RUN apt-get update && \
apt-get -y install curl && \
curl -sL https://deb.nodesource.com/setup | sudo bash - && \
apt-get -y install python build-essential nodejs
#install and set-up aws-cli
RUN sudo apt-get -y install \
git \
nano \
unzip && \
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && \
unzip awscli-bundle.zip
RUN sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /home/emailer && cp -a /tmp/node_modules /home/emailer/
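Not in the original post, but a quick way to see what the CLI inside the container actually picks up is aws configure list, reusing the question's image and mount:
docker run -it --rm -v ~/.aws:/root/.aws username/docker-image sh -c 'aws configure list'
It prints the access key, secret key and region together with where each value came from (env, shared-credentials-file, or None).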
Mounting $HOME/.aws/ into the container should work. Make sure to mount it as read-only.
It is also worth mentioning that if you have several profiles in your ~/.aws/config, you must also provide the AWS_PROFILE=somethingsomething environment variable, e.g. via docker run -e AWS_PROFILE=xxx ...; otherwise you'll get the same error message (unable to locate credentials).
Update: Added example of the mount command
docker run -v ~/.aws:/root/.aws …
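Putting the two suggestions together (profile name and trailing arguments elided as above):
docker run -e AWS_PROFILE=xxx -v ~/.aws:/root/.aws:ro …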
You can use environment variables instead of copying the ~/.aws/credentials and config files into the container for the AWS CLI:
docker run \
-e AWS_ACCESS_KEY_ID=AXXXXXXXXXXXXE \
-e AWS_SECRET_ACCESS_KEY=wXXXXXXXXXXXXY \
-e AWS_DEFAULT_REGION=us-west-2 \
username/docker-image
Ref: AWS CLI Doc
What do you see if you run
ls -l ~/.aws/config
within your docker instance?
The only solution that worked for me in this case is:
volumes:
- ${USERPROFILE}/.aws:/root/.aws:ro
There are a few things that could be wrong. One, as mentioned previously, you should check whether your ~/.aws/config file is set up accordingly. If not, you can follow this link to set it up. Once you have done that, you can map the ~/.aws folder using the -v flag on docker run.
If your ~/.aws folder is mapped correctly, make sure to check the permissions on the files under ~/.aws so that the process trying to access them is actually able to read them. If you are running as the user process, simply running chmod 444 ~/.aws/* should do the trick; this gives read permission to everyone. Of course, if you want write permissions you can add whatever other modifiers you need. Just make sure the read bit is set for your corresponding user and/or group.
The issue I had was that I was running Docker as root. When running as root it was unable to locate my credentials at ~/.aws/credentials, even though they were valid.
Directions for running Docker without root on Ubuntu are here: https://askubuntu.com/a/477554/85384
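The usual recipe behind that link is along these lines (verify against your distribution's documentation):
sudo groupadd docker          # the group may already exist
sudo usermod -aG docker $USER
# log out and back in; docker then works without sudo, so ~/.aws resolves
# to your own home directory instead of /root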
You just have to pass the credentials profile as AWS_PROFILE; if you do not pass anything it will use the default profile, but if you want you can copy the default and add your desired credentials.
In your credentials file:
[profile_dev]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
output = json
region = eu-west-1
In your docker-compose file:
version: "3.8"
services:
cenas:
container_name: cenas_app
build: .
ports:
- "8080:8080"
environment:
- AWS_PROFILE=profile_dev
volumes:
- ~/.aws:/app/home/.aws:ro
