How to mount a Docker container's home directory to Azure Storage

I am new to Docker. I'm trying to get the atmoz/sftp container to work with Azure Storage.
My goal is to have multiple SFTP users who will upload files to their own folders which I can then find on Azure Storage.
I used the following command:
az container create \
--resource-group test \
--name testsftpcontainer \
--image atmoz/sftp \
--dns-name-label testsftpcontainer \
--ports 22 \
--location "East US" \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::incoming ftpuser2:xxx:::incoming" \
--azure-file-volume-share-name test-sftp-file-share \
--azure-file-volume-account-name storagetest \
--azure-file-volume-account-key "zzzzzz" \
--azure-file-volume-mount-path /home
The container is created and runs, but when I try to connect via FileZilla the login fails and I see this in the log:
Accepted password for ftpuser2 from 10.240.xxx.xxx port 64982 ssh2
bad ownership or modes for chroot directory component "/home/"
If I use /home/ftpuser1/incoming as the mount path instead, it works for that one user.
Do I need to change permissions on the /home directory first? If so, how?

You can indeed mount the Azure file share to the container directory /home, and it works on my side. I also ran a test with the atmoz/sftp image and it works fine. The command is here:
az container create -g myResourceGroup \
-n azuresftp \
--image atmoz/sftp \
--ports 22 \
--ip-address Public \
-l eastus \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::incoming ftpuser2:xxx:::incoming" \
--azure-file-volume-share-name fileshare \
--azure-file-volume-mount-path /home \
--azure-file-volume-account-name xxxxxx \
--azure-file-volume-account-key xxxxxx
Update:
Given your requirement, the error is about bad ownership of the chroot directory, and there is currently no way to control the permissions when you mount the Azure file share at /home or /home/<user>. So I recommend mounting the Azure file share at /home/<user>/upload for each user; that gets you to the same end result you need.
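For illustration, here is a minimal sketch of that per-user mount for one of the users from the question (the share, account name, and key are the placeholders used above; only the SFTP_USERS folder and the mount path change). As far as I know, the --azure-file-volume-* flags describe a single volume, so with several users you would either create one container per user or switch to a YAML deployment that defines multiple volumes:
az container create \
--resource-group test \
--name testsftpcontainer \
--image atmoz/sftp \
--dns-name-label testsftpcontainer \
--ports 22 \
--location "East US" \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::upload" \
--azure-file-volume-share-name test-sftp-file-share \
--azure-file-volume-account-name storagetest \
--azure-file-volume-account-key "zzzzzz" \
--azure-file-volume-mount-path /home/ftpuser1/upload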

I could not find a solution to the problem. In the end I used another approach:
- I mounted the Azure storage into another, unrelated folder: /mnt/sftpfiles
- After the container was up, I ran these commands inside it:
apt update
apt-get -y install lsyncd
lsyncd -rsync /home /mnt/sftpfiles
These commands install lsyncd, a tool that watches for file system changes and copies files to another folder whenever a change occurs.
This solves my requirement but it has a side effect of duplicating all files (that's not a problem for me).
I'm still open to other suggestions that would help me make this cleaner.
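One possible way to avoid re-running those commands by hand after every restart: the atmoz/sftp README describes a hook that executes scripts placed in /etc/sftp.d/ when the container starts. A rough sketch, assuming you bake such a script into a custom image derived from atmoz/sftp and that the file share is mounted at /mnt/sftpfiles:
#!/bin/bash
# hypothetical /etc/sftp.d/sync.sh, added to a custom image based on atmoz/sftp
apt-get update
apt-get install -y lsyncd
# mirror /home into the Azure file share mounted at /mnt/sftpfiles
lsyncd -rsync /home /mnt/sftpfiles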

Related

GitLab generates a "Not found" URL for user and repository

I installed GitLab on my own server (CentOS 7) with Docker and Portainer, and I created one user.
Everything seems okay, but when I click on the user URL I get a "Not found" page.
I don't know why, but it generates http://0de09c2e3bc1/Parisa_hr, i.e. the host part is 0de09c2e3bc1.
On the other hand, when I want to clone my repo I have problems too. The URLs it generates are git@0de09c2e3bc1:groupname/projectname.git and http://0de09c2e3bc1/groupname/projectname.git.
I got this error when I tried to clone it:
ssh: Could not resolve hostname 0de09c2e3bc1: Temporary failure in name resolution
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
I don't know what makes it use 0de09c2e3bc1; I think I should be seeing my IP address there.
I noticed that 0de09c2e3bc1 is the container's hostname, because when I check its console in Portainer I see:
root@0de09c2e3bc1:/#
Now, how can I fix it?
I also changed external_url to https://IP:port of my server but it didn't work.
Double-check your external_url, which is used as part of the generated URL on each request.
This gist about installing portainer and gitlab shows a docker run like:
docker run --detach \
--name gitlab \
--publish 8001:80 \
--publish 44301:443 \
--publish 2201:22 \
--hostname gitlab.c2a-system.dev \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.c2a-system.dev/'; gitlab_rails['gitlab_shell_ssh_port'] = 2201;" \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
--restart unless-stopped \
gitlab/gitlab-ce:latest
See Pre-configure Docker container, using the environment variable GITLAB_OMNIBUS_CONFIG.
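If the container is already running, you can also fix the value in place instead of recreating it. A rough sketch, assuming the config volume is mounted at /srv/gitlab/config as in the docker run above:
# edit external_url in the Omnibus config on the host (mapped to /etc/gitlab/gitlab.rb in the container)
sudo vi /srv/gitlab/config/gitlab.rb
# then apply the change inside the container
docker exec gitlab gitlab-ctl reconfigure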
Then, for accessing a private repository like http://gitlab.c2a-system.dev/groupname/projectname.git, you will need to define a credential helper and store your PAT:
git config --global credential.helper cache
printf "host=gitlab.c2a-system.dev\nprotocol=http\nusername=YourGitLabAccount\npassword=YourGitLabToken"|\
git credential-cache store
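After that, a clone over HTTP should pick up the cached credential, for example:
git clone http://gitlab.c2a-system.dev/groupname/projectname.git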

Why do I need to install certificates for an external URL when installing GitLab?

I am confused.
For now, I just want to self-host GitLab on my local home network without exposing it to the internet. Is this possible? If so, can I do this without installing ca-certificates?
Why is GitLab forcing (?) me to expose my GitLab server to the internet?
Nothing else I've installed locally on my NAS/server requires CA certificates for me to connect to its web service: I can just go to xyz.456.abc.123:port in Chrome.
E.g. in this article, a public URL is referenced: https://www.cloudsavvyit.com/2234/how-to-set-up-a-personal-gitlab-server/
You don't need to install certificates to use GitLab and you do not have to have GitLab exposed to the internet to have TLS security.
You can also opt to not use TLS/SSL at all if you really want. In fact, GitLab does not use HTTPS by default.
Using Docker is probably the easiest way to demonstrate that it's possible:
mkdir -p /opt/gitlab
export GITLAB_HOME=/opt/gitlab
docker run --detach \
--hostname localhost \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
-e GITLAB_OMNIBUS_CONFIG='external_url "http://localhost"' \
gitlab/gitlab-ee:latest
# give it 15 or 20 minutes to start up
curl http://localhost
You can replace http://localhost in the external_url configuration with the computer hostname you want to use for your local server or even an IP address.
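For example, if your server's LAN address were 192.168.1.50 (a made-up address here), only the GITLAB_OMNIBUS_CONFIG line in the docker run above would change:
-e GITLAB_OMNIBUS_CONFIG='external_url "http://192.168.1.50"'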

Running Superset on Azure Container Instances

I am trying to run Superset on an Azure Container Instance with a volume mapped to Azure Files storage. When I create the container with the command below, the container instance reaches the Running state, but I can't open the Superset URL.
az container create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name superset01 \
--image superset-image:v1 \
--dns-name-label superset01 \
--ports 8088 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path "/app/superset_home/"
I can also see that the mapped files are created on the file share, but superset.db stays empty instead of growing to its initial size.
Name Content Length Type Last Modified
------------------- ---------------- ------ ---------------
cache/ dir
superset.db 0 file
superset.db-journal 0 file
Any inputs on this please?
I tested this in my environment and it works fine for me. I used the Superset image from Docker Hub and pushed it to a container registry.
Use the command below to pull the Superset image from Docker Hub:
docker pull apache/superset
Tag the Superset image so it can be pushed to the container registry:
docker tag 15e66259003c testmyacr90.azurecr.io/superset
Now log in to the container registry:
docker login testmyacr90.azurecr.io
And then push the image to the container registry.
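The push command itself is not included above; assuming the tag from the previous step, it would presumably be:
docker push testmyacr90.azurecr.io/superset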
You can use the command below to create the container instance and mount a file share as well:
az container create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name mycontainerman \
--image $ACR_LOGIN_SERVER/superset:latest \
--registry-login-server $ACR_LOGIN_SERVER \
--registry-username $ACR_USERNAME \
--registry-password $ACR_PASSWORD \
--dns-name-label aci-demo-7oi \
--ports 80 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path /aci/logs/
I would suggest using the FQDN or public IP address of the container instance in the browser; that way you can see the response from the container instance.
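For reference, the FQDN and public IP can be read back from the instance like this (using the container name from the command above):
az container show \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name mycontainerman \
--query "{fqdn:ipAddress.fqdn,ip:ipAddress.ip}" \
--output table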
Update:
I mounted the file share at /app/superset_home/ as you did and got the same output as you.
I would therefore suggest not mounting the file share at /app/superset_home/, because that is where superset.db and superset.db-journal live; mounting over this location causes a conflict. It is better to mount at /aci/logs/, or any other location you want, rather than /app/superset_home/.
Update 2:
I was reusing the same storage account to mount the volume at /aci/logs/ that had previously been mounted at /app/superset_home/, and I got the same error as you.
Solution: to avoid this kind of issue, create a new storage account with a new file share in it, and mount that volume at /aci/logs/.
Once I did that, the issue was resolved.
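A rough sketch of that, with a made-up storage account and share name (supersetstore2 and superset-share-2 are only illustrative):
az storage account create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name supersetstore2 \
--sku Standard_LRS
STORAGE_KEY2=$(az storage account keys list --resource-group $ACI_PERS_RESOURCE_GROUP --account-name supersetstore2 --query "[0].value" --output tsv)
az storage share create \
--account-name supersetstore2 \
--account-key $STORAGE_KEY2 \
--name superset-share-2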

Azure container can't access a mounted volume on startup, why?

I am having problems using a volume mount in my container on Azure container instances.
I can mount a volume to my container with no problem. Here is the az CLI command I am using, and it works well. I followed this tutorial.
az container create \
--resource-group mydemo \
--name paulwx \
--image containerregistry.azurecr.io/container:master \
--registry-username username \
--registry-password password \
--dns-name-label paulwx \
--ports 8080 \
--assign-identity \
--azure-file-volume-account-name accountname \
--azure-file-volume-account-key secretkey \
--azure-file-volume-share-name myshare \
--azure-file-volume-mount-path /opt/application/config
This works great and I can attach to the container console and access the shared volume. I can touch files and read files.
The problem comes when I try to have the application read this folder on startup (to get its application configuration). Again, the command is almost identical, except that the application is told to read its configuration from the volume mount via an environment variable called SPRING_CONFIG_LOCATION.
az container create \
--resource-group mydemo \
--name paulwx \
--image containerregistry.azurecr.io/container:master \
--registry-username username \
--registry-password password \
--dns-name-label paulwx \
--ports 8080 \
--assign-identity \
--azure-file-volume-account-name accountname \
--azure-file-volume-account-key secretkey \
--azure-file-volume-share-name myshare \
--azure-file-volume-mount-path /opt/application/config \
--environment-variables SPRING_CONFIG_LOCATION=file:/opt/application/config/
The container now terminates with the following error.
Error: Failed to start container paulwx, Error response: to create containerd task: failed to mount container storage: guest modify: guest RPC failure: failed to mount container root filesystem using overlayfs /run/gcs/c/e51f86c414ae83c7c279a4252864a381399069d358f5d2303c97c630e17b049f/rootfs: no such file or directory: unknown
So I can mount a volume as long as I don't access it on start up. Am I missing something very fundamental here? Surely having a volume mount means the mount point will be available when the container starts.
The volume mount type is samba and Standard_LRS.
So, to cut a long story short, the issue here was that on startup the application was configured to read the property file from a folder that was not where I expected it to be.
The application failed to start because it did not find application.properties where it expected it. The Azure error message was a red herring and not at all reflective of the problem. The application logger was not logging to standard out properly, so I actually missed the application log, which clearly stated that application.properties was not found.
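In a situation like this, one quick way to check what actually landed at the mount path (and where the application is really looking) is to open a shell in the running instance, for example:
az container exec \
--resource-group mydemo \
--name paulwx \
--exec-command "/bin/sh"
# then, inside the container:
ls -la /opt/application/config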

What directories do containers launched in Azure Container Instances have read access to?

My goal is to run a python script, which is passed to a docker container containing source code and dependencies. This is to be run using Azure container instances (ACI). On my home machine I can do this in docker (setting ENTRYPOINT or CMD to "python") with
docker run myimage "myscript.py"
provided I spin up the container from a directory containing myscript.py.
After some reading I thought a similar thing could be achieved on Azure using az container create --command-line, as indicated here. My container creation would be something like
az container create \
--resource-group myResourceGroup \
--name my-container \
--image myimage:latest \
--restart-policy Never \
--command-line "python 'myscript.py'"
However the container is unable to find myscript.py. I am using the Azure Cloud Shell. I have tried spinning up the container from a directory containing myscript.py, and I have also tried adding a file share with myscript.py inside. In both cases I get the error
python3: can't open file 'myscript.py': [Errno 2] No such file or directory
I think I simply do not understand the containers and how they interact with the host directory structure on azure. Can anyone provide some suggestions or pointers to resources?
provided I spin up the container from a directory containing myscript.py.
Where you issue the command from to start a container instance does not matter at all. The files need to be present inside your image, or you can mount storage into the container and read from there.
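For example, a sketch of the file-share route, assuming myscript.py has been uploaded to an Azure file share named scripts (the storage account name and key are placeholders; everything else comes from the question):
az container create \
--resource-group myResourceGroup \
--name my-container \
--image myimage:latest \
--restart-policy Never \
--azure-file-volume-account-name <storage-account> \
--azure-file-volume-account-key <storage-key> \
--azure-file-volume-share-name scripts \
--azure-file-volume-mount-path /mnt/scripts \
--command-line "python /mnt/scripts/myscript.py"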
