I'm trying to host Jenkins in a Docker container on Azure App Service, which means it's Linux hosting.
By default the jenkins/jenkins:2.110-alpine Docker image stores its data in the /var/jenkins_home folder in the container. I want this data/config written to Azure persistent storage so that it survives container restarts.
I've read documentation and blog posts stating that container data is persisted if it's stored in the /home folder.
So I've customized the Jenkins Dockerfile to look like this...
FROM jenkins/jenkins:2.110-alpine
USER root
RUN mkdir /home/jenkins
RUN ln -s /var/jenkins_home /home/jenkins
USER jenkins
However, when I deploy to Azure App Service I don't see the files in my /home folder (looking in the Kudu console). The app starts just fine, but I lose all of my data when I restart the container.
What am I missing?
That's expected: the only thing persisted on the Azure host is the symlink created by ln -s /var/jenkins_home /home/jenkins. The actual files still physically live inside the container, so they're lost when it restarts.
To fix this, you have to actually change the Jenkins configuration to store all data in /home/jenkins, the directory you already create in your Dockerfile above.
A quick search for the Jenkins data folder suggests setting the environment variable JENKINS_HOME to that directory.
In your Dockerfile:
ENV JENKINS_HOME /home/jenkins
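Putting it together, a corrected Dockerfile might look like the sketch below (the mkdir -p and chown are my assumptions, to make sure the jenkins user owns the new home):
FROM jenkins/jenkins:2.110-alpine
USER root
# create the new home under the persisted /home folder and hand it to the jenkins user
RUN mkdir -p /home/jenkins && chown -R jenkins:jenkins /home/jenkins
# point Jenkins at the persisted location
ENV JENKINS_HOME /home/jenkins
USER jenkins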
We have a Spring Boot app which requires a keystore file located at "/secrets/app.keystore.jks" to run.
We want to run the app in a container on an Azure App Service Linux instance. For security reasons we don't want to include the "/secrets/app.keystore.jks" file in the container itself. Instead, we have managed to upload the file to the "/home/site/wwwroot/secrets/" folder on the App Service.
We use the following command to start the container on the App Service:
docker run -d -p 80:80 --name myApp \
  -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE \
  -v /home/site/wwwroot/secrets:/secrets \
  myacr.azurecr.io/myapp:latest
In the app service's log, we have the error:
java.lang.IllegalStateException: java.io.IOException: Could not open
/secrets/app.keystore.jks as a file, class path resource, or URL.
It looks to me like the volume was not set up, and the app cannot access the file "/secrets/app.keystore.jks".
Does anyone know how to setup a volume so the app in the container can access a file on the host?
There are two ways to achieve your purpose. One is to set the environment variable WEBSITES_ENABLE_APP_SERVICE_STORAGE to true; then you can mount the persistent volume into your container like below in the docker-compose file:
volumes:
- ${WEBAPP_STORAGE_HOME}/site/wwwroot/secrets:/secrets
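For context, a minimal docker-compose file using that mapping might look like this (the service and image names are placeholders):
version: '3.3'
services:
  app:
    image: myacr.azurecr.io/myapp:latest
    ports:
      - "80:80"
    volumes:
      # map the persisted App Service storage into the container's /secrets
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot/secrets:/secrets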
Get more details here.
Another way is to mount Azure Storage to your container and upload the files to that storage. Follow the steps here.
For development: I have a node API deployed in a docker container. That docker container runs in a Linux virtual machine.
For deployment: I push the docker image to Azure (ACR) and then our admin creates the container (ACI).
I need to copy a data file named "config.json" into a shared volume "data_storage" in Azure.
I don't understand how to write a command in the Dockerfile that will copy the JSON file into that volume, because when I build the Dockerfile I am building the image; the folder is only mapped when the container is created with "docker run -v", not at the image-build stage.
Any help please?
As I know, you can put the copy action, and all the actions that depend on the JSON file, into a script and execute that script as the command when you run the image, so the JSON file is not needed at image creation. When you run the image with the volume mounted, the JSON file already exists in the container, and all the actions, including the copy, will go ahead.
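For example, a minimal sketch of that approach (the script name, paths, and start command are all assumptions):
In the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
# by the time the container starts, the volume is mounted,
# so the file exists; copy it to where the app expects it
cp /data_storage/config.json /app/config.json
exec node server.js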
For ACI, you can store the JSON file in an Azure file share and mount it to the ACI; follow the steps in Mount an Azure file share in Azure Container Instances.
To copy your file to the shared volume "data_storage", you will need to ask your admin to create the volume mount:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
This assumes the "data_storage" you are referring to is an Azure file share.
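For reference, a sketch of the az CLI command your admin might use (all resource names and the key variable are placeholders):
az container create \
  --resource-group myResourceGroup \
  --name my-node-api \
  --image myacr.azurecr.io/my-node-api:latest \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name data_storage \
  --azure-file-volume-mount-path /app/data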
I'm trying to build an application with python to scrape and serve data.
All data is stored as a sqlite3 database in the /app/data folder.
Here's my Dockerfile
FROM python:3.6.0
WORKDIR /app
COPY requirements.txt .
RUN mkdir /app/data
RUN mkdir /app/logs
RUN chmod -R 777 /app/data
RUN chmod -R 777 /app/logs
RUN pip install -r requirements.txt
COPY . .
ENTRYPOINT [ "python", "app.py" ]
Azure pulls the image from the private Docker Hub repository.
At first, the application worked fine, but after a few hours the image got updated (I didn't change anything) and the container got cleared, which means all my data (database/logs) is gone.
Continuous Deployment is set to Off and I'm not updating the image on Docker Hub.
How can I prevent the container from being rebuilt?
Is Always On turned on in the App Service settings?
Also, the nature of containers makes them ephemeral, so you should never store data that you want to keep inside them. That being said, App Service provides you with an easy way to map a volume to the storage included in your App Service. The feature is called Persistent Shared Storage, and it maps the WEBAPP_STORAGE_HOME env variable to the App Service's /home folder.
In the Web App's Application Settings you need to set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true, and inside your container you'll now see a /home folder. That folder points to the storage part of your App Service.
Using a Docker Compose file you can also define a volume using that env variable:
${WEBAPP_STORAGE_HOME}/LogFiles:/app/logs
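Applied to this question's layout, a compose sketch could map both the data and logs folders (the service and image names are assumptions):
services:
  scraper:
    image: mydockerhubuser/scraper:latest
    volumes:
      # persist the sqlite database and the logs in App Service storage
      - ${WEBAPP_STORAGE_HOME}/data:/app/data
      - ${WEBAPP_STORAGE_HOME}/LogFiles:/app/logs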
Link to the doc
As per the title, I have created an Azure App Service running a Tomcat image (docker container).
When I set up the path mapping to a file share, the container (or Tomcat) keeps complaining that the folder I mounted into the container is not writable.
I read on Azure's website that mounted file shares are read/write: https://learn.microsoft.com/en-us/azure/app-service/containers/how-to-serve-content-from-azure-storage
So I'm confused as to why it's still not working. Any help with this issue would be really appreciated.
Not sure how you mounted the storage in your Dockerfile (drop your snippet if possible), but I simply made the WORKDIR match the mount path. My Dockerfile was for a node app, but that shouldn't make a difference. I also realized that I didn't need the VOLUME keyword (it's commented out).
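As a rough sketch of the idea (the base image, paths, and file names are assumptions, not the answerer's actual Dockerfile):
FROM node:12-alpine
COPY . /usr/src/app
# VOLUME /home/data   <- commented out: not needed, the App Service path mapping does the mount
# make the working directory match the mount path so relative writes land on the share
WORKDIR /home/data
CMD ["node", "/usr/src/app/server.js"]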
I'm trying to use a stack built with Docker containers to run a Symfony2 application (SfDocker). The stack consists of interlinked containers where ubuntu:14.04 is the base:
mysql db
nginx
php-fpm
The recurring problem that I'm facing is managing directory permissions inside the container. When I mount a volume from the host, e.g.
volumes:
- symfony-code:/var/www/app
The mounted directories will always be owned by root or an unidentified user (only a user ID is visible when running ls -al) inside the container.
This, essentially, makes it impossible to access the application through the browser. Of course, running chown -R root:www-data on public directories solves the problem, but as soon as I want to write to, e.g., the 'cache' directory from the host (where the user is ltarasiewicz) I get a permission-denied error. On top of that, whenever an application running inside a container creates new directories (e.g. 'logs'), they again are owned by root and later inaccessible by the browser or my desktop user.
So my questions are:
How should I manage permissions across the host and container environments (when I want to run commands on the container from both environments)?
Is it possible to configure Docker so that directories mounted as volumes receive specific ownership/permissions (e.g. 'root:www-data') automatically?
Am I free to create new users and user groups inside my 'nginx' container built from the ubuntu:14.04 image?
A few general points, apologies if I don't answer your questions directly.
Don't run as root in the container. Create a user in the Dockerfile and switch to it, either with the USER statement or in an entrypoint or command script. See the Redis official image for a good example of this. (So the answer to Q3 is yes, and do, but via a Dockerfile - don't make changes to containers by hand).
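A minimal sketch of that pattern (the user and group names are placeholders):
FROM ubuntu:14.04
# create an unprivileged user and switch to it
RUN groupadd -r app && useradd -r -g app app
USER app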
Note that the official images often do a chown on volumes in the entrypoint script to avoid the issue you describe in question 2.
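As a sketch of what those entrypoint scripts do (the paths are assumptions; gosu, which the official Redis image uses to drop privileges, must be installed in the image):
#!/bin/sh
set -e
# started as root: fix ownership of the mounted volume, then drop to the app user
chown -R www-data:www-data /var/www/app
exec gosu www-data "$@"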
Consider using a data container rather than linking directly to host directories. See the official docs for more information.
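A classic data-container sketch from that era (the container and image names are placeholders):
docker create -v /var/www/app --name app-data ubuntu:14.04 /bin/true
docker run -d --volumes-from app-data --name web my-nginx-image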
Don't run commands from the host on the volumes. Just create a temporary container to do it or use docker exec (e.g. docker run -v /myvol:/myvol myimage touch /myvol/x).